\section{Introduction}
\label{sec:introduction}
Jets are one of the most common objects appearing in proton-proton
colliders such as the Large Hadron Collider (LHC) at CERN.
They are defined as collimated bunches of high-energy particles, which
emerge from the interactions of quarks and gluons, the fundamental
constituents of the proton.
In modern analyses, final-state particle momenta are mapped to jet
momenta using a sequential recombination algorithm with a single free
parameter, the jet radius $R$, which sets the maximal angular distance
at which particles can be recombined into a given
jet~\cite{Cacciari:2008gp}.
Due to the very high energies of its collisions, the LHC routinely
produces heavy particles with transverse momenta (the momentum
component transverse to the beam axis) far greater than their rest
mass.
When these objects are sufficiently energetic, or boosted, their decay
products are often highly collimated and are reconstructed as
a single fat jet, whose radiation patterns differ from those of
standard quark or gluon jets.
Since the advent of the LHC program, the study of the substructure of
jets has matured into a remarkably active field of research that has
become notably conducive to applications of recent Machine Learning
techniques~\cite{deOliveira:2015xxd,Louppe:2017ipp,Datta:2017rhs}.
A particularly useful set of tools for experimental analyses are jet
grooming algorithms, defined as a post-processing treatment of jets
that removes soft wide-angle radiation which is not associated with
the underlying hard substructure.
Grooming techniques play a crucial role in Standard Model
measurements~\cite{Aaboud:2017qwh,Sirunyan:2018xdh} and in improving
the boson- and top-tagging efficiencies at the LHC.
In these proceedings, we describe the {\tt GroomRL}
framework~\cite{Carrazza:2019efs}, which is used to train a grooming
algorithm using reinforcement learning (RL), and introduce the {\tt
libGroomRL} C++ library which makes it straightforward to use
the resulting grooming strategy in a real analysis.
To train the RL agent, we decompose the problem of jet grooming into
successive steps for which a reward function can be designed taking
into account the physical features that characterize such a system.
We then use a modified implementation of a Deep Q-Network (DQN)
agent~\cite{DBLP:journals/corr/MnihKSGAWR13,mnih2015humanlevel} and
train a dense neural network (NN) to optimally remove radiation that
is not associated with the core of the jet.
The trained model can then be applied to other data sets, showing
improved resolution compared to state-of-the-art techniques as well as
a strong resilience to non-perturbative effects.
The framework and data used in this paper are available as open-source
and published material in~\cite{groomRL,groomRL_lib,groomRL_data}.%
\footnote{The code is available at
\url{https://github.com/JetsGame/GroomRL}, along with a C++ library
at
\url{https://github.com/JetsGame/libGroomRL}.}
\section{Jet representation}
\label{sec:jet-rep}
Let us start by introducing the representation we use for jets.
We take the particle constituents of a jet, as defined by any modern
algorithm, and recombine them using a Cambridge/Aachen (CA) sequential
clustering algorithm~\cite{Dokshitzer:1997in}.
The CA algorithm does a pairwise recombination, adding together the
momenta of the two particles with the closest distance as defined by
the measure
\begin{equation}
\label{eq:CA-alg}
\Delta^2_{ij} = (y_i - y_j)^2 + (\phi_i - \phi_j)^2\,,
\end{equation}
where $y_i$ is the rapidity, a measure of relativistic velocity along
the beam axis, and $\phi_i$ is the azimuthal angle of particle $i$
around the same axis.
This clustering sequence is then used to recast the jet as a full binary
tree, where each of the nodes contains information about the kinematic
properties of the two parent particles.
For each node $i$ of the tree we define an object $\mathcal{T}^{(i)}$
containing the current observable state $s_t$, as well as a pointer to
the two children nodes and one to the parent node.
The children nodes $a$ and $b$ are ordered in transverse momentum such
that $p_{t,a}>p_{t,b}$, and we label $a$ the ``harder'' child and $b$
the ``softer'' one.
The set of possible states is defined by a five-dimensional box, such
that the state of the node is a tuple
\begin{equation}
\label{eq:node-tuple}
s_t=\left\{z, \Delta_{ab}, \psi, m, k_t\right\}\,,
\end{equation}
where $z=p_{t,b}/(p_{t,a}+p_{t,b})$ is the momentum fraction of the
softer child $b$,
$\psi=\tan^{-1}\big(\tfrac{y_b-y_a}{\phi_a-\phi_b}\big)$ is the
azimuthal angle around the $i$ axis, $m$ is the mass, and
$k_t= p_{t,b}\Delta_{ab}$ is the transverse momentum of $b$ relative
to $a$.
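As an illustration, the construction of the state tuple in
equation~\eqref{eq:node-tuple} from the kinematics of the two children
can be sketched in a few lines of Python. This is a minimal sketch:
the function name and argument list are our own, and a full
implementation would in addition wrap azimuthal differences into
$(-\pi,\pi]$.
\begin{verbatim}
import math

def node_state(pt_a, y_a, phi_a, pt_b, y_b, phi_b, m):
    """Build the tuple s_t = (z, Delta_ab, psi, m, kt) of a node,
    with the children ordered such that pt_a >= pt_b."""
    delta_ab = math.sqrt((y_a - y_b)**2 + (phi_a - phi_b)**2)
    z = pt_b / (pt_a + pt_b)    # momentum fraction of softer child
    psi = math.atan2(y_b - y_a, phi_a - phi_b)  # angle around axis
    kt = pt_b * delta_ab        # relative transverse momentum
    return (z, delta_ab, psi, m, kt)
\end{verbatim}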
\subsection{Grooming algorithm}
\label{sec:groom-alg}
A grooming algorithm acting on a jet tree can be defined by a simple
recursive procedure which follows each of the branches and uses a
policy $\pi_g(s_t)$ to decide, based on the values of
the current tuple $s_t$, whether to remove the softer of
the two branches.
This is shown in Algorithm~\ref{alg:grooming}, where the minus sign is
understood to mean the update of the kinematics of a node after
removal of a soft branch.
The grooming policy $\pi_g(s_t)$ returns an action $a_t\in\{0,1\}$,
with $a_t=1$ corresponding to the removal of a branch, and $a_t=0$
leaving the node unchanged.
The state $s_t$ is used to evaluate the current action-values
$Q^*(s,a)$ for each possible action, which in turn are used to
determine the best action at this step through a greedy policy.
It is easy to translate modern grooming algorithms into this language.
For example, Recursive Soft Drop (RSD)~\cite{Dreyer:2018tjj}
corresponds to a policy
\begin{equation}
\label{eq:RSD-alg}
\pi_\text{RSD}(s_t) =
\begin{cases}
0\quad\text{if} \quad z > z_\text{cut} \big(\frac{\Delta_{ab}}{R_0}\big)^\beta\,,\\
1\quad \text{else}\,,
\end{cases}
\end{equation}
where $z_\text{cut}$, $\beta$ and $R_0$ are the parameters of the
algorithm, and $1$ corresponds as before to the action of removing the
tree branch with smaller transverse momentum.
\begin{algorithm}[tb]
\caption{Grooming}
\label{alg:grooming}
\begin{algorithmic}
\State {\bfseries Input:} policy $\pi_g$, binary tree node $\mathcal{T}^{(i)}$
\State $a_t = \pi_g(\mathcal{T}^{(i)}\rightarrow s_t)$
\If{$a_t==1$}
\State $\mathcal{T}^{(j)} = \mathcal{T}^{(i)}$
\While{$\mathcal{T}^{(j)} = (\mathcal{T}^{(j)}\rightarrow \text{parent})$}
\State $\mathcal{T}^{(j)}\rightarrow s_t =(\mathcal{T}^{(j)}\rightarrow s_t)\,
-\,(\mathcal{T}^{(i)}\rightarrow b\rightarrow s_t)$
\EndWhile
\State $\mathcal{T}^{(i)} = (\mathcal{T}^{(i)}\rightarrow a)$
\State Grooming($\pi_g,\, \mathcal{T}^{(i)}$)
\Else
\State Grooming($\pi_g,\, \mathcal{T}^{(i)}\rightarrow a$)
\State Grooming($\pi_g,\, \mathcal{T}^{(i)}\rightarrow b$)
\EndIf
\end{algorithmic}
\end{algorithm}
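For concreteness, Algorithm~\ref{alg:grooming} with the RSD policy of
equation~\eqref{eq:RSD-alg} can be sketched in Python as follows. The
tree interface (attributes \texttt{state}, \texttt{harder},
\texttt{softer} and a method \texttt{subtract\_softer} that removes
branch $b$ and updates the kinematics of the node and of all its
ancestors) is hypothetical and only meant to mirror the pseudocode.
\begin{verbatim}
def rsd_policy(state, zcut=0.05, beta=1.0, R0=1.0):
    """Recursive Soft Drop policy: return 1 (remove the softer
    branch) unless the Soft Drop condition is satisfied."""
    z, delta_ab = state[0], state[1]
    return 0 if z > zcut * (delta_ab / R0)**beta else 1

def groom(policy, node):
    """Recursively follow the tree, dropping the softer branch
    whenever the policy returns 1."""
    if node is None or node.harder is None:  # leaf node
        return
    if policy(node.state) == 1:
        node.subtract_softer()  # remove b, update ancestors
        groom(policy, node.harder)
    else:
        groom(policy, node.harder)
        groom(policy, node.softer)
\end{verbatim}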
\section{Setting up a grooming environment}
\label{sec:groomenv}
In order to find an optimal grooming policy $\pi_g$, we introduce an
environment and a reward function, formulating the problem in a way
that can be solved using a RL algorithm.
We initialize a list of all trees used for the training, from which a
tree is randomly selected at the beginning of each episode.
Each step consists of selecting the node with the largest
$\Delta_{ab}$ value and taking an action, based on the state $s_t$ of
that node, on which of its branches to keep.
Once a decision has been taken on the removal of the softer branch,
and the parent nodes have been updated accordingly, the remaining
children of the node are added to the list of nodes to consider in a
following step of this episode.
The reward function is then evaluated using the current state of the
tree.
The episode terminates once all nodes have been iterated over.
The framework described here deviates from usual RL implementations in
that the range of possible states for any episode is fixed at the start.
The transition probability between states
$\mathcal{P}(s_{t+1}|s_t,a_t)$ therefore does not always depend
very strongly on the action, although a grooming action can result in
the removal of some of the future states and will therefore still have
an effect on the distribution.
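A minimal sketch of one such episode is given below, assuming the same
hypothetical tree interface as before plus a method \texttt{children}
returning the remaining subtrees of a node; the reward function
signature is likewise illustrative. Nodes are processed in order of
decreasing $\Delta_{ab}$ using a priority queue.
\begin{verbatim}
import heapq
from itertools import count

def run_episode(root, policy, reward_fn):
    tie = count()  # tie-breaker so the heap never compares nodes
    queue = [(-root.state[1], next(tie), root)]
    total_reward = 0.0
    while queue:
        _, _, node = heapq.heappop(queue)
        action = policy(node.state)
        if action == 1:
            node.subtract_softer()  # drop b, update parent nodes
        for child in node.children():
            if child.harder is not None:  # skip leaves
                heapq.heappush(queue,
                               (-child.state[1], next(tie), child))
        # reward from current tree (e.g. jet mass) and node state
        total_reward += reward_fn(root, node.state, action)
    return total_reward
\end{verbatim}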
\subsection{Finding optimal hyper-parameters}
\label{sec:hyperopt}
The optimal choice of hyper-parameters, both for the model
architecture and for the grooming parameters, is determined using the
distributed asynchronous hyper-parameter optimization library
\texttt{hyperopt}~\cite{Bergstra:2013:MSM:3042817.3042832}.
The performance of an agent is assessed through a loss function,
which is evaluated on a distinct validation set consisting of 50k
signal and background jets.
For each sample, we compute the mass of each jet after grooming and
derive the corresponding distribution.
To calculate the loss function $\mathcal{L}$, we start by determining
a window $(w_\text{min},w_\text{max})$ containing a fraction $f=0.6$
of the final jet masses of the groomed signal distribution, defining
$w_\text{med}$ as the median value on that interval.
The loss function is then defined as
\begin{equation}
\label{eq:loss-func}
\mathcal{L} = \frac15|w_\text{max} - w_\text{min}|
+ |m_\text{target}-w_\text{med}|
+ 20 f_\text{bkg}\,,
\end{equation}
where $f_\text{bkg}$ is the fraction of the groomed background sample
contained in the same interval, and $m_\text{target}$ is a reference
value for the signal.
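To make the definition concrete, a possible implementation of this
loss is sketched below in Python; choosing the \emph{narrowest} window
containing a fraction $f$ of the sorted signal masses is our own
reading of the prescription, and the function signature is
illustrative.
\begin{verbatim}
import numpy as np

def window_loss(sig_masses, bkg_masses, m_target, f=0.6):
    m = np.sort(np.asarray(sig_masses))
    b = np.asarray(bkg_masses)
    n = len(m)
    k = int(f * n)
    widths = m[k:] - m[:n - k]  # widths of candidate windows
    i = int(np.argmin(widths))  # narrowest window with ~f of signal
    w_min, w_max = m[i], m[i + k]
    w_med = np.median(m[i:i + k + 1])
    f_bkg = np.mean((b >= w_min) & (b <= w_max))
    return (abs(w_max - w_min) / 5
            + abs(m_target - w_med) + 20 * f_bkg)
\end{verbatim}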
We scan hyper-parameters using 1000 iterations and select the
ones for which the loss $\mathcal{L}$ evaluated on the validation set
is minimal.
In practice we will do three different scans: to determine the best
parameters of the reward function, to find an optimal grooming
environment, and to determine the architecture of the DQN agent.
The scan is performed by requiring \texttt{hyperopt} to use a uniform
search space for continuous parameters, a log-uniform search space for
the learning rate and a binary choice for all integer or boolean
parameters.
The optimization used in all the results presented in this work
relies on the Tree-structured Parzen Estimator (TPE) algorithm.
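Schematically, such a scan could look as follows with
\texttt{hyperopt}; the parameter names and ranges are illustrative,
and \texttt{train\_agent} and \texttt{evaluate\_loss} stand for the
(here unspecified) training and validation steps.
\begin{verbatim}
from hyperopt import fmin, tpe, hp

space = {
    "learning_rate": hp.loguniform("learning_rate", -10, -2),
    "discount":      hp.uniform("discount", 0.5, 1.0),
    "dueling":       hp.choice("dueling", [False, True]),
}

def objective(params):
    agent = train_agent(params)   # train the DQN agent
    return evaluate_loss(agent)   # loss L on the validation set

best = fmin(fn=objective, space=space,
            algo=tpe.suggest, max_evals=1000)
\end{verbatim}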
\subsection{Defining a reward function}
\label{sec:def-reward}
One of the key ingredients for the optimization of the grooming policy
is the reward function used at each step during the training.
We consider a reward with two components: a first piece evaluated
on the full tree, and another that considers only the kinematics of
the current node.
The first component of the reward compares the mass of the current jet
to a set target mass, typically the mass of the underlying boosted
object.
We implement this mass reward using a Cauchy distribution, which has
two free parameters, the target mass $m_\text{target}$ and a width
$\Gamma$, so that
\begin{equation}
\label{eq:mass-reward}
R_M(m) = \frac{\Gamma^2}{\pi(|m - m_\text{target}|^2 + \Gamma^2)}\,.
\end{equation}
Separately, we calculate a reward on the current node, which favors
both the removal of wide-angle soft radiation and the preservation of
hard-collinear emissions. This provides a
baseline behavior for the groomer.
We label this reward component ``Soft-Drop'' due to its similarity
with the Soft Drop condition~\cite{Larkoski:2014wba}, and implement it
through exponential distributions
\begin{multline}
\label{eq:SD-reward}
R_\text{SD}(a_t, \Delta, z) =
a_t\min\big(1,e^{-\alpha_1 \ln(1/\Delta) + \beta_1\ln (z_1/z)}\big)\\
+(1-a_t)\max\big(0,1 - e^{-\alpha_2 \ln(1/\Delta) + \beta_2 \ln (z_2/z)}\big)\,,
\end{multline}
where $a_t=0,1$ is the action taken by the policy, and
$\alpha_{i}, \beta_{i}, z_{i}$ are free parameters.
The total reward function is then given by
\begin{equation}
\label{eq:total-reward}
R(m, a_t, \Delta, z) = R_M(m) +
\frac{1}{N_\text{SD}} R_\text{SD}(a_t, \Delta, z)\,.
\end{equation}
Here $N_\text{SD}$ is a normalization factor determining the weight
given to the second component of the reward.
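The reward of equations~(\ref{eq:mass-reward})--(\ref{eq:total-reward})
translates directly into code; a minimal Python sketch is shown below.
\begin{verbatim}
import math

def mass_reward(m, m_target, gamma):
    """Cauchy-shaped mass reward R_M."""
    return gamma**2 / (math.pi * ((m - m_target)**2 + gamma**2))

def sd_reward(action, delta, z, a1, b1, z1, a2, b2, z2):
    """Soft-Drop-inspired node reward R_SD."""
    if action == 1:  # reward removal of soft wide-angle radiation
        return min(1.0, math.exp(-a1 * math.log(1 / delta)
                                 + b1 * math.log(z1 / z)))
    # reward leaving hard-collinear emissions intact
    return max(0.0, 1.0 - math.exp(-a2 * math.log(1 / delta)
                                   + b2 * math.log(z2 / z)))

def total_reward(m, action, delta, z,
                 m_target, gamma, n_sd, sd_pars):
    return (mass_reward(m, m_target, gamma)
            + sd_reward(action, delta, z, *sd_pars) / n_sd)
\end{verbatim}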
\begin{figure*}
\centering
\subfloat[QCD]{\includegraphics[width=0.33\textwidth]{figures/groomed_jetmass_QCD}%
\label{fig:qcd_jetmass}}%
\subfloat[W]{\includegraphics[width=0.33\textwidth]{figures/groomed_jetmass_WW}%
\label{fig:W_jetmass}}%
\subfloat[top]{\includegraphics[width=0.33\textwidth]{figures/groomed_jetmass_Top}%
\label{fig:top_jetmass}}%
\caption{Jet mass spectrum for (a) QCD jets, (b) $W$ jets,
(c) top jets. The \texttt{GroomRL-W} curve is obtained from
training on $W$ data.}
\label{fig:jet-mass}
\end{figure*}
\subsection{RL implementation and multi-level training}
\label{sec:multi-level}
For the applications in this paper, we have implemented a DQN agent
that contains a groomer module, which is defined by the underlying NN
model and the test policy used by the agent.
The groomer can be extracted after the model has been trained, using a
greedy policy to select the best action based on the $Q$-values
predicted by the NN.
This allows for a straightforward application of the resulting
grooming strategy to new samples.
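In code, extracting the groomer amounts to wrapping the trained
network in a greedy policy, along the following lines; we assume a
Keras-style model mapping the five-dimensional state to the two
$Q$-values.
\begin{verbatim}
import numpy as np

def make_groomer(q_network):
    """Turn a trained Q-network into a deterministic groomer:
    the greedy policy picks the action with the largest Q-value."""
    def policy(state):
        q = q_network.predict(np.asarray(state)[None, :],
                              verbose=0)[0]
        return int(np.argmax(q))  # 0: keep both branches, 1: groom
    return policy
\end{verbatim}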
The training sample consists of 500k signal and background jets
simulated using \texttt{Pythia} 8.223~\cite{Sjostrand:2014zea}.
We will construct two separate models by considering two signal
samples, one with boosted $W$ jets and one with boosted top jets,
while the background always consists of QCD jets.
We use the $WW$ and $t\bar{t}$ processes, with hadronically decaying
$W$ and top, to create the signal samples, and the dijet process for
the background.
All samples used in this article are available
online~\cite{groomRL_data}.
The grooming environment is initialized by reading in the training
data and creating an event array containing the corresponding jet
trees.
To train the RL agent, we use a multi-level approach taking into
account both signal and background samples.
At the beginning of each episode, we select either a signal jet, with
probability $1-p_\text{bkg}$, or a background jet, with probability
$p_\text{bkg}$.
For signal jets, the reward function uses a reference mass set to the
$W$-boson mass, $m_\text{target}=m_W$, or to the top mass,
$m_\text{target} = m_t$, depending on the choice of sample.
In the case of the background, the mass reward $R_M$ in
equation~(\ref{eq:total-reward}) is changed to
\begin{equation}
\label{eq:mass-reward-bkg}
R^{\rm bkg}_M(m) = \frac{m}{\Gamma_{\rm bkg}}
\exp\Big(-\frac{m}{\Gamma_{\rm bkg}}\,\Big)\,.
\end{equation}
The width parameters $\Gamma$, $\Gamma_{\rm bkg}$ are also set to
different values for signal and background reward functions, and are
determined through a hyper-parameter scan.
We found that while this multi-level training only marginally improves
the performance, it noticeably reduces the variability of the model.
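A sketch of the episode-level sampling and of the background reward of
equation~\eqref{eq:mass-reward-bkg} is given below; the container
names are our own.
\begin{verbatim}
import math, random

def sample_jet(signal_trees, background_trees, p_bkg):
    """Pick a background jet with probability p_bkg,
    otherwise a signal jet."""
    if random.random() < p_bkg:
        return random.choice(background_trees), "background"
    return random.choice(signal_trees), "signal"

def mass_reward_bkg(m, gamma_bkg):
    """Background mass reward: favors pushing the groomed
    jet mass towards small values."""
    return (m / gamma_bkg) * math.exp(-m / gamma_bkg)
\end{verbatim}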
\section{Jet mass spectrum}
\label{sec:jetmass}
Let us now apply the {\tt GroomRL} models to new data samples.
We consider three test sets of 50k elements each: one with QCD jets,
one with $W$-initiated jets and one with top jets.
The size of the window containing $60\%$ of the mass spectrum of the
$W$ sample, as well as the corresponding median value, are given in
table~\ref{tab:window} for each different grooming strategy.
As a benchmark, we compare to the RSD algorithm, using parameters
$z_\text{cut}=0.05$, $\beta=1$ and $R_0=1$.
One can notice a sizeable reduction of the window size after grooming
with the machine-learning-based algorithms, while all groomers are
able to reconstruct the peak location to a value very close to the $W$
mass.
The distribution of the jet mass after grooming for each of these
samples is shown in figure~\ref{fig:jet-mass}.
Each curve gives the differential cross section $d\sigma/d m_j$
normalized by the total cross section.
We show results for the grooming algorithm trained on a $W$ sample, as
well as for the ungroomed (or plain) jet mass and the jet mass after
RSD grooming.
As expected, one can observe that for the ungroomed case the
resolution is very poor, with the QCD jets having large masses due to
wide-angle radiation, while the $W$ and top mass peaks are heavily
distorted.
In contrast, after applying RSD or {\tt GroomRL}, the jet mass is
reconstructed much more accurately.
One interesting feature of {\tt GroomRL} is that it is able to lower
the jet mass for quark and gluon jets, further reducing the background
contamination in windows close to a heavy particle mass.
For top jets, displayed in figure~\ref{fig:top_jetmass}, grooming with
{\tt GroomRL} also yields noticeable improvements, despite the
fact that the training did not involve any top-related data.
This demonstrates that the tools derived from our framework
are robust and can be applied to data sets beyond their training range
with good results.
\begin{table}
\caption{Size of the window $\Delta w$ containing $60\%$ of the $W$
mass spectrum, and median value on that interval.}
\label{tab:window}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccc}
\toprule
& Plain & \texttt{GRL-W}
&\hspace{-2mm} \texttt{GRL-Top}\hspace{-2mm}
& RSD\\
\midrule
$\Delta w$ [GeV] & $44.65$ & $10.70$
& $13.88$ & $16.96$\\
$w_\mathrm{med}$ [GeV] & $104.64$ & $80.09$
& $80.46$ & $80.46$\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\section{Conclusions}
We have shown a promising application of RL to the problem of jet
grooming.
Using a carefully designed reward function, we have constructed a
groomer from a dense NN trained with a DQN agent.
This grooming algorithm was then applied to a range of data samples,
showing excellent results for the mass resolution of boosted heavy
particles.
In particular, while the training of the NN is performed on samples
consisting of $W$ (or top) jets, the groomer yields noticeable gains
in the top (or $W$) case as well, on data outside of the training
range.
The improvements in resolution and background reduction compared to
alternative state-of-the-art methods provide an encouraging
demonstration of the relevance of machine learning for jet grooming.
In particular, we showed that it is possible for an RL agent to
extract the underlying physics of jet grooming and distill this
knowledge into an efficient algorithm.
Due to its simplicity, the model we developed also retains most of the
calculability of other existing methods such as Soft Drop.
Accurate numerical computations of groomed jet observables are
therefore achievable, allowing for the possibility of direct
comparisons with data.
Furthermore, given an appropriate sample, one could also attempt to
train the grooming strategy on real data, bypassing some of the
limitations due to the use of parton shower programs.
The {\tt GroomRL} framework, available online~\cite{groomRL}, is
generic and can easily be extended to higher-dimensional inputs, for
example to consider multiple emissions per step or additional
kinematic information.
Algorithms derived from \texttt{GroomRL} can easily be used to analyze
real events through the associated \texttt{libGroomRL} C++
library~\cite{groomRL_lib}.
While the method presented in this article was applied to a specific
problem in particle physics, we expect that with a suitable choice of
reward function, this framework is in principle also applicable to a
range of problems where a tree requires pruning.
\textbf{Acknowledgments:}
We are grateful to Jia-Jie Zhu and Gavin Salam for comments on the
manuscript and to Jesse Thaler for useful discussions.
We also acknowledge the NVIDIA Corporation for the donation of a Titan
Xp GPU used for this research.
F.D.\ is supported by the Science and Technology Facilities Council
(STFC) under grant ST/P000770/1. S.C.\ is supported by the European
Research Council under the European Union's Horizon 2020 research and
innovation Programme (grant agreement number 740006).
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{figures/hyperopt_groom_scan}\\
\includegraphics[width=1.0\linewidth]{figures/hyperopt_arch_scan}
\caption{Distribution of the loss value for different parameters.
%
The best performing model is indicated in red.}
\label{fig:param-scan}
\end{figure*}
\section{Introduction}
Quantum entanglement describes non-classical correlations of multipartite quantum systems~\cite{Horodecki}. It can appear between the parties' internal (\eg spin) degree of freedom (dof), or between their external (\eg spatial modes) dof. One usually refers to these two cases as particle or mode entanglement, respectively. Apart from its fundamental interest, entanglement is a key resource in quantum information science and quantum technologies~\cite{Bollinger,PS09}. This is evidenced, for instance, in the context of quantum sensing and metrology, where quantitative relations between metrological sensitivity and the number of entangled particles in Ramsey interferometers exist~\cite{HyllusToth}. In atomic ensembles, entangled multipartite quantum states with the potential to enhance interferometric measurements can be prepared by controlling the interactions between particles, which is a well-established technique in today's experiments~\cite{RMP}.
Most experiments on ultracold atomic ensembles focus on quantum states where particles share the same external mode, and can thus only be addressed and measured collectively. In recent years, new technologies, such as quantum gas microscopes~\cite{BakrQuantumGasMicroscope}, optical tweezer traps~\cite{Browaeys,Lukin}, and split Bose-Einstein condensates (BECs)~\cite{FadelSplitBEC,KunkelSplitBEC,LangeSplitBEC} have enabled the investigation of spatially distributed, entangled atomic ensembles, see Fig.~\ref{fig:scheme}. In such systems, on top of the entanglement among the particles, we can study the entanglement of spatially separated modes~\cite{Oudot,Jing}. On the one hand, this is interesting for practical applications such as spatially resolved metrology~\cite{HumphreysPRL2013,ProctorPRL2018,GePRL2018,GessnerPRL2018}, optical clocks \cite{Kajtoch18}, and quantum information tasks~\cite{Nielsen2000}. On the other hand, it allows us to investigate fundamental concepts such as the extraction of entanglement from a system of indistinguishable particles \cite{Killoran14,LoFranco18,Morris19,Barros19,Sun20}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{SplittingSpinsFigureV4.png}
\caption{By spatially splitting an entangled ensemble of $N$ identical particles into $M$ external modes, we generate entanglement between addressable modes.}
\label{fig:scheme}
\end{figure}
Particle entanglement can be revealed experimentally through spin-squeezing coefficients~\cite{Wineland,SorensenNature,SorensenMolmer,Ma,GuehneToth,Hyllus1,Hyllus2}. Among several methods to quantify spin squeezing~\cite{Ma}, the Wineland \textit{et al.} spin-squeezing coefficient~\cite{Wineland} has the advantage of establishing a link between the entanglement detected in a quantum state and its quantum gain for interferometric measurements, thereby detecting metrologically useful entanglement~\cite{PS09}. Moreover, this spin-squeezing coefficient expresses a sensitivity gain that can be reached by a simple parameter estimation protocol after sufficiently many experimental repetitions \cite{Wineland}. Being a function of averages and variances of linear operators only, spin-squeezing coefficients are particularly suitable to detect multiparticle entanglement of spin states that can be approximated as Gaussian quantum states.
While particle entanglement can be detected with collective measurements, standard methods to reveal mode entanglement require local measurements on each mode. Criteria based on variances and mean values can be found in the form of modified uncertainty relations that hold for arbitrary separable states, but can be violated through entanglement~\cite{DGCZ,Raymer,Giovannetti,Werner,HyllusEisert}. These approaches are powerful tools to detect entanglement in arbitrary-dimensional systems with high flexibility. They allow us to study entanglement between specific partitions of a composite system, thereby providing precise microscopic information about which subsystems share quantum correlations~\cite{GessnerQuantum,QinNPJQI2019}. These criteria exist for both discrete and continuous variables, and they can be extended to study the stronger class of quantum correlations known as steering that is at the heart of the Einstein-Podolsky-Rosen (EPR) paradox~\cite{Reid1989,ReidRMP}.
In this work, we show that under conditions that hold for a wide range of systems, the uncertainty-type mode-entanglement criteria coincide with the Wineland \textit{et al.} spin-squeezing coefficient, which detects particle entanglement and quantifies the metrological quantum gain. This allows us to establish a direct relation between the detected entanglement of particles and modes, as well as the sensitivity of spin states. While this does not replace the need for multimode entanglement witnesses, our results reveal the required level of spin squeezing for the generation of multimode entanglement by distributing the spins into addressable modes, e.g., by splitting a BEC into individually addressable ensembles. By linking these quantities to the spin-squeezing coefficient, we further relate mode entanglement and EPR steering to the quantum advantage in metrology measurements. Finally, we improve the best known bounds on the number of entangled particles that can be identified from the spin-squeezing coefficient without knowledge of the average polarization, and we clarify the connection to the spin-squeezing multipartite entanglement criterion by S\o{}rensen and M\o{}lmer~\cite{SorensenMolmer}.
\section{Mode vs particle entanglement}
A collection of systems (labeled $1,...,\Xi$) is entangled if their quantum state cannot be written as
\begin{equation}\label{entdef}
\rho = \sum_\gamma p_\gamma \rho_\gamma^{(1)} \otimes \cdots\otimes \rho_\gamma^{(\Xi)} \;,
\end{equation}
where the $p_\gamma$ form a probability distribution, $\sum_\gamma p_\gamma=1$, and $\rho_\gamma^{(i)}$ are density matrices for system $i$. The local systems may refer either to the $\Xi=N$ particles or to the $\Xi=M$ modes that they occupy, giving rise to particle or mode entanglement, respectively. In practice, determining whether a given quantum state allows for a decomposition of the form of Eq.~\eqref{entdef} is an extremely hard task. One therefore relies on entanglement witnesses or, more generally, necessary conditions that any separable state must satisfy~\cite{GuehneToth,HuberReview}. A violation of these criteria then represents a witness for entanglement.
\subsection{Uncertainty-based mode-entanglement criterion}
Criteria based on first and second moments of linear observables are powerful tools to detect entanglement~\cite{DGCZ,Raymer,Hofmann,GuehneToth} and steering~\cite{Reid1989,ReidRMP} in arbitrary-dimensional systems with high flexibility. An important class of these criteria takes the form of Heisenberg-Robertson-type uncertainty relations, with a modified lower bound on the variances that can be violated by entangled states. The most general formulation of these criteria for bipartite systems was given by Giovannetti \textit{et al.} in Ref.~\cite{Giovannetti}, where it was furthermore shown that (nonlinear) product criteria are generally more powerful than (linear) sum criteria (see also \cite{HyllusEisert,ReidRMP}). In the context of atomic spin ensembles it is convenient to express these criteria in terms of collective spin observables. For the case of $N$ spins distributed into $M=2$ modes, labeled as $A, B$, these take the form $\vect{S}^A=\sum_{i\in A} \vect{s}^{(i)}$, $\vect{S}^B=\sum_{i\in B} \vect{s}^{(i)}$, where $\vect{s}^{(i)}$ is the spin operator for particle $i$. The criterion expresses that all mode-separable states satisfy
\begin{equation}\label{Giova}
\mathcal{G}^2 := \dfrac{4 \ensuremath{\, \text{Var}}\left[S_z^A + S_z^B \right] \ensuremath{\, \text{Var}}\left[S_y^A - S_y^B \right] }{\left( \vert\langle S_x^A\rangle\vert + \vert\langle S_x^B\rangle\vert \right)^2} \geq 1 \;.
\end{equation}
The choice of observables (\eg local spin components) can further be optimized to identify the most sensitive entanglement criterion for a given quantum state. These criteria can be generalized to study entanglement in specific multipartitions (with precise microscopic information about which subsystems share quantum correlations)~\cite{GessnerQuantum} as well as full inseparability~\cite{VanLoock,Toscano,ReidTeh,RiedTehSpin}, \ie the violation of these bounds in all possible partitions.
\subsection{Spin squeezing particle-entanglement criterion}
In the case of a large number of spins, it becomes challenging to address each particle individually. Nevertheless, particle entanglement among the individual spins can be detected from collective measurements of the spin components through spin-squeezing criteria. For all fully separable spin-$1/2$ states it holds that
\begin{equation}\label{Wine}
\xi^2 := \dfrac{N \ensuremath{\, \text{Var}}\left[S_z\right]}{\vert\langle S_x\rangle\vert^2} \geq 1 \;,
\end{equation}
where $\xi^2$ is the Wineland \textit{et al.} spin-squeezing coefficient~\cite{Wineland}. States with $\xi^2<1$ are characterized by a variance of the collective spin operator that is smaller than that of a coherent spin state, while at the same time being strongly polarized along the $S_x$ direction.
The spin-squeezing coefficient can be considered as a simple Gaussian approximation of the full metrological sensitivity that can be extracted from the quantum state~\cite{PS09,RMP,GessnerPRL2019}. For this reason, states with $\xi^2<1$ can achieve a quantum enhancement beyond the standard quantum limit in metrology measurements~\cite{Wineland} and the entanglement revealed by this condition is metrologically useful. This approach can be extended to fluctuating particle numbers~\cite{Hyllus1}, multipartite entanglement~\cite{HyllusToth,GessnerPRA2017}, Bell nonlocality~\cite{Frowis}, and to analyze the multimode entanglement structure in addressable systems of arbitrary dimension~\cite{GessnerQuantum,QinNPJQI2019}.
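As a simple numerical check of the coefficient in Eq.~\eqref{Wine},
the following self-contained Python snippet (our own illustration, not
part of the original analysis) builds collective spin operators for
$N$ spin-$1/2$ particles and verifies that a coherent spin state
polarized along $x$ saturates $\xi^2=1$, the boundary of the separable
region:
\begin{verbatim}
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def collective(op, N):
    """S = sum_i op^(i) acting on N qubits."""
    dim = 2**N
    S = np.zeros((dim, dim), dtype=complex)
    for i in range(N):
        S += reduce(np.kron, [op if j == i else I2
                              for j in range(N)])
    return S

N = 4
Sx, Sz = collective(sx, N), collective(sz, N)
plus = np.array([1, 1]) / np.sqrt(2)   # single spin along +x
psi = reduce(np.kron, [plus] * N)      # coherent spin state

mean_Sx = (psi @ Sx @ psi).real
var_Sz = (psi @ Sz @ Sz @ psi).real - (psi @ Sz @ psi).real**2
print(N * var_Sz / mean_Sx**2)         # -> 1.0
\end{verbatim}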
\section{Equivalence of mode and particle entanglement: Two-mode case}\label{sec:symmetry}
In the following we prove that, if the state of the system is symmetric under the exchange of particle labels and of modes, the two criteria~(\ref{Giova}) and~(\ref{Wine}) are equivalent, namely that
\begin{equation}\label{GWequi}
\mathcal{G}^2 = \xi^2 \;.
\end{equation}
To show this, imagine a system of $i=1,\dots,N$ particles with an internal (spin) and an external (mode) dof. We associate to each particle the operator $s_{\vec{u}}^{(i)} \Pi^{I,(i)}$, where $s_{\vec{u}}^{(i)}$ is the spin operator along direction $\vec{u}$, and $\Pi^{I,(i)}$ is the projection operator of the external dof onto one of the $M=2$ modes labeled as $I=1\equiv A$ and $I=2\equiv B$.
Let us now assume the following properties valid for all $i,j=1,\dots,N$, and $I,J=1,\dots,M \geq 2$:
\begin{enumerate}[label=(\roman*)]
\item \textbf{dof's factorize}: there are no correlations between the spin and the spatial dof, \eg $\avg{s_{\vec{u}}^{(i)}\Pi^{I,(i)}}=\avg{s_{\vec{u}}^{(i)}}\avg{\Pi^{I,(i)}}$, $\avg{s_{\vec{u}}^{(i)}\Pi^{I,(i)}s_{\vec{v}}^{(j)}\Pi^{J,(j)}}=\avg{s_{\vec{u}}^{(i)}s_{\vec{v}}^{(j)}}\avg{\Pi^{I,(i)}\Pi^{J,(j)}}$;
\item \textbf{particle symmetry}: the state is invariant under permutations of the particle labels, \eg $\avg{s_{\vec{u}}^{(i)}}=\avg{s_{\vec{u}}^{(1)}}$, $\avg{s_{\vec{u}}^{(i)}s_{\vec{v}}^{(j)}}=\avg{s_{\vec{u}}^{(1)}s_{\vec{v}}^{(2)}}$ and $\avg{\Pi^{I,(i)}}=\avg{\Pi^{I,(1)}}$, $\avg{\Pi^{I,(i)}\Pi^{J,(j)}}=\avg{\Pi^{I,(1)}\Pi^{J,(2)}}$, for $i\neq j$;
\item \textbf{symmetric splitting}: a) there is equal probability for a particle to be found in any of the modes, \ie $\avg{\Pi^{I,(i)}}=1/M$, and b) these probabilities are independent, \ie $\avg{\Pi^{I,(i)}\Pi^{J,(j)}}=\avg{\Pi^{I,(i)}}\avg{\Pi^{J,(j)}}$.
\end{enumerate}
Let us mention that these assumptions are relevant for a number of experimental systems. For example, they apply to an ensemble of identical atoms distributed symmetrically in a set of external modes, as in Refs.~\cite{FadelSplitBEC,KunkelSplitBEC,LangeSplitBEC}.
Exploiting assumptions (i) and (ii), we can now compute expectation values of collective spin observables as
\begin{align}
\avg{ S_{\vec{u}}^I } &= \sum_{i=1}^N \avg{s_{\vec{u}}^{(i)} \Pi^{I,(i)}}\nonumber\\
&= \avg{\Pi^{I}} N \avg{s_{\vec{u}}} \;. \label{evS}
\end{align}
Note that here, and in the following, we use the shorthand notation $\avg{s_{\vec{u}}^{(1)}}=\avg{s_{\vec{u}}}$ and $\avg{\Pi^{I,(1)}}=\avg{\Pi^{I}}$.
Similarly, we obtain that correlators take the form (see Appendix~\ref{app:A1corr} for details)
\begin{align}
\avg{ S_{\vec{u}}^I S_{\vec{v}}^J } = & \delta_{I,J} \avg{\Pi^{I}} N \avg{s_{\vec{u}}^{(1)} s_{\vec{v}}^{(1)}} \nonumber\\
& + \avg{\Pi^{I}} \avg{\Pi^{J}} N(N-1) \avg{s_{\vec{u}}^{(1)} s_{\vec{v}}^{(2)}} \;. \label{evSS}
\end{align}
To prove now the relation Eq.~\eqref{GWequi}, we use Eqs.~(\ref{evS}) and~(\ref{evSS}) to rewrite the variance appearing in Eq.~\eqref{Giova} as (see the Appendix~\ref{app:A2VarY} for a detailed derivation)
\begin{equation}\label{VyG}
\ensuremath{\, \text{Var}}\left[S_y^A - S_y^B \right] = 2 \langle\Pi^{A}\rangle \dfrac{N}{4} = \dfrac{N}{4} \;\quad\text{using (iii)}.
\end{equation}
For the other variance in Eq.~\eqref{Giova} we can simply write
\begin{equation}\label{VzG}
\ensuremath{\, \text{Var}}\left[S_z^A + S_z^B \right] = \ensuremath{\, \text{Var}}\left[S_z\right] \;.
\end{equation}
The same holds also for the denominator, which can be written as
\begin{equation}\label{xG}
\left( \vert\langle S_x^A\rangle\vert + \vert\langle S_x^B\rangle\vert \right)^2 = \vert\langle S_x\rangle\vert^2 \;,
\end{equation}
since symmetry implies that $\langle S_x^A\rangle$ and $\langle S_x^B\rangle$ have the same sign. It is now straightforward to combine the results of Eqs.~(\ref{VyG}), (\ref{VzG}), and~(\ref{xG}) to see that, under the assumptions introduced before, we obtain Eq.~\eqref{GWequi}.
This result highlights a correspondence of the detected mode entanglement in two addressable modes and the detected particle entanglement in fully symmetric many-body quantum states. In the following we further generalize this criterion to an arbitrary number of modes $M$, and show how full multipartite inseparability can be detected with these methods.
\section{Generalization to multipartite entanglement}
When considering a collection of systems, entanglement can emerge in different partitions of the ensemble, \ie across any separation of the ensemble into groups of systems. Let us denote one specific partition as $\Lambda=\{\mathcal{A}_1,\dots,\mathcal{A}_k\}$, where the $\mathcal{A}$'s are non-overlapping groups of $1\leq|\mathcal{A}_q|\leq \Xi$ systems, such that $\sum_{q=1}^k|\mathcal{A}_q|=\Xi$. A $\Xi$-partite quantum state $\rho$ is called $\Lambda$-separable if it can be written as
\begin{align}\label{eq:Lsep}
\rho_{\Lambda-\mathrm{sep}}=\sum_{\gamma}p_{\gamma}\rho_{\gamma}^{(\mathcal{A}_1)}\otimes\cdots\otimes\rho_{\gamma}^{(\mathcal{A}_{k} )} \;,
\end{align}
where the $\rho_{\gamma}^{(\mathcal{A}_q)}$ are quantum states of the subsystem $\mathcal{A}_q$. For an overview of different classes of entangled states in multipartite systems, we refer to Appendix~\ref{app:ent}.
\subsection{Inseparability of $M$ modes}
The entanglement criterion~(\ref{Giova}) can be generalized to yield a criterion for $\Lambda$-separable states of an $M$-mode system as follows~\cite{GessnerQuantum}: Any $\Lambda$-separable state must satisfy
\begin{equation}\label{GiovaM}
\mathcal{G}_{\Lambda}^M(\vec{g},\vec{h})^2 := \dfrac{ \ensuremath{\, \text{Var}}\left[\sum_{I=1}^M g_I S_z^{I}\right] \ensuremath{\, \text{Var}}\left[\sum_{I=1}^M h_I S_y^{I}\right]}{\mathcal{B}_{\Lambda}^M(\vec{g},\vec{h})^2} \geq 1 \;,
\end{equation}
where
\begin{align}\label{Bk}
\mathcal{B}_{\Lambda}^M(\vec{g},\vec{h}):=\frac{1}{2}\sum_{q=1}^l\left|\sum_{I\in\mathcal{A}_q}g_Ih_I\langle S^{I}_x\rangle\right| \;.
\end{align}
This bound holds for arbitrary choices of the coefficient vectors $\vec{g}=(g_1,\dots,g_M)$ and $\vec{h}=(h_1,\dots,h_M)$, which can be optimized to obtain the strongest possible witness. A violation of Eq.~\eqref{GiovaM} witnesses inseparability in the partition $\Lambda$.
We may further exclude separability in all partitions $\Lambda$ that
contain at least $k$ subsystems by observing a violation of the single condition
\mathcal{G}_{k}^M(\vec{g},\vec{h}):=\max_{\Lambda:\:l\geq k}\mathcal{G}_{\Lambda}^M(\vec{g},\vec{h}) \geq 1,
\end{align}
where the maximization runs over all partitions that consist of $l\geq k$ subsystems. A violation of the above bound with $k=M$, where each mode is treated as an individual subsystem, indicates that there is entanglement somewhere in the system without specifying how many subsystems are entangled. If the bound is violated for $k=2$, this means that we cannot identify even two separable groups, and we must thus consider the ensemble of all spins as a single entangled system. We remark here that this criterion analyzes each partition on a one-by-one basis, but it does not exclude arbitrary mixtures of separable models for different partitions, which is known as genuine multipartite entanglement~\cite{ReidTeh,HyllusEisert,GuehneToth} (see Appendix~\ref{app:ent} for details).
It is evident that the computation of the bound~(\ref{Gk}) becomes very demanding since the number of possible partitions increases exponentially with $M$. Moreover, identifying a suitable choice for the $\{\vec{g},\vec{h}\}$ introduces additional complexity.
A special case of Eq.~\eqref{GiovaM} is obtained for the choice of $\{\vec{g},\vec{h}\}$ given by
\begin{equation}\label{ghStar}
g_I^{\ast}=1 \;,\qquad h_1^{\ast}=1 \;,\qquad h_{J}^{\ast}=-\frac{1}{M-1} \;,
\end{equation}
for all $I=1,\dots,M$ and $J=2,\dots,M$.
With this choice we note that $g_1^{\ast} h_1^{\ast} = 1$, and $g_{I}^{\ast} h_{I}^{\ast} = -(M-1)^{-1}$ for $I>1$.
Since for the symmetric spin states considered here, the variances in Eq.~(\ref{GiovaM}) do not depend on the partition $\Lambda$, the maximization in Eq.~(\ref{Gk}) affects only the bound~(\ref{Bk}). We thus obtain that $\mathcal{G}_{k}^M(\vec{g},\vec{h})=\mathcal{G}_{\Lambda_{\min}}^M(\vec{g},\vec{h})$, where $\Lambda_{\min}$ is the partition that achieves the minimum
\begin{align}\label{beta}
\beta^M_{k}(\vec{g},\vec{h})=\frac{|\langle S_x^A\rangle|}{2}\min_{\Lambda: l\geq k}\beta^M_{\Lambda}(\vec{g},\vec{h}) \;,
\end{align}
and $\beta^M_{\Lambda}(\vec{g},\vec{h}):=\sum_{q=1}^l\left|\sum_{I\in\mathcal{A}_q}g_Ih_I\right|$. In writing Eq.~(\ref{beta}), we made use of the symmetry property~(\ref{evS}) to limit the optimization procedure to the coefficients $\{\vec{g},\vec{h}\}$. Next, we observe that all contributions of terms from sets with $I>1$ will increase $\beta^M_{\Lambda}(\vec{g}^*,\vec{h}^*)$ whenever they appear in a partition $\Lambda$ that distinguishes them from mode $I=1$, whereas these terms will decrease $\beta^M_{\Lambda}(\vec{g}^*,\vec{h}^*)$ when in a partition $\Lambda$ that lumps them into the set $\mathcal{A}_1$ together with mode $1$. From this argument we also see that it is advantageous to pick a partition that splits the system in as few subsystems as possible. Since for a given $k$, at least $k$ subsystems must be formed, the optimal partition describes $k-1$ single-mode subsystems (with $I>1$) and places all other modes (including $I=1$) into a single subsystem. The minimum bound is thus given by
\begin{align}
\beta^M_{\Lambda}(\vec{g}^*,\vec{h}^*)&=\vert g_1^{\ast} h_1^{\ast} \underbrace{ + \dots+g_{M-(k-1)}^{\ast} h_{M-(k-1)}^{\ast}}_{M-k \;\text{terms}} \vert\notag\\&\quad+\underbrace{ \vert g_{M-(k-2)}^{\ast} h_{M-(k-2)}^{\ast} \vert + \cdots + \vert g_M^{\ast} h_M^{\ast} \vert }_{k-1 \;\text{terms}}\notag\\
& =\dfrac{2(k-1)}{M-1}\;.
\end{align}
In summary, the minimal bound in the denominator of Eq.~\eqref{GiovaM}
\begin{equation}\label{eq:BkM}
\mathcal{B}_k^M(\vec{g}^\ast,\vec{h}^\ast) = \dfrac{1}{2}\left( \dfrac{2(k-1)}{M-1} \right)\abs{\avg{S_x^A}} \;.
\end{equation}
The choice given in Eq.~\eqref{ghStar} yields for the variances
\begin{equation}\label{eq:Vargisi}
\ensuremath{\, \text{Var}}\left[\sum_{I=1}^M g_I S_z^{I}\right] = \ensuremath{\, \text{Var}}\left[ S_z \right] \;,
\end{equation}
and, using again Eqs.~(\ref{evS}) and (\ref{evSS}) with $\avg{\Pi^{I}}=1/M$, that
\begin{equation}\label{varY}
\ensuremath{\, \text{Var}}\left[\sum_{I=1}^M h_I S_y^{I}\right] = \dfrac{N}{4(M-1)} \;.
\end{equation}
A detailed calculation is given in Appendix~\ref{app:VyG}.
Using the definition of $\xi^2$~(\ref{Wine}), together with Eqs.~(\ref{eq:BkM}), (\ref{eq:Vargisi}) and (\ref{varY}), we can express Eq.~(\ref{GiovaM}) as
\begin{align}
\mathcal{G}_k^M(\vec{g}^\ast,\vec{h}^\ast)^2
&= \xi^2 \dfrac{M^2(M-1)}{4 (k-1)^2} \geq 1 \;. \label{GWequiM}
\end{align}
From this, we conclude that any state that is separable into $k$ subsystems or more must satisfy
\begin{align}
\xi^2 \geq \dfrac{4 (k-1)^2}{M^2(M-1)} \;. \label{kSepGCrit}
\end{align}
Therefore, observing $\xi^2 < 4 (k-1)^2/(M^2(M-1))$ implies inseparability in every partition into $k$ or more subsystems (see blue lines in Fig.~\ref{fig:Mkp}). This is the main result of this section. It implies, \eg that mode entanglement $(k=M)$ is observed among $M$ modes whenever $M < 2( 1 + \sqrt{1-\xi^2})/\xi^2$ (black line in Fig.~\ref{fig:Mkp}).
Since any state can be considered as a single indivisible system, the bound becomes trivial for $k=1$ and it can never be violated in this case. Generally, meaningful values for $k$ range from $2$ to $M$, and the smaller $k$ is, the more modes are recognized as entangled. If the bound is violated for $k=2$ this implies that there is no separable partition at all, and hence all $M$ modes must be entangled. For $M=2$, the criterion Eq.~\eqref{GWequiM} reduces to Eq.~\eqref{GWequi}, as expected.
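As a concrete illustration of Eq.~\eqref{kSepGCrit}, consider a
splitting into $M=3$ modes: entanglement somewhere among the three
modes ($k=M=3$) is revealed by $\xi^2 < 8/9 \approx 0.89$, whereas
full three-mode inseparability ($k=2$) requires the much stronger
squeezing $\xi^2 < 2/9 \approx 0.22$.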
We recall that our conclusions are based on specific entanglement witnesses, \ie sufficient conditions for entanglement. Hence, these results only put a lower bound on the actual number of entangled modes.
\subsection{Limits on global and local spin-squeezing}
Let us now investigate the lower bound for the spin-squeezing coefficient $\xi^2$. As we show in Appendix~\ref{app:xilimit}, an arbitrary spin-$S$ system always satisfies the bound
\begin{equation}\label{eq:minxi2S}
\xi^2\geq \frac{1}{1+S} \;,
\end{equation}
where the equality can be approached asymptotically.
Furthermore, we can define the local spin-squeezing coefficient
\begin{equation}\label{WineLoc}
\xi^2_{I} := \dfrac{N^I \ensuremath{\, \text{Var}}\left[S_z^I\right]}{\vert\langle S_x^I\rangle\vert^2} \;,
\end{equation}
and show that there exists a limit on the squeezing that can be achieved locally from the splitting of a symmetric squeezed state. Under the assumptions (i), (ii) and (iiib) of Sec.~\ref{sec:symmetry}, the local squeezing obeys the bound
\begin{equation}\label{locB}
\xi^2_{I} \geq 1 - \avg{\Pi^{I}} \;,
\end{equation}
where the equality can be approached asymptotically in the limit $S\rightarrow\infty$ (see Appendix~\ref{app:LCLxilimit}). Let us emphasize that in the derivation of Eq.~\eqref{locB} we did not use assumption (iiia), meaning that the inequality holds even for asymmetric splittings into $M$ modes, \ie for more general cases where $\avg{\Pi^{I}}$ depends on $I$.
To conclude, we can also show that there is an exact relation between the global and the local spin-squeezing coefficients, namely (see Appendix~\ref{app:GLxiRelation})
\begin{equation}\label{eq:GlobLoc}
\xi^2 = \sum_{I=1}^M \xi^2_I - \dfrac{N^2 (M-1)}{4 \avg{S_x}^2} \;.
\end{equation}
Also here, analogously to Eq.~\eqref{locB}, it is worth emphasizing that Eq.~\eqref{eq:GlobLoc} holds even for asymmetric splitting where $\avg{\Pi^{I}}$ depends on $I$. However, in the case where $\avg{\Pi^{I}}=1/M$, Eq.~\eqref{eq:GlobLoc} can be used in conjunction with Eq.~\eqref{kSepGCrit} to relate local squeezing and collective polarization to mode inseparability.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{WineModeParticleEnt.pdf}
\caption{Bounds for mode and particle entanglement from collective spin measurements. We compare the Wineland \textit{et al.} spin-squeezing coefficient to the limits on particle entanglement~\eqref{WkprodTight} (red lines) and mode entanglement for splitting into $M$ modes~\eqref{kSepGCrit} (blue lines) as a function of $M$. For values of $\xi^2$ below the top black line (where $k=M$), mode entanglement is revealed. Upon crossing additional blue lines, the entanglement does not allow us to split the system into more than $k$ separable sets of modes, where $k$ is indicated in blue next to the lines. The yellow region corresponds to $k=2$, and it indicates where all modes must be treated as one entangled entity that cannot be partitioned into separable groups.}
\label{fig:Mkp}
\end{figure}
\subsection{Multipartite entanglement detection from spin squeezing}
\subsubsection{State-independent multipartite entanglement bounds}
Spin squeezing provides quantitative bounds on the number of entangled particles from collective measurements. Furthermore, the detected entanglement is relevant for the improvement of measurement precision in quantum metrology. To see this, recall that the quantum Fisher information $\mathcal{F}_Q[\rho,H]$~\cite{BraunsteinCaves} quantifies the metrological sensitivity of a quantum state $\rho$ under an evolution generated by the Hamiltonian $H$~\cite{HelstromBook,GiovannettiReview,RMP}. By virtue of the inequality~\cite{PS09}
\begin{align}\label{eq:xiFQ}
\xi^{-2}\leq \frac{\mathcal{F}_Q[\rho,S_y]}{N},
\end{align}
the inverse spin-squeezing coefficient $\xi^{-2}$ can be interpreted as a Gaussian approximation to the full sensitivity, normalized by the total number of particles~\cite{GessnerPRL2019}. The detection of metrologically useful entanglement makes use of the fact that $p$-producible $N$-qubit quantum states (\ie states that contain at most $p$ entangled particles; see Appendix~\ref{app:ent}) can only achieve sensitivities up to $\mathcal{F}_Q[\rho_{p},S_y]\leq pN$~\cite{PS09,HyllusToth}. Combining this bound with the inequality~(\ref{eq:xiFQ}), we find that a violation of~\cite{GessnerPRA2017}
\begin{align}\label{Wkprod}
\xi^2\geq \frac{1}{p} \;,
\end{align}
implies the presence of entanglement among more than $p$ particles (see blue lines in Fig.~\ref{fig:MSF}). For non-integer $N/p$ a small improvement of this bound can be achieved using a more general expression~\cite{HyllusToth}.
Interestingly, we can derive a much tighter bound than Eq.~\eqref{Wkprod}. This is possible because the limit $1/p$ arises from the bound on the quantum Fisher information that is achieved only by products of maximally entangled Greenberger-Horne-Zeilinger (GHZ) states~\cite{HyllusToth}, for which $\xi^2$ actually diverges, so that Eq.~\eqref{Wkprod} can never be saturated. Instead, by making use of the asymptotically achievable limit~(\ref{eq:minxi2S}), we can show that a violation of
\begin{equation}\label{WkprodTight}
\xi^2\geq \frac{1}{1+p/2}
\end{equation}
implies entanglement of more than $p$ spins among the total number of $N$ spin-$1/2$ particles. This is the central result of this section and it follows as a consequence of convexity and subadditivity properties of the inverse spin-squeezing coefficient $\xi^{-2}$. The details can be found in Appendix~\ref{app:tighterbound}, where we also demonstrate that for non-integer $N/p$, the bound~\eqref{WkprodTight} can be improved to the expression:
\begin{equation}\label{WkprodTightNp}
\xi^{2}\geq \frac{N}{N_p \frac{p^2}{2} +\frac{r^2}{2}+N},
\end{equation}
with $N_p=\lfloor N/p\rfloor$ and $r=N-pN_p$.
We emphasize that, contrary to Eq.~\eqref{Wkprod}, this bound can be (asymptotically) saturated, and for $p$ large it is higher than Eq.~\eqref{Wkprod} by a factor of two (see red solid lines in Fig.~\ref{fig:MSF}).
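As an explicit comparison, for $p=2$ the two bounds coincide at
$\xi^2 = 1/2$, while for $p=10$ Eq.~\eqref{WkprodTight} gives
$\xi^2 \geq 1/6 \approx 0.17$, noticeably above the value $1/10$
obtained from Eq.~\eqref{Wkprod}.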
In a system with $p$ spin-$1/2$ particles, the bound~(\ref{WkprodTight}) can be approached asymptotically in the limit of infinite squeezing and vanishing polarization $\avg{S_x}$. Such states are known as Twin-Fock states $|\Psi_{\rm TF}\rangle$ (see, e.g., Ref.~\cite{Klempt} for an experimental study of their metrological entanglement) and the bound~(\ref{WkprodTight}) expresses their full sensitivity as quantified by the quantum Fisher information $\mathcal{F}_Q[|\Psi_{\rm TF}\rangle,S_y]=p(1+p/2)$ [compare Eqs.~(\ref{WkprodTight}) and~(\ref{eq:xiFQ})]. By quantifying the maximum sensitivity achievable with Gaussian measurements, the result~(\ref{WkprodTight}) implies that any sensitivity of $p$ spin-$1/2$ particles that exceeds this bound, \ie any state with $\mathcal{F}_Q[\rho,S_y]>p(1+p/2)$, must necessarily be non-Gaussian in the sense that its metrological features cannot be captured through spin-squeezing coefficients.
\subsubsection{State-dependent multipartite entanglement bounds}
In practice, in order to access $\xi^2$ one actually measures $\ensuremath{\, \text{Var}}[S_z]$ and $\avg{S_x}$ separately, rather than the ratio $\ensuremath{\, \text{Var}}[S_z]/\avg{S_x}^2$ itself. Having independent knowledge of these two quantities, it is possible to construct a stronger multipartite entanglement witness than Eq.~\eqref{WkprodTight}. S\o{}rensen and M\o{}lmer~\cite{SorensenMolmer} showed that states with no more than $p$-partite entanglement satisfy
\begin{align}\label{MS}
\ensuremath{\, \text{Var}}\left[ S_z \right]&\geq
S \, F_{S_p}\left[\frac{\langle S_x\rangle}{S}\right],
\end{align}
where $S_p=p/2$, and the functions $F_{S}[x]$ are obtained (\eg numerically) by minimizing the variance $\ensuremath{\, \text{Var}}[S_z]$ of a spin $S$ as a function of its mean spin $\avg{S_x}$~\cite{SorensenMolmer}. This approach is constructed such that it detects the largest family of entangled states on the basis of $\ensuremath{\, \text{Var}}[S_z]$ and $\avg{S_x}$. However, since the metrological sensitivity is determined only by the ratio of these two quantities, the multipartite entanglement detected by this approach is not immediately linked to a metrological advantage. Yet, the criterion~(\ref{MS}) is more powerful than Eq.~(\ref{WkprodTight}), since it makes use of the additional information provided by $\avg{S_x}$. Indeed, we demonstrate in Appendix~\ref{app:MSWINE} that in the limit $\avg{S_x}\to 0$, the condition~(\ref{MS}) coincides with~(\ref{WkprodTight}). Since this corresponds to the limit in which the criterion~(\ref{MS}) is least effective, we can interpret this limit as ignoring the additional information that is provided by the mean spin length, assuming the worst-case scenario. This can be seen in Fig.~\ref{fig:MSF}, where we compare the constant bound for $p$-partite entanglement obtained from Eq.~\eqref{WkprodTight} (red continuous lines) to the state-dependent bound from Eq.~\eqref{MS} (red dashed lines).
We show in Appendix~\ref{app:SM} how condition~\eqref{MS} can be improved for non-integer $N/p$, and how the resulting expression reproduces the bound~(\ref{WkprodTightNp}) in the limit of vanishing polarization $\avg{S_x}$. Condition~\eqref{MS} also allows one to identify genuine $p$-partite entanglement \cite{ReidHeDrummundFrontiers2011}, meaning that one can exclude convex combinations of $(p-1)$-producible states (see Appendices \ref{app:ent} and \ref{app:genpMS}). Moreover, it can be generalized to systems with fluctuating particle numbers~\cite{Hyllus2}.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{MolmerSorensenVsFisher.pdf}
\caption{Detecting multiparticle entanglement from spin squeezing. A number $p$ of entangled particles is detected when the Wineland \textit{et al.} spin-squeezing coefficient is lower than the respective lines. The solid blue lines are the constant bounds obtained from the Fisher information, Eq.~\eqref{Wkprod}. The solid red lines are improved bounds obtained from minimizing $\xi^2$ for fixed $S=p/2$, Eq.~\eqref{WkprodTight}. These bounds are independent of the polarization $\avg{S_x}/S$. In contrast, the dashed lines are state-dependent bounds obtained from the S{\o}rensen-M{\o}lmer relation Eq.~\eqref{MS}, where $\xi^2$ is minimized numerically for a fixed $S=p/2$ and polarization \cite{SorensenMolmer}. The state-independent bounds~\eqref{WkprodTight} can be recovered from this approach in the limit of vanishing contrast.}
\label{fig:MSF}
\end{figure}
\section{Relation with two-way EPR steering}
For a bipartite scenario ($M=2$) the criterion Eq.~\eqref{Giova} can be extended to detect also a stronger form of entanglement, namely EPR steering. Specifically, states that do not allow for steering of system $B$ by $A$ satisfy the condition~\cite{CavalcantiReid,Bowen,ReidRMP}
\begin{equation}\label{GiovaS}
\mathcal{R}^2 := \dfrac{4 \ensuremath{\, \text{Var}}\left[S_z^A + S_z^B \right] \ensuremath{\, \text{Var}}\left[S_y^A - S_y^B \right] }{\vert\langle S_x^B\rangle\vert^2} \geq 1 \;.
\end{equation}
Therefore, a violation of Eq.~\eqref{GiovaS} reveals steering of $B$ by $A$. A similar criterion holds for steering of $A$ by $B$.
In the following we will focus on the symmetric scenario, where measurements in system $A$ and $B$ yield the same results. In this case we have $\avg{ S_x^A}=\avg{S_x^B}$ which allows us to express the condition~(\ref{GiovaS}) equivalently as
\begin{equation}\label{RGW}
\mathcal{R}^2 = 4\mathcal{G}^2 = 4\xi^2 \geq 1 \;.
\end{equation}
Because of symmetry, a violation of this relation directly implies two-way steering between modes $A$ and $B$.
Combined with our results from the previous section, we conclude that if we want to observe steering through Eq.~\eqref{GiovaS}, we need to satisfy the condition
$\xi^2<1/4$, which implies entanglement of $p>6$ particles. However, note that Eq.~\eqref{GiovaS} can be generalized by including free coefficients in front of the spin operators, similarly to Eq.~\eqref{GiovaM}. This allows the detection of EPR correlations with less squeezing \cite{FadelSplitBEC}, but the correspondence given in Eq.~\eqref{RGW} is lost.
Interestingly, a bipartite EPR criterion can also be derived from the S\o{}rensen-M\o{}lmer bounds Eq.~\eqref{MS}. We show in Appendix~\ref{app:SMepr} that a violation of
\begin{equation}\label{eq:eprMS}
\ensuremath{\, \text{Var}}[S_z] \geq S_B \, F_{S_B}\left[ \dfrac{\langle S_x^{B} \rangle }{S_B} \right]
\end{equation}
implies steering of $B$ by $A$. This criterion is easier to violate than the condition $\ensuremath{\, \text{Var}}[S_z] \geq S \, F_{S_B}\left[ \dfrac{\langle S_x^{B} \rangle }{S} \right]$ that was derived in Ref.~\cite{ReidHeDrummundFrontiers2011}. However, it is still very demanding to witness steering with this approach, since no assumptions can be made about the properties of system $A$.
\vspace{5mm}
\section{Discussions and conclusions}
In this work we established relations between criteria for multipartite entanglement of particles and modes based on the measurement of first and second moments of collective spin observables. In the case of symmetric spin states, we found that the Wineland \textit{et al.} spin-squeezing coefficient~\cite{Wineland} coincides with a witness of mode entanglement that is based on Heisenberg-Robertson-type uncertainty relations with modified bounds~\cite{Giovannetti}. This correspondence can be extended to reveal a direct relation between the spin-squeezing coefficient of symmetric spin states and a two-way EPR-steering criterion of two addressable modes.
We further revealed the relation between different multipartite entanglement criteria based on spin squeezing. The Wineland \textit{et al.} spin-squeezing coefficient~\cite{Wineland} captures the metrological sensitivity gain and can be used to study multiparticle entanglement. The approach by S\o{}rensen and M\o{}lmer~\cite{SorensenMolmer} makes use of the independent knowledge of the spin polarization to derive optimized state-dependent bounds on the spin-squeezing coefficient for multipartite entangled states. Alternatively, state-independent bounds can be derived by exploiting the relation between spin squeezing and the Fisher information~\cite{PS09}, but these bounds are not saturable by Gaussian states~\cite{HyllusToth}. We addressed this limitation by deriving state-independent bounds that can be asymptotically saturated. This provides the tightest state-independent bounds on the spin-squeezing coefficient for the detection of multipartite entangled states. Interestingly, we observe that these bounds coincide with those of S\o{}rensen and M\o{}lmer \cite{SorensenMolmer} in the limit of vanishing polarization.
Moreover, we identified a simple expression for the maximum spin squeezing that can be achieved locally from the splitting of a squeezed state. Our results provide bounds on the amount of addressable multimode entanglement that can be generated by distributing identical particles into external modes. For example, they apply to nonclassical states of BECs that are split into different spatial modes, as in Refs.~\cite{FadelSplitBEC,KunkelSplitBEC,LangeSplitBEC}.
\section{Acknowledgments}
We thank Qiongyi He, Margaret Reid, Run Yan Teh and Philipp Treutlein for useful discussions. MF acknowledges support by the Swiss National Science Foundation. MG acknowledges funding by the LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*.
\clearpage
\begin{widetext}
Frobenius algebras play an important role in the study of topological quantum field theory (TQFT). In particular, commutative Frobenius algebras are known to classify $2$-dimensional TQFTs \cites{abrams, dijkgraaf:thesis, kock-book}. Noncommutative Frobenius algebras appear in the classification of extended TQFTs \cite{schommer-pries:thesis}, open-closed TQFTs \cites{lauda-pfeiffer, BCT}, and lattice TQFTs \cites{lauda-pfeiffer:statesum, fhk}.
The appearance of Frobenius algebras in these contexts is closely related to the fact that Frobenius algebras can be defined completely in terms of algebraic data in the category of vector spaces. This fact allows the definition to be generalized by replacing the category of vector spaces with an arbitrary monoidal category $\mathcal{C}$, giving the notion of a \emph{Frobenius object in $\mathcal{C}$}. (This point of view appears, for example, in \cites{lauda-pfeiffer, lauda-pfeiffer:statesum, fuchs-stigner,kock-book, heunen-vicary:book, walton-yadav}.) If $\mathcal{C}$ is symmetric monoidal, then one can define commutative Frobenius objects, which correspond to $2$-dimensional oriented topological field theories (TFTs) with values in $\mathcal{C}$.
In this paper, we study Frobenius objects in the monoidal category $\catname{Span}$, where the objects are sets, the morphisms are isomorphism classes of spans of sets, and the monoidal structure is given by the Cartesian product. There are several reasons that $\catname{Span}$ is an interesting category to study in this context.
\begin{itemize}
\item Our primary motivation is that $\catname{Span}$ is a good set-theoretic ``toy model'' for the Wehrheim-Woodward symplectic category \cite{ww}; this is explained in Section \ref{sec:spanww}. It is well-known that symplectic manifolds with compatible algebraic structures (such as symplectic groupoids \cite{cdw}) are closely connected to constructions of topological sigma models (such as the Poisson sigma model \cites{aksz, cattaneo-felder:poissonsigma,schaller-strobl}). It will be interesting to see a similar relationship established using functorial methods, and the present work provides a step in that direction.
\item Since $\catname{Span}$ is symmetric monoidal, one can further consider commutative Frobenius objects, which classify $2$-dimensional TFTs with values in $\catname{Span}$. The resulting topological invariants of closed surfaces take values in the monoid of cardinalities. In particular, in the finite case, the invariants are natural numbers.
There are simple examples where the invariants are sufficient to distinguish surfaces of different genus (see Section \ref{sec:examples}). This makes for a ``proof of concept'' that $\catname{Span}$ can be a useful target category for the more interesting case of $3$-dimensional TFTs.
\item If we restrict to the subcategory of finite spans, then there is a symmetric monoidal functor to the category $\catname{Vect}_\Bbbk$ of vector spaces over a field $\Bbbk$. Thus, a finite Frobenius object in $\catname{Span}$ gives a Frobenius algebra that is universal, in the sense that it is defined over any base field. Some well-known examples of Frobenius algebras, such as matrix algebras and group algebras, arise in this manner.
\end{itemize}
Our main result is a one-to-one correspondence, up to isomorphism, between Frobenius objects in $\catname{Span}$ and simplicial sets that are equipped with an automorphism of the set of $1$-simplices, satisfying certain properties. In a sense, this correspondence gives us a way to relate TFTs to geometric structures. A simple example is that one can form a Frobenius object in $\catname{Span}$ from a group $G$, and the corresponding simplicial set is the nerve of $G$, whose realization is the classifying space $BG$. The automorphism of the set of $1$-simplices can be the inverse map, or more generally, it can be the inverse map ``twisted'' by an element of $G$.
More generally, one can form a Frobenius object in $\catname{Span}$ from a groupoid. This fact is related to \cite{hcc} (also see \cite{heunen-vicary:book}), where there is a correspondence between groupoids and special dagger Frobenius objects in $\catname{Rel}$, the category of sets and relations, and \cite{Mehta-Zhang}, where Frobenius objects in $\catname{Rel}$ were similarly described in terms of simplicial sets. However, the results here aren't strictly a generalization of the results of \cite{Mehta-Zhang}, since not every Frobenius object in $\catname{Rel}$ can be lifted to a Frobenius object in $\catname{Span}$.
There is a close relationship between our work and that of Stern \cite{stern:2segal}, who proved an $\infty$-categorical equivalence between monoid objects in $\catname{Span}(\mathcal{C})$ and $2$-Segal simplicial objects in $\mathcal{C}$, and between Calabi-Yau algebra objects in $\catname{Span}(\mathcal{C})$ and $2$-Segal cyclic objects in $\mathcal{C}$. The objects that Stern studies are equipped with higher coherence data, making them more restrictive than ours, and thus our results don't directly follow from his. Nonetheless, Stern's work implies a rich source of examples of Frobenius objects in $\catname{Span}$, coming from $2$-Segal cyclic sets, and our work suggests directions in which Stern's results can be extended. We give more details about the relationship to $2$-Segal structures in Sections \ref{sec:2segal} and \ref{sec:symmetric}.
This work is also related, at least in spirit, to recent work of Calaque, Haugseng, and Scheimbauer \cites{calaque:lagrangian, calaque:threelectures, Haugseng, CPTTV, CHS}
on constructing (extended) TFTs with values in a version of the symplectic category from shifted symplectic structures via the AKSZ formalism. TFTs with values in $\catname{Span}$ reveal a shadow of these constructions in a context where it is easier to describe explicit examples.
The structure of the paper is as follows. In Section \ref{sec:spanandww}, we give a brief overview of the category $\catname{Span}$, and we explain how $\catname{Span}$ is related to the Wehrheim-Woodward construction. In Section \ref{sec:monoids}, we consider monoids in $\catname{Span}$, and we prove that they are in correspondence with simplicial sets satisfying certain properties. In Section \ref{sec:frob}, we consider Frobenius objects in $\catname{Span}$, and we prove that a Frobenius structure on a monoid can be encoded by an automorphism of the set of $1$-simplices of the corresponding simplicial set, satisfying certain properties. In Section \ref{sec:examples}, we describe some examples of commutative Frobenius objects in $\catname{Span}$, including examples associated to abelian groups and two infinite families of $2$-element examples, and we find explicit formulas for the associated topological invariants. Finally, in Section \ref{sec:vect}, we briefly describe how finite Frobenius objects in $\catname{Span}$ give rise to Frobenius algebras.
\subsection*{Acknowledgements}
The authors would like to thank Chris Heunen, Adele Long, Sophia Marx, and Sasha Peterka for helpful conversations on topics related to the paper. We especially thank Walker Stern for suggesting the relationship to $2$-Segal conditions, and for helping us figure out the specifics thereof. I.C. and R.A.M. would also like to thank the LEGO Group for entertaining our children long enough for us to finish this work during the pandemic.
\section{Spans and the Wehrheim-Woodward construction}\label{sec:spanandww}
In this section, we review the definition of the category of spans, and we explain why it can be viewed as a set-theoretic model for the symplectic category. Some references for spans are \cites{benabou, cks, dawson-pare-pronk:universal}.
\subsection{The category of spans}\label{sec:span}
Given two sets $X,Y$, a \emph{span} from $X$ to $Y$ is a triplet $(A,f_1,f_2)$, where $A$ is a set, $f_1$ is a map from $A$ to $X$, and $f_2$ is a map from $A$ to $Y$. It can be helpful to visualize a span as a diagram as in Figure \ref{fig:span}.
\begin{figure}[th]
\begin{tikzcd}
& A \arrow{dr}{f_2} \arrow[swap]{dl}{f_1} & \\
X & & Y
\end{tikzcd}
\caption{A span.}
\label{fig:span}
\end{figure}
Two spans $(A,f_1,f_2)$, $(A',f_1',f_2')$ from $X$ to $Y$ are considered isomorphic if there exists a bijection $\phi: A \to A'$ such that the diagram
\begin{equation}
\label{diag:isospan}
\begin{tikzcd}
& A \arrow[swap]{drr}{f_2} \arrow[swap]{dl}{f_1} \arrow{r}{\phi} & A' \arrow{dr}{f_2'} \arrow{dll}{f_1'} & \\
X & & & Y
\end{tikzcd}
\end{equation}
commutes.
The category of spans, denoted $\catname{Span}$, is defined as follows.
\begin{itemize}
\item The objects of $\catname{Span}$ are sets.
\item If $X$ and $Y$ are sets, then a morphism from $X$ to $Y$ is an isomorphism class of spans from $X$ to $Y$.
\item Composition of morphisms is given by pullback; see Figure \ref{fig:composition}.
\begin{figure}[th]
\begin{tikzcd}
&& A \bitimes{f_2}{g_1} B \arrow[swap]{dl}{p_1} \arrow{dr}{p_2} && \\
& A \arrow[swap]{dl}{f_1} \arrow{dr}{f_2} && B \arrow[swap]{dl}{g_1} \arrow{dr}{g_2} & \\
X && Y && Z
\end{tikzcd}
\caption{Composition of spans.}
\label{fig:composition}
\end{figure}
\item The Cartesian product endows $\catname{Span}$ with the structure of a monoidal category.
\end{itemize}
We use the notation $f:X \spanto Y$ to denote a morphism $f$ from $X$ to $Y$ in $\catname{Span}$. Because the span representing a morphism in $\catname{Span}$ is only well-defined up to isomorphism, it can be difficult to extract equations involving maps of sets from equations involving morphisms in $\catname{Span}$. However, there are some identities in $\catname{Span}$ that can be upgraded to identities at the level of sets.
For any set $X$, the identity morphism $\mathbf{1}: X \spanto X$ has a canonical representative
\[
\begin{tikzcd}
& X \arrow[swap]{dl}{1} \arrow{dr}{1} & \\
X && X
\end{tikzcd}
\]
through which all other representatives uniquely factor. Furthermore, the compositions of a span $(A, f_1, f_2)$ with identity morphisms have natural representatives
\[
\begin{tikzcd}
&& A \arrow[swap]{dl}{1} \arrow{dr}{f_2}&& \\
& A \arrow[swap]{dl}{f_1} \arrow{dr}{f_2} && Y \arrow[swap]{dl}{1} \arrow{dr}{1} & \\
X && Y && Y
\end{tikzcd}
\]
and
\[
\begin{tikzcd}
&& A \arrow[swap]{dl}{f_1} \arrow{dr}{1}&& \\
& X \arrow[swap]{dl}{1} \arrow{dr}{1} && A \arrow[swap]{dl}{f_1} \arrow{dr}{f_2} & \\
X && X && Y
\end{tikzcd}
\]
where the identity map on $A$ gives an isomorphism with $(A, f_1, f_2)$.
One of the properties of a monoidal category is the following ``slide move''. If $\alpha: X \spanto Y$ and $\beta: W \spanto Z$ are morphisms in $\catname{Span}$, then $(\mathbf{1} \times \beta) \circ (\alpha \times \mathbf{1}) = \alpha \times \beta = (\alpha \times \mathbf{1}) \circ (\mathbf{1} \times \beta)$. If $\alpha$ and $\beta$ are represented by $(A, f_1, f_2)$ and $(B, g_1, g_2)$, respectively, then $(\mathbf{1} \times \beta) \circ (\alpha \times \mathbf{1})$ is represented by
\[
\begin{tikzcd}
&& A \times B \arrow[swap]{dl}{1 \times g_1} \arrow{dr}{f_2 \times 1} && \\
& A \times W \arrow[swap]{dl}{f_1 \times 1} \arrow{dr}{f_2 \times 1} && Y \times B \arrow[swap]{dl}{1 \times g_1} \arrow{dr}{1 \times g_2} & \\
X \times W && Y \times W && Y \times Z
\end{tikzcd}
\]
and $(\alpha \times \mathbf{1}) \circ (\mathbf{1} \times \beta)$ is represented by
\[
\begin{tikzcd}
&& A \times B \arrow[swap]{dl}{f_1 \times 1} \arrow{dr}{1 \times g_2} && \\
& X \times B \arrow[swap]{dl}{1 \times g_1} \arrow{dr}{1 \times g_2} && A \times Z \arrow[swap]{dl}{f_1 \times 1} \arrow{dr}{f_2 \times 1} & \\
X \times W && X \times Z && Y \times Z
\end{tikzcd}
\]
where the identity map on $A \times B$ gives an isomorphism between the above two compositions. The conclusion is that, if we can recognize two spans as being related via composition with the identity and/or the slide move, as given above, then we know that the spans are not only isomorphic, but equal.
Finally, the following basic fact about $\catname{Span}$ will be used later in the paper. We note that this proof is similar to that of \cite{Haugseng}*{Lemma 8.2}, but specialized to the present situation.
\begin{prop}\label{prop:spaniso}
A span $(A, f_1, f_2)$ represents an isomorphism in $\catname{Span}$ if and only if $f_1$ and $f_2$ are bijections.
\end{prop}
\begin{proof}
A span $(A, f_1, f_2)$ from $X$ to $Y$ represents an isomorphism if and only if there exists a span $(B, g_1, g_2)$ from $Y$ to $X$ such that the compositions
\begin{equation}\label{diag:spaniso1}
\begin{tikzcd}
&& A \bitimes{f_2}{g_1} B \arrow[swap]{dl}{p_1} \arrow{dr}{p_2} && \\
& A \arrow[swap]{dl}{f_1} \arrow{dr}{f_2} && B \arrow[swap]{dl}{g_1} \arrow{dr}{g_2} \\
X && Y && X
\end{tikzcd}
\end{equation}
and
\begin{equation}\label{diag:spaniso2}
\begin{tikzcd}
&& B \bitimes{g_2}{f_1} A \arrow[swap]{dl}{q_1} \arrow{dr}{q_2} && \\
& B \arrow[swap]{dl}{g_1} \arrow{dr}{g_2} && A \arrow[swap]{dl}{f_1} \arrow{dr}{f_2} \\
Y && X && Y
\end{tikzcd}
\end{equation}
are isomorphic to the identity spans on $X$ and $Y$, respectively. Here, we use $p_i$ and $q_i$ to denote the projection maps onto the $i$th component.
The composition \eqref{diag:spaniso1} is isomorphic to the identity span on $X$ if and only if there is a bijection $\phi: X \to A \bitimes{f_2}{g_1} B$ such that
\begin{equation}\label{eqn:spaniso1}
f_1 p_1 \phi = g_2 p_2 \phi = 1.
\end{equation}
Similarly, \eqref{diag:spaniso2} is isomorphic to the identity span on $Y$ if and only if there is a bijection $\phi': Y \to B \bitimes{g_2}{f_1} A$ such that
\begin{equation}\label{eqn:spaniso2}
g_1 q_1 \phi' = f_2 q_2 \phi' = 1.
\end{equation}
If $f_1$ and $f_2$ are bijections, then we can take $B=A$, $g_1 = f_2$, $g_2 = f_1$, and
\begin{align*}
\phi &= (f_1^{-1},f_1^{-1}): X \to A \bitimes{f_2}{f_2} A,\\
\phi' &= (f_2^{-1},f_2^{-1}): Y \to A \bitimes{f_1}{f_1} A,
\end{align*}
and \eqref{eqn:spaniso1} and \eqref{eqn:spaniso2} will be satisfied.
On the other hand, given bijections $\phi$ and $\phi'$ satisfying \eqref{eqn:spaniso1} and \eqref{eqn:spaniso2}, it follows that $f_1$, $f_2$, $g_1$, and $g_2$ are surjective, and that $p_1$, $p_2$, $q_1$, and $q_2$ are injective. Together, the injectivity of $p_2$ and the surjectivity of $g_1$ imply that $f_2$ is injective. Similarly, the injectivity of $q_1$ and the surjectivity of $g_2$ imply that $f_1$ is injective. Thus, $f_1$ and $f_2$ are bijections.
\end{proof}
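As a simple illustration of Proposition \ref{prop:spaniso}, consider morphisms $\bullet \spanto \bullet$. Such a morphism is represented by a set $A$, and composition is represented by the Cartesian product of the representing sets, since a pullback over $\bullet$ is a product. The identity morphism is represented by a one-element set, so $A$ represents an isomorphism if and only if it is a singleton, which is exactly the case where both legs of the span $\bullet \gets A \to \bullet$ are bijections.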
\subsection{The Wehrheim-Woodward construction}\label{sec:spanww}
Given two sets $X,Y$, a \emph{relation} from $X$ to $Y$ is a subset of $X \times Y$. There is a category, denoted $\catname{Rel}$, for which the objects are sets and the morphisms are relations. There is a natural functor from $\catname{Span}$ to $\catname{Rel}$, where a span $(A,f,g)$ is sent to the relation $\{(f(a),g(a)) \mid a \in A\}$.
The functor from $\catname{Span}$ to $\catname{Rel}$ is only a part of a more sophisticated relationship that was discovered by Li-Bland and Weinstein \cite{li-bland-weinstein} as part of an effort to better understand the symplectic category. We provide a very brief review here, referring to \cite{li-bland-weinstein} for details.
A \emph{selective category} is a category with a distinguished class of morphisms, called \emph{suave}, and a distinguished class of composable pairs of suave morphisms, called \emph{congenial}, satisfying certain axioms. The examples that are relevant for the present work are as follows:
\begin{itemize}
\item $\catname{Rel}$, where all of the morphisms are suave, and the congenial pairs are those that are monic. To be precise, if $R \subseteq X \times Y$ is a relation from $X$ to $Y$ and $S \subseteq Y \times Z$ is a relation from $Y$ to $Z$, then the pair $(S,R)$ is \emph{monic} if, for all $(x,z) \in X \times Z$, there exists at most one $y \in Y$ such that $(x,y) \in R$ and $(y,z) \in S$.
\item $\catname{SRel}$ is the category where the objects are symplectic manifolds, and where the morphisms are all set-theoretic relations. The suave morphisms are the Lagrangian relations, and the congenial pairs are those that are strongly transversal \cite{weinstein:ww}.
\end{itemize}
Li-Bland and Weinstein defined a construction that takes a selective category $\mathcal{C}$ and constructs a new category $\WW(\mathcal{C})$. The objects of $\WW(\mathcal{C})$ are the same as those of $\mathcal{C}$. If $X$ and $Y$ are objects, a morphism from $X$ to $Y$ is a finite sequence of suave morphisms
\[ X = X_0 \tolabel{f_1} X_1 \tolabel{f_2} \cdots \tolabel{f_n} X_n = Y,\]
modulo the following equivalence relation. If we write such a sequence as a formal product $f_n\fprod \cdots \fprod f_1$, then the equivalence relation is generated by relations of the form $f_{i+1} \fprod f_i = f_{i+1} f_i$ for congenial pairs $(f_{i+1}, f_i)$.
In the case where $\mathcal{C} = \catname{SRel}$, this construction reproduces the Wehrheim-Woodward category \cite{ww}, which is a rigorous construction of the symplectic category $\catname{Symp}$. The following results appear in \cite{li-bland-weinstein}:
\begin{itemize}
\item There is a canonical isomorphism between $\WW(\catname{Rel})$ and $\catname{Span}$. We review this isomorphism below in Section \ref{sec:iso}.
\item The forgetful functor $\catname{SRel} \to \catname{Rel}$ is compatible with the selective structures, i.e.\ it sends suave morphisms to suave morphisms and congenial pairs to congenial pairs. Therefore, there is an induced functor $\catname{Symp} = \WW(\catname{SRel}) \to \WW(\catname{Rel}) = \catname{Span}$.
\end{itemize}
If $\mathcal{C}$ is a selective category, then there is a \emph{composition functor} $\WW(\mathcal{C}) \to \mathcal{C}$, taking $f_n \fprod \cdots \fprod f_1$ to $f_n \cdots f_1$. Via the isomorphism $\WW(\catname{Rel}) \stackrel{\sim}{\to} \catname{Span}$, this recovers the functor $\catname{Span} \to \catname{Rel}$ described at the beginning of this section.
The above discussion can be summarized by the following sequence of functors:
\[ \catname{Symp} \to \catname{Span} \to \catname{Rel}.\]
This relationship explains our assertion that $\catname{Span}$ is a good set-theoretic model for $\catname{Symp}$, incorporating aspects of the Wehrheim-Woodward category that don't appear in $\catname{Rel}$.
\subsection{\texorpdfstring{$\WW(\catname{Rel}) = \catname{Span}$}{WW(Rel)=Span}}\label{sec:iso}
In this section, we will describe an isomorphism between $\WW(\catname{Rel})$ and $\catname{Span}$, using an approach that is slightly more direct than the one in \cite{li-bland-weinstein}.
Recall that, if $X$ and $Y$ are sets, then a relation from $X$ to $Y$ is a subset $R \subseteq X \times Y$. When viewing such a subset as a morphism in $\catname{Rel}$, we will denote it as $X \reltolabel{R} Y$.
The objects of $\WW(\catname{Rel})$ are sets, and a morphism from $X$ to $Y$ is represented by a finite sequence of relations
\begin{equation}\label{eqn:wwrel}
X = X_0 \reltolabel{R_1} X_1 \reltolabel{R_2} \cdots \reltolabel{R_n} X_n = Y,
\end{equation}
viewed as a formal product $R_n \fprod \cdots \fprod R_1$, modulo the relation $R_{i+1} \fprod R_i = R_{i+1} R_i$ when the pair $(R_{i+1}, R_i)$ is monic.
If $f:X \to Y$ is a map of sets, then the graph of $f$ is a relation from $X$ to $Y$. If $g:Y \to X$ is a map of sets, then the graph of $g$ can also be viewed as a relation from $X$ to $Y$. In these cases, we will denote the corresponding relations by $X \tolabel{f} Y$ and $X \fromlabel{g} Y$. If $(A, f_1, f_2)$ is a span from $X$ to $Y$, then $X \fromlabel{f_1} A \tolabel{f_2} Y$ represents a morphism in $\WW(\catname{Rel})$ from $X$ to $Y$.
\begin{prop}\label{prop:functor}
The map $(A, f_1, f_2) \mapsto (X \fromlabel{f_1} A \tolabel{f_2} Y)$ defines a functor $F: \catname{Span} \to \WW(\catname{Rel})$.
\end{prop}
\begin{proof}
First, we observe that $F$ is well-defined at the level of morphisms, i.e.\ that isomorphic spans are mapped to equivalent sequences of relations. Specifically, given an isomorphism of spans as in \eqref{diag:isospan}, we see that ${X \fromlabel{f_1'} A' \tolabel{f_2'} Y}$ is equivalent to $X \fromlabel{f_1} A \tolabel{f_2} Y$ since $f_1' \phi = f_1$ and $f_2' \phi = f_2$ are monic compositions.
Next, we consider a composition of spans as in Figure \ref{fig:composition}. We observe that $A \tolabel{f_2} Y \fromlabel{g_1} B$ and $A \fromlabel{p_1} A \bitimes{f_2}{g_1} B \tolabel{p_2} B$ are both monic compositions that give rise to the same relation $A \reltolabel{R} B$, where $R = \{(a,b) \mid f_2(a) = g_1(b)\}$. Thus, $X \fromlabel{f_1 p_1} A \bitimes{f_2}{g_1} B \tolabel{g_2 p_2} Z$ is equivalent to $X \fromlabel{f_1} A \tolabel{f_2} Y \fromlabel{g_1} B \tolabel{g_2} Z$, which proves that $F$ preserves compositions.
Finally, we observe that the identity span on $X$ is sent to $X \fromlabel{1} X \tolabel{1} X$, which is a monic composition that gives rise to the identity relation $X \reltolabel{1} X$.
\end{proof}
\begin{prop}
The functor $F$ in Proposition \ref{prop:functor} is an isomorphism of categories.
\end{prop}
\begin{proof}
Since $F$ is the identity on objects, it suffices to prove that $F$ is full and faithful.
Any relation $X \reltolabel{R} Y$ can be factored as a monic composition $X \gets R \to Y$ and is thus in the image of $F$. Since the morphisms in $\WW(\catname{Rel})$ are generated by relations, it follows that $F$ is full.
Given a sequence of relations as in \eqref{eqn:wwrel}, we can obtain a span as follows. Let $A = A(R_1, \dots, R_n)$ be the set consisting of all \emph{trajectories} from $X$ to $Y$, i.e.\ sequences $(x_0,\dots, x_n)$, $x_i \in X_i$, such that $(x_{i-1}, x_i) \in R_i$ for $i = 1, \dots, n$. Projection onto the first and last components gives a span $X \gets A \to Y$.
If $(R_{i+1}, R_i)$ is a monic pair, then the contraction $(x_0, \dots, x_n) \mapsto (x_0, \dots, \hat{x}_i, \dots, x_n)$ gives an isomorphism of spans from $A(R_1, \dots, R_n)$ to $A(R_1, \dots, R_{i+1}R_i, \dots, R_n)$. Thus the map $R_n \fprod \cdots \fprod R_1 \mapsto A(R_1, \dots, R_n)$ gives a well-defined map from morphisms in $\WW(\catname{Rel})$ to morphisms in $\catname{Span}$. This map is a left inverse to $F$, demonstrating that $F$ is faithful.
\end{proof}
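The trajectory construction highlights the difference between $\catname{Span}$ and $\catname{Rel}$. For example, let $X = \{x\}$, $Y = \{y_1, y_2\}$, and $Z = \{z\}$, and consider the relations $R = \{(x,y_1),(x,y_2)\}$ and $S = \{(y_1,z),(y_2,z)\}$. The pair $(S,R)$ is not monic, and $A(R,S)$ consists of the two trajectories $(x,y_1,z)$ and $(x,y_2,z)$, so the morphism $S \fprod R$ in $\WW(\catname{Rel}) \cong \catname{Span}$ is represented by a span whose apex is a two-element set. The composite relation $SR = \{(x,z)\}$, on the other hand, retains no record of this multiplicity, illustrating how $\catname{Span}$ refines $\catname{Rel}$.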
As an aside, we note that both $\WW(\catname{Rel})$ and $\catname{Span}$ can be viewed as truncations of $2$-categories, so it seems reasonable to expect that this isomorphism can be upgraded to an equivalence of $2$-categories.
\section{Monoids in \texorpdfstring{$\catname{Span}$}{Span} and simplicial sets}\label{sec:monoids}
If $\mathcal{C}$ is a monoidal category, then one can define the notion of a \emph{monoid} in $\mathcal{C}$. In this section, we study monoids in $\catname{Span}$. We will see that there is a correspondence, up to isomorphism, between monoids in $\catname{Span}$ and simplicial sets satisfying certain properties.
\subsection{Monoids in \texorpdfstring{$\catname{Span}$}{Span}}
A monoid in $\catname{Span}$ consists of a set $X$, equipped with a morphism $\eta: \bullet \spanto X$ (\emph{unit}) and a morphism $\mu: X \times X \spanto X$ (\emph{multiplication}), satisfying
\begin{enumerate}
\item (unit axiom) $\mu \circ (\mathbf{1} \times \eta) = \mu \circ (\eta \times \mathbf{1}) = \mathbf{1}$,
\item (associativity) $\mu \circ (\mathbf{1} \times \mu) = \mu \circ (\mu \times \mathbf{1})$.
\end{enumerate}
Here, $\bullet$ denotes a set containing one element, and $\mathbf{1}$ denotes the identity morphism on $X$.
It is frequently convenient to use string diagrams to depict morphisms that are constructed out of the unit, multiplication, and identity via composition and monoidal product. Specifically, $\eta$, $\mu$, and $\mathbf{1}$ are respectively depicted as follows, where the diagrams should be read from top to bottom:
\stringdiagram{
\unit{-3}{0}
\multiplication{0}{0}
\identity{3}{0}
}
The unit and associativity axioms can then be depicted as follows:
\stringdiagram{
\begin{scope}
\unit{-2}{1}
\identity{-4}{1}
\multiplication{-3}{-1}
\equals{-1}{0}
\unit{0}{1}
\identity{2}{1}
\multiplication{1}{-1}
\equals{3}{0}
\identity{4}{0}
\end{scope}
\begin{scope}[shift={(8,0)}]
\identity{0}{1}
\multiplication{2}{1}
\multiplication{1}{-1}
\equals{3.5}{0}
\multiplication{5}{1}
\identity{7}{1}
\multiplication{6}{-1}
\end{scope}
}
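A basic example of a monoid in $\catname{Span}$ comes from an ordinary monoid $(X, e, \cdot)$ in the category of sets. Take $\eta$ to be represented by the span $(\bullet, 1, e)$, where the second map sends the point to the unit element $e$, and take $\mu$ to be represented by the span $(X \times X, 1, \cdot)$, whose legs are the identity and the multiplication map. The axioms can be verified by computing the pullbacks explicitly; for example, the pullback appearing in $\mu \circ (\eta \times \mathbf{1})$ can be identified with $X$, yielding the span $(X, 1, e \cdot (-))$, which is the identity span since $e \cdot x = x$. Associativity follows similarly, with both composites represented by $(X \times X \times X, 1, \cdot)$, where the second leg is the triple product. In the correspondence developed below, this example arises from the nerve of $X$, viewed as a category with one object (see Example \ref{ex:cat}).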
\subsection{Simplicial sets}\label{sec:simplicial}
To establish notation and terminology, we briefly review some definitions relating to simplicial sets.
\begin{definition} \label{dfn:simplicial}A \emph{simplicial set} $\mathcal{X}$ is a sequence $X_0, X_1, \dots$ of sets equipped with maps $d_i^q: X_q \to X_{q-1}$ (called \emph{face maps}), $0 \leq i \leq q$, and $s_i^q : X_q \to X_{q+1}$ (called \emph{degeneracy maps}), $0 \leq i \leq q$, such that
\begin{align}
d_i^{q-1}d_j^q &= d_{j-1}^{q-1}d_i^q, \qquad i < j,\label{eqn:twoface}\\
s_i^{q+1}s_j^q &= s_{j+1}^{q+1}s_i^q, \qquad i \leq j, \label{eqn:twodegen}\\
d_i^{q+1}s_j^q &= \begin{cases}
s_{j-1}^{q-1}d_i^q, & i< j,\\
\mathrm{id}, & i = j \mbox{ or }j+1, \\
s_j^{q-1}d_{i-1}^q, & i > j+1. \end{cases}\label{eqn:facedegen}
\end{align}
\end{definition}
We will also need to consider $n$-truncated versions of simplicial sets, which only include data going up to $X_n$:
\begin{definition}\label{dfn:nsimplicial}
An \emph{$n$-truncated simplicial set} $\mathcal{X}$ is a sequence $X_0, X_1, \dots X_n$ of sets equipped with face maps $d_i^q: X_q \to X_{q-1}$, $0 \leq i \leq q \leq n$, and degeneracy maps $s_i^q : X_q \to X_{q+1}$, $0 \leq i \leq q < n$, satisfying \eqref{eqn:twoface}--\eqref{eqn:facedegen} whenever both sides of an equation are defined.
\end{definition}
Suppose that $\mathcal{X}$ is a (possibly $n$-truncated) simplicial set. For $1 \leq q \leq n+1$, let $\Delta_q \mathcal{X}$ denote the set of $(q+1)$-tuples $(\zeta_0, \dots, \zeta_q)$, $\zeta_i \in X_{q-1}$, such that
\begin{equation}\label{eqn:horncompat}
d_i^{q-1} \zeta_j = d_{j-1}^{q-1} \zeta_i
\end{equation}
for $i < j$. There is a natural \emph{boundary map} $\delta^q: X_q \to \Delta_q \mathcal{X}$, given by
\[ \delta^q(w) = (d_0^q w, \dots, d_q^q w).\]
\begin{definition}
A simplicial set $\mathcal{X}$ is called \emph{$n$-coskeletal} if $\delta^q$ is a bijection for $q>n$.
\end{definition}
It is well-known (see, for example, \cite{artin-mazur}) that any $n$-truncated simplicial set has a unique extension to an $n$-coskeletal simplicial set. The extension can be recursively constructed by taking $X_{n+1} = \Delta_{n+1} \mathcal{X}$.
\subsection{From simplicial sets to monoids}
Let $X_\bullet$ be a $2$-truncated simplicial set. Without any further assumptions, we can construct the spans
\begin{equation}\label{diag:simp2monoid}
\begin{tikzcd}
& X_0 \arrow{dl} \arrow{dr}{s_0^0} & \\
\bullet && X_1
\end{tikzcd}
\;\;\;\;\;
\begin{tikzcd}
& X_2 \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X_1 \times X_1 && X_1
\end{tikzcd}
\end{equation}
which respectively represent morphisms $\eta: \bullet \spanto X_1$ and $\mu: X_1 \times X_1 \spanto X_1$ in $\catname{Span}$. We can then ask whether $(X_1, \eta, \mu)$ satisfies the axioms of a monoid in $\catname{Span}$. The following lemmas establish necessary and sufficient conditions.
\begin{lemma}\label{lemma:simpunit}
The unit axiom holds if and only if the following conditions hold for all $\zeta \in X_2$:
\begin{enumerate}
\item If $d_2^2 \zeta \in \im(s_0^0)$, then $\zeta \in \im(s_0^1)$.
\item If $d_0^2 \zeta \in \im(s_0^0)$, then $\zeta \in \im(s_1^1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Consider the composition $\mu \circ (\eta \times \mathbf{1})$:
\[
\begin{tikzcd}
&&[-20pt] (X_0 \times X_1) * X_2 \arrow{dl} \arrow{dr} &[-15pt]& \\
& X_0 \times X_1 \arrow[swap]{dl}{p_2} \arrow{dr}{s_0^0 \times 1} & & X_2 \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} &\\
X_1 && X_1 \times X_1 && X_1
\end{tikzcd}
\]
Here, $p_2$ is projection onto the second component.
The pullback in the above diagram is
\[ (X_0 \times X_1) * X_2 = \{(u,x,\zeta) \in X_0 \times X_1 \times X_2 \mid s_0^0 u = d_2^2 \zeta, \; x = d_0^2 \zeta \},\]
which, since $s_0^0$ is injective, can be identified with
\[ A := \{\zeta \in X_2 \mid d_2^2 \zeta \in \im(s_0^0)\} \subseteq X_2.\]
The equation $\mu \circ (\eta \times \mathbf{1}) = \mathbf{1}$ holds if and only if there is a bijection $\phi: A \to X_1$ such that $\phi(\zeta) = d_0^2 \zeta = d_1^2 \zeta$. This condition completely determines $\phi$, so the question is whether it is well-defined, i.e.\ if $d_0^2 \zeta = d_1^2 \zeta$, and if it is bijective.
For all $x \in X_1$, the degenerate $2$-simplex $s_0^1 x$ satisfies
\begin{align*}
d_2^2 s_0^1 x &= s_0^0 d_1^1 x, & d_0^2 s_0^1 x = d_1^2 s_0^1 x &= x.
\end{align*}
Thus, $s_0^1 x$ is in $A$, and its image under $\phi$ is well-defined and equal to $x$. It follows that $\phi$ is well-defined and bijective if and only if $A = \{s_0^1 x \mid x \in X_1\}$, which is equivalent to condition (1) of the lemma.
The proof that the equation $\mu \circ (\mathbf{1} \times \eta) = \mathbf{1}$ is equivalent to condition (2) of the lemma is similar.
\end{proof}
To describe associativity in the language of simplicial sets, we will use the following notion. For $0 \leq i < j \leq 3$, the set $T_{ij} \mathcal{X}$ of \emph{$(ij)$-tacos} is defined as
\[ T_{ij}\mathcal{X} = \{(\zeta, \zeta') \in X_2 \times X_2 \mid d_{j-1}^2 \zeta = d_i^2 \zeta'\}.\]
Geometrically, an element of $T_{ij}\mathcal{X}$ is a pair of $2$-simplices that share an edge in a way that allows them to form the $i$th and $j$th faces of a $3$-simplex; see Figure \ref{fig:taco}.
\begin{figure}[th]
\begin{tikzpicture}
\begin{scope}[semithick]
\coordinate [label=left:2] (b) at (0,0);
\coordinate [label=left:1] (c) at (-1,2);
\coordinate [label=left:3] (a) at (.75,3);
\coordinate [label=right:0] (d) at (3,1);
\coordinate (e) at (intersection of c--d and a--b);
\draw[<-] (a) -- (b);
\draw[<-] (a) -- (d);
\draw[<-] (b) -- (c);
\draw[<-] (b) -- (d);
\draw[<-] (b) -- (c);
\draw [<-] (c) -- (e);
\draw[dashed] (e) -- (d);
\end{scope}
\end{tikzpicture}
\caption{A $(13)$-taco.}
\label{fig:taco}
\end{figure}
The boundary of a taco consists of four $1$-simplices that satisfy certain compatibility conditions. In particular, the $(02)$-tacos and the $(13)$-tacos are complementary, so their boundaries have the same compatibility conditions. Specifically, let $S\mathcal{X}$ be the subset of $(X_1)^4$ consisting of all $(x_{01}, x_{12}, x_{23}, x_{03})$ such that
\begin{align*}
d_0^1 x_{01} &= d_1^1 x_{12}, & d_0^1 x_{12} &= d_1^1 x_{23}, \\
d_0^1 x_{23} &= d_0^1 x_{03}, & d_1^1 x_{03} &= d_1^1 x_{01}.
\end{align*}
The boundary maps $\partial_{02}: T_{02}\mathcal{X} \to S\mathcal{X}$ and $\partial_{13}: T_{13}\mathcal{X} \to S\mathcal{X}$ are defined by
\begin{align*}
\partial_{02}(\zeta, \zeta') &= (d_2^2 \zeta', d_2^2 \zeta, d_0^2 \zeta, d_1^2 \zeta'),\\
\partial_{13}(\zeta, \zeta') &= (d_2^2 \zeta', d_0^2 \zeta', d_0^2 \zeta, d_1^2 \zeta).
\end{align*}
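One can check directly that $\partial_{02}$ and $\partial_{13}$ take values in $S\mathcal{X}$, using the simplicial identities \eqref{eqn:twoface} together with the defining conditions of the taco sets. For example, if $(\zeta, \zeta') \in T_{02}\mathcal{X}$, then the first defining condition of $S\mathcal{X}$ follows from the computation
\begin{equation*}
d_0^1 d_2^2 \zeta' = d_1^1 d_0^2 \zeta' = d_1^1 d_1^2 \zeta = d_1^1 d_2^2 \zeta,
\end{equation*}
where the middle equality uses the taco condition $d_1^2 \zeta = d_0^2 \zeta'$ and the outer equalities use \eqref{eqn:twoface}; the remaining conditions are verified similarly.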
\begin{lemma}\label{lemma:simpassociativity}
The associativity axiom holds if and only if there exists a bijection $T_{02}\mathcal{X} \cong T_{13}\mathcal{X}$ that commutes with the boundary maps to $S\mathcal{X}$.
\end{lemma}
\begin{proof}
The composition $\mu \circ (\mathbf{1} \times \mu)$ is as follows:
\begin{equation}\label{diag:t02}
\begin{tikzcd}
&[-15pt] &[-15pt] (X_1 \times X_2) * X_2 \arrow{dl} \arrow{dr} &[-15pt]& \\
& X_1 \times X_2 \arrow[swap]{dl}{1 \times (d_2^2, d_0^2)} \arrow{dr}{1 \times d_1^2} && X_2 \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2}& \\
X_1 \times X_1 \times X_1 && X_1 \times X_1 && X_1
\end{tikzcd}
\end{equation}
The pullback in \eqref{diag:t02} is
\begin{align*}
(X_1 \times X_2) * X_2 &= \{(x,\zeta,\zeta') \in X_1 \times X_2 \times X_2 \mid x = d_2^2 \zeta', \; d_1^2 \zeta = d_0^2 \zeta'\} \\
&\cong \{(\zeta,\zeta') \in X_2 \times X_2 \mid d_1^2 \zeta = d_0^2 \zeta'\}\\
&\cong T_{02}\mathcal{X}.
\end{align*}
On the other hand, the composition $\mu \circ (\mu \times \mathbf{1})$ is as follows:
\begin{equation}\label{diag:t13}
\begin{tikzcd}
&[-15pt]&[-15pt] (X_2 \times X_1) * X_2 \arrow{dl} \arrow{dr} &[-15pt]& \\
& X_2 \times X_1 \arrow[swap]{dl}{(d_2^2, d_0^2) \times 1} \arrow{dr}{d_1^2 \times 1} && X_2 \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2}& \\
X_1 \times X_1 \times X_1 && X_1 \times X_1 && X_1
\end{tikzcd}
\end{equation}
The pullback in \eqref{diag:t13} is
\begin{align*}
(X_2 \times X_1) * X_2 &= \{(\zeta'',x,\zeta''') \in X_2 \times X_1 \times X_2 \mid x = d_0^2 \zeta''', \; d_1^2 \zeta'' = d_2^2 \zeta'''\} \\
&\cong \{(\zeta'',\zeta''') \in X_2 \times X_2 \mid d_1^2 \zeta'' = d_2^2 \zeta'''\} \\
&\cong T_{13}\mathcal{X}.
\end{align*}
Note that the isomorphism with $T_{13}\mathcal{X}$ involves swapping the two components.
The associativity axiom holds if and only if the spans \eqref{diag:t02} and \eqref{diag:t13} are isomorphic, i.e.\ there exists a bijection $T_{02}\mathcal{X} \to T_{13}\mathcal{X}$, $(\zeta, \zeta') \mapsto (\zeta''', \zeta'')$, such that
\begin{align*}
d_2^2 \zeta' &= d_2^2 \zeta'', & d_2^2 \zeta &= d_0^2 \zeta'', & d_0^2 \zeta &= d_0^2 \zeta''', & d_1^2 \zeta' &= d_1^2 \zeta''',
\end{align*}
or, equivalently, $\partial_{02}(\zeta,\zeta') = \partial_{13}(\zeta''', \zeta'')$.
\end{proof}
Together, Lemmas \ref{lemma:simpunit} and \ref{lemma:simpassociativity} give us the following result.
\begin{thm}\label{thm:simp2monoid}
Suppose $X_\bullet$ is a $2$-truncated simplicial set, and let $\eta: \bullet \spanto X_1$ and $\mu: X_1 \times X_1 \spanto X_1$ be given by the spans in \eqref{diag:simp2monoid}. Then $(X_1, \eta, \mu)$ is a monoid in $\catname{Span}$ if and only if the conditions in Lemmas \ref{lemma:simpunit} and \ref{lemma:simpassociativity} hold.
\end{thm}
\begin{example}\label{ex:cat}
Let $\mathcal{C}$ be a small category. As is well-known, there is an associated simplicial set $N\mathcal{C}_\bullet$, called the \emph{nerve} of $\mathcal{C}$. The nerve of $\mathcal{C}$ is $2$-coskeletal, with $N\mathcal{C}_0 = \Ob(\mathcal{C})$, $N\mathcal{C}_1 = \Mor(\mathcal{C})$, and
\[ N\mathcal{C}_2 = \{(f,g) \in \Mor(\mathcal{C}) \times \Mor(\mathcal{C}) \mid s(f) = t(g)\},\]
where $s,t : \Mor(\mathcal{C}) \to \Ob(\mathcal{C})$ are the source and target maps. The face and degeneracy maps are as follows:
\begin{align*}
d_0^1 &= s, & d_1^1 &= t, &s_0^0(x) &= 1_x,\\
d_0^2(f,g) &= g, & d_1^2(f,g) &= fg, & d_2^2(f,g) &= f, \\
s_0^1(f) &= (1_{t(f)}, f), & s_1^1(f) &= (f,1_{s(f)}).
\end{align*}
One can directly see that $N\mathcal{C}_\bullet$ satisfies the conditions of Lemma \ref{lemma:simpunit}. Furthermore, in this case, $T_{02}N\mathcal{C}$ and $T_{13}N\mathcal{C}$ can both be canonically identified with $N\mathcal{C}_3$, giving the bijection required by Lemma \ref{lemma:simpassociativity}.
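Explicitly, a $(02)$-taco in $N\mathcal{C}$ is a pair $((g,h),(f,gh))$ and a $(13)$-taco is a pair $((fg,h),(f,g))$, where $(f,g,h)$ is a composable triple of morphisms, i.e.\ an element of $N\mathcal{C}_3$. Under these identifications, the boundary maps $\partial_{02}$ and $\partial_{13}$ both send $(f,g,h)$ to $(f,g,h,fgh) \in SN\mathcal{C}$.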
\end{example}
\begin{example}\label{ex:twoelement}
Consider the case where $X_0 = \{e\}$ has one point, $X_1=\{a,b\} $ has two points, and $X_2$ is finite. Here we will describe all possible $2$-truncated simplicial sets of this form that satisfy the conditions of Theorem \ref{thm:simp2monoid}.
Without loss of generality, we can take $s_0^0(e) = a$. We can decompose $X_2$ into a disjoint union of $Y_{ijk}$ for $i,j,k \in X_1$, where $d_0(\zeta) = i$, $d_1(\zeta) = j$, and $d_2(\zeta) = k$ for $\zeta \in Y_{ijk}$. Then $X_2$ is determined up to isomorphism by the cardinalities $n_{ijk}$ of $Y_{ijk}$.
From the simplicial axioms, we have that $s_0^1(a) = s_1^1(a) \in Y_{aaa}$, $s_0^1(b) \in Y_{bba}$, and $s_1^1(b) \in Y_{abb}$.
Condition (1) in Lemma \ref{lemma:simpunit} says that, if $d_2^2 \zeta = a$, then $\zeta = s_0^1(a)$ or $\zeta = s_0^1(b)$. Thus, $n_{aaa} = n_{bba} = 1$, and $n_{aba} = n_{baa} = 0$. Similarly, condition (2) in Lemma \ref{lemma:simpunit} says that, if $d_0^2 \zeta = a$, then $\zeta = s_1^1(a)$ or $\zeta = s_1^1(b)$. Thus, $n_{abb} = 1$, and $n_{aab} = 0$.
We haven't yet imposed the condition of Lemma \ref{lemma:simpassociativity}, but the cardinalities that remain unconstrained so far are $n_{bab}$ and $n_{bbb}$. It turns out, however, that the condition of Lemma \ref{lemma:simpassociativity} holds for all values of $n_{bab}$ and $n_{bbb}$. This can be shown by the long but straightforward process of considering each of the $16$ elements of $S\mathcal{X}$ and checking that the fibers of $\partial_{02}$ and $\partial_{13}$ have the same number of elements.
Thus, every choice of $n_{bab}$ and $n_{bbb}$ gives a monoid in $\catname{Span}$ on the $2$-element set $X_1 = \{a,b\}$.
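For instance, the choice $n_{bab} = 1$, $n_{bbb} = 0$ recovers the nerve of the group $\mathbb{Z}/2$, with $a$ playing the role of the identity element, while the choice $n_{bab} = 0$, $n_{bbb} = 1$ recovers the nerve of the multiplicative monoid $\{1,0\}$, as in Example \ref{ex:cat}.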
\end{example}
\subsection{From monoids to simplicial sets}
Suppose $(X, \eta, \mu)$ is a monoid in $\catname{Span}$, and let $E$ and $M$ be sets with maps $s_0^0: E \to X$ and $d_i^2: M \to X$, $i=0,1,2$, such that the spans
\begin{equation}\label{diag:monoidspan}
\begin{tikzcd}
& E \arrow{dr}{s_0^0} \arrow{dl} & \\
\bullet & & X
\end{tikzcd}
\;\;\;\;\;
\begin{tikzcd}
& M \arrow{dr}{d_1^2} \arrow[swap]{dl}{(d_2^2,d_0^2)} & \\
X \times X & & X
\end{tikzcd}
\end{equation}
represent $\eta$ and $\mu$, respectively. We will see that the maps $s_0^0$ and $d_i^2$ form part of the structure of a ($2$-truncated) simplicial set, where the set of $0$-simplices is $E$, the set of $1$-simplices is $X$, and the set of $2$-simplices is $M$.
The unit axiom can be expressed in terms of spans as follows. The composition $\mu \circ (\mathbf{1} \times \eta)$ is represented by the diagram
\[
\begin{tikzcd}
&& (X \times E) * M \arrow{dl} \arrow{dr} && \\
& X \times E \arrow[swap]{dl}{p_1} \arrow{dr}{1 \times s_0^0} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X && X \times X && X
\end{tikzcd}
\]
where $p_1: X \times E \to X$ is projection onto the first component, and where $(X \times E) * M$ is the pullback over $X \times X$. The equation $\mu \circ (\mathbf{1} \times \eta) = \mathbf{1}$ implies that there exists a bijection $X \stackrel{\sim}{\to} (X \times E) * M$ such that the diagram
\begin{equation}\label{diag:unit}
\begin{tikzcd}
&[-10pt] & X \arrow[<->]{d} \arrow[bend left=30]{dddrr}{1} \arrow[bend right=30, swap]{dddll}{1} && \\
&& (X \times E) * M \arrow{dl} \arrow{dr} && \\
& X \times E \arrow{dl}{p_1} \arrow{dr}{1 \times s_0^0} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow[swap]{dr}{d_1^2} & \\
X && X \times X && X
\end{tikzcd}
\end{equation}
commutes. Note that, because the outer maps in \eqref{diag:unit} are the identity, the bijection $X \stackrel{\sim}{\to} (X \times E) * M$ is unique.
The composition $X \stackrel{\sim}{\to} (X \times E) * M \to X \times E \to E$, where the last map is projection onto the second component, gives a map which we denote as $d_0^1: X \to E$. The composition $X \stackrel{\sim}{\to} (X \times E) * M \to M$ gives a map which we denote as $s_1^1: X \to M$. Using the maps we have just introduced, we obtain the commutative diagram
\begin{equation}\label{diag:unit2}
\begin{tikzcd}
&& X \arrow{dl}{(1, d_0^1)} \arrow[swap]{dr}{s_1^1} \arrow[bend left=30]{ddrr}{1} \arrow[bend right=30,swap]{ddll}{1}&& \\
& X \times E \arrow{dr}{1 \times s_0^0} \arrow{dl}{p_1} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow[swap]{dr}{d_1^2} &\\
X && X \times X && X
\end{tikzcd}
\end{equation}
as an abbreviation of \eqref{diag:unit}.
Similarly, we can use the equation $\mu \circ (\eta \times \mathbf{1}) = \mathbf{1}$ to obtain maps $d_1^1: X \to E$ and $s_0^1: X \to M$, such that the diagram
\begin{equation}\label{diag:unit3}
\begin{tikzcd}
& & X \arrow{dl}{(d_1^1,1)} \arrow[swap]{dr}{s_0^1} \arrow[bend left=30]{ddrr}{1} \arrow[swap,bend right=30]{ddll}{1} & &\\
& E \times X \arrow{dr}{s_0^0 \times 1} \arrow{dl}{p_2} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow[swap]{dr}{d_1^2} &\\
X & & X \times X && X
\end{tikzcd}
\end{equation}
commutes.
We now turn to the associativity axiom, which can be expressed in terms of spans as follows. The diagram
\begin{equation*}
\begin{tikzcd}
&[-15pt]& (X \times M) * M \arrow{dl} \arrow{dr} && \\
& X \times M \arrow[swap]{dl}{1 \times (d_2^2,d_0^2)} \arrow{dr}{1 \times d_1^2} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X \times X && X \times X && X
\end{tikzcd}
\end{equation*}
represents the composition $\mu \circ (\mathbf{1} \times \mu)$. Using the identification
\begin{align*}
(X \times M) * M &= \{(x,m,m') \in X \times M \times M \mid x = d_2^2 m', \; d_1^2 m = d_0^2 m'\} \\
&= \{(m,m') \mid d_1^2 m = d_0^2 m'\}\\
&= M \bitimes{d_1^2}{d_0^2} M,
\end{align*}
we obtain
\begin{equation}\label{diag:ass1}
\begin{tikzcd}
&[-15pt]& M \bitimes{d_1^2}{d_0^2} M \arrow[swap]{dl}{(d_2^2 p_2, p_1)} \arrow{dr}{p_2} && \\
& X \times M \arrow[swap]{dl}{1 \times (d_2^2,d_0^2)} \arrow{dr}{1 \times d_1^2} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X \times X && X \times X && X
\end{tikzcd}
\end{equation}
as a representative of $\mu \circ (\mathbf{1} \times \mu)$. Similarly, the diagram
\begin{equation}\label{diag:ass2}
\begin{tikzcd}
&[-15pt]& M \bitimes{d_1^2}{d_2^2} M \arrow[swap]{dl}{(p_1, d_0^2 p_2)} \arrow{dr}{p_2} && \\
& M \times X \arrow[swap]{dl}{(d_2^2,d_0^2) \times 1} \arrow{dr}{d_1^2 \times 1} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X \times X && X \times X && X
\end{tikzcd}
\end{equation}
represents the composition $\mu \circ (\mu \times \mathbf{1})$.
Associativity implies that there is a bijection $M \bitimes{d_1^2}{d_0^2} M \cong M \bitimes{d_1^2}{d_2^2} M$ giving an isomorphism of spans between \eqref{diag:ass1} and \eqref{diag:ass2}. It will be convenient to have a neutral model, so let $T$ be a set with maps $t_1: T \to X \times X \times X$ and $t_2: T \to X$ such that
\begin{equation}
\begin{tikzcd}
&[-15pt] T \arrow[swap]{dl}{t_1} \arrow{dr}{t_2}& \\
X \times X \times X & & X
\end{tikzcd}
\end{equation}
is isomorphic to \eqref{diag:ass1} and \eqref{diag:ass2}. We will use
\stringdiagram{
\tripleprod{0}{0}
}
to diagrammatically represent $T$.
\begin{thm}\label{thm:monoid2simp}
Suppose $(X, \eta, \mu)$ is a monoid in $\catname{Span}$, where $\eta$ and $\mu$ are represented by spans as in \eqref{diag:monoidspan}. Then the maps $d_i^k$ and $s_i^k$ in \eqref{diag:monoidspan}, \eqref{diag:unit2}, and \eqref{diag:unit3} are, respectively, the face and degeneracy maps for a $2$-truncated simplicial set $M \,\substack{\to \\[-1em] \to \\[-1em] \to}\, X \,\substack{\to \\[-1em] \to}\, E$.
\end{thm}
\begin{proof}
The identities
\begin{align}
d_0^2 \circ s_1^1 &= s_0^0 \circ d_0^1, &
d_1^2 \circ s_1^1 &= d_2^2 \circ s_1^1 = 1, \label{eqn:ds1} \\
d_2^2 \circ s_0^1 &= s_0^0 \circ d_1^1, &
d_0^2 \circ s_0^1 &= d_1^2 \circ s_0^1 = 1
\label{eqn:ds2}
\end{align}
follow from the commutativity of \eqref{diag:unit2} and \eqref{diag:unit3}.
Now, consider the equation $\mu \circ (\mathbf{1} \times \eta) \circ \eta = \mu \circ (\eta \times \eta) = \mu \circ (\eta \times \mathbf{1}) \circ \eta$, diagrammatically shown as follows:
\stringdiagramlabel{
\begin{scope}
\unit{0}{2.5}
\identity{0}{.5}
\unit{2}{.5}
\multiplication{1}{-1.5}
\equals{3}{0}
\unit{4}{1}
\unit{6}{1}
\multiplication{5}{-1}
\equals{7}{0}
\unit{10}{2.5}
\unit{8}{.5}
\identity{10}{.5}
\multiplication{9}{-1.5}
\draw[gray,dotted] (-.5,1.5) rectangle (.5,3.5);
\draw[gray,dotted] (9.5,1.5) rectangle (10.5,3.5);
\end{scope}
}{diag:doubleid1}
Because of the unit axiom, we can see that the left and right sides of \eqref{diag:doubleid1} are postcompositions of $\eta$ (boxed) with the identity morphism on $X$. Additionally, all three diagrams in \eqref{diag:doubleid1} are related via a slide move. Together, these relationships allow us to obtain natural representatives of the left and right sides of \eqref{diag:doubleid1} as compositions of spans, shown in Figures \ref{fig:doubleid1} and \ref{fig:doubleid2}, that are isomorphic via the identity map $1:E \to E$. Because they both give the same representative of the middle of \eqref{diag:doubleid1} via the slide move, the maps $E \to E \times E$ and $E \to M$ are equal, so by comparing these maps we obtain the identities
\begin{align} \label{eqn:e}
d_0^1 s_0^0 &= d_1^1 s_0^0 = 1, & s_1^1 s_0^0 &= s_0^1 s_0^0.
\end{align}
\begin{figure}[th]
\begin{tikzcd}
&&& E \arrow[swap]{dl}{(1,d_0^1 s_0^0)} \arrow{dr}{s_0^0} &&& \\
&& E \times E \arrow[swap]{dl}{p_1} \arrow{dr}{s_0^0 \times 1} && X \arrow[swap]{dl}{(1,d_0^1)} \arrow{dr}{s_1^1} && \\
& E \arrow{dl} \arrow{dr}{s_0^0} && X \times E \arrow[swap]{dl}{p_1} \arrow{dr}{1 \times s_0^0} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
\bullet && X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the left side of \eqref{diag:doubleid1}.}
\label{fig:doubleid1}
\end{figure}
\begin{figure}[th]
\begin{tikzcd}
&&& E \arrow[swap]{dl}{(d_1^1 s_0^0,1)} \arrow{dr}{s_0^0} &&& \\
&& E \times E \arrow[swap]{dl}{p_2} \arrow{dr}{1 \times s_0^0} && X \arrow[swap]{dl}{(d_1^1,1)} \arrow{dr}{s_0^1} && \\
& E \arrow{dl} \arrow{dr}{s_0^0} && E \times X \arrow[swap]{dl}{p_2} \arrow{dr}{s_0^0 \times 1} && M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
\bullet && X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the right side of \eqref{diag:doubleid1}.}
\label{fig:doubleid2}
\end{figure}
Next, we consider the equation
\stringdiagramlabel{
\identity{-4}{2}
\unit{-2}{2}
\identity{-1}{2}
\multiplication{-3}{0}
\identity{-1}{0}
\multiplication{-2}{-2}
\equals{0}{0}
\identity{1}{1}
\unit{2}{1}
\identity{3}{1}
\tripleprod{2}{-1}
\equals{4}{0}
\identity{8}{2}
\unit{6}{2}
\identity{5}{2}
\multiplication{7}{0}
\identity{5}{0}
\multiplication{6}{-2}
\draw[gray,dotted] (-3.5,-3) rectangle (-.5,-.5);
\draw[gray,dotted] (4.5,-3) rectangle (7.5,-.5);
}{diag:midface}
which follows from associativity. The left and right sides of \eqref{diag:midface} are precompositions of $\mu$ (boxed) with the identity morphism on $X \times X$. As a result, we obtain the representatives in Figures \ref{fig:midface1} and \ref{fig:midface2} that are isomorphic via the identity map $1: M \to M$. Because they both should give the same representative of the middle of \eqref{diag:midface}, the maps $M \to X \times E \times X$ are equal, so we obtain the identity
\begin{equation}\label{eqn:faceface1}
d_0^1 d_2^2 = d_1^1 d_0^2.
\end{equation}
\begin{figure}[th]
\begin{tikzcd}
&[-30pt]&[-30pt]&[-25pt] M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr} &[-20pt]&[-15pt]&\\
&& X \times X \arrow[swap]{dl}{(1,d_0^1)\times 1} \arrow{dr}{s_1^1 \times 1} && T \arrow{dl} \arrow{dr} && \\
& X \times E \times X \arrow[swap]{dl}{p_{13}} \arrow[swap]{dr}{1 \times s_0^0 \times 1} && M \times X \arrow[swap]{dl}{(d_2^2,d_0^2)\times 1} \arrow{dr}{d_1^2 \times 1} && M \arrow{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X &&X \times X \times X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the left side of \eqref{diag:midface}.}
\label{fig:midface1}
\end{figure}
\begin{figure}[th]
\begin{tikzcd}
&[-30pt]&[-30pt]&[-25pt] M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr} &[-20pt]&[-15pt]&\\
&& X \times X \arrow[swap]{dl}{1 \times (d_1^1,1)} \arrow{dr}{1 \times s_0^1} && T \arrow{dl} \arrow{dr} && \\
& X \times E \times X \arrow[swap]{dl}{p_{13}} \arrow[swap]{dr}{1 \times s_0^0 \times 1} && X \times M \arrow[swap]{dl}{1 \times (d_2^2,d_0^2)} \arrow{dr}{1 \times d_1^2} && M \arrow{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X &&X \times X \times X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the right side of \eqref{diag:midface}.}
\label{fig:midface2}
\end{figure}
Next, we consider the equation
\stringdiagramlabel{
\unit{-4}{2}
\identity{-2}{2}
\identity{-1}{2}
\multiplication{-3}{0}
\identity{-1}{0}
\multiplication{-2}{-2}
\equals{0}{0}
\unit{1}{1}
\identity{2}{1}
\identity{3}{1}
\tripleprod{2}{-1}
\equals{4}{0}
\identity{8}{2}
\identity{6}{2}
\unit{5}{2}
\multiplication{7}{0}
\identity{5}{0}
\multiplication{6}{-2}
\draw[gray,dotted] (-3.5,-3) rectangle (-.5,-.5);
}{diag:leftface}
which follows from associativity. The left side of \eqref{diag:leftface} is the precomposition of $\mu$ (boxed) with the identity map on $X \times X$, so it is naturally represented by the composition in Figure \ref{fig:leftface1}. To obtain a natural representative of the right side of \eqref{diag:leftface}, we first consider
\stringdiagramlabel{
\multiplication{1}{3}
\unit{-1}{1}
\identity{1}{1}
\multiplication{0}{-1}
\draw[gray,dotted] (-.5,2) rectangle (2.5,4.5);
}{diag:leftface2}
which is a postcomposition of $\mu$ (boxed) with the identity map on $X$, and is naturally represented by the composition in Figure \ref{fig:leftface2}. We note that the map $M \to E \times M$ in Figure \ref{fig:leftface2} is determined by commutativity of the diagram.
\begin{figure}[th]
\begin{tikzcd}
&[-30pt]&[-30pt]&[-25pt] M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr} &[-20pt]&[-15pt]&\\
&& X \times X \arrow[swap]{dl}{(d_1^1,1)\times 1} \arrow{dr}{s_0^1 \times 1} && T \arrow{dl} \arrow{dr} && \\
& E \times X \times X \arrow[swap]{dl}{p_{23}} \arrow[swap]{dr}{s_0^0 \times 1 \times 1} && M \times X \arrow[swap]{dl}{(d_2^2,d_0^2)\times 1} \arrow{dr}{d_1^2 \times 1} && M \arrow{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X &&X \times X \times X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the left side of \eqref{diag:leftface}.}
\label{fig:leftface1}
\end{figure}
\begin{figure}[th]
\begin{tikzcd}
&[-10pt]&&[-15pt] M \arrow[swap]{dl}{(d_1^1 d_1^2,1)} \arrow{dr}{d^2_1} &[-15pt]&[-10pt]&\\
&& E \times M \arrow[swap]{dl}{p_2} \arrow{dr}{1 \times d_1^2} && X \arrow[swap]{dl}{(d_1^1,1)} \arrow{dr}{s_0^1} && \\
& M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} && E \times X \arrow[swap]{dl}{p_2} \arrow{dr}{s_0^0 \times 1} && M \arrow{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X &&X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing \eqref{diag:leftface2}.}
\label{fig:leftface2}
\end{figure}
We then perform a slide move to obtain a natural representative of the right side of \eqref{diag:leftface}, shown in Figure \ref{fig:leftface3}.
\begin{figure}[th]
\begin{tikzcd}
&[-20pt]&[-25pt]&[-15pt] M \arrow[swap]{dl}{(d_1^1 d_1^2,1)} \arrow{dr} &[-15pt]&[-10pt]&\\
&& E \times M \arrow[swap]{dl}{1 \times (d_2^2,d_0^2)} \arrow{dr}{s_0^0 \times 1} && T \arrow[swap]{dl} \arrow{dr} && \\
& E \times X \times X \arrow[swap]{dl}{p_{23}} \arrow{dr}[swap]{s_0^0 \times 1 \times 1} && X \times M \arrow[swap]{dl}{1 \times (d_2^2, d_0^2)} \arrow{dr}{1 \times d_1^2} && M \arrow{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} & \\
X \times X &&X && X \times X && X
\end{tikzcd}
\caption{Composition of spans representing the right side of \eqref{diag:leftface}.}
\label{fig:leftface3}
\end{figure}
Because the compositions in Figures \ref{fig:leftface1} and \ref{fig:leftface3} should both give the same representation of the middle of \eqref{diag:leftface}, the maps $M \to E \times X \times X$ are equal, so we obtain the identity
\begin{equation}\label{eqn:faceface2}
d_1^1 d_2^2 = d_1^1 d_1^2.
\end{equation}
A similar analysis for the equation
\stringdiagramlabel{
\identity{-4}{2}
\identity{-2}{2}
\unit{-1}{2}
\multiplication{-3}{0}
\identity{-1}{0}
\multiplication{-2}{-2}
\equals{0}{0}
\identity{1}{1}
\identity{2}{1}
\unit{3}{1}
\tripleprod{2}{-1}
\equals{4}{0}
\unit{8}{2}
\identity{6}{2}
\identity{5}{2}
\multiplication{7}{0}
\identity{5}{0}
\multiplication{6}{-2}
}{diag:rightface}
yields the identity
\begin{equation}\label{eqn:faceface3}
d_0^1 d_1^2 = d_0^1 d_0^2.
\end{equation}
Together, \eqref{eqn:ds1}, \eqref{eqn:ds2}, \eqref{eqn:e}, \eqref{eqn:faceface1}, \eqref{eqn:faceface2}, and \eqref{eqn:faceface3} are the identities necessary to show that $M \,\substack{\to \\[-1em] \to \\[-1em] \to}\, X \,\substack{\to \\[-1em] \to}\, E$ is a $2$-truncated simplicial set.
\end{proof}
One can readily verify that the constructions in Theorems \ref{thm:simp2monoid} and \ref{thm:monoid2simp} give a one-to-one correspondence, up to isomorphism, between monoids in $\catname{Span}$ and $2$-truncated simplicial sets satisfying the conditions in Lemmas \ref{lemma:simpunit} and \ref{lemma:simpassociativity}.
\subsection{Relation to 2-Segal sets}
\label{sec:2segal}
It was shown by Stern \cite{stern:2segal} that there is an $\infty$-categorical equivalence between pseudomonoids in the $(2,1)$-category of spans and $2$-Segal simplicial sets (see \cites{dyckerhoff-kapranov, gkt, boors, penney}). We sketch here the relationship between his result and the correspondence in Theorems \ref{thm:simp2monoid} and \ref{thm:monoid2simp}.
The pseudomonoids considered in \cite{stern:2segal} have more structure than the monoids in $\catname{Span}$ considered here, requiring specific isomorphisms of spans representing the unit and associativity axioms, as well as additional coherence conditions on these isomorphisms. Given such a pseudomonoid, one can obtain a monoid in $\catname{Span}$ by forgetting the isomorphisms. In the other direction, given a monoid in $\catname{Span}$, one could choose unitor and associator isomorphisms, but it may not be possible to make the choices so that the coherence conditions are satisfied.
Stern's correspondence, combined with ours, implies a similar relationship between $2$-Segal sets and simplicial sets satisfying the conditions in Theorem \ref{thm:simp2monoid}, which we can now see directly.
Suppose that $\mathcal{X}= X_\bullet$ is a $2$-Segal set. It was shown in \cite{fgkpw} that $X_\bullet$ automatically satisfies unital conditions, which in the lowest dimension say that the diagrams
\begin{equation*}
\begin{tikzcd}
X_1 \arrow{r}{d_0^1} \arrow[swap]{d}{s_1^1} & X_0 \arrow{d}{s_0^0} \\
X_2 \arrow{r}{d_0^2 }& X_1
\end{tikzcd}
\;\;\;\;
\begin{tikzcd}
X_1 \arrow{r}{d_1^1} \arrow[swap]{d}{s_0^1} & X_0 \arrow{d}{s_0^0} \\
X_2 \arrow{r}{d_2^2 }& X_1
\end{tikzcd}
\end{equation*}
are pullbacks. These conditions are equivalent to the conditions in Lemma \ref{lemma:simpunit}.
The lowest-dimension $2$-Segal conditions say that the maps
\begin{align*}
(d_0^3,d_2^3): X_3 &\to X_2 \bitimes{d_1^2}{d_0^2} X_2,
& (d_1^3,d_3^3): X_3 &\to X_2 \bitimes{d_2^2}{d_1^2} X_2
\end{align*}
are bijections. We note that the codomains of these maps are exactly the taco sets $T_{02}\mathcal{X}$ and $T_{13}\mathcal{X}$. Composing the inverse of the first bijection with the second, we obtain a bijection $T_{02}\mathcal{X} \cong T_{13}\mathcal{X}$ satisfying the condition of Lemma \ref{lemma:simpassociativity}.
In the other direction, suppose that $X_\bullet$ is a $2$-truncated simplicial set satisfying the conditions of Theorem \ref{thm:simp2monoid}. Choose a bijection as in Lemma \ref{lemma:simpassociativity}, and set $X_3 \subseteq \Delta_3 \mathcal{X}$ to be the graph of the bijection. It is possible to choose the bijection carefully so that $X_3$ contains all degenerate $3$-simplices, thus giving a $3$-truncated simplicial set that can be extended to a $3$-coskeletal simplicial set, which we will still denote as $X_\bullet$.
By construction, $X_\bullet$ satisfies the lowest-dimension $2$-Segal conditions. However, it is not necessarily the case that $X_\bullet$ satisfies the higher-dimensional $2$-Segal conditions. These amount to coherence conditions on the bijection in Lemma \ref{lemma:simpassociativity}, which may or may not be possible to satisfy.
To summarize, if $\mathcal{X}$ is a $2$-Segal set, then the $2$-truncation of $\mathcal{X}$ satisfies the conditions of Theorem \ref{thm:simp2monoid}. We note that there is a wealth of examples of $2$-Segal sets in, e.g.\ \cites{dyckerhoff-kapranov,gkt,boors}, from which we can obtain many interesting examples of monoids in $\catname{Span}$.
On the other hand, obtaining a $2$-Segal set from a simplicial set satisfying the conditions of Theorem \ref{thm:simp2monoid} requires a choice of bijection as in Lemma \ref{lemma:simpassociativity}, subject to nontrivial coherence conditions.
\section{Frobenius objects in \texorpdfstring{$\catname{Span}$}{Span}}
\label{sec:frob}
\subsection{Nondegeneracy}\label{sec:nondegeneracy}
Let $X$ be a set. A morphism $\alpha: X \times X \spanto \bullet$ in $\catname{Span}$ is called \emph{nondegenerate} if there exists a morphism $\beta: \bullet \spanto X \times X$ such that the \emph{snake identity}
\begin{equation}\label{eqn:nondegen}
(\alpha \times \mathbf{1}) \circ (\mathbf{1} \times \beta) = (\mathbf{1} \times \alpha) \circ (\beta \times \mathbf{1}) = \mathbf{1}
\end{equation}
holds. If we depict $\alpha$ and $\beta$, respectively, by
\stringdiagram{
\pairing{0}{0}
\copairing{4}{1}
}
then \eqref{eqn:nondegen} is given by
\stringdiagram{
\identity{-1}{2}
\copairing{2}{2}
\pairing{0}{0}
\identity{3}{0}
\equals{4}{1}
\copairing{6}{2}
\identity{9}{2}
\identity{5}{0}
\pairing{8}{0}
\equals{10}{1}
\identity{11}{2}
\identity{11}{0}
}
which explains where the name ``snake identity'' comes from.
Let $A$ be a set with maps $\alpha_1, \alpha_2: A \to X$, so that the span
\begin{equation*}
\begin{tikzcd}
& [-10pt] A \arrow[swap]{dl}{(\alpha_1, \alpha_2)} \arrow{dr}& \\
X \times X & & \bullet
\end{tikzcd}
\end{equation*}
represents a morphism $\alpha: X \times X \spanto \bullet$. The following can be deduced from Proposition \ref{prop:spaniso} and the well-known fact that $\catname{Span}$ is a rigid category, but we include a more explicit proof here.
\begin{prop}\label{prop:nondegen}
$\alpha$ is nondegenerate if and only if $\alpha_1$ and $\alpha_2$ are bijections.
\end{prop}
\begin{proof}
Consider a span
\begin{equation*}
\begin{tikzcd}
& B \arrow{dr}{(\beta_1, \beta_2)} \arrow{dl}& [-10pt]\\
\bullet && X \times X
\end{tikzcd}
\end{equation*}
where $B$ is a set with maps $\beta_1, \beta_2: B \to X$, and let $\beta: \bullet \spanto X \times X$ denote the corresponding morphism in $\catname{Span}$. Then the composition of spans
\begin{equation}\label{diag:nondegen1}
\begin{tikzcd}
& & [-15pt] (X \times B) * (A \times X) \arrow{dl} \arrow{dr} &[-15pt]&\\
& X \times B \arrow[swap]{dl}{p_1} \arrow[swap]{dr}{1 \times(\beta_1, \beta_2)} && A \times X \arrow{dl}{(\alpha_1,\alpha_2) \times 1} \arrow{dr}{p_2} &\\
X && X \times X \times X && X
\end{tikzcd}
\end{equation}
represents $(\alpha \times \mathbf{1}) \circ (\mathbf{1} \times \beta)$. Using the identification
\begin{align*}
&(X \times B) * (A \times X) \\
&= \{(x,b,a,x') \in X \times B \times A \times X \mid x = \alpha_1(a), \beta_1(b) = \alpha_2(a), \beta_2(b) = x'\} \\
&= A \bitimes{\alpha_2}{\beta_1} B,
\end{align*}
we can rewrite \eqref{diag:nondegen1} as
\begin{equation*}
\begin{tikzcd}
& & [-15pt] A \bitimes{\alpha_2}{\beta_1} B \arrow[swap]{dl}{\alpha_1 \times 1} \arrow{dr}{1 \times \beta_2} &[-15pt]&\\
& X \times B \arrow[swap]{dl}{p_1} \arrow[swap]{dr}{1 \times(\beta_1, \beta_2)} && A \times X \arrow{dl}{(\alpha_1,\alpha_2) \times 1} \arrow{dr}{p_2} &\\
X && X \times X \times X && X
\end{tikzcd}
\end{equation*}
which is isomorphic to
\begin{equation}\label{diag:nondegen3}
\begin{tikzcd}
& & [-15pt] A \bitimes{\alpha_2}{\beta_1} B \arrow[swap]{dl}{p_1} \arrow{dr}{p_2} &[-15pt]&\\
& A \arrow[swap]{dl}{\alpha_1} \arrow{dr}{\alpha_2} && B \arrow{dl}[swap]{\beta_1} \arrow{dr}{\beta_2} &\\
X && X && X
\end{tikzcd}
\end{equation}
via the identity map on $A \bitimes{\alpha_2}{\beta_1} B$. Similarly, the span
\begin{equation}\label{diag:nondegen4}
\begin{tikzcd}
& & [-15pt] B \bitimes{\beta_2}{\alpha_1} A \arrow[swap]{dl}{p_1} \arrow{dr}{p_2} &[-15pt]&\\
& B \arrow[swap]{dl}{\beta_1} \arrow{dr}{\beta_2} && A \arrow{dl}[swap]{\alpha_1} \arrow{dr}{\alpha_2} &\\
X && X && X
\end{tikzcd}
\end{equation}
represents the composition $(\mathbf{1} \times \alpha) \circ (\beta \times \mathbf{1})$.
Thus we see that $\alpha$ is nondegenerate if and only if there exists $(B, \beta_1, \beta_2)$ such that the spans \eqref{diag:nondegen3} and \eqref{diag:nondegen4} are isomorphic to the identity on $X$. This is the case if and only if the morphism $\alpha^\flat: X \spanto X$ represented by the span $(A, \alpha_1, \alpha_2)$ is an isomorphism. The result then follows from Proposition \ref{prop:spaniso}.
\end{proof}
An immediate consequence of Proposition \ref{prop:nondegen} is the following.
\begin{cor}
A morphism $\alpha: X \times X \spanto \bullet$ is nondegenerate if and only if it can be represented by a span of the form
\begin{equation}\label{diag:alpha}
\begin{tikzcd}
& X \arrow[swap]{dl}{(1, \hat{\alpha})} \arrow{dr} & \\
X \times X & & \bullet
\end{tikzcd}
\end{equation}
where $\hat{\alpha}: X \to X$ is a bijection. This representation is unique.
\end{cor}
\begin{remark}
When $\alpha$ is represented by \eqref{diag:alpha}, then the span
\begin{equation}\label{diag:beta}
\begin{tikzcd}
& X \arrow{dr}{(\hat{\alpha},1)} \arrow{dl} & \\
\bullet && X \times X
\end{tikzcd}
\end{equation}
represents the corresponding morphism $\beta: \bullet \spanto X \times X$.
\end{remark}
\subsection{Frobenius objects}
\begin{definition}
A \emph{Frobenius object} in $\catname{Span}$ is a monoid $(X, \eta, \mu)$ in $\catname{Span}$ that is equipped with a morphism $\varepsilon: X \spanto \bullet$ (\emph{counit}), such that $\varepsilon \circ \mu$ is nondegenerate.
\end{definition}
Suppose that $X$ is a Frobenius object in $\catname{Span}$, and let $\alpha = \varepsilon \circ \mu$. Since $\alpha$ is nondegenerate, there is a unique bijection $\hat{\alpha}: X \to X$ such that $\alpha$ is represented by \eqref{diag:alpha}.
The counit can then be recovered from $\hat{\alpha}$, as follows.
\begin{lemma}\label{lemma:counit}
If the unit morphism $\eta$ is represented as in \eqref{diag:monoidspan}, then
\begin{equation}\label{diag:counit}
\begin{tikzcd}
& E \arrow[swap]{dl}{\hat{\alpha}\circ s_0^0} \arrow{dr} & \\
X & & \bullet
\end{tikzcd}
\end{equation}
represents the counit $\varepsilon$.
\end{lemma}
\begin{proof}
It follows from the unit axiom that $\varepsilon = \varepsilon \circ \mu \circ (\eta \times \mathbf{1}) = \alpha \circ (\eta \times \mathbf{1})$. Thus, the composition of spans
\begin{equation}\label{diag:counit1}
\begin{tikzcd}
&& [-15pt] (E \times X) * X \arrow{dl} \arrow{dr} &[-15pt]& \\
& E \times X \arrow[swap]{dl}{p_2} \arrow{dr}{s_0^0 \times 1} && X \arrow[swap]{dl}{(1,\hat{\alpha})} \arrow{dr} & \\
X && X \times X && \bullet
\end{tikzcd}
\end{equation}
represents $\varepsilon$. Using the identification
\begin{align*}
(E \times X) * X &= \{(e,x,x') \in E \times X \times X \mid s_0^0(e) = x', x = \hat{\alpha}(x')\} \\
&= E,
\end{align*}
we can rewrite \eqref{diag:counit1} as
\begin{equation*}
\begin{tikzcd}
&& [-15pt] E \arrow[swap]{dl}{(1, \hat{\alpha} \circ s_0^0)} \arrow{dr}{s_0^0} &[-15pt]& \\
& E \times X \arrow[swap]{dl}{p_2} \arrow{dr}{s_0^0 \times 1} && X \arrow[swap]{dl}{(1,\hat{\alpha})} \arrow{dr} & \\
X && X \times X && \bullet
\end{tikzcd}
\end{equation*}
which gives the result.
\end{proof}
Now, suppose that $(X, \eta, \mu)$ is a monoid in $\catname{Span}$, and that $\hat{\alpha}:X \to X$ is a bijection. We can then define a morphism $\varepsilon: X \spanto \bullet$ via \eqref{diag:counit}. As a result of the correspondence in Lemma \ref{lemma:counit}, nondegeneracy of $\varepsilon \circ \mu$ is equivalent to the condition $\alpha = \varepsilon \circ \mu$. This gives us an alternative characterization of Frobenius structures.
\begin{prop}\label{prop:frob}
Suppose $(X, \eta, \mu)$ is a monoid in $\catname{Span}$ equipped with a bijection $\hat{\alpha}:X \to X$. Let $\alpha: X \times X \spanto \bullet$ and $\varepsilon: X \spanto \bullet$ be defined as in \eqref{diag:alpha} and \eqref{diag:counit}, respectively. Then $(X, \eta, \mu, \varepsilon)$ is a Frobenius object in $\catname{Span}$ if and only if $\alpha = \varepsilon \circ \mu$.
\end{prop}
We remark that the result of Proposition \ref{prop:frob} holds more generally in any rigid monoidal category and appears to be folklore.
The condition in Proposition \ref{prop:frob} makes it relatively straightforward to understand Frobenius structures in terms of the simplicial set perspective, since we can view $\hat{\alpha}$ as a bijection of the set of $1$-simplices.
\begin{thm}\label{thm:frobenius}
Let $X_\bullet$ be a $2$-truncated simplicial set satisfying the conditions in Lemmas \ref{lemma:simpunit} and \ref{lemma:simpassociativity}, and let $\hat{\alpha}:X_1 \to X_1$ be a bijection. Then the corresponding monoid in $\catname{Span}$, as given by Theorem \ref{thm:simp2monoid}, together with $\varepsilon$ given by \eqref{diag:counit}, is a Frobenius object in $\catname{Span}$ if and only if there exists a map $\gamma:X_1 \to X_2$ such that
\begin{enumerate}
\item $d_0^2 \gamma(x) = \hat{\alpha}(x)$, $d_1^2 \gamma(x) \in \hat{\alpha}(s_0^0(X_0))$, and $d_2^2 \gamma(x) = x$ for all $x \in X_1$, and
\item if $\zeta \in X_2$ is such that $d_1^2 \zeta \in \hat{\alpha}(s_0^0(X_0))$, then $\zeta \in \gamma(X_1)$.
\end{enumerate}
\end{thm}
\begin{proof}
From Proposition \ref{prop:frob}, we have that $X_1$, with the given data, is a Frobenius object in $\catname{Span}$ if and only if $\alpha = \varepsilon \circ \mu$, where $\alpha$ and $\varepsilon$ are given by \eqref{diag:alpha} and \eqref{diag:counit}, respectively. Thus we consider the composition
\begin{equation}\label{diag:alphafrob}
\begin{tikzcd}
&& X_2 * X_0 \arrow{dl} \arrow{dr} && \\
& X_2 \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{d_1^2} && X_0 \arrow[swap]{dl}{\hat{\alpha}\circ s_0^0} \arrow{dr} \\
X_1 \times X_1 && X_1 && \bullet
\end{tikzcd}
\end{equation}
which represents $\varepsilon \circ \mu$. We can rewrite the pullback in \eqref{diag:alphafrob} using the following identification:
\begin{equation}\label{eqn:epsilonmu}
\begin{split}
X_2 * X_0 &= \{(\zeta, u) \in X_2 \times X_0 \mid d_1^2(\zeta) = \hat{\alpha} \circ s_0^0(u)\} \\
&= \{ \zeta \in X_2 \mid d_1^2(\zeta) \in \hat{\alpha}(s_0^0(X_0))\}.
\end{split}
\end{equation}
Here, we have used the fact that $\hat{\alpha}$ and $s_0^0$ are both injective, so the element $u$, if it exists, is unique.
The equation $\alpha = \varepsilon \circ \mu$ holds if and only if there exists an isomorphism of spans from \eqref{diag:alpha} to \eqref{diag:alphafrob}. Using \eqref{eqn:epsilonmu}, we see that such an isomorphism is given by a map $\gamma: X_1 \to X_2$ such that $d_1^2(\gamma(x)) \in \hat{\alpha}(s_0^0(X_0))$ for all $x \in X_1$. Compatibility with the spans adds the requirements that $d_2^2(\gamma(x)) = x$ and $d_0^2(\gamma(x)) = \hat{\alpha}(x)$. These conditions already imply that $\gamma$ is injective, so for $\gamma$ to be an isomorphism we only need to add the surjectivity condition (2).
\end{proof}
Intuitively, we can think of Theorem \ref{thm:frobenius} as saying that a Frobenius structure gives an additional degeneracy-type map $\gamma$, relative to the bijection $\hat{\alpha}$. Condition (1) expresses compatibility conditions between $\gamma$ and the face maps, and condition (2) is analogous to the conditions in Lemma \ref{lemma:simpunit}. The following example is illustrative.
\begin{example}\label{ex:groupoid}
Let $G_1 \,\substack{\to \\[-1em] \to}\, G_0$ be a groupoid, and let $G_\bullet$ be the nerve; see Example \ref{ex:cat}. Let $\hat{\alpha}: G_1 \to G_1$ be given by $\hat{\alpha}(x) = x^{-1}$. Then there is a unique $\gamma: G_1 \to G_2$ satisfying condition (1) in Theorem \ref{thm:frobenius}, given by $\gamma(x) = (x,x^{-1})$. Condition (2) then says that, if $x,y \in G_1$ are such that $xy$ is an identity morphism, then $y = x^{-1}$, which is true for a groupoid.
It is worth noting that the above Frobenius structure is not the only possibility for a groupoid. The most general possibility is as follows. Let $\sigma: G_0 \to G_1$ be a section of the target map, and set
\[ \hat{\alpha}(x) = x^{-1}\sigma(1_{t(x)}).\]
Then the map $\gamma(x) = (x, \hat{\alpha}(x))$ satisfies the conditions of Theorem \ref{thm:frobenius}.
\end{example}
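As a quick machine check of the conditions in Theorem \ref{thm:frobenius} (a computational aside of ours, not part of the original text; the encoding below is an illustrative convention), one can verify conditions (1) and (2) for the nerve of a finite cyclic group, written additively, with $\hat{\alpha}(x) = x^{-1}$:
\begin{verbatim}
# Hedged sanity check (ours) of conditions (1)-(2) for the nerve of
# G = Z/4, written additively, with alpha_hat(x) = -x.
n = 4
X1 = list(range(n))
X2 = [(y, z) for y in X1 for z in X1]                   # nerve 2-simplices
faces = {(y, z): (y, (y + z) % n, z) for (y, z) in X2}  # (d2, d1, d0)
unit_images = {0}                                       # s00(X0) = {identity}
alpha_hat = {x: (-x) % n for x in X1}                   # x -> x^{-1}
alpha_unit = {alpha_hat[u] for u in unit_images}
gamma = {x: (x, alpha_hat[x]) for x in X1}              # gamma(x) = (x, x^{-1})
# (1): d2 gamma(x) = x, d1 gamma(x) in alpha_hat(s00(X0)),
#      d0 gamma(x) = alpha_hat(x)
assert all(faces[gamma[x]] == (x, 0, alpha_hat[x]) for x in X1)
# (2): every 2-simplex whose middle face lies in alpha_hat(s00(X0))
#      is of the form gamma(x)
assert all(zeta in gamma.values() for zeta in X2
           if faces[zeta][1] in alpha_unit)
\end{verbatim}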
\subsection{Comultiplication}
If $X$ is a Frobenius object in $\catname{Span}$, then it has a naturally induced \emph{comultiplication} $\delta: X \spanto X \times X$, defined as $\delta = (\mathbf{1} \times \mu) \circ (\beta \times \mathbf{1})$; see Figure \ref{fig:comult}. General diagrammatic arguments prove that $\delta$ is counital (with counit $\varepsilon$), coassociative, and satisfies the \emph{Frobenius equation} $(\mu \times \mathbf{1}) \circ (\mathbf{1} \times \delta) = \delta \circ \mu = (\mathbf{1} \times \mu) \circ (\delta \times \mathbf{1})$.
\begin{figure}[th]
\stringdiagram{
\comultiplication{0}{0}
\equals{2}{0}
\copairing{4}{1}
\identity{7}{1}
\identity{3}{-1}
\multiplication{6}{-1}
}
\caption{Definition of comultiplication via string diagrams.}
\label{fig:comult}
\end{figure}
\begin{prop}
Let $\mu$ and $\alpha$ be represented as in \eqref{diag:monoidspan} and \eqref{diag:alpha}. Then
\[
\begin{tikzcd}
& M \arrow[swap]{dl}{d_0^2} \arrow{dr}{(\hat{\alpha} \circ d_2^2, d_1^2)}& \\
X && X \times X
\end{tikzcd}
\]
represents the comultiplication $\delta$.
\end{prop}
\begin{proof}
The composition of spans
\[
\begin{tikzcd}
&&[-20pt] M \arrow[swap]{dl}{(d_2^2,d_0^2)} \arrow{dr}{(\hat{\alpha}\circ d_2^2, 1)} &[-20pt]&[-15pt] \\
& X \times X \arrow[swap]{dl}{p_2} \arrow[swap]{dr}{(\hat{\alpha},1)\times 1} && X \times M \arrow[swap]{dl}{1 \times (d_2^2,d_0^2)} \arrow{dr}{1 \times d_1^2} & \\
X && X \times X \times X && X \times X
\end{tikzcd}
\]
represents $(\mathbf{1} \times \mu) \circ (\beta \times \mathbf{1}) = \delta$. Composing the legs through the apex $M$ gives $d_0^2$ on the left and $(\hat{\alpha} \circ d_2^2, d_1^2)$ on the right, which is the claimed span.
\end{proof}
\subsection{Commutativity and TQFT}
Let $X$ be a set, and let $\hat{\tau}: X \times X \to X \times X$ be the ``twist map'' that exchanges the components: $\hat{\tau}(x,x') = (x',x)$. The span
\[
\begin{tikzcd}
& X \times X \arrow[swap]{dl}{\hat{\tau}} \arrow{dr}{1}& \\
X \times X && X \times X
\end{tikzcd}
\]
gives an induced ``twist morphism'' $\tau: X \times X \spanto X \times X$ in $\catname{Span}$.
A monoid $X$ in $\catname{Span}$ is \emph{commutative} if $\mu \circ \tau = \mu$. A straightforward calculation shows that, if $\mu$ is represented as in \eqref{diag:monoidspan}, then $X$ is commutative if and only if there exists a bijection $\theta: M \to M$ such that $d_1^2 \circ \theta = d_1^2$, $d_2^2 \circ \theta = d_0^2$, and $d_0^2 \circ \theta = d_2^2$.
From the general theory of TQFT (e.g. \cite{kock-book}), we know that a commutative Frobenius object in $\catname{Span}$ is equivalent to a symmetric monoidal functor from the oriented two-dimensional cobordism category to $\catname{Span}$. From such a functor, we can obtain invariants of closed oriented surfaces as the partition function of the TQFT. Specifically, for the genus $g$ surface $\Sigma_g$, the partition function is the morphism
\begin{equation}\label{eqn:partition}
Z(\Sigma_g) = \varepsilon \circ (\mu \circ \delta)^g \circ \eta: \bullet \spanto \bullet.
\end{equation}
Such a morphism is equivalent to an isomorphism class of sets and can therefore be identified with a cardinality. When this cardinality is finite, $Z(\Sigma_g)$ can be viewed as a natural number.
A convenient way to compute $Z(\Sigma_g)$ is to count the number of trajectories in \eqref{eqn:partition}; see Section \ref{sec:spanww}.
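To make this concrete, the following is a minimal computational sketch (ours, not part of the original text) of the trajectory-counting recipe, assuming the Frobenius object is finite and encoded by explicit data. The identifiers are illustrative conventions of ours: \texttt{faces[m]} lists $(d_2^2(m), d_1^2(m), d_0^2(m))$, \texttt{alpha\_hat} encodes $\hat{\alpha}$, and \texttt{units} lists the elements $s_0^0(u)$ for $u \in X_0$.
\begin{verbatim}
from itertools import product

def transfer_matrix(X1, X2, faces, alpha_hat):
    # T[x][y] = number of trajectories from x to y through mu o delta:
    # x = d0(m) |--> (alpha_hat(d2(m)), d1(m)) |--> d1(m'), where m'
    # must satisfy d2(m') = alpha_hat(d2(m)) and d0(m') = d1(m).
    idx = {x: i for i, x in enumerate(X1)}
    T = [[0] * len(X1) for _ in X1]
    for m, mp in product(X2, repeat=2):
        d2, d1, d0 = faces[m]
        e2, e1, e0 = faces[mp]
        if e2 == alpha_hat[d2] and e0 == d1:
            T[idx[d0]][idx[e1]] += 1
    return T

def partition_function(X1, X2, faces, alpha_hat, units, g):
    # Z(Sigma_g): total number of trajectories from the unit elements
    # s00(u) to the counit elements alpha_hat(s00(u')).
    idx = {x: i for i, x in enumerate(X1)}
    T = transfer_matrix(X1, X2, faces, alpha_hat)
    d = len(X1)
    P = [[int(i == j) for j in range(d)] for i in range(d)]
    for _ in range(g):  # g-fold matrix power of T
        P = [[sum(P[i][k] * T[k][j] for k in range(d))
              for j in range(d)] for i in range(d)]
    return sum(P[idx[e]][idx[alpha_hat[ep]]]
               for e in units for ep in units)
\end{verbatim}
The examples of Section \ref{sec:examples} can be checked against this sketch.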
\subsection{Symmetric Frobenius objects}\label{sec:symmetric}
A Frobenius object is called \emph{symmetric} if $\alpha \circ \tau = \alpha$.
\begin{prop}
A Frobenius object in $\catname{Span}$ is symmetric if and only if $\hat{\alpha}^2 = 1$.
\end{prop}
\begin{proof}
The composition of spans
\[
\begin{tikzcd}
&[-15pt]&[-15pt] X \arrow[swap]{dl}{(1,\hat{\alpha})} \arrow{dr}{1} &[-10pt]& \\
& X \times X \arrow[swap]{dl}{\hat{\tau}} \arrow{dr}{1} && X \arrow[swap]{dl}{(1,\hat{\alpha})} \arrow{dr} & \\
X\times X && X \times X && \bullet
\end{tikzcd}
\]
represents $\alpha \circ \tau$. The equation $\alpha \circ \tau = \alpha$ holds if and only if there is a bijection $f:X \to X$ such that $(1, \hat{\alpha}) \circ f = \hat{\tau} \circ (1, \hat{\alpha})$. Expressing the latter condition in components, we obtain the equations $\hat{\alpha} = f$ and $\hat{\alpha} \circ f = 1$, from which the result follows.
\end{proof}
\begin{example}
Let $G$ be a group, and let $\omega \in G$ be an arbitrary (fixed) element of $G$. Then $G$ has the structure of a Frobenius object in $\catname{Span}$, with $\hat{\alpha}(x) = x^{-1}\omega$ (see Example \ref{ex:groupoid}). Since $\hat{\alpha}^2(x) = \omega^{-1} x \omega$, we see that this Frobenius structure is symmetric if and only if $\omega$ is in the center of $G$.
\end{example}
The second main result in Stern's paper \cite{stern:2segal} (building on the result discussed in Section \ref{sec:2segal}) is an $\infty$-categorical equivalence between Calabi-Yau algebra objects in the $(2,1)$-category of spans and $2$-Segal cyclic sets. This result relates to the result of Theorem \ref{thm:frobenius} in a way that parallels the discussion in Section \ref{sec:2segal}. Roughly, given a Calabi-Yau algebra object, one can obtain a symmetric Frobenius object by forgetting the higher categorical data. On the other hand, given a symmetric Frobenius object, one could try to choose the required higher data, but there are nontrivial coherence conditions that would need to be satisfied in order to obtain a Calabi-Yau algebra object.
As in Section \ref{sec:2segal}, there is a corresponding relationship on the simplicial set side. A cyclic structure on a $2$-Segal set includes an involution on $X_1$ that satisfies the conditions of Theorem \ref{thm:frobenius}, but the other direction requires extending the action of $\hat{\alpha}$ on $X_1$ to higher-dimensional simplices in a coherent way.
Finally, we point out that the result of Theorem \ref{thm:frobenius} includes non-symmetric Frobenius objects, and this suggests the existence of a higher categorical analogue that extends Stern's result.
\section{Examples}\label{sec:examples}
In this section, we look at some examples of commutative Frobenius objects in $\catname{Span}$, and we compute the associated topological invariants for closed surfaces.
\begin{example}\label{ex:abeliangp}
Let $G$ be a finite abelian group, and fix an element $\omega \in G$. Then, as a special case of Example \ref{ex:groupoid}, we obtain a commutative Frobenius object in $\catname{Span}$ associated to the nerve of $G$, where $\hat{\alpha}(x) = x^{-1}\omega$. In this case, the unit, multiplication, counit, and comultiplication are represented by the following spans:
\begin{align*}
\eta&: \bullet \tolabel{e} G,\\
\mu&: G \times G \tolabel{m} G,\\
\varepsilon&: G \fromlabel{\omega} \bullet,\\
\delta&: G \fromlabel{p_2} G \times G \tolabel{(\hat{\alpha}\circ p_1, m)} G \times G.
\end{align*}
To compute \eqref{eqn:partition}, we first consider the trajectories from $x$ to $y$ in $\mu \circ \delta: G \spanto G$. Using the above formulas, we can see that such a trajectory only exists when $y = \omega x$ and is of the form
\[ x \mapsfrom (z,x) \mapsto (z^{-1}\omega, zx) \mapsto \omega x = y.\]
Thus the number of trajectories from $x$ to $y$ in $\mu \circ \delta$ is $|G|$ if $y = \omega x$ and $0$ otherwise.
By iteration, we have that, for any $g \geq 0$, the number of trajectories from $x$ to $y$ in $(\mu \circ \delta)^g$ is $|G|^g$ if $y = \omega^g x$ and $0$ otherwise. Incorporating the unit and counit in \eqref{eqn:partition} has the effect of setting $x=e$ and $y=\omega$. Thus the partition function is
\[ Z(\Sigma_g) =
\begin{cases}
|G|^g & \mbox{if } \omega^g = \omega,\\
0 & \mbox{otherwise}.
\end{cases}
\]
In particular, if we set $\omega = e$, then $\omega^g = \omega$ for every $g \geq 0$, so $Z(\Sigma_g) = |G|^g$ for all $g \geq 0$; in particular, $Z(S^2) = 1$.
\end{example}
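As a hypothetical machine check of the closed form above (reusing \texttt{partition\_function} from the sketch in the previous section, with our illustrative encoding of the nerve), the following verifies the formula for the cyclic group $\mathbb{Z}/3$, written additively:
\begin{verbatim}
n = 3
G = list(range(n))
X2 = [(y, z) for y in G for z in G]
faces = {(y, z): (y, (y + z) % n, z) for (y, z) in X2}  # (d2, d1, d0)
for omega in G:
    alpha_hat = {x: (omega - x) % n for x in G}  # x -> x^{-1} omega
    for g in range(4):
        Z = partition_function(G, X2, faces, alpha_hat, [0], g)
        assert Z == (n**g if (g * omega) % n == omega else 0)
\end{verbatim}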
\begin{example}\label{ex:twoelement2}
Recall the family of monoids in $\catname{Span}$ described in Example \ref{ex:twoelement}, which are all commutative. We now consider the compatible Frobenius structures. Since $X = \{a,b\}$ has two elements, there are two possibilities for $\hat{\alpha}$.
We first consider the case where $\hat{\alpha}$ is the identity map. In this case, condition (1) in Theorem \ref{thm:frobenius} says that $\gamma(a) \in Y_{aaa}$ and $\gamma(b) \in Y_{bab}$. Condition (2) in Theorem \ref{thm:frobenius} implies that there are no other elements of $Y_{bab}$, so $n_{bab} = 1$. However, $n_{bbb}$ remains unconstrained, leaving an infinite family of Frobenius objects in $\catname{Span}$. For simplicity of notation, we write $n = n_{bbb}$.
In this case, the unit and counit both give one trajectory between $\bullet$ and $a$. The numbers of trajectories for the multiplication and comultiplication are given in Figure \ref{fig:twoelement}.
\begin{figure}[th]
\begin{tabular}{c|c|c}
$\mu$ & $a$ & $b$ \\ \hline
$(a,a)$ & $1$ & $0$ \\ \hline
$(a,b)$ & $0$ & $1$ \\ \hline
$(b,a)$ & $0$ & $1$ \\ \hline
$(b,b)$ & $1$ & $n$
\end{tabular}
\hspace{.5in}
\begin{tabular}{c|c|c|c|c}
$\delta$ & $(a,a)$ & $(a,b)$ & $(b,a)$ & $(b,b)$ \\ \hline
$a$ & $1$ & $0$ & $0$ & $1$ \\ \hline
$b$ & $0$ & $0$ & $1$ & $n$
\end{tabular}
\caption{Numbers of trajectories in $\mu$ and $\delta$ for Example \ref{ex:twoelement2} when $\hat{\alpha} = 1$.}
\label{fig:twoelement}
\end{figure}
From these tables, we can calculate the numbers of trajectories in $\mu \circ \delta$ as the entries in the matrix
\[ A = \mat{2 & n \\ n & n^2+1}.\]
For example, there are $2$ trajectories from $a$ to $a$ (one passing through $(a,a)$ and one passing through $(b,b)$) and $n$ trajectories from $a$ to $b$ (all passing through $(b,b)$). Taking powers of $A$ gives us the numbers of trajectories in $(\mu \circ \delta)^g$ for any $g \geq 0$, so we can use matrix algebra to calculate the partition function:
\begin{align*}
Z(\Sigma_g) &= \mat{1&0}A^g\mat{1\\0} \\
&= \mat{1&0} \left(\frac{1}{1+n^2}\mat{1&-n\\n&1}\mat{n^2+2&0\\0&1}\mat{1&n\\-n&1}\right)^g \mat{1\\0}\\
&= \frac{1}{1+n^2}\mat{1&-n}\mat{(n^2+2)^g&0\\0&1}\mat{1\\-n}\\
&= \frac{(n^2+2)^g + n^2}{1+n^2}.
\end{align*}
Now we consider the case where $\hat{\alpha}$ is the nontrivial automorphism of $X = \{a,b\}$. In this case, condition (1) in Theorem \ref{thm:frobenius} says that $\gamma(a) \in Y_{bba}$ and $\gamma(b) \in Y_{abb}$. Condition (2) in Theorem \ref{thm:frobenius} implies that there are no elements in $Y_{bbb}$, so $n_{bbb} = 0$. However, $n_{bab}$ remains unconstrained, leaving an infinite family of Frobenius objects in $\catname{Span}$. Write $m = n_{bab}$. By calculations similar to the previous case, we obtain the matrix
\[ B = \mat{0&2m\\2&0}\]
with the numbers of trajectories in $\mu \circ \delta$. Then the partition function is
\[ Z(\Sigma_g) = \mat{0 & 1} B^g \mat{1\\0} = \begin{cases}
2^g m^{(g-1)/2} & \mbox{if $g$ odd},\\
0 & \mbox{if $g$ even}.
\end{cases}\]
\end{example}
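Since $\mat{1&0}A^g\mat{1\\0}$ and $\mat{0 & 1} B^g \mat{1\\0}$ are just entries of matrix powers, the two closed forms above are easy to test numerically; the following is a hypothetical check of ours using \texttt{numpy} (small ranges of $n$, $m$, and $g$ to stay within machine integers):
\begin{verbatim}
import numpy as np

for n in range(4):
    A = np.array([[2, n], [n, n**2 + 1]])
    for g in range(6):
        Z = np.linalg.matrix_power(A, g)[0, 0]
        assert Z == ((n**2 + 2)**g + n**2) // (n**2 + 1)

for m in range(4):
    B = np.array([[0, 2 * m], [2, 0]])
    for g in range(6):
        Z = np.linalg.matrix_power(B, g)[1, 0]
        assert Z == (2**g * m**((g - 1) // 2) if g % 2 == 1 else 0)
\end{verbatim}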
\section{Frobenius algebras via \texorpdfstring{$\catname{Span}$}{Span}} \label{sec:vect}
Let $\Bbbk$ be a field, and let $\catname{Vect}_\Bbbk$ denote the category of vector spaces over $\Bbbk$. Let $\catname{FSpan}$ denote the subcategory of $\catname{Span}$ consisting of finite sets and finite spans.
There is a functor from $\catname{FSpan}$ to $\catname{Vect}_\Bbbk$, which on objects takes a finite set $X$ to the space $\Bbbk[X]$ of $\Bbbk$-valued functions on $X$. Given a finite span $(A, f_1, f_2)$ from $X$ to $Y$, the induced map $\Bbbk[X] \to \Bbbk[Y]$ is obtained by pulling back along $f_1$ and then fiberwise summing over $f_2$. This functor is described, for example, in \cites{dyckerhoff-kapranov,morton:two} and is a simplified version of the degroupoidification functor described in \cite{bhw:groupoidification}.
The functor $\catname{FSpan} \to \catname{Vect}_\Bbbk$ is symmetric monoidal, taking the Cartesian product in $\catname{FSpan}$ to the tensor product in $\catname{Vect}_\Bbbk$. Thus it induces a map taking Frobenius objects in $\catname{FSpan}$ to Frobenius algebras over $\Bbbk$, with commutative Frobenius objects going to commutative Frobenius algebras.
Suppose that $X_\bullet$ is a finite $2$-truncated simplicial set satisfying the conditions of Theorem \ref{thm:simp2monoid}, equipped with a bijection $\hat{\alpha}: X_1 \to X_1$, satisfying the conditions of Theorem \ref{thm:frobenius}. Then $X_\bullet$ corresponds to a Frobenius object in $\catname{FSpan}$. The induced Frobenius algebra is $\Bbbk[X_1]$, where the multiplication is given by a convolution product
\[ \mu(\varphi, \psi)(x) = \sum_{\substack{\zeta \in X_2,\\ x=d_1^2(\zeta)}} \varphi(d_0^2(\zeta)) \psi(d_2^2(\zeta))\]
for $\varphi, \psi \in \Bbbk[X_1]$. The identity element $\mathbbm{1} \in \Bbbk[X_1]$ is given by
\[ \mathbbm{1}(x) = \begin{cases}
1 & \mbox{ if $x \in s_0^0(X_0)$,} \\
0 & \mbox{ otherwise,}
\end{cases}
\]
and the counit $\varepsilon: \Bbbk[X_1] \to \Bbbk$ is given by
\[ \varepsilon(\varphi) = \sum_{u \in X_0} \varphi(\hat{\alpha} \circ s_0^0(u)).\]
We note that the conditions in Theorems \ref{thm:simp2monoid} and \ref{thm:frobenius} could be interpreted (in the finite case) as necessary and sufficient conditions for the above data to satisfy the axioms of a Frobenius algebra.
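For concreteness, here is a minimal sketch (ours, not part of the original text) of this Frobenius algebra, with elements of $\Bbbk[X_1]$ represented as dictionaries mapping $1$-simplices to coefficients, and with the same illustrative encoding of the simplicial data as in the earlier sketches:
\begin{verbatim}
def conv(phi, psi, X2, faces):
    # mu(phi, psi)(x) = sum over 2-simplices zeta with x = d1(zeta)
    # of phi(d0(zeta)) * psi(d2(zeta)).
    out = {}
    for zeta in X2:
        d2, d1, d0 = faces[zeta]
        c = phi.get(d0, 0) * psi.get(d2, 0)
        if c:
            out[d1] = out.get(d1, 0) + c
    return out

def unit(units):
    # the identity element: indicator of s00(X0)
    return {e: 1 for e in units}

def counit(phi, units, alpha_hat):
    # epsilon(phi) = sum over u in X0 of phi(alpha_hat(s00(u)))
    return sum(phi.get(alpha_hat[e], 0) for e in units)
\end{verbatim}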
\begin{example}\label{ex:groupoidvect}
Let $G_1 \,\substack{\to \\[-1em] \to}\, G_0$ be a finite groupoid, and let $X = G_1$ be the corresponding Frobenius object in $\catname{FSpan}$ (see Example \ref{ex:groupoid}). Then the induced Frobenius algebra is $\Bbbk[G_1]$, where the multiplication is the convolution product, given by
\[ \mu(\varphi, \psi)(x) = \sum_{\substack{(y,z) \in G_2,\\ x = yz}} \varphi(z) \psi(y)\]
for $\varphi, \psi \in \Bbbk[G_1]$. If the counit is given as in Example \ref{ex:groupoid} by a section $\sigma: G_0 \to G_1$ of the target map, then the induced counit $\Bbbk[G_1] \to \Bbbk$ is given by
\[ \varepsilon(\varphi) = \sum_{p \in G_0} \varphi(\sigma(p)).\]
\end{example}
\begin{example}
A special case of Example \ref{ex:groupoidvect} is the pair groupoid $([n] \times [n]) \,\substack{\to \\[-1em] \to}\, [n]$, where $[n] = \{1, \dots, n\}$, and where $\sigma$ is the diagonal map. Then the induced Frobenius algebra $\Bbbk[[n] \times [n]]$ can be identified with the algebra of $n \times n$ matrices over $\Bbbk$.
We note that the functor $\catname{FSpan} \to \catname{Vect}_\Bbbk$ preserves categorical products, taking disjoint unions to direct sums. Thus, by the Artin-Wedderburn theorem, it follows that, if $\Bbbk$ is algebraically closed, then every finite-dimensional semisimple algebra arises from a Frobenius object in $\catname{FSpan}$.
\end{example}
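As a hypothetical sanity check (ours, reusing \texttt{conv} from the sketch above), convolution of delta functions on the pair groupoid over $[n]$ reproduces the matrix-unit relations $E_{ij}E_{kl} = \delta_{jk}E_{il}$, up to the ordering convention built into $\mu$:
\begin{verbatim}
n = 3
X2 = [((i, j), (j, k)) for i in range(n)
      for j in range(n) for k in range(n)]
faces = {((i, j), (j, k)): ((i, j), (i, k), (j, k))
         for i in range(n) for j in range(n) for k in range(n)}
pairs = [(i, j) for i in range(n) for j in range(n)]
for (i, j) in pairs:
    for (k, l) in pairs:
        prod = conv({(k, l): 1}, {(i, j): 1}, X2, faces)
        assert prod == ({(i, l): 1} if j == k else {})
\end{verbatim}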
\begin{example}
Let $X = \{a,b\}$ be a Frobenius object in $\catname{FSpan}$ of the form described in Example \ref{ex:twoelement2}. In this case, $\Bbbk[X]$ is generated as a vector space by two elements, one of which is the unit element $1$, and the other we will call $\theta$. In the case where $\hat{\alpha}$ is the identity map and $n_{bab}=1$, $n_{bbb}=n$, the algebra multiplication is given by $\theta^2 = 1 + n\theta$, and the counit is given by $\varepsilon(1)=1$, $\varepsilon(\theta)=0$. In the case where $\hat{\alpha}$ is the nontrivial automorphism and $n_{bab}=m$, $n_{bbb}=0$, the algebra multiplication is given by $\theta^2=m$, and the counit is given by $\varepsilon(1)=0$, $\varepsilon(\theta)=1$. We remark that the case $m=0$ gives a Frobenius algebra that is isomorphic to the cohomology ring of $S^2$.
\end{example}
\section{Introduction}
In recent years, \emph{sparsity} has been a key model in machine learning, signal processing, and statistics. While sparsity modelling is powerful,
\emph{structured sparsity} models further exploit domain knowledge by characterizing the interdependency between the non-zero coefficients of an unknown parameter vector $w$.
For example, in certain applications domain knowledge may dictate that we should favor non-zero patterns corresponding to:
unions of groups \cite{obozinski2011group} in cancer prognosis from gene expression data; or complements of unions of groups \cite{jacob2009group} in neuroimaging and background subtraction, or rooted connected trees \cite{jenatton2011proximal, zhao2006grouped} in natural image processing.
Incorporating such key prior information beyond just sparsity leads to significant improvements in estimation performance, noise robustness, interpretability and sample complexity \cite{baraniuk2010model}.
Structured sparsity models are naturally encoded by combinatorial functions. However, direct combinatorial treatments often lead to intractable learning problems. Hence, we often use either non-convex greedy methods or continuous convex relaxations, where the combinatorial penalty is replaced by a tractable convex surrogate; \textit{cf.}, \cite{baraniuk2010model, huang2011learning,bach2011learning}.
In this paper, we adopt the convex approach because it benefits from a mature set of efficient numerical algorithms as well as strong analysis tools that rely on convex geometry in order to establish statistical efficiency. Convex formulations are also robust to model mis-specifications. Moreover, there is a rich set of
convex penalties with structure-inducing properties already studied in the literature
\cite{yuan2006model, jacob2009group, jenatton2011structured, jenatton2011proximal, zhao2006grouped, obozinski2011group}.
For an overview, we refer the reader to \cite{bach2011learning} and references therein.
For choosing a convex relaxation, a systematic approach, already adopted in \cite{bach2010structured, chandrasekaran2012convex, obozinski2012convex, halabi2015totally}, considers the \emph{tightest} convex relaxation of combinatorial penalties expressing the desired structure. For instance, \cite{bach2010structured} shows that computing the tightest convex relaxation over the unit $\ell_\infty$-ball is tractable for the class of \emph{monotone submodular functions}.
Similarly, the authors in \cite{halabi2015totally} demonstrate the tractability of such relaxations for combinatorial penalties that can be described via \emph{totally unimodular} constraints.
A different principled approach to convex relaxations is proposed in \cite{obozinski2012convex}, where the authors considered the tightest \emph{homogeneous} convex relaxation of general set functions regularized by an $\ell_p$-norm. The authors show, for instance, that the resulting norm takes the form of a generalized latent group Lasso norm \cite{obozinski2011group}. The homogeneity imposed in \cite{obozinski2012convex} naturally ensures the invariance of the regularizer to rescaling of the data. However, such a requirement may come at the cost of a loss of structure, as observed in an example in \cite{halabi2015totally}.
This observation raises the question:
\begin{center}
\begin{minipage}{.75\linewidth}
When do the \emph{homogeneous} and \emph{non-homogeneous} convex relaxations differ and which structures can be encoded by each?
\end{minipage}
\end{center}
In order to answer this question, we rigorously identify which combinatorial structures are preserved by the non-homogeneous relaxation in a manner similar to \cite{obozinski2012convex} for the homogeneous one. We
further study the statistical properties of both relaxations. In particular, we consider the problem of support recovery in learning problems regularized by these relaxed convex penalties, which has so far been investigated only
in special cases, e.g., for norms associated with submodular functions \cite{bach2010structured} or for the latent group Lasso norm \cite{obozinski2011group}.
To this end, this paper makes the following contributions:
\begin{itemize} \setlength\itemsep{0.2em}
\item We derive formulations of the tightest non-homogeneous convex relaxation of general $\ell_p$-regularized combinatorial penalties (Section \ref{sect:ConvRels}). We show that any \emph{monotone} set function is preserved by this relaxation, while the homogeneous relaxation preserves only a smaller class of set functions (Section \ref{sect:LCE}).
\item We identify necessary conditions for support recovery in learning problems regularized by general convex penalties (Section \ref{sect:NecCond}).
\item We propose an adaptive weight estimation scheme
and provide sufficient conditions for support recovery in the asymptotic regime (Section \ref{sect:SuffCond}). This scheme does not require any irrepresentability condition and is applicable to general monotone convex regularizers.
\item We identify sufficient conditions with respect to combinatorial penalties which ensure that the sufficient support recovery conditions hold with respect to the associated convex relaxations (Section~\ref{Sect:Disct}).
\item We illustrate numerically the effect on support recovery of the choice of the relaxation
as well as the adaptive weights scheme (Section~\ref{sect:Simul}).
\end{itemize}
In the sequel, we defer all proofs to the Appendix.
\paragraph{Notation.}
Let $V = \{1,\dots,d\}$ be the ground set and $2^V = \{A | A \subseteq V\}$ be its power-set.
Given $w \in \mathbb{R}^d$ and a set $J \subseteq V$, $w_J$ denotes the vector in $\mathbb{R}^d$ s.t., $[w_J]_i = w_i, i \in J$ and $[w_J]_i = 0, i \not \in J$. $Q_{JJ}$ is defined similarly for a matrix $Q \in \mathbb{R}^{d \times d}$.
Accordingly, we let $\mathds{1}_J$ be the indicator vector of the set $J$.
We drop the subscript for $J = V$, so that $\mathds{1}_V = \mathds{1}$ denotes the vector of all ones. The notation $J^c$ denotes the set complement of $J$ with respect to $V$.
The operations $|w|, w \geq w'$ and $w \circ v$ are applied element-wise.
For $p > 0$, the $\ell_p$-(quasi) norm is given by $\| w \|_p = (\sum_{i=1}^d |w_i|^{p})^{1/p}$, and $\| w \|_\infty = \max_i |w_i|$.
For $p \in [1,\infty]$, we define the conjugate $q \in [1,\infty]$ via $\frac{1}{p} + \frac{1}{q} = 1$.
We call the set of non-zero elements of a vector $w$ the support, denoted by $\mathrm{supp}(w) = \{i : w_i \not = 0\} $.
We use the notation from submodular analysis, $w(A) = \sum_{i \in A} w_i$.
We write $\overline{\mathbb{R}}_+$ for $\mathbb{R}_+ \cup \{+\infty\}$. For a function $f : \mathbb{R}^d \rightarrow \overline{\mathbb{R}} = \mathbb{R} \cup \{+\infty\}$, we will denote by $f^\ast$ its Fenchel-Legendre conjugate.
We will denote by $\iota_S(w)$ the indicator function of the set $S$, taking value $0$ on the set $S$ and $+\infty$ outside it.
\input{ConvRel}
\input{ConvOpt}
\input{DiscreteStable}
\input{Simulations}
\section{Conclusion}
We presented an analysis of homogeneous and non-homogeneous convex relaxations of $\ell_p$-regularized combinatorial penalties. Our results show that structure encoded by submodular priors can be equally well expressed by both relaxations, while the non-homogeneous relaxation is able to express the structure of more general monotone set functions. We also identified necessary and sufficient stability conditions on the supports to be correctly recovered. We proposed an adaptive weight scheme that is guaranteed to recover supports that satisfy the sufficient stability conditions, in the asymptotic setting, even under correlated design matrix.
\subsubsection*{Acknowledgements}
We thank Ya-Ping Hsieh for helpful discussions.
This work was supported in part by the European Commission under ERC Future Proof, SNF 200021-146750,
SNF CRSII2-147633, NCCR Marvel. Francis Bach acknowledges support from the chaire Economie des nouvelles donn\'ees with the data science joint research initiative with the fonds AXA pour la recherche, and the Initiative de Recherche ``Machine Learning for Large-Scale Insurance'' from the Institut Louis Bachelier.
\bibliographystyle{plain}
\section{Appendix}
\subsection{Variational forms of convex envelopes (Proof of Lemma \ref{lem:NonHomEnv} and Remark \ref{rmk:SubNonHomoEnv})}
In this section, we recall the variational forms of the homogeneous convex envelope derived in \cite{obozinski2012convex} and derive analogous variational forms for the non-homogeneous convex envelope, including the ones stated in Lemma \ref{lem:NonHomEnv}. These variational forms will be needed in some of our proofs below.
\begin{lemma}
The homogeneous convex envelope $\Omega_p$ of $F_p$ admits the following variational forms.
\begin{align}
\Omega_\infty(w) &= \min_{\alpha} \{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w|, \alpha_S \geq 0\}.
\label{eq:ConvCoverA}\\
\Omega_p(w) &= \min_{v} \{ \sum_{S \subseteq V} F(S)^{1/q} \| v^S \|_p : \sum_{S \subseteq V} v^S = |w|, \mathrm{supp}(v^S) \subseteq S\}. \label{eq:LpLGLgen}\\
&= \max_{\kappa \in \mathbb{R}^d_+} \sum_{i = 1}^d \kappa_i^{1/q} |w_i| \text{ s.t. } \kappa(A) \leq F(A), \forall A \subseteq V. \label{eq:SupportForm}\\
&=\inf_{\eta \in \mathbb{R}^d_+} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \Omega_\infty(\eta). \label{eq:VarLpLinfA}
\end{align}
\end{lemma}
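As a quick illustration (a worked special case that we add here for concreteness), consider the cardinality function $F(S) = |S|$. The constraint $\kappa(A) \leq F(A)$ for all $A \subseteq V$ in \eqref{eq:SupportForm} reduces to $\kappa \in [0,1]^d$, so that
\begin{equation*}
\Omega_p(w) = \max_{\kappa \in [0,1]^d} \sum_{i=1}^d \kappa_i^{1/q} |w_i| = \|w\|_1, \quad \forall p \in [1,\infty],
\end{equation*}
i.e., the homogeneous relaxation recovers the $\ell_1$-norm at every scale.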
The non-homogeneous convex envelope of a set function $F$ over the unit $\ell_\infty$-ball was derived in \cite{halabi2015totally}, where it was shown that $\Theta_\infty(w) = \inf_{\eta \in [0,1]^d} \{ f(\eta) : \eta \geq |w|\}$ where $f$ is any proper, l.s.c. convex \emph{extension} of $F$ (c.f., Lemma 1 \cite{halabi2015totally}). A natural choice for $f$ is the \emph{convex closure} of $F$, which corresponds to the \emph{tightest} convex extension of $F$ on $[0,1]^d$. We recall the two equivalent definitions of convex closure, which we have adjusted to allow for infinite values.
\begin{definition}[Convex Closure; c.f., {\cite[Def. 3.1]{dughmi2009submodular}}]\label{def:convClosure}
Given a set function $F : 2^V \to \overline{\mathbb{R}}$, the convex closure $f^{-}: [0,1]^d \to \overline{\mathbb{R}}$ is the point-wise largest convex function from $[0,1]^d$ to $\overline{\mathbb{R}}$ that always lowerbounds $F$.
\end{definition}
\begin{definition}[Equivalent definition of Convex Closure; c.f., {\cite[Def. 1]{Vondrak2010}} and {\cite[Def. 3.2]{dughmi2009submodular}}] \label{def:convClosureExp}
Given any set function $F: 2^V \rightarrow \overline{\mathbb{R}}$, the convex closure of $F$ can equivalently be defined for all $w \in [0,1]^d$ as:
\[f^-(w) = \inf \{ \sum_{S \subseteq V} \alpha_S F(S) : w= \sum_{S \subseteq V} \alpha_S \mathds{1}_S, \sum_{S \subseteq V} \alpha_S =1, \alpha_S \geq 0\}\]
\end{definition}
It is interesting to note that $f^-(w) = f_L(w)$, where $f_L$ is the Lov\'asz extension, iff $F$ is a submodular function \cite{Vondrak2010}.
The following lemma derives variational forms of $\Theta_p$ for any $p \geq 1$ that parallel the ones known for the homogeneous envelope.
\begin{lemma}\label{lem:NonHomVarForms}
The non-homogeneous convex envelope $\Theta_p$ of $F_p$ admits the following variational forms.
\begin{align}
\Theta_\infty(w) &= \inf \{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w|, \sum_{S \subseteq V} \alpha_S =1, \alpha_S \geq 0\}.
\label{eq:ConvCoverNonHomoA}\\
\Theta_p(w)
&= \max_{\kappa \in \mathbb{R}^d} \sum_{j=1}^d \psi_j(\kappa_j,w_j) + \min_{S \subseteq V} F(S) - \kappa(S), ~\forall w \in \mathrm{dom}(\Theta_p). \label{eq:SupportFormNonHomo}\\
&=\inf_{\eta \in [0,1]^d} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} f^-(\eta),
\label{eq:VarLpLinfNonHomoA}
\end{align}
where $\mathrm{dom}(\Theta_p) = \{ w | \exists \eta \in [0,1]^d \text{ s.t } \mathrm{supp}(w) \subseteq \mathrm{supp}(\eta), \eta \in \mathrm{dom}(f^-)\}$, and
where we define
\begin{align*}
\psi_j(\kappa_j,w_j) &:=\begin{cases} \kappa_j^{1/q}|w_j| &\text{ if $|w_j| \leq \kappa_j^{1/p}, \kappa_j \geq 0$}\\
\frac{1}{p} |w_j|^{p}+ \frac{1}{q} \kappa_j &\text{otherwise.}
\end{cases}
\end{align*}
If $F$ is monotone, then $\Theta_\infty(w) = f^-(|w|)$, so we can replace $f^-$ by $\Theta_\infty$ in \eqref{eq:VarLpLinfNonHomoA}, and we can restrict $\kappa$ to $\mathbb{R}_+^d$ in \eqref{eq:SupportFormNonHomo}.
\end{lemma}
To prove the variational form \eqref{eq:ConvCoverNonHomoA} in Lemma \ref{lem:NonHomVarForms}, we need to show first the following property of $f^-$.
\begin{proposition}[c.f., {\cite[Prop. 3.23]{dughmi2009submodular}} ] \label{prop:minClosure}
The minimum values of a proper set function $F$ and its convex closure $f^{-}$ are equal, i.e.,
\[ \min_{w \in [0,1]^d} f^{-}(w) = \min_{S \subseteq V} F(S)\]
If $S$ is a minimizer of $F$, then $\mathds{1}_S$ is a minimizer of $f^{-}$. Moreover, if $w$ is a minimizer of $f^{-}$, then every set in the support of $\alpha$, where $f^{-}(w) = \sum_{S \subseteq V} \alpha_S F(S)$, is a minimizer of $F$.
\end{proposition}
\begin{proof}
Let $w^*$ be a minimizer of $f^-$ and $S^*$ a minimizer of $F$. First note that $\{0,1\}^d \subseteq [0,1]^d$ and $f^-(\mathds{1}_{S^*}) \leq F(S^*)$ imply that $f^{-}(w^*) \leq F(S^*)$. On the other hand, $f^-(w^*) = \sum_{S \subseteq V} \alpha^*_S F(S) \geq \sum_{S \subseteq V} \alpha^*_S F(S^*) = F(S^*)$. The rest of the proposition follows directly.
\end{proof}
Given the choice of the extension $f = f^-$, the variational form \eqref{eq:ConvCoverNonHomoA} of $\Theta_\infty$ given in lemma \ref{lem:NonHomVarForms} follows directly from definition \ref{def:convClosureExp} and proposition \ref{prop:minClosure}, as shown in the following corollary.
\begin{corollary}\label{corr:varformConvEnvelope}
Given any set function $F:2^V \to \overline{\mathbb{R}}_+$ and its corresponding convex closure $f^-$, the convex envelope of $F(\mathrm{supp}(w))$ over the unit $\ell_\infty$-ball is given by
\begin{align*}
\Theta_\infty(w) &= \inf_\alpha \{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w|, \sum_{S \subseteq V} \alpha_S =1, \alpha_S \geq 0\}.
\\
&= \inf_{v} \{ \sum_{S \subseteq V} F(S) \| v^S \|_\infty : \sum_{S \subseteq V} v^S = |w|, \sum_{S \subseteq V} \| v^S \|_\infty =1, \mathrm{supp}(v^S) \subseteq S\}.
\end{align*}
\end{corollary}
\begin{proof}
$f^-$ satisfies the first two assumptions required in Lemma 1 of \cite{halabi2015totally}, namely, $f^{-}$ is a lower semi-continuous convex extension of $F$ which satisfies
\[ \max_{S \subseteq V} m(S) - F(S) = \max_{w \in [ 0,1]^d } m^Tw - f^-(w), \forall m \in \mathbb{R}_+^d\]
To see this, let $w^*$ be a maximizer of the right-hand side, and write $f^-(w^*) = \sum_{S \subseteq V} \alpha^*_S F(S)$ with $w^* = \sum_{S \subseteq V} \alpha^*_S \mathds{1}_S$, as in Definition \ref{def:convClosureExp}. Then
$m^Tw^* - f^-(w^*) = \sum_{S \subseteq V} \alpha^*_S (m^T\mathds{1}_S - F(S) ) \leq \max_{S \subseteq V}\, m(S) - F(S) = m(S^*) - F(S^*)$, where $S^*$ is a maximizer of the left-hand side. The other inequality is trivial, since $f^-(\mathds{1}_S) \leq F(S)$ for all $S \subseteq V$.
The corollary then follows directly from Lemma 1 in \cite{halabi2015totally} and definition \ref{def:convClosureExp}.
\end{proof}
Note that $\mathrm{dom}(\Theta_\infty) = \{w: \exists \eta \in [0,1]^d \cap \mathrm{dom}(f^-), \eta \geq |w| \}$. Note also that $\Theta_\infty$ is monotone even if $F$ is not. On the other hand, if $F$ is monotone, then $f^-$ is monotone on $[0,1]^d$ and $\Theta_\infty(w) = f^-(|w|)$.
Then the proof of remark \ref{rmk:SubNonHomoEnv} follows, since
if $F$ is a monotone submodular function and $f_L$ is its Lov\'asz extension, then $\Theta_\infty(w) = f^-(|w|) = f_L(|w|) = \Omega_\infty(w), \forall w \in [-1,1]^d$, where the last equality was shown in \cite{bach2010structured}.
Next, we derive the convex relaxation of $F_p$ for a general $p \geq 1$.
\begin{proposition}\label{prop:LpregNonHomo}
Given any set function $F:2^V \to \overline{\mathbb{R}}_+$ and its corresponding convex closure $f^-$, the convex envelope of $F_{\mu \lambda}(w) = \mu F(\mathrm{supp}(w)) + \lambda \| w \|_p^p$ is given by
\begin{align*}
\Theta_p(w) =\inf_{\eta \in [0,1]^d} \lambda \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \mu f^-(\eta).
\end{align*}
Note that $\mathrm{dom}(\Theta_p) = \{ w | \exists \eta \in [0,1]^d \text{ s.t } \mathrm{supp}(w) \subseteq \mathrm{supp}(\eta), \eta \in \mathrm{dom}(f^-)\}$.
\end{proposition}
\begin{proof}
Given any proper l.s.c. convex extension $f$ of $F$, we have:
First for the case where $p=1$:
\begin{align*}
F_{\mu \lambda}^\ast(s) &= \sup_{w \in \mathbb{R}^n} w^Ts -\mu F(\mathrm{supp}(w)) - \lambda \| w \|_1\\
&= \sup_{\eta \in \{ 0,1\}^d} \sup_{\scriptsize \colcst{\mathds{1}_{\mathrm{supp}(w)} = \eta \\ \mathrm{sign}(w) = \mathrm{sign}(s)}} |w|^T (|s|- \lambda \mathds{1}) - \mu F(\eta) \\
&= \iota_{\{|s| \leq \lambda \mathds{1}\} }(s) - \inf_{\eta \in \{ 0,1\}^d} \mu F(\eta).
\end{align*}
Hence $F_{\mu \lambda}^{\ast \ast}(w)= \lambda \| w \|_1 + \inf_{\eta \in \{ 0,1\}^d} \mu F(\eta)$.
Next, consider the case $p \in (1,\infty)$.
\begin{align*}
F_{\mu \lambda}^\ast(s) &= \sup_{w \in \mathbb{R}^d} w^T s - \mu F(\mathrm{supp}(w)) - \lambda \| w \|_p^p\\
&= \sup_{\eta \in \{ 0,1\}^d} \sup_{\scriptsize \colcst{\mathds{1}_{\mathrm{supp}(w)} = \eta \\ \mathrm{sign}(w) = \mathrm{sign}(s)}} |w|^T |s|-\lambda \| w \|_p^p - \mu F(\eta) \\
&= \sup_{\eta \in \{ 0,1\}^d} \frac{\lambda (p-1)}{(\lambda p )^q} \eta^T|s|^q- \mu F(\eta) \tag{$|s_i|=\lambda p |x^*_i|^{p-1}, \forall \eta_i \not = 0$}\\
&= \sup_{\eta \in [0,1]^d} \frac{\lambda (p-1)}{(\lambda p )^q} \eta^T|s|^q - \mu f^-(\eta) .
\end{align*}
We denote $\hat{\lambda} = \frac{\lambda (p-1)}{(\lambda p )^q}$.
\begin{align*}
F_{\mu \lambda}^{\ast \ast}(w) &= \sup_{s \in \mathbb{R}^d} w^T s - F_{\mu \lambda}^{\ast}(s) \\
&= \sup_{s\in \mathbb{R}^d } \min_{\eta \in [0,1]^d} s^T w - \hat{\lambda} \eta^T| s|^q + \mu f^-(\eta) \\
&\stackrel{\star}{=} \inf_{\eta \in [0,1]^d} \sup_{\scriptsize \colcst{s \in \mathbb{R}^p \\ \mathrm{sign}(s) = \mathrm{sign}(w)}} |s|^T |w| - \hat{\lambda} \eta^T| s|^q + \mu f^-(\eta) \\
&=\inf_{\eta \in [0,1]^d} \lambda ( |w|^p)^T \eta^{1-p} + \mu f^-(\eta),
\end{align*}
where the last equality holds since $|w_i|=\hat{\lambda} \eta_i q |s^*_i|^{q-1}$ whenever $\eta_i \not = 0$; when $\eta_i = 0$, the supremum over $s_i$ equals $0$ if $w_i = 0$ and $+\infty$ otherwise.
$(\star)$ holds by Sion's minimax theorem \cite[Corollary 3.3]{sion1958general}.
Note then that the minimizer $\eta^\ast$ (if it exists) satisfies $\mathrm{supp}(w) \subseteq \mathrm{supp}(\eta^*)$.
Finally, note that if we take the limit as $p \to \infty$, we recover $\Theta_{\infty}= \inf_{
\eta \in [0,1]^d}\{ f^-(\eta) : \eta \geq |w| \}$.
\end{proof}
The variational form \eqref{eq:VarLpLinfNonHomoA} given in lemma \ref{lem:NonHomVarForms} follows from proposition \ref{prop:LpregNonHomo} for the choice $\mu = \frac{1}{q}, \lambda = \frac{1}{p}$.
The following proposition derives the variational form \eqref{eq:SupportFormNonHomo} for $p = \infty$.
\begin{proposition}\label{prop:SuppFctNonHomo}
Given any set function $F:2^V \to \mathbb{R} \cup \{+\infty\}$, and its corresponding convex closure $f^-$, $\Theta_\infty$ can be written $\forall w \in \mathrm{dom}(\Theta_\infty)$ as
\begin{align*}
\Theta_\infty(w) &= \max_{\kappa \in \mathbb{R}^d_+} \{ \kappa^T |w| + \min_{S \subseteq V} F(S) - \kappa(S) \} \\
&= \max_{\kappa \in \mathbb{R}^d_+} \{ \kappa^T |w| + \min_{S \subseteq \mathrm{supp}(w)} F(S) - \kappa(S) \} \tag{if $F$ is monotone}
\end{align*}
Similarly $\forall w \in \mathrm{dom}( f^-)$ we can write
\begin{align*}
f^-(w) &= \max_{\kappa \in \mathbb{R}^d} \{ \kappa^T |w| + \min_{S \subseteq V} F(S) - \kappa(S) \} \\
&= \Theta_\infty(w)= \max_{\kappa \in \mathbb{R}^d_+} \{ \kappa^T w + \min_{S \subseteq \mathrm{supp}(w)} F(S) - \kappa(S) \} \tag{if $F$ is monotone}
\end{align*}
\end{proposition}
\begin{proof}
$\forall w \in \mathrm{dom}(\Theta_\infty)$, strong duality holds by Slater's condition, hence
\begin{align*}
\Theta_\infty(w) &= \min_{\alpha} \{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w|, \sum_{S \subseteq V} \alpha_S =1, \alpha_S \geq 0\}.\\
&= \min_{\alpha \geq 0} \max_{\rho \in \mathbb{R}, \kappa \in \mathbb{R}^d_+}\{ \sum_{S \subseteq V} \alpha_S F(S) + \kappa^T( |w| - \sum_{S \subseteq V} \alpha_S \mathds{1}_S) +\rho(1- \sum_{S \subseteq V} \alpha_S )\}.\\
&= \max_{\rho \in \mathbb{R}, \kappa \in \mathbb{R}^d_+} \min_{\alpha \geq 0} \{ \kappa^T |w| + \sum_{S \subseteq V} \alpha_S ( F(S) - \kappa^T \mathds{1}_S - \rho ) +\rho\}.\\
&= \max_{\rho \in \mathbb{R}, \kappa \in \mathbb{R}^d_+} \{ \kappa^T |w| + \rho: F(S) \geq \kappa^T \mathds{1}_S + \rho ) \}.\\
&= \max_{\kappa \in \mathbb{R}^d_+} \{ \kappa^T |w| + \min_{S \subseteq V} F(S) - \kappa(S) \}.
\end{align*}
Let $J = \mathrm{supp}(w)$; at the optimum $\kappa^*_{J^c} = 0$, since coordinates outside $J$ do not contribute to $\kappa^T |w|$ and can only decrease the minimum. Then, for monotone $F$, $F(S) - \kappa^*(S) \geq F(S \cap J) - \kappa^*(S \cap J)$ since $\kappa^*(S) = \kappa^*(S \cap J)$, so we can restrict the minimum to $S \subseteq J$.
The same proof holds for $f^-$, with the Lagrange multiplier $\kappa \in \mathbb{R}^d$ not constrained to be positive.
\end{proof}
The following Corollary derives the variational form \eqref{eq:SupportFormNonHomo} for $p \in [1,\infty]$.
\begin{corollary}
Given any set function $F:2^V \to \mathbb{R} \cup \{+\infty\}$, $\Theta_p$ can be written $\forall w \in \mathrm{dom}(\Theta_p)$ as
\begin{align*}
\Theta_p(w) &= \max_{\kappa \in \mathbb{R}^d} \sum_{j=1}^d \psi_j(\kappa_j,w_j) + \min_{S \subseteq V} F(S) - \kappa(S). \\
&= \max_{\kappa \in \mathbb{R}_+^d} \sum_{j=1}^d \psi_j(\kappa_j,w_j) + \min_{S \subseteq V} F(S) - \kappa(S). \tag{if $F$ is monotone}
\end{align*}
where
\begin{align*}
\psi_j(\kappa_j,w_j) &:=\begin{cases} \kappa_j^{1/q}|w_j| &\text{ if $|w_j| \leq \kappa_j^{1/p}, \kappa_j \geq 0$}\\
\frac{1}{p} |w_j|^{p}+ \frac{1}{q} \kappa_j &\text{otherwise}
\end{cases}
\end{align*}
\end{corollary}
\begin{proof}
By Propositions \ref{prop:LpregNonHomo} and \ref{prop:SuppFctNonHomo}, we have $\forall w \in \mathrm{dom}(\Theta_p)$, i.e., $\exists \eta \in [0,1]^d,$ s.t $\mathrm{supp}(w) \subseteq \mathrm{supp}(\eta), \eta \in \mathrm{dom}(f^-)$,
\begin{align*}
\Theta_p(w) &=\inf_{\eta \in [0,1]^d} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} f^-(\eta) \\
&= \inf_{\eta \in [0,1]^d} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \max_{\rho \in \mathbb{R}, \kappa \in \mathbb{R}^d} \{ \kappa^T \eta+ \rho: F(S) \geq \kappa^T \mathds{1}_S + \rho \}.\\
&\stackrel{\star}{=} \max_{\rho \in \mathbb{R}, \kappa \in \mathbb{R}^d} \inf_{\eta \in [0,1]^d} \{\frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \kappa^T \eta+ \rho: F(S) \geq \kappa^T \mathds{1}_S + \rho \}.
\end{align*}
$(\star)$ holds by Sion's minimax theorem \cite[Corollary 3.3]{sion1958general}.
Note also that
\begin{align*}
\inf_{\eta_j \in [0,1]} \frac{1}{p} \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \kappa_j \eta_j &=\begin{cases} \kappa_j^{1/q}|w_j| &\text{ if ${|w_j|} \leq {\kappa_j^{1/p}}, \kappa_j \geq 0$}\\
\frac{1}{p} |w_j|^{p}+ \frac{1}{q} \kappa_j &\text{otherwise}
\end{cases} := \psi_j(\kappa_j,w_j)
\end{align*}
where the minimum is $\eta^*_j = 1$ if $\kappa_j \leq 0$. If $\kappa_j \geq 0$,
the infimum is zero if $w_j = 0$. Otherwise, the minimum is achieved at $\eta^*_j = \min\{\frac{|w_j|}{\kappa_j^{1/p}},1\}$ (if $\kappa_j = 0, \eta^*_j =1$).
Hence,
\begin{align*}
\Theta_p(w) &= \max_{\kappa \in \mathbb{R}^d} \sum_{j=1}^d \psi_j(\kappa_j,w_j) + \min_{S \subseteq V} F(S) - \kappa(S).
\end{align*}
\end{proof}
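As a point of comparison with the homogeneous case (again a worked special case that we add for concreteness), consider $F(S) = |S|$. Its convex closure is linear, $f^-(\eta) = \mathds{1}^T\eta$, and the coordinate-wise minimization over $\eta_j \in [0,1]$ carried out in the proof above, with $\kappa = \mathds{1}$, yields
\begin{equation*}
\Theta_p(w) = \sum_{j=1}^d \psi_j(1, w_j) = \sum_{j=1}^d \begin{cases} |w_j| & \text{if } |w_j| \leq 1,\\ \frac{1}{p} |w_j|^{p} + \frac{1}{q} & \text{otherwise,} \end{cases}
\end{equation*}
which agrees with $\|w\|_1$ on $[-1,1]^d$ but grows like $\ell_p^p$ outside of it, in contrast with the homogeneous relaxation, which equals $\|w\|_1$ everywhere.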
\subsection{Necessary conditions for support recovery (Proof of Theorem \ref{them:NecessaryStableGeneral})}
Before proving Theorem \ref{them:NecessaryStableGeneral}, we need the following technical Lemma.
\begin{lemma}\label{lem:decomposable}
Given $J \subset V$ and a vector $w$ s.t.\ $\mathrm{supp}(w) \subseteq J$, if $\Phi$ is \emph{not} decomposable at $w$ w.r.t $J$, then $\exists i \in J^c$ such that the $i$-th component of all subgradients at $w$ is zero; $0 = [ \partial \Phi ({w}) ]_i$.
\end{lemma}
\begin{proof}
If $\Phi$ is not decomposable at $w$ and $0 \not = [ \partial \Phi ({w}) ]_i, \forall i \in J^c$, then $\forall M_J >0, \exists \Delta \not = 0, \mathrm{supp}(\Delta) \subseteq J^c$ s.t., $\Phi(w + \Delta) < \Phi(w) + M_J \| \Delta\|_\infty$. In particular,
we can choose $M_J = \inf_{i \in J^c,v \in \partial \Phi ({w}_J), v_i \not = 0 } |v_i| >0$, if the inequality holds for some $\Delta \not = 0$, then let $i_{\max}$ denote the index where $|\Delta_{i_{\max}}| = \| \Delta\|_\infty$. Then given any $v \in \partial \Phi ({w})$ s.t., $v_{i_{\max}} \not = 0$, we have
\begin{align*}
\Phi( w+ \| \Delta \|_\infty \mathds{1}_{i_{\max}}) \leq \Phi( w + \Delta ) &< \Phi(w ) + M_J \| \Delta\|_\infty \\
&\leq \Phi(w) + \langle v ,\| \Delta \|_\infty \mathds{1}_{i_{\max}} \mathrm{sign}(v_{i_{\max}}) \rangle \\
&\leq \Phi(w + \| \Delta\|_\infty \mathds{1}_{i_{\max}})
\end{align*}
which leads to a contradiction.
\end{proof}
\primeNecStableProp*
\begin{proof}
We will show in particular that $\Phi$ is decomposable at $\hat{w}$ w.r.t $\mathrm{supp}(\hat{w})$.
Since $L$ is strongly convex, for each $z$ the corresponding minimizer $\hat{w}$ is unique, so the function $h(z) := \argmin_{w \in \mathbb{R}^d} L(w) - z^Tw + \lambda \Phi(w)$ is well defined.
We want to show that
\begin{align*}
&P(\forall z, \text{ $\Phi$ is decomposable at $h(z)$ w.r.t $\mathrm{supp}(h(z))$ }) \\
&= 1 - P(\exists z, \text{s.t, $\Phi$ is not decomposable at $h(z)$ w.r.t $\mathrm{supp}(h(z))$ } )\\
&\geq 1 - P(\exists z, \text{ s.t., } \exists i \in \left( \mathrm{supp}(h(z)) \right)^c, [\partial \Phi(h(z))]_i = 0 ) &\text{by lemma \ref{lem:decomposable} }\\
&= 1.
\end{align*}
Given fixed $i \in V$, we show that the set $S_i := \{ z : i \in \left( \mathrm{supp}(h(z)) \right)^c, [\partial \Phi(h(z))]_i = 0 \}$ has measure zero. Then, taking the union of the finitely many sets $S_i, \forall i \in V$, all of zero measure, we have $P(\exists z, \text{ s.t., } \exists i \in \left( \mathrm{supp}(h(z)) \right)^c, [\partial \Phi(h(z))]_i = 0 ) = 0$ .
To show that the set $S_i$ has measure zero, let $z_1, z_2 \in S_i$ and denote by $\mu>0$ the strong convexity constant of $L$. We have by convexity of $\Phi$:
\begin{align*}
\Big( \big( z_1 - \nabla L(h(z_1)) \big) - \big( z_2 - \nabla L(h(z_2)) \big) \Big)^\top \Big( h(z_1)- h(z_2)\Big) &\geq 0\\
(z_1 - z_2)^\top(h(z_1)- h(z_2)) &\geq \big( \nabla L(h(z_1)) - \nabla L(h(z_2)) \big)^\top \big( h(z_1)- h(z_2) \big) \\
(z_1 - z_2)^\top(h(z_1)- h(z_2)) &\geq \mu \| h(z_1)- h(z_2)\|_2^2 \\
\frac{1}{\mu} \|z_1 - z_2\|_2 &\geq \| h(z_1)- h(z_2)\|_2
\end{align*}
Thus $h$ is a deterministic Lipschitz-continuous function of $z$.
Let $J = \mathrm{supp}(h(z))$, then
by the optimality conditions, $z - \nabla L(h(z)) \in \lambda \partial \Phi (h(z))$, so $z_i - [\nabla L(h(z))]_i = 0$ since $[\partial \Phi(h(z))]_i = 0$. Since $h(z) = h(z_J)$,
$z_i = [\nabla L(h(z_J))]_i$ is thus a Lipschitz-continuous function of $z_J$, which can only happen with zero measure.
\end{proof}
\subsection{Sufficient conditions for support recovery (Proof of Lemma \ref{lem:Majorizer} and Theorem \ref{Thm:Consistency})}
\primeMajoLem*
\begin{proof}
The function $w \rightarrow w^\alpha$ is concave on $\mathbb{R}_+ \setminus \{0\}$, hence
\begin{align*}
|w_j|^\alpha &\leq |w^0_j|^\alpha + \alpha |w^0_j|^{\alpha-1} (|w_j| - |w^0_j|) \\
|w_j|^\alpha &\leq (1 - \alpha) |w^0_j|^\alpha + \alpha |w^0_j|^{\alpha-1} |w_j| \\
\Phi(|w|^\alpha) &\leq \Phi((1 - \alpha) |w^0|^\alpha + \alpha |w^0|^{\alpha-1} \circ |w|) \tag{by monotonicity}\\
\Phi(|w|^\alpha) &\leq (1-\alpha) \Phi( |w^0|^\alpha) + \alpha \Phi(|w^0|^{\alpha-1} \circ |w| ) \tag{by convexity}
\end{align*}
If $w^0_j = 0$ for some $j$, the upper bound is infinite and hence the inequality still holds.
\end{proof}
\primeConsistThem*
\begin{proof}
We will follow the proof in \cite{zou2006adaptive}.
We write $\hat{w} = w^* + \frac{\hat{u}}{\sqrt{n}}$ and $\Psi_n(u) = \frac{1}{2} \| y - X(w^* + \frac{{u}}{\sqrt{n}})\|_2^2 + \lambda_n \Phi(c \circ |w^* + \frac{{u}}{\sqrt{n}}|)$, where $c = |{w^0}|^{\alpha-1}$. Then $\hat{u} = \argmin_{u \in \mathbb{R}^d} \Psi_n(u)$.
Let $V_n(u) = \Psi_n(u) - \Psi_n(0)$, then
$$ V_n(u) = \frac{1}{2} u^T Q u - \epsilon^T \frac{X u}{\sqrt{n}} + {\lambda_n}\big( \Phi(c \circ |w^* + \frac{{u}}{\sqrt{n}}|) - \Phi(c \circ |w^*|)\big)$$
Since $w^0$ is a $\sqrt{n}$-consistent estimator of $w^*$, we have $\sqrt{n} w^0_{J^c} = O_p(1)$ and $n^{\frac{1-\alpha}{2}} c^{-1}_{J^c} = O_p(1)$. Since $\frac{\lambda_n}{{n}^{\alpha/2}} \to \infty$, by stability of $J$, we have
\begin{align}\label{eq:limAtzero}
{\lambda_n} \big( \Phi(c \circ |w^* + \frac{{u}}{\sqrt{n}}|) - \Phi(c \circ |w^*|) \big)
&= {\lambda_n} \big( \Phi(c_{J} \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}| + c_{J^c} \circ \frac{{|u_{J^c} |}}{\sqrt{n}}) - \Phi(c_J \circ |w_J^*|) \big) \nonumber \\
&\geq {\lambda_n} \big( \Phi(c_{J} \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}| ) + M_J \|c_{J^c} \circ \frac{{|u_{J^c} |}}{\sqrt{n}}\|_\infty - \Phi(c_J \circ |w_J^*|) \big) \nonumber\\
&= {\lambda_n} \big( \Phi(c_{J} \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}| ) - \Phi(c_J \circ |w_J^*|) \big) + M_J \| {\lambda_n} n^{-\alpha/2} n^{\frac{\alpha-1}{2}} c_{J^c} \circ {|u_{J^c} |}\|_\infty \nonumber \\
&\xrightarrow{p} \infty \quad \text{if $u_{J^c}\not =0$}
\end{align}
If instead $u_{J^c} = 0$, we argue that
\begin{equation}\label{eq:limAtNonZero}
{\lambda_n} \big( \Phi(c \circ |w^* + \frac{{u}}{\sqrt{n}}|) - \Phi(c \circ |w^*|) \big) =
\lambda_{n} ( \Phi(c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}|) - \Phi(c_J \circ |w_J^*|)) \xrightarrow{p} 0.
\end{equation}
To see this, note first that since $w^0$ is a $\sqrt{n}$-consistent estimator of $w^*$, we have $c_{J} = |w_{J}^0|^{\alpha-1} \xrightarrow{p} |w_{J}^*|^{\alpha-1} $, $c_{J} \circ |w_J^*| \xrightarrow{p} |w_{J}^*|^{\alpha}$ and $c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}| \xrightarrow{p} |w_{J}^*|^{\alpha}$.
Then by the assumption $|w^*|^\alpha \in \mathrm{int} ~\mathrm{dom}~\Phi$, we have that both $c_{J} \circ |w_J^*|, c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}| \in \mathrm{int} ~\mathrm{dom}~\Phi$ with probability going to one.
By convexity, we then have:
\begin{align*}
\lambda_{n} ( \Phi(c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}|) - \Phi(c_J \circ |w_J^*|)) &\geq \langle \nabla \Phi(c_J \circ |w_J^*|) , \lambda_{n} \frac{{u_J}}{\sqrt{n}} \rangle \\
\lambda_{n} ( \Phi(c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}}|) - \Phi(c_J \circ |w_J^*|)) &\leq \langle \nabla \Phi( c_J \circ |w_J^* + \frac{{u_J}}{\sqrt{n}} |) , \lambda_{n} \frac{{u_J}}{\sqrt{n}} \rangle
\end{align*}
where $\nabla \Phi(w)$ denotes a subgradient of $\Phi$ at $w$.
For all $w \in \mathrm{int}~\mathrm{dom}~\Phi$ where $\Phi$ is convex, monotone and normalized, we have that $\| z \|_\infty < \infty, \forall z \in \partial \Phi(w)$.
To see this, note that since $w \in \mathrm{int} ~\mathrm{dom} ~\Phi$, $\exists \delta>0$ s.t., $\forall w' \in B_{\delta}(w), \Phi(w') < +\infty$. Let $w' = w + \mathrm{sign}(z_{i_{\max}}) \delta \mathds{1}_{i_{\max}}$, where $i_{\max}$ denotes the index where $|z_{i_{\max}}| = \| z \|_\infty$; then by convexity we have
\begin{align*}
\Phi(w') &\geq \Phi(w) + \langle z, w' - w\rangle, &\forall z \in \partial \Phi(w) \\
+ \infty > \Phi(w') &\geq \| z \|_\infty \delta, &\forall z \in \partial \Phi(w), &\quad \text{(since $\Phi(w) \geq 0$)}
\end{align*}
Since $\frac{\lambda_n}{\sqrt{n}} \to 0$, we can then conclude by Slutsky's theorem that \eqref{eq:limAtNonZero} holds.
Hence by \eqref{eq:limAtzero} and \eqref{eq:limAtNonZero},
\begin{align}\label{eq:limPhi}
{\lambda_n}\big( \Phi(c \circ |w^* + \frac{{u}}{\sqrt{n}}|) - \Phi(c \circ |w^*|)\big) &\xrightarrow{p} \begin{cases}
0 & \text{if $u_{J^c}=0$}\\
\infty &\text{Otherwise}
\end{cases}.
\end{align}
By the CLT, $\frac{X^T\epsilon}{\sqrt{n}} \xrightarrow{d} W \sim \mathcal{N}(0,\sigma^2 Q)$; it then follows that $ V_n(u) \xrightarrow{d} V(u)$, where
\begin{align*}
V(u) &= \begin{cases}
\frac{1}{2} u_{J}^T Q_{J J} u_{J} - W^T_{J} u_{J} & \text{if $u_{{J}^c}=0$}\\
\infty &\text{Otherwise}
\end{cases}.
\end{align*}
$V_n$ is convex and the unique minimum of $V$ is $u_{J} = Q^{-1}_{J J} W_{J}, u_{{J}^c} = 0$, hence by epi-convergence results [c.f., \cite{zou2006adaptive}]
\begin{align}\label{eq:AsympNormal}
\hat{u}_{J} \xrightarrow{d} Q^{-1}_{J J} W_{J} \sim \mathcal{N}(0,\sigma^2 Q^{-1}_{J J} ), \quad \hat{u}_{{J}^c} \xrightarrow{d} 0.
\end{align}
Since $\hat{u} = \sqrt{n}(\hat{w} - w^*)$, then it follows from \eqref{eq:AsympNormal} that
\begin{align}\label{eq:convergProb}
\hat{w}_{J} \xrightarrow{p} w^*_{J}, &\quad \hat{w}_{{J}^c} \xrightarrow{p} 0
\end{align}
Hence, $P(\mathrm{supp}(\hat{w}) \supseteq J) \to 1$ and it is sufficient to show that $P(\mathrm{supp}(\hat{w}) \subseteq J) \to 1$ to complete the proof.\\
For that, denote $\hat{J} = \mathrm{supp}(\hat{w})$ and let us consider the event $\hat{J} \setminus J \not = \emptyset$.
By optimality conditions, we know that
\begin{align*}
- X_{\hat{J} \setminus J}^T(X\hat{w} - y ) \in \lambda_n [\partial \Phi(c \circ \cdot)(\hat{w})]_{\hat{J} \setminus J}
\end{align*}
Note, that $- \frac{X_{\hat{J} \setminus J}^T(X\hat{w} - y )}{\sqrt{n}} = \frac{X_{\hat{J} \setminus J}^TX(\hat{w} - w^* )}{\sqrt{n}} - \frac{X_{\hat{J} \setminus J}^T\epsilon }{\sqrt{n}} $.
By the CLT, $\frac{X_{\hat{J} \setminus J}^T\epsilon }{\sqrt{n}} \xrightarrow{d} W \sim \mathcal{N}(0,\sigma^2 Q_{{\hat{J} \setminus J} ,{\hat{J} \setminus J}})$ and, by \eqref{eq:convergProb}, $\hat{w} - w^* \xrightarrow{p} 0$; hence $- \frac{X_{\hat{J} \setminus J}^T(X\hat{w} - y )}{\sqrt{n}} = O_p(1)$.\\
On the other hand, $\frac{\lambda_n c_{\hat{J} \setminus J} }{\sqrt{n}} = \lambda_n n^{\frac{1-\alpha}{2}} n^{\frac{\alpha-1}{2}} c_{\hat{J} \setminus J} \to \infty$, hence $\frac{\lambda_n c_{\hat{J} \setminus J} }{\sqrt{n}} c^{-1}_{\hat{J} \setminus J}v_{\hat{J} \setminus J}\to \infty$, $\forall v \in \partial \Phi(c \circ \cdot)(\hat{w})$, since $c^{-1}_{\hat{J} \setminus J} v_{\hat{J} \setminus J} = O_p(1)^{-1}$.
To see this, let $w'_J = \hat{w}_J$ and $0$ elsewhere. Note that, by the definition of the subdifferential and the stability assumption on $J$, there must exist $M_J>0$ s.t.
\begin{align*}
\Phi(c \circ w') &\geq \Phi(c \circ \hat{w} ) + \langle v_{\hat{J} \setminus J} , -\hat{w}_{\hat{J} \setminus J} \rangle\\
&\geq \Phi(c \circ w') + M_J \|c_{\hat{J} \setminus J} \circ \hat{w}_{\hat{J} \setminus J} \|_\infty - \|c^{-1}_{\hat{J} \setminus J} \circ v_{\hat{J} \setminus J} \|_1 \|c_{\hat{J} \setminus J} \circ\hat{w}_{\hat{J} \setminus J}\|_\infty\\
\|c^{-1}_{\hat{J} \setminus J} \circ v_{\hat{J} \setminus J} \|_1 &\geq M_J
\end{align*}
We deduce then that $P(\mathrm{supp}(\hat{w}) \subseteq J) = 1 - P(\hat{J} \setminus J \not = \emptyset) \geq 1 - P(\text{the optimality condition holds}) \to 1$.
\end{proof}
\subsection{Discrete stability (Proof of Proposition \ref{prop:EqDefStableProp} and relation to weak submodularity)}
\primeEqDefStableProp*
\begin{proof}
If $F$ is $\rho$-submodular and $J$ is weakly stable, then $\forall A \subseteq J, \forall i \in J^c, 0 <\rho [ F(J \cup \{i\}) - F(J) ] \leq F(J \cup \{i\}) - F(J) $, i.e., $J$ is strongly stable w.r.t. $F$.
Conversely, suppose $F$ is such that any weakly stable set is also strongly stable, and assume $F$ is not $\rho$-submodular: then for all $\rho \in (0,1]$ there must exist a set $B \subseteq V$ and $A \subseteq B, i \in B^c$, s.t., $ \rho [ F(B \cup \{i\}) - F(B) ] > F(A \cup \{i\}) - F(A) \geq 0$. Hence, $F(B \cup \{i\}) - F(B)>0$, i.e., $B$ is weakly stable and thus also strongly stable, so we must have $F(A \cup \{i\}) - F(A)>0$. Choosing, in particular, $\rho = \min_{B \subseteq V} \min_{A \subseteq B, i \in B^c} \frac{F(A \cup \{i\}) - F(A)}{F(B \cup \{i\}) - F(B)} \in (0,1]$ leads to a contradiction: $\min_{A \subseteq B, i \in B^c} \{F(A \cup \{i\}) - F(A)\} \geq \rho [ F(B \cup \{i\}) - F(B) ] > F(A \cup \{i\}) - F(A) $.
\end{proof}
We show that $\rho$-submodularity is a stronger condition than weak submodularity. First, we recall the definition of weak submodular functions.
\begin{definition}[Weak Submodularity (c.f., \cite{das2011submodular, elenberg2016restricted})]
A function $F$ is weakly submodular if, $\forall S, L$ with $S \cap L = \emptyset$ and $F(L \cup S) - F(L)>0$,
$$ \gamma_{S,L} = \frac{\sum_{i \in S} [F(L \cup \{i\}) - F(L)]}{ F(L \cup S ) - F(L)} >0$$
\end{definition}
\begin{proposition}
If $F$ is $\rho$-submodular then $F$ is weakly submodular. But the converse is not true.
\end{proposition}
\begin{proof}
If $F$ is $\rho$-submodular then $\forall S, L, S \cap L = \emptyset, F(L \cup S) - F(L)>0$, let $S = \{i_1, i_2, \cdots, i_r\}$
\begin{align*}
F(L \cup S) - F(L) &= \sum_{k =1}^r F(L \cup \{i_1, \cdots, i_k\}) - F(L \cup \{i_1, \cdots, i_{k-1}\})\\
&\leq \sum_{k =1}^r \frac{1}{\rho} (F(L \cup \{ i_k\}) - F(L ) ) \\
\Rightarrow \gamma_{S,L} &\geq \rho >0.
\end{align*}
We show that the converse is not true by giving a counter-example: Consider the function defined on $V=\{1,2,3\}$, where $F(\{i\}) = 1, \forall i, F(\{1,2\})=1, F(\{2,3\})=2, F(\{1,3\})=2, F(\{1,2,3\})=3$. Then note that this function is weakly submodular. We only need to consider sets $|S|\geq 2$, since otherwise $\gamma_{S,L}>0$ holds trivially. Accordingly, we also only need to consider $L$ which is the empty set or a singleton. In both cases $\gamma_{S,L}>0$. However, this $F$ is not $\rho$-submodular, since $F(\{1,2\}) - F(\{1\}) = 0 < \rho (F(\{1,2,3\}) - F(\{1,3\})) = \rho$ for any $\rho>0$.
\end{proof}
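The counterexample above is small enough to check exhaustively. The following Python snippet (a brute-force sanity check we add for illustration; it is not part of the formal argument) verifies weak submodularity over all disjoint pairs and confirms that no $\rho \in (0,1]$ can satisfy the $\rho$-submodularity inequality:
\begin{verbatim}
from itertools import chain, combinations

V = (1, 2, 3)
F = {frozenset(): 0,
     frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 1,
     frozenset({1, 2}): 1, frozenset({2, 3}): 2,
     frozenset({1, 3}): 2, frozenset({1, 2, 3}): 3}

def subsets(s):
    s = tuple(s)
    return map(frozenset,
               chain.from_iterable(combinations(s, r)
                                   for r in range(len(s) + 1)))

# gamma_{S,L} > 0 for all disjoint S, L with F(L u S) - F(L) > 0
weakly_submodular = all(
    sum(F[L | {i}] - F[L] for i in S) > 0
    for L in subsets(V) for S in subsets(set(V) - set(L))
    if S and F[L | S] - F[L] > 0)

# a violating nested pair A <= B, i not in B:
# marginal 0 at A but positive at B
rho_submodular_fails = any(
    F[A | {i}] - F[A] == 0 < F[B | {i}] - F[B]
    for B in subsets(V) for A in subsets(B)
    for i in set(V) - set(B))

assert weakly_submodular and rho_submodular_fails
\end{verbatim}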
\subsection{Relation between discrete and continuous stability (Proof of Propositions \ref{prop:DS-CS} and \ref{prop:CS-DSpinf}, and Corollary \ref{corr:SubStEq})}
First, we present a useful simple lemma, which provides an equivalent definition of decomposability for monotone function.
\begin{lemma}\label{lem:eqDecompDef}
Given $w \in \mathbb{R}^d, J \subseteq V, \mathrm{supp}(w) = J$, if $\Phi$ is a monotone function, then $\Phi$ is decomposable at $w$ w.r.t $J$ iff $\exists M_J>0, \forall \delta>0, i \in J^c,$ s.t.,
$$\Phi(w + \delta \mathds{1}_i) \geq \Phi(w) + M_J \delta.$$
\end{lemma}
\begin{proof}
By definition \ref{def:ContStability}, $ \exists M_J>0, \forall \Delta \in \mathbb{R}^d, \mathrm{supp}(\Delta) \subseteq J^c,$
$$\Phi(w + \Delta) \geq \Phi(w) + M_J \| \Delta\|_\infty.$$
in particular this must hold for $\Delta = \delta \mathds{1}_i$. On the other hand, if the inequality holds for all $\delta \mathds{1}_i$, then given any $\Delta$ s.t. $\mathrm{supp}(\Delta) \subseteq J^c$, let $i_{\max}$ be the index where $|\Delta_{i_{\max} }| = \| \Delta \|_\infty$ and let $\delta = \| \Delta \|_\infty$; then
\begin{align*}
\Phi(w + \Delta) \geq \Phi(w + \delta \mathds{1}_{i_{\max} }) \geq \Phi(w) + M_J \delta = \Phi(w) + M_J \| \Delta\|_\infty.
\end{align*}
\end{proof}
\primeDSCSProp*
\begin{proof}
We make use of the variational form \eqref{eq:SupportForm}.
Given a set $J$ stable w.r.t $F$ and $\mathrm{supp}(w) \subseteq J$, let $\kappa^* \in \argmax_{\kappa \in \mathbb{R}^d_+} \{ \sum_{i \in J} \kappa_i^{1/q} |w_i| : \kappa(A) \leq F(A), \forall A \subseteq V\}$, then $\Omega(w) = |w_J|^T(\kappa_J^*)^{1/q}$.
Note that $\forall A \subseteq J, F(A \cup i) > F(A)$, by definition \ref{def:DisStable}.
Hence,
$\forall i \in J^c$, we can define $\kappa' \in \mathbb{R}^d_+$ s.t., $\kappa'_J = \kappa_J^*$, $\kappa'_{(J \cup i)^c} = 0$ and $\kappa'_i = \min_{A \subseteq J} F(A \cup i) - F(A)>0$.
Note that $\kappa'$ is feasible, since $\forall A \subseteq J, \kappa'(A) = \kappa^*(A) \leq F(A)$ and $\kappa'(A + i) = \kappa^*(A) +\kappa'_i \leq F(A) + F(A \cup i) - F(A) = F(A \cup i)$. For any other set $\kappa'(A) = \kappa'( A \cap (J+i)) \leq F(A \cap (J+i)) \leq F(A)$, by monotonicity.
It follows then that $\Omega(w + \delta \mathds{1}_i ) = \max_{\kappa \in \mathbb{R}^d_+} \{ \sum_{j=1}^d \kappa_j^{1/q} |(w + \delta \mathds{1}_i)_j| : \kappa(A) \leq F(A), \forall A \subseteq V\} \geq |w_J|^T(\kappa_J^*)^{1/q} + \delta (\kappa'_i)^{1/q} \geq \Omega(w) + \delta M$, with $M = (\kappa'_i)^{1/q} >0$. The proposition then follows by Lemma \ref{lem:eqDecompDef}.\\
The proof for $\Theta_p$ follows in a similar fashion.
We make use of the variational form \eqref{eq:SupportFormNonHomo}.
Given a set $J$ stable w.r.t $F$ and $\mathrm{supp}(w) \subseteq J$, first note that this implicitly implies that $F(J)< +\infty$ and hence $\Theta_p(w) < +\infty$.
Let $\kappa^* \in \argmax_{\kappa \in \mathbb{R}_+^d} \sum_{j=1}^d \psi_j(\kappa_j,w_j) + \min_{S \subseteq V} F(S) - \kappa(S)$ and $S^* \in \argmin_{S \subseteq J} F(S) - \kappa^*(S) $.
Note that $\forall S \subseteq J, \forall i \in J^c, F(S \cup i) > F(S)$, by definition \ref{def:DisStable}. Hence,
$\forall i \in J^c$, we can define $\kappa' \in \mathbb{R}^d_+$ s.t., $\kappa'_J = \kappa_J^*$, $\kappa'_{(J \cup i)^c} = 0$ and $\kappa'_i = \min_{S \subseteq J} F(S \cup i) - F(S)>0$.
Note that $\forall S \subseteq J, F(S) - \kappa'(S) = F(S) -\kappa^*(S) \geq F(S^*) -\kappa^*(S^*)$ and $F(S + i) - \kappa'(S+i) = F(S + i) - \kappa^*(S) -\kappa'_i \geq F(S +i) - \kappa^*(S) - F(S + i) + F(S) \geq F(S^*) -\kappa^*(S^*)$. Note also that $\psi_i(\kappa'_i,\delta) = (\kappa'_i)^{1/q} \delta$ if $\delta \leq (\kappa'_i)^{1/p}$, and $\psi_i(\kappa'_i,\delta) = \frac{1}{p} \delta^p + \frac{1}{q} \kappa'_i = \delta ( \frac{1}{p} \delta^{p-1} + \frac{1}{q} \kappa'_i \delta^{-1}) \geq \delta (\kappa'_i)^{1/q} $ otherwise.
It follows then that $\Theta_p(w + \delta \mathds{1}_i ) \geq \sum_{j \in J} \psi_j(\kappa_j,w_j) + (\kappa_i')^{1/q} \delta+ \min_{S \subseteq J \cup i} F(S) - \kappa'(S) \geq \sum_{j \in J} \psi_j(\kappa_j,w_j) + (\kappa_i')^{1/q} \delta+ \min_{S \subseteq J} F(S) - \kappa^*(S) = \Theta_p(w) + \delta M$ with $M = (\kappa_i')^{1/q}>0$. The proposition then follows by lemma \ref{lem:eqDecompDef}.
\end{proof}
\primeCSDSpinfProp*
\begin{proof}
$
F(A + i ) = \Omega_\infty(\mathds{1}_A + \mathds{1}_i) = \Theta_\infty(\mathds{1}_A + \mathds{1}_i) > \Omega_\infty(\mathds{1}_A) = \Theta_\infty(\mathds{1}_A ) = F(A), \forall A \subseteq J.$
\end{proof}
\primeSubStEqCorr*
\begin{proof}
If $F$ is a monotone submodular function, then $\Omega_\infty(w) = \Theta_\infty(w) = f_L(|w|)$. If $J$ is not weakly stable w.r.t $F$, then $\exists i \in J^c$ s.t., $F(J \cup \{i\}) = F(J)$. Thus, given any $w, \mathrm{supp}(w) = J$, choosing $0 < \delta < \min_{i \in J} |w_i| $ results in $f_L(|w| + \delta \mathds{1}_i) = f_L(|w| )$, which contradicts the weak stability of $J$ w.r.t $\Omega_\infty = \Theta_\infty$.
\end{proof}
\section{Sparsity-inducing properties of convex relaxations}\label{sect:ContCond}
The notion of LCE captures the combinatorial structure preserved by convex relaxations in a geometric sense.
In this section, we characterize the preserved structure from a statistical perspective.
To this end, we consider the linear regression model $y = X w^* + \epsilon$, where $X \in \mathbb{R}^{n \times d}$ is a fixed design matrix, $y \in \mathbb{R}^n$ is the response vector, and $\epsilon$ is a vector of iid random variables with mean $0$ and variance $\sigma^2$.
Given $\lambda_n > 0$, we define $\hat{w}$ as a minimizer of the regularized least-squares:
\begin{equation}\label{eq:GenEstimator}
\min_{w \in \mathbb{R}^d} \frac{1}{2} \| y - Xw\|_2^2 + \lambda_n \Phi(w),
\end{equation}
We are interested in the sparsity-inducing properties of $\Omega_p$ and $\Theta_p$ on the solutions of \eqref{eq:GenEstimator}.
In this section, though, we consider the more general setting
where $\Phi$ is any proper normalized ($\Phi(0) = 0$) convex function which is absolute, i.e., $\Phi(w) = \Phi(|w|)$ and
monotonic in the absolute values of $w$, that is $|w| \geq |w'| \Rightarrow \Phi(w) \geq \Phi(w')$.
In what follows, monotone functions refer to this notion of monotonicity.
We determine in Section \ref{sect:NecCond} necessary conditions for support recovery in \eqref{eq:GenEstimator} and in Section \ref{sect:SuffCond} we provide sufficient conditions for support recovery and consistency of a variant of \eqref{eq:GenEstimator}.
As both $\Omega_p$ and $\Theta_p$ are normalized absolute monotone convex functions, the results presented in this section apply directly to them as a corollary.
For simplicity, we assume $Q = X^TX/n \succ 0$, so that $\hat{w}$ is unique.
This rules out the high-dimensional setting.
We expect, though, that the insights developed towards the presented results will contribute to understanding the high-dimensional learning setting, which we defer to later work.
\subsection{Continuous stable supports}\label{sect:NecCond}
Existing results on the consistency of special cases of the estimator \eqref{eq:GenEstimator} typically rely heavily on decomposition properties of $\Phi$ \cite{negahban2011unified, bach2010structured, obozinski2011group, obozinski2012convex}. The notions of decomposability assumed in these prior works are either too strong or too specific to be applicable to the general convex penalties $\Omega_p$ and $\Theta_p$ we are considering. Instead, we introduce a general weak notion of decomposability applicable to any absolute monotone convex regularizer. \\
\begin{definition}[Decomposability]
Given $J \subseteq V$ and $w \in \mathbb{R}^d$, $\mathrm{supp}(w) \subseteq J$, we say that $\Phi$ is \emph{decomposable} at $w$ w.r.t $J$ if
$\exists M_J>0$ such that $\forall \Delta \in \mathbb{R}^d, \mathrm{supp}(\Delta) \subseteq J^c,$
$$\Phi(w + \Delta) \geq \Phi(w) + M_J \| \Delta\|_\infty.$$
\end{definition}
For example, for the $\ell_1$-norm, this decomposability property holds for any $J \subseteq V$ and $w \in \mathbb{R}^d$, with $M_J = 1$.
It is reasonable to expect this property to hold at the solution $\hat{w}$ of \eqref{eq:GenEstimator} and its support $\hat{J} = \mathrm{supp}(\hat{w})$. Theorem \ref{them:NecessaryStableGeneral} shows that this is indeed the case.
In Section \ref{sect:SuffCond}, we devise an estimation scheme able to recover supports $J$ that satisfy this property at \emph{any} $w \in \mathbb{R}^d$. This then leads to the following notion of \emph{continuous} stable supports, which characterizes supports with respect to the continuous penalty $\Phi$. In Section \ref{Sect:Disct}, we relate this to the notion of \emph{discrete} stable supports, which characterizes supports with respect to the combinatorial penalty $F$. \\
\begin{definition}[Continuous stability] \label{def:ContStability}
We say that $J \subseteq V$ is \emph{weakly stable} w.r.t $\Phi$ if \emph{there exists} $w \in \mathbb{R}^d$, $\mathrm{supp}(w) = J$, such that $\Phi$ is \emph{decomposable} at $w$ w.r.t $J$.
Furthermore, we say that $J \subseteq V$ is \emph{strongly stable} w.r.t $\Phi$ if \emph{for all} $w \in \mathbb{R}^d$ s.t. $\mathrm{supp}(w) \subseteq J$, $\Phi$ is \emph{decomposable} at $w$ w.r.t $J$.
\end{definition}
Theorem \ref{them:NecessaryStableGeneral} considers slightly more general estimators than \eqref{eq:GenEstimator} and shows that weak stability is a necessary condition for a non-zero pattern to be allowed as a solution. \\
\begin{restatable}{theorem}{primeNecStableProp} \label{them:NecessaryStableGeneral}
The minimizer $\hat{w}$ of $\min_{w \in \mathbb{R}^d} L(w) - {z}^\top {w} + \lambda \Phi(w)$, where $L$ is a strongly-convex and smooth loss function and $z \in \mathbb{R}^d$ has a continuous density w.r.t the Lebesgue measure, has a weakly stable support w.r.t.~$\Phi$, with probability one.
\end{restatable}
This new result extends and simplifies the result in \cite{bach2010structured}, which considers the special case of quadratic loss functions and $\Phi$ being the $\ell_\infty$-convex relaxation of a submodular function. The proof we present, in the Appendix, is also shorter and simpler.
\begin{corollary}\label{corr:NecessaryStable}
Assume $y \in \mathbb{R}^n$ has a continuous density w.r.t the Lebesgue measure;
then the support of the minimizer $\hat{w}$ of Eq. \eqref{eq:GenEstimator}
is weakly stable w.r.t $\Phi$ with probability one.
\end{corollary}
\subsection{Adaptive estimation} \label{sect:SuffCond}
Restricting the choice of regularizers in \eqref{eq:GenEstimator} to convex relaxations as surrogates to combinatorial penalties is motivated by computational tractability concerns.
However, other non-convex regularizers such as
$\ell_\alpha$-quasi-norms \cite{knight2000asymptotics, frank1993statistical} or more generally penalties of the form $\Phi(w) = \sum_{i=1}^d \phi(|w_i|)$, where $\phi$ is a monotone concave penalty \cite{fan2001variable, daubechies2010iteratively, gasso2009recovering} can be more advantageous than the convex $\ell_1$-norm.
Such penalties are closer to the $\ell_0$-quasi-norm and penalize small coefficients more aggressively; thus they have a stronger sparsity-inducing effect than the $\ell_1$-norm.
The authors in \cite{jenatton2010structured} extended such concave penalties to the $\ell_\alpha /\ell_2$- quasi-norm $\Phi(w) = \sum_{i=1}^M \| w_{G_i}\|_\alpha$ for some $\alpha \in (0,1)$, which enforces sparsity at the group level more aggressively.
We generalize this to $\Phi(|w|^\alpha)$ where $\Phi$
is any structured sparsity-inducing monotone convex regularizer.
These non-convex penalties lead to intractable estimation problems, but approximate solutions can be obtained by majorization-minimization algorithms, as suggested for e.g., in \cite{figueiredo2007majorization, zou2008one, candes2008enhancing}.
\begin{restatable}{lemma}{primeMajoLem} \label{lem:Majorizer}
Let $\Phi$ be a monotone convex function,
$\Phi(|w|^\alpha)$ admits the following majorizer, $\forall w^0 \in \mathbb{R}^d$, $\Phi(|w|^\alpha) \leq (1-\alpha) \Phi( |w^0|^\alpha) + \alpha \Phi(|w^0|^{\alpha-1} \circ |w| )$, which is tight at $w^0$.
\end{restatable}
We consider the adaptive weight estimator (\ref{eq:1StepEst}) resulting from applying a 1-step majorization-minimization to \eqref{eq:GenEstimator},
\begin{equation}\label{eq:1StepEst}
\min_{w \in \mathbb{R}^d} \frac{1}{2} \| y - Xw\|_2^2 + \lambda_n \Phi(|{w^0}|^{\alpha-1} \circ |w|),
\end{equation}
where ${w^0}$ is a $\sqrt{n}$-consistent estimator of $w^*$, that is, converging to $w^*$ at rate $1/\sqrt{n}$ (typically obtained from $w^0 = \mathds{1}$ or ordinary least-squares).
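For concreteness, the following Python sketch instantiates \eqref{eq:1StepEst} in the special case $\Phi = \ell_1$, where it reduces to the adaptive Lasso of \cite{zou2006adaptive}; this is only an illustrative sketch (the choice of solver and the reparametrization are ours), and for a general $\Phi$ the Lasso solve would be replaced by a solver for the corresponding weighted penalty:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_l1(X, y, lam, alpha=0.3):
    """One-step adaptive estimator with Phi = l1 (adaptive Lasso)."""
    n = X.shape[0]
    w0, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS pilot estimator
    c = np.abs(w0) ** (alpha - 1.0)  # weights |w0|^(alpha-1); w0_j != 0
    # with v = c o w the penalty becomes lam * ||v||_1
    model = Lasso(alpha=lam / n, fit_intercept=False)  # sklearn's 1/n scaling
    model.fit(X / c, y)
    return model.coef_ / c  # map back: w = v / c
\end{verbatim}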
We study sufficient support recovery and estimation consistency conditions for (\ref{eq:1StepEst}) for general convex monotone regularizers $\Phi$.
Such consistency results have been established for (\ref{eq:1StepEst}), in the classical asymptotic setting, only in the special case of $\ell_1$-norm in \cite{zou2006adaptive} and
for the (non-adaptive) estimator \eqref{eq:GenEstimator} for homogeneous convex envelopes of monotone submodular functions, for $p = \infty$ in \cite{bach2010structured} and for general $p$ in \cite{obozinski2012convex}, in the high dimensional setting, and for latent group Lasso norm in \cite{obozinski2011group}, in the asymptotic setting.
Compared to prior works, the discussion of support recovery is complicated here by the fact that $\Phi$ is not necessarily a norm (e.g., if $\Phi = \Theta_p$) and only satisfies a weak notion of decomposability.
As in \cite{zou2006adaptive}, we consider the classical asymptotic regime in which the model generating the data is of fixed finite dimension $d$ while $n \to \infty$. As before, we assume $Q \succ 0$ and thus the minimizer of \eqref{eq:1StepEst} is unique, we denote it by $\hat{w}$.
The following Theorem extends the results of \cite{zou2006adaptive} for the $\ell_1$-norm to any normalized absolute monotone convex regularizer, provided the true support satisfies the sufficient condition of strong stability in Definition \ref{def:ContStability}. As we previously remarked, this condition is trivially satisfied for the $\ell_1$-norm.
\begin{restatable}{theorem}{primeConsistThem}[Consistency and Support Recovery]\label{Thm:Consistency}
Let $\Phi: \mathbb{R}^d \to \overline{\mathbb{R}}_+$ be a { proper normalized absolute monotone convex} function and denote by $J$ the true support $J = \mathrm{supp}(w^*)$.
If $|w^*|^\alpha \in \mathrm{int} ~\mathrm{dom} ~\Phi$, $J$ is strongly stable with respect to $\Phi$ and $\lambda_n$
satisfies $\frac{\lambda_n}{\sqrt{n}} \rightarrow 0, \frac{\lambda_n}{{n}^{\alpha/2}} \rightarrow \infty$,
then the estimator \eqref{eq:1StepEst} is consistent and asymptotically normal, i.e., it satisfies
\begin{equation}
\sqrt{n}(\hat{w}_{J} - w^*_{J}) \xrightarrow{d} \mathcal{N}(0, \sigma^2 Q_{J J}^{-1}),
\end{equation}
and
\begin{equation}
P(\mathrm{supp}(\hat{w}) = J) \rightarrow 1.
\end{equation}
\end{restatable}
Consistency results in most existing works are established under various necessary conditions on $X$, some of which are difficult to verify in practice, such as the \emph{irrepresentability condition} (c.f., \cite{zou2006adaptive, bach2010structured, obozinski2011group, obozinski2012convex}).
Adding data-dependent weights does not require such conditions and allows recovery even in the correlated measurement matrix setup as illustrated in our numerical results (c.f., Sect. \ref{sect:Simul}).
\section{Combinatorial penalties and convex relaxations}
We consider positive-valued set functions of the form $F:2^V \rightarrow \overline{\mathbb{R}}_+$ such that $F(\varnothing) = 0$ and $F(A)>0, \forall A \neq \varnothing$, to encode structured sparsity models.
For generality, we do not assume a priori that $F$ is \emph{monotone} (i.e., $F(A) \leq F(B), \forall A \subseteq B$). However, as we argue in the sequel, convex relaxation of non-monotone set functions is hopeless.
The domain of $F$ is defined as $\mathcal{D} := \{A: F(A) < +\infty\}$. We assume that it covers $V$, i.e., $\cup_{A \in \mathcal{D}} A = V$, which is equivalent to assuming that $F$ is finite at singletons if $F$ is monotone.
A finite-valued set function
$F$ is
\emph{submodular} if and only if for all $A\subseteq B$ and $i \in B^c$, $F(B \cup \{i\}) - F(B) \leqslant F(A \cup \{i\}) - F(A)$~(see, e.g., \cite{fujishige2005submodular,bach2011learning}). Unless explicitly stated, we do not assume that $F$ is submodular.
We consider the same model in \cite{obozinski2012convex}, parametrized by $w \in \mathbb{R}^d$, with general $\ell_p$-regularized combinatorial penalties
$$F_p(w) = \frac{1}{q} F(\mathrm{supp}(w)) + \frac{1}{p} \| w \|_p^p$$
for $p \geq 1$, where the set function $F$ controls the structure of the model in terms of allowed/favored non-zero patterns and the $\ell_p$-norm serves to control the magnitude of the coefficients. Allowing $F$ to take infinite values lets us
enforce hard constraints.
For $p = \infty$, $F_p$ reduces to $F_\infty(w) = F(\mathrm{supp}(w)) + \iota_{\| w \|_\infty \leq 1}(w)$.
Considering the case $p \not = \infty$ is appealing to avoid the clustering artifacts of the values of
the learned vector induced by the $\ell_\infty$-norm.
Since such combinatorial regularizers lead to computationally intractable problems, we seek convex surrogate penalties that capture the encoded structure as much as possible. A natural candidate for a convex surrogate of $F_p$ is then its \emph{convex envelope} (largest convex lower bound), given by the biconjugate (the Fenchel conjugate of the Fenchel conjugate) $F_p^{\ast \ast}$. Two general approaches are proposed in the literature to achieve just this; one requires the surrogate to also be positively homogeneous \cite{obozinski2012convex} and thus considers the convex envelope of the positively homogeneous envelope of $F_p$, given by $F(\mathrm{supp}(w))^{1/q} \| w \|_p$,
which we denote by $\Omega_p$, the other computes instead the convex envelope of $F_p$ directly \cite{halabi2015totally, bach2010structured}, which we denote by $\Theta_p$. Note that from the definition of convex envelope, it holds that $\Theta_p \geq \Omega_p$.
\subsection{Homogeneous and non-homogeneous convex envelopes} \label{sect:ConvRels}
In \cite{obozinski2012convex}, the homogeneous convex envelope $\Omega_p$ of $F_p$ was shown to correspond to the \emph{latent group Lasso norm} \cite{obozinski2011group} with groups set to all elements of the power set $2^V$. We recall this form of $\Omega_\infty$ in Lemma \ref{lem:HomEnv} as well as a variational form of $\Omega_p$ which highlights the relation between the two. Other variational forms can be found in the Appendix.
\begin{lemma}[\cite{obozinski2012convex}] \label{lem:HomEnv}
The homogeneous convex envelope $\Omega_p$ of $F_p$ is given by
{\small
\begin{align}
\Omega_p(w)
&=\inf_{\eta \in \mathbb{R}^d_+} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \Omega_\infty(\eta), \label{eq:VarLpLinf} \\
\Omega_\infty(w) &= \min_{\alpha \geq 0} \Big\{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w| \Big\}.
\label{eq:ConvCover}
\end{align}
}
\end{lemma}
The non-homogeneous convex envelope $\Theta_p$ of $F_p$ has thus far only been considered in the case where $p = \infty$.
\cite{halabi2015totally} shows that $\Theta_\infty(w) = \inf_{\eta \in [0,1]^d} \{ f(\eta) : \eta \geq |w|\}$ where $f$ is any proper $(\mathrm{dom}(f) \not = \emptyset)$, lower semi-continuous (l.s.c.) convex \emph{extension} of $F$, i.e., $f(\mathds{1}_A) = F(A), \forall A\subseteq V$ (\textit{cf.}, Lemma 1 in \cite{halabi2015totally}). A natural choice for $f$ is the \emph{convex closure} of $F$, which corresponds to the \emph{tightest} convex extension of $F$ on $[0,1]^d$ (\textit{cf.}, Appendix for a more rigorous treatment).
Lemma \ref{lem:NonHomEnv} below presents this choice, deriving a new form of $\Theta_\infty$ that parallels \eqref{eq:ConvCover}.
We also derive the non-homogeneous convex envelope $\Theta_p$ of $F_p$ for any $p \geq 1$ and present the variational form relating it to $\Theta_\infty$ in Lemma \ref{lem:NonHomEnv}.
For simplicity, the variational form~\eqref{eq:VarLpLinfNonHomo} presented below holds only for monotone functions $F$; the general form and other variational forms that parallel the ones known for the homogeneous envelope are presented in the Appendix.
\begin{lemma}\label{lem:NonHomEnv}
The non-homogeneous convex envelope $\Theta_p$ of $F_p$, for monotone functions $F$, is given by
{\small
\begin{align}
\Theta_p(w)
&=\inf_{\eta \in [0,1]^d} \frac{1}{p} \sum_{j=1}^d \frac{|w_j|^{p}}{\eta_j^{p -1}} + \frac{1}{q} \Theta_\infty(\eta),
\label{eq:VarLpLinfNonHomo} \\
\Theta_\infty(w) &= \min_{\alpha \geq 0} \Big\{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \alpha_S \mathds{1}_S \geq |w|, \sum_{S \subseteq V} \alpha_S =1 \Big\}.
\label{eq:ConvCoverNonHomo}
\end{align}
}
\end{lemma}
The infima in \eqref{eq:VarLpLinf} and \eqref{eq:VarLpLinfNonHomo}, for $w \in \mathrm{dom}(\Theta_p)$, can be replaced by a minimization, if we extend $b \rightarrow \frac{a}{b}$ by continuity in zero with $\frac{a}{0} = \infty$ if $a \not = 0$ and $0$ otherwise, as suggested in \cite{jenatton2010structured} and \cite{bach2012optimization}.
Note that, for $p = 1$, both relaxations reduce to $\Omega_1 =\Theta_1 = \| \cdot \|_1$. Hence, the $\ell_1$-relaxations essentially lose the combinatorial structure encoded in $F$. We thus follow up on the case $p > 1$.
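For small $d$, both $\ell_\infty$-envelopes can be evaluated directly from \eqref{eq:ConvCover} and \eqref{eq:ConvCoverNonHomo} as linear programs over the $2^d-1$ nonempty subsets. The following Python sketch (our own brute-force illustration using \texttt{scipy}; it is exponential in $d$ and only meant for toy instances) does so for the range function, for which the two envelopes indeed differ:
\begin{verbatim}
import numpy as np
from itertools import chain, combinations
from scipy.optimize import linprog

def envelopes_inf(F, w):
    d = len(w)
    sets = list(chain.from_iterable(
        combinations(range(d), r) for r in range(1, d + 1)))
    cost = np.array([F(S) for S in sets])
    M = np.zeros((d, len(sets)))  # incidence: M[i, S] = 1 iff i in S
    for j, S in enumerate(sets):
        M[list(S), j] = 1.0
    A_ub, b_ub = -M, -np.abs(np.asarray(w, float))  # coverage >= |w|
    omega = linprog(cost, A_ub=A_ub, b_ub=b_ub).fun
    theta = linprog(cost, A_ub=A_ub, b_ub=b_ub,  # plus sum(alpha) = 1
                    A_eq=np.ones((1, len(sets))), b_eq=[1.0]).fun
    return omega, theta

F_range = lambda S: max(S) - min(S) + 1  # range on a chain, S nonempty
print(envelopes_inf(F_range, [1.0, 0.0, 1.0]))  # (2.0, 3.0)
\end{verbatim}
On $w = \mathds{1}_{\{1,3\}}$, $\Omega_\infty$ returns the cardinality $2$ (the $\ell_1$-norm), while $\Theta_\infty$ pays the full range $3$ of the support.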
In order to decide when to employ $\Omega_p$ or $\Theta_p$, it is of interest to study the respective properties of these two relaxations and to identify when they coincide. Remark \ref{rmk:SubNonHomoEnv} shows that the
homogeneous and non-homogeneous envelopes are identical, for $p=\infty$, for monotone submodular functions.
\begin{remark}\label{rmk:SubNonHomoEnv}
If $F$ is a monotone submodular function then $\Theta_\infty(w) = \Omega_\infty(w) = f_L(|w|), \forall w \in [-1,1]^d$, where $f_L$ denotes the Lov\'asz extension of $F$ \cite{lovasz1983submodular}.
\end{remark}
\begin{figure*}
\vspace{-10pt}
\centering
\begin{tabular}{c c c}
\includegraphics[trim=53 200 20 200, clip, scale=.22]{figs/l0l2fct.pdf} &
\includegraphics[scale=.25]{figs/l0l2fct2D_Homo_LR.png} &
\includegraphics[scale = .25]{figs/l0l2fct2D_nonHomo_LR.png}
\end{tabular}
\caption{$\ell_2$-regularized cardinality example in one dimension (left) and two dimensions (middle: homogeneous, right: non-homogeneous). }\label{fig:l0l2fct}
\end{figure*}
The two relaxations do not coincide in general: Note the added constraints $\eta \in [0,1]^d$ in (\ref{eq:VarLpLinfNonHomo}) and the sum constraint on $\alpha$ in (\ref{eq:ConvCoverNonHomo}).
Another clear difference to note is that $\Omega_p$ are norms that belong to the broad family of H-norms \cite{micchelli2013regularizers, bach2012optimization}, as shown in \cite{obozinski2012convex}.
On the other hand, by virtue of being non-homogeneous, $\Theta_p$ are not norms in general.
We illustrate below two interesting examples where $\Omega_p$ and $\Theta_p$ differ.
\begin{example}[Berhu penalty] Since the cardinality function $F(S) = |S|$ is a monotone submodular function, $\Theta_\infty(w) = \Omega_\infty(w) = \| w \|_1$. However, this is not the case for $p\not = \infty$. In particular, we consider the $\ell_2$-regularized cardinality function $F^{card}_2(w) = \frac{1}{2}\| w \|_0 + \frac{1}{2} \| w \|_2^2$.
Figure \ref{fig:l0l2fct} shows that the non-homogeneous envelope is \emph{tighter} than the homogeneous one in this case. Indeed, $\Omega^{card}_2$ is simply the $\ell_1$-norm, while $\Theta^{card}_2$ is separable, with $\Theta^{card}_2(w) = \sum_{i=1}^d \theta(|w_i|)$, where $\theta(t) = t$ if $t \leq 1$ and $\theta(t) = \frac{1}{2} t^2 + \frac{1}{2}$ otherwise. This penalty, called ``Berhu,'' is introduced in \cite{owen2007robust} to produce a robust ridge regression estimator and is shown to be the convex envelope of $F^{card}_2$ in~\cite{jojic2011convex}.
\end{example}
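As a quick numerical illustration (our own transcription of the closed form above), the Berhu penalty is a one-liner, and one can verify on sample points that it lower-bounds $F^{card}_2$, as a convex envelope must:
\begin{verbatim}
import numpy as np

def berhu(w):  # Theta^card_2: |t| for |t| <= 1, (t^2 + 1)/2 beyond
    a = np.abs(np.asarray(w, float))
    return np.sum(np.where(a <= 1.0, a, 0.5 * a ** 2 + 0.5))

w = np.array([0.3, -2.0, 1.0, 0.0])
F_card2 = 0.5 * np.count_nonzero(w) + 0.5 * np.sum(w ** 2)
assert berhu(w) <= F_card2  # 3.8 <= 4.045
\end{verbatim}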
This kind of behavior, where the non-homogeneous relaxation $\Theta_p$ acts as an $\ell_1$-norm on the small coefficients and as $\frac{1}{p} \| w\|_p^p$ for large ones, is not limited to the Berhu penalty, but holds for general set functions. However, the point where the penalty moves from one mode to the other depends on the structure of $F$ and is different along each coordinate.
This is easier to see via the second variational form of $\Theta_p$ presented in the Appendix. We further illustrate in the following example.
\begin{example}[Range penalty] Consider the range function defined as $\text{range}(A) = \max(A) - \min(A) + 1$, where $\max(A)$ ($\min(A)$) denotes the maximal (minimal) element of $A$. This penalty allows us to favor the selection of interval non-zero patterns on a chain or rectangular patterns on grids. It was shown in \cite{obozinski2012convex} that $\Omega_p(w) = \| w \|_1$ for any $p \geq 1$. On the other hand, $\Theta_p$ admits no closed form, but is different from the $\ell_1$-norm. Figure \ref{fig:rangeBalls} illustrates the balls of different radii of $\Theta_\infty$ and $\Theta_2$. We can see how the penalty morphs from the $\ell_1$-norm to the $\ell_\infty$-norm and the squared $\ell_2$-norm, respectively, with different ``speed'' along each coordinate. Looking carefully, for example, at the ball $\Theta_2(w) \leq 2$, we can see that the penalty acts as an $\ell_1$-norm along the $(x,z)$-plane and as a squared $\ell_2$-norm along the $(y,z)$-plane.
\end{example}
We highlight other ways in which the two relaxations differ and their implications in the sequel.
In terms of computational efficiency, note that even though
the formulations (\ref{eq:VarLpLinf}) and (\ref{eq:VarLpLinfNonHomo}) are jointly convex in $(w,\eta)$,
$\Omega_p$ and $\Theta_p$ can still be intractable to compute and to optimize.
However, for certain classes of functions, they are tractable. For example, since for monotone submodular functions $\Omega_\infty = \Theta_\infty$ is the Lov\'asz extension of $F$, as stated in Remark \ref{rmk:SubNonHomoEnv}, they can be efficiently computed by the greedy algorithm~\cite{bach2011learning}. Moreover, efficient algorithms to compute $\Omega_p$ and the associated proximal operator, and to solve learning problems regularized with $\Omega_p$, are proposed in \cite{obozinski2012convex}.
Similarly, if $F$ can be expressed by integer programs over totally unimodular constraints as in \cite{halabi2015totally}, then $\Omega_\infty$, $\Theta_\infty$ and their associated Fenchel-type operators can be computed efficiently by linear programs. Hence, we can use conditional gradient algorithms for numerical solutions.
\begin{figure}
\centering
\begin{tabular}{c c c}
\hspace{-15pt}
\includegraphics[scale=.25]{figs/range_linfBall_r1_eps01.png} &
\includegraphics[scale=.33]{figs/range_linfBall_r2_eps01.png} &
\includegraphics[scale=.15]{figs/range_linfBall_r3_eps01.png} \\
\hspace{-15pt}
\includegraphics[scale=.25]{figs/range_l2Ball_r1_eps015.png} &
\includegraphics[scale=.28]{figs/range_l2Ball_r2_eps015.png} &
\includegraphics[scale=.36]{figs/range_l2Ball_r4_eps015.png} \\
\end{tabular}
\caption{Balls of different radii of the non-homogeneous $\ell_\infty$-convex envelope of the range function (top): $\Theta_\infty(w) \leq 1$ (left), $\Theta_\infty(w) \leq 2$ (middle), $\Theta_\infty(w) \leq 3$ (right) and of its $\ell_2$-convex envelope (bottom): $\Theta_2(w) \leq 1$ (left), $\Theta_2(w) \leq 2$ (middle), $\Theta_2(w) \leq 4$ (right).}\label{fig:rangeBalls}
\vspace{-12pt}
\end{figure}
\subsection{Lower combinatorial envelopes} \label{sect:LCE}
In this section, we are interested in analyzing which combinatorial structures are preserved by each relaxation.
To that end, we generalize
the notion of \emph{lower combinatorial envelope} (LCE) \cite{obozinski2012convex}.
The homogeneous LCE $F_-$ of $F$ is defined as the set function which agrees with the $\ell_\infty$-homogeneous convex relaxation of $F$ at the vertices of the unit hypercube, i.e., $F_-(A) = \Omega_\infty(\mathds{1}_A), \forall A \subseteq V$.
For the non-homogeneous relaxation, we define the non-homogeneous LCE similarly as $\tilde{F}_-(A) = \Theta_\infty(\mathds{1}_A)$.
The $\ell_\infty$-relaxation reflects most directly the combinatorial structure of the function $F$.
Indeed, $\ell_p$-relaxations only depend on $F$ through the $\ell_\infty$-relaxation as expressed in the variational forms \eqref{eq:VarLpLinf} and \eqref{eq:VarLpLinfNonHomo}.
We say $\Omega_\infty$ is a tight relaxation of $F$ if $F = F_-$. Similarly, $\Theta_\infty$ is a tight relaxation of $F$ if $\tilde{F}_- = F$. $\Omega_\infty$ and $\Theta_\infty$ are then \emph{extensions} of $F$ from $\{0,1\}^d$ to $\mathbb{R}^d$; in this sense, the relaxation is tight for all $w$ of the form $w=\mathds{1}_A$. Moreover, following the
definition of convex envelope, the relaxation
$\Omega_\infty$ (resp.~$\Theta_\infty$) is the same for $F$ and $F_-$ (resp.~ $F$ and $\tilde{F}_-$), and hence, the LCE can be interpreted as the combinatorial structure preserved by each convex relaxation.
The homogeneous relaxation can capture any monotone submodular function \cite{obozinski2012convex}: indeed, $\Omega_\infty$ is the Lov\'asz extension \cite{bach2010structured} in this case, and hence $F_-(A) = \Omega_\infty(\mathds{1}_A) = f_L(\mathds{1}_A) = F(A)$. Also, since the two $\ell_\infty$-relaxations are identical for this class of functions, their LCEs are also equal, i.e., $\tilde{F}_-(A) = \Theta_\infty(\mathds{1}_A) = \Omega_\infty(\mathds{1}_A) = F(A)$.
The LCEs, however, are not equal in general. In fact, the non-homogeneous relaxation is tight for a larger class of functions.
In particular, the following proposition shows
that $\tilde{F}_-$ is equal to the \emph{monotonization} of $F$, that is $\tilde{F}_-(A) = \inf_{S \subseteq V} \{ F(S) : A \subseteq S\}$, for all set functions $F$, and is thus equal to the function itself if $F$ is monotone.
\begin{restatable}{proposition}{primeLCEnonhomoProp}\label{prop:LCENonHomo}
The non-homogeneous lower combinatorial envelope can be written as
{\small
\begin{align*}
\tilde{F}_-(A) &= \Theta_\infty(\mathds{1}_A) \\
&= \!\! \inf_{\alpha_S \in \{0,1\}} \{ \sum_{S \subseteq V} \alpha_S F(S) : \sum_{S \subseteq V} \! \alpha_S \mathds{1}_S \geq \mathds{1}_A, \sum_{S \subseteq V} \! \alpha_S =1 \} \\[-.1cm]
&= \inf_{S \subseteq V} \{ F(S) : A \subseteq S\}. \\[-.6cm]
\end{align*}
}
\end{restatable}
\begin{proof}
To see why we can restrict $\alpha_S$ to be integral, let $\mathcal{E} = \{S: \alpha_S>0\}$. For any $T \subseteq V$ with $A \not\subseteq T$, pick $e \in A \setminus T$; the covering constraint at coordinate $e$ forces $\sum_{S \ni e} \alpha_S \geq 1$, and since $\sum_{S \subseteq V} \alpha_S = 1$ this yields $\alpha_{T}=0$. Hence $A \subseteq S$ for all $S \in \mathcal{E}$, and
$ \sum_{S \in \mathcal{E}} \alpha_S F(S) \geq \min_{S \in \mathcal{E}} F(S) \geq \inf_{S \supseteq A} F(S)$, with equality attained at an integral solution.
\end{proof}
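For tiny $d$, Proposition \ref{prop:LCENonHomo} can also be probed by brute force; the sketch below (our own check) computes the monotonization $\inf_{S \supseteq A} F(S)$ and confirms that it coincides with $F$ itself for the (monotone) range function:
\begin{verbatim}
from itertools import chain, combinations

def monotonization(F, V):
    subs = [frozenset(s) for s in chain.from_iterable(
        combinations(V, r) for r in range(len(V) + 1))]
    return {A: min(F(S) for S in subs if A <= S) for A in subs}

F_range = lambda S: max(S) - min(S) + 1 if S else 0
lce = monotonization(F_range, range(4))  # non-homogeneous LCE
assert all(lce[A] == F_range(A) for A in lce)  # range is monotone
\end{verbatim}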
Proposition \ref{prop:LCENonHomo} argues that the non-homogeneous convex envelope is tight if and only if $F$ is monotone. Two important practical implications follow from this result.
Given a target model that cannot be expressed by a monotone function, it is impossible to obtain a tight convex relaxation. Non-convex methods can be potentially better.
On the other hand, if the model can be expressed by a monotone non-submodular set function, the homogeneous relaxation may not be tight, and hence, a non-homogeneous relaxation can be more useful. For instance, \cite{obozinski2012convex} shows that for any set function where $F(\{e\})=1$ for all singletons $e \in V$ and $F(A) \geq |A|, \forall A \subseteq V$, the homogeneous LCE is $F_-(A) = |A|$ and accordingly $\Omega_p$ is the $\ell_1$-norm, thus completely losing the structure encoded in $F$.
We discuss three examples that fall in this class of functions, where the non-homogeneous relaxation is tight while the homogeneous one is not.\\
\begin{example}[Range penalty] Consider $\text{range}(A) = \max(A) - \min(A) + 1$. For $F(A) = \text{range}(A)$, we have $F_-(A) = |A|$, while $\tilde{F}_-=F$ by Prop. \ref{prop:LCENonHomo}.
\end{example}
\begin{example}[Dispersive $\ell_0$-penalty] \label{ex:Dispersive}
Given a set of predefined groups $\{G_1, \cdots, G_M\}$, consider the dispersive $\ell_0$-penalty, introduced by \cite{halabi2015totally}: $F(A) = |A| + \iota_{B^T \mathds{1}_A \leq \mathds{1}}(A)$ where the columns of $B$ correspond to the indicator vectors of the groups, i.e., $B_{V,i} = \mathds{1}_{G_i}$. The dispersive penalty enforces the selection of sparse supports where no two non-zeros are selected from the same group. Neural sparsity models induce such structures \cite{hegde2009compressive}. In this case, we have $F_-(A) = |A|$, while $\tilde{F}_-=F$ by Prop. \ref{prop:LCENonHomo}.
\end{example}
\begin{example}[Weighted graph model] \label{ex:Graph}
Given a graph $\mathcal{G} = (V,E)$, consider a relaxed version of the weighted graph model of \cite{hegde2015nearly}: $F(A) = |A| + \iota_{\gamma(F_A) \leq g, \omega(F_A) \leq B }(A)$, where $\gamma(F_A) $ is the number of connected components formed by the forest $F_A$ corresponding to $A$ and $ \omega(F_A) $ is the total weight of edges in the forest $F_A$. This model describes a wide range of structures, including 1D-clustering, tree hierarchies, and the Earth Mover Distance model. We have $F_-(A) = |A|$, while $\tilde{F}_-=F$ by Prop. \ref{prop:LCENonHomo}.
\end{example}
The last two examples belong to a natural class of structured sparsity penalties of the form $F(A) = |A| + \iota_{A \in \mathcal{M}}(A)$, which favors sparse non-zero patterns among a set $\mathcal{M}$ of allowed patterns. If $\mathcal{M}$ is down-monotone, i.e., $\forall A \in \mathcal{M}, \forall B \subseteq A, B \in \mathcal{M}$, then the non-homogeneous relaxation preserves its structure, i.e., $\tilde{F}_-=F$, while its homogeneous relaxation is oblivious to the hard constraints, with $F_-(A) = |A|$.
\section{Sparsity-inducing properties of combinatorial penalties}\label{Sect:Disct}
In Section \ref{sect:ContCond}, we derived necessary and sufficient conditions for support recovery defined with respect to the continuous convex penalties $\Omega_p$ and $\Theta_p$.
In this section,
we translate these to conditions with respect to the combinatorial penalties $F_p$ themselves. Hence, the results of this section allow one to check which supports one can expect to recover, without the need to compute the corresponding convex relaxation.
To that end, we introduce in Section \ref{sec:DisStable} discrete counterparts of weak and strong stability, and show in Section \ref{sec:RelationDisceteConv} that discrete strong stability is a sufficient, and in some cases necessary, condition for support recovery.
\subsection{Discrete stable supports}
\label{sec:DisStable}
We recall the concept of discrete stable sets \cite{bach2010structured}, also referred to as \emph{flat} or \emph{closed} sets \cite{krause2012near}. We refer to such sets as discrete weakly stable sets and introduce a stronger notion of discrete stability.
\begin{definition}[Discrete stability] \label{def:DisStable}
Given a monotone set function $F: 2^V \to \overline{\mathbb{R}}_+$, a set $J \subseteq V$ is said to be \emph{weakly stable} w.r.t $F$ if $\forall i \in J^c, F(J \cup \{i\}) > F(J)$.\\
A set $J \subseteq V$ is said to be \emph{strongly stable} w.r.t $F$ if $\forall A \subseteq J, \forall i \in J^c, F(A \cup \{i\}) > F(A)$.
\end{definition}
Note that discrete stability implies in particular feasibility, i.e., $F(J) < + \infty$. Also, if $F$ is a strictly monotone function, such as the cardinality function, then all supports are stable w.r.t $F$.
It is interesting to note that for monotone submodular functions, weak and strong stability are equivalent. In fact, this equivalence holds for a more general class of functions, which we call $\rho$-submodular.
\begin{definition}
A function $F: 2^V \to \mathbb{R}$ is $\rho$-submodular iff $\exists \rho \in (0,1]$ s.t., $\forall B \subseteq V, A \subseteq B, i \in B^c$
$$ \rho [ F(B \cup \{i\}) - F(B) ] \leq F(A \cup \{i\}) - F(A) $$
\end{definition}
The notion of $\rho$-submodularity is a special case of the weakly DR-submodular property defined for continuous functions \cite{hassani2017gradient}. It
is also related to the notion of weak submodularity (c.f., \cite{das2011submodular, elenberg2016restricted}). We show in the appendix that $\rho$-submodularity is a stronger condition.
\begin{restatable}{proposition}{primeEqDefStableProp}\label{prop:EqDefStableProp}
If $F$ is a finite-valued monotone function, $F$ is $\rho$-submodular iff discrete weak stability is equivalent to strong stability.
\end{restatable}
\begin{example}
The range function $\text{range}(A) = \max(A) - \min(A) + 1$ is $\rho$-submodular with $\rho = \frac{1}{d-1}$.
\end{example}
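Both the example above and the interval structure of the stable sets can be confirmed exhaustively for small $d$; the following sketch (our own brute force, here with $d=5$) checks that the weakly stable sets of the range function are exactly the intervals and recovers the constant $\rho = 1/(d-1)$:
\begin{verbatim}
from itertools import chain, combinations
from fractions import Fraction

d = 5
V = tuple(range(d))
F = lambda S: max(S) - min(S) + 1 if S else 0  # the range function
subs = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(d + 1))]

# weakly stable sets are exactly the intervals (and the empty set)
stable = {J for J in subs
          if all(F(J | {i}) > F(J) for i in set(V) - set(J))}
is_interval = lambda J: not J or len(J) == max(J) - min(J) + 1
assert stable == {J for J in subs if is_interval(J)}

# rho-submodularity constant: smallest marginal ratio over nested pairs
rho = min(Fraction(F(A | {i}) - F(A), F(B | {i}) - F(B))
          for B in subs for A in subs if A <= B
          for i in set(V) - set(B) if F(B | {i}) > F(B))
assert rho == Fraction(1, d - 1)
\end{verbatim}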
\subsection{Relation between discrete and continuous stability} \label{sec:RelationDisceteConv}
This section provides several technical results relating the discrete and continuous notions of stability. It thus provides us with the necessary tools to characterize which supports can be correctly estimated w.r.t the combinatorial penalty itself, without going through its relaxations.
\begin{restatable}{proposition}{primeDSCSProp}
\label{prop:DS-CS}
Given any {monotone} set function $F$, all sets $J \subseteq V$ {strongly stable w.r.t $F$} are also strongly stable w.r.t $\Omega_p$ and $\Theta_p$.
\end{restatable}
It follows then by Theorem \ref{Thm:Consistency} that discrete strong stability is a sufficient condition for correct estimation.
\begin{corollary}\label{cor:Consistent}
If $\Phi$ is equal to $\Omega_p$ or $\Theta_p$ for $p \in (1,\infty)$ and $\mathrm{supp}(w^*) = J$ is strongly stable w.r.t $F$, then Theorem \ref{Thm:Consistency} holds, i.e., the adaptive estimator \eqref{eq:1StepEst} is consistent and correctly recovers the support. This also holds for $p = \infty$ if we further assume that $\| w^* \|_\infty <1$.
\end{corollary}
Furthermore, if $F$ is $\rho$-submodular, then by Proposition \ref{prop:EqDefStableProp}, it is enough for $\mathrm{supp}(w^*) = J$ to be weakly stable w.r.t $F$ for Corollary \ref{cor:Consistent} to hold. Conversely, Proposition \ref{prop:CS-DSpinf} below shows that discrete strong stability is also a necessary condition for continuous strong stability, in the case where $p = \infty$ and $F$ is equal to its LCE. \\
\begin{restatable}{proposition}{primeCSDSpinfProp} \label{prop:CS-DSpinf}
If $F = {F}_-$ and $J$ is strongly stable w.r.t $\Omega_\infty$, then $J$ is strongly stable w.r.t $F$.
Similarly, for any monotone $F$, if $J$ is strongly stable w.r.t $\Theta_\infty$, then $J$ is strongly stable w.r.t $F$.
\end{restatable}
Finally, in the special case of monotone submodular function, the following Corollary \ref{corr:SubStEq}, along with Proposition \ref{prop:DS-CS} demonstrates that all definitions of stability become equivalent.
We thus recover the result in \cite{bach2010structured} showing that discrete weakly stable supports correspond to the set of allowed sparsity patterns for monotone submodular functions.
\begin{restatable}{corollary}{primeSubStEqCorr} \label{corr:SubStEq}
If $F$ is monotone submodular and $J$ is weakly stable w.r.t $\Omega_\infty = \Theta_\infty$ then $J$ is weakly stable w.r.t $F$.
\end{restatable}
\subsection{Examples}\label{sect:ex}
We highlight in this section which supports are recovered by the adaptive estimator (AE) \eqref{eq:1StepEst} with the homogeneous convex relaxation $\Omega_p$ and the non-homogeneous convex relaxation $\Theta_p$ of some examples of structure priors. For simplicity, we will focus on the case $p=\infty$. Also, in all the examples we consider below, weak and strong discrete stability are equivalent, so we omit the weak/strong specification.
Note that it is desirable that the regularizer used enforces the recovery of only the non-zero patterns satisfying the desired structure.
\textbf{Monotone submodular functions:} As discussed above, for this class of functions, all stability definitions are equivalent and $\Omega_\infty = \Theta_\infty = f_L$. As a result, AE recovers any discrete stable non-zero pattern. This includes the following examples (c.f., \cite{obozinski2012convex} for further examples).
\begin{itemize}
\item \textbf{Cardinality:} Since the cardinality function is strictly monotone, all supports are stable w.r.t it. Thus AE recovers \emph{all} non-zero patterns with $\Omega_\infty$ and $\Theta_\infty$, given by the $\ell_1$-norm.
\item \textbf{Overlap count function:} $F_\cap(A) = \sum_{G \in \mathcal{G}, G \cap A \not = \emptyset} d_G $ where $\mathcal{G}$ is a collection of predefined groups $G$ and $d_G$ their associated weights. $\Omega_\infty$ and $\Theta_\infty$ are given by the $\ell_1 / \ell_\infty$-group Lasso norm, and stable patterns are complements of union of groups. For example, for hierarchical groups (i.e., groups consisting of each node and its descendants on a tree), AE recovers rooted connected tree supports.
\item \textbf{Modified range function:} The range function can be transformed into a submodular function, if scaled by a constant as suggested in \cite{bach2010structured}, yielding the monotone submodular function $F^{\text{mr}}(A) = d - 1 + \text{range}(A), \forall A \not = \emptyset$ and $F^{\text{mr}}(\varnothing) = 0$. This can actually be written as an instance of $F_\cap$ with groups defined as $\mathcal{G} = \{[1,k] : 1 \leq k \leq d\} \cup \{[k,d] : 1 \leq k \leq d\}$.
This norm was proposed to induce interval patterns by \cite{jenatton2011structured}, and indeed its stable patterns are interval supports. We will compare this function in the experiments with the direct convex relaxations of the range function.
\end{itemize}
\textbf{Range function:} The range function is $\frac{1}{d-1}$-submodular, thus its discrete strongly and weakly stable supports are identical and they correspond to interval supports. As a result, AE recovers interval supports with $\Theta_\infty$. On the other hand, since the homogeneous LCE of the range function is the cardinality, AE recovers all supports with $\Omega_\infty$.
\textbf{Down monotone structures:} Functions of the form $F(A) = |A| + \iota_{A \in \mathcal{M}}(A)$, where $\mathcal{M}$ is down-monotone, also have their discrete strongly and weakly stable supports identical and given by the feasible set $\mathcal{M}$.
These structures include the dispersive and graph models discussed in examples \ref{ex:Dispersive} and \ref{ex:Graph}.
Since their homogeneous LCE is also the cardinality, then AE recovers all supports with $\Omega_\infty$, and only feasible supports with $\Theta_\infty$.
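On a toy instance, these claims can again be verified by brute force. In the sketch below (our own construction: two groups over four variables, as in Example \ref{ex:Dispersive}), the weakly and strongly stable sets of $F(A) = |A| + \iota_{A \in \mathcal{M}}(A)$ coincide with the feasible patterns; $J = V$ is excluded from the check since it is vacuously stable under Definition \ref{def:DisStable}:
\begin{verbatim}
from itertools import chain, combinations
from math import inf

V = (0, 1, 2, 3)
groups = [{0, 1}, {2, 3}]
feasible = lambda A: all(len(A & G) <= 1 for G in groups)
F = lambda A: len(A) if feasible(A) else inf  # dispersive l0-penalty

subs = [frozenset(s) for s in chain.from_iterable(
    combinations(V, r) for r in range(len(V) + 1))]
proper = [J for J in subs if J != frozenset(V)]  # J = V vacuously stable
weak = {J for J in proper
        if all(F(J | {i}) > F(J) for i in set(V) - set(J))}
strong = {J for J in proper
          if all(F(A | {i}) > F(A) for A in subs if A <= J
                 for i in set(V) - set(J))}
assert weak == strong == {J for J in proper if feasible(J)}
\end{verbatim}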
\section{Numerical Illustration}\label{sect:Simul}
\begin{figure}
\vspace{-20pt}
\centering
\begin{tabular}{c c}\hspace{-15pt}
\includegraphics[trim=50 180 50 150, clip, scale=.24]{figs/SuppErr_corr0_d250_k100_n500_nolegend.pdf} & \hspace{-15pt}
\includegraphics[trim=40 180 50 150, clip, scale=.24]{figs/ObjErr_corr0_d250_k100_n500_nolegend.pdf} \\
\hspace{-15pt}
\includegraphics[trim=50 180 50 150, clip, scale=.24]{figs/SuppErr_corr05_d250_k100_n500_nolegend.pdf} & \hspace{-15pt}
\includegraphics[trim=40 180 50 150, clip, scale=.24]{figs/ObjErr_corr05_d250_k100_n500_nolegend.pdf}
\end{tabular}
\caption{ (Left column) Best Hamming distance and (Right column) best least-squares error to the true vector $w^*$, along the regularization path, averaged over 5 runs. }\label{fig:toyexample}
\end{figure}
To illustrate the results presented in this paper, we consider the problem of estimating the support of a parameter vector $w^* \in \mathbb{R}^d$ whose support is an interval. It is natural then to choose as combinatorial penalty the range function whose stable supports are intervals. We aim to study the effect of adaptive weights, as well as the effect of the choice of homogeneous vs. non-homogeneous convex relaxation for regularization, on the quality of support recovery.
As discussed in Section \ref{sect:ex}, the $\ell_\infty$-homogeneous convex envelope of the range is simply the $\ell_1$-norm.
Its $\ell_\infty$-non-homogeneous convex envelope $\Theta^{r}_\infty$ can be computed using the formulation \eqref{eq:VarLpLinfNonHomo}, where only interval sets need to be considered in the constraints, leading to a quadratic number of constraints.
We also consider the $\ell_1 / \ell_\infty$-norm that corresponds to the convex relaxation of the modified range function $F^{\text{mr}}$.
We consider a simple regression setting in which $w^* \in \mathbb{R}^d$ is a constant signal whose support is an interval. The choice of $p = \infty$ is well suited for constant valued signals.
The design matrix $X \in \mathbb{R}^{n \times d}$ is either drawn as (1) an i.i.d.\ Gaussian matrix with normalized columns, or (2) a correlated Gaussian matrix with normalized columns, with the off-diagonal values of the covariance matrix set to $\rho = 0.5$. We observe noisy linear measurements $y = X w^* + \epsilon$, where the noise vector is i.i.d.~with variance $\sigma^2$, with $\sigma$ varied between $10^{-5}$ and $1$. We solve problem \eqref{eq:1StepEst} with and without adaptive weights $|w^0|^{\alpha - 1}$, where $w^0$ is taken to be the least-squares solution and $\alpha = 0.3$.
We assess the estimators obtained through the different regularizers both in terms of support recovery and in terms of estimation error. Figure \ref{fig:toyexample} plots (in logscale) these two criteria against the noise level~$\sigma$. We plot the best achieved error on the regularization path, where the regularization parameter $\lambda$ was varied between $10^{-6}$ and $10^3$. We set the parameters to $d = 250, k = 100, n = 500$.
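A condensed version of the experiment (our own scaled-down parameters for speed; only the $\ell_1$ relaxation with and without adaptive weights is shown, since evaluating $\Theta^{r}_\infty$ requires a dedicated solver for its LP-based formulation) reads as follows:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k, n, sigma, alpha = 50, 20, 100, 0.1, 0.3
w_star = np.zeros(d); w_star[10:10 + k] = 1.0  # interval support
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=0)
y = X @ w_star + sigma * rng.standard_normal(n)

def support_error(lam, adaptive):
    c = np.ones(d)
    if adaptive:  # adaptive weights |w0|^(alpha - 1), w0 = OLS
        w0, *_ = np.linalg.lstsq(X, y, rcond=None)
        c = np.abs(w0) ** (alpha - 1.0)
    w_hat = Lasso(alpha=lam / n,
                  fit_intercept=False).fit(X / c, y).coef_ / c
    return np.sum((np.abs(w_hat) > 1e-8) != (w_star != 0))  # Hamming

lams = np.logspace(-6, 3, 30)
print(min(support_error(l, False) for l in lams),
      min(support_error(l, True) for l in lams))
\end{verbatim}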
We observe that the adaptive weight scheme helps in support recovery, especially in the correlated design setting. Indeed, Lasso is only guaranteed to recover the support under an ``irrepresentability condition" \cite{zou2006adaptive}. This is satisfied with high probability only in the non-correlated design. On the other hand, adaptive weights allow us to recover any strongly stable support, without any additional condition, as shown in Theorem \ref{Thm:Consistency}.
The $\ell_1 / \ell_\infty$-norm performs poorly in this setup. In fact, the modified range function $F^{\text{mr}}$ introduces a gap of $d$ between non-empty sets and the empty set. This leads to the undesirable behavior, already documented in \cite{bach2010structured, jenatton2011structured}, of adding all the variables in one step, as opposed to gradually.
Adaptive weights seem to correct for this effect, as seen by the significant improvement in performance.
Finally, note that choosing the ``tighter" non-homogeneous convex relaxation leads to better support recovery. Indeed, $\Theta^{r}_\infty$ performs better than $\ell_1$-norm in all setups.
\section{Introduction}
Suppose that $W$ is a finite complex reflection group acting on the complex vector space $V$.
Let $\A = \A(W)$ be the associated hyperplane arrangement of the reflecting hyperplanes of $W$.
Then $\A$ is free, \cite{Terao1980}.
There are several stronger notions of freeness.
In this paper we are mainly interested in two, namely \emph{inductive freeness}, first introduced by Terao in \cite{Terao1980},
and \emph{recursive freeness}, which was introduced by Ziegler in \cite{Zielger87}.
In \cite{MR2854188} Barakat and Cuntz proved that all Coxeter arrangements are inductively free
and in \cite{2012arXiv1208.3131H} Hoge and R\"ohrle completed the classification of inductively free reflection arrangements
by inspecting the remaining complex cases.
They gave an easy characterization for all the cases but one, namely when the complex reflection group $W$ admits
an irreducible factor isomorphic to $\crg{31}$; handling this case also turns out to be the most difficult part of this paper.
In \cite{MR3272729} Cuntz and Hoge gave first examples for free but not recursively free arrangements.
One of them is the reflection arrangement of the exceptional complex reflection group $\crg{27}$.
Very recently, Abe, Cuntz, Kawanoue, and Nozawa \cite{2014arXiv1411.3351A}
found smaller examples (with $13$ hyperplanes, being the smallest possible, and with $15$ hyperplanes)
for free but not recursively free arrangements in characteristic $0$.
Nevertheless, free but not recursively free arrangements seem to be rare.
Since reflection arrangements play an important role in the theory of hyperplane arrangements, it is natural to ask
which other reflection arrangements are free but not recursively free.
In this paper we answer this question and complete the picture for reflection arrangements
by showing which of the not inductively free reflection arrangements are recursively free
and which are free but not recursively free.
We obtain a classification of all recursively free reflection arrangements:
\begin{theorem}\label{thm:RFcrArr}
For $W$ a finite complex reflection group, the reflection arrangement $\A(W)$ of $W$ is
recursively free if and only if $W$ does not admit an irreducible factor isomorphic to one of the
exceptional reflection groups $\crg{27}, \crg{29}, \crg{31}, \crg{33}$ and $\crg{34}$.
\end{theorem}
Furthermore, for the special case $W \cong \crg{31}$, we obtain a cluster of free but not recursively free subarrangements
of $\A(W)$ in dimension $4$ which is isolated with respect to ``Addition'' and ``Deletion''.
Recently, in \cite{AbeDivFree2015}, Abe introduced the new class of divisionally free arrangements, based
on his Division Theorem, \cite[Thm.~1.1]{AbeDivFree2015}, about freeness
and division of characteristic polynomials, analogous to the class of
inductively free arrangements based on the Addition-Deletion Theorem.
With Theorem \ref{thm:RFcrArr}, we are able to positively settle a conjecture by Abe, \cite[Conj.~5.11]{AbeDivFree2015},
about this new class of free arrangements, which we state as the next theorem.
\begin{theorem}\label{thm:ConjAbe}
There is an arrangement $\A$ such that $\A \in \DFC$ and $\A \notin \RFC$.
\end{theorem}
Finally, we will comment on the situation of a restriction of a reflection arrangement.
In order to compute the different intersection lattices of the reflection arrangements in question,
to obtain the respective invariants, and to recheck our results
we used the functionality of the GAP computer algebra system, \cite{GAP4}.
The author thanks his thesis advisor Professor Michael Cuntz and Torsten Hoge for many helpful discussions and remarks.
\section{Recollection and Preliminaries}
We review the required notions and definitions. Compare with \cite{orlik1992arrangements}.
\subsection{Arrangements of hyperplanes}
\begin{definition}
An $\ell$\emph{-arrangement of hyperplanes} is a pair $\AnV{A}{V}$, where $\A$ is a finite collection of hyperplanes
(codimension $1$ subspaces) in $V = \mathbb{K}^\ell$, a finite dimensional vector space over a fixed field $\mathbb{K}$.
For $\AnV{A}{V}$ we simply write $\A$ if the vector space $V$ is unambiguous.
We denote the empty $\ell$-arrangement by $\emptA{\ell}$.
\end{definition}
In this paper we are only interested in complex \emph{central} arrangements $\A$, that is, all the hyperplanes in $\A$
are linear subspaces and $V$ is a finite dimensional complex vector space $V = \Komplex^\ell$.
If we want to explicitly write down the hyperplanes of an $\ell$-arrangement,
we will use the notation:
$H = \Kern{\alpha} =: \alpha^\perp$ for a linear form $\alpha \in V^*$.
If $\{x_1,\ldots,x_\ell\}$ is a basis for $V^*$, we write $\alpha = \sum_{i=1}^\ell a_i x_i \in V^*$
also as a row vector $(a_1,\ldots,a_\ell)$.
The \emph{intersection lattice} $\LA{\A}$ of $\A$ is the set of all subspaces $X$ of $V$ of the form
$X = H_1 \cap \ldots \cap H_r$ with $\{ H_1,\ldots,H_r\} \subseteq \A$.
If $X \in \LA{\A}$, then the \emph{rank} $r(X)$ of $X$ is defined as $r(X) := \ell - \dim{X}$ and the rank of the arrangement
$\A$ is defined as $r(\A) := r(T(\A))$ where $T(\A) := \bigcap_{H \in \A} H$ is the \emph{center} of $\A$.
An $\ell$-arrangement $\A$ is called \emph{essential} if $r(\A)=\ell$.
For $X \in \LA{\A}$, we define the localization
$\A_X := \{ H \in \A \mid X \subseteq H \}$ of $\A$ at $X$, and the \emph{restriction of} $\A$ to $X$,
$(\A^X,X)$, where $\A^X := \{ X\cap H \mid H \in \A \setminus \A_X \}$.
For $0 \leq q \leq \ell$ we write $\LAq{\A}{q} := \{ X \in \LA{\A} \mid r(X) = q \}$.
For $\A \neq \emptA{\ell}$, let $H_0 \in \A$.
Define $\A' := \A \setminus \{ H_0 \}$, and $\A'' := \A^{H_0}$.
We say that $\TripleA{\A}$ is a \emph{triple} of arrangements (with respect to $H_0$), \cite[Def.~1.14]{orlik1992arrangements}.
The \emph{product} $\A = (\A_1 \times \A_2,V_1 \oplus V_2)$ of two arrangements $(\A_1,V_1)$, $(\A_2,V_2)$
is defined by
\begin{equation*}
\A := \A_1 \times \A_2 = \{ H_1 \oplus V_2 \mid H_1 \in \A_1 \} \cup \{ V_1 \oplus H_2 \mid H_2 \in \A_2 \},
\end{equation*}
see \cite[Def.~2.13]{orlik1992arrangements}. In particular $\Betrag{\A} = \Betrag{\A_1} + \Betrag{\A_2}$.
If an arrangement $\A$ can be written as a non-trivial product $\A = \A_1 \times \A_2$,
then $\A$ is called \emph{reducible}, otherwise \emph{irreducible}.
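As a minimal illustration of the product construction (a standard example, included only for orientation): the $2$-arrangement $\{\Kern{x_1},\Kern{x_2}\}$ in $\Komplex^2$ is the product of the two $1$-arrangements $(\{\{0\}\},\Komplex)$ and hence reducible, whereas any central $2$-arrangement with at least three distinct hyperplanes is irreducible.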
For an arrangement $\A$ the \emph{M\"obius function} $\Abb{\mu}{\LA{\A}}{\Ganz}$ is defined by:
\begin{equation*}
\mu(X) = \left\{ \begin{array}{l l}
1 & \quad \text{if } X=V \text{,}\\
-\sum_{V \supseteq Y \supsetneq X} \mu(Y) & \quad \text{if }X \neq V\text{.}
\end{array} \right.
\end{equation*}
We denote by $\CharPolyA{\A}$
the \emph{characteristic polynomial} of $\A$ which is defined by:
\begin{equation*}
\CharPolyA{\A} = \sum_{X \in \LA{\A}} \mu(X) t^{\dim{X}}.
\end{equation*}
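To illustrate these notions with a standard small example: for the essential $2$-arrangement $\A$ of three distinct lines through the origin in $\Komplex^2$ we have $\LA{\A} = \{V,H_1,H_2,H_3,\{0\}\}$, $\mu(V) = 1$, $\mu(H_i) = -1$ for $1 \leq i \leq 3$, and $\mu(\{0\}) = -(1-3) = 2$, hence
\begin{equation*}
\CharPolyA{\A} = t^2 - 3t + 2 = (t-1)(t-2).
\end{equation*}
That this polynomial factors over the integers is no coincidence, see Theorem \ref{thm:A_free_factZ} below.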
An element $X \in \LA{\A}$ is called \emph{modular} if $X + Y \in \LA{\A}$ for all $Y \in \LA{\A}$.
An arrangement $\A$ with $r(\A) = \ell$ is called \emph{supersolvable} if the intersection lattice $\LA{\A}$ is supersolvable, i.e.\ it has
a maximal chain of modular elements $ V = X_0 \supsetneq X_1 \supsetneq \ldots \supsetneq X_\ell = T(\A)$,
$X_i \in \LA{\A}$ modular.
For example, an essential $3$-arrangement $\A$ is supersolvable if there exists an $X \in \LAq{\A}{2}$ which is
connected to every other $Y \in \LAq{\A}{2}$ by a suitable hyperplane $H \in \A$ (i.e.\ $X + Y \in \A$).
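For illustration (a standard small example, not itself appearing in the classification below), the essential $3$-arrangement $\A$ in $\Komplex^3$ consisting of the hyperplanes $\Kern{x}$, $\Kern{y}$, $\Kern{z}$, and $\Kern{x-y}$ is supersolvable: for $X = \Kern{x} \cap \Kern{y}$ we have $X + Y \in \A$ for every other $Y \in \LAq{\A}{2}$, and $V \supsetneq \Kern{x} \supsetneq X \supsetneq \{0\}$ is a maximal chain of modular elements.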
\subsection{Free Arrangements}
Let $S = \SymA{V}$ be the symmetric algebra of the dual space $V^*$ of $V$. If $x_1,\ldots,x_\ell$ is a basis of $V^*$,
then we identify $S$ with the polynomial ring $\PolyRing{\Komplex}{x_1,\ldots,x_\ell}$ in $\ell$ variables.
The algebra $S$ has a natural $\Ganz$-grading:
Let $S_p$ denote the $\Komplex$-subspace of $S$ of the homogeneous polynomials of degree $p$ ($p \in \Nat_{\geq 0}$),
then $S = \bigoplus_{p \in \Ganz} S_p$, where $S_p = 0$ for $p < 0$.
Let $\Derivation{S}$ be the $S$-module of $\Komplex$-derivations of $S$. It is a free $S$-module with basis
$D_1,\ldots,D_\ell$ where $D_i$ is the partial derivation $\ddxi{i}$.
We say that $\theta \in \Derivation{S}$ is \emph{homogeneous of polynomial degree} $p$ provided
$\theta = \sum_{i=1}^\ell f_i D_i$, with $f_i \in S_p$ for each $1 \leq i \leq \ell$.
In this case we write $\pdeg{\theta} = p$.
With this definition we get a $\Ganz$-grading for the $S$-module $\Derivation{S}$:
Let $\Derivation{S}_p$ be the $\Komplex$-subspace of $\Derivation{S}$ consisting of
all homogeneous derivations of polynomial degree $p$, then
$\Derivation{S} = \bigoplus_{p \in \Ganz} \Derivation{S}_p$.
\begin{definition}
Let $\A$ be an arrangement of hyperplanes in $V$. Then for $H \in \A$ we fix $\alpha_H \in V^*$ with $H = \Kern{\alpha_H}$.
A \emph{defining polynomial} $\PolyA{\A}$ is given by $\PolyA{\A} := \prod_{H \in \A} \alpha_H \in S$.
The \emph{module of $\A$-derivations} of $\A$ is defined by
\begin{equation*}
\DerA{\A} := \DerA{\PolyA{\A}} := \{ \theta \in \Derivation{S} \mid \theta(\PolyA{\A}) \in \PolyA{\A}S \}.
\end{equation*}
We say that $\A$ is \emph{free} if the module of $\A$-derivations is a free $S$-module.
\end{definition}
If $\A$ is a free arrangement, let $\{ \theta_1, \ldots, \theta_\ell \}$ be a homogeneous basis for $\DerA{\A}$.
Then the polynomial degrees of the $\theta_i$, $i \in \{1,\ldots,\ell\}$, are called the \emph{exponents} of $\A$.
We write $\expAA{\A} := \{\{\pdeg{\theta_1},\ldots,\pdeg{\theta_\ell}\}\}$, where the notation $\{\{ * \}\}$ emphasizes the fact
that $\expAA{\A}$ is in general a multiset.
The multiset $\expAA{\A}$ is uniquely determined by $\A$, see also \cite[Def.\ 4.25]{orlik1992arrangements}.
If $\A$ is free with exponents $\expA{\A}{b_1,\ldots,b_\ell}$, then by \cite[Thm.\ 4.23]{orlik1992arrangements}:
\begin{equation}
\sum_{i=1}^\ell b_i = \Betrag{\A}. \label{Sum_exp}
\end{equation}
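For instance, the reflection arrangement $\A(\crg{31})$, which plays a central role below, is free with
$\expA{\A(\crg{31})}{1,13,17,29}$ and $\Betrag{\A(\crg{31})} = 60$, and indeed $1+13+17+29 = 60$.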
The following proposition shows that the product construction mentioned before is compatible with the notion
of freeness:
\begin{proposition}[{\cite[Prop.~4.28]{orlik1992arrangements}}]\label{prop:Prod_free}
Let $(\A_1,V_1)$ and $(\A_2,V_2)$ be two arrangements.
The product arrangement $(\A_1 \times \A_2, V_1 \oplus V_2)$ is free if and only if both $(\A_1,V_1)$ and
$(\A_2,V_2)$ are free. In this case
\begin{equation*}
\expAA{\A_1 \times \A_2} = \expAA{\A_1} \cup \expAA{\A_2}.
\end{equation*}
\end{proposition}
Throughout our exposition we will frequently use the following important results about free arrangements.
\begin{theorem}[Addition-Deletion {\cite[Thm.\ 4.51]{orlik1992arrangements}}]\label{thm:Addition_Deletion}
Let $\A$ be a hyperplane arrangement and $\A \neq \emptA{\ell}$. Let $(\A,\A',\A'')$ be a triple.
Any two of the following statements imply the third:
\begin{eqnarray*}
\A \text{ is free with } \expAA{\A} &=& \{\{b_1,\ldots,b_{\ell-1},b_\ell\}\}, \\
\A' \text{ is free with } \expAA{\A'} &=& \{\{b_1,\ldots,b_{\ell-1},b_\ell-1\}\}, \\
\A'' \text{ is free with } \expAA{\A''} &=& \{\{b_1,\ldots,b_{\ell-1}\}\}.
\end{eqnarray*}
\end{theorem}
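As a small worked instance of the theorem, consider again the supersolvable $3$-arrangement $\A$ with hyperplanes $\Kern{x}$, $\Kern{y}$, $\Kern{z}$, $\Kern{x-y}$ from above and take $H_0 = \Kern{x-y}$: here $\A$ is free with $\expA{\A}{1,1,2}$, the deletion $\A'$ is the Boolean arrangement with $\expA{\A'}{1,1,1}$, and the restriction $\A''$ consists of two lines in $H_0$ with $\expA{\A''}{1,1}$, matching the pattern of the theorem with $(b_1,b_2,b_3) = (1,1,2)$.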
Choose a hyperplane $H_0 = \ker{\alpha_0} \in \A$. Let $\bar{S} = S / \alpha_0 S$.
If $\theta \in \DerA{\A}$, then $\theta(\alpha_0 S) \subseteq \alpha_0 S$.
Thus we may define $\Abb{\bar{\theta}}{\bar{S}}{\bar{S}}$ by $\bar{\theta}(f + \alpha_0 S) = \theta(f) + \alpha_0 S$.
Then $\bar{\theta} \in \DerA{\A''}$, see \cite[Def.\ 4.43, Prop.\ 4.44]{orlik1992arrangements}.
\begin{theorem}[{\cite[Thm.\ 4.46]{orlik1992arrangements}}]\label{thm:A_AoH_exp}
Suppose $\A$ and $\A'$ are free arrangements with $\A' := \A \setminus \{ H_0 \}$, $H_0 := \ker{\alpha_0}$.
Then there is a basis $\{ \theta_1,\ldots,\theta_\ell\}$ for $\DerA{\A'}$ such that
\begin{enumerate}
\item[(1)] $\{ \theta_1,\ldots,\theta_{i-1},\alpha_0\theta_i,\theta_{i+1},\ldots,\theta_\ell\}$ is a basis for $\DerA{\A}$,
\item[(2)] $\{ \bar{\theta}_1,\ldots,\bar{\theta}_{i-1},\bar{\theta}_{i+1},\ldots,\bar{\theta}_\ell \}$ is a basis for $\DerA{\A''}$.
\end{enumerate}
\end{theorem}
\begin{theorem}[Factorization {\cite[Thm.\ 4.137]{orlik1992arrangements}}]\label{thm:A_free_factZ}
If $\A$ is a free arrangement with $\expA{\A}{b_1,\ldots,b_\ell}$, then
\begin{equation*}
\CharPolyA{\A} = \prod_{i=1}^\ell (t - b_i).
\end{equation*}
\end{theorem}
A very recent and remarkable result, due to Abe, connects the division of characteristic polynomials with freeness:
\begin{theorem}[Division theorem {\cite[Thm.~1.1]{AbeDivFree2015}}]\label{thm:Div_Thm}
Let $\A$ be a hyperplane arrangement and $\A \neq \emptA{\ell}$. Assume that there is a hyperplane $H \in \A$ such that
$\CharPolyA{\A^H}$ divides $\CharPolyA{\A}$ and $\A^H$ is free. Then $\A$ is free.
\end{theorem}
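For example, anticipating the invariants used in Subsection \ref{subsec:A29_A31}: for $\A = \A(\crg{31})$ and any $H \in \A$ the restriction $\A^H$ is free with $\expA{\A^H}{1,13,17}$, so that $\CharPolyA{\A^H} = (t-1)(t-13)(t-17)$ divides $\CharPolyA{\A} = (t-1)(t-13)(t-17)(t-29)$, and the division theorem recovers the freeness of $\A$.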
\subsection{Inductively, recursively and divisionally free arrangements}
Theorem \ref{thm:Addition_Deletion} motivates the following two definitions of classes of free arrangements.
\begin{definition}[{\cite[Def.\ 4.53]{orlik1992arrangements}}] \label{def:IF}
The class $\IFC$ of \emph{inductively free} arrangements is the smallest class of arrangements which satisfies
\begin{enumerate}
\item The empty $\ell$-arrangement $\emptA{\ell}$ is in $\IFC$ for $\ell \geq 0$,
\item if there exists a hyperplane $H_0 \in \A$ such that $\A'' \in \IFC$, $\A' \in \IFC$, and
$\expAA{\A''} \subset \expAA{\A'}$, then $\A$ also belongs to $\IFC$.
\end{enumerate}
\end{definition}
\begin{example}\label{ex:SS_IF}
All supersolvable arrangements are inductively free by \cite[Thm.\ 4.58]{orlik1992arrangements}.
\end{example}
\begin{definition}[{\cite[Def.\ 4.60]{orlik1992arrangements}}] \label{def:RF}
The class $\RFC$ of \emph{recursively free} arrangements is the smallest class of arrangements which satisfies
\begin{enumerate}
\item The empty $\ell$-arrangement $\emptA{\ell}$ is in $\RFC$ for $\ell \geq 0$,
\item if there exists a hyperplane $H_0 \in \A$ such that $\A'' \in \RFC$, $\A' \in \RFC$, and
$\expAA{\A''} \subset \expAA{\A'}$, then $\A$ also belongs to $\RFC$,
\item if there exists a hyperplane $H_0 \in \A$ such that $\A'' \in \RFC$, $\A \in \RFC$, and
$\expAA{\A''} \subset \expAA{\A}$, then $\A'$ also belongs to $\RFC$.
\end{enumerate}
\end{definition}
Note that we have:
\begin{equation*}
\IFC \subsetneq \RFC \subsetneq \{ \text{ \emph{free arrangements} } \},
\end{equation*}
where the properness of the last inclusion was only recently established by Cuntz and Hoge in \cite{MR3272729}.
Furthermore, similarly to the class $\IFC$ of inductively free arrangements, Theorem \ref{thm:Div_Thm} motivates the following class
of free arrangements:
\begin{definition}[{\cite[Def.~4.3]{AbeDivFree2015}}]\label{def:DF}
The class $\DFC$ of \emph{divisionally free} arrangements is the smallest class of arrangements which satisfies
\begin{enumerate}
\item If $\A$ is an $\ell$-arrangement, $\ell \leq 2$, or $\A = \emptA{\ell}$, $\ell \geq 3$,
then $\A$ belongs to $\DFC$,
\item if there exists a hyperplane $H_0 \in \A$ such that $\A'' \in \DFC$ and
$\CharPolyA{\A^{H_0}} \mid \CharPolyA{\A}$, then $\A$ also belongs to $\DFC$.
\end{enumerate}
\end{definition}
Abe showed that the new class of divisionally free arrangements properly contains the class of inductively free
arrangements:
\begin{equation*}
\IFC \subsetneq \DFC,
\end{equation*}
by \cite[Thm.~1.6]{AbeDivFree2015}.
He conjectured that there are arrangements which are divisionally free but not recursively free.
Our classification of recursively free reflection arrangements in this paper provides examples to confirm his conjecture
(see Theorem \ref{thm:ConjAbe} and Section \ref{sec:Abes_Conjecture}).
The next easy lemma will be useful to disprove the recursive freeness of a given arrangement:
\begin{lemma}\label{lem:A_u_H}
Let $\A$ and $\A' = \A \setminus \{H\}$ be free arrangements
and $L := \LA{\A'}$.
Let $P_H := \{ X \in L_2 \mid X \subseteq H \} = \A'' \cap L_2$,
then $\sum_{X \in P_H} (\Betrag{\A'_{X}} - 1) \in \expAA{\A'}$,
and if $\A'$ is irreducible then $\sum_{X \in P_H} (\Betrag{\A'_{X}} - 1) \neq 1$.
\end{lemma}
\begin{proof}
By Theorem \ref{thm:A_AoH_exp} and (\ref{Sum_exp}) there is a $b \in \expAA{\A'}$, such that
$\Betrag{\A^{H}} = \Betrag{\A'} - b$ and if $\A'$ is irreducible, then $b \neq 1$.
Since $\Betrag{\A^{H}} = \Betrag{\A'} - \sum_{X \in P_H} (\Betrag{\A'_X} -1)$,
the claim directly follows.
\end{proof}
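To illustrate how the lemma is applied (again with the small supersolvable example from above, which is unrelated to the reflection arrangements considered later): let $\A'$ be the free $3$-arrangement with hyperplanes $\Kern{x}$, $\Kern{y}$, $\Kern{z}$, $\Kern{x-y}$ and $\expA{\A'}{1,1,2}$, and let $H = \Kern{x+y}$. The only $X \in L_2$ with $X \subseteq H$ is $X = \Kern{x} \cap \Kern{y}$, with $\Betrag{\A'_X} = 3$, so the sum in the lemma equals $2 \in \expAA{\A'}$; this is consistent with the freeness of $\A = \A' \cup \{H\}$. Conversely, whenever the sum fails to be an exponent of a free arrangement $\A'$, the lemma rules out the freeness of $\A' \cup \{H\}$.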
The next two results are due to Hoge, R\"ohrle, and Schauenburg, \cite{HRS15}.
\begin{proposition}[{\cite[Thm.\ 1.1]{HRS15}}]\label{prop:Arf_AXrf}
Let $\A$ be a recursively free arrangement and $X \in \LA{\A}$.
Then $\A_X$ is recursively free.
\end{proposition}
Hoge and R\"ohrle have shown in \cite[Prop.\ 2.10]{2012arXiv1208.3131H}
that the product construction is compatible with the notion of inductively free arrangements.
The following refines the statement for recursively free arrangements:
\begin{proposition}[{\cite[Thm.\ 1.2]{HRS15}}]\label{prop:Prod_RF}
Let $(\A_1,V_1), (\A_2,V_2)$ be two arrangements. Then $\A = (\A_1 \times \A_2,V_1 \oplus V_2)$
is recursively free if and only if both $(\A_1,V_1)$ and $(\A_2,V_2)$ are recursively free and in that case the multiset of
exponents of $\A$ is given by $\expAA{\A} = \expAA{\A_1} \cup \expAA{\A_2}$.
\end{proposition}
\subsection{Reflection arrangements}
Let $V = \Komplex^\ell$ be a finite dimensional complex vector space.
An element $s \in \GeneralLinearGroup{V}$ of finite order with fixed point set
$V^s = \{ x \in V \mid sx =x \} =H_s$ a hyperplane in $V$ is called a \emph{reflection}.
A finite subgroup $W \leq \GeneralLinearGroup{V}$ which is generated by reflections
is called a \emph{finite complex reflection group}.
The finite complex reflection groups were classified by Shephard and Todd, \cite{ST_1954_fcrg}.
Let $W \leq \GeneralLinearGroup{V}$ be a finite complex reflection group acting on the vector space $V$.
The \emph{reflection arrangement} $(\A(W),V)$ is the arrangement of hyperplanes
consisting of all the reflecting hyperplanes of reflections of $W$.
Terao \cite{Terao1980} has shown that each reflection arrangement is free, see also \cite[Prop.~6.59]{orlik1992arrangements}.
The complex reflection group $W$ is called \emph{reducible} if $V = V_1 \oplus V_2$ where $V_i$ are stable under $W$.
Then the restriction $W_i$ of $W$ to $V_i$ is a reflection group in $V_i$.
In this case the reflection arrangement $(\A(W),V)$ is the product of the two
reflection arrangements $(\A(W_1),V_1)$ and $(\A(W_2),V_2)$.
The complex reflection group $W$ is called \emph{irreducible} if it is not reducible,
and then the reflection arrangement $\A(W)$ is irreducible.
For later purposes, we now look at the action of a finite complex reflection group $W$ on its
associated reflection arrangement $\A(W)$ and (reflection) subarrangements $\A(W') \subseteq \A(W)$
corresponding to reflection subgroups $W' \leq W$.
Let $W$ be a finite complex reflection group and $\A := \A(W)$.
Then $W$ acts on the set $\SAA := \{ \B \mid \B \subseteq \A \}$ of subarrangements of $\A$
by $w.\B = \{ w.H \mid H \in \B\}$ for $\B \in \SAA$.
The \emph{(setwise) stabilizer} $S_\B$ of $\B$ in $W$ is defined by $S_\B = \{ w \in W \mid w.\B=\B \}$.
We denote by $W.\B = \{w.\B \mid w \in W\} \subseteq \SAA$ the orbit of $\B$ under $W$.
The following lemma is similar to a statement from \cite[Lem.~6.88]{orlik1992arrangements}.
\begin{lemma}\label{lem:Action_W_RSA}
Let $W$ be a finite complex reflection group, $\A := \A(W)$, and $\SAA = \{ \B \mid \B \subseteq \A \}$.
Let $\B := \A(W') \in \SAA$ be a reflection subarrangement for a reflection subgroup $W' \leq W$.
Then $S_\B = N_W(W')$ and $\Betrag{W.\B} =
\Betrag{W:S_\B} = \Betrag{W:N_W(W')}$.
\end{lemma}
\begin{proof}
Let $W$, $W'$, $\A$, and $\B$ be as above.
Let $S_\B$ be the stabilizer of $\B$ in $W$.
It is clear by the Orbit-Stabilizer-Theorem that $\Betrag{W.\B} = \Betrag{W:S_\B}$.
For a reflection $r \in W'$ let $H_r \in \B$ be its reflecting hyperplane; then $w.H_r = H_{w^{-1}rw}$.
So we have
\begin{align*}
S_\B = & \quad \{ w \in W \mid w.H_r \in \B \text{ for all reflections } r \in W' \} \\
= & \quad \{ w \in W \mid w^{-1}rw \in W' \text{ for all reflections } r \in W'\} \\
= & \quad N_W(W').
\end{align*}
The last equality holds because $W'$ is by definition generated by the reflections it contains,
and the subgroup of $W$ whose conjugation action preserves this set of generators is precisely the normalizer of $W'$.
\end{proof}
The following theorem, proved by Barakat, Cuntz, Hoge and R\"ohrle, provides a classification of all inductively free
reflection arrangements and is our starting point for inspecting the recursive freeness of reflection arrangements:
\begin{theorem}[{\cite[Thm.~1.1]{2012arXiv1208.3131H}, \cite[Thm.~5.14]{MR2854188}}]\label{thm:RA_IF}
For $W$ a finite complex reflection group, the reflection arrangement $\A(W)$ is inductively free
if and only if $W$ does not admit an irreducible factor isomorphic to a monomial group $\crgM{r}{r}{\ell}$ for $r,\ell \geq 3$,
$\crg{24}$, $\crg{27}$, $\crg{29}$, $\crg{31}$, $\crg{33}$, or $\crg{34}$.
\end{theorem}
Thus, to prove Theorem \ref{thm:RFcrArr}, we only have to check
the non\--in\-duc\-tively free cases from Theorem \ref{thm:RA_IF} since inductive freeness implies recursive freeness.
\section{Proof of Theorem \ref{thm:RFcrArr}}
Thanks to Proposition \ref{prop:Prod_RF}, the proof of Theorem \ref{thm:RFcrArr} reduces to the case
when $\A(W)$ respectively $W$ are irreducible.
We consider the different irreducible reflection arrangements, provided by Theorem \ref{thm:RA_IF},
which are not inductively free, in turn.
\subsection{The reflection arrangements $\A(\crgM{r}{r}{\ell})$, $r,\ell \geq3$}\label{sec3}
For an integer $r \geq 2$ let $\theta = \exp{(2\pi i / r)}$, and $C(r)$ the cyclic group generated by $\theta$.
The reflection arrangement $\A(W)$ with $W = \crgM{r}{r}{\ell}$ contains the hyperplanes
\begin{equation*}
H_{i,j}(\zeta) := \Kern{ x_i - \zeta x_j},
\end{equation*}
with $i,j \leq \ell$ and $i \neq j$, $\zeta \in C(r)$,
and if $W$ is the full monomial group $\crgM{r}{1}{\ell}$,
then $\A(\crgM{r}{1}{\ell})$
additionally contains the coordinate hyperplanes $E_i := \Kern{x_i}$, \cite[Ch.~6.4]{orlik1992arrangements}.
To show that the reflection arrangements $\A(\crgM{r}{r}{\ell})$ for $r,\ell \geq 3$ are
recursively free, we need the intermediate arrangements $\A_\ell^k(r)$ with
$\A(\crgM{r}{r}{\ell}) \subseteq \A_\ell^k(r) \subseteq \A(\crgM{r}{1}{\ell})$.
They are defined as follows:
\begin{equation*}
\A_\ell^k(r) := \A(\crgM{r}{r}{\ell}) \dot{\cup} \{ E_1,\ldots,E_k\},
\end{equation*}
and their defining polynomial is given by
\begin{equation*}
Q(\A_\ell^k(r)) = x_1\cdots x_k \prod_{\substack{1 \leq i < j \leq \ell \\ 0 \leq n < r}} (x_i - \theta^n x_j).
\end{equation*}
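For instance, for $r = 2$, $\ell = 3$ and $k = 1$ (so that $\theta = -1$) this reads
\begin{equation*}
Q(\A_3^1(2)) = x_1 (x_1 - x_2)(x_1 + x_2)(x_1 - x_3)(x_1 + x_3)(x_2 - x_3)(x_2 + x_3),
\end{equation*}
an arrangement of $7$ hyperplanes with $\A(\crgM{2}{2}{3}) \subseteq \A_3^1(2) \subseteq \A(\crgM{2}{1}{3})$.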
The following result by Amend, Hoge and R\"ohrle immediately implies the recursive freeness of $\A(\crgM{r}{r}{\ell})$,
for $r,\ell \geq 3$.
\begin{theorem}[{\cite[Thm.~3.6]{MR3250448}}]\label{thm:Akl_rf}
Suppose $r \geq 2$, $\ell \geq 3$ and $0 \leq k \leq \ell$.
Then $\A_\ell^k(r)$ is recursively free.
\end{theorem}
\begin{corollary}\label{coro:Grrl_RF}
Let $W$ be the finite complex reflection group $W = \crgM{r}{r}{\ell}$.
Then the reflection arrangement $\A := \A(W)$ is recursively free.
\end{corollary}
\begin{proof}
We have $\A \cong \A_\ell^0(r)$ and by
Theorem \ref{thm:Akl_rf}, $\A_\ell^0(r)$ is recursively free.
\end{proof}
\subsection{The reflection arrangement $\A(\crg{24})$}\label{subsec:A24}
We show that the reflection arrangement of the finite complex reflection group $\crg{24}$ is
recursively free by constructing a so-called supersolvable resolution of the arrangement (see also \cite[Ch.~3.6]{Zielger87}),
and making sure that in each addition-step
of a new hyperplane the resulting arrangements and restrictions are free with suitable exponents.
As a supersolvable arrangement is always inductively free (Example \ref{ex:SS_IF}),
it follows that $\A(\crg{24})$ is recursively free.
\begin{lemma}
Let $W$ be the complex reflection group $W = \crg{24}$.
Then the reflection arrangement $\A = \A(W)$ is recursively free.
\end{lemma}
\begin{proof}
Let $\omega := -\frac{1}{2}(1+i\sqrt{7})$, then the reflecting hyperplanes of $\A$ can be defined by the following
linear forms (see also \cite[Ch.\ 7, 6.2]{lehrer2009unitary}):
\begin{align*}
\A = \quad \{ \, &(1,0,0)^\perp,(0,1,0)^\perp,(0,0,1)^\perp,(1,1,0)^\perp,(-1,1,0)^\perp, \\
& (1,0,1)^\perp,(-1,0,1)^\perp,(0,1,1)^\perp,(0,-1,1)^\perp,(\omega,\omega,2)^\perp, \\
& (-\omega,\omega,2)^\perp,(\omega,-\omega,2)^\perp,(-\omega,-\omega,2)^\perp,(\omega,2,\omega)^\perp, \\
& (-\omega,2,\omega)^\perp,(\omega,2,-\omega)^\perp,(-\omega,2,-\omega)^\perp,(2,\omega,\omega)^\perp, \\
& (2,-\omega,\omega)^\perp,(2,\omega,-\omega)^\perp,(2,-\omega,-\omega)^\perp \, \}.
\end{align*}
The exponents of $\A$ are $\expA{\A}{1,9,11}$.
If we define
\begin{align*}
\{ H_1,\ldots,H_{12} \} := \quad \{ \, &(\omega^2,\omega,0)^\perp,(-\omega^2,\omega,0)^\perp,(\omega,\omega^2,0)^\perp, \\
&(-\omega,\omega^2,0)^\perp,(2-\omega,\omega,0)^\perp, (-2+\omega,\omega,0)^\perp, \\
&(\omega,2-\omega,0)^\perp,(-\omega,2-\omega,0)^\perp,(\omega,2,0)^\perp, \\
&(-\omega,2,0)^\perp, (2,\omega,0)^\perp,(-2,\omega,0)^\perp \, \},
\end{align*}
and the arrangements $\A_j := \A \dot{\cup} \{ H_1,\ldots,H_j \}$ for $1 \leq j \leq 12$,
then
\begin{equation*}
X = (1,0,0)^\perp \cap (0,1,0)^\perp \cap (1,1,0)^\perp \cap (-1,1,0)^\perp \cap \bigcap_{j=1}^{12} H_j \in \LA{\A_{12}}
\end{equation*}
is a rank $2$ modular element, and $\A_{12}$ is supersolvable.
In each step, $\A_j$ is free, $\A_j^{H_j}$ is inductively free (since $\A_j^{H_j}$ is a $2$-arrangement),
and $\expAA{\A_j^{H_j}} \subseteq \expAA{\A_j}$.
The exponents of the arrangements $\A_j$ and $\A_j^{H_j}$ are listed in Table \ref{tab_exp_Ai}.
\begin{table}
\begin{tabular}{l l l}
\hline
$j$ & $\expAA{\A_j}$ & $\expAA{\A_j^{H_j}}$ \\
\hline
\hline
1 & 1,10,11 & 1,11 \\
2 & 1,11,11 & 1,11 \\
3 & 1,11,12 & 1,11 \\
4 & 1,11,13 & 1,11 \\
5 & 1,12,13 & 1,13 \\
6 & 1,13,13 & 1,13 \\
7 & 1,13,14 & 1,13 \\
8 & 1,13,15 & 1,13 \\
9 & 1,14,15 & 1,15 \\
10 & 1,15,15 & 1,15 \\
11 & 1,15,16 & 1,15 \\
12 & 1,15,17 & 1,15 \\
\hline
\\
\end{tabular}
\caption{The exponents of the free arrangements $\A_j$ and $\A_j^{H_j}$.}\label{tab_exp_Ai}
\end{table}
Since by Example \ref{ex:SS_IF} a supersolvable arrangement is inductively free, $\A$ is recursively free.
\end{proof}
We found the set of hyperplanes $\{ H_1,\ldots,H_{12} \}$
by ``connecting'' a suitable $X \in \LAq{\A}{2}$ to the other $Y \in \LAq{\A}{2}$
via the addition of new hyperplanes, such that $X$ becomes a modular element in the resulting intersection lattice,
subject to the condition that each addition of a new hyperplane results in a free arrangement
(compare with \cite[Ex.~4.59]{orlik1992arrangements}).
\subsection{The reflection arrangement $\A(\crg{27})$}\label{subsec:A27}
In \cite[Remark~3.7]{MR3272729} Cuntz and Hoge have shown that the reflection arrangement $\A(\crg{27})$ is not
recursively free. In particular they have shown that there is no hyperplane which can be added to or removed from $\A(\crg{27})$
such that the resulting arrangement is free.
\subsection{The reflection arrangements $\A(\crg{29})$ and $\A(\crg{31})$}\label{subsec:A29_A31}
In \cite[Lemma~3.5]{2012arXiv1208.3131H} Hoge and R\"ohrle showed that the reflection arrangement $\A(\crg{31})$
of the exceptional finite complex reflection group $\crg{31}$ is not inductively free by testing several cases with the computer.
In this part we will see that the reflection arrangement $\A(\crg{31})$ is additionally not recursively free and
as a consequence the closely related reflection subarrangement $\A(\crg{29})$ is also not recursively free.
Furthermore we obtain a new computer-free proof that $\A(\crg{31})$ is not inductively free.
\begin{theorem}\label{thm:G31_G29_nRF}
Let $\A = \A(W)$ be the reflection arrangement with $W$ isomorphic to one of the finite complex reflection groups
$\crg{29}$, $\crg{31}$. Then $\A$ is not recursively free.
\end{theorem}
We will prove the theorem in two parts.
In the first part, we will characterize certain free subarrangements of $\A(\crg{31})$ which we can obtain out of $\A(\crg{31})$ by
successive deletion of hyperplanes such that all the arrangements in between are also free.
We call such arrangements \emph{\FFSAsp}.
Then we will investigate the relation between the two reflection arrangements $\A(\crg{29})$ and $\A(\crg{31})$,
and obtain that $\A(\crg{29})$ is the smallest of these \FFSAs of $\A(\crg{31})$.
This yields a new proof, that $\A(\crg{31})$ is not inductively free (since inductive freeness implies that the empty arrangement
is a \FFSAp).
In the second part, we will show that if $\tilde{\A}$ is a \FFSA of $\A(\crg{31})$,
there is no possible way to obtain a free arrangement out of $\tilde{\A}$
by adding a new hyperplane which is not already contained in $\A(\crg{31})$.
This will conclude the proof of Theorem \ref{thm:G31_G29_nRF}.
\begin{definition}\label{def:ArrG31}
Let $i = \sqrt{-1}$. The arrangement $\A(\crg{31})$ can be defined as the union of the following collections of hyperplanes:
\begin{align}
\begin{split}
\A(\crg{31}) := \, %
& \{ \Kern{x_p- i^k x_q} \mid 0 \leq k \leq 3, 1 \leq p < q \leq 4 \} \, \dot{\cup} \\
& \{ \alpha^\perp \mid \alpha \in \crgM{4}{4}{4}.(1,1,1,1) \} \, \dot{\cup} \\
& \{ (1,0,0,0)^\perp, (0,1,0,0)^\perp, (0,0,1,0)^\perp, (0,0,0,1)^\perp \} \, \dot{\cup} \\
& \{ \alpha^\perp \mid \alpha \in \crgM{4}{4}{4}.(-1,1,1,1) \}.
\end{split}
\end{align}
The first set contains the hyperplanes of the reflection arrangement $\Arr{\crgM{4}{4}{4}}$.
The second and the last set contain the hyperplanes defined by the linear forms in orbits of the group $\crgM{4}{4}{4}$.
The union of the first and the second set gives the $40$ hyperplanes of the reflection arrangement $\A(\crg{29})$.
In particular, $\A(\crg{29}) \subseteq \A(\crg{31})$, compare with \cite[Ch.\ 7, 6.2]{lehrer2009unitary}.
\end{definition}
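As a consistency check, we count the hyperplanes in this description: the first set contributes $4 \cdot \binom{4}{2} = 24$ hyperplanes, the second and fourth sets (the two orbits) contribute $16$ hyperplanes each, and the third set consists of the $4$ coordinate hyperplanes, so $\Betrag{\A(\crg{31})} = 24 + 16 + 4 + 16 = 60$, in accordance with (\ref{Sum_exp}) and $\expA{\A(\crg{31})}{1,13,17,29}$.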
\subsubsection{The free filtration subarrangements of $\A(\crg{31})$}
In this subsection we characterize certain free subarrangements of $\A(\crg{31})$ which we can obtain by successively removing
hyperplanes from $\A(\crg{31})$, the so called \emph{free filtration subarrangements}.
We will use this characterization in Subsection \ref{subsubsec:A29_A31_nRF_e}
to prove Theorem \ref{thm:G31_G29_nRF}.
Furthermore, along the way, we obtain another (computer-free) proof that the arrangement
$\A(\crg{31})$ cannot be inductively free
(recall Definition \ref{def:IF}) without checking all the cases for a possible inductive
chain but rather by examining the intersection lattices of certain subarrangements and using the fact that $\A(\crg{29})$
plays a ``special'' role among the free filtration subarrangements of $\A(\crg{31})$.
\begin{definition}\label{def:FFSA}
Let $\A$ be a free $\ell$-arrangement and $\tilde{\A} \subseteq \A$ a free sub\-arrangement.
A strictly decreasing sequence of free arrangements
\begin{equation*}
\A = \A_0 \supsetneq \A_1 \supsetneq \ldots \supsetneq \A_{n-1} \supsetneq \A_n = \tilde{\A}
\end{equation*}
is called a \emph{(finite) free filtration} from $\A$ to $\tilde{\A}$ if $\Betrag{\A_i} = \Betrag{\A}-i$ for each $i$.
If there exists a (finite) free filtration from $\A$ to $\tilde{\A}$,
we call $\tilde{\A}$ a \emph{\FFSAp}.
\end{definition}
The notion of free filtration was first introduced by Abe and Terao in \cite{Abe2015}.
Note that, since all the subarrangements $\A_i$ in the definition are free,
with Theorem \ref{thm:A_AoH_exp} the restrictions $\A_{i-1}^{H_i}$ are free and
we automatically have $\expAA{\A_{i-1}^{H_i}} \subseteq \expAA{\A_{i-1}}$
and $\expAA{\A_{i-1}^{H_i}} \subseteq \expAA{\A_{i}}$.
If $\A$ is an inductively free $\ell$-arrangement, then $\emptA{\ell}$ is a free filtration subarrangement.
The main result of this subsection is the following proposition which we will prove in several steps divided into
some lemmas.
\begin{proposition}\label{prop:G31_NIF}
Let $\A := \A(\crg{31})$ be the reflection arrangement of the finite complex reflection group $\crg{31}$.
Let $\tilde{\A}$ be a smallest (w.r.t. the number of hyperplanes)
free filtration subarrangement.
Then $\tilde{\A} \cong \A(\crg{29})$.
In particular $\A$, $\A(\crg{29})$ and all other free filtration subarrangements $\tilde{\A} \subseteq \A$
are not inductively free.
\end{proposition}
To prove Proposition \ref{prop:G31_NIF}, we will characterize all free filtration subarrangements of $\A(\crg{31})$
by certain combinatorial properties of their intersection lattices.
The next lemma gives a sufficient condition for $\tilde{\A} \subseteq \A(\crg{31})$ to be a free filtration subarrangement.
With an additional assumption on $\Betrag{\tilde{\A}}$, this condition is also necessary.
\begin{lemma}\label{lem:SubsetN_FFSA}
Let $\An{N} \subseteq \A :=\A(\crg{31})$ be a subcollection of hyperplanes
and $\tilde{\A} := \A \setminus \An{N}$. If $\An{N}$ satisfies
\begin{equation}\label{eq:SubsetN_FFSA}
\tag{$\ast$} \bigcup_{X \in \LAq{\An{N}}{2}} X \subseteq \bigcup_{H \in {\tilde{\A}}} H \text{,}
\end{equation}
then $\tilde{\A} \subseteq \A$ is a free filtration subarrangement,
with exponents $\expA{\tilde{\A}}{1,13,17,29-\Betrag{\An{N}}}$.
If furthermore $\Betrag{\An{N}} \leq 13$, then $\tilde{\A} \subseteq \A$ is a free filtration subarrangement if and only if
$\An{N}$ satisfies (\ref{eq:SubsetN_FFSA}).
\end{lemma}
\begin{proof}
We proceed by induction on $\Betrag{\An{N}}$.
We use the fact that $\crg{31}$ acts transitively on the hyperplanes of $\A$.
In particular, all the 3-arrangements $\A^H$ for $H \in \A$ are isomorphic and furthermore they are
free with exponents $\expA{\A^{H}}{1,13,17}$ (see \cite[App.\ C and App.\ D]{orlik1992arrangements}).
First let $\An{N} = \{ H\}$ consist of only a single hyperplane.
Since $\A$ is free with exponents $\expA{\A}{1,13,17,29}$,
the arrangement $\tilde{\A}= \A'$ is just a deletion with respect to $H$, hence free
by Theorem \ref{thm:Addition_Deletion}, and $\tilde{\A}$ is a \FFSA
with $\expA{\tilde{\A}}{1,13,17,28}$.
Together with $\An{N}$, each subcollection $\An{N'} = \An{N} \setminus \{K\}$, for a
$K \in \An{N}$, fulfills the assumption of the lemma.
By the induction hypothesis, $\B = \A \setminus \An{N'}$ is a \FFSA with
$\expA{\B}{1,13,17,29-\Betrag{\An{N'}}} = \{\{1,13,17,29-\Betrag{\An{N}}+1\}\}$.
Now condition (\ref{eq:SubsetN_FFSA})
just means that
$\Betrag{\B^K} = 31$, so $\An{B}^K \cong \A^H$ for any $H \in \A$ and is free with $\expA{\An{B}^K}{1,13,17}$.
Hence, again by Theorem \ref{thm:Addition_Deletion}, the deletion
$\An{B}' = \An{B} \setminus \{K\}$ is free
and thus $\tilde{\A} = \A \setminus \An{N} = \An{B}'$ is a \FFSA
with $\expA{\tilde{\A}}{1,13,17,29-\Betrag{\An{N}}}$.
Finally, let $\tilde{\A} = \A \setminus \An{N}$ be a \FFSA with $\Betrag{\An{N}} \leq 13$.
For an associated \FF $\A = \A_0 \supsetneq \ldots \supsetneq \A_n = \tilde{\A}$
with say $\A_i = \A_{i-1}' = \A_{i-1} \setminus \{H_i\}$ for $1 \leq i \leq n$,
we have $\Betrag{\A_{i-1}^{H_i}} = 31$.
So for $j < i$ we have $H_i \cap H_j \subseteq K$ for some $K \in \A_i$, and for $i=n$ this is exactly condition (\ref{eq:SubsetN_FFSA}).
\end{proof}
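As a quick arithmetic check on the exponents in Lemma \ref{lem:SubsetN_FFSA}: by (\ref{Sum_exp}) we must have $\Betrag{\tilde{\A}} = 60 - \Betrag{\An{N}} = 1 + 13 + 17 + (29 - \Betrag{\An{N}})$, which indeed holds. In particular, for $\Betrag{\An{N}} = 13$ we obtain $\expA{\tilde{\A}}{1,13,16,17}$, the exponents appearing in the proof of Lemma \ref{lem:FFSA_G31} below.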
Before we continue with the characterization of the \FFSAsp, we give a helpful partition of
the reflection arrangement $\A(\crg{31})$:
\begin{lemma}\label{lem:Part_AG31}
Let $\A = \A(\crg{31})$.
There are exactly $6$ subcollections $M_1,\ldots,M_6 \subseteq \A$, such that
$\A \setminus M_i \cong \A(\crg{29})$, $M_i \cap M_j \cong \A(A_1^4)$ and $M_i \cap M_j \cap M_k = \emptyset$ for
$1\leq i <j<k\leq 6$.
Thus we get a partition of $\A$ into $15$ disjoint subsets $\{ M_i \cap M_j \mid 1\leq i < j \leq 6 \} =: \F$
on which $\crg{31}$ acts transitively.
\end{lemma}
\begin{figure}[h!]
\setlength{\unitlength}{10mm}
\begin{picture}(12,7)
\put(2,6){\line(1,0){10}}
\put(2,5){\line(1,0){10}}
\put(4,4){\line(1,0){8}}
\put(6,3){\line(1,0){6}}
\put(8,2){\line(1,0){4}}
\put(10,1){\line(1,0){2}}
\put(12,6){\line(0,-1){5}}
\put(10,6){\line(0,-1){5}}
\put(8,6){\line(0,-1){4}}
\put(6,6){\line(0,-1){3}}
\put(4,6){\line(0,-1){2}}
\put(2,6){\line(0,-1){1}}
\put(2.2,5.4){$M_1 \cap M_2$}
\put(4.2,5.4){$M_1 \cap M_3$}
\put(6.2,5.4){$M_1 \cap M_4$}
\put(8.2,5.4){$M_1 \cap M_5$}
\put(10.2,5.4){$M_1 \cap M_6$}
\put(4.2,4.4){$M_2 \cap M_3$}
\put(6.2,4.4){$M_2 \cap M_4$}
\put(8.2,4.4){$M_2 \cap M_5$}
\put(10.2,4.4){$M_2 \cap M_6$}
\put(6.2,3.4){$M_3 \cap M_4$}
\put(8.2,3.4){$M_3 \cap M_5$}
\put(10.2,3.4){$M_3 \cap M_6$}
\put(8.2,2.4){$M_4 \cap M_5$}
\put(10.2,2.4){$M_4 \cap M_6$}
\put(10.2,1.4){$M_5 \cap M_6$}
\put(6.05,4.95){\line(1,0){1.9}}
\put(6.05,4.05){\line(1,0){1.9}}
\put(6.05,4.95){\line(0,-1){0.9}}
\put(7.95,4.95){\line(0,-1){0.9}}
\put(3,4.5){\circle{0.8}}
\put(2.75,4.4){$M_2$}
\put(3,6){\line(0,-1){1.1}}
\put(3.4,4.5){\line(1,0){8.6}}
\put(7,2.5){\circle{0.8}}
\put(6.75,2.4){$M_4$}
\put(7,6){\line(0,-1){3.1}}
\put(7.4,2.5){\line(1,0){4.6}}
\end{picture}
\caption{The partition of $\A$ into $15$ disjoint subsets $\F = \{M_i \cap M_j$, $1 \leq i < j \leq 6\}$, each consisting of $4$ hyperplanes.}%
\label{fig:Part_A_15}
\end{figure}
\begin{proof}
Let $W := \crg{31}$ and $W' := \crg{29} \leq W$.
Then $N_W(W') = W'$ and $\Betrag{W:W'} = 6$, so with Lemma \ref{lem:Action_W_RSA} there are exactly $6$
subarrangements, say $\B_1,\ldots,\B_6$ with $\B_i \cong \A(W') \subseteq \A$,
(respectively 6 conjugate reflection subgroups of $W$ isomorphic to $W'$).
Now we get the $M_i$ as $M_i = \A \setminus \B_i$ and
in particular the corresponding action of $W$ on the subcollections is transitive.
To see the claim about their intersections, we look at the orbits on $\A$ of different reflection subgroups of $W$
acting on the hyperplanes.
First $W'$ has 2 orbits $\An{O}_1 = \A(W')$, and $\An{O}_2 = \A \setminus \A(W') = M_i$ for an $i \in\{1,\ldots,6\}$.
Similarly, a subgroup $\tilde{W'} = g^{-1}W'g \neq W'$ conjugate to $W'$ also has 2 orbits
$\tilde{\An{O}}_1 = \A(\tilde{W'})$, and $\tilde{\An{O}}_2 = \A \setminus \A(\tilde{W'}) = M_j$ for some $j \in \{1,\ldots,6\} \setminus \{i\}$.
Now the intersection $W' \cap \tilde{W'}$ of these two conjugate subgroups is isomorphic to
$\crgM{4}{4}{4} \leq W$ and $\crgM{4}{4}{4}$ has two orbits $\An{O}_{21}$, $\An{O}_{22}$ on $\An{O}_2$ of size $16$ and $4$,
respectively two orbits $\tilde{\An{O}}_{21}$, $\tilde{\An{O}}_{22}$ on $\tilde{\An{O}}_2$ of size $16$ and $4$
(see Definition \ref{def:ArrG31}).
Because of the cardinalities of $\A(W')$ and $\A(\tilde{W'})$ we have $M_i \cap M_j = \An{O}_2 \cap \tilde{\An{O}}_2 \neq \emptyset$,
and $M_i \cap M_j = \An{O}_{22} = \tilde{\An{O}}_{22}$. Since the collection $M_i \cap M_j$
is stabilized by $\crgM{4}{2}{4} \geq \crgM{4}{4}{4}$, the lines orthogonal to
the hyperplanes in $M_i \cap M_j$ form the unique system of imprimitivity of $\crgM{4}{2}{4}$.
Hence we get $M_i \cap M_j \cong \A(A_1^4) = \{ \ker(x_i) \mid 1 \leq i \leq 4\}$.
Now let $W' = \crgM{4}{2}{4}$. Here we also have $N_W(W') = W'$, so $\Betrag{W:W'} = 15$,
and hence again with Lemma \ref{lem:Action_W_RSA} there are $15$ distinct
subarrangements isomorphic to $\A(W') \subseteq \A$.
Since each reflection subgroup of $W$ conjugate to $W'$ has a unique system of imprimitivity, consisting of the lines orthogonal to
the hyperplanes in $M_i \cap M_j$ for some $i,j \in \{1,\ldots,6\}$, $i \neq j$, and these systems are distinct, the $M_i \cap M_j$ are distinct
and disjoint.
Finally, each hyperplane in $\A$ belongs to a unique intersection $M_i \cap M_j$,
so these intersections form the partition $\F$ of $\A$.
Since $W$ acts transitively on $\A$, and interchanges the
systems of imprimitivity corresponding to the reflection subarrangements isomorphic to $\A(\crgM{4}{2}{4})$,
it acts transitively on $\F$.
\end{proof}
The partition $\F$ in Lemma \ref{lem:Part_AG31} can be visualized in a picture, see Figure \ref{fig:Part_A_15}.
In the above proof we used some facts about the actions and orders of complex reflection (sub)groups from the book
by Lehrer and Taylor, \cite{lehrer2009unitary}, (see in particular \cite[Ch.~8, 10.5]{lehrer2009unitary}).
Furthermore it will be helpful to know the distribution of the $\A_X, X \in \LAq{\A}{2}$ with respect to the partition
given by Lemma \ref{lem:Part_AG31}:
\begin{lemma}\label{lem:Part_A_X}
Let $H \in \A$, $X \in \A^H$, and $H \in \B_{ij} := M_i\cap M_j \in \F$ for some $1 \leq i < j \leq 6$.
For $\A_X$ there are $3$ cases:
\begin{enumerate}
\item $\A_X = \{H,K_1,\ldots,K_5\} \cong \A(\crgM{4}{2}{2})$ with $K_1 \in \B_{ij}$, $\{K_2,K_3\} \subseteq \B_{km} = M_k\cap M_m$,
and $\{K_4,K_5\} \subseteq \B_{pq} = M_p\cap M_q$, with $\{i,j,k,m,p,q\} = \{1,\ldots,6\}$.
\item $\A_X = \{H,K_1,K_2\} \cong \A(A_2)$ with $K_1 \in \B_{ik} = M_i \cap M_k$ and $K_2 \in \B_{jk} = M_j \cap M_k$
for some $k \in \{1,\ldots,6\} \setminus \{i,j\}$.
\item $\A_X = \{H,K\} \cong \A(A_1^2)$ with $K \in \B_{km} = M_k\cap M_m$ for some $k,m \in \{1,\ldots,6\} \setminus \{i,j\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows by explicitly writing down the partition $\F$ from Lemma \ref{lem:Part_AG31} with respect to Definition \ref{def:ArrG31}
and a simple computation.
\end{proof}
The following lemma provides the next step towards a complete characterization of the \FFSAs of $\A(\crg{31})$.
\begin{lemma}\label{lem:G29_G31_arbt_desc}
Let $\An{M} \subseteq \A := \A(\crg{31})$ be a subcollection, such that $\An{B} = \A \setminus \An{M} \cong \A(\crg{29})$.
Then for all $\An{N} \subseteq \An{M}$, $\tilde{\A} := \A \setminus \An{N}$ is a \FFSA with exponents
$\expA{\tilde{\A}}{ 1,13,17,29 - \Betrag{\An{N}} }$.
\end{lemma}
\begin{proof}
Let $\An{M} \subseteq \A$ such that $\An{B} = \A \setminus \An{M} \cong \A(\crg{29})$.
We claim that $\An{M}$ satisfies condition (\ref{eq:SubsetN_FFSA}), so with Lemma \ref{lem:SubsetN_FFSA},
$\An{B}$ is a \FFSAp. Furthermore, if $\An{M}$ satisfies condition (\ref{eq:SubsetN_FFSA}), so does every subcollection
$\An{N} \subseteq \An{M}$ and $\tilde{\A} := \A \setminus \An{N}$ is a \FFSA with exponents
$\expA{\tilde{\A}}{ 1,13,17,29 - \Betrag{\An{N}} }$.
Now let $H \in \An{M}$ be an arbitrary hyperplane and let $X \in \A^H$.
Then by Lemma \ref{lem:Part_A_X} there are three different cases:
\begin{eqnarray*}
\text{(1) } \Betrag{\A_X} &= 2 \text{,}& \A_X = \{ H,K \}, \\
\text{(2) } \Betrag{\A_X} &= 3 \text{,}& \A_X = \{ H,H',K \}, \\
\text{(3) } \Betrag{\A_X} &= 6 \text{,}& \A_X = \{ H,H',K_1,\ldots,K_4 \},
\end{eqnarray*}
with $H' \in \An{M}$ and $K,K_i \in \B \cong \A(\crg{29})$.
For arbitrary $H, H' \in \An{M}$ there is a hyperplane $K \in \B$ such that $H \cap H' = X \subseteq K$.
Hence $\An{M}$ satisfies condition (\ref{eq:SubsetN_FFSA}) and as mentioned before with Lemma \ref{lem:SubsetN_FFSA}
$\tilde{\A}$ is a \FFSA with exponents $\expA{\tilde{\A}}{ 1,13,17,29 - \Betrag{\An{N}} }$.
\end{proof}
The next lemma completes the characterization of the \FFSAs $\tilde{\A} \subseteq \A(\crg{31})$
and enables us to prove Proposition \ref{prop:G31_NIF}.
\begin{lemma}\label{lem:FFSA_G31}
Let $\A = \A(\crg{31})$.
A subarrangement $\A \setminus \An{N} = \tilde{\A} \subseteq \A$ is a \FFSA if and only if
\begin{enumerate}
\item $\A(\crg{29}) \subseteq \tilde{\A}$ \\
or
\item $\Betrag{\An{N}} \leq 13$ and $\An{N}$ satisfies (\ref{eq:SubsetN_FFSA}) from Lemma \ref{lem:SubsetN_FFSA}.
\end{enumerate}
In both cases the exponents of $\tilde{\A}$ are $\expA{\tilde{\A}}{1,13,17,29-\Betrag{\An{N}}}$.
\end{lemma}
\begin{proof}
Let $\tilde{\A} \subseteq \A$ be a subarrangement.
If $\tilde{\A}$ satisfies (1) then by Lemma \ref{lem:G29_G31_arbt_desc} it is a \FFSA and if $\tilde{\A}$ satisfies (2) then
by Lemma \ref{lem:SubsetN_FFSA} it is also a \FFSAp.
This gives one direction.
The other direction requires more effort.
The main idea is to use the partition $\F$ of $\A$ from Lemma \ref{lem:Part_AG31}, the distribution of the localizations
$\A_X$ with respect to this partition given by Lemma \ref{lem:Part_A_X}, and some counting arguments.
So let $\A \setminus \N' = \tilde{\A}' \subseteq \A$ be a subarrangement such that $\A(\crg{29}) \nsubseteq \tilde{\A}'$,
$\Betrag{\N'} \geq 14$, and suppose that $\tilde{\A}'$ is a \FFSAp.
Since $\tilde{\A}'$ is a \FFSAp, there has to be another \FFSAp, say $\tilde{\A} \supseteq \tilde{\A}'$,
$\tilde{\A} = \A \setminus \N$, such that $\Betrag{\N} = 13$.
By Lemma \ref{lem:SubsetN_FFSA} we then have $\bigcup_{X \in \LAq{\N}{2}} X \subseteq \bigcup_{H \in \tilde{\A}} H$
and $\expA{\tilde{\A}}{1,13,16,17}$.
We claim that there is no $H \in \tilde{\A}$ such that $\Betrag{\tilde{\A}^H} \in \{30,31\}$, which by Theorem \ref{thm:A_AoH_exp}
contradicts the fact that $\tilde{\A}'$ is a \FFSAp.
If $\A(\crg{29}) \subseteq \tilde{\A}$ then by Lemma \ref{lem:Part_AG31} there is an $i$, $1 \leq i \leq 6$, such that
$\N \subseteq M_i$.
After possibly renumbering the $M_i$ we may assume that $\N \subseteq M_1$.
Let $\B_{1j} = M_1 \cap M_j$, $2 \leq j \leq 6$ be the blocks of the partition of $M_1$ from
Lemma \ref{lem:Part_AG31}.
Since $\Betrag{\N} = 13$ we have $\B_{1j} \cap \N \neq \emptyset$, and there is a $k$ such that $\Betrag{\B_{1k} \cap \N} \geq 3$.
Since $\A(\crg{29}) \nsubseteq \tilde{\A}'$, we may assume that $H \notin M_1$.
But then, using Lemma \ref{lem:Part_A_X}, we see that $\Betrag{\tilde{\A}^H} < 30$ (because
$\N$ completely contains at least two localizations as in Lemma \ref{lem:Part_A_X}(2) and (3)), so $\tilde{\A}'$ is not free
by Theorem \ref{thm:A_AoH_exp} and in particular not a \FFSAp, contradicting our assumption.
If $\A(\crg{29}) \nsubseteq \tilde{\A}$, we claim that for such a \FFSA $\tilde{\A}$ with $\Betrag{\N} = 13$
there is an $H \in \A$, $H \in \B \in \F$ (see Lemma \ref{lem:Part_AG31}), such that
\begin{align}\label{N_13}
\N = \bigcup_{H' \in \B \setminus \{H\}} \A_{H \cap H'} \setminus \{H'\},
\end{align}
which enables us to describe $\tilde{\A}^K$ for each $K \in \tilde{\A}$.
So let $\tilde{\A} = \A \setminus \N$ be a \FFSA with $\A(\crg{29}) \nsubseteq \tilde{\A}$ and $\Betrag{\N} = 13$.
By Lemma \ref{lem:SubsetN_FFSA} $\N$ has to satisfy condition (\ref{eq:SubsetN_FFSA}).
Let $\F_\N := \{ \B \in \F \mid \N \cap \B \neq \emptyset \}$ be the blocks in the partition $\F$ of $\A$ containing the hyperplanes
from $\N$ and let $\B_{ab} := M_a \cap M_b \in \F$ ($a \neq b$, $a,b \in \{1,\ldots,6\}$).
First we notice that $\Betrag{\F_\N} \geq 4$ because $\Betrag{\N} = 13$.
Since $\A(\crg{29}) \nsubseteq \tilde{\A}$, by Lemma \ref{lem:Part_AG31} we have one of the following cases
\begin{enumerate}
\item there are $\B_{ij}, \B_{kl} \in \F_\N$, such that $\Betrag{\{i,j,k,l\}}=4$,
\item there are $\B_{ij}, \B_{ik}, \B_{jk} \in \F_\N$, such that $\Betrag{\{i,j,k\}}=3$.
\end{enumerate}
But since $\Betrag{\F_\N} \geq 4$, in case (2) there is a $\B_{ab} \in \F_\N$ with $\{a,b\} \nsubseteq \{i,j,k\}$,
so we are again in case (1) (compare with Figure \ref{fig:Part_A_15}).
Hence (possibly after renumbering the $M_i$) we have $\B_{12}, \B_{34} \in \F_\N$.
By the distribution of the simply intersecting hyperplanes in $\A$ with respect to $\F$ (Lemma \ref{lem:Part_A_X}(3))
and by condition (\ref{eq:SubsetN_FFSA}) we further have $\Betrag{\N\cap\B_{12}} \leq 2$, $\Betrag{\N\cap\B_{34}} \leq 2$
resulting in $\Betrag{\F_\N} \geq 5$.
Next, suppose that for all $\B_{ab} \in \F_\N$ we have $\{a,b\} \subseteq \{1,2,3,4\}$, so in particular $\N \subseteq \A(\crgM{4}{4}{4})$
(see Figure \ref{fig:Part_A_15}, Definition \ref{def:ArrG31} and Lemma \ref{lem:Part_AG31}).
Then because of $\Betrag{\N\cap\B_{12}} \leq 2$, $\Betrag{\N\cap\B_{34}} \leq 2$, $\Betrag{\N} = 13$, and
$\Betrag{\F_\N} \geq 5$ we find $\B_{1a}, \B_{2b} \in \F_\N$, $a,b \in \{3,4\}$, such that $\Betrag{(\B_{1a} \cup \B_{2b}) \cap \N} \geq 5$.
But this contradicts condition (\ref{eq:SubsetN_FFSA}) by Lemma \ref{lem:Part_A_X}(2).
So there is a $\B_{ab} \in \F_\N$ with $\{a,b\} \nsubseteq \{1,2,3,4\}$.
Now for $\B_{ab}$ there are again two possible cases
\begin{enumerate}
\item $a=5$ and $b=6$,
\item $a \in \{1,2,3,4\}$ and $b \in \{5,6\}$.
\end{enumerate}
In the first case, by Lemma \ref{lem:Part_A_X}(3) and condition (\ref{eq:SubsetN_FFSA}), we then have $\Betrag{\N\cap\B} \leq 2$
for all $\B \in \F_\N$, so $\Betrag{\F_\N} \geq 7$.
Hence (after renumbering the $M_i$ once more) we may assume that we are in the second case.
In the second case, again by Lemma \ref{lem:Part_A_X}(3) and condition (\ref{eq:SubsetN_FFSA})
we then have $\Betrag{\B_{ij} \cap \N}\leq 2$ for $i \neq a, j \neq a$.
We may assume that $a=1$ (the other cases are similar); then we only have
$\Betrag{(\B_{13}\cup\B_{14})\cap\N} \leq 4$ by Lemma \ref{lem:Part_A_X}(2) and condition (\ref{eq:SubsetN_FFSA}).
So in this case we also have $\Betrag{\F_\N} \geq 7$ and further $\Betrag{\B_{34}\cap\N} = 1$ by Lemma \ref{lem:Part_A_X}(3).
We remark that for a subarrangement $\C \subseteq \A$ with $\C \cong \A(\crgM{4}{2}{4})$ there is a $\B_{ij} \in \F$,
such that $\C = \B_{ij} \cup (\A \setminus (M_i \cup M_j)) = \B_{ij} \cup \bigcup_{a,b \in \{1,\ldots,6\} \setminus \{i,j\}} \B_{ab}$
(compare again with Figure \ref{fig:Part_A_15}, Definition \ref{def:ArrG31} and Lemma \ref{lem:Part_AG31}).
If $\N$ is of the claimed form (\ref{N_13}), by Lemma \ref{lem:Part_A_X}(1) we have $\N \subseteq \A(\crgM{4}{2}{4})$
and furthermore, since $\Betrag{\N} = 13$ and $\N$ has to satisfy condition (\ref{eq:SubsetN_FFSA}), with Lemma
\ref{lem:Part_A_X} one easily sees that if $\N \subseteq \A(\crgM{4}{2}{4})$, it has to be of the form (\ref{N_13}).
To finally prove the claim, we want to show that $\N \subseteq \A(\crgM{4}{2}{4})$ (for one possible realization of $\A(\crgM{4}{2}{4})$
inside $\A$ given by $\F$).
So far we have that there are $\B_{12}, \B_{34}, \B_{1b} \in \F_\N$ ($b \in \{5,6\}$).
This can be visualized in the following picture (Figure \ref{Fig_N_p}(a), compare also with Figure \ref{fig:Part_A_15}),
where the boxes represent the partition $\F$,
a double circle represents a hyperplane already fixed in $\N$ by the above considerations,
a solid circle a hyperplane which cannot belong to $\N$ without violating condition (\ref{eq:SubsetN_FFSA}),
and a non-solid circle a hyperplane which may or may not belong to $\N$.
\begin{figure}[h!]
\setlength{\unitlength}{4mm}
\begin{subfigure}[b]{0.05\textwidth}
\begin{picture}(1.7,8)
\put(1.5,4){(a)}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,7.1){\circle{0.6}}
\put(0.5,7.1){\circle*{0.38}}
\put(1.5,7.1){\circle{0.6}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,7.1){\circle{0.6}}
\put(8.5,7.1){\circle*{0.38}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(5.5,4.1){\circle{0.6}}
\put(5.5,4.1){\circle*{0.38}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.05\textwidth}
\begin{picture}(1.7,8)
\put(1.5,4){(b)}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,7.1){\circle{0.6}}
\put(0.5,7.1){\circle*{0.38}}
\put(1.5,7.1){\circle*{0.6}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle*{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,7.1){\circle{0.6}}
\put(8.5,7.1){\circle*{0.38}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(5.5,4.1){\circle{0.6}}
\put(5.5,4.1){\circle*{0.38}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle*{0.38}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\caption{Possible choices for hyperplanes in $\N$.}%
\label{Fig_N_p}
\end{figure}
Suppose that there is a $\B_{cd} \in \F_\N$ such that $\{c,d\} \cap \{3,4\} \neq \emptyset$.
This is the case if and only if $\N \subseteq \A(\crgM{4}{2}{4})$ by our remark before.
Then the set of hyperplanes left to be chosen for $\N$ reduces considerably (see Figure \ref{Fig_N_p}(b)).
If we proceed in this manner, using the same arguments as above, we arrive at a contradiction to $\Betrag{\N} = 13$,
condition (\ref{eq:SubsetN_FFSA}), and Lemma \ref{lem:Part_A_X}.
To finish the proof, let $\tilde{\A} = \A \setminus \N$ for an $\N$ of the form (\ref{N_13}).
Then by Lemma \ref{lem:Part_A_X}(3) and the distribution of the $H \in \tilde{\A}$
with respect to $\F$ we have $\Betrag{\tilde{\A}^H} \leq 29$, since for every such $H$ there are at least two hyperplanes in $\N$
simply intersecting $H$, and we are done.
\end{proof}
\begin{example}
We illustrate the change of the set of hyperplanes which can be added to $\An{N}$
along a free filtration from $\A$ to $\A \setminus \An{N} = \tilde{\A}$ with $\Betrag{\tilde{\A}} = 47$ and
$\A(\crg{29}) \nsubseteq \tilde{\A}$ by the following sequence of pictures (Figure \ref{Fig_Rem_M31}).
Each circle represents a hyperplane in the \FFSA $\A_i$, a solid circle represents a hyperplane
which we cannot add to $\An{N}$ without violating condition (\ref{eq:SubsetN_FFSA})
from Lemma \ref{lem:SubsetN_FFSA}.
A non-solid circle represents a hyperplane which can be added to $\An{N}$ such that (\ref{eq:SubsetN_FFSA}) is
still satisfied.
The different boxes represent the partition $\F$
of $\A$ into subsets of $4$ hyperplanes given by Lemma \ref{lem:Part_AG31}:
\begin{figure}[h!]
\setlength{\unitlength}{4mm}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,7.1){\circle{0.6}}
\put(1.5,7.1){\circle{0.6}}
\put(0.5,6.4){\circle{0.6}}
\put(1.5,6.4){\circle{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle{0.6}}
\put(7.5,7.1){\circle{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,7.1){\circle{0.6}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle{0.6}}
\put(9.5,6.4){\circle{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle{0.6}}
\put(3.5,4.9){\circle{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle{0.6}}
\put(5.5,4.9){\circle{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle{0.6}}
\put(7.5,4.9){\circle{0.6}}
\put(8.5,5.6){\circle{0.6}}
\put(9.5,5.6){\circle{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle{0.6}}
\put(5.5,4.1){\circle{0.6}}
\put(4.5,3.4){\circle{0.6}}
\put(5.5,3.4){\circle{0.6}}
\put(6.5,4.1){\circle{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle{0.6}}
\put(7.5,3.4){\circle{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(8.5,3.4){\circle{0.6}}
\put(9.5,3.4){\circle{0.6}}
\put(6.5,2.6){\circle{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle{0.6}}
\put(7.5,1.9){\circle{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle{0.6}}
\put(9.5,1.9){\circle{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle{0.6}}
\put(9.5,0.4){\circle{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(1.5,7.1){\circle{0.6}}
\put(0.5,6.4){\circle{0.6}}
\put(1.5,6.4){\circle{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle{0.6}}
\put(7.5,7.1){\circle{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,7.1){\circle{0.6}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle{0.6}}
\put(9.5,6.4){\circle{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle{0.6}}
\put(3.5,4.9){\circle{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle{0.6}}
\put(5.5,4.9){\circle{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle{0.6}}
\put(7.5,4.9){\circle{0.6}}
\put(8.5,5.6){\circle{0.6}}
\put(9.5,5.6){\circle{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle{0.6}}
\put(5.5,4.1){\circle{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(1.5,7.1){\circle{0.6}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,7.1){\circle{0.6}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle{0.6}}
\put(3.5,4.9){\circle{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle{0.6}}
\put(5.5,4.9){\circle{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\\
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(1.5,7.1){\circle{0.6}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle{0.6}}
\put(9.5,4.1){\circle{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle{0.6}}
\put(9.5,2.6){\circle{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(9.5,7.1){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle{0.6}}
\put(3.5,5.6){\circle{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle{0.6}}
\put(5.5,5.6){\circle{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle{0.6}}
\put(2.5,6.4){\circle{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle{0.6}}
\put(4.5,6.4){\circle{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,5.6){\circle{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\\
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle{0.6}}
\put(4.5,7.1){\circle{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(7.5,5.6){\circle{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(6.5,6.4){\circle{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(7.5,6.4){\circle{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\\
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(8.5,4.9){\circle{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(9.5,4.9){\circle{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,1.1){\circle{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\\
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(9.5,1.1){\circle{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.01\textwidth}
\begin{picture}(0.5,8)
\put(0.5,4){$\supsetneq$}
\end{picture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\begin{picture}(10,8)
\put(0,7.5){\line(1,0){10}}
\put(0,6){\line(1,0){10}}
\put(2,4.5){\line(1,0){8}}
\put(4,3){\line(1,0){6}}
\put(6,1.5){\line(1,0){4}}
\put(8,0){\line(1,0){2}}
\put(10,7.5){\line(0,-1){7.5}}
\put(8,7.5){\line(0,-1){7.5}}
\put(6,7.5){\line(0,-1){6}}
\put(4,7.5){\line(0,-1){4.5}}
\put(2,7.5){\line(0,-1){3}}
\put(0,7.5){\line(0,-1){1.5}}
\put(0.5,6.4){\circle*{0.6}}
\put(1.5,6.4){\circle*{0.6}}
\put(2.5,7.1){\circle*{0.6}}
\put(3.5,7.1){\circle*{0.6}}
\put(2.5,6.4){\circle*{0.6}}
\put(3.5,6.4){\circle*{0.6}}
\put(4.5,7.1){\circle*{0.6}}
\put(5.5,7.1){\circle*{0.6}}
\put(4.5,6.4){\circle*{0.6}}
\put(5.5,6.4){\circle*{0.6}}
\put(6.5,7.1){\circle*{0.6}}
\put(7.5,7.1){\circle*{0.6}}
\put(8.5,6.4){\circle*{0.6}}
\put(9.5,6.4){\circle*{0.6}}
\put(2.5,5.6){\circle*{0.6}}
\put(3.5,5.6){\circle*{0.6}}
\put(2.5,4.9){\circle*{0.6}}
\put(3.5,4.9){\circle*{0.6}}
\put(4.5,5.6){\circle*{0.6}}
\put(5.5,5.6){\circle*{0.6}}
\put(4.5,4.9){\circle*{0.6}}
\put(5.5,4.9){\circle*{0.6}}
\put(6.5,4.9){\circle*{0.6}}
\put(7.5,4.9){\circle*{0.6}}
\put(8.5,5.6){\circle*{0.6}}
\put(9.5,5.6){\circle*{0.6}}
\put(4.5,4.1){\circle*{0.6}}
\put(4.5,3.4){\circle*{0.6}}
\put(5.5,3.4){\circle*{0.6}}
\put(6.5,4.1){\circle*{0.6}}
\put(7.5,4.1){\circle*{0.6}}
\put(6.5,3.4){\circle*{0.6}}
\put(7.5,3.4){\circle*{0.6}}
\put(8.5,4.1){\circle*{0.6}}
\put(9.5,4.1){\circle*{0.6}}
\put(8.5,3.4){\circle*{0.6}}
\put(9.5,3.4){\circle*{0.6}}
\put(6.5,2.6){\circle*{0.6}}
\put(7.5,2.6){\circle*{0.6}}
\put(6.5,1.9){\circle*{0.6}}
\put(7.5,1.9){\circle*{0.6}}
\put(8.5,2.6){\circle*{0.6}}
\put(9.5,2.6){\circle*{0.6}}
\put(8.5,1.9){\circle*{0.6}}
\put(9.5,1.9){\circle*{0.6}}
\put(8.5,0.4){\circle*{0.6}}
\put(9.5,0.4){\circle*{0.6}}
\end{picture}
\end{subfigure}
\caption{The change in the set of hyperplanes which can be removed along a free filtration from $\A$ to $\tilde{\A}$.}%
\label{Fig_Rem_M31}
\end{figure}
\end{example}
Now we can prove Proposition \ref{prop:G31_NIF}:
\begin{proof}[Proof of Proposition \ref{prop:G31_NIF}]
Let $\tilde{\A}$ be a free filtration subarrangement.
If $\A(\crg{29}) \nsubseteq \tilde{\A}$, then with Lemma \ref{lem:FFSA_G31}, $\Betrag{\tilde{\A}} \geq 47$.
Now assume that $\tilde{\A} \cong \A(\crg{29})$.
In Lemma \ref{lem:G29_G31_arbt_desc} we saw that $\tilde{\A}$ is a free filtration subarrangement.
In \cite[Remark~2.17]{2012arXiv1208.3131H} it is shown
that one cannot remove a single hyperplane from $\A(\crg{29}) = \An{B}$ resulting in a free arrangement $\An{B}'$,
so there is no smaller free filtration subarrangement of $\A$.
\end{proof}
\subsubsection{The reflection arrangements $\A(\crg{29})$ and $\A(\crg{31})$ are not recursively free}
\label{subsubsec:A29_A31_nRF_e}
Let $\A := \A(W)$ be the reflection arrangement of the complex reflection group $W = \crg{31}$ and
$\An{B} := \A(W)$ the reflection arrangement of the complex reflection group $W = \crg{29}$.
As we saw in the previous section, $\B \subsetneq \A$ is a \FFSAp.
We use the characterization of all free filtration subarrangements $\tilde{\A} \subseteq \A$
from Lemma \ref{lem:FFSA_G31}
and show that for all these subarrangements, there exists no hyperplane $H$ outside of $\A$ which we can add to $\tilde{\A}$
such that the resulting arrangement $\tilde{\A}\dot{\cup} \{H\}$ is free.
First we show that this is not possible for $\tilde{\A} = \A$:
\begin{lemma}\label{lem:Add_H_A31}
There is no way to add a new hyperplane $H$ to $\A$ such that the arrangement
$\tilde{\A} := \A \dot{\cup} \{ H \}$ is free.
\end{lemma}
\begin{proof}
The exponents of $\A$ are $\expA{\A}{1,13,17,29}$.
Inspection of the intersection lattice $L:=\LA{\A}$ gives the following multiset of invariants:
\begin{equation}
\{\{ \Betrag{\A_X} \mid X \in L_2 \}\} = \{\{2^{360}, 3^{320}, 6^{30} \}\} \label{eq_lem:Add_H_A31}.
\end{equation}
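As a quick consistency check on \eqref{eq_lem:Add_H_A31}, note that any two distinct hyperplanes of $\A$ meet in a unique $X \in L_2$, so counting pairs of hyperplanes through each rank $2$ flat gives
\begin{equation*}
360 \binom{2}{2} + 320 \binom{3}{2} + 30 \binom{6}{2} = 360 + 960 + 450 = 1770 = \binom{60}{2},
\end{equation*}
in agreement with $\Betrag{\A} = 60$.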
Now assume that there exists a new hyperplane $H$ which we can add to $\A$ such that
$\tilde{\A} := \A \dot{\cup} \{ H \}$ is free.
Then by Lemma \ref{lem:A_u_H} we have $\sum_{X \in P_H} (\Betrag{\A_X}-1) \in \expAA{\A}$
where $P_H = \{ X \in L_2 \mid X \subseteq H \}$.
Hence with (\ref{eq_lem:Add_H_A31}), $H$ contains at least $4$ different rank $2$ subspaces
from the intersection lattice (e.g.\ $13 = (6-1) + (6-1) + (3-1) + (2-1)$).
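To make this count explicit: each summand $\Betrag{\A_X} - 1$ lies in $\{1, 2, 5\}$ by \eqref{eq_lem:Add_H_A31}, so if the sum equals one of the exponents $13$, $17$, or $29$, then no three summands suffice, since
\begin{equation*}
5 + 5 + 5 = 15 \neq 13, \qquad 5 + 5 + 2 = 12 < 13, \qquad \text{and} \qquad 17, \, 29 > 15,
\end{equation*}
and hence $\Betrag{P_H} \geq 4$.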
But up to symmetry there are no more than $5$ possibilities to get a hyperplane $H$ with
$\Betrag{ \{ X \in L_2 \mid X \subseteq H \} } \geq 3$ such that $\CharPolyA{\tilde{\A}}$ factors over the integers,
but in each case $\CharPolyA{\tilde{\A}} = (t-1)(t-15)(t-16)(t-29)$, so by Theorem \ref{thm:A_AoH_exp},
$\tilde{\A}$ cannot be free.
\end{proof}
Now we will prove that for all \FFSAs $\tilde{\A} \subseteq \A$
(see Definition \ref{def:FFSA})
there exists no hyperplane $H \notin \A$ which we can add to $\tilde{\A}$ such that $\tilde{\A}\dot{\cup}\{H\}$ is
free.
\begin{lemma}\label{lem:Add_H_FFSA}
Let $\tilde{\A} \subseteq \A$ be a \FFSAp.
Let $H$ be a new hyperplane such that $\tilde{\A} \dot{\cup} \{ H \}$ is free.
Then $H \in \A$.
\end{lemma}
\begin{proof}
In Lemma \ref{lem:FFSA_G31} we have shown that $\tilde{\A}$ is free with exponents
$\expA{\tilde{\A}}{1,13,17,29-n}$, $n \leq 20$.
Let $L = \LA{\A}$ and $\tilde{L} = \LA{\tilde{\A}} \subseteq L$.
We once more use the following multiset of invariants:
\begin{equation*}
\{\{ \Betrag{\A_X} \mid X \in L_2 \}\} = \{\{2^{360}, 3^{320}, 6^{30} \}\}.
\end{equation*}
Thus for $X \in \tilde{L}_2$ we have $2 \leq \Betrag{\tilde{\A}_X} \leq 6$.
Suppose we add a new hyperplane $H$ such that $\tilde{\A} \dot{\cup} \{H\}$ is free.
Then by Lemma \ref{lem:A_u_H} we have $\sum_{X \in P_H} (\Betrag{\tilde{\A}_X} - 1) \in \expAA{\tilde{\A}}$
where $P_H = \{X \in \tilde{L}_2 \mid X \subseteq H \}$.
We immediately see that $\Betrag{P_H} \geq 3$, and if $\Betrag{P_H} \in \{3,4\}$, there must be at least
two $X \in P_H$ with $\Betrag{\tilde{\A}_X} \geq 4$ or $\Betrag{\A_X} = 6$.
But for $X,Y \in L_2$, $X \neq Y$, with $\Betrag{\A_X} = \Betrag{\A_Y} = 6$ we either have $X+Y = V$ or
$X \subseteq K$ and $Y \subseteq K$ for a $K \in \A$. Hence in this case $H \in \A$.
Now assume that $\Betrag{P_H} \geq 5$ and there is at most one $X \in P_H$ with $\Betrag{\tilde{\A}_X} \geq 4$
or $\Betrag{\A_X} = 6$.
Then there are either at least three $X \in P_H$ with $\Betrag{\tilde{\A}_X} = 3$ or
at least four $X \in P_H$ with $\Betrag{\tilde{\A}_X} = 2$.
But in both cases with the same argument as above we must have $H \in \A$.
This finishes the proof.
\end{proof}
We close this section with the following corollary which completes the proof of Theorem \ref{thm:G31_G29_nRF}.
\begin{corollary}\label{coro:A31_FFSA_nRF}
Let $\tilde{\A} \subseteq \A$ be a \FFSA of $\A = \A(\crg{31})$.
Then $\tilde{\A}$ is not recursively free and in particular $\A(\crg{31})$ and $\A(\crg{29})$ are not recursively free.
\end{corollary}
\begin{proof}
The statement follows immediately from Lemma \ref{lem:Add_H_FFSA} and Proposition \ref{prop:G31_NIF}.
\end{proof}
\subsection{The reflection arrangement $\Arr{\crg{33}}$}
In this section we will see that the reflection arrangement $\A(W)$ with $W$ isomorphic to the finite complex reflection group
$\crg{33}$ is not recursively free.
\begin{lemma}\label{lem:A33_nRF}
Let $\A = \A(W)$ be the reflection arrangement with $W \cong \crg{33}$.
Then $\A$ is not recursively free.
\end{lemma}
\begin{proof}
With Theorem \ref{thm:RA_IF} the reflection arrangement $\A$ is
not inductively free.
In \cite[Remark~2.17]{2012arXiv1208.3131H} it is shown
that one cannot remove a single hyperplane from $\A$ resulting in a free arrangement $\A'$.
Thus to prove the lemma, we have to show that we also cannot add a new hyperplane $H$ such that the
arrangements $\tilde{\A} := \A \dot{\cup} \{ H\}$ and $\tilde{\A}^H$ are free with suitable exponents.
The exponents of $\A$ are $\expA{\A}{1,7,9,13,15}$.
Now suppose that there is a hyperplane $H$ such that $\tilde{\A}$ is free.
Looking at the intersection lattice $L:= \LA{\A}$ we find the following multiset of invariants:
\begin{equation*}
\{\{ \Betrag{\A_X} \mid X \in L_2 \}\} = \{\{ 2^{270}, 3^{240} \}\}.
\end{equation*}
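(As a quick consistency check: any two distinct hyperplanes of $\A$ meet in a unique $X \in L_2$, so $270 \binom{2}{2} + 240 \binom{3}{2} = 270 + 720 = 990 = \binom{45}{2}$, in agreement with $\Betrag{\A} = 45$.)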
With Lemma \ref{lem:A_u_H} and the same argument
as in the proof of Lemma \ref{lem:Add_H_A31} for $H$ we must have:
\begin{equation*}
\Betrag{ P_H } = \Betrag{\{ X \in L_2 \mid X \subseteq H \} } \geq 4.
\end{equation*}
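Indeed, each summand $\Betrag{\A_X} - 1$ lies in $\{1, 2\}$ here, so once the relevant exponent is $7$ (the smallest exponent of $\A$ exceeding $1$), the bound
\begin{equation*}
7 \leq \sum_{X \in P_H} \left( \Betrag{\A_X} - 1 \right) \leq 2 \, \Betrag{P_H}
\end{equation*}
forces $\Betrag{P_H} \geq 4$; for example, $7 = 2 + 2 + 2 + 1$.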
If we look at all the possible cases for an $H$ such that $\Betrag{ P_H } \geq 2$ (there are only 2 possible cases up to symmetry),
we see that in none of these cases does the characteristic polynomial of $\tilde{\A}$ split into linear factors over
$\PolyRing{\Ganz}{x}$, so by Theorem \ref{thm:A_free_factZ}, $\tilde{\A}$ is not free.
Hence we cannot add a single hyperplane $H$ to $\A$ and obtain a free arrangement $\A \dot{\cup} \{H\} =:\tilde{\A}$
and $\A$ is not recursively free.
\end{proof}
\subsection{The reflection arrangement $\Arr{\crg{34}}$}
In this part we will see that the reflection arrangement $\A(W)$ with $W$ isomorphic to the finite complex reflection group
$\crg{34}$ is free but not recursively free.
\begin{lemma}\label{lem:A34_nRF}
Let $\A = \A(W)$ be the reflection arrangement with $W \cong \crg{34}$.
Then $\A$ is not recursively free.
\end{lemma}
\begin{proof}
To prove the lemma, we could follow the same path as in the proof of Lemma \ref{lem:A33_nRF}.
But since the arrangement $\Arr{\crg{33}}$ is a parabolic subarrangement (localization) $\A_X$
of the reflection arrangement $\A = \Arr{\crg{34}}$ for a suitable $X \in \LA{\A}$
(see e.g. \cite[Table~C.15.]{orlik1992arrangements} or \cite[Ch.~7,~6.1]{lehrer2009unitary}),
and this subarrangement is not recursively free by Lemma \ref{lem:A33_nRF},
$\A$ cannot be recursively free by Proposition \ref{prop:Arf_AXrf}.
\end{proof}
This completes the proof of Theorem \ref{thm:RFcrArr}.
\section{Abe's conjecture}\label{sec:Abes_Conjecture}
In this section we give the proof of Theorem \ref{thm:ConjAbe}, which settles \cite[Conj.~5.11]{AbeDivFree2015}.
The following result by Abe gives the divisional freeness of the reflection arrangement $\A(\crg{31})$.
\begin{theorem}[{\cite[Cor.~4.7]{AbeDivFree2015}}]\label{thm:AbeDFRA}
Let $W$ be a finite irreducible complex reflection group and $\A = \A(W)$ its corresponding reflection arrangement.
Then $\A \in \IFC$ or $W = \crg{31}$ if and only if $\A \in \DFC$.
\end{theorem}
With results from the previous section we can now state the proof of the theorem.
\begin{proof}[{Proof of Theorem \ref{thm:ConjAbe}}]
Let $\A = \A(\crg{31})$ be the reflection arrangement of the finite complex reflection group $\crg{31}$.
Then on the one hand by Theorem \ref{thm:AbeDFRA} we have $\A \in \DFC$, but on the other hand
by Theorem \ref{thm:G31_G29_nRF} we have $\A \notin \RFC$.
\end{proof}
\begin{remark}
Furthermore, with Corollary \ref{coro:A31_FFSA_nRF}, we see that every \FFSA $\tilde{\A} \subseteq \A(\crg{31})$
still containing a hyperplane $H \in \tilde{\A}$ such that $\Betrag{\tilde{\A}^H}=31$ is in $\DFC$.
\end{remark}
\section{Restrictions}
In \cite{MR3250448}, Amend, Hoge and R\"ohrle showed which restrictions of reflection arrangements are inductively free.
Apart from the free but not inductively free reflection arrangements themselves, which are investigated in this paper,
by \cite[Thm.~1.2]{MR3250448} there are four restrictions of reflection arrangements which remain to be inspected, namely
\begin{enumerate}
\item the $4$-dimensional restriction $(\A(\crg{33}),A_1)$,
\item the $5$-dimensional restriction $(\A(\crg{34}),A_1)$,
\item the $4$-dimensional restriction $(\A(\crg{34}),A_1^2)$,
and
\item the $4$-dimensional restriction $(\A(\crg{34}),A_2)$,
\end{enumerate}
which are free but not inductively free (compare with \cite[App.~C.16, C.17]{orlik1992arrangements}).
Using similar techniques as for the reflection arrangements $\A(\crg{31})$, and $\A(\crg{33})$,
we can say the following about the remaining cases:
\begin{proposition}\label{prop:Res_RF}~
\begin{enumerate}
\item $(\A(\crg{33}),A_1)$ is recursively free.
\item $(\A(\crg{34}),A_1)$ is not recursively free.
\item $(\A(\crg{34}),A_1^2)$ is not recursively free,
and
\item $(\A(\crg{34}),A_2)$ is not recursively free.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\A$ be as in case (1). The arrangement may be defined by the following linear forms:
\begin{align*}
\A = \{ &( 1, 0, 0, 0 )^\perp,
( 1, 1, 0, 0 )^\perp,
( 1, 1, 1, 0 )^\perp,
( 1, 1, 1, 1 )^\perp,
( 0, 1, 0, 0 )^\perp, \\
&( 0, 1, 1, 0 )^\perp,
( 0, 1, 1, 1 )^\perp,
( 0, 0, 1, 0 )^\perp,
( 0, 0, 1, 1 )^\perp,
( 0, 0, 0, 1 )^\perp, \\
& ( \zeta^2, 0, -1, \zeta^2 )^\perp,
( 1, 0, -1, \zeta^2 )^\perp,
( 2\zeta, 2\zeta+\zeta^2, \zeta, -\zeta^2 )^\perp, \\
&( -1, \zeta+2\zeta^2, \zeta^2, -1 )^\perp,
( \zeta, 0, -1, \zeta^2 )^\perp,
( 2, -2\zeta-\zeta^2, 1, -\zeta^2 )^\perp, \\
&( \zeta, \zeta-\zeta^2, 2\zeta, \zeta )^\perp,
( \zeta^2, \zeta-2\zeta^2, -1, \zeta^2 )^\perp, \\
&( \zeta^2, -\zeta+\zeta^2, 2\zeta^2, \zeta^2 )^\perp,
( \zeta^2, 0, - \zeta, \zeta^2 )^\perp,
( \zeta^2, 0, -\zeta^2, 1 )^\perp, \\
&( \zeta^2, 0, -1, \zeta )^\perp,
( 2\zeta, \zeta-\zeta^2, -2\zeta^2, -\zeta^2 )^\perp,
( \zeta, 2\zeta+\zeta^2, -1, \zeta^2 )^\perp, \\
&( -2\zeta^2, \zeta-\zeta^2, 2\zeta, \zeta )^\perp,
( -1, 2\zeta+\zeta^2, \zeta, -\zeta^2 )^\perp, \\
&( 2\zeta, \zeta-\zeta^2, \zeta, -\zeta^2 )^\perp,
( 2\zeta, 2\zeta+\zeta^2, \zeta, -1 )^\perp
\} \\
= \{ &H_1,\ldots, H_{28} \},
\end{align*}
where $\zeta = \frac{1}{2}(-1+i\sqrt{3})$ is a primitive cube root of unity.
We can successively remove $6$ hyperplanes
\begin{align*}
\{H_5, H_6, H_7, H_{13}, H_{25}, H_{28} \} =: \{K_1,\ldots,K_6\} =: \N,
\end{align*}
with respect to this order such that $\A \setminus \N = \tilde{\A}$ is a \FFSA with a free filtration
$\A = \A_0 \supsetneq \A_1 \supsetneq \cdots \supsetneq \A_6 = \tilde{\A}$, $\A_i = \A \setminus \{K_1,\ldots,K_i\}$.
Moreover, all the restrictions $\A_{i-1}^{K_{i}}$, ($1 \leq i \leq 6$), are inductively free.
Then we can add $2$ new hyperplanes
\begin{align*}
\{I_1,I_2\} := \{ ( -2\zeta-3\zeta^2, 3, 2, 1 )^\perp, ( \zeta, 0, 2, 1 )^\perp \},
\end{align*}
such that $\tilde{\A}_j := \tilde{\A} \cup \{I_1,\ldots,I_j\}$, ($j=1,2$),
are all free and,
in particular, the arrangement $\tilde{\A}_2 = \tilde{\A} \cup \{I_1,I_2\}$ is inductively free.
Furthermore, the restrictions $\tilde{\A}_j^{I_j}$ are inductively free.
Hence $\A$ is recursively free.
The arrangement in (2) is isolated, which can be seen similarly to the case of the arrangement $\A(\crg{33})$.
To show that the restrictions $(\A(\crg{34}),A_1^2)$ and $(\A(\crg{34}),A_2)$ from (3) and (4)
are not recursively free, we look at the exponents of their
minimal possible \FFSAs computed by Amend, Hoge, and R\"ohrle in \cite[Lemma~4.2,~Tab.~11,12]{MR3250448} and then use
Lemma \ref{lem:A_u_H}
and a similar argument as in the proof of Lemma \ref{lem:Add_H_FFSA}.
Let $\A$ be as in (3). Then Amend, Hoge, and R\"ohrle showed that the multiset of exponents of a
minimal possible \FFSA $\tilde{\A} \subseteq \A$ is $\expA{\tilde{\A}}{1,13,15,15}$ (see \cite[Tab.~11]{MR3250448}).
Now, as in the proof of Lemma \ref{lem:Add_H_FFSA}, suppose $\tilde{\A} \subseteq \A$ is a \FFSAp,
and there is a hyperplane $H$ such that $\tilde{\A} \cup \{ H \}$
is free.
Then by Lemma \ref{lem:A_u_H} we have $\sum_{X \in P_H} (\Betrag{\tilde{\A}_X} - 1) \geq 13$, where
$P_H= \{ X \in \LAq{\tilde{\A}}{2} \mid X \subseteq H \}$. Now $\LAq{\tilde{\A}}{2} \subseteq \LAq{\A}{2}$ and
we have the following multiset of invariants of $\A$:
\begin{align*}
\{\{ \Betrag{\A_X} \mid X \in \LAq{\A}{2} \}\} = \{\{ 2^{264},3^{304},4^{34},5^{16}\}\}.
\end{align*}
So in particular we should have $\sum_{X \in P_H} (\Betrag{\A_X} - 1) \geq 13$, and hence $\Betrag{P_H} \geq 4$.
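Explicitly, writing the relevant exponent as a sum of four summands $\Betrag{\A_X} - 1 \in \{1, 2, 3, 4\}$, the only possibilities are
\begin{equation*}
13 = 4+4+4+1 = 4+4+3+2 = 4+3+3+3 \qquad \text{and} \qquad 15 = 4+4+4+3.
\end{equation*}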
If $\Betrag{P_H} = 4$, then in the first three of these partitions there are at least two $X,Y \in P_H$ with $\Betrag{\A_X}=\Betrag{\A_Y} = 5$.
But for all such $X,Y$ we either have $X + Y = V$ or $X+Y \in \A$,
so there is at most one $X \in P_H$ such that $\Betrag{\A_X}=5$.
In the remaining partition $13 = 4+3+3+3$ we must have $X,Y \in P_H$
with $\Betrag{\A_X}=5$ and $\Betrag{\A_Y} = 4$.
But again for all such $X,Y$ we either have $X + Y = V$ or $X+Y \in \A$.
Considering the other cases with $\Betrag{P_H} \geq 5$ (each giving a partition of the smallest exponent not equal to $1$)
similarly, we get that $H \in \A$.
Hence $\A$ is not recursively free.
Finally let $\A$ be as in (4).
Then Amend, Hoge, and R\"ohrle showed that the multiset of exponents of a
minimal possible \FFSA $\tilde{\A} \subseteq \A$ is $\expA{\tilde{\A}}{1,9,10,11}$ or $\expA{\tilde{\A}}{1,10,10,10}$,
(see \cite[Tab.~12]{MR3250448}).
Suppose $\tilde{\A} \subseteq \A$ is a \FFSAp, and there is a hyperplane $H$ such that $\tilde{\A} \cup \{ H \}$
is free.
Then inspecting the intersection lattice of $\A$ analogously to case (3) we again get $H \in \A$.
Hence $\A$ is not recursively free.
\end{proof}
Since the restrictions $(\A(\crg{34}),A_1^2)$ and $(\A(\crg{34}),A_2)$ behave somewhat similarly to the reflection arrangement
$\A(\crg{31})$, they also give examples of divisionally free but not recursively free arrangements,
(compare with Theorem \ref{thm:ConjAbe} and Section \ref{sec:Abes_Conjecture}).
For further details on divisional freeness of restrictions of reflection arrangements see the recent note by R\"ohrle, \cite{Roe15}.
\renewcommand{\refname}{References}
\bibliographystyle{amsalpha}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
The expression \emph{multiple elliptic integral} refers to an integral involving an integrand factor given by the complete elliptic integral of the first kind
\begin{equation}\label{Kdefinition}
\text{{\bf K}}(k) := \int_{0}^{\frac{\pi}{2}} \left( 1 - k^2 \sin^2 \theta \right)^{-1/2} \, d\theta
\end{equation}
or the complete elliptic integral of the second kind
\begin{equation}\label{Edefinition}
\text{{\bf E}}(k) := \int_{0}^{\frac{\pi}{2}} \sqrt{1 - k^2 \sin^2 \theta} \, d\theta,
\end{equation}
up to a change of variables. Much of the interest in the study of multiple elliptic integrals is due to how mathematical objects of this form naturally arise within many
different areas of physics \cite{Kaplan1950}. For example, as expressed in \cite{Glasser1976}, practical problems concerning three-dimensional lattices
often give rise to triple integrals that are reducible to
\begin{equation}\label{multipleK}
\int_{\alpha}^{\beta} F(\mu) \text{{\bf K}}(\mu) \, d\mu
\end{equation}
for an elementary function $F$ \cite{Glasser1976}, and integrals as in \eqref{multipleK} involving twofold products
\cite{GlasserGuttmann1994,WanZucker2016} and threefold products \cite{WanZucker2016} of complete elliptic integral expressions have been similarly
applied in the context of the study of lattices. Zhou's 2014 article on multiple elliptic integrals \cite{Zhou2014Legendre}, in which Zhou solved the open
problem of proving the formulas
\begin{equation}\label{openproblem}
\int_{0}^{1} \text{{\bf K}}^{3}\left( \sqrt{1 - k^2} \right) \, dk = 6 \int_{0}^{1} \text{{\bf K}}^{2}(k) \text{{\bf K}}\left( \sqrt{1 - k^2} \right) k \, dk
= \frac{\Gamma^{8}\left( \frac{1}{4} \right)}{ 128 \pi^2}
\end{equation}
conjectured by Wan in 2012 \cite{Wan2012}, referenced applications of expressions as in \eqref{multipleK} within many different areas in physics, and
serves as the main source of motivation behind the multiple elliptic integrals introduced in this article. A remarkable aspect of the definite integral evaluations
shown in \eqref{openproblem} is that these integrals involve \emph{threefold} products of complete elliptic integral expressions, as opposed, for
example, to the many onefold or twofold products of $\text{{\bf K}}$- and/or $\text{{\bf E}}$-expressions in the integrands recorded in many of the
sections of a standard reference on multiple elliptic integrals \cite[\S4.21.1--4.22.14]{Brychkov2008}. Indeed, none of the integrals in
\cite{Brychkov2008} involve \emph{threefold} products of elliptic integrals, and very little is known about integrals involving such threefold products, relative to the
onefold or twofold cases. In our article, we introduce many new evaluations for definite integrals that resemble Wan's integrals in \eqref{openproblem}
and involve threefold products of complete elliptic expressions and that have not, to the best of our knowledge, appeared in any equivalent forms
in any relevant literature. In contrast to the Legendre function-based techniques introduced by Zhou in \cite{Zhou2014Legendre}, we instead make use
of Caputo operators, building on the recent work in \cite{Campbell2022,CampbellCantariniDAurizio2022}.
Zhou's Legendre polynomial-based approach from \cite{Zhou2014Legendre} was applied to prove the below closed form for what is referred to as a
\emph{multiple elliptic integral in Clebsch--Gordan (CG) form} \cite{Campbell2022,Zhou2014Legendre}, which was highlighted in Corollary 3.2 in
\cite{Zhou2014Legendre}, and which had been previously included in the standard reference on multiple elliptic integrals
previously referenced \cite[p.\ 278]{Brychkov2008}:
\begin{equation}\label{maintwofold}
2 \int_{0}^{1} \text{{\bf K}}\left( \sqrt{x} \right) \text{{\bf K}}\left( \sqrt{1 - x} \right) \, dx = \frac{\pi^3}{4}.
\end{equation}
The closed-form formula in \eqref{maintwofold} was later proved and generalized in \cite{Campbell2022} using a fractional-calculus based identity
introduced in \cite{CampbellCantariniDAurizio2022} and referred to as \emph{semi-integration by parts} (SIBP) \cite{CampbellCantariniDAurizio2022}. We
again emphasize the threefold nature of the products of complete elliptic functions in Wan's integrals in \eqref{openproblem}, in contrast to the twofold
product in the integrand in \eqref{maintwofold}. If the $F$-factor in \eqref{multipleK} is elementary, then Fubini-type interchanges may often be used to
evaluate \eqref{multipleK} by rewriting the $\text{{\bf K}}$-factor according to \eqref{Kdefinition}, which shows how this onefold case is much more
manageable relative to threefold products as in the Wan integrals shown in \eqref{openproblem}; a similar argument may be used to explain why
integrands with twofold products of complete elliptic integrals, as in the CG-type integral in \eqref{maintwofold}, are also relatively manageable compared
to the recalcitrant nature of integrals as in \eqref{openproblem}, noting that the challenging nature of the integrals in \eqref{openproblem} was also
considered by Wan and Zucker in \cite{WanZucker2016}. The foregoing considerations strongly motivate the exploration as to how the fractional
calculus-based approaches from \cite{Campbell2022,CampbellCantariniDAurizio2022}
may be improved or otherwise altered so as to be applicable to
integrals involving threefold products of $\text{{\bf K}}$ and/or $\text{{\bf E}}$. This is the main purpose of our article.
In this regard, we have succeeded in applying our new methods to
prove the new results highlighted in Section \ref{subsectionMotivating} below.
\subsection{New multiple elliptic integrals}\label{subsectionMotivating}
The main results in our article are given by how we have generalized the SIBP theorem from \cite{CampbellCantariniDAurizio2022}, together with our
applications of this generalization, as in the new closed forms highlighted below.
Zhou \cite{Zhou2014Legendre} has emphasized and explored the analytically challenging nature of expressing integrals involving three or more complete
elliptic integrals in terms of fundamental mathematical constants, which strongly motivates the new results highlighted
as \eqref{mainresult1}--\eqref{mainresult12}. With regard to our
new symbolic evaluation in \eqref{mainresult2}, we are letting $G = 1 - \frac{1}{3^2} + \frac{1}{5^2} - \cdots$
denote Catalan's constant, and with reference to the new result in \eqref{mainresult3},
we are letting $\phi = \frac{1 + \sqrt{5}}{2}$ denote the Golden Ratio constant.
\begin{align}
& \frac{\pi ^3 (1+4 \ln (2))}{32}
= \int_0^1 \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1-\sqrt{1-x}}{2} }\right) \, dx, \label{mainresult1} \\
& \ \nonumber \\
& \frac{\pi ^2 (4 G+2+\pi \ln (2))}{16 \sqrt{2}}
= \int_0^1 \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^2\left(\sqrt{\frac{1}{2}-\frac{1}{2} \sqrt{1-\frac{x}{2}}}\right) \, dx, \label{mainresult2} \\
& \ \nonumber \\
& \frac{\pi ^2}{2} \left(\frac{\pi ^2}{20}+\frac{3 \ln (\phi )}{2}-\frac{\sqrt{5}}{4}\right)
= \int_0^1 \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^2\left(\sqrt{\frac{1}{2}-\frac{\sqrt{4+x}}{4}}\right) \, dx, \label{mainresult3} \\
& \ \nonumber \\
& \pi ^2 \left(\frac{17}{30}-\frac{\ln \left(1+\sqrt{2}\right)}{2 \sqrt{2}}\right)
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right)
\text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{1-\sqrt{1-x}}{x}}}{\sqrt{2}}}\right)}{\sqrt[4]{2 \sqrt{1-x} - x + 2}} \, dx, \label{mainresult4} \\
& \ \nonumber \\
& \frac{1}{{\sqrt[4]{2}}} \left( \frac{47 \pi ^2}{160}-\frac{\pi ^3}{16 \sqrt{3}} \right)
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right)
\text{{\bf K}}^2\left(\frac{\sqrt{4-\sqrt{6} \sqrt{\frac{\sqrt{16 x+9}-3}{x}}}}{2
\sqrt{2}}\right)}{\sqrt[4]{8 x+3 \sqrt{16 x+9}+9}} \, dx, \label{mainresult5} \\
& \ \nonumber \\
& \frac{1}{2^{7/4}} \left( \frac{71 \pi ^2}{60}-\frac{\pi ^3}{8} \right)
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^2\left(\sqrt{\frac{1}{2}-\frac{1}{4} \sqrt{\frac{\sqrt{8
x+1}-1}{x}}}\right)}{\sqrt[4]{4 x+\sqrt{8 x+1}+1}} \, dx, \label{mainresult6} \\
& \ \nonumber \\
& \frac{\pi ^2 \left(143-\frac{20 \pi }{\sqrt{3}}\right)}{480 \sqrt[4]{2}}
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{\sqrt{48 x+1}-1}{x}}}{4
\sqrt{6}}}\right)}{\sqrt[4]{24 x+\sqrt{48 x+1}+1}} \, dx, \label{mainresult7} \\
& \ \nonumber \\
& \frac{\pi ^2 (104- 45 \ln (3))}{180 \sqrt{3}}
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\frac{\sqrt{3-2 \sqrt{\frac{6-3 \sqrt{4-3
x}}{x}}}}{\sqrt{6}}\right)}{\sqrt[4]{4 \sqrt{4-3 x}-3 x+8}} \, dx. \label{mainresult8}
\end{align}
We also introduce an infinite family of closed-form generalizations of \eqref{mainresult8}.
Our method allows us to obtain new evaluations such as
\begin{equation}\label{firstwithdE}
\frac{\pi ^3}{64} (1-4 \ln (2)) = \int_{0}^{1} x \text{{\bf K}}^{2}\left( \sqrt{\frac{1
- \sqrt{x}}{2}} \right) \, d \text{{\bf E}}\left( \sqrt{x} \right),
\end{equation}
which may be written in an equivalent form so as to again obtain a threefold product of
complete elliptic expressions, recalling the differential relation such that $ \frac{d \text{{\bf E}}(k)}{dk} = \frac{ \text{{\bf E}}(k) -
\text{{\bf K}}(k) }{k}$.
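Explicitly, since $d \text{{\bf E}}\left( \sqrt{x} \right) = \frac{ \text{{\bf E}}\left( \sqrt{x} \right) - \text{{\bf K}}\left( \sqrt{x} \right) }{2 x} \, dx$, the evaluation in \eqref{firstwithdE} may be rewritten as
\begin{equation*}
\frac{\pi ^3}{64} (1-4 \ln (2)) = \frac{1}{2} \int_{0}^{1} \text{{\bf K}}^{2}\left( \sqrt{\frac{1 - \sqrt{x}}{2}} \right) \left( \text{{\bf E}}\left( \sqrt{x} \right) - \text{{\bf K}}\left( \sqrt{x} \right) \right) dx,
\end{equation*}
which makes the threefold product of complete elliptic expressions visible.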
In a similar vein, relative to \eqref{firstwithdE}, our method allows us to prove the following:
\begin{align}
& \frac{\pi ^2}{8} \left(3 \ln (\phi )-\frac{\sqrt{5}}{2}-\frac{\pi ^2}{10}\right)
= \int_{0}^{1} x
\text{{\bf K}}^2\left( \sqrt{\frac{1-\sqrt{\frac{1-x}{4}+1}}{2}} \right) \,
d \text{{\bf E}}\left( \sqrt{x} \right), \label{mainresult10} \\
& \ \nonumber \\
&
\frac{\pi ^2}{4} \left(\frac{\ln \left(1+\sqrt{2}\right)}{ \sqrt{2}}-\frac{13}{15}\right)
= \int_0^1 \frac{ x \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{1}{\sqrt{2} \sqrt{\sqrt{x}+1}}}\right)
}{\sqrt{\sqrt{x}+1}} \, d\text{{\bf E}}\left( \sqrt{x} \right), \label{mainresult11} \\
& \ \nonumber \\
& \frac{\pi ^2 \left(2 \ln (3)-\frac{152}{45}\right)}{16 \sqrt{3}}
= \int_0^1 \frac{ x \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{\sqrt{3 x+1}-2}{x-1}}}{\sqrt{3}}}\right)
}{\sqrt[4]{3 x+4
\sqrt{3 x+1}+5}} \, d\text{{\bf E}}\left(\sqrt{x}\right). \label{mainresult12}
\end{align}
Wan and Zucker \cite{WanZucker2016}
introduced a number of remarkable evaluations for
integrals that are of the forms suggested via \eqref{mainresult1}--\eqref{mainresult12},
i.e., definite integrals satisfying the following properties:
\begin{enumerate}
\item The definite integral is from $0$ to $1$;
\item The integrand involves a factor given by a threefold product of complete elliptic integral expressions; and
\item Any remaining integrand factors are algebraic.
\end{enumerate}
The closed-form evaluation of mathematical objects satisfying the above conditions is the main purpose of this article.
A remarkable evaluation for
an integral of this form
was given by Wan and Zucker \cite{WanZucker2016}
in the context of research on lattice sums and involves an integrand factor
of the form $2 \text{{\bf E}}(k)-\text{{\bf K}}(k)$.
We introduce, in Section \ref{subsectionWanZucker},
new closed forms for integrals satisfying the above conditions
and involving a factor of the form $2 \text{{\bf E}}(k)-\text{{\bf K}}(k)$, inspired by \cite{WanZucker2016}.
\section{Clebsch--Gordan theory}
The CG coefficients are typically defined via the phenomenon of angular momentum coupling. Following \cite{Askey1982}, we note that the CG
coefficients that have zero magnetic quantum numbers satisfy the following identity:
\begin{equation}\label{mainphysics}
\left( C_{i0b0}^{c0} \right)^{2} = \frac{2c+1}{2} \int_{-1}^{1} P_{i}(x) P_{b}(x) P_{c}(x) \, dx,
\end{equation}
where the orthogonal family of Legendre polynomials
is denoted as usual. A common definition of the Legendre polynomials
is via the binomial sum $ P_{n}(x) = \frac{1}{2^{n}} \sum_{k=0}^{n} \binom{n}{k}^{2} (x+1)^{k} (x-1)^{n-k}$.
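For example, the $n = 2$ case of this binomial sum gives
\begin{equation*}
P_{2}(x) = \frac{1}{4} \left( (x-1)^{2} + 4 (x+1)(x-1) + (x+1)^{2} \right) = \frac{3 x^{2} - 1}{2},
\end{equation*}
and the identity in \eqref{mainphysics} with $i = b = 1$ and $c = 2$ then yields $\left( C_{1010}^{20} \right)^{2} = \frac{5}{2} \int_{-1}^{1} x^{2} \cdot \frac{3 x^{2} - 1}{2} \, dx = \frac{5}{2} \cdot \frac{4}{15} = \frac{2}{3}$.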
In view of \eqref{mainphysics}, and
as in the Zhou article \cite{Zhou2014Legendre} that is the main source of motivation behind our new results,
we may define \emph{generalized Clebsch--Gordan integrals} to be of the form
\begin{equation}\label{generalizedCG}
\int_{-1}^{1} P_{\mu}(x) P_{\nu}(x) P_{\nu}(-x) \, dx
\end{equation}
for $\mu$ and $\nu$ in $\mathbb{C}$,
and this terminology is also used in the article \cite{Cantarini2022} relevant to much of our work.
The CG coefficients naturally arise both in the decomposition of a product of two spherical harmonics into spherical harmonics and, equivalently, in the
decomposition of a product of Legendre polynomials into Legendre polynomials \cite{DongLemus2002}. So, by taking a product of three Legendre polynomials
and integrating this product, the CG coefficients naturally arise according to this latter decomposition.
From the product formula for $ Y_{\ell_{1}}^{m_{1}}(\theta, \varphi) Y_{\ell_{2}}^{m_{2}}(\theta,
\varphi)$ for spherical harmonics in terms of Legendre polynomials, we may write
\begin{align}
& P_{\ell_{1}}^{m_{1}}(x) P_{\ell_{2}}^{m_{2}}(x) = \label{prodPP1} \\
& \sqrt{ \frac{ (\ell_{1} + m_{1})! (\ell_{2} + m_{2})! }{ (\ell_{1} - m_{1})! (\ell_{2} - m_{2})! } }
\sum_{\ell_{12}} \sqrt{ \frac{ (\ell_{12} - m_{12})! }{ (\ell_{12} + m_{12})! } }
C_{m_{1}, m_{2}, m_{12}}^{\ell_{1}, \ell_{2}, \ell_{12}} C_{0, 0, 0}^{\ell_{1}, \ell_{2}, \ell_{12}}
P_{\ell_{12}}^{m_{12}}(x), \label{prodPP2}
\end{align}
referring to \cite{DongLemus2002} for details.
Integrals of threefold products of Legendre polynomials of the form shown in \eqref{generalizedCG} arise in the context of the evaluation of CG
coefficients in much the same way as in the classic identity in \eqref{mainphysics}; hence it is appropriate that series and integral
evaluations arising from or otherwise directly relating to integrals as in \eqref{mainphysics} and \eqref{generalizedCG} be referred to as being of CG
type, especially in view of the product identity shown in \eqref{prodPP1}--\eqref{prodPP2}. From Fourier--Legendre expansions such as
\begin{equation}\label{FLofK}
\text{{\bf K}}\left( \sqrt{x} \right) = \sum_{n=0}^{\infty} \frac{2}{2n+1} P_{n}\left( 2 x - 1 \right),
\end{equation}
the integration of products of complete elliptic-type expressions
often gives rise to CG coefficients via identities as in \eqref{prodPP1}--\eqref{prodPP2}.
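As a brief consistency check on \eqref{FLofK}, integrating both sides over $[0, 1]$ and using the orthogonality relation $\int_{0}^{1} P_{n}(2x - 1) \, dx = \delta_{n, 0}$ recovers the classical evaluation
\begin{equation*}
\int_{0}^{1} \text{{\bf K}}\left( \sqrt{x} \right) dx = 2.
\end{equation*}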
\section{New applications of semi-integration by parts}
As indicated above, Zhou's 2014 article \cite{Zhou2014Legendre} is the main inspiration for our current work, which is also inspired by
many references citing or otherwise related to Zhou's article \cite{Zhou2014Legendre}, including
\cite{AusserlechnerGlasser2020,CampbellDAurizioSondow2019,Cantarini2022,GlasserZhou2018,RogersWanZucker2015,WanZucker2016,Zhou2022,Zhou2015,Zhou2017,Zhou2019,Zhou2014On,Zhou2016,Zhou2018},
and these references include a number of articles involving definite integrals from $0$ to $1$ with integrands containing twofold products of complete
elliptic expressions \cite{AusserlechnerGlasser2020,CampbellDAurizioSondow2019,Cantarini2022,GlasserZhou2018,Zhou2017,Zhou2019} or threefold such
products \cite{RogersWanZucker2015,WanZucker2016,Zhou2014On}.
These references add to our interest in the multiple elliptic integrals highlighted in Section \ref{subsectionMotivating}
and proved in the current Section.
A fundamental object in the field of fractional calculus is the \emph{Riemann--Liouville fractional derivative}, which is such that
\begin{align*}
D^{\alpha} f(x) & = \frac{d^{n}}{dx^{n}} \left( D^{-(n - \alpha)} f(x) \right) \\
& = \frac{1}{\Gamma(n - \alpha)} \frac{d^{n}}{dx^{n}} \left( \int_{0}^{x} \left( x - t \right)^{n
- \alpha - 1} f(t) \, dt \right),
\end{align*}
setting $n - 1 \leq \alpha \leq n$ and $n \in \mathbb{N}$.
In this regard, and following \cite{Campbell2022,CampbellCantariniDAurizio2022}, the \emph{semi-derivative} operator
$D^{1/2}$ satisfies
\begin{equation}\label{semiderivative}
D^{1/2} x^{\alpha} = \frac{\Gamma(\alpha + 1)}{\Gamma\left( \alpha + \frac{1}{2} \right)} x^{\alpha - \frac{1}{2}},
\end{equation}
and the \emph{semi-primitive} operator $D^{-1/2}$ satisfies
\begin{equation}\label{semiprimitive}
D^{-1/2} x^{\alpha} = \frac{\Gamma(\alpha + 1)}{\Gamma\left( \alpha + \frac{3}{2} \right)} x^{\alpha + \frac{1}{2}},
\end{equation}
and we refer to the operators $D^{\pm 1/2}$ as \emph{Caputo operators}. As described in \cite{CampbellCantariniDAurizio2022}, much of the interest in
the techniques in \cite{CampbellCantariniDAurizio2022} involving the operators in \eqref{semiderivative}
and \eqref{semiprimitive}
may, informally, be regarded as being given by
how the application of $D^{\pm 1/2}$
to series involving powers of central binomial coefficients
has the effect, by the Legendre duplication formula, of reducing such a power by one,
and this is often useful for the purposes of simplifying
series containing higher powers of $\binom{2n}{n}$ for $n \in \mathbb{N}_{0}$.
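The action of $D^{-1/2}$ on monomials in \eqref{semiprimitive} may be confirmed directly from the defining Riemann--Liouville integral, as in the following minimal \texttt{mpmath} sketch (the values of $\alpha$ and $x$ are arbitrary):
\begin{verbatim}
# D^{-1/2} x^a evaluated from the Riemann--Liouville integral of
# order 1/2, against the Gamma-quotient closed form above.
from mpmath import mp, mpf, gamma, quad

mp.dps = 20
a, x = mpf('1.5'), mpf('0.7')
ril = quad(lambda t: (x - t)**(-mpf(1)/2) * t**a, [0, x]) / gamma(mpf(1)/2)
closed = gamma(a + 1) / gamma(a + mpf(3)/2) * x**(a + mpf(1)/2)
print(ril, closed)   # the two values agree
\end{verbatim}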
This is formalized, in part, in \cite{Campbell2022,CampbellCantariniDAurizio2022} with the following transformation.
\begin{theorem}\label{SIBPtheorem}
(Semi-integration by parts): The equality
\begin{equation}\label{displayedSIBP}
\langle f, g \rangle = \left\langle \left( D^{1/2} \tau \right) f,
\left( \tau D^{-1/2} \right) g \right\rangle
\end{equation}
holds true if both sides are well-defined,
where $\tau$ maps a function $h(x)$ to $h(1-x)$ \cite{Campbell2022,CampbellCantariniDAurizio2022}.
\end{theorem}
From the Maclaurin series expansions
\begin{equation*}
\text{{\bf K}}(k) = \frac{\pi}{2} \, {}_{2}F_{1}\!\!\left[
\begin{matrix}
\frac{1}{2}, \frac{1}{2} \vspace{1mm}\\ 1
\end{matrix} \ \Bigg| \ k^2 \right]
\end{equation*}
and
\begin{equation}\label{EMaclaurin}
\text{{\bf E}}(k) = \frac{\pi}{2} \, {}_{2}F_{1}\!\!\left[
\begin{matrix}
\frac{1}{2}, -\frac{1}{2} \vspace{1mm}\\ 1
\end{matrix} \ \Bigg| \ k^2 \right],
\end{equation}
the term-by-term application of the Caputo operators to the power series for $\text{{\bf K}}(\sqrt{k})$ and $\text{{\bf E}}(\sqrt{k})$ yields
elementary functions, and, as explored in \cite{Campbell2022}, this is often useful in the evaluation and generalizations of integrals of CG form as in
\eqref{maintwofold}. In view of the power series expansions
\begin{equation}\label{cubedcentralbinomial}
\sum _{n=0}^{\infty} \binom{2 n}{n}^3 x^n
= \frac{4 \text{{\bf K}}^{2}\left(\frac{\sqrt{1-\sqrt{1-64 x}}}{\sqrt{2}}\right)}{\pi ^2}
\end{equation}
and
\begin{equation}\label{relatedcubedbinomial}
\sum _{n=0}^{\infty} \binom{2 n}{n}^2 \binom{4 n}{2 n} x^n
= \frac{4 \sqrt{2} \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{1-\sqrt{1-256 x}}{x}}}{16 \sqrt{2}}}\right)}{\pi ^2 \sqrt[4]{-256 x+2
\sqrt{1-256 x}+2}},
\end{equation}
we are led to consider something of an ``opposite'' strategy relative to \cite{CampbellCantariniDAurizio2022}, in the following sense: Instead of using
fractional derivatives to attempt to decrease the power of a central
binomial coefficient in a given series, we instead want to \emph{increase}
the power of $\binom{2n}{n}$, again with the use
of fractional derivatives, and by direct analogy with Theorem \ref{SIBPtheorem}.
We formalize this idea with Theorem \ref{SIBPvariant} below.
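Before proceeding, we record a quick numerical check of the generating function \eqref{cubedcentralbinomial} (an \texttt{mpmath} sketch; recall that \texttt{ellipk} takes the parameter $m = k^{2}$, and note that $|x| < \frac{1}{64}$ is required for convergence):
\begin{verbatim}
from mpmath import mp, mpf, binomial, ellipk, nsum, inf, sqrt, pi

mp.dps = 25
x = mpf(1)/200
lhs = nsum(lambda n: binomial(2*n, n)**3 * x**n, [0, inf])
m = (1 - sqrt(1 - 64*x))/2     # m = k^2 for the modulus displayed above
rhs = 4 * ellipk(m)**2 / pi**2
print(lhs, rhs)                # the two values agree
\end{verbatim}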
\begin{theorem}\label{SIBPvariant}
(A variant of SIBP for analytic functions):
For sequences $({a}_{n} : n \in \mathbb{N}_{0})$ and $({b}_{n} : n \in \mathbb{N}_{0})$,
we write ${f}(x) = \sum_{n=0}^{\infty} x^{n + \frac{1}{2}} {a}_{n} $
and $\mathfrak{g}(x) = \sum_{n=0}^{\infty} x^{n+ \frac{1}{2}} {b}_{n}$.
The inner product
\begin{equation}\label{displayinner1}
\langle {f}(x), \mathfrak{g}(1-x) \rangle
\end{equation}
may then be written as
\begin{equation}\label{displayinner2}
\left\langle \sum_{n=0}^{\infty}
\frac{\Gamma\left( n+\frac{3}{2} \right)}{\Gamma(n+1)} (1-x)^n {a}_{n},
\sum_{n=0}^{\infty} \frac{\Gamma\left( n+\frac{3}{2} \right)}{\Gamma\left( n+2 \right)}
x^{n+1} {b}_{n} \right\rangle,
\end{equation}
under the assumption that applications of $\langle \cdot, \cdot \rangle$ and infinite summation
may be reversed in \eqref{displayinner1} and \eqref{displayinner2} (cf.\ \cite{Campbell2022}).
\end{theorem}
\begin{proof}
Under the given commutativity assumption,
it remains to consider the cases whereby $a_{n} = \delta_{n, \ell}$
and $b_{n} = \delta_{n, m}$ for fixed $\ell, m \in \mathbb{N}_{0}$, letting the Kronecker delta
symbol be denoted as per usual. So, it remains to prove that
$$ \int_0^1 x^{\ell+\frac{1}{2}} (1-x)^{m+\frac{1}{2}} \, dx
= \int_{0}^{1} \left( \frac{(1-x)^\ell \Gamma \left(\ell+\frac{3}{2}\right)}{\Gamma (\ell+1)} \right)
\left( \frac{x^{m+1} \Gamma \left(m+\frac{3}{2}\right)}{\Gamma (m+2)} \right) \, dx. $$
So, from the $\Gamma$-function evaluation of the beta integral, the desired result then immediately holds.
\end{proof}
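The delta-sequence computation in the above proof is also easy to confirm numerically (an \texttt{mpmath} sketch with sample indices $\ell = 2$ and $m = 3$):
\begin{verbatim}
from mpmath import mp, mpf, gamma, quad

mp.dps = 20
l, m = 2, 3
lhs = quad(lambda x: x**(l + mpf(1)/2) * (1 - x)**(m + mpf(1)/2), [0, 1])
rhs = quad(lambda x: ((1 - x)**l * gamma(l + mpf(3)/2) / gamma(l + 1))
           * (x**(m + 1) * gamma(m + mpf(3)/2) / gamma(m + 2)), [0, 1])
print(lhs, rhs)   # both equal Gamma(l+3/2)Gamma(m+3/2)/Gamma(l+m+3)
\end{verbatim}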
Typically, to verify the required interchanges of limiting operations
specified in Theorem \ref{SIBPvariant}, one may employ
basic results in real analysis concerning term-by-term integration of infinite series \cite[\S5.3]{Bressoud2007};
for the purposes of
our applications in Section \ref{subsectionApplications}, the required term-by-term integrations
may be justified in a routine way,
but obtaining closed forms from \eqref{displayinner1}
can be quite involved and may require some degree of ingenuity
in the rare cases whereby the $a$- and $b$-sequences
are such that \eqref{displayinner2}
is expressible with a threefold product of complete elliptic integrals; this is clarified in Section \ref{subsectionApplications} below.
\subsection{Applications}\label{subsectionApplications}
The integral expressions given below are referred to as \emph{multiple elliptic integrals in CG form} by Zhou in \cite{Zhou2014Legendre}, and the following
evaluations due to Zhou \cite{Zhou2014Legendre} are highlighted as part of Corollary 2.2 in \cite{Zhou2014Legendre}. The following integrals in CG form
all involve threefold products of complete elliptic integrals, as in our new results listed in Section \ref{subsectionMotivating},
and the below formulas due to Zhou, along with many results from Zhou of a similar quality,
are main sources of inspiration concerning our results as in
Section \ref{subsectionMotivating}:
\begin{align}
& 4 \int_{0}^{1}
\frac{ (1-t) \text{{\bf K}}^{2}\left( \sqrt{1-t} \right) \text{{\bf K}}\left( \sqrt{t} \right) }{
(1 + t)^{3/2} } \, dt = \frac{ \Gamma^{2}\left( \frac{1}{8} \right)
\Gamma^{2}\left( \frac{3}{8} \right) }{24}, \label{nonintro1} \\
& \frac{27}{4} \int_{0}^{1}
\frac{ t(1 - t) \text{{\bf K}}^{2}\left( \sqrt{1 - t} \right) \text{{\bf K}}\left( \sqrt{t}
\right) }{ (1 - t + t^2)^{7/4} } \, dt
= \frac{ \Gamma^{4}\left( \frac{1}{4} \right) }{ 8 \sqrt{2 \sqrt{3}}}. \label{nonintro2}
\end{align}
Our present work is also inspired by Zhou's proof \cite{Zhou2014On} of the conjectured formula
\begin{equation}\label{nonintro3}
\int_{0}^{1} \frac{ \text{{\bf K}}^{2}\left( \sqrt{1 - k^2} \right) \text{{\bf K}}(k) }{\sqrt{k}
\left( 1 - k^2 \right)^{3/4}} \, dk
= \frac{ \Gamma^{8}\left( \frac{1}{4} \right) }{32 \sqrt{2} \pi^2}
\end{equation}
discovered experimentally by Rogers et al.\ \cite{RogersWanZucker2015}
in the context of the study of formulas as in
\begin{equation}\label{nonintro4}
\int_{0}^{1} \frac{ \text{{\bf K}}^{3}\left( \sqrt{1 - k^2} \right) }{ \sqrt{k}
\left( 1 - k^2 \right)^{3/4} } \, dk = \frac{3 \Gamma^{8}\left( \frac{1}{4} \right) }{32 \sqrt{2} \pi^2},
\end{equation}
as introduced in \cite{RogersWanZucker2015}. The above results due to Zhou et al.\ as in \eqref{nonintro1}--\eqref{nonintro4} motivate our first
Corollary to the SIBP variant formulated in Theorem \ref{SIBPvariant}, as below, in view of our discussions in Section \ref{sectionIntroduction}. As we
are to later explain, the three integral formulas highlighted in the following Corollary are ``special'' and ``non-arbitrary'' in the sense that these
specific formulas depend on the very few known closed forms for the dilogarithm function.
\begin{corollary}\label{corollaryfirstthree}
The CG-type integral evaluations in \eqref{mainresult1}--\eqref{mainresult3} hold true.
\end{corollary}
\begin{proof}
We begin by setting $a_{n} = 4^{-n} \binom{2 n}{n}$ and $b_{n} = \frac{16^{-n} (n+1) \binom{2 n}{n}^2}{2 n+1}$ in our SIBP variant given as
Theorem \ref{SIBPvariant}. An application of Theorem \ref{SIBPvariant}, according to the specified input parameters, then gives us the equality of
\begin{align*}
\int_{0}^{1} \frac{1}{12} \sqrt{x} \Bigg( 12 & \, \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{1}{2}, \frac{1}{2}, \frac{1}{2} \vspace{1mm}\\
1, \frac{3}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] + \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{3}{2}, \frac{3}{2}, \frac{3}{2} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] - \\
& x \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{3}{2}, \frac{3}{2}, \frac{3}{2} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] \Bigg) \, dx
\end{align*}
and
\begin{equation}\label{firstCGscalar}
\int_0^1 \frac{2 \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{1-x}}{2}}\right)}{\pi ^2} \, dx,
\end{equation}
where the Maclaurin series in \eqref{EMaclaurin} and \eqref{cubedcentralbinomial} have been applied to obtain, via Theorem \ref{SIBPvariant}, the
integral in \eqref{firstCGscalar}. Applying term-by-term integration to the above expression involving ${}_{3}F_{2}$-series, this gives us that
\eqref{firstCGscalar} is equivalent to $$ \sum _{n=0}^{\infty } \left(\frac{1}{(2 n+1)^2}-\frac{1}{2 (2 n+1)} - \frac{1}{(2 n+3)^2}+\frac{3}{2 (2
n+3)}-\frac{1}{2 n+5}\right) \binom{2 n}{n} 2^{-2 n}. $$ Expanding the above summand, the resultant series are classically known, giving us a closed
form for \eqref{firstCGscalar} equivalent to \eqref{mainresult1}. The same approach as above, as applied to the cases such that $a_{n} = 4^{-n}
\binom{2 n}{n}$ and $b_{n} = \frac{ (n+1) \binom{2 n}{n}^2 \left( -\frac{1}{64} \right)^n}{2 n+1}$, may be used to prove the CG-type integral
evaluation in \eqref{mainresult3}. Similarly, setting $a_{n} = 4^{-n} \binom{2 n}{n}$ and $b_{n} = \frac{ (n+1) \binom{2 n}{n}^2 \left( \frac{1}{32}
\right)^n}{2 n+1}$ may be used to prove \eqref{mainresult2}.
\end{proof}
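The equality of the integral \eqref{firstCGscalar} and the central binomial series displayed in the above proof may be confirmed numerically, as in the following \texttt{mpmath} sketch (with \texttt{ellipk}/\texttt{ellipe} taking the parameter $m = k^{2}$):
\begin{verbatim}
from mpmath import mp, mpf, binomial, ellipk, ellipe, quad, nsum
from mpmath import inf, sqrt, pi

mp.dps = 15

def integrand(x):
    m = mpf(1)/2 - sqrt(1 - x)/2     # parameter of the squared K factor
    return 2 * ellipe(1 - x) * ellipk(m)**2 / pi**2

def term(n):
    c = (1/(2*n + 1)**2 - 1/(2*(2*n + 1)) - 1/(2*n + 3)**2
         + 3/(2*(2*n + 3)) - 1/(2*n + 5))
    return c * binomial(2*n, n) / mpf(4)**n

print(quad(integrand, [0, 1]), nsum(term, [0, inf]))   # agree
\end{verbatim}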
Setting $a_{n} = 4^{-n} \binom{2 n}{n}$ and $b_{n} = \frac{16^{-n} (n+1) \binom{2 n}{n}^2 \alpha ^n}{2 n+1}$
for a free parameter $\alpha$ in Theorem \ref{SIBPvariant}, by mimicking our proof of
Corollary \ref{corollaryfirstthree},
we can show that
\begin{equation}\label{firstinfinite}
\int_0^1 \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{1}{2} \sqrt{1-\alpha x}}\right) \, dx
\end{equation}
may be expressed as a combination of elementary functions, closed forms, together with the two-term dilogarithm combination
\begin{equation}\label{twotermLi2}
\text{Li}_2\left(-\sqrt{1-\alpha }-\sqrt{-\alpha }\right)-\text{Li}_2\left(-\sqrt{1-\alpha }-\sqrt{-\alpha }+1\right).
\end{equation}
So, the CG-type integral in \eqref{firstinfinite} admits a closed form if and only if the $\text{Li}_{2}$-combination in \eqref{twotermLi2} admits a closed
form. We have established a new connection between integrals related to CG theory and the closed-form evaluation of dilogarithmic expressions, and
it seems that there is not much known about connections of this form. Furthermore, past research articles exploring the closed-form evaluation of
two-term dilogarithm combinations as in \cite{Campbell2021Some,Khoi2014,Lima2012,Lima2017,Stewart2022} motivate our interest in the relationship
between \eqref{firstinfinite} and \eqref{twotermLi2} that we have introduced. By systematically setting each of the arguments of the two
$\text{Li}_{2}$-expressions in \eqref{twotermLi2} to be equal to the eight known real values $x$ such that both $\text{Li}_{2}(x)$ and $x$ admit
closed forms \cite[pp.\ 4, 6--7]{Lewin1981}, the only $\alpha$-value that yields a closed form with real arguments in \eqref{twotermLi2} is $\alpha
= -\frac{1}{4}$. A similar argument may be used to explain the uniqueness for the $\alpha = \frac{1}{2}$ case. A similar uniqueness property may be
applied to the results highlighted in Corollary \ref{corollarynextthree} below, as we later explain.
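To illustrate the $\alpha = -\frac{1}{4}$ case: the arguments in \eqref{twotermLi2} then reduce to $-\phi$ and $1 - \phi = -1/\phi$ for the golden ratio $\phi$, and the combination collapses to $-\frac{\pi^2}{30} - \frac{3}{2} \ln^{2} \phi$ by the classical special values recorded in \cite[pp.\ 4, 6--7]{Lewin1981}. A minimal \texttt{mpmath} check:
\begin{verbatim}
from mpmath import mp, mpf, sqrt, polylog, pi, log

mp.dps = 25
alpha = mpf(-1)/4
u = -sqrt(1 - alpha) - sqrt(-alpha)          # = -phi
combo = polylog(2, u) - polylog(2, u + 1)    # the two-term combination
phi = (1 + sqrt(5))/2
print(combo, -pi**2/30 - mpf(3)/2 * log(phi)**2)   # agree
\end{verbatim}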
\begin{corollary}\label{corollarynextthree}
The CG-type integral evaluations in \eqref{mainresult4}--\eqref{mainresult7} hold true.
\end{corollary}
\begin{proof}
In Theorem \ref{SIBPvariant}, we set $a_{n} = 4^{-n} \binom{2 n}{n}$
and $b_{n} = \frac{64^{-n} (n+1) \binom{2 n}{n} \binom{4 n}{2 n}}{2 n+1}$.
Using \eqref{relatedcubedbinomial}, Theorem \ref{SIBPvariant} then gives us the equality of
\begin{align*}
\int_{0}^{1} \frac{1}{16} \sqrt{x} \Bigg( 16 & \,
\, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{1}{4}, \frac{1}{2}, \frac{3}{4} \vspace{1mm}\\
1, \frac{3}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] + \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{5}{4}, \frac{3}{2}, \frac{7}{4} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] - \\
& x \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{5}{4}, \frac{3}{2}, \frac{7}{4} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] \Bigg) \, dx
\end{align*}
and
\begin{equation}\label{multiplenotcubed}
\int_0^1 \frac{2 \sqrt{2} \text{{\bf E}}\left(\sqrt{1-x}\right)
\text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{1-\sqrt{1-x}}{x}}}{\sqrt{2}}}\right)}{\pi ^2
\sqrt[4]{-x+2 \sqrt{1-x}+2}} \, dx.
\end{equation}
Applying term-by-term integration to the above expression involving ${}_{3}F_{2}$-series, we find that the multiple elliptic integral in
\eqref{multiplenotcubed} equals $$ \sum _{n=0}^{\infty } \left(\frac{4}{(2 n + 1)^2} - \frac{33}{16 (2 n + 1)}-\frac{15}{4 (2 n + 3)^2} +
\frac{6}{2 n + 3} - \frac{63}{16 (2 n + 5)}\right) \binom{4 n}{2 n} 4^{-2 n - 1}. $$ This reduces, via classically known series expansions, to:
$$ \sum _{n=0}^{\infty } \frac{4^{-2 n} \binom{4 n}{2 n}}{(2 n + 1)^2} - \sum _{n=0}^{\infty } \frac{15 \cdot 4^{-2-2 n} \binom{4 n}{2 n}}{(2 n +
3)^2}-\frac{3}{10 \sqrt{2}}. $$ So, it remains to evaluate
\begin{equation}\label{3F24F3}
{}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{1}{4}, \frac{1}{2}, \frac{3}{4} \vspace{1mm}\\
\frac{3}{2}, \frac{3}{2}
\end{matrix} \ \Bigg| \ 1 \right] \ \ \ \text{and} \ \ \
{}_{4}F_{3}\!\!\left[
\begin{matrix}
\frac{1}{4}, \frac{3}{4}, \frac{3}{2}, \frac{3}{2} \vspace{1mm}\\
\frac{1}{2}, \frac{5}{2}, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 \right].
\end{equation}
For the former case, we may rewrite this ${}_{3}F_{2}(1)$-series as $$\int _0^1\int _0^1\frac{\sqrt{1+\sqrt{1-t^2 u^2}}}{\sqrt{2} \sqrt{1-t^2 u^2}} \,
dt \, du, $$ and the corresponding antiderivatives admit elementary forms. As for the ${}_{4}F_{3}(1)$-series, the same argument together with a
reindexing may be applied.
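Numerically, the former evaluation may be cross-checked as follows (an \texttt{mpmath} sketch; the two-dimensional quadrature is slow, and the integrable corner singularity limits the agreement to several digits):
\begin{verbatim}
from mpmath import mp, mpf, hyp3f2, quad, sqrt

mp.dps = 10
F = hyp3f2(mpf(1)/4, mpf(1)/2, mpf(3)/4, mpf(3)/2, mpf(3)/2, 1)
I = quad(lambda t, u: sqrt(1 + sqrt(1 - t**2 * u**2))
         / (sqrt(2) * sqrt(1 - t**2 * u**2)), [0, 1], [0, 1])
print(F, I)
\end{verbatim}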
We proceed to set
\begin{equation}\label{aandbalpha}
a_{n} = 4^{-n} \binom{2 n}{n} \ \ \ \text{and} \ \ \ b_{n} = \frac{64^{-n} (n+1) \binom{2 n}{n} \binom{4 n}{2 n} \alpha ^n}{2 n+1}
\end{equation}
in Theorem \ref{SIBPvariant}. For the $\alpha = -\frac{16}{9}$ case, we may mimic our above proof to
prove \eqref{mainresult5}. For the $\alpha = -{8}$ case,
we may again mimic our above proof to prove \eqref{mainresult6}.
For the $\alpha = -{48}$ case, we may, once again, mimic our above proof to prove \eqref{mainresult7}.
\end{proof}
Setting $\alpha = \frac{3}{4}$ in \eqref{aandbalpha} may be used to prove \eqref{mainresult8}. The $\alpha = \frac{3}{4}$ case has led us to discover a complex analytic
property concerning the arctanh function that may be applied to prove the closed-form evaluation for an infinite family of generalizations of
\eqref{mainresult8}, as in Corollary \ref{infinitecorollary} below.
\begin{corollary}\label{infinitecorollary}
The CG-type integral
\begin{equation}\label{infinitefamintegral}
\int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{1-\sqrt{1-\alpha x}}{\alpha
x}}}{\sqrt{2}}}\right)}{\sqrt[4]{2 \sqrt{1-\alpha x} - \alpha x + 2}} \, dx
\end{equation}
equals $$ \frac{\pi ^2 \left(36 \alpha +2 \sqrt{1-\alpha }-15 \sqrt{2} \sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha }} \alpha \coth ^{-1}\left(\frac{\sqrt{2}
\sqrt{\sqrt{1-\alpha }+1}}{\sqrt{\alpha }}\right)-2\right)}{60 \sqrt{\sqrt{1-\alpha }+1} \alpha } $$ for positive values $\alpha$.
\end{corollary}
\begin{proof}
We again set the $a$- and $b$-sequences as in \eqref{aandbalpha}. Mimicking our proof of Corollary \ref{corollarynextthree}, we can show that
$$ \sum _{n=0}^{\infty } \left(\frac{4}{(2 n+1)^2} + \frac{-32-\alpha }{16 (2 n+1)} - \frac{15 \alpha }{4 (2 n+3)^2} + \frac{2 (1+2 \alpha )}{2 n +
3} - \frac{63 \alpha }{16 (2 n+5)}\right) \binom{4 n}{2 n} 2^{-2-4 n} \alpha ^n $$ equals
\begin{equation}\label{infiniteclosed}
\int_0^1 \frac{2 \sqrt{2} \text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{\frac{1-\sqrt{1-\alpha x}}{\alpha
x}}}{\sqrt{2}}}\right)}{\pi ^2 \sqrt[4]{\alpha (-x)+2 \sqrt{1-\alpha x}+2}} \, dx
\end{equation}
for real values $\alpha$.
By direct analogy with our proof for
the ${}_{3}F_{2}$-series in \eqref{3F24F3}, we can show that
$$ {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{1}{4}, \frac{1}{2}, \frac{3}{4} \vspace{1mm}\\
\frac{3}{2}, \frac{3}{2}
\end{matrix} \ \Bigg| \ \alpha \right]
= \frac{i \pi }{\sqrt{\alpha }}+\sqrt{2} \left(\frac{2}{\sqrt{\sqrt{1-\alpha }+1}}-\frac{2 \sqrt{2} \tanh ^{-1}\left(\frac{\sqrt{\alpha }+i
\left(\sqrt{1-\alpha }+1\right)}{\sqrt{2} \sqrt{\sqrt{1-\alpha }+1}}\right)}{\sqrt{\alpha }}\right) $$
for real $\alpha$. This can be used to show that
\eqref{infiniteclosed} is equal to the expression given by the following Mathematica output:
\begin{verbatim}
((-240*I)*Pi*Sqrt[1 + Sqrt[1 - \[Alpha]]]*Sqrt[\[Alpha]] -
524*Sqrt[2]*\[Alpha] - 524*Sqrt[2 - 2*\[Alpha]]*\[Alpha] - 9*Sqrt[1 +
Sqrt[1 - \[Alpha]]]*(-(Sqrt[2]*Sqrt[1 + Sqrt[1 - \[Alpha]]]) +
2*Sqrt[2]*Sqrt[1 + Sqrt[1 - \[Alpha]]]*Sqrt[1 - \[Alpha]])*(2 + 2*Sqrt[1 -
\[Alpha]] - \[Alpha])*\[Alpha] + (120*I)*Pi*Sqrt[1 + Sqrt[1 -
\[Alpha]]]*\[Alpha]^(3/2) + 225*Sqrt[2]*\[Alpha]^2 - 45*Sqrt[2 -
2*\[Alpha]]*\[Alpha]^2 + 18*Sqrt[2]*\[Alpha]^3 - (240*I)*Pi*Sqrt[1 +
Sqrt[1 - \[Alpha]]]*Sqrt[-((-1 + \[Alpha])*\[Alpha])] - 480*Sqrt[1 +
Sqrt[1 - \[Alpha]]]*(-2*Sqrt[\[Alpha]] + \[Alpha]^(3/2) - 2*Sqrt[-((-1 +
\[Alpha])*\[Alpha])])*ArcTanh[(I*(1 + Sqrt[1 - \[Alpha]]) +
Sqrt[\[Alpha]])/(Sqrt[2]*Sqrt[1 + Sqrt[1 - \[Alpha]]])])/(240*(-1 + Sqrt[1 -
\[Alpha]])*(1 + Sqrt[1 - \[Alpha]])^(7/2))
\end{verbatim}
So, it remains to evaluate
\begin{equation}\label{maincomplex}
\tanh ^{-1}\left(\frac{\sqrt{\alpha }+i \left(\sqrt{1-\alpha }+1\right)}{\sqrt{2} \sqrt{\sqrt{1-\alpha }+1}}\right)
\end{equation}
for real variables $\alpha$.
For positive values $\alpha$, the usual
extension of the arctanh function for complex arguments gives us that \eqref{maincomplex}
equals
\begin{align*}
& -\frac{1}{4} \ln \left(\left(1-\frac{1}{\sqrt{2} \sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha }}}\right)^2+\frac{1}{2} \left(\sqrt{1-\alpha
}+1\right)\right) + \\
& \frac{1}{4} \ln \left(\left(\frac{1}{\sqrt{2} \sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha }}}+1\right)^2+\frac{1}{2}
\left(\sqrt{1-\alpha }+1\right)\right)+ \\
& i \left(\frac{1}{2} \tan ^{-1}\left(\frac{\sqrt{\sqrt{1-\alpha }+1}}{\sqrt{2} \left(1-\frac{1}{\sqrt{2}
\sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha }}}\right)}\right)+\frac{1}{2} \tan ^{-1}\left(\frac{\sqrt{\sqrt{1-\alpha }+1}}{\sqrt{2}
\left(\frac{1}{\sqrt{2} \sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha }}}+1\right)}\right)\right)
\end{align*}
Differentiating the expression
$$ \frac{1}{2} \tan ^{-1}\left(\frac{\sqrt{\sqrt{1-\alpha }+1}}{\sqrt{2} \left(1-\frac{1}{\sqrt{2} \sqrt{\frac{\sqrt{1-\alpha }+1}{\alpha
}}}\right)}\right)+\frac{1}{2} \tan ^{-1}\left(\frac{\sqrt{\sqrt{1-\alpha }+1}}{\sqrt{2} \left(\frac{1}{\sqrt{2} \sqrt{\frac{\sqrt{1-\alpha
}+1}{\alpha }}}+1\right)}\right)$$
can be used to show that the above expression always equals $\frac{\pi }{4}$,
and this gives us our desired closed form for the $\alpha > 0$ case.
\end{proof}
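As a numerical illustration of Corollary \ref{infinitecorollary}, the following \texttt{mpmath} sketch compares the integral in \eqref{infinitefamintegral} with the stated closed form at the sample point $\alpha = 1$:
\begin{verbatim}
from mpmath import mp, mpf, ellipk, ellipe, quad, sqrt, pi, acoth

mp.dps = 20
alpha = mpf(1)

def integrand(x):
    r = sqrt(1 - alpha*x)
    # (1 - r)/(alpha*x) = 1/(1 + r), rewritten here for stability
    m = mpf(1)/2 - 1/sqrt(2*(1 + r))     # parameter of the squared K
    return ellipe(1 - x) * ellipk(m)**2 / (2*r - alpha*x + 2)**mpf('0.25')

s = sqrt(1 - alpha)
closed = pi**2 * (36*alpha + 2*s - 15*sqrt(2)*sqrt((s + 1)/alpha)*alpha
                  * acoth(sqrt(2)*sqrt(s + 1)/sqrt(alpha)) - 2) \
         / (60*sqrt(s + 1)*alpha)
print(quad(integrand, [0, 1]), closed)   # the two values agree
\end{verbatim}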
If $\alpha < 0$, the evaluation of \eqref{maincomplex} proves to be more challenging with regard to producing closed forms as
in Corollary \ref{corollarynextthree}. This is formalized below. For the time being, we remark that the arccoth evaluation in Corollary
\ref{infinitecorollary} is of interest in terms of how it may be used to obtain simple closed forms for CG-type integrals, by systematically
searching for $\alpha$-values such that the arccoth argument in Corollary \ref{infinitecorollary} is reducible to simple constants such as $\ln 2$.
\begin{example}
Setting $\alpha = \frac{576}{625}$, we can prove the evaluation
$$ \frac{\pi ^2 (551-400 \ln (2))}{3840 \sqrt{2}}
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^2\left(\frac{\sqrt{24-5 \sqrt{\frac{50-2 \sqrt{625-576 x}}{x}}}}{4
\sqrt{3}}\right)}{\sqrt[4]{50 \left(\sqrt{625-576 x}+25\right)-576 x}} \, dx. $$
\end{example}
\begin{example}
Setting $\alpha = \frac{32}{81}$, we can prove the evaluation
$$ \pi ^2 \left(\frac{93}{640}-\frac{3 \ln (2)}{32}\right)
= \int_0^1 \frac{\text{{\bf E}}\left(\sqrt{1-x}\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{3}{8} \sqrt{\frac{9-\sqrt{81-32
x}}{x}}}\right)}{\sqrt[4]{18 \left(\sqrt{81-32 x}+9\right)-32 x}} \, dx. $$
\end{example}
\begin{corollary}\label{classificationcorollary}
For $\alpha < 0$, the integral in \eqref{infinitefamintegral} is expressible in closed
form as a finite combination of algebraic expressions and a given set of previously recognized constants
if and only if the same applies to
\begin{equation}\label{mainarctan}
\tan ^{-1}\left(\frac{\sqrt{1-\alpha }+\frac{1}{\sqrt{-\frac{1}{\alpha }}}+1}{\sqrt{2} \sqrt{\sqrt{1-\alpha }+1}}\right).
\end{equation}
\end{corollary}
\begin{proof}
From the evaluation for \eqref{infiniteclosed} given in the proof of Corollary \ref{infinitecorollary}, we find that the desired integral in
\eqref{infinitefamintegral} equals a finite combination of algebraic expressions together with $\pi^2$ and \eqref{maincomplex}, for all real values
$ \alpha$. For negative values $\alpha$, the usual extension of the arctanh function to complex arguments gives us that \eqref{maincomplex} equals the
imaginary unit times the arctan expression in \eqref{mainarctan}, and hence the desired result.
\end{proof}
The classification result in Corollary \ref{classificationcorollary} is of interest in the following sense: If we want to obtain a closed form for the CG-type
integral in \eqref{infinitefamintegral} for rational values $\alpha$, then it is only in exceptional cases that \eqref{mainarctan} will admit a closed form.
A systematic computer search based on the algebraic values of $\tan(q \pi)$ for $q \in \mathbb{Q}$ further demonstrated that \eqref{mainarctan} is
expressible in closed form for $\alpha \in \mathbb{Q}$ in only exceptional cases, which emphasizes the unique quality of the CG-type integrals
highlighted in \eqref{mainresult4}--\eqref{mainresult7}. For example, setting $\alpha = -\frac{1}{3}$ yields an expression involving
$$ \tan^{-1}\left(\sqrt{\sqrt{3}-\frac{3}{2}} \left(1+\sqrt{3}\right)\right),$$ which does not seem to be reducible, e.g., to a rational multiple of $\pi$ or
otherwise. The $\alpha = -\frac{16}{9}$ case corresponds to the closed form $\tan \left(\frac{\pi }{3}\right) = \sqrt{3}$, and similarly for
\eqref{mainresult6} and \eqref{mainresult7}.
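To make the reduction explicit in the $\alpha = -\frac{16}{9}$ case: here $\sqrt{1-\alpha} = \frac{5}{3}$ and $1/\sqrt{-1/\alpha} = \frac{4}{3}$, so that the argument of the arctan expression in \eqref{mainarctan} equals
$$ \frac{\frac{5}{3} + \frac{4}{3} + 1}{\sqrt{2} \sqrt{\frac{5}{3} + 1}} = \frac{4}{4/\sqrt{3}} = \sqrt{3}, $$
in agreement with the $\tan \left( \frac{\pi}{3} \right) = \sqrt{3}$ closed form indicated above.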
The integral evaluations from Section \ref{subsectionMotivating} listed as \eqref{firstwithdE}--\eqref{mainresult12}
may be proved via Theorem \ref{SIBPvariant}
in much the same way as with Corollaries \ref{corollaryfirstthree}--\ref{infinitecorollary};
for the sake of brevity, we leave it to the reader to verify this.
\subsection{New integrals inspired by Wan and Zucker}\label{subsectionWanZucker}
Integrals involving threefold products of complete elliptic functions
as in the formulas
\begin{equation}\label{nonintro5}
\frac{\pi ^3}{6 \sqrt{2}} =
\int_0^1 \sqrt{\frac{k}{\sqrt{1-k^2}}} \text{{\bf K}}^{2}\left(\sqrt{1-k^2}\right)
(2 \text{{\bf E}}(k)-\text{{\bf K}}(k)) \, dk
\end{equation}
and
\begin{equation}\label{nonintro6}
\frac{\Gamma^4 \left(\frac{1}{8}\right)
\Gamma^4 \left(\frac{3}{8}\right)}{384 \sqrt{2} \pi ^2}
= \int_0^1 \frac{\left(2 + 3 k - k^2\right) \text{{\bf K}}^3(k)}{\sqrt{k+1}} \, dk
\end{equation}
and
\begin{equation}\label{nonintro7}
\frac{\left(\sqrt{2}-1\right)^{3/2} \Gamma^8 \left(\frac{1}{4}\right)}{128 \sqrt{2} \pi ^2}
= \int_0^1 \sqrt[4]{k} \sqrt[4]{1-k^2} \text{{\bf K}}^3(k) \, dk
\end{equation}
were given by Wan and Zucker in \cite{WanZucker2016} in the context of the study of lattice sums, and this inspires the new results we introduce below,
which resemble the Wan--Zucker formula in \eqref{nonintro5}, as we provide closed forms resembling the left-hand side of \eqref{nonintro5} for integrals
satisfying the three listed conditions in Section \ref{subsectionMotivating} together with the condition that the integrands are to contain the last integrand
factor displayed in \eqref{nonintro5}.
\begin{corollary}\label{corollarylattice}
The CG-type integral evaluations below hold:
\begin{align*}
& \frac{\pi ^3 (29+32 \ln (2))}{2048}
= \int_0^1 y \left(1-y^2\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{y}{2}}\right)
(2 \text{{\bf E}}(y)-\text{{\bf K}}(y)) \, dy, \\
& \ \\
& \frac{\pi ^2 G}{32 \sqrt{2}} +
\frac{7 \pi ^2}{64 \sqrt{2}}-\frac{9 \pi ^3}{512 \sqrt{2}}+\frac{\pi ^3 \ln (2)}{128 \sqrt{2}} = \\
& \int_0^1 y \left(1 -
y^2\right) \text{{\bf K}}^{2}\left(\frac{\sqrt{2-\sqrt{2} \sqrt{y^2+1}}}{2} \right)
(2 \text{{\bf E}}(y)-\text{{\bf K}}(y)) \, dy, \\
& \ \\
& -\frac{57}{64} \pi ^2 \ln (\phi )+\frac{\pi ^4}{320}+\frac{53 \sqrt{5} \pi ^2}{256} = \\
& \int_0^1 y \left(1-y^2\right) \text{{\bf K}}^{2}\left(\frac{\sqrt{2-\sqrt{5-y^2}}}{2} \right)
(2 \text{{\bf E}}(y)-\text{{\bf K}}(y)) \, dy.
\end{align*}
\end{corollary}
\begin{proof}
We set
\begin{equation}\label{latticeinput}
a_{n} = \frac{4^{-n} \binom{2 n}{n}}{2 n-1} \ \ \
\text{and} \ \ \ b_{n} = \frac{16^{-n} (n+1) \binom{2 n}{n}^2}{2 n+1}
\end{equation}
in Theorem \ref{SIBPvariant}.
This gives us the equality of
\begin{align*}
\int_{0}^{1} -\frac{1}{12} (1-x) \sqrt{x} \Bigg( 12 & \,
\, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{1}{2}, \frac{1}{2}, \frac{1}{2} \vspace{1mm}\\
1, \frac{3}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] + \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{3}{2}, \frac{3}{2}, \frac{3}{2} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] - \\
& x \, {}_{3}F_{2}\!\!\left[
\begin{matrix}
\frac{3}{2}, \frac{3}{2}, \frac{3}{2} \vspace{1mm}\\
2, \frac{5}{2}
\end{matrix} \ \Bigg| \ 1 - x \right] \Bigg) \, dx
\end{align*}
and $$ \int_0^1 -\frac{2 x \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{\sqrt{1-x}}{2}}\right) \left(2 \text{{\bf E}}\left(\sqrt{1-x}\right)-\text{{\bf
K}}\left(\sqrt{1-x}\right)\right)}{\pi ^2} \, dx. $$ Reversing integration and infinite summation in the above expression involving
${}_{3}F_{2}$-functions, we obtain the expression
\begin{align*}
& \sum _{n=0}^{\infty } \Bigg(-\frac{1}{8 (2 n+1)^2}-\frac{1}{32 (2 n+1)}+\frac{1}{8 (2 n+3)^2} + \\
& \frac{3}{32 (2 n+3)}-\frac{11}{32 (2 n+5)} + \frac{9}{32 (2 n+7)} \Bigg) \binom{2 n}{n} 2^{1-2 n},
\end{align*}
which reduces to $\frac{1}{512} (-29 \pi -32 \pi \ln (2))$ according to classically known series.
So, a change of variables then gives us the desired closed form for
$$ \int_0^1 y \left(1-y^2\right) \text{{\bf K}}^{2}\left(\sqrt{\frac{1}{2}-\frac{y}{2}}\right)
(2 \text{{\bf E}}(y)-\text{{\bf K}}(y)) \, dy,$$
and similarly for the remaining integrals in the Corollary under consideration.
\end{proof}
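The first of the above evaluations may be confirmed numerically as follows (an \texttt{mpmath} sketch; in \texttt{mpmath}'s parameter convention, $\text{{\bf K}}(y) = \texttt{ellipk}(y^2)$):
\begin{verbatim}
from mpmath import mp, mpf, ellipk, ellipe, quad, pi, log

mp.dps = 15

def integrand(y):
    return (y*(1 - y**2) * ellipk(mpf(1)/2 - y/2)**2
            * (2*ellipe(y**2) - ellipk(y**2)))

print(quad(integrand, [0, 1]), pi**3 * (29 + 32*log(2)) / 2048)
\end{verbatim}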
By setting $a_{n} = \frac{4^{-n} \binom{2 n}{n}}{2 n-1}$ and $b_{n} = \frac{16^{-n} (n+1) \binom{2 n}{n}^2}{2 n+1} \alpha^{n}$ in Theorem
\ref{SIBPvariant}, the resultant CG-type integral can be shown to be reducible to a combination of elementary functions together with the same
two-term dilogarithm combination in \eqref{twotermLi2}. So, we may repeat a previous argument to explain the uniqueness of the closed forms in
Corollary \ref{corollarylattice}. Using variants of the input sequences in \eqref{latticeinput}, we may obtain many further integrals involving $2 \text{{\bf
E}}(y) - \text{{\bf K}}(y)$, and we encourage the exploration of this.
\section{Conclusion}
We conclude by briefly considering some subjects of further research connected with the above material.
Although our article is mainly devoted to integrals involving threefold products of products of $\text{{\bf K}}$ and/or $\text{{\bf E}}$, a careful
examination of Zhou's 2014 article \cite{Zhou2014Legendre} relative to the main material in this article and relative to material concerning integrals
involving $\text{{\bf K}}$ and/or $\text{{\bf E}}$ from relevant references such as \cite{Campbell2021New,CampbellDAurizioSondow2019} leads to new
results on CG-type integrals that involve twofold products and that are related to what Zhou refers to as \emph{Ramanujan transformations}
\cite{Zhou2014Legendre}. By ``reverse-engineering'' formulas from Ramanujan's notebooks that were presented without proof, Zhou
\cite{Zhou2014Legendre} showed that
\begin{equation}\label{Ramanujantransform1}
\text{{\bf K}}^{2}\left( \sqrt{t} \right) = \frac{2}{\pi} \int_{0}^{1}
\frac{ \text{{\bf K}}\left( \sqrt{\mu} \right) \text{{\bf K}}\left( \sqrt{1 - \mu} \right) }{1 - \mu t} \, d\mu
\end{equation}
and that
\begin{equation}\label{Ramanujantransform2}
\text{{\bf K}}^{2}\left( \sqrt{1 - t} \right)
= \frac{8}{\pi} \int_{0}^{1} \frac{ \text{{\bf K}}\left( \sqrt{\mu} \right)
\text{{\bf K}}\left( \sqrt{1 - \mu} \right) }{ (1 + \sqrt{t})^{2} - \mu (1 - \sqrt{t})^{2} } \, d\mu,
\end{equation}
and these Ramanujan-inspired transforms may be used to generalize the CG-type integral in
\eqref{maintwofold}, in the following manner, using material from
\cite{Campbell2021New,CampbellDAurizioSondow2019}.
The closed-form evaluation
\begin{equation}\label{Polishformula}
\sum_{i, j = 0}^{\infty} \frac{ \binom{2i}{i}^2 \binom{2j}{j}^2 }{4^{2i+2j} (i + j + 1)}
= \frac{14 \zeta(3)}{\pi^2}
\end{equation}
introduced in \cite{Campbell2021New} can be shown to be equivalent to a corresponding evaluation for
\begin{equation}\label{Polishequivalent}
\frac{7 \zeta (3)}{2} = \int_{0}^{1} \text{{\bf K}}^{2}\left( \sqrt{t} \right) \, dt,
\end{equation}
through the use of the Fourier--Legendre expansion in \eqref{FLofK}, and this approach was generalized and explored in \cite{Campbell2021New} through the
use of the moment formula for Legendre polynomials. So, from the evaluation in terms of Ap\'{e}ry's constant for \eqref{Polishequivalent}, by applying the
operator $\int_{0}^{1} \cdot \, dt $ to both sides of \eqref{Ramanujantransform1} and then applying Fubini's theorem, this gives us an evaluation for
\begin{equation}\label{generalizetwofold}
-\frac{7 \pi \zeta (3)}{4} = \int_{0}^{1}
\frac{\ln (1-\mu )}{\mu } \text{{\bf K}}\left( \sqrt{\mu} \right) \text{{\bf K}}\left( \sqrt{1 - \mu} \right) \, d\mu,
\end{equation}
which was proved in a different way by Wan in \cite{Wan2012}. By mimicking the above described approach for evaluating
\eqref{generalizetwofold}, with the use of the many infinite families of double sums generalizing or otherwise related to \eqref{Polishformula} given
in \cite{Campbell2021New} and the follow-up article \cite{ChuCampbell2023} greatly generalizing the techniques from \cite{Campbell2021New}, this
leads us to new families of evaluations for CG-type integrals of the form
\begin{equation}\label{generalizetwofoldfamily}
\int_{0}^{1} F(\mu) \text{{\bf K}}\left( \sqrt{\mu} \right) \text{{\bf K}}\left( \sqrt{1 - \mu} \right) \, d\mu
\end{equation}
for elementary functions $F(\mu)$,
and similarly with respect to the Ramanujan transform in \eqref{Ramanujantransform2}.
We leave it to a separate project to pursue a full exploration of this.
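As a small numerical confirmation of the evaluation in \eqref{generalizetwofold} (an \texttt{mpmath} sketch, with the parameter convention $\text{{\bf K}}\left( \sqrt{\mu} \right) = \texttt{ellipk}(\mu)$):
\begin{verbatim}
from mpmath import mp, ellipk, quad, log, pi, zeta

mp.dps = 15
I = quad(lambda u: log(1 - u)/u * ellipk(u) * ellipk(1 - u), [0, 1])
print(I, -7*pi*zeta(3)/4)   # the two values agree
\end{verbatim}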
Recall that our SIBP variant presented as Theorem \ref{SIBPvariant} involved the series $${f}(x) = \sum_{n=0}^{\infty} x^{n + \frac{1}{2}} {a}_{n} \ \
\ \text{and} \ \ \ \mathfrak{g}(x) = \sum_{n=0}^{\infty} x^{n+ \frac{1}{2}} {b}_{n}.$$ A fruitful area of research to explore would involve the
application of further variants of SIBP based on the series obtained by replacing $x^{n + \frac{1}{2}}$ in the expansions for $f(x)$ and
$\mathfrak{g}(x) $, respectively, with $x^{n+h_{1}}$ and $x^{n + h_{2}}$ for half-integer parameters $h_{1}$ and $h_{2}$. In a similar spirit, the
SIBP formula in \eqref{displayedSIBP} may be extended using operators other than $D^{\pm 1/2}$ such as $D^{\pm 1/4}$.
\subsection*{Acknowledgements}
The author is very grateful to Dr.\ Yajun Zhou for many useful discussions related to the results introduced in this article.
\subsection*{Competing interests statement}
There are no competing interests to declare.
\label{sec:intro}
\vspace*{-0.4cm}
Data throughput and coverage enhancements are of paramount importance in fifth-generation (5G) and beyond networks \cite{giordani2020toward}. In this context, reconfigurable intelligent surfaces (RISs) have received significant attention from academic and industrial researchers because of their ability to control the wireless propagation environment through passive reflecting elements integrated with low-cost electronics \cite{qian2020beamforming, wu2019intelligent, abrardo2020intelligent}.
The complex nature of wireless environments results in propagation channels that are characterized by small-scale and large-scale fading. RISs aim at shaping the electromagnetic waves in complex wireless environments by appropriately optimizing the phases of their constitutive elements (i.e., the unit cells). Due to the small-scale and large-scale dynamics that characterize a complex wireless channel, the phase shifts of the RIS elements can be optimized based on different time scales \cite{Chien2021TWC, zhi2021two}. Most of the works in the literature have considered the optimization of the phase shifts by assuming the perfect knowledge of the instantaneous channel state information, and different performance metrics have been considered \cite{wu2019intelligent, zhang2020sum, zappone2020overhead}. This optimization criterion is based on adjusting the phase shifts of the RIS elements based on the small-scale dynamics of the channel, and, therefore, results in the best achievable performance. This optimization criterion may, however, not be applicable in some application scenarios that are characterized by a short coherent time, since the optimal phase shifts of the RIS elements need be updated frequently in order to adapt to the rapid changes that characterize the small-scale fading \cite{jung2020performance}.
Another option for optimizing the phase shifts of the RIS elements is based on leveraging only statistical channel state information (CSI), i.e., the large-scale characteristics of the wireless channel \cite{abrardo2020intelligent, 9140329}.
Optimization criteria based on long-term CSI need to be updated less frequently, and this reduces the channel estimation overhead. However, some performance degradation is expected as compared with the optimal phase shift design based on instantaneous CSI.
Even though some research works have recently proposed the design of RISs based on long-term CSI, to the best of our knowledge, no previous work has comprehensively and analytically characterized and compared the achievable performance of RIS-assisted wireless networks based on short-term and long-term CSI.
Based on these considerations, the aim of this paper is to study the performance of RIS-assisted systems by considering optimization criteria based on long-term channel statistics and short-term channel statistics, i.e., based on prefect CSI.
We analyze the coverage probability and the ergodic channel capacity under long-term and short-term phase shift design criteria.
Specifically, we derive closed-form expressions of the coverage probability and the ergodic channel rate for both optimization criteria, highlighting findings that have not been reported before. Several insights are also unveiled, e.g., that the long-term phase shift design offers performance similar to that of the short-term optimal phase shift design as the number of RIS elements increases.
\textit{Notation}: Upper-bold and lower-bold letters are used to denote matrices and vectors, respectively. The identity matrix of size $M \times M$ is denoted by $\mathbf{I}_M$. The Hermitian and regular transpose are denoted $(\cdot)^H$ and $(\cdot)^T$, respectively. $\mathcal{CN}(\cdot, \cdot)$ denotes a circularly symmetric Gaussian distribution. The expectation of a random variable is denoted by $\mathbb{E}\{ \cdot \}$. The upper incomplete Gamma function is denoted by $\Gamma(m,n) = \int_{n}^{\infty} t^{m-1} \mathrm{exp}(-t) dt$ and $\Gamma(x) = \int_{0}^{\infty} t^{x-1} \mathrm{exp}(-t) dt$ denotes the Gamma function.
\vspace*{-0.5cm}
\section{System Model}
\label{sec:SysModel}
\vspace*{-0.5cm}
We consider an RIS-assisted communication system where a single-antenna source communicates with a single-antenna destination. A frequency-flat block-fading channel model is assumed in each coherence interval.
The RIS comprises $M$ reflecting elements. The RIS phase shift matrix $\pmb{\Phi} \in \mathbb{C}^{M \times M}$ is defined as $\pmb{\Phi} = \mathrm{diag}\big([e^{j\theta_{1}}, \ldots, e^{j\theta_{M}}]^T \big)$, where $\theta_{m} \in [-\pi, \pi]$ is the phase shift of the $m$-th reflecting element.
\vspace*{-0.2cm}
\subsection{Channel Model}
\vspace*{-0.2cm}
The channel of the direct link between the source and the destination is $h_{\mathrm{sd}} \in \mathbb{C}$.
The indirect link from the source to the destination comprises the channel between the source and the RIS, which is denoted by $\mathbf{h}_{\mathrm{sr}} \in \mathbb{C}^M$, and the channel between the RIS and the destination, which is denoted by $\mathbf{h}_{\mathrm{rd}} \in \mathbb{C}^M$.
Specifically, the channels are defined as
$h_{\mathrm{sd}} = \sqrt{\beta_{\mathrm{sd}}} g_{\mathrm{sd}}$, $\mathbf{h}_{\mathrm{sr}} = \bar{\mathbf{h}}_{\mathrm{sr}} + \mathbf{g}_{\mathrm{sr}}$, and $\mathbf{h}_{\mathrm{rd}} = \bar{\mathbf{h}}_{\mathrm{rd}} + \mathbf{g}_{\mathrm{rd}}$,
where $g_{\mathrm{sd}} \sim \mathcal{CN}(0,1)$, $\mathbf{g}_{\mathrm{sr}} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_M \beta_{\mathrm{sr}} /(K_{\mathrm{sr}}+1))$, and $\mathbf{g}_{\mathrm{rd}} \sim \mathcal{CN}(\mathbf{0}, \mathbf{I}_M \beta_{\mathrm{rd}} /(K_{\mathrm{rd}}+1))$ are the small-scale fading contributions; $\beta_{\mathrm{sd}},$ $\beta_{\mathrm{sr}},$ and $\beta_{\mathrm{rd}}$ are the large-scale fading coefficients; and $K_{\mathrm{sr}} \geq 0 $ and $K_{\mathrm{rd}} \geq 0$ are the Rician factors. Based on \cite{massivemimobook}, the line-of-sight (LoS) channel vectors $\bar{\mathbf{h}}_\alpha \in \mathbb{C}^{M} , \alpha \in \{\mathrm{sr}, \mathrm{rd} \},$ are given as follows
\vspace*{-0.2cm}
\begin{equation} \label{eq:barhsr}
\bar{\mathbf{h}}_\alpha = \sqrt{\frac{K_\alpha \beta_\alpha }{K_{\alpha}+1}} \left[e^{j \mathbf{k}(\psi_{\alpha}, \phi_{\alpha} )^T \mathbf{u}_1}, \ldots, e^{j \mathbf{k}(\psi_{\alpha}, \phi_{\alpha} )^T \mathbf{u}_M} \right]^T,
\vspace*{-0.1cm}
\end{equation}
where $\psi_{\alpha}$ and $\phi_{\alpha}$ are the azimuth and elevation angles of departure (AoD) under which the RIS views the source and the destination for $\alpha = {\rm{sr}}$ and $\alpha = {\rm{rd}}$, respectively. By assuming that the RIS is a planar surface, the wave vectors in \eqref{eq:barhsr}, $\mathbf{k}(\psi_{\alpha}, \phi_\alpha)$, are
\vspace*{-0.2cm}
\begin{equation} \label{eq:kvec}
\mathbf{k}(\psi_{\alpha}, \phi_\alpha) = \frac{2\pi}{\lambda} \left[ z_1, \, z_2, \,\sin(\psi_{\alpha}) \right]^T,
\vspace*{-0.2cm}
\end{equation}
with $z_1 = \cos(\psi_{\alpha})\cos(\phi_\alpha)$ and $z_2 = \sin(\psi_{\alpha})\cos(\phi_\alpha)$, and $\lambda$ is the signal wavelength.
Also, the vector $\mathbf{u}_m$ in \eqref{eq:barhsr} is defined as $\mathbf{u}_m =[0, \, \mathrm{mod}(m-1,M_H)d_r, \, \lfloor (m-1)/M_H \rfloor d_r ]^T$, where $\mathrm{mod}$ is the modulus operation, $\lfloor \cdot \rfloor$ is the floor function, $M_H$ is the number of RIS elements in each row of the planar surface, and $d_r$ is the element spacing at the RIS.
To facilitate the analysis in the next section, Theorem~\ref{Theorem:ChannelStatistics} gives the moments of the RIS-assisted (cascaded) channel for an arbitrary phase shift matrix $\pmb{\Phi}$.
\vspace*{-0.2cm}
\begin{theorem} \label{Theorem:ChannelStatistics}
The indirect link from the source to the destination through the RIS has the following statistical moments
\vspace*{-0.4cm}
\begin{align} \label{eq:Expecs}
\mathbb{E} \{ |\mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi} \mathbf{h}_{\mathrm{rd}}|^2 \} & = \delta , \;\;\; \mathbb{E} \{ |\mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi} \mathbf{h}_{\mathrm{rd}}|^4 \} = \delta^2 + a,
\vspace*{-0.2cm}
\end{align}
where $\delta = |\bar{\alpha}|^2 + M\mu \widetilde{K}$, $\bar{\alpha} = \bar{\mathbf{h}}_{\mathrm{sr}}^H \pmb{\Phi} \bar{\mathbf{h}}_{\mathrm{rd}}$, $\mu = \beta_{\mathrm{sr}}\beta_{\mathrm{rd}}/\omega$, $\omega = (K_{\mathrm{sr}}+1)(K_{\mathrm{rd}}+1)$, $\widetilde{K} = K_{\mathrm{sr}}+ K_{\mathrm{rd}}+1$, $\widehat{K} = 1+ 2K_{\mathrm{sr}} + 2K_{\mathrm{rd}}$, and $a =2M |\bar{\alpha}|^2 \mu \widetilde{K} + M^2 \mu^2 \widetilde{K}^2 + 2M \mu^2 \widehat{K} +8 |\bar{\alpha}|^2 \mu$.
\vspace*{-0.2cm}
\end{theorem}
\begin{proof}
The proof follows by using known results on the moments of Rician random variables. It is omitted due to space limitations.
\end{proof}
The second and fourth moments demonstrate that the array gain due to the presence of an RIS is
proportional to $M$ and $M^2$, respectively.
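The second-moment expression in Theorem~\ref{Theorem:ChannelStatistics} is easily validated by Monte Carlo simulation; the following Python/\texttt{numpy} sketch (with illustrative parameter values and an arbitrary fixed phase shift matrix) reproduces $\delta$ to within the simulation error:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K_sr, K_rd, b_sr, b_rd = 32, 4.0, 2.0, 1e-3, 1e-3
n_trials = 50_000

# Arbitrary fixed LoS responses and phase shifts for the experiment.
hbar_sr = np.sqrt(K_sr*b_sr/(K_sr+1)) * np.exp(1j*rng.uniform(0, 2*np.pi, M))
hbar_rd = np.sqrt(K_rd*b_rd/(K_rd+1)) * np.exp(1j*rng.uniform(0, 2*np.pi, M))
phi = np.exp(1j*rng.uniform(-np.pi, np.pi, M))        # diagonal of Phi

def rician(hbar, b, K):
    g = rng.standard_normal((n_trials, M)) + 1j*rng.standard_normal((n_trials, M))
    return hbar + np.sqrt(b/(K+1)/2) * g

h_sr, h_rd = rician(hbar_sr, b_sr, K_sr), rician(hbar_rd, b_rd, K_rd)
cascade = np.sum(np.conj(h_sr) * phi * h_rd, axis=1)  # h_sr^H Phi h_rd

abar = np.sum(np.conj(hbar_sr) * phi * hbar_rd)
mu = b_sr*b_rd/((K_sr+1)*(K_rd+1))
delta = np.abs(abar)**2 + M*mu*(K_sr + K_rd + 1)
print(np.mean(np.abs(cascade)**2), delta)             # agree to ~1%
\end{verbatim}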
\vspace*{-0.2cm}
\subsection{Phase Shift Designs and Channel Rate}
\vspace*{-0.2cm}
If the source transmits a data symbol $s$ with $\mathbb{E}\{ |s|^2 \} = 1$, the received signal $y \in \mathbb{C}$ at the destination is
\vspace*{-0.2cm}
\begin{equation} \label{eq:ReceiveSig}
y = \sqrt{\rho} \left( h_{\mathrm{sd}} + \mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi} \mathbf{h}_{\mathrm{rd}} \right) s + n,
\vspace*{-0.2cm}
\end{equation}
where $\rho$ is the transmit power and $n \sim \mathcal{CN}(0,\sigma^2)$ is the additive noise. The phase shift matrix $\pmb{\Phi}$ is usually optimized based on the CSI. In this paper, we focus our attention on two design criteria.
\vspace*{-0.2cm}
\begin{itemize}[leftmargin=*]
\item[$i)$] \textit{Short-term phase shift design}: The phase shifts of the RIS elements are optimized based on perfect CSI, which encompasses large-scale and small-scale fading statistics.
\vspace*{-0.2cm}
\item[$ii)$] \textit{Long-term phase shift design}: The phase shifts of the RIS elements are optimized based on statistical CSI. In particular, the optimal phase shift matrix is obtained by maximizing the average SNR at the destination.
\vspace*{-0.2cm}
\end{itemize}
The short-term phase shift design corresponds to the best achievable performance.
Let us assume that we are interested in maximizing the received signal strength, i.e., $\rho | h_{\mathrm{sd}} + \mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi} \mathbf{h}_{\mathrm{rd}} |^2$. Then, the short-term optimal phase shift of the $m$-th RIS element, which is denoted by $\theta_{m}^{\mathsf{opt}, \mathsf{st}}$, is \cite{wu2019intelligent,van2021outage}
\vspace*{-0.2cm}
\begin{equation} \label{eq:Phase1}
\theta_{m}^{\mathsf{opt}, \mathsf{st}} = \arg(h_{\mathrm{sd}}) - \arg([\mathbf{h}_{\mathrm{sr}}^\ast]_m) - \arg([\mathbf{h}_{\mathrm{rd}}]_m), \forall m,
\vspace*{-0.1cm}
\end{equation}
where $[ \mathbf{h}_{\mathrm{sr}} ]_m$ and $[\mathbf{h}_{\mathrm{rd}}]_m$ are the $m$-th element of $\mathbf{h}_{\mathrm{sr}} $ and $\mathbf{h}_{\mathrm{rd}}$, respectively.
The optimal phase shift of each RIS element in \eqref{eq:Phase1} needs to be updated in every channel coherence interval.
By contrast, the long-term phase shift matrix can be applied for a longer period of time, which spans many coherence intervals.
Conditioned on the phase shift matrix and the CSI, the channel rate is formulated as
\vspace*{-0.2cm}
\begin{equation} \label{eq:Rran}
R = \log_2 \left(1 + \gamma \right), \mbox{[b/s/Hz]},
\vspace*{-0.1cm}
\end{equation}
where the signal-to-noise ratio (SNR) value $\gamma$ is
\vspace*{-0.2cm}
\begin{subnumcases}
{\gamma =}
\nu |h_{\mathrm{sd}} + \mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi}^{\mathsf{opt}, \mathrm{lt}} \mathbf{h}_{\mathrm{rd}}|^2, & \mbox{Long-term}, \label{eq:SNRLT}\\
\nu \left( | h_{\mathrm{sd}} | + \sum\limits_{m = 1}^M \omega_m \right)^2,& \mbox{Short-term}, \label{eq:SNRST}
\end{subnumcases}
where $\nu = \rho/\sigma^2$ is the transmit SNR and $\omega_m = | [ \mathbf{h}_{\mathrm{sr}} ]_m | | [\mathbf{h}_{\mathrm{rd}}]_m |, \forall m$. The SNR value $\gamma$ that corresponds to the short-term phase shift design is obtained by using the optimal phase shift design in \eqref{eq:Phase1}. As far as the long-term phase shift design is concerned, the optimal phase shift matrix $\pmb{\Phi}^{\mathsf{opt}, \mathrm{lt}} = \mathrm{diag}\big([e^{j\theta_{1}^{\mathsf{opt}, \mathsf{lt}}}, \ldots, e^{j\theta_{M}^{\mathsf{opt}, \mathsf{lt}}}]^T \big)$ is obtained in
Lemma~\ref{lemma:ChanStaDesign}.
\vspace*{-0.2cm}
\begin{lemma} \label{lemma:ChanStaDesign}
If the $m$-th phase shift of the RIS is set as follows
\vspace*{-0.2cm}
\begin{equation} \label{eq:PhaseLoS}
\theta_{m}^{\mathsf{opt}, \mathsf{lt}} = -\arg([\bar{\mathbf{h}}_{\mathrm{sr}}^\ast]_m) - \arg([\bar{\mathbf{h}}_{\mathrm{rd}}]_m) ,
\vspace*{-0.2cm}
\end{equation}
then the average received SNR, $\mathbb{E}\{\gamma \}$ with $\gamma$ given in \eqref{eq:SNRLT} is maximized.
\vspace*{-0.2cm}
\end{lemma}
\begin{proof}
The proof follows by maximizing the cost function $\mathbb{E}\{ \gamma\}$ subject to the phase shift constraints $\theta_m \in [-\pi, \pi], \forall m$. The detailed proof is omitted due to space limitations.
\end{proof}
The phase shift design in \eqref{eq:PhaseLoS} depends only on the LoS components of the channels.
The short-term and long-term phase shift designs are both aimed at boosting the strength of the received signal. The short-term phase shift design provides, in general, an upper bound for the long-term phase shift design. In analytical terms, in fact, we have the following
\vspace*{-0.2cm}
\begin{equation} \label{eq:Ratebound}
\begin{split}
& \log_2(1 + \nu |h_{\mathrm{sd}} + \mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi}^{\mathsf{opt}, \mathrm{lt}} \mathbf{h}_{\mathrm{rd}}|^2) \stackrel{(a)}{\leq} \underset{\{ \theta_m \}}{\max}\, \log_2 ( 1 + \gamma) \\
&\stackrel{(b)}{=} \log_2(1 + \nu |h_{\mathrm{sd}} + \mathbf{h}_{\mathrm{sr}}^H \pmb{\Phi}^{\mathsf{opt}, \mathsf{st}} \mathbf{h}_{\mathrm{rd}}|^2),
\end{split}
\vspace*{-0.2cm}
\end{equation}
where $\pmb{\Phi}^{\mathsf{opt}, \mathsf{st}} = \mathrm{diag}\big([e^{j\theta_{1}^{\mathsf{opt}, \mathsf{st}}}, \ldots, e^{j\theta_{M}^{\mathsf{opt}, \mathsf{st}}}]^T \big)$ is the short-term phase shift matrix with $\theta_{m}^{\mathsf{opt}, \mathrm{st}}, \forall m,$ defined in \eqref{eq:Phase1}.
In particular, $(a)$ is obtained because the phase shift solution that maximizes the average received SNR is a feasible point of the capacity maximization problem as a function of the instantaneous CSI, and $(b)$ is obtained because the short-term phase shift design in \eqref{eq:Phase1} is the optimal solution.
\vspace*{-0.3cm}
\section{Coverage Probability and Ergodic Channel Rate}
\vspace*{-0.3cm}
In this section, we introduce analytical frameworks for the coverage probability and the ergodic rate for the short-term and long-term phase shift designs.
\begin{figure*}[t]
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[trim=0.8cm 0cm 1.3cm 0.6cm, clip=true, width=2.3in]{FigConfPcovMonteCarlo} \vspace*{-0.4cm}\\
(a)
\vspace*{-0.3cm}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[trim=0.8cm 0cm 1.3cm 0.5cm, clip=true, width=2.3in]{FigConfPcovAvg} \vspace*{-0.4cm} \\
(b)
\vspace*{-0.3cm}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[trim=0.8cm 0cm 1.3cm 0.5cm, clip=true, width=2.3in]{FigConfRateAvg} \vspace*{-0.4cm}\\
(c)
\vspace*{-0.3cm}
\end{minipage}
\caption{The system performance: $(a)$ Coverage probability v.s. the target rate [b/s/Hz] with $M=100$ and the destination is located at $(180, 15, 15)$~m; $(b)$ Average coverage probability (defined as $\mathbb{E} \{ \mathrm{Pr}(R > \xi) \}$, which is averaged over different realizations of the destination's location) v.s. the number of RIS elements; and $(c)$ Ergodic rate v.s. the number of RIS elements.} \label{Fig1}
\vspace*{-0.3cm}
\end{figure*}
\vspace*{-0.2cm}
\subsection{Coverage Probability} \label{Sub:CovP}
\vspace*{-0.2cm}
From the channel rate in \eqref{eq:Rran}, we define the coverage probability for a given target rate $\xi$ [b/s/Hz] as
$ P_{\mathsf{cov}} = 1 - \mathrm{Pr}(R < \xi),$
where $\mathrm{Pr}(\cdot)$ denotes the probability of an event. By denoting $z = 2^\xi -1$, the coverage probability can be rewritten as
\vspace*{-0.2cm}
\begin{equation} \label{eq:PcovRan}
P_{\mathsf{cov}} (z) = 1 - \mathsf{Pr}(\gamma < z).
\vspace*{-0.2cm}
\end{equation}
We utilize the moment-matching method to compute the coverage probability in \eqref{eq:PcovRan}.
\begin{theorem}~\label{Theorem:CovProbRan}
The coverage probability in \eqref{eq:PcovRan} can be formulated, in a closed-form expression, as
\vspace*{-0.2cm}
\begin{equation} \label{eq:Pcovlt}
P_{\mathsf{cov}} (z) \approx \Gamma\left( k, z/w \right)/\Gamma (k),
\vspace*{-0.2cm}
\end{equation}
where the shape parameter $k$ and the scale parameter $w$ depend on the criterion for optimizing the phase shifts of the RIS elements. If the long-term phase shift design is utilized, we have
\vspace*{-0.2cm}
\begin{align}
k
&= \frac{ \left(\beta_{\mathrm{sd}} +
o_{1} \beta_{\mathrm{sr}} \beta_{\mathrm{rd}}
\right)^2 }{\beta_{\mathrm{sd}}^2 +
o_{2} \beta_{\mathrm{sr}}^2 \beta_{\mathrm{rd}}^2 + 2 \beta_{\mathrm{sd}} o_{1} \beta_{\mathrm{sr}} \beta_{\mathrm{rd}}
},
\\
w
&=
\frac{
\nu \left( \beta_{\mathrm{sd}}^2 +
o_{2} \beta_{\mathrm{sr}}^2 \beta_{\mathrm{rd}}^2 + 2 \beta_{\mathrm{sd}} o_{1} \beta_{\mathrm{sr}} \beta_{\mathrm{rd}} \right)
}{
\beta_{\mathrm{sd}} +
o_{1} \beta_{\mathrm{sr}} \beta_{\mathrm{rd}}
},
\vspace*{-0.2cm}
\end{align}
where $\nu = \rho /\sigma^2$ is the ratio between the transmit power and noise power, and the scalars $o_1$ and $o_2$ are defined as
\vspace*{-0.2cm}
\begin{align}
o_{1} &= \omega^{-1}({{K_{\mathrm{sr}}}{K_{\mathrm{rd}}} \eta + \widetilde{K} M}), \label{eq:o1}\\
o_2 &= \omega^{-2}(2\eta K_{\mathrm{sr}}K_{\mathrm{rd}}( {M\widetilde{K} + 4} ) + M^2\widetilde{K}^2 + 2M( {2\widetilde K - 1} )), \label{eq:o2}
\vspace*{-0.2cm}
\end{align}
with $\eta = | \bar{\mathbf{h}}_{\mathrm{sr}}^H \mathbf{\Phi} {\bar{\mathbf{h}} }_{\mathrm{rd}} |^2$.
If, on the other hand, the short-term phase shift design is considered, the shape and scale parameters are
\vspace*{-0.2cm}
\begin{align}
k &= \frac{k_c ( k_c + 1 ) }{2 ( 2k_c + 3 )}, w = 2 \nu w_c^2\left( 2k_c + 3 \right),
\vspace*{-0.2cm}
\end{align}
where $k_c = \frac{( {c_1} + {c_2}\sqrt {{\beta_{\mathrm{sr}}}{\beta_{\mathrm{rd}}}} )^2}{c_3 + c_4\beta_{\mathrm{sr}}\beta_{\mathrm{rd}}}$ and $w_c = \frac{c_3 + c_4\beta_{\mathrm{sr}}\beta_{\mathrm{rd}}}{{c_1} + {c_2}\sqrt {{\beta_{\mathrm{sr}}}{\beta_{\mathrm{rd}}}}} $ with
\vspace*{-0.2cm}
\begin{align}
& c_1 = {0.5\sqrt {\pi\beta_{\mathrm{sd}}} }, c_2 = 0.25 M \pi t_{\mathrm{sr}} t_{\mathrm{rd}} \omega^{-0.5} , \\
&
c_3 = \frac{4 - \pi }{4}\beta_{\mathrm{sd}}, c_4 = M
- \frac{M{{\pi ^2}}}{{16}} t_{\mathrm{sr}}^2 t_{\mathrm{rd}}^2 \omega^{-1},
\vspace*{-0.2cm}
\end{align}
and $t_{\mathrm{sr}} = {_1{F_1}\left( { - 0.5,1, - {K_{\mathrm{sr}}}} \right)}$, $t_{\mathrm{rd}} = {_1{F_1}\left( { - 0.5,1, - {K_{\mathrm{rd}}}} \right)}$ with $_1F_1(\cdot, \cdot, \cdot)$ being the confluent hypergeometric function of the first kind.
\end{theorem}
\begin{proof}
The proof is based on computing the mean and the variance of the SNR expressions in \eqref{eq:SNRLT} and \eqref{eq:SNRST}. To this end, the closed-form expressions in Theorem~\ref{Theorem:ChannelStatistics} are used. Then, the obtained mean and variance are matched to those of a Gamma distribution. The details of the proof are omitted due to space limitations.
\vspace*{-0.1cm}
\end{proof}
The coverage probability in \eqref{eq:Pcovlt} offers a simple closed-form expression for evaluating the performance of RIS-assisted communications without the need of resorting to Monte Carlo simulations.
Since the number of RIS elements is usually large in practice, the obtained analytical expressions can be further simplified in this regime.
If, for example, we ignore the nondominant terms, the shape parameter tends to $k \rightarrow ({{K_{\mathrm{sr}}}{K_{\mathrm{rd}}} \eta + \widetilde{K} M})^2 / ( 2\eta K_{\mathrm{sr}}K_{\mathrm{rd}} {M\widetilde{K}} + M^2\widetilde{K}^2)$ for the long-term phase shift design and to $k \rightarrow \left( {c_1} + {c_2}\sqrt {{\beta_{\mathrm{sr}}}{\beta_{\mathrm{rd}}}} \right)^2 /(4c_3 + 4c_4{\beta_{\mathrm{sr}}}{\beta_{\mathrm{rd}}})$ for the short-term phase shift design.
If $M \rightarrow \infty$, in addition, we obtain $k \rightarrow 1$ and $k \rightarrow \infty $ for the long-term and short-term phase shift designs, respectively.
Furthermore, the scale parameter tends to $w \rightarrow \nu^2 o_2 \beta_{\mathrm{sr}} \beta_{\mathrm{rd}} / o_1$ and to $w \rightarrow 4\nu (c_3 + c_4 \beta_{\mathrm{sr}} \beta_{\mathrm{rd}} )$ for the long-term and short-term phase shift designs, respectively.
If $M \rightarrow \infty$, we obtain $w \rightarrow \infty$. By rewriting the Gamma function in \eqref{eq:Pcovlt} in a series expression and ignoring the high-order terms, the coverage probability in \eqref{eq:Pcovlt} is simplified to
$P_{\mathsf{cov}} \rightarrow 1 - \frac{(z/w)^k}{k^2 \Gamma(k)}$ \cite{9195523}.
Based on the obtained values of $k$ and $w$, the coverage probability tends to $1$ as $M \rightarrow \infty$ for both the short-term and the long-term phase shift designs; in other words, an RIS with a sufficiently large number of reconfigurable elements is capable of offering good coverage.
It is worth mentioning that the coverage probability in \eqref{eq:Pcovlt} can be applied to arbitrary phase shift designs, including the random and equal phase shift designs reported in \cite{van2021outage}.
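For completeness, the closed form in \eqref{eq:Pcovlt} is straightforward to evaluate numerically; the following Python/\texttt{scipy} sketch implements the long-term case of Theorem~\ref{Theorem:CovProbRan} (all parameter values are illustrative only, and $\eta$ is computed for the long-term optimal phase shifts of Lemma~\ref{lemma:ChanStaDesign}):
\begin{verbatim}
from scipy.special import gammaincc   # regularized upper incomplete Gamma

def coverage_lt(z, nu, b_sd, b_sr, b_rd, K_sr, K_rd, M):
    omega = (K_sr + 1.0)*(K_rd + 1.0)
    Kt = K_sr + K_rd + 1.0
    eta = M**2 * K_sr*K_rd*b_sr*b_rd / omega   # |hbar_sr^H Phi hbar_rd|^2
    o1 = (K_sr*K_rd*eta + Kt*M) / omega
    o2 = (2*eta*K_sr*K_rd*(M*Kt + 4) + M**2*Kt**2 + 2*M*(2*Kt - 1)) / omega**2
    num = b_sd**2 + o2*b_sr**2*b_rd**2 + 2*b_sd*o1*b_sr*b_rd
    den = b_sd + o1*b_sr*b_rd
    k, w = den**2/num, nu*num/den              # shape and scale parameters
    return gammaincc(k, z/w)                   # = Gamma(k, z/w)/Gamma(k)

xi = 2.0                                       # target rate [b/s/Hz]
print(coverage_lt(2**xi - 1, nu=1e9, b_sd=1e-8, b_sr=1e-5, b_rd=1e-5,
                  K_sr=5.0, K_rd=2.0, M=100))
\end{verbatim}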
\vspace*{-0.2cm}
\subsection{Ergodic Channel Rate}
\vspace*{-0.2cm}
The channel rate in \eqref{eq:Rran} depends on the small-scale and large-scale fading coefficients. In this section, we study the ergodic channel rate over a long time period by averaging out the small-scale fading as follows
\vspace*{-0.2cm}
\begin{equation} \label{eq:ErgodicRate1}
\bar{R} = \mathbb{E}\{ \log_2 ( 1 + \gamma)\} , \mbox{ [b/s/Hz]}.
\vspace*{-0.2cm}
\end{equation}
A closed-form expression of \eqref{eq:ErgodicRate1} is given in Lemma~\ref{lemma:ErgodicRate}, which still relies on a moment-matching approach, according to which the received SNR is matched to a Gamma distribution.
\vspace*{-0.4cm}
\begin{lemma} \label{lemma:ErgodicRate}
The ergodic channel rate in \eqref{eq:ErgodicRate1} can be formulated in closed form as follows:
\vspace*{-0.2cm}
\begin{align} \label{eq:ErgodicRate}
\bar{R} = \frac{1}{{\Gamma \left( {{k}} \right)\ln \left( 2 \right)}}G_{2,3}^{3,1}\left( {\left. {\frac{1}{{{w}}}} \right|\begin{array}{*{20}{c}}
{0,1}\\
{0,0,{k}}
\end{array}} \right),
\vspace*{-0.2cm}
\end{align}
where $G_{p,q}^{m,n}\Big( { z \Big|\begin{array}{*{20}{c}}
{a_1,\ldots, a_p}\\
{b_1,\ldots,b_q}
\end{array}} \Big)$ is the Meijer-G function, and, similar to Theorem~\ref{Theorem:CovProbRan}, $k$ and $w$ are the shape parameter and the scale parameter of the approximating Gamma distribution, respectively.
\end{lemma}
\begin{proof}
The proof follows along the same lines as the proof in \cite{9195523}, with the exception that the channel model considered in this paper is different.
\end{proof}
Differently from \cite{9195523}, Lemma~\ref{lemma:ErgodicRate} can be applied to all phase shift designs, which include the short-term and the long-term phase shift designs of interest in this paper.
The analytical expressions for $k$ and $w$ that correspond to the latter two phase designs are the same as for Theorem~\ref{Theorem:CovProbRan}.
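As a numerical illustration, the Meijer-G expression in \eqref{eq:ErgodicRate} can be evaluated directly with the arbitrary-precision library mpmath. The Python sketch below uses placeholder values of $k$ and $w$ and verifies the result against a Monte Carlo estimate of \eqref{eq:ErgodicRate1} for a Gamma-distributed SNR.
\begin{verbatim}
import numpy as np
import mpmath as mp

def ergodic_rate(k, w):
    # R = G^{3,1}_{2,3}( 1/w | (0; 1), (0, 0, k; ) ) / (Gamma(k) ln 2)
    g = mp.meijerg([[0], [1]], [[0, 0, k], []], 1 / mp.mpf(w))
    return g / (mp.gamma(k) * mp.log(2))

k, w = 2.0, 3.0  # placeholder Gamma shape and scale parameters
rng = np.random.default_rng(0)
snr = rng.gamma(shape=k, scale=w, size=10**6)
print(float(ergodic_rate(k, w)), np.mean(np.log2(1.0 + snr)))
\end{verbatim}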
\vspace*{-0.2cm}
\section{Numerical Results}
\vspace*{-0.2cm}
In this section, we validate the obtained analytical frameworks with the aid of Monte Carlo simulations.
We consider an RIS-assisted wireless network
where the source is located at the origin and the center-point of the RIS is located at $(27, 25, 25)$~m.
For the direct link, the channel gain $\beta_{\mathrm{sd}}$ [dB] is $\beta_{\mathrm{sd}} = -33.1 - 3.50 \log_{10}(d_{\mathrm{sd}}/1 \mbox{m})$, where $d_{\mathrm{sd}}$ is the distance between the source and the destination. For the indirect link, the channel gains $\beta_\alpha$ [dB] are $\beta_\alpha = -25.5 - 2.4 \log_{10}(d_\alpha / 1 \mbox{m})$, where $d_\alpha$ is the distance between the transmitter and the receiver.
The Rician factors are equal to $K_{\alpha} = 10^{1.3 - 0.003 d_\alpha}$. The transmit power is $20$~mW and the system bandwidth is $10$~MHz. The carrier frequency is $1.8$~GHz and the noise power is $-94$~dBm. The following phase shift designs are considered for comparison: $i)$ the short-term phase shift design in \eqref{eq:Phase1}; $ii)$ the long-term phase shift design in \eqref{eq:PhaseLoS}; $iii)$ the equal phase shift design, where the phase shifts are all set equal to $\pi/4$; and $iv)$ the random phase shift design, where the phase shifts take arbitrary values in the range $[-\pi, \pi]$.
In Fig.~\ref{Fig1}(a), we compare the closed-form expression of the coverage probability against Monte Carlo simulations. The good overlap between the analytical results and the numerical simulations confirms the accuracy of \eqref{eq:Pcovlt}.
In Fig.~\ref{Fig1}(b), we utilize the analytical framework in \eqref{eq:Pcovlt} to evaluate the coverage probability as a function of the number of RIS elements and for different designs of the phase shifts.
The phase shift designs based on short-term and long-term CSI offer significant gains compared to the equal and random phase shift designs. In addition, the gap between the short-term and long-term phase shift designs decreases as the number of RIS elements increases.
Finally, Fig.~\ref{Fig1}(c) displays the ergodic rate [b/s/Hz] in \eqref{eq:ErgodicRate}.
We observe that the deployment of an RIS results in a substantial increase of the ergodic rate, as opposed to surfaces that operate as random scatterers and are not smart and reconfigurable.
Notably, the long-term phase shift design provides an ergodic rate that is close to the short-term phase shift design, and approaches it if the number of RIS elements is sufficiently large.
\vspace*{-0.2cm}
\section{Conclusion}
\vspace*{-0.2cm}
This paper has investigated the coverage probability and the ergodic rate of an RIS-assisted link for different phase shift designs depending on the amount of channel information that is exploited for optimizing the RIS.
It is shown that a long-term phase shift design that depends only on long-term CSI offers a suboptimal solution whose performance loss, compared with the optimal phase shift design based on perfect CSI, decreases as the number of RIS elements increases.
Generalizations of this work include the analysis of sources and destinations equipped with multiple antennas.
\vfill\pagebreak
\bibliographystyle{IEEEtran}
\section{On the numerical simulation of classical spin dynamics}
We have carried out numerical simulations of classical spin dynamics using the method introduced in \cite{Bagchi2013}. The latter
proves superior to more standard Euler or Runge-Kutta integration schemes (as e.g. employed in \cite{Li2019}),
owing to \textit{exact} conservation of both total energy and spin magnitude $|\vec{S}_j|=1$ at each lattice site.
The method employs the fact that any equation of motion of type
\begin{equation}
\partial_t \vec{S}_j = \vec{S}_j \times \vec{B}_j,
\end{equation}
with the vector $\vec{B}_j$ depending in general on the spins $\vec{S}_{j+1}, \vec{S}_{j-1}, \ldots , \vec{S}_{j+n},\vec{S}_{j-n}$
can be analytically integrated with the aid of Rodrigues' rotation formula, yielding
\begin{equation}
\vec{S}_j (t + \Delta t) = [\vec{S} \cos \phi + \vec{S} \times \hat{B} \sin \phi + (\vec{S} \cdot \hat{B}) (1- \cos \phi) \hat{B}]_j(t)
\end{equation}
where $\phi = |\vec{B}_j| \Delta t$ and $\hat{B} = \vec{B}_j/|\vec{B}_j|$. By evolving first $\vec{S}_j,\vec{S}_{j+n}, \ldots, \vec{S}_{j+2n}$ and then $\vec{S}_{j+1}, \vec{S}_{j+1+n},\ldots$ and so on, energy and $|\vec{S}_j|$ are conserved to all orders in
$\Delta t$, while all other local conserved quantities fluctuate within a range of order $(\Delta t)^3$ even on time-scales of
the order $t \sim 3000$, see Fig. \ref{fig:error1} and \ref{fig:error2}. In order to simulate dynamics at infinite temperature,
we have computed an average over $5 \times 10^5$--$10^6$ random initial states, which also reduces the error at large times, see Fig. \ref{fig:error3}. Moreover, we stress that a large number of states is of fundamental importance in order to resolve the logarithmic corrections at large times. In order to further reduce the noise, we have additionally performed
the ergodic time-average $t^{-1} \int_0^t {\rm d} t' $ of the spin-spin correlations $C_j(t)$. \\
We stress that our numerical results do not differ from the ones reported in \cite{Bagchi2013}, where, however, the presence of logarithmic corrections to diffusion was missed in the analysis of the data. We instead believe that the numerical data at infinite temperature in \cite{Li2019} are incorrect, as they show normal diffusion, probably due to the Runge-Kutta integration scheme or a lack of proper averaging over initial conditions. The results in \cite{Li2019} for slightly anomalous diffusion at lower temperature are instead in agreement with the presence of logarithmic corrections.
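For completeness, a minimal Python sketch of the integration scheme is given below for the nearest-neighbour chain, where $\vec{B}_j = \vec{S}_{j-1} + \vec{S}_{j+1}$ and the sublattice splitting reduces to an even/odd update; the chain length, step size, and number of steps are placeholders.
\begin{verbatim}
import numpy as np

def rodrigues_rotate(S, B, dt):
    # rotate the spins S about their local fields B by the angle phi = |B| dt
    Bnorm = np.linalg.norm(B, axis=-1, keepdims=True) + 1e-15
    Bhat, phi = B / Bnorm, Bnorm * dt
    return (S * np.cos(phi) + np.cross(S, Bhat) * np.sin(phi)
            + Bhat * np.sum(S * Bhat, axis=-1, keepdims=True) * (1.0 - np.cos(phi)))

def step(S, dt):
    # even/odd sublattice sweep; during each sweep the fields of the updated
    # sublattice depend only on the (momentarily frozen) other sublattice
    for par in (0, 1):
        B = np.roll(S, 1, axis=0) + np.roll(S, -1, axis=0)  # periodic boundaries
        S[par::2] = rodrigues_rotate(S[par::2], B[par::2], dt)
    return S

L, dt = 1000, 0.025
S = np.random.default_rng(0).normal(size=(L, 3))
S /= np.linalg.norm(S, axis=-1, keepdims=True)  # random infinite-temperature state
for _ in range(1000):
    S = step(S, dt)
\end{verbatim}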
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\columnwidth]{conservationS.pdf}
\caption{Time-evolution of the approximately conserved magnetizations in the classical (non-integrable)
Heisenberg chain with integration step-size $\Delta t= 0.025$ and $L=1000$, starting from a single random initial state.}
\label{fig:error1}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\columnwidth]{error.pdf}
\caption{Total magnetization fluctuations $|\sum_j S^z_j(t) - \sum_j S^z_j(0)|/L$ as a function of time for the classical (non-integrable) Heisenberg chain for two different values of $\Delta t$ and $L=1000$, starting from a single random initial state.}
\label{fig:error2}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\columnwidth]{errorAverage.pdf}
\caption{Same as in Fig. \ref{fig:error2} with $\Delta t = 0.05$ but after averaging over $10^6$ random initial states in order to describe an infinite temperature ensemble.}
\label{fig:error3}
\end{figure}
\section{Spin transport in the hierarchy of the quantum Heisenberg model}
We shall employ the toolbox of the generalized hydrodynamics \cite{PhysRevX.6.041065,PhysRevLett.117.207201} to examine the spin
diffusion constant in the quantum Heisenberg hierarchy.
For definiteness we shall specialize here to the fundamental spin chains ${\rm S}=1/2$, noticing that
integrable spin-${\rm S}$ chains can be essentially treated along the same lines.
The higher Hamiltonian densities can be obtained by the iterative application of the boost operator $\hat{B}$,
\begin{equation}
\hat{H}^{(n+1)}\simeq [\hat{B},\hat{H}^{(n)}],\qquad
\hat{B}=\frac{1}{2{\rm i}}\sum_{j}j\,\hat{\vec{S}}_{j}\cdot \hat{\vec{S}}_{j+1}.
\end{equation}
In close analogy to the isotropic Landau--Lifshitz magnet, the `second Hamiltonian flow' $H^{(3)}$
corresponds to the chiral three-spin interaction
\begin{equation}
\hat{h}^{(3)} = \hat{\vec{S}}_{j}\cdot (\hat{\vec{S}}_{j+1}\times \hat{\vec{S}}_{j+2}),
\end{equation}
whereas the Hamiltonian density of the `third flow' is supported on four adjacent lattice sites
\begin{equation}
\hat{h}^{(4)} = 2\hat{\vec{S}}_{j}\cdot \big(\hat{\vec{S}}_{j+1}\times (\hat{\vec{S}}_{j+2}\times \hat{\vec{S}}_{j+3})\big)
+2\hat{\vec{S}}_{j}\cdot \hat{\vec{S}}_{j+2}-4\hat{\vec{S}}_{j}\cdot \hat{\vec{S}}_{j+1}.
\end{equation}
Now we turn to the computation of the spin diffusion constant. The total contribution can be conveniently presented as a spectral
sum over quasi-particle excitations. The latter form an infinite tower of bound states made out of $s$ constituent magnons
carrying $s$ quanta of (bare) magnetization.
The spin diffusion constant associated with the $n$-th Hamiltonian flow can be accordingly decomposed as
$\mathfrak{D}^{(n)}=\sum_{s\geq 1}\mathfrak{D}^{(n)}_{s}$, with contributions of individual quasi-particle species $s$
given by a closed-form expression \cite{NMKI2019}
\begin{equation}
\mathfrak{D}^{(n)}_{s} = \frac{1}{2}\int_{\mathbb{R}} {\rm d} \theta\,n_{s}(\theta)(1-n_{s}(\theta))
\big|\varepsilon^{\prime\,(n)}_{s}(\theta)\big|
\left(\frac{\mu^{\rm dr}_{s}}{2\chi_h}\right)^{2},
\label{eqn:D_s}
\end{equation}
where the integration is taken over the range of the rapidity variable $\theta$,
$n_{s}(\theta)$ are Fermi occupation functions of the reference half-filled (i.e. $\expect{S^{z}}=0$) equilibrium background,
$\varepsilon^{\prime\,(n)}_{s}(\theta)$ denote the dressed values of the energy derivatives
pertaining to the $n$-th Hamiltonian flow, $\mu^{\rm dr}_{s}$ are quasi-particles' dressed magnetic moments,
\begin{equation}
\mu^{\rm dr}_{s} \equiv \lim_{h\to 0} \frac{\partial^{2} m^{\rm dr}_{s}}{\partial h^{2}},
\end{equation}
and $\chi_h = \int {\rm d} x \langle S^z(x) S^z(0)\rangle$ is the rescaled spin susceptibility at half filling.
The task at hand is to isolate the conditions under which $\mathfrak{D}^{(n)}$ becomes divergent.
It is sufficient to inspect the high-temperature limit of the grand-canonical Gibbs ensemble,
\begin{equation}
\hat{\rho} = \mathcal{Z}^{-1}\exp{[h\,\hat{S}^{z}]},\qquad \mathcal{Z}={\rm Tr}\,\hat{\rho},
\end{equation}
where closed-form expressions are available (see \cite{NMKI2019}) in the half-filled $ h\to 0$ limit. In particular, functions
\begin{equation}
n_{s} = \frac{1}{(s+1)^{2}}\sim \frac{1}{s^{2}},\qquad
\mu^{\rm dr}_{s} = \frac{1}{3}(s+1)^{2}\sim s^{2},
\end{equation}
become independent of the rapidity variable $\theta$. Furthermore, using the exact expressions
\begin{equation}
\varepsilon^{\prime\,(2)}_{s}(\theta)=8\theta(s+1)
\left[\frac{1}{(4\theta^{2}+s^{2})^{2}}-\frac{1}{(4\theta^{2}+(s+2)^{2})^{2}}\right],
\end{equation}
and
\begin{equation}
\varepsilon^{\prime\,(n)}_{s}(\theta)=
\frac{\partial^{n-2} \varepsilon^{\prime\,(2)}_{s}(\theta)}{\partial \theta^{n-2}} \qquad {\rm for}\quad n>2,
\end{equation}
we deduce that
\begin{equation}
\int {\rm d} \theta\,| \varepsilon^{\prime\,(n)}_{s}(\theta)|\sim \frac{1}{s^{n}}.
\end{equation}
Based on this we conclude that the sum \eqref{eqn:D_s} is \emph{convergent} whenever $n\geq 4$.
We now take a closer look at anomalous cases $n=2,3$ with a divergent spin diffusion constant.
For this purpose we introduce the regularized diffusion constant,
\begin{equation}
\mathfrak{D}^{(n)}(s_{\star})=\sum_{s=1}^{s_{\star}}\mathfrak{D}^{(n)}_{s},
\end{equation}
where we have imposed a spectral cut-off $s_{\star}$ which integrates out the `heavy' quasi-particles. Now we can
essentially reiterate the dimensional analysis along the lines of ref.~\cite{GV2019}.
The key piece of information is the large-$s$ behavior of
the dressed velocities
\begin{equation}
v^{(n){\rm dr}}_{s}(\theta) = \frac{\partial \varepsilon^{(n)}_{s}(\theta)}{\partial p_{s}(\theta)},
\end{equation}
where $p_{s}(\theta)$ denote momenta of dressed excitations.
At large $s$, these satisfy the algebraic law (to be understood under integration over $\theta$)
\begin{equation}
v^{(n){\rm dr}}_{s}(\theta)\sim \frac{1}{s^{n-1}}.
\end{equation}
The associated `anomalous diffusive length'
\begin{equation}
x_{\star}=v^{(n){\rm dr}}_{s}t,
\end{equation}
can then be converted into the time-domain by comparing it to the \emph{time-dependent} diffusion constant $\mathfrak{D}^{(n)}(t)$ via
\begin{equation}
x^{2}_{\star} \sim \mathfrak{D}^{(n)}(t)t.
\end{equation}
As previously shown in \cite{GV2019}, for $n=2$ one makes the ansatz $\mathfrak{D}^{(2)}(t)\sim t^{\alpha}$
and deduces that $s_{\star}\sim t^{(1-\alpha)/2}$. Inserting this result back to $\mathfrak{D}^{(2)}(s_{\star})\sim s_{\star}$ and
comparing it to $\mathfrak{D}^{(2)}(t)\sim t^{\alpha}$ yields the superdiffusive exponent $\alpha=1/3$
(which translates into the dynamical exponent $z=2/(\alpha+1)=3/2$).
In the $n=3$ case, where
\begin{equation}
\mathfrak{D}^{(3)}(s_{\star})\sim \log{(s_{\star})},
\end{equation}
we instead plug in the ansatz $\mathfrak{D}^{(3)}(t)\sim [\log{(t)}]^{r(t)}$. The self-consistent value of $r$ that follows
from the dimensional analysis requires $\lim_{t\to \infty}r(t)=1$.
We ought to point out the mismatch in comparison to ref.~\cite{Devillard1992} which, using a perturbative analysis at one-loop order,
predicts the logarithmic correction of the type $\mathfrak{D}^{(3)}_{\rm DS}(t)\sim [\log{(t)}]^{1/2}$.
In contrast, our conclusion follows from a non-perturbative calculation based on exact spectrum of thermally-dressed
quasi-particle excitations, but it also relies on a scaling analysis that could in principle fail to distinguish
different types of logarithmic terms.
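The divergence for $n=2$ can also be checked numerically from the closed-form building blocks above. The following Python sketch evaluates the regularized sum $\mathfrak{D}^{(2)}(s_{\star})$; the value $\chi_h = 1/4$ is assumed here for the high-temperature susceptibility of the spin-$1/2$ chain.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

chi = 0.25  # assumed high-temperature half-filling susceptibility (spin-1/2)

def eps_prime_2(theta, s):
    # dressed energy derivative of the second flow (closed-form expression above)
    return 8.0 * theta * (s + 1) * (1.0 / (4 * theta**2 + s**2)**2
                                    - 1.0 / (4 * theta**2 + (s + 2)**2)**2)

def D_s(s):
    n_s = 1.0 / (s + 1)**2
    mu_dr = (s + 1)**2 / 3.0
    # eps_prime_2 is odd in theta and positive for theta > 0
    integral = 2.0 * quad(lambda t: eps_prime_2(t, s), 0.0, np.inf)[0]
    return 0.5 * n_s * (1.0 - n_s) * integral * (mu_dr / (2.0 * chi))**2

s_star = np.arange(1, 400)
D_reg = np.cumsum([D_s(s) for s in s_star])
# D_reg grows linearly in s_star, confirming D^(2)(s_star) ~ s_star
\end{verbatim}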
\section{Landau--Lifshitz hierarchy}
We consider the Landau--Lifshitz hierarchy of commuting Hamiltonian flows
\begin{equation} \label{eqn:abstract_EOM}
\vec{S}_{t_{n}} = F^{(n)}_{\rm LL} = -\vec{S}\times \frac{\delta H^{(n)}}{\delta \vec{S}},
\end{equation}
which includes the celebrated continuous isotropic Heisenberg ferromagnet
\begin{equation}
\vec{S}_{t_{2}} = F^{(2)}_{\rm LL} = \vec{S}\times \vec{S}_{xx}.
\end{equation}
Below we analyze the spin dynamics with aid of the Frenet--Serret apparatus, mapping the spin-field $\vec{S}\in S^{2}$
to a dynamical smooth curve in Euclidean space $\mathbb{R}^{3}$.
To each point on a curve, parametrized by its arclength $x$, we attach
a triad of orthonormal vectors $\{\vec{e}_{i}\}_{i=1}^{3}$, representing the tangent, normal and binormal vectors of the curve.
The local change of frame is then generated by a pair of $\mathfrak{so}(3)$ transformations,
\begin{equation}
(\vec{e}_{i})_{x} = \vec{\Omega}\times \vec{e}_{i},\qquad (\vec{e}_{i})_{t} = \vec{\omega}\times \vec{e}_{i},
\end{equation}
specified by the Darboux and angular-velocity vectors
\begin{equation}
\vec{\Omega} \equiv \tau \vec{e}_{1} + \kappa \vec{e}_{3},\qquad
\vec{\omega} = \sum_{i=1}^{3}\omega_{i}\vec{e}_{i},
\end{equation}
satisfying compatibility relation $\vec{\Omega}_{t}-\vec{\omega}_{x}=\vec{\Omega}\times \vec{\omega}$.
Identifying the spin-field with the tangent vector, $\vec{e}_{1}\equiv \vec{S}$, the time-evolution \eqref{eqn:abstract_EOM}
can be cast in the form $\vec{S}_{t} = (\vec{e}_{1})_{t} = \omega_{3}\vec{e}_{2} - \omega_{2}\vec{e}_{3}$, or
in terms of curvature and torsion as
\begin{equation}
\kappa_{t} = (\omega_{3})_{x}+ \tau \omega_{2},\qquad
\tau_{t} = (\omega_{1})_{x}-\kappa \omega_{2}.
\end{equation}
Therefore, we can express $\vec{S}_{xx} = -\kappa^{2}\,\vec{e}_{1}+\kappa_{x}\,\vec{e}_{2}+\kappa \tau\,\vec{e}_{3}$,
whence we deduce the angular velocities
\begin{equation}
\omega_{1} = \frac{\kappa_{xx}}{\kappa} - \tau^{2},\qquad
\omega_{2} = -\kappa_{x},\qquad
\omega_{3} = -\kappa \tau,
\end{equation}
and accordingly the Frenet--Serret equations
\begin{equation}
\kappa_{t_{2}} = -2\kappa_{x}\tau - \kappa \tau_{x},\qquad
\tau_{t_{2}} = \left(\frac{\kappa_{xx}}{\kappa}+\frac{\kappa^{2}}{2}-\tau^{2}\right)_{x}.
\end{equation}
These are also known as the Betchov-Da Rios equations \cite{DaRios,Barros1999} and govern the motion of a vortex filament in an ideal
fluid.
The higher Hamiltonians $H^{(n\geq 3)}=\int {\rm d} x\,h^{(n)}(x)$ can be constructed recursively \cite{Fuchssteiner1983},
\begin{equation}
h^{(n+1)}=\frac{1}{n}\,D^{-1}\left(\vec{S}_{x}\cdot D\,F^{(n)}_{\rm LL}\right),
\end{equation}
where $D\equiv {\rm d}/{\rm d} x$. In particular, the second (i.e., the third-order) flow is given by
\begin{equation}
\vec{S}_{t_{3}} = F^{(3)}_{\rm LL} = -\vec{S}_{xxx} - \frac{3}{2}\big((\vec{S}_{x}\cdot \vec{S}_{x})\vec{S}\big)_{x},
\end{equation}
and is generated by the chiral interaction of the form
\begin{equation}
h^{(3)} = -\frac{1}{2}\vec{S}\cdot (\vec{S}_{x}\times \vec{S}_{xx}) = -\frac{1}{2}\kappa^{2}\tau.
\end{equation}
We shall in addition examine the third (i.e. fourth-order) flow
\begin{equation}
\vec{S}_{t_{4}} = F^{(4)}_{\rm LL} = \vec{S}\times \vec{S}_{xxxx}
+ \frac{5}{2}\big((\vec{S}_{x}\cdot \vec{S}_{x})\vec{S}\times \vec{S}_{x}\big)_{x},
\end{equation}
which corresponds to
\begin{equation}
h^{(4)} = \frac{1}{2}\left[\vec{S}_{xx}\cdot \vec{S}_{xx} - 5\big(h^{(2)}\big)^{2}\right]
= \frac{\kappa^{2}_{x}}{2}-\frac{\kappa^{2}}{2}\left[\frac{\kappa^{2}}{4} - \tau^{2}\right].
\end{equation}
The integral of the elastic energy density $\mathcal{E}\equiv h^{(2)}=\tfrac{1}{2}\kappa^{2}$ is a conserved quantity,
obeying the local conservation law
\begin{equation}
\mathcal{E}_{t_{n}} + \mathcal{J}^{(n)}_{x} = 0,
\label{eqn:fluxes}
\end{equation}
with flux densities
\begin{align}
\mathcal{J}^{(2)} &= h^{(3)} = \kappa^{2}\tau,\\
\mathcal{J}^{(3)} &= \frac{1}{2}\kappa^{4}-\frac{1}{2}\kappa^{2}_{x} + \kappa \kappa_{xx},\\
\mathcal{J}^{(4)} &= -\frac{3}{2}\kappa^{4}\tau + 2\kappa^{2}\tau^{3} - \kappa^{2}\tau_{xx} -2\kappa\kappa_{x}\tau_{x}
-4\kappa\kappa_{xx}\tau +4\kappa^{2}_{x}\tau,
\end{align}
and so forth.
The angular velocities of the second flow $H^{(3)}$ read
\begin{align}
\omega_{1} &= \tau^{3} + \kappa^{2}\tau - \tau_{xx} - \frac{\kappa_{x}}{\kappa}\tau_{x} - 3\frac{\kappa_{xx}}{\kappa}\tau
-\frac{3}{2}\kappa \tau,\\
\omega_{2} &= 2\kappa_{x}\tau + \kappa \tau_{x},\\
\omega_{3} &= \kappa \tau^{2} - \frac{1}{2}\kappa^{3} - \kappa_{xx},
\end{align}
implying the following Frenet--Serret equations for the curvature and torsion
\begin{align}
\kappa_{t_{3}} + \left(\kappa_{xx}+\frac{1}{2}\kappa^{3}\right)_{x} + \frac{3}{2}\frac{(\kappa^{2}\tau^{2})_{x}}{\kappa} &= 0,\\
\tau_{t_{3}} + \left(\tau_{xx}+3\frac{(\kappa_{x} \tau)_{x}}{\kappa} + \frac{3}{2}\kappa^{2}\tau - \tau^{3}\right)_{x} &= 0.
\end{align}
\medskip
The above dynamical equations are still exact. The next step is to simplify them by taking into account
that on a large coarse-graining scale $\ell$ the curvature and torsion components of a hydrodynamically modulated soft mode
obey $\kappa \sim \tau \sim \mathcal{O}(1/\ell)$, which in effect allows one to neglect the fluxes in Eq.~\eqref{eqn:fluxes}.
This means, in particular, that to the leading-order approximation
$\mathcal{E}\sim \mathcal{O}(1/\ell^{2})$ (and likewise $\kappa$) can be treated as a constant and thus effectively decouples from
the torsion dynamics. Additionally, by dropping the dispersive terms in the equation for $\tau$,
we are left with the cubic Burgers' equation
\begin{equation}
\tau_{t_{3}} - (\tau^{3} + \ldots)_{x} = 0.
\end{equation}
The structure of the fourth flow $H^{(4)}$ is slightly more involved, and a lengthy calculation yields
\begin{align}
\omega_{1} &= \tau^{4} + \tau^{2}\left(-\frac{3}{2}\kappa^{2}-6\frac{\kappa_{xx}}{\kappa}\right)
+\tau\left(-12\frac{\kappa_{x}}{\kappa}\tau_{x}-4\tau_{xx}\right)+3\kappa^{2}_{x}+\frac{3}{2}\kappa \kappa_{xx}-3\tau^{2}_{x}
+\frac{\kappa_{xxxx}}{\kappa},\\
\omega_{2} &= -\kappa_{xxx} - \frac{3}{2}\kappa^{2}\kappa_{x}+3\kappa_{x}\tau^{2}+3\kappa \tau \tau_{x},\\
\omega_{3} &= \kappa \tau^{3} -\frac{3}{2}\kappa^{3}\tau - 3\kappa_{x}\tau_{x} - 3\kappa_{xx}\tau - \kappa \tau_{xx}
-\frac{5}{2}\kappa^{3}\tau.
\end{align}
In this case the Frenet--Serret equations are of the form
\begin{align}
\kappa_{t_{4}} &= -6\kappa^{2}\kappa_{x}\tau + 4\kappa_{x}\tau^{3} -4\kappa_{xxx}\tau - \frac{3}{2}\kappa^{3}\tau_{x}
+6\kappa\tau^{2}\tau_{x} - 6\kappa_{xx}\tau_{x} - 4\kappa_{x}\tau_{xx} - \kappa \tau_{xxx},\\
\tau_{t_{4}} &= (\tau^{4})_{x} + \frac{6}{\kappa^{2}}\left[\kappa_{x}\kappa_{xx}-\kappa^{3}\kappa_{x}-\kappa \kappa_{xxx}\right]\tau^{2}
+\frac{2}{\kappa^{2}}\left[6\kappa^{2}_{x}\tau_{x}-3\kappa^{4}\tau_{x}-12\kappa\kappa_{xx}\tau_{x}
-6\kappa\kappa_{x}\tau_{xx}-2\kappa^{2}\tau_{xxx}\right]\tau \nonumber \\
&+\frac{1}{2\kappa^{2}}\left[3\kappa^{5}\kappa_{x}+15\kappa^{2}\kappa_{x}\kappa_{xx}+5\kappa^{3}\kappa_{xxx}
-2\kappa_{x}\kappa_{xxxx}+2\kappa \kappa_{xxxxx}-24\kappa \kappa_{x}\tau^{2}_{x}-20\kappa^{2}\tau_{x}\tau_{xx}\right].
\end{align}
Repeating the above logic, this leads to a quartic Burgers' equation
\begin{equation}
\tau_{t_{4}} - (\tau^{4} + \ldots)_{x} =0.
\end{equation}
\section{Additional numerical data}
\begin{figure}[htb]
\centering
\includegraphics[width=0.4\columnwidth]{sigmabeta01.pdf}
\caption{Spin conductivity $\sigma(t)=\int_0^{t} dt'\sum_x \langle j_x(t') j_0 \rangle_T$ of the \textit{quantum} spin-1 chain $\hat{H}_\delta$ as a function of time, shown at temperature $T=10$ and for values of $\delta$ ranging from $\delta=0.25$ (light blue) to $\delta=3$ (purple) with a spacing of $0.25$, computed with a tDMRG algorithm. The dashed lines on the right of the plot show the fitted time-asymptotic limit $\lim_{t \to \infty} \sigma(t) = \chi \mathfrak{D}/T$, with $\chi$ the static spin susceptibility.
The data is compatible with a diffusion constant $\mathfrak{D}$ diverging as the isotropic point $\delta =0$ is approached.}
\label{fig:sigmabeta01}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\columnwidth]{supplogt.pdf}
\caption{Spin autocorrelation ${C}_0(t)= \langle S_{L/2}^z(t) S_{L/2}^z(0) \rangle$ in the classical ${H}_\delta$ chain at infinite temperature for different anisotropies $\delta$, shown on a log-log scale.
The autocorrelation crosses over from a fast decay to the normal diffusive scaling $C_0(t)\simeq (2 \pi \mathfrak{D} t)^{-1/2}$ (dashed black lines), with the spin diffusion constant $\mathfrak{D}$ diverging as $\delta \to 0^+$. }
\label{fig:supplogt}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\columnwidth]{DSF.pdf}
\caption{Spin autocorrelation ${C}_j(t)= \langle S_{L/2+j}^z(t) S_{L/2}^z(0) \rangle$ in the anisotropic classical spin chain
${H}_\delta$ at infinite temperature, shown for different values of $j$. The data is consistent with convergence towards the anomalous
diffusion law $\langle S_{L/2}^z(t) S_{L/2 + j}^z(0) \rangle \sim t^{-1/2} (\log t)^{-3/2}$.}
\label{fig:DSF}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\columnwidth]{logtdecay2.pdf}
\caption{Plot of the rescaled spin autocorrelation functions $[t^{1/2}C_{0}(t)]^{-1}$ and $[t^{1/2}C_{0}(t)]^{-3/2}$ as
functions of $\log{(t)}$, computed in the (non-integrable) classical Heisenberg chain. The dashed curves are linear fits in $\log{(t)}$. The numerical data is unable to reliably distinguish between the decays $C_{0}(t) \sim t^{-1/2}(\log t)^{-1}$ and
$C_{0}(t) \sim t^{-1/2}(\log t)^{-2/3}$, although the latter provides a slightly better fit.}
\label{fig:logtdecay2}
\end{figure}
\end{document}
\section{Introduction}
Global population growth and increasing urbanization require novel economically viable concepts for drinking water treatment and water disposal~\cite{larsen2016emerging}. This challenge includes smart operations, such as data-driven urban water management~\cite{eggimann2017potential}, but also new treatment concepts and technologies~\cite{al2019can, bagheri2019advanced}. Synthetic membranes are an essential technological basis for this separation processes~\cite{nunes2019thinking, ghaffour2013technical}. Innovative membrane technologies help to develop new approaches for the reuse of sustainable raw material sources from urban and industrial waste water~\cite{abels2013membrane, niewersch2014nanofiltration} as well as producing smart process water tailored to its application~\cite{nair2018membrane}. These ambitious aspirations require a versatile modeling environment and rigorous optimization methodologies for membrane systems made for case-specific customized processes.
Nowadays, ion selectivity becomes increasingly essential for sustainable drinking water treatment and electrochemical processes~\cite{werber2016materials, luo2018selectivity}. A transition has taken place from the complete removal of all minerals from drinking water by reverse osmosis membranes towards more selective nanofiltration membranes. The selectivity of ionic components enables the recovery of sustainable raw material sources from urban and industrial waste water~\cite{shannon2008science}, such as phosphorus~\cite{remmen2019phosphorus}.
The development of ion separation membranes is driven by the advent of the layer-by-layer (LbL) nanofiltration technology~\cite{liu2018porous, harris2000layered, malaisamy2005high}. This technology is characterized by oppositely charged polymer layers applied to a conventional highly porous ultrafiltration membrane. The layer-wise manufacturing technique enables precise tuning of selectivity and permeability~\cite{cheng2018selective, ilyas2017preparation, menne2016regenerable} and a scale-up from the laboratory to industrial scale~\cite{menne2016precise}.
However, tailoring of the selectivity of charged and uncharged species \cite{rall2019rational, labban2018relating, dirir2014theoretical} is the predominant challenge. Moreover, exploiting all synergies in membrane development through the simultaneous design of LbL membranes, the process structure, and its operation is an open research question~\cite{rall2020simultaneous}.
Detailed mechanistic models describe the mass transport and separation by membranes at the nano-scale. There exists a variety of different membrane models for ion separation~\cite{lonsdale1965transport, schlogl1966membrane, yaroshchuk2013solution, femmer2016mechanistic, bowen2002modelling}. These mechanistic models range from highly accurate 3D models to low fidelity heuristics, as shown in Figure~\ref{fig:Overview}. Most notably, a trade-off exists between the fidelity of the approach and its computational resource-efficiency~\cite{jin2011surrogate}.
Accurate mechanistic models describe ion transport mechanisms in multiple dimensions, leading to partial differential equations (PDE) that require special solvers for the solution, i.e., computational fluid dynamics~(CFD) for 3D. Lumping dimensions reduces the dimensionality to 2D and 1D models that can be evaluated at lower computational effort at the cost of fidelity. Finally, heuristics describe lumped systems (0D) and can be assessed at little computational effort, but their predictive capability is insufficient to describe the complex ion transport. Moreover, only experiments obtain high fidelity data directly.
This results in the dilemma that increased accuracy leads to enormous computational effort, so that accurate models cannot be used for process optimization with standard tools.
\begin{figure}[H]
\centering
\includegraphics[width=0.75\textwidth]{Figures/Introduction_Overview.png}
\vspace{-1cm}
\caption{ANN-based surrogate models push the boundary of computational resource-efficiency: Fidelity and computational resource-efficiency trade-off of different descriptive approaches. The descriptive approaches include experiments, simulation studies, and heuristics. Adapted from~\cite{jin2011surrogate}.}
\label{fig:Overview}
\end{figure}
Economically viable and sustainable process design requires decision making at large process scales (potentially also considering environmental effects~\cite{lapkin2010chemical}). At the same time, physical models of the complex ion transport are described at small-scales. This multi-scale problem has strong implications for optimal decision making:
On the one hand, superstructure process optimization with 3D, 2D, and 1D models embedded leads to (mixed-integer) optimization problems with PDE constraints. These mixed-integer dynamic optimization~(MIDO) problems~\cite{chachuat2006global,singer2006global,wesselhoeft2018algorithms} are very difficult to solve, and their deterministic global optimization is currently limited to small-scale problems~\cite{sager2015efficient}. Thus, process optimization using this approach is currently not possible.
On the other hand, heuristics or short-cut models can be embedded computational resource-efficiently in complex optimization superstructures~\cite{ohs2016optimization, zarca2019optimization, lee2018automated, bocking2019can, mores2018membrane, alsayegh2017systematic, ghobeity2014optimal}. However, these simplified heuristics or short-cut models do not sufficiently describe the influence of membrane parameters, such as the synthesis protocols or process environment, on the membrane performance. Simultaneous development of the membrane synthesis, along with the process using heuristics or short-cut models, is not possible.
Overcoming the trade-off between fidelity and computational resource-efficiency is necessary for deterministic global membrane process optimizations with the simultaneous development of the membrane synthesis.
One way to overcome this issue is to replace the expensive simulation by a surrogate model~\cite{white2019multiscale}. Here, the accurate model is evaluated offline, creating a training data set that is subsequently learned by a supervised machine learning algorithm. This results in a data-driven surrogate model that approximates the accurate model. Then, the data-driven surrogate model can be combined with further data-driven and mechanistic models yielding a hybrid mechanistic / data-driven model that can be optimized~\cite{von2014hybrid}. The use of machine learning models enables bridging scales, e.g., in material design~\cite{zhou2019big, tsay2019110th, prakash2018chances, sanchez2018inverse, henao2011surrogate, unger2009neural} or process systems engineering~\cite{lee2018machine,venkatasubramanian2019promise}.
In addition, data-driven models have been used in various disciplines for process optimization~(e.g.,~\cite{mistry2018mixed,boukouvala2016data,cozad2014learning,fahmi2012process,del2019review,schafer2019reduced}).
Further, we have found that hybrid mechanistic / data-driven models with artificial neural networks~(ANNs) embedded can be optimized efficiently by a reduced-space deterministic global optimization method~\cite{schweidtmann2019deterministic}. In the past, this new optimization approach has already been used successfully for the optimization of energy processes~\cite{schweidtmann2019singlespecies,huster2019WorkingFluidSelection,schweidtmann2019flash,huster2019impact,schafer2019wavelet}.
Many publications in the field of membrane science address the implementation of neural networks in membrane development~\cite{madaeni2010modeling, al2007rejection, wessling1994modelling} or prediction of membrane operation~\cite{roehl2018modeling, salehi2016modeling, soleimani2013experimental}. Recently, we extended the use of ANN surrogate models in membrane science to describe and optimize LbL-based membrane systems by identifying superior membrane synthesis protocols based on the delicate trade-off between ion retention characteristics and permeability~\cite{rall2019rational}.
Next, a coupling of the ANNs into a hybrid mechanistic / data-driven model enables an optimization strategy to simultaneously design the performance of LbL nanofiltration membrane modules and the separation process. This yields membrane synthesis protocols and membrane processes that are optimally tailored to the desired separation task~\cite{rall2020simultaneous}. The results suggest that simultaneous membrane synthesis and process optimization design achieve immediately favorable results with lower impurities at comparable costs.
In the previous work, however, this optimization was limited to lab data. In particular, only a single-stage membrane process has been considered because the available experimental data is only valid for a fixed feed concentration and single salt solutions.
Thus, an extension to accommodate for higher feed concentrations, multiple-staged processes, and salt mixtures is highly desired.
In this work, we propose a surrogate-based approach to bridge the gap between mechanistic ion transport models at the nano-scale and optimal process design through deterministic global superstructure optimization at a large-scale. Thereby, we facilitate ANN surrogate models trained on data generated by a one-dimensional ion transport model, which are subsequently embedded in a state-of-the-art membrane process optimization model.
We use the extended Nernst-Planck model, called pEnPEn, for describing the ion transport through the membrane~\cite{femmer2015ion, femmer2016mechanistic, evdochenko2019direct}. The model pEnPEn describes ion transport through multi-layered geometries (i.e., LbL nanofiltration membranes) consisting of $n$ electrolyte layers (En) with $n$ polyelectrolyte layers, i.e., membranes, (PEn). An applied pressure ($p$) acts as the driving force of the separation process.
The aforementioned one-dimensional ion transport model is evaluated offline, creating an extensive data set that describes the membrane's ionic retention based on membrane-specific parameters (such as the layer charge $X$ and the layer thickness $\Delta x$) and process-specific parameters (such as the transmembrane velocity $v$ and the ionic feed concentration $c_j$). A subsequent training of the data creates an ANN surrogate model.
\pagebreak
Next, the surrogate models are exploited towards a more accurate two-dimensional distribution of the membrane module in order to capture the filtration-related decrease of salt retention. Therefore, individual ANNs are arranged in series to resolve the membrane in the direction of flow. The data generated by this series of ANNs are used to create a two-dimensional surrogate model, which is then embedded in the membrane process optimization model.
This enables the complex mechanistic model to be applied computational resource-efficiently in an optimization context, despite the partial differential-algebraic form of the original model.
The paper is organized as follows:
First, we present the data generation and the learning of ANN surrogate models. Second, the more accurate two-dimensional distribution of the membrane module is presented through ANN surrogate modeling. Third, the hybrid mechanistic / data-driven multi-scale process model is applied to a case study optimizing nanofiltration membrane plants for multiple objectives, feed concentrations, filtration stages, and salt mixtures.
The membrane module synthesis properties are optimized along with the superstructure of the membrane plant. In particular, process configurations of the plant (i.e., number of filtration stages, recirculating of streams, pump power) as well as the membrane synthesis properties (i.e., the layer charge $X$ and the layer thickness $\Delta x$) are degrees of freedom and are optimized simultaneously.
This work sets the foundation for computational resource-efficient multi-scale modeling, integrating simulation studies of the complex ion transport into the decision making process of large-scale process optimization.
Finally, all developed models and the optimization solvers are available open-source, making them a viable tool for future research and industrial applications.
\newpage
\section{Methodology}
In this section, the workflow of the data-driven approach is described to bridge the scales between (i) high fidelity ion transport models on a nano-scale and (ii) deterministic global membrane process optimization on a large-scale.
First, the high fidelity simulation of the ion transport through the membrane on the nano-scale is replaced by a surrogate model. This surrogate model based replacement is performed by evaluating the high fidelity ion transport model offline and creating a training data set that is subsequently learned by a supervised machine learning algorithm (here ANNs).
Then, the resulting data-driven ANN-based surrogate model that approximates the accurate model is embedded in a hybrid mechanistic / data-driven model and optimized to find optimal membrane plant process configurations.
In summary, this procedure avoids PDE constraints in the formulation of the deterministic global optimization problem. Thereby, this method enables deterministic optimization of membrane processes with surrogate models at a low computational cost similar to lumped systems (i.e., heuristics or short cut models) but significantly more accurate.
The workflow commences as depicted in Figure~\ref{fig:DataDrivenWorkflow}.
(1) The high fidelity ion transport model (Section~\ref{sec:datageneration}) is evaluated offline to create an extensive data set, that (2) is used to train the data-driven ANN-based surrogate models on the data (Section~\ref{sec:ANNsMethods}).
(3) Moreover, a surrogate model including a two-dimensional distribution of the membrane module is created for higher fidelity. The required resolution of the two-dimensional distribution and the additional data generation are described in Section~\ref{sec:ANNsMethodsadaption}, and the implementation of the two-dimensional surrogate model is described in Section~\ref{sec:ANNsTwoSurrogate}.
(4) Next, the individual surrogate models are integrated with a mechanistic process model to a hybrid mechanistic / data-driven process model. In Section~\ref{sec:membrane_process_design}, we explain the overall mechanistic process model, including cost correlations.
(5) Finally, the hybrid model is optimized using a reduced-space formulation~\cite{schweidtmann2019deterministic} and our open-source deterministic global solver MAiNGO~\cite{MAiNGO}. In Section~\ref{sec:global_deterministic_optimization}, we briefly describe the optimization method.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{Figures/Workflow.png}
\vspace{-1cm}
\caption{The workflow of the data-driven approach. The input data is created through the high fidelity ion transport model simulation study. Next, an ANN takes this data to learn the ionic separation performance of the membrane based on the model, creating a data-driven ANN-based surrogate model. A two-dimensional distribution of the membrane module is constructed for higher fidelity of the surrogate model. For this purpose, the surrogate model is used to generate additional data that is trained in a new ANN surrogate model. Next, this ANN surrogate model is embedded in a mechanistic process model. Finally, the optimal process design is solved by deterministic global process optimization.}
\label{fig:DataDrivenWorkflow}
\end{figure}
All tools necessary to rebuild and extend this method are open-source available to everyone. The process models and optimization problems are available open-source (\url{http://permalink.avt.rwth-aachen.de/?id=506639}). The training data, the trained artificial neural networks, and the optimization results are available in the electronic supplementary material of this article (published by the journal). The artificial neural network models and training scripts are available open-source within the ``MeLOn - \textbf{M}achin\textbf{e} \textbf{L}earning Models for \textbf{O}ptimizatio\textbf{n}'' toolbox~\cite{MeLOn_Git} (\url{https://git.rwth-aachen.de/avt.svt/public/MeLOn/}). Furthermore, the deterministic global solver MAiNGO is also available open-source (\url{https://git.rwth-aachen.de/avt.svt/public/maingo}).
\subsection{Data generation by high fidelity ion transport model}\label{sec:datageneration}
In this work, the initial data set to train the ANN surrogate model is created by a simulation study using a high fidelity mechanistic ion transport model. We use a one-dimensional extended Nernst-Planck model, called pEnPEn~\cite{femmer2015ion, femmer2016mechanistic}, to describe the ion transport through multi-layered geometries. LbL nanofiltration membranes can be modeled with this model~\cite{evdochenko2019direct} by introducing the layer thickness $\Delta x$ of the separation layer with a homogeneous layer charge $X$ as shown in Figure~\ref{fig:MechanisticModelIonicTransport}.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{Figures/ModelMembraneProperties.png}
\vspace{-0.5cm}
\caption{Overview of model parameters of the extended ion transport model pEnPEn~\cite{femmer2015ion, femmer2016mechanistic, evdochenko2019direct}. The layer charge $X$, and the layer thickness $\Delta x$ of the separation layer determine the separation performance. These values are inputs to the ANN and serve as a degree of freedom during simultaneous optimization of membrane module synthesis properties along with the superstructure of the membrane plant.}
\label{fig:MechanisticModelIonicTransport}
\end{figure}
The model is evaluated offline, creating a large data set that describes the membrane's ionic retention based on membrane-specific parameters (such as the layer charge $X$ and the layer thickness $\Delta x$) and process-specific parameters (such as the transmembrane velocity $v$ and the ionic feed concentration $c_j$). The generation of data is automated, utilizing a wrapper script. This generates a set of training data inputs/outputs by a Latin hypercube sampling design for the ANN surrogate model (a minimal sampling sketch is given after the list below). The variables are listed as follows:
\begin{itemize}
\item The \textbf{layer charge $X$~[\SI{}{\mol\per\cubic\meter}]} is a membrane-specific variable and \textbf{input} to the ANN. For single salts, the layer charge varies in the interval $[-500, -100] \subset \mathbb{R}$. For salt mixtures, this interval needs to be extended to $[-1000, -10] \subset \mathbb{R}$ to account for the more challenging separation task. The membrane charge needs to be higher than for single salts due to the consideration of higher ionic concentrations for salt mixtures.
\item The separation \textbf{layer thickness $\Delta x$~[\SI{}{\nano\meter}]} is a membrane-specific variable and \textbf{input} to the ANN. The separation layer thickness varies in the interval $[75, 150] \subset \mathbb{R}$ according to experimental observations of \cite{menne2016precise}.
\item The \textbf{transmembrane velocity $v$ [\SI{}{\micro\meter\per\second}]} is a process-specific variable and \textbf{input} to the ANN. The retention of ions strongly depends on the transmembrane velocity, i.e., recovery rate, and varies in this study in the interval $[0, 50] \subset \mathbb{R}$.
\item The \textbf{salt feed concentration $c_j$~[\SI{}{\mol\per\cubic\meter}]} is a process-specific variable and \textbf{input} to the ANN. The retention of ions strongly depends on the feed concentration. The range of salt feed concentration for a single membrane unit is $[1, 50] \subset \mathbb{R}$. When using two membrane filtration units in series, an extension of the data for feed concentrations below \SI{1}{\mol\per\cubic\meter} is needed. Therefore, the data is extended using an additional Latin hypercube sampling to extend the data towards concentrations in the interval $[0.01, 50] \subset \mathbb{R}$.
\item The \textbf{ionic retention of ion j $R_j$ [\%]} is the \textbf{output} to the ANN.
\end{itemize}
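The following Python sketch illustrates such a Latin hypercube design for the single-salt bounds listed above; SciPy's quasi-Monte Carlo module is used here as an illustrative stand-in for the MATLAB wrapper script actually employed.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

# bounds for (X [mol/m^3], dx [nm], v [um/s], c_feed [mol/m^3]), single-salt case
lower = np.array([-500.0,  75.0,  0.0,  1.0])
upper = np.array([-100.0, 150.0, 50.0, 50.0])

sampler = qmc.LatinHypercube(d=4, seed=0)
inputs = qmc.scale(sampler.random(n=1000), lower, upper)
# each row is one (X, dx, v, c_feed) input passed to the pEnPEn simulation;
# non-converged runs are discarded before ANN training
\end{verbatim}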
The one-dimensional model is established for the membrane itself, along with two adjacent diffusive layers. The model is implemented in COMSOL Multiphysics\textsuperscript{\textregistered}~\cite{COMSOL} and executed via the MATLAB~\cite{MATLAB2019} API.
The partial differential extended Nernst-Planck equation is simplified to a second-order ordinary differential equation by considering only the steady-state behavior of the charged species. Moreover, the pressure-driven convective flux is described via Darcy's law, a first-order ordinary differential equation. The boundary conditions are applied for the pressure at the inlet and outlet of the domain. Additionally, charge neutrality across the membrane is assumed.
The resulting problem formulation involves the coupling of two different ordinary differential equations and is highly non-linear. It is solved with an adaptive Newton-Raphson algorithm. The system matrix of the linearized problem is inverted using the MUMPS algorithm. This modeling approach is generic and thus applies to all combinations of salts used in this study.
Due to convergence issues in the range of low initial salt feed concentrations (i.e., at high local gradients in the second-order ordinary differential equation), a load ramping technique is applied to the excess charge parameter on the membrane surface. Input combinations to the one-dimensional ion transport model for which the solver did not converge are excluded from the training set.
\subsection{Training of the data-driven surrogate model}\label{sec:ANNsMethods}
In this study, we utilize supervised machine learning techniques for the creation of surrogate models. In the literature, a large variety of machine learning techniques exists, including Gaussian processes, artificial neural networks, and decision trees for correlations of varying complexity. In this work, we implement ANNs as surrogate models, as they can capture the ion transport in membranes based on synthesis parameters, as demonstrated in our previous work~\cite{rall2019rational, rall2020simultaneous}. ANNs are black box models that are frequently used for unsupervised and supervised learning tasks, including pattern recognition, classification, and regression~\cite{lecun2015deep}. ANNs learn training data and can approximate an underlying input-output function. The evaluation of ANNs is given by an explicit algebraic function that depends on the architecture of the ANN, i.e., the connection of neurons~\cite{dayhoff2001artificial}. This function is embedded in the process optimization problem and is efficiently solved by a reduced-space deterministic global optimization method~\cite{schweidtmann2019deterministic}.
In this work, a shallow feed-forward multi-layered perceptron ANN including one hidden layer is utilized to recreate the one-dimensional extended Nernst-Planck mechanistic ion transport model (pEnPEn) from the previous section. A hyperbolic tangent transfer function is employed in the hidden and output layers. The structure of the ANN is depicted in Figure~\ref{fig:ANNiontransport}. Inputs to the ANN are the layer charge $X$, the layer thickness $\Delta x$, the transmembrane velocity $v$, and the salt feed concentration $c_j$. The output of the ANN is the ionic retention of ion j $R_j$.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Figures/ANN.png}
\caption{Multi-layered perceptron ANN with one hidden layer used in this study as a data-driven surrogate model to recreate the ion transport through nanofiltration membranes. The training is based on data generated by a one-dimensional extended Nernst-Planck mechanistic ion transport model (pEnPEn). The membrane-specific parameters (such as the layer charge $X$ and the layer thickness $\Delta x$) and process-specific parameters (such as the transmembrane velocity $v$ and the salt feed concentration $c_j$) are inputs to the ANN. The ionic retention of ion j $R_j$ is the output of the ANN.}
\label{fig:ANNiontransport}
\end{figure}
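To make the explicit algebraic character of the surrogate concrete, the following Python sketch spells out the input-output function of the architecture in Figure~\ref{fig:ANNiontransport}, with a hyperbolic tangent in both the hidden and the output layer. The weights shown are random placeholders; in practice, they result from training, and inputs and outputs are scaled to the range of the transfer function.
\begin{verbatim}
import numpy as np

def ann_retention(x, W1, b1, W2, b2):
    # explicit algebraic form of the shallow MLP surrogate:
    # R = tanh( W2 @ tanh( W1 @ x + b1 ) + b2 )
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

# placeholder weights for n = 6 hidden neurons, 4 inputs, 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 4)), rng.normal(size=6)
W2, b2 = rng.normal(size=(1, 6)), rng.normal(size=1)
x = np.array([-300.0, 100.0, 25.0, 10.0])  # (X, dx, v, c_feed); scaling omitted
print(ann_retention(x, W1, b1, W2, b2))
\end{verbatim}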
The main drawback of the data-driven modeling approach using ANNs is its limited extrapolation capability, meaning that any data-driven model is no longer valid outside of its training data domain (i.e., the convex envelope of the data). This results in a large data requirement for the training of ANNs. In our study, the training data of the ANNs is generated by a mechanistic ion transport model that allows us to have any number of data points (within computational reason) for training and validation. A consideration of probabilities, i.e., through Gaussian processes, is omitted because we use a data set extracted from a mechanistic model.
The training data set we use for this study is small compared to many other machine learning applications, such as deep artificial neural networks with millions of parameters that are frequently used for image recognition. Thus, we select a shallow artificial neural network architecture with one hidden layer ranging from 6 to 14 hidden neurons because the input dimensionality and training data set are much smaller. Furthermore, the training of the ANNs is optimized to minimize adverse effects using small data sets including: (a) multiple runs of ANN trainings to mitigate sporadic fluctuations in artificial neural network performance, (b) a randomized data set for training, validation and test to account for random effects due to small test data, and (c) the use of a k-fold cross-validation to prevent overshooting between data sets and avoid overfitting of the data set.
The data preparation and training procedure are based on our previous work~\cite{rall2019rational, rall2020simultaneous}, and adapted accordingly. The architecture of ANNs is flexible and can be adjusted by setting its hyperparameters, e.g., number of layers, for application. In general, the architecture should be selected appropriately to avoid undesired under- and overfitting effects. The available data set is split into 70~\% training, 15~\% validation, and 15~\% test set at random. Early stopping is used to prevent overshooting during all ANN training. All ANNs used in this work are created in the MATLAB environment~\cite{MATLAB2019,marquardt1963algorithm}. The training procedure commences in the following steps:
First, the hidden layer size is determined by k-fold cross-validation with a range of $n \in [1, 50] \subset \mathbb{N}$ hidden neurons. The minimum of the characteristic bias/variance trade-off bathtub curve of the mean-squared error (MSE) determines the best number of hidden neurons $n$ and thereby the hidden layer size.
Second, multiple ANNs are trained with the previously determined hidden layer size $n \pm 3$ using a split of 70~\% training, 15~\% validation, and 15~\% test set at random.
Third, the best performing ANN candidate is selected for the following case studies.
Two ANNs are utilized as one-dimensional data-driven surrogate models, considering either a single salt (\ce{Na2SO4}) or a salt mixture (\ce{NaCl} and \ce{Na2SO4}). Here, only the predominant ion species (i.e., the ions \ce{Cl-} and \ce{SO4^{2-}}) of the filtrated salt mixture are used as inputs to the ANN. This consideration is valid due to electroneutrality in the solution and membrane. The monovalent cation \ce{Na+} is calculated based on a mass balance. For single salts, the ANN has one output (i.e., \ce{Na2SO4} retention), and for salt mixtures, the ANN has two outputs (i.e., \ce{Cl-} retention and \ce{SO4^{2-}} retention). Negative salt retentions are possible \cite{yaroshchuk2008negative} and are mapped by the ANN. A list of properties of both ANNs is summarized in Table~\ref{tab:ArificialNeuralNetworks}; a minimal training sketch follows after the table.
\begin{table}[h]
\caption{One-dimensional data-driven surrogate models used in this work to recreate the one-dimensional extended Nernst-Planck high fidelity mechanistic ion transport model (pEnPEn). The number of neurons $n$ of each ANN, the amount of training data set entries, and the section in which it is used is given.}
\centering
\label{tab:ArificialNeuralNetworks}
\begin{tabular}{lclrc}
\hline
Network name & Hidden neurons $n$ & Description & Training points & Used in Section \\
\hline
ANN$_{Na_{2}SO_{4},~k=1,~ 1~\text{Unit}}$ & 6 & Single salt & 998 & \ref{sec:ANNsMethodsadaption}, \ref{sec:results_optimal_solution_strategy} \\
ANN$_{NaCl,~Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ & 8 & Salt mixture & 1,000 & \ref{sec:ANNsMethodsadaption} \\
\hline
\end{tabular}
\end{table}
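A minimal sketch of the hidden-layer-size selection by k-fold cross-validation is given below; scikit-learn is used as an illustrative stand-in for the MATLAB training environment actually employed, with early stopping enabled as described above.
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def cv_mse(X, y, n_hidden, k=5):
    # mean cross-validated MSE of a shallow tanh MLP with n_hidden neurons
    errors = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='tanh',
                           early_stopping=True, max_iter=5000)
        net.fit(X[train], y[train])
        errors.append(np.mean((net.predict(X[test]) - y[test])**2))
    return np.mean(errors)

# scan the hidden layer size and pick the minimum of the bathtub curve,
# with X, y the scaled training inputs/outputs from the pEnPEn data set:
# best_n = min(range(1, 51), key=lambda n: cv_mse(X, y, n))
\end{verbatim}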
\subsection{Two-dimensional distribution of the membrane module}\label{sec:ANNsMethodsadaption}
Up to this point, only the changes orthogonal to the membrane surface are captured by the one-dimensional surrogate model of the previous section. However, due to the retention of salt at the membrane, the feed concentration in the module increases with the length of the membrane. The feed concentration of the salt strongly influences the retention of the nanofiltration membrane. Changes in feed concentration in the direction of flow have not been considered so far.
In Figure~\ref{fig:results_module}, a two-dimensional distribution of the membrane module in flow direction through ANN-based surrogate models is proposed. Here, the membrane length is divided into several elements $k$, and the conservation equations are solved on every element. An individual ANN describes each element. In Figure~\ref{fig:results_module}~A-B, the strong influence of the salt feed concentration on the retention is evident for a selected membrane case.
\begin{figure}[H]
\centering
\includegraphics[width=.95\linewidth]{Figures/Results_Module.png}
\caption{Two-dimensional distribution of the membrane module in flow direction through ANN-based surrogate models. Shown is the negative effect of the feed concentration on the salt retention of the membrane. Only the two-dimensional distributed cases do not underestimate this local reduction of ionic retention in the direction of flow. In A), the retention of one selected case and membrane is shown for three different initial feed concentrations and varying element numbers $k$. In B), the corresponding feed-side concentration is shown, resulting from the filtration-related increase of the feed concentration.}
\label{fig:results_module}
\end{figure}
Two effects contribute to the reduction of salt retention. First, the retention decreases with increasing initial salt feed concentration. This is demonstrated in Figure~\ref{fig:results_module}~A-B for the concentrations of \SI{2}{\mol\per\cubic\meter}, \SI{5}{\mol\per\cubic\meter}, and \SI{10}{\mol\per\cubic\meter}. Second, a filtration-related reduction of the retention of salt is observed in the direction of flow. During filtration, the salt concentration increases at the retentate side. This negative effect of the increasing feed concentration on the salt retention of the membrane is pronounced differently for the three cases shown. A one-dimensional distributed membrane underestimates this effect and leads to inaccurate results. With an increasing number of elements $k$, the approximation approaches the most accurate concentration profile. The relative gap between a one-dimensional distributed membrane and a two-dimensional distributed membrane is calculated for the specific case in Table~\ref{tab:results_RelativeGapDiscretization}. Here, the relative gap ranges from +43\% to +240\%, dependent on the initial feed concentration. These results are case-specific, but the qualitative trend applies to any other case.
\begin{table*}[h]
\caption{The feed concentration increases along the membrane length during filtration. The relative gap of the permeate concentration is calculated based on a two-dimensional distributed membrane with $k=1000$~elements and a one-dimensional distributed membrane with $k=1$~element for different feed concentrations. The gap occurs due to the decreasing retention with increasing feed concentration, which is not accounted for in a one-dimensional distributed membrane.}
\centering
\label{tab:results_RelativeGapDiscretization}
\begin{tabular}{rcccc}
\hline
Variable name & Unit & $c_{feed}$ = \SI{10}{\mol\per\cubic\meter} & $c_{feed}$ = \SI{5}{\mol\per\cubic\meter} & $c_{feed}$ = \SI{2}{\mol\per\cubic\meter} \\
\hline
Two-dimensional distributed ($k = 1000$) & [\SI{}{\mol\per\cubic\meter}] & 6.71 & 1.99 & 0.47 \\
One-dimensional distributed ($k = 1$) & [\SI{}{\mol\per\cubic\meter}] & 9.62 & 4.58 & 1.61 \\
\hline
Absolute difference & [\SI{}{\mol\per\cubic\meter}] & 2.92 & 2.53 & 1.14 \\
Relative difference & [\SI{}{\percent}] & +43.50 & +129.75 & +239.86 \\
\hline
\end{tabular}
\end{table*}
Next, the required number of elements for a single salt or salt mixture process optimization is identified. This is necessary to determine the number of two-dimensional distribution elements $k$ that are necessary to describe the change of states along the membrane surface accurately. Therefore, the number of elements is increased until the relative gap of the concentration is below 1\%, where it can be considered sufficiently accurate. The relative difference (or residuum) is calculated by the relative gap between the two-dimensional distributed ($k>1$) and one-dimensional distributed ($k=1$) permeate concentration leaving the membrane module. This procedure is repeated for different membrane configurations covering the whole application range. Two exemplary cases and the progression of the residuum over the number of elements used are visualized in Figure~\ref{fig:results_ResiduumVSElements}. Here, a maximum number of $k=56$ elements is needed for single salt applications, and $k=51$ elements are required for salt mixtures.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{Figures/Results_ResiduumVSElements.png}
\caption{Two-dimensional distribution of the module in flow direction: the membrane unit is split into multiple elements $k$. The number of elements is increased until the relative gap of the permeate concentration, as a measure of filtration performance, is below 1\%.}
\label{fig:results_ResiduumVSElements}
\end{figure}
\subsection{Training of the surrogate model with two-dimensional distribution of the membrane}\label{sec:ANNsTwoSurrogate}
Resolving the membrane through a two-dimensional distribution is achieved by subdividing it into a fixed number of elements and arranging the individual ANNs in series. Then, the mass conservation laws and the ANN computations are employed on every element. This creates a local output state from the input of each element. The technique is applied recursively to all elements along the direction of flow.
Now, this repeated computation of the local element quantities can be directly embedded into the process of plant optimization. However, this nesting of ANNs can lead to weak McCormick relaxations and can adversely affect the computational performance of global optimization~\cite{Doncevic2020GlobalANNControl}.
An alternative approach is to train a new ANN surrogate model that includes these recursions as a black box, creating a single surrogate model for the two-dimensional modeling of the nanofiltration module. The training of an ANN-based surrogate model including a two-dimensional distribution of the membrane module is described in the following.
A special case of data generation and ANN training arises when creating the surrogate model that includes a two-dimensional distribution. The one-dimensional surrogate models trained on the one-dimensional extended Nernst-Planck ion transport model are connected in series and evaluated offline. This series of surrogate models creates an additional data set for the new two-dimensional surrogate model. The extended two-dimensional surrogate model contains additional influential variables, as the volume flows of the entire module need to be considered. Partially resolving the recursive equation (shown here for the single salt case) results in the following description
\begin{align}
c_{in}(k) = c_{in}(k-1) \cdot \left( 1 + \frac{\gamma}{K-(k-1) \cdot \gamma} R(k-1) \right)
\label{eqn:discr1}
\end{align}
where $c_{in}(k)$ denotes the feed concentration entering element $k$, $K$ the total number of elements, $R(k)$ the ionic retention of element $k$, and $\gamma$ the flow ratio, i.e., the ratio of permeate to feed flow over the nanofiltration unit, defined as
\begin{align}
\gamma = \frac{F_{\text{permeate}}}{F_{\text{feed}}}.
\label{eqn:discr_kappa}
\end{align}
This flow ratio of the membrane module determines the performance of the unit. Therefore, the two-dimensional surrogate model is extended by the additional input of $\gamma$ to account for the extra parameter.
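For illustration, the recursion of Eq.~(\ref{eqn:discr1}) can be evaluated with a simple loop. The following Python sketch assumes a hypothetical callable \texttt{retention} that wraps the one-dimensional surrogate model and returns the local retention of an element from its feed concentration (all other surrogate inputs are held fixed for brevity).
\begin{verbatim}
def feed_profile(c_feed, gamma, K, retention):
    # Propagate the feed concentration through K serial elements:
    # each element withdraws permeate and thereby enriches the
    # remaining feed-side stream.
    c = [c_feed]
    for k in range(1, K + 1):
        R = retention(c[-1])  # local surrogate evaluation
        c.append(c[-1] * (1.0 + gamma / (K - (k - 1) * gamma) * R))
    return c
\end{verbatim}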
To obtain the training data of the two-dimensional surrogate model, an additional Latin hypercube sampling for a random two-dimensional distribution is applied, and the data are created offline. Due to the low computational effort of evaluating such recursive ANNs in series, $k = 1000$ elements are chosen for the following process optimizations. Therefore, the relative difference between the two-dimensional distributed ($k>1$) and one-dimensional distributed ($k=1$) permeate concentration leaving the membrane module is always below 1\%; for example, in Figure~\ref{fig:results_ResiduumVSElements}, at most $k=56$ elements are needed.
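For instance, such a Latin hypercube design can be drawn with SciPy, as sketched below. The flow ratio bounds follow the application range stated in the next paragraph, the layer thickness bounds are chosen consistent with the permeability correlation used later (Section~\ref{sec:membrane_process_design}), and the remaining bounds are illustrative placeholders.
\begin{verbatim}
from scipy.stats import qmc

# Inputs: layer charge X, layer thickness dx, transmembrane
# velocity v, feed concentration c, flow ratio gamma.
lower = [10.0,   75.0, 0.01,  2.0, 0.18]  # X and v bounds illustrative
upper = [100.0, 150.0, 0.10, 20.0, 1.00]
sampler = qmc.LatinHypercube(d=5, seed=0)
samples = qmc.scale(sampler.random(n=1000), lower, upper)
\end{verbatim}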
The training procedure is performed as previously stated in Section~\ref{sec:ANNsMethods}. Now, the inputs to the ANN are the layer charge $X$, the layer thickness $\Delta x$, the transmembrane velocity $v$, the salt feed concentration $c_j$, and the additional parameter $\gamma$ for the flow ratio. The flow ratio $\gamma$ is valid in the range $[0.18; 1] \subset \mathbb{R}$, taking into account the physical constraint on $\gamma$ (i.e., the permeate flow can never be higher than the feed flow). The output of the ANN is the ionic retention $R_j$ of ion $j$. A list of properties of the new two-dimensional surrogate models can be found in Table~\ref{tab:ArificialNeuralNetworks2}.
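As a minimal sketch (not the exact training setup of Section~\ref{sec:ANNsMethods}), a surrogate of the stated size, i.e., a single hidden layer with six neurons, could be fitted with scikit-learn, where \texttt{X\_train} holds the five inputs and \texttt{y\_train} the retention targets generated offline.
\begin{verbatim}
from sklearn.neural_network import MLPRegressor

# Columns of X_train: [X, dx, v, c_feed, gamma]; target: retention R.
ann = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh",
                   solver="lbfgs", max_iter=5000)
ann.fit(X_train, y_train)
r2 = ann.score(X_test, y_test)  # goodness of fit on held-out data
\end{verbatim}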
\begin{table}[h]
\caption{Two-dimensional surrogate models used in this work to recreate the two-dimensional distribution of the membrane in the flow direction. The number of hidden neurons $n$ of each ANN, the number of training data set entries, and the section in which each model is used are given.}
\centering
\label{tab:ArificialNeuralNetworks2}
\begin{tabular}{lclrc}
\hline
Network name & Hidden neurons $n$ & Description & Training points & Used in Section \\
\hline
ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ & 6 & Single salt & 998 & \ref{sec:results_optimal_solution_strategy}, \ref{sec:results_single_salt} \\
ANN$_{Na_{2}SO_{4},~k=1000,~2~\text{Units}}$ & 6 & Single salt & 1,198 & \ref{sec:results_single_salt} \\
ANN$_{NaCl,~Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ & 14 & Salt mixture & 1,000 & \ref{sec:results_salt_mixture} \\
\hline
\end{tabular}
\end{table}
\subsection{Hybrid mechanistic / data-driven process superstructure model}\label{sec:membrane_process_design}
The individual surrogate models are integrated with a mechanistic process model into a hybrid mechanistic / data-driven process model. The hybrid process superstructure model comprises a mechanistic process model to describe the process plant and surrogate models to describe the ionic transport based on membrane- and process-specific variables. The mechanistic process model consists of component mass balances, a pump model, and cost correlations. The superstructure of the process plant is shown in Figure~\ref{fig:Superstructure}. The model is adapted from our previous work~\cite{rall2020simultaneous}.
Either a single-stage nanofiltration unit or a two-stage connection of nanofiltration units is chosen for the process plant. A pump is installed in the inlet stream of each nanofiltration unit to provide the main driving force. Each nanofiltration unit consists of multiple membrane modules, $N_{module}$ in parallel. Each nanofiltration unit~(unit $i$) has a feed stream $F_{feed,i}$. A concentrated retentate stream $F_{retentate,i}$ and a diluted permeate stream $F_{permeate,i}$ each leave the nanofiltration unit. The permeate stream $F_{permeate,i}$ is calculated based on the membrane's ionic retention performance, permeability, and pressure difference $\Delta p$ applied:
\begin{align}
F_{permeate,i} = Q \cdot A_{unit,i} \cdot \Delta p
\label{eqn:cross}
\end{align}
where $Q$ is the permeability of the membrane and $A_{unit,i} = N_{module,i} \cdot A_{module}$ is the total membrane area of unit $i$. The size of a single membrane module in a nanofiltration unit is set to $A_{module}$ = \SI{60}{\square\meter} in accordance with multiple membrane modules available on the market~\cite{pentair, dupontUF}. The permeability of the membrane is estimated based on a linear correlation with the membrane thickness $\Delta x$ in the range $Q \in [5; 35] \subset \mathbb{R}$, in accordance with laboratory results obtained in our previous work~\cite{menne2016precise}:
\begin{align}
Q = 0.4 \cdot \Delta x - 25
\label{eqn:Q=f(delta)}
\end{align}
Additionally, the layer charge $X$ is assumed to increase linearly with the membrane thickness $\Delta x$. A mass balance coupled with the ionic retention $R_{i,j}$ of ion $j$ closes the governing equations:
\begin{align}
F_{feed,i}\hspace{0.1cm}c_{feed,i,j} = F_{retentate,i}\hspace{0.1cm}c_{retentate,i,j} + F_{permeate,i}\hspace{0.1cm}c_{permeate,i,j}
\label{eqn:balance}
\end{align}
\begin{align}
R_{i,j} = 1 - \frac{c_{permeate,i,j}}{c_{feed,i,j}}
\label{eqn:ret}
\end{align}
The ionic retention $R_{i,j}$ of ion $j$ in unit $i$ is computed by the ANNs as a function of membrane-specific and process-specific variables, as described in Section~\ref{sec:ANNsMethods} and Section~\ref{sec:ANNsTwoSurrogate}.
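Equations~(\ref{eqn:cross})--(\ref{eqn:ret}) translate directly into code. The sketch below evaluates one nanofiltration unit; \texttt{retention} again stands in for the ANN surrogate, unit handling is simplified, and the permeability correlation of Eq.~(\ref{eqn:Q=f(delta)}) is used as stated.
\begin{verbatim}
def nf_unit(F_feed, c_feed, dp, dx, N_module, retention,
            A_module=60.0):
    Q = 0.4 * dx - 25.0           # permeability correlation
    A_unit = N_module * A_module  # total membrane area of the unit
    F_perm = Q * A_unit * dp      # permeate flow, Eq. (cross)
    R = retention(c_feed)         # ANN-based ionic retention
    c_perm = (1.0 - R) * c_feed   # retention definition, Eq. (ret)
    F_ret = F_feed - F_perm       # overall flow balance
    c_ret = (F_feed * c_feed - F_perm * c_perm) / F_ret  # mass balance
    return F_perm, c_perm, F_ret, c_ret
\end{verbatim}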
\begin{figure}[H]
\centering
\includegraphics[width=0.75\linewidth]{Figures/Superstructure.png}
\caption{Layouts of the membrane process plant consisting of either a single nanofiltration unit or a two-stage connection of nanofiltration units. Permeate flow recirculation is allowed but does not have to be used. Each of the membrane units is equipped with a feed-sided pump. The global feed, retentate, and permeate streams are connected by mixing units and represent the process input and outputs.}
\label{fig:Superstructure}
\end{figure}
The cost model for this study is adapted from our previous work~\cite{rall2020simultaneous}. Here, the Verberne~cost~model~\cite{verberne1993membraanfiltratie, sethi2000cost, ang2017effect} is used. The total investment cost $C_{investment}$ is calculated based on the sum of the civil $C_{civil}$, mechanical $C_{mechanical}$, electromechanical $C_{electro}$, and membrane $C_{membrane}$ investment cost
\begin{align}
C_{investment} = C_{civil} + C_{mechanical} + C_{electro} + C_{membrane}.
\end{align}
The annual operation cost is calculated based on the sum of the depreciation cost $C_{depreciation}$, the energy cost $C_{energy}$, the maintenance cost $C_{maintenance}$, and specific cost $C_{specific}$.
\begin{align}
C_{operation} = C_{depreciation} + C_{energy} + C_{maintenance} + C_{specific}.
\end{align}
Appropriate cost parameters and depreciation periods are accounted for in the individual investment costs~\cite{rall2020simultaneous}.
\vspace{0.5cm}
This study does not consider the effects of osmotic pressure differences on the reduction of the driving force (here, the transmembrane pressure). This means that the presented results underestimate the required pumping power because of the high salt concentrations used in this study. Considering the previously described cost model, it might be more economically viable to install more membrane modules per membrane unit than to use a higher pressurized feed stream. There always exists a trade-off between the operating costs of increased pumping power and the capital costs of additional membrane area. Thus, the integration of the osmotic pressure into the proposed framework would be relevant and important for future research. In particular, future work could consider osmotic pressure differences in the ion transport models by accounting for the osmotic pressure in the two-dimensional model used for data generation (cf. Section~\ref{sec:ANNsMethodsadaption}-\ref{sec:ANNsTwoSurrogate}). However, this is beyond the scope of the current study. Considering the effects of osmotic pressure differences on the reduction of the driving force demands a more detailed investigation as well as a validation of the model against existing methods.
\subsection{Numerical optimization approach}\label{sec:global_deterministic_optimization}
Finally, the hybrid model is optimized using a reduced-space formulation and our open-source deterministic global solver MAiNGO~\cite{MAiNGO}. The optimization problems in this work are formulated as a multi-objective mixed-integer nonlinear program (MINLP):
\begin{align}\label{eqn:MINLP}
\begin{split}
\min_{{\bf x},{\bf y}} \quad & \begin{pmatrix}f_1(\textbf{x},\textbf{y}) \\ f_2(\textbf{x},\textbf{y})\end{pmatrix} \\
s.t. \quad &g_{i}(\textbf{x},\textbf{y}) = 0, \quad i=1,...,I \\
&h_{j}(\textbf{x},\textbf{y}) \leq 0, \quad j=1,...,J \\
&\textbf{x} \in X \subset \mathbb{R}^{n}, \quad
\textbf{y} \in Y \subset \mathbb{Z}^{m} \\
\end{split}
\end{align}
where $g_{i}(\textbf{x},\textbf{y})$ are equality constraints, $h_{j}(\textbf{x},\textbf{y})$ are inequality constraints,
$f_1(\textbf{x},\textbf{y}),f_2(\textbf{x},\textbf{y})$ are the objective functions, $\textbf{x}$ are the continuous optimization variables, and $\textbf{y}$ are the integer optimization variables (i.e., the number of membrane modules $N_{module,i}$).
All model equations for the mechanistic process model are described in Subsection~\ref{sec:membrane_process_design}. The ionic separation performance is mapped by ANNs as described in Subsection~\ref{sec:datageneration}, which are formulated in a reduced-space formulation~\cite{schweidtmann2019deterministic}. The objective functions are (i) minimal annual operation costs ($f_1(\textbf{x},\textbf{y}) = C_{operation}$) and (ii) minimal permeate concentration ($f_2(\textbf{x},\textbf{y})=c_{permeate}$).
The MINLP is solved for the two objectives by the $\epsilon$-constraint method. Here, the multi-objective problem is reformulated into multiple single-objective problems. Thereby, one objective is minimized, and the other objective is enforced to be less than or equal to a parameter $\epsilon$. This procedure is repeated for different $\epsilon$, yielding Pareto-optimal points. All optimization problems in this work are implemented and solved by our open-source deterministic global solver MAiNGO~\cite{MAiNGO}.
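Schematically, the $\epsilon$-constraint scan reduces to a loop over single-objective solves. In the Python sketch below, \texttt{solve\_min\_cost} is a hypothetical wrapper around the MINLP (e.g., as set up in MAiNGO) that minimizes the annual operation cost subject to $c_{permeate} \leq \epsilon$ and returns the optimal objective values.
\begin{verbatim}
import numpy as np

def pareto_front(solve_min_cost, eps_grid):
    points = []
    for eps in eps_grid:
        cost, c_perm = solve_min_cost(c_permeate_max=eps)
        points.append((c_perm, cost))
    return points

# Example: scan 100 epsilon values between purity bounds of interest,
# e.g., front = pareto_front(solve_min_cost, np.linspace(0.5, 5.0, 100))
\end{verbatim}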
\newpage
\section{Case study \& results}
In this section, the results of the process optimizations, including the ANN-based surrogate models, are discussed for a specific case scenario (cf. Section~\ref{sec:results_case}).
In Section~\ref{sec:results_1plantopt} the methodology to integrate multi-scale membrane plant optimization using LbL nanofiltration membranes is presented.
Then, the performance of the overall process is evaluated when including the more accurate two-dimensional surrogate model through an optimal two-dimensional distribution of the membrane in Section~\ref{sec:results_optimal_solution_strategy}. Here, the single filtration unit is considered (A) using a one-dimensional distribution, (B)~(1) using a three-element two-dimensional distribution directly implemented in the optimization framework, and (B)~(2) using a new surrogate model that includes these two-dimensional distributions over $k=1000$ elements as a black box, creating a computationally resource-efficient two-dimensional model of the nanofiltration module.
In Section~\ref{sec:results_single_salt}, the filtration performance of the process is evaluated when including a second stage.
Finally, in Section~\ref{sec:results_salt_mixture}, salt mixtures are considered and the process optimized.
\subsection{Case scenario for membrane plant optimization} \label{sec:results_case}
The case scenario and cost correlations are adapted from our previous work~\cite{rall2020simultaneous}. The membrane plant is optimized for a drinking water purification task meeting given quality specifications for a small town with $10,000$ inhabitants~\cite{baur2014mutschmann}. In this study, water softening is considered by retaining \ce{Na2SO4} salt in the membrane plant. The drinking water demand amounts to a peak demand of approximately \SI{224}{\cubic\meter\per\hour}. All process dimensions are set to meet the permeate volume flow of this peak demand. Demand-side management is not considered within the scope of this work.
\subsection{Bridging the gap of multiple scales by surrogate model-based membrane plant optimization}\label{sec:results_1plantopt}
In this section, the results of the surrogate model-based membrane plant optimization are presented.
The optimization is performed for a single-stage nanofiltration unit (cf., Figure~\ref{fig:Superstructure}) using the ANN$_{Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ surrogate model (cf., Table~\ref{tab:ArificialNeuralNetworks}) to describe the ionic retention of the membrane unit. The whole process is optimized for ranging feed concentrations of \ce{Na2SO4} and solved for the two objectives -- (i) minimal annual operation costs and (ii) minimal \ce{Na2SO4} permeate concentration. The \ce{Na2SO4} feed concentrations range from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter}. The results of this optimization are shown in Figure~\ref{fig:results_1plantopt}, depicting a smooth, convex Pareto front.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{Figures/Results_ParetoSurface.png}
\caption{Surrogate model-based membrane plant optimization for a single-stage nanofiltration unit using the ANN$_{Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ surrogate model to describe the ionic retention of the membrane unit for \ce{Na2SO4} feed concentration ranging from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter}. The optimization results in a smooth, convex Pareto front. The annual operating costs increase along with higher purity and with the more difficult filtration task.}
\label{fig:results_1plantopt}
\end{figure}
Ideally, an optimal membrane plant would operate at zero costs and zero permeate concentrations for all feed concentrations. This is called the utopia point. However, in reality, higher feed concentrations (i.e., more difficult filtration) and lower permeate concentrations (i.e., higher purities) always lead to higher operational costs. Hence, we face an inherent trade-off between conflicting objectives.
This trade-off can be addressed by multi-objective optimization. The solution of a multi-objective optimization problem is a Pareto front between the objective functions. The points on the Pareto front correspond to processes where none of the objectives can be improved any further without worsening another objective. Thus, for each of the Pareto-efficient process points, one cannot achieve higher purity at a given feed concentration without spending additional operational cost.
Considering the calculated Pareto front in Figure~\ref{fig:results_1plantopt}, the annual operating costs indeed increase along with higher purity (i.e., lower \ce{Na2SO4} permeate concentration) and a more difficult filtration task (i.e., higher \ce{Na2SO4} feed concentration). The achievable permeate purity strongly depends on the process' salt feed concentration. Here, the maximum process purity corresponds to permeate concentrations ranging from 0.63~\SI{}{\mol\per\cubic\meter} for a feed concentration of \SI{5}{\mol\per\cubic\meter} to 3.6~\SI{}{\mol\per\cubic\meter} for a feed concentration of \SI{20}{\mol\per\cubic\meter}. The latter is the most difficult separation task of this optimization, resulting in an annual operation cost of approximately 700~thousand~\EUR{}~year$^{-1}$. A table of the optimal solution points can be found in the supplementary data to this publication.
In the presented work, we demonstrate that nano-scale ion transport models can be used in process optimization. In addition, the simulated data reveal some essential effects for the design of membrane processes. Therefore, the qualitative differences between the process configurations are assessed in the following. The tables supporting this discussion can be found in the supplementary data to this publication.
The optimal process configuration towards higher purity (and higher annual operation cost) follows a stringent pattern regarding the involved process parameters. The process recirculation (cf., Figure~\ref{fig:Superstructure}~single~stage) is fully exploited to minimize the filtration unit's feed concentration. Permeate flow recirculation is used to keep the feed concentration low in order to maintain the membrane's retention high. This configuration indeed seems non-intuitive because the purified permeate stream is recirculated back to the feed stream. However, the feed concentration strongly influences the retention of the nanofiltration membrane and must, therefore, remain low to sustain high ionic retention of the membrane. It becomes apparent that it is economically more viable to recirculate the permeate stream than to invest in more membrane area or higher membrane charges associated with low permeability. Moreover, the membrane thickness and surface charge are also increased throughout the optimization to yield a similar, but quantitatively less significant, effect. The increase of the membrane thickness leads to a decreased permeability, which has to be compensated by a larger membrane surface or a higher pressure gradient. This can be observed from the large number of membrane modules utilized for high-purity process configurations. Additionally, a large membrane surface facilitates suitable mass transfer along with a medium transmembrane velocity, which also has a significant impact on retention.
For processes with lower cost and purity, the membrane surface is drastically reduced to eliminate this cost factor. In particular, no recirculation is performed, to minimize the operation costs of the involved feed-sided pressure pump. The preceding observations apply to all process configurations discussed in the following. For the filtration of salt mixtures, minor restrictions apply, which are considered later.
Furthermore, the shape and position of the calculated Pareto front can be used to assess the performance of the membrane system. In Figure~\ref{fig:results_1plantopt}, a Pareto front curved towards the utopia point indicates an overall better performance, whereas a Pareto front curved away from the utopia point indicates a worse process performance, up to the corner of the feed- and permeate-concentration plane where no filtration takes place at all. This visual effect may be used when comparing different membrane systems embedded in the same plant optimization. Looking ahead to Section~\ref{sec:results_optimal_solution_strategy}, the shape and position of the calculated Pareto front indicate the severe underestimation of the costs when only a one-dimensional distribution of the membrane is included.
The optimization problem includes 6 optimization variables, 1 equality constraint, and 9 inequality constraints. The optimization was executed on 4 cores in parallel on a high-performance computing cluster. The average CPU time summed over all cores for the optimization of a Pareto point was 21 CPU seconds (see Table~\ref{tab:CPUTimeComparison}). Thus, the computation of the Pareto front, which is approximated by 100 points, took roughly 35 CPU minutes in total.
\subsection{Optimization including the two-dimensional distribution of the membrane}\label{sec:results_optimal_solution_strategy}
In the next step, the performance of the overall process is evaluated when including the more accurate two-dimensional surrogate model through an optimal two-dimensional distribution of the membrane. A membrane module is considered by (A) using a one-dimensional distribution as presented in the previous Section~\ref{sec:results_1plantopt}, and compared to (B)~(1) using a three-element two-dimensional distribution realized by three ANN$_{Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ surrogate models (cf., Table~\ref{tab:ArificialNeuralNetworks}), which are directly implemented in the optimization framework, and (B)~(2) using the new surrogate model ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ (cf., Table~\ref{tab:ArificialNeuralNetworks2}) that includes these two-dimensional distributions as a black box, creating a computationally resource-efficient two-dimensional model of the nanofiltration module.
Again, one single-stage membrane unit (cf., Figure~\ref{fig:Superstructure}) is considered for the optimization of a process plant for ranging feed concentrations of \ce{Na2SO4} and solved for the two objectives -- (i) minimal annual operation costs and (ii) minimal \ce{Na2SO4} permeate concentration. The resulting Pareto fronts of the optimizations are shown in Figure~\ref{fig:results_ElementsStudy}. Here, for comparability, the results of Figure~\ref{fig:results_1plantopt} are displayed in Figure~\ref{fig:results_ElementsStudy} as well (membrane not distributed, i.e., $k=1$). A table of the optimal solution points can be found in the supplementary data to this publication.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Figures/Results_ElementsStudy.png}
\vspace{-1cm}
\caption{Surrogate model-based membrane plant optimization for a single-stage nanofiltration for \ce{Na2SO4} filtration. Two implementations of the two-dimensional distribution are presented: (B)~(1) by direct implementation of the two-dimensional distribution by three ANN$_{Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ surrogate models, and (B)~(2) by the recursive black-box surrogate model approach using ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$. For comparability, the results of Figure~\ref{fig:results_1plantopt} are displayed using one ANN$_{Na_{2}SO_{4},~k=1,~1~\text{Unit}}$ surrogate model with one-dimensional distribution. Showing A) the results for a feed concentration of \SI{10}{\mol\per\cubic\meter} and B) for ranging feed concentrations between \SI{5}{\mol\per\cubic\meter} and \SI{20}{\mol\per\cubic\meter}.}
\label{fig:results_ElementsStudy}
\end{figure}
(B)~(1) Directly embedding the two-dimensional distribution in the optimization framework results in a dramatic increase in run time for only a few two-dimensional distribution elements.
As shown in Table~\ref{tab:CPUTimeComparison}, the average CPU time to solve one Pareto point using $k=3$ elements is $9.1 \cdot 10^{4}$ CPU seconds corresponding to over 25 CPU hours. To reduce the wall-clock time, we solved this optimization parallel on 48 cores. (Note that the CPU times are summed over all cores.)
Thus, the maximum number of elements for directly embedding the two-dimensional distribution in the optimization framework is $k=3$ elements using a single nanofiltration unit. This low number of elements is not sufficient to describe the changes along the membrane length accurately, as presented in Section~\ref{sec:ANNsMethodsadaption}. The results clearly suggest that solving with a larger number of elements, i.e., $k > 3$, would not be feasible using this method.
(B)~(2) In contrast, our proposed recursive black-box approach using the ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ surrogate model requires on average $2.4 \cdot 10^{2}$ CPU seconds to solve a Pareto point (summed over all cores). Thus, the complete Pareto front, including 100 points, is solved within 7 CPU hours. As we use 48 cores in parallel, this corresponds to a wall-clock time of under 10 minutes.
Thus, the method significantly reduces the computational effort compared to (B)~(1) and makes the integration of $k=1000$ elements feasible. Compared to the case $k=1$, we observe an acceptable increase in CPU time, which is mainly due to the additional ANN input describing the flow ratio $\gamma$.
\begin{table}[h]
\caption{Computational performance comparison of the proposed optimization methods. For each method, 100 optimization problems are solved, i.e., one for each Pareto point. Not all solved points are displayed in the figure. We provide the average, variance, minimal, and maximal CPU times over all optimization problems. Note that CPU times for optimization include preprocessing and branch-and-bound time summed over all parallel cores in seconds.}
\centering
\label{tab:CPUTimeComparison}
\begin{tabular}{lccrr|llll}
\hline
& & & & & \multicolumn{4}{c}{CPU time summed over all cores [s]} \\
\# & Name & Section & k [-] & CPU Cores & Average & Variance & min & max \\
\hline
A & ${Na_{2}SO_{4},~k=1,~ 1~\text{Unit}}$ & \ref{sec:results_1plantopt} & 1 & 4 & $2.1 \cdot 10^{1}$ & $2.4 \cdot 10^{2}$ & $7.7 \cdot 10^{-1}$ & $5.6 \cdot 10^{1}$ \\
B~(1)& ${Na_{2}SO_{4},~k=3,~ 1~\text{Unit}}$ & \ref{sec:results_optimal_solution_strategy} & 3 & 48 & $9.1 \cdot 10^{4}$ & $2.5 \cdot 10^{10}$ & $2.3 \cdot 10^{0}$ & $7.7 \cdot 10^{5}$ \\
B~(2)& ${Na_{2}SO_{4},~k=1000,~ 1~\text{Unit}}$ & \ref{sec:results_optimal_solution_strategy} & 1000 & 48 & $2.4 \cdot 10^{2}$ & $1.1 \cdot 10^{5}$ & $1.7 \cdot 10^{0}$ & $1.5 \cdot 10^{3}$ \\
\hline
\end{tabular}
\end{table}
A severe underestimation of the costs is observed in Figure~\ref{fig:results_ElementsStudy}~A when a two-dimensional distribution of the membrane is not included. This severe underestimation of the costs prevails for all feed concentrations of \ce{Na2SO4} from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter}, as indicated by the location of the three Pareto surfaces shown in Figure~\ref{fig:results_ElementsStudy}~B. The use of the more precise two-dimensional distribution of the membrane reveals a worse performance than the idealized process with only a one-dimensional distribution in Figure~\ref{fig:results_ElementsStudy}. The worsening in process performance can be explained by the filtration-related decreasing retention of salt at the retentate side. For positive retention (which is always the case for the single salt filtration examined here), the concentration on the retentate side of the filtration unit steadily increases along the membrane surface. This leads to decreasing retention and, therefore, a worse filtration performance. Additionally, a back-coupling through recirculation of the permeate stream to the membrane unit's feed stream is observed. For the two-dimensional distribution of the membrane with $k=1000$ elements, for instance, the best achievable permeate concentration for a feed concentration of \SI{5}{\mol\per\cubic\meter} is 1.19~\SI{}{\mol\per\cubic\meter}, which represents an increase in impurity compared to the idealized case using the one-dimensional distribution. This effect is even more pronounced for higher feed concentrations.
\subsection{Extension of the optimization process by a second filtration unit}\label{sec:results_single_salt}
Next, the two-dimensional surrogate model approach is utilized to extend the optimization process by a second filtration unit (cf., Figure~\ref{fig:Superstructure}~two~stage). The ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ surrogate model is taken from the previous Section~\ref{sec:results_optimal_solution_strategy}. For the two-stage process, the ANN$_{Na_{2}SO_{4},~k=1000,~2~\text{Units}}$ surrogate model is chosen (cf., Table~\ref{tab:ArificialNeuralNetworks2}). The whole process is optimized for ranging feed concentrations of \ce{Na2SO4} from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter} and solved again for the two objectives -- (i) minimal annual operation costs and (ii) minimal \ce{Na2SO4} permeate concentration. The results of the optimization are shown in Figure~\ref{fig:results_MultibleUnits}. A table of the optimal solution points can be found in the supplementary data to this publication.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Figures/Results_MultibleUnits.png}
\vspace{-1cm}
\caption{Surrogate model-based membrane plant optimization for a single-stage and two-stage serial connection of nanofiltration units for \ce{Na2SO4} filtration. The two-dimensional surrogate model approach is chosen using ANN$_{Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ for one nanofiltration unit, and the ANN$_{Na_{2}SO_{4},~k=1000,~2~\text{Units}}$ surrogate model for the two-stage serial connection. Showing A) the results for a feed concentration of \SI{10}{\mol\per\cubic\meter} and B) for ranging feed concentrations between \SI{5}{\mol\per\cubic\meter} and \SI{20}{\mol\per\cubic\meter}.}
\label{fig:results_MultibleUnits}
\end{figure}
This combination of a two-stage series of filtration units results in a strongly enhanced filtration performance. The effect is particularly intensified because the feed concentration of the second unit is already the purified permeate concentration of the first unit. Due to the strong correlation between feed concentration and retention, the second filtration unit is particularly useful. Notably, the improvement is much more significant for lower feed concentrations. For higher initial feed concentrations, however, the effect of utilizing two filtration units is not as pronounced.
There exists a distinct switching point, as highlighted in Figure~\ref{fig:results_MultibleUnits}~A, where the economic viability of either one or two nanofiltration units changes. In the range of less difficult filtration operations, a single-stage option is preferable. More difficult filtration operations with higher purity, however, are cheaper when a second unit is included (or even impossible for a single unit), as the additional costs of the second filtration unit are outweighed in the overall economic estimation. This distinct switching point is case-specific and depends on the feed concentration of \ce{Na2SO4}. As shown in Figure~\ref{fig:results_MultibleUnits}~B, a distinct switching point exists for all feed concentrations of \ce{Na2SO4} from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter}, highlighted by the change of color (here from black to purple).
The superstructure optimization includes 12 continuous and 2 integer optimization variables, 6 equality constraints, and 14 inequality constraints. We run the optimization for the two-unit superstructure with $k=1$ elements per unit on 48 cores (result not displayed in Figure~\ref{fig:results_MultibleUnits}). Here, the average CPU time summed over all cores for one Pareto point was $7.7 \cdot 10^{3}$ seconds. This is an increase by a factor of about 360 compared to the one-unit process with $k=1$.
Due to the increased computational effort, the results with two units and $k=1000$ have been solved using a local optimization approach with 10 multistarts. This method cannot guarantee global optimality. Therefore, only the results with two units and $k=1$ can guarantee global optimality, but are not meaningful as they severely underestimate the overall process costs. The local optimization took on average 25 CPU seconds for a Pareto point.
\subsection{Extension of the optimization process by salt mixtures}\label{sec:results_salt_mixture}
Next, the two-dimensional surrogate model approach is extended to salt mixtures. Due to the computational effort, only a single-stage process is considered in the optimization (cf., Figure~\ref{fig:Superstructure}). The ANN$_{NaCl,~Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ surrogate model is utilized in the process optimization framework. The whole process is optimized for ranging feed concentrations of \ce{Na2SO4} from \SI{5}{\mol\per\cubic\meter} to \SI{20}{\mol\per\cubic\meter}. The feed concentration is extended by the addition of \ce{NaCl} at fixed composition ratios. More precisely, the ratios for \ce{NaCl}:\ce{Na2SO4} are 1:2, 1:1, and 2:1. For example, in Figure~\ref{fig:results_salt_mixture}~A, for the ratio \ce{NaCl}:\ce{Na2SO4} = 1:2, \SI{5}{\mol\per\cubic\meter} of \ce{NaCl} is added to the feed stream when considering a feed of \SI{10}{\mol\per\cubic\meter} \ce{Na2SO4}.
The optimization is again solved for the two objectives -- (i) minimal annual operation costs and (ii) minimal \ce{Na2SO4} permeate concentration. The results of the optimization for the given ratios of \ce{NaCl}:\ce{Na2SO4} are shown in Figure~\ref{fig:results_salt_mixture}. A table of the optimal solution points can be found in the supplementary data to this publication.
To reduce computational effort, the optimization problem is solved for fewer operating conditions (Pareto points), leading to a coarser approximation of the Pareto front. The shape and the different process configurations show qualitatively similar behavior compared to the optimization of the single salt processes in Figure~\ref{fig:results_ElementsStudy}. In particular, the operational costs are of the same order of magnitude in both cases.
Taking a closer look at the different Pareto fronts for the individual salt mixtures, they reveal differences in filtration operation. On the one hand, the Pareto fronts for ratios 1:1 and 2:1 (\ce{NaCl}:\ce{Na2SO4}) in Figure~\ref{fig:results_salt_mixture}~C-D present similar behavior to the previous results. On the other hand, the optimization points for a ratio of 1:2 in Figure~\ref{fig:results_salt_mixture}~B are clustered in a much narrower region of feasible points on the Pareto front. This process infeasibility is mainly observed for higher feed concentrations. This behavior can be ascribed to the more complex relations between the membrane retention and its influencing process-specific parameters. These relations are neither monotone (as for the single salt case), nor is the retention itself limited to positive values (the Donnan effect induces negative retention). Most notably, filtration operation with high retention is still possible in Figure~\ref{fig:results_salt_mixture}~D even at very high salt concentrations (i.e., \SI{40}{\mol\per\cubic\meter} of \ce{NaCl} and \SI{20}{\mol\per\cubic\meter} of \ce{Na2SO4}) when the Donnan effect is considered. Hence, an accurate description of the underlying physical processes becomes quite cumbersome, making the surrogate-based model for the determination of optimal operation points even more valuable. The increased complexity of the process for salt mixtures also manifests in clustered points on the Pareto front. Such points are Pareto-dominated by preceding configurations, which means a process in that particular setting gets more expensive under looser purity constraints.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{Figures/Results_Salt_Mixture.png}
\caption{Surrogate model-based membrane plant optimization for a single-stage nanofiltration unit for filtration of \ce{NaCl} and \ce{Na2SO4} salt mixture. The two-dimensional surrogate model approach is chosen using the ANN$_{NaCl,~Na_{2}SO_{4},~k=1000,~1~\text{Unit}}$ surrogate model. Showing A) the results for a \ce{Na2SO4} feed concentration of \SI{10}{\mol\per\cubic\meter} and for ranging feed concentrations between \SI{5}{\mol\per\cubic\meter} and \SI{20}{\mol\per\cubic\meter} for salt mixture ratios (\ce{NaCl}:\ce{Na2SO4}) of B) 1:2, C) 1:1, and D) 2:1.}
\label{fig:results_salt_mixture}
\end{figure}
The optimization for salt mixtures includes 6 continuous and 1 integer optimization variable, 2 equality constraints, and 13 inequality constraints. We run the optimization for one single-stage unit and $k=1000$ elements on 48 cores. The average CPU time summed over all cores for one Pareto point was $3.3 \cdot 10^{3}$, $1.1 \cdot 10^{5}$, and $1.0 \cdot 10^{4}$ seconds for the 1:1, 1:2, and 2:1 salt mixtures (\ce{NaCl}:\ce{Na2SO4}), respectively.
These results demonstrate that the CPU time for optimization depends not only on the structure of the optimization problem but also on the case study.
\newpage
\section{Conclusion}
We present a computational resource-efficient methodology to integrate multi-scale modeling in membrane science. We propose a machine learning approach to integrate accurate physical models of ion transport, valid on the nano-scale, into large-scale superstructure optimization of membrane processes. Thereby, the multi-scale decision-making process allows us to simultaneously design the membrane synthesis properties along with the process design and operation. All models and the solver itself are open-source and freely available, enabling multi-scale modeling in membrane science.
In particular, we train ANN surrogate models on simulated data from a differential-algebraic one-dimensional extended Nernst-Planck ion transport model, called pEnPEn. These surrogate models are then exploited to extend the framework towards a more accurate two-dimensional distribution of the membrane module to capture the filtration-related decreasing retention of salt. Here, individual ANNs are arranged in series to resolve the membrane in the direction of flow. Next, the two-dimensional surrogate models are embedded in a mechanistic membrane process model leading to a hybrid mechanistic / data-driven model. The results demonstrate that:
\begin{itemize}
\item A consideration of a one-dimensional distribution of the membrane module leads to a severe underestimation of process costs. Only the extension towards a two-dimensional distribution captures the adverse effects of filtration-related decreasing retention of salt.
\item Non-intuitive interconnections of the membrane plant layout are optimal for high permeate purities, in which the permeate flow recirculates back to keep the feed concentration low.
\item A distinct point between a single-stage and a two-stage process exists, where the economic viability of either one or two nanofiltration units changes. For more difficult filtration operations, the additional costs of a second filtration unit are outweighed in the overall economic estimation.
\item For salt mixtures, it is much more complex to find optimal operating points, as further effects (i.e., Donnan exclusion) have an additional impact on the separation process. Therefore, it is all the more important that surrogate-based modeling bridges the gap between the scales for the application of high-fidelity ion transport models in superstructure optimization of membrane processes.
\end{itemize}
This work encourages further research and industrial applications, such as drinking water treatment and water disposal, to exploit this open-source optimization framework. Additional surrogate models may cover other membrane-based platforms. Furthermore, the concept of surrogate-based modeling should be exploited to integrate driving-force-diminishing effects acting on the nano-scale, such as the osmotic pressure difference. Future work in deterministic global optimization and model reduction is desirable to push the computational limits of this approach further.
\vspace{0.5cm}
\section*{Acknowledgement}
D.R., E.E., and M.W. acknowledge the support through the German Federal Ministry of Education and Research (BMBF) under the project UO-Rohrfabrikation (03XP0100E) and the project EfflueNF (02WIL1486) and the European Union's Horizon~2020 research and innovation program (grant agreement no. 694946).
A.M.S. and A.M. gratefully acknowledge funding by the excellence initiative of the German federal and state governments and they acknowledge the financial support of the Kopernikus project SynErgie by the Federal Ministry of Education and Research (BMBF) and the project supervision by the project management organization Projekttr\"ager J\"ulich (PtJ).
Funded by the Excellence Initiative of the German federal and state governments. Simulations were performed with computing resources granted by RWTH Aachen University under project rwth0404.
\vspace{0.5 cm}
\section*{References}
\bibliographystyle{elsarticle-num}
\biboptions{sort&compress}
\section{Introduction} \label{section:introduction}
Salient object detection from videos plays an important role as a pre-processing step in many computer vision applications such as video re-targeting~\cite{Taoran-ICIP2010}, object detection~\cite{Guo-Neurocomputing2014}, person re-identification~\cite{Zhao-CVPR2013}, and visual tracking~\cite{Stalder-ACCV2012}. Conventional methods for salient object detection often segment each frame into regions and artificially combine low-level (bottom-up) features (e.g., intensity~\cite{Zhu-CVPR2014}, color~\cite{Zhu-CVPR2014}, edge orientation~\cite{Nghia-PSIVT2015}) with heuristic (top-down) priors (e.g., center prior~\cite{Zhou-CVPR2014}, boundary prior~\cite{Zhu-CVPR2014}, objectness~\cite{Nghia-PSIVT2015}) detected from the regions. Low-level features and priors used in the conventional methods are hand-crafted and are not sufficiently robust for challenging cases, especially when the salient object is presented in a low-contrast and cluttered background. Although machine learning based methods have been recently developed~\cite{Long-CVPR2013}\cite{Jiang-CVPR2013}\cite{Liu-PAMI2011}, they are primarily used for integrating different hand-crafted features~\cite{Jiang-CVPR2013}\cite{PJiang-ICCV2013} or for fusing multiple saliency maps generated from various methods~\cite{Long-CVPR2013}. Accordingly, they usually fail to preserve object details when the salient object intersects with the image boundary or has a similar appearance to the background, where hand-crafted features are often unstable.
Recent advances in deep learning using Deep Neural Network (DNN) enable us to extract visual features, called deep features, directly from raw images/videos. They are more powerful for discrimination and, furthermore, more robust than hand-crafted features~\cite{Taylor-ECCV2010}\cite{Girshick-CVPR2014}\cite{Tran-ICCV2015}. Indeed, saliency models for videos using deep features~\cite{Li-CVPR2016}\cite{Liu-CVPR2016}\cite{Lee-CVPR2016} have demonstrated superior results over existing works utilizing only hand-crafted features. However, they extract deep features from each frame independently and employ frame-by-frame processing to compute saliency maps, leading to inaccuracy for dynamically moving objects. This is because temporal information over frames is not taken into account in computing either deep features or saliency maps. Incorporating temporal information in such computations should lead to better performance.
\begin{figure}[t]
\includegraphics[width=1\linewidth]{final_images/img_examples.pdf}
\caption{Examples of results obtained by our proposed method. Top row images are original video frames, followed by the ground truth and corresponding saliency maps obtained using our method.}
\label{img:examples}
\end{figure}
Computed saliency maps do not always accurately reflect the shapes of salient objects in videos. To segment salient objects as accurately as possible while reducing noise, dense Conditional Random Field (CRF)~\cite{Li-CVPR2016}\cite{Hou-CVPR2017}, a powerful graphical model to globally capture the contextual information, has been applied to the computed saliency maps, which results in improvement in spatial coherence and contour localization. However, dense CRF is applied to each frame of a video separately, meaning that only spatial contextual information is considered. Again, temporal information over frames should be taken into account for better performance.
Motivated by the above observation, we propose a novel framework using spatiotemporal information as fully as possible for salient object detection in videos. We introduce a new set of SpatioTemporal Deep (STD) features that utilize both local and global contexts over frames. Our STD features consist of local and global features. The local feature is computed by aggregating over frames deep features, which are extracted from each frame using a region-based Convolutional Neural Network (CNN)~\cite{Girshick-CVPR2014}. The global feature is computed from a temporal-segment of a video using a block-based\footnote{In contrast to the region-based CNN working on spatial segments in each frame, the block-based CNN works on a sequence of frames of a video.} CNN~\cite{Tran-ICCV2015}.
We also introduce the SpatioTemporal CRF (STCRF), in which the spatial relationship between regions in a frame as well as the temporal consistency of regions over frames is formally described using STD features. Our proposed method first segments an input video at multiple scale levels, and then, at each scale level, extracts STD features and computes a saliency map. The method then fuses the saliency maps at different scale levels into the final saliency map.
Extensive experiments on public benchmark datasets for video saliency confirm that our proposed method significantly outperforms the state-of-the-art methods. Examples of saliency maps obtained by our method are shown in Fig.\ref{img:examples}. We also apply our method to video object segmentation and observe that our method outperforms existing methods.
The rest of this paper is organized as follows. We briefly review and analyze related work in Section \ref{section:related_work}. Then, we present in detail our proposed method in Section \ref{section:proposed_method}. Our experiments are discussed in Sections \ref{section:setting} and \ref{section:experiments}. In Section \ref{section:application}, we present an application of our proposed method to video object segmentation. Section \ref{section:conclusion} presents conclusion and future work.
We remark that this paper extends the work reported in~\cite{Nghia-DeLIMMA2017}. Our extensions in this paper are the construction of a new STCRF model utilizing a CNN instead of a Random Forest (Section \ref{section:STCRF_model}), additional experiments (Section \ref{section:experiments}), and an application of our method to video object segmentation (Section \ref{section:application}).
\section{Related Work} \label{section:related_work}
Here we briefly survey features used for salient object detection in videos, and saliency computation methods.
\subsection{Features for Salient Object Detection}
Saliency computation methods for videos using hand-crafted features are mostly developed from traditional saliency models for still images by incorporating motion features to deal with moving objects~\cite{Nghia-PSIVT2015}\cite{Zhou-CVPR2014}\cite{Liu-PAMI2011}\cite{Rahtu-ECCV2010}.
Motion features commonly used include optical flow~\cite{Nghia-PSIVT2015}\cite{Zhou-CVPR2014}\cite{Rahtu-ECCV2010}, trajectories of local features~\cite{Liu-PAMI2011}\cite{Zhai-MM2006}, gradient flow field~\cite{Wang-TIP2015}, and temporal motion boundary~\cite{Wang-CVPR2015}; they are utilized to detect salient objects in videos. Xue et al.~\cite{Xue-ICASSP2012}, on the other hand, sliced a video along $X$--$T$ and $Y$--$T$ planes to separate foreground moving objects from backgrounds. However, hand-crafted features have limitation in capturing the semantic concept of objects. Accordingly, these methods often fail when the salient object crosses the image boundary or has similar appearance with the background.
Several existing methods~\cite{Li-CVPR2016}\cite{Li-CVPR2015} for saliency computation using deep features, on the other hand, utilize superpixel segmentation to extract region-level deep features in different ways (e.g., feeding regions into a CNN individually to compute deep features~\cite{Li-CVPR2015} or pooling a pixel-level feature map into regions to obtain region-level deep features~\cite{Li-CVPR2016}).
To exploit the context of a region at multiple scales, multi-scale deep features of the region are extracted by changing the window size~\cite{Li-CVPR2015}. Li et al.~\cite{Li-CVPR2015} fused multi-scale deep features of a region of interest to compute the saliency score for the region using a two-layer DNN. Lee et al.~\cite{Lee-CVPR2016} integrated
hand-crafted features into deep features to improve accuracy for salient object detection. More precisely, they concatenated an encoded low-level distance map and a high-level feature map from CNN to enrich information included in the extracted feature map. The region-level feature map and the pixel-level feature map are also integrated into the saliency model to enhance accuracy of detected object boundaries~\cite{Li-CVPR2016}. In end-to-end deep saliency models~\cite{Liu-CVPR2016}\cite{Wang-ECCV2016}, pixel-based deep features are enhanced by their context information through recurrent CNNs.
Saliency models using deep features have demonstrated state-of-the-art performance in salient object detection and significantly outperformed existing works utilizing only hand-crafted features. However, in almost all existing saliency models, temporal information over frames is not taken into account in deep features, leading to inaccuracy for dynamically moving objects.
Though Wang et al.~\cite{Wang-TIP2018} very recently proposed a fully convolutional network (FCN) having a pair of frames as its input for video saliency computation, a pair of frames is too short to exploit the temporal domain.
Therefore, effectively mining correlation inherent in the spatial and temporal domains into powerful deep features for saliency computation is still an open problem.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{final_images/img_overview3.pdf}
\end{center}
\centering
\caption{Pipeline of the proposed method (brighter means more salient in the final saliency map).}
\label{fig:overview}
\end{figure*}
\subsection{Saliency Computation Methods}
The salient object detection approach using deep models ~\cite{Li-CVPR2016}\cite{Liu-CVPR2016}\cite{Hou-CVPR2017}\cite{Wang-ECCV2016}\cite{Xi-TIP2015} computes saliency scores directly from FCNs. In these deep models, recurrent layers~\cite{Liu-CVPR2016}\cite{Wang-ECCV2016} and skip connections~\cite{Liu-CVPR2016}\cite{Hou-CVPR2017} are utilized to enhance the contextual information of deep feature maps to improve the accuracy of saliency computation. However, these methods focus on frame-by-frame processing without considering any temporal information in videos.
In addition, they still do not detect boundaries of salient objects accurately. A refinement post-processing step is usually required to improve accuracy of detected object boundaries.
Spatial CRF has the capability to relate local regions in order to capture global context, and has been commonly used for refinement in semantic segmentation~\cite{Shimoda-ECCV2016} and for saliency computation~\cite{Li-CVPR2016}\cite{Hou-CVPR2017}. Dense CRF~\cite{Philipp-NIPS2011} is used as a post-processing step to refine the label map generated from CNN to improve the performance of semantic segmentation~\cite{Shimoda-ECCV2016}. Shimoda et al.~\cite{Shimoda-ECCV2016} developed a weakly supervised semantic segmentation method using a dense CRF to refine results from distinct class saliency maps. The dense CRF is incorporated into the saliency map computed from the CNN to improve spatial coherence and contour localization~\cite{Li-CVPR2016}\cite{Hou-CVPR2017}. Though spatial information is successfully utilized using CRFs in these methods, how to deal with temporal information is left unanswered, which is crucial for videos.
Dynamic CRF (DCRF)~\cite{Yang-CVPR2005} is an extension of the spatial CRF toward the spatiotemporal domain to exploit both spatial and temporal information in videos. DCRF is constructed from consecutive video frames, where each pixel connects to its neighboring pixels in both space (i.e., the same frame) and time (i.e., the next frame and the previous frame). DCRF has been used to enhance both spatial accuracy and temporal coherence for object segmentation~\cite{Yang-CVPR2005}\cite{Yang-PAMI2006}\cite{Yi-CVPR2016} and saliency computation~\cite{Liu-PAMI2011} in videos. Yi et al.~\cite{Yi-CVPR2016} proposed a framework using DCRF to improve fence segmentation in videos. Wang et al.~\cite{Yang-CVPR2005}\cite{Yang-PAMI2006} applied DCRF to object segmentation and moving shadow segmentation in indoor scenes in videos. SIFT flow features were incorporated into DCRF to detect salient objects from videos~\cite{Liu-PAMI2011}. However, DCRF is a pixel-level dense graph; thus, it is usually constructed using only two successive frames due to large memory consumption. In addition, since the energy function of DCRF is defined using a combination of classical hand-crafted features such as color and optical flow, DCRF is not capable of exploiting spatial and temporal information semantically. Our proposed STCRF differs from DCRF in that STCRF is defined over regions using STD features only, so that it is capable of dealing with more successive frames and exploiting spatial and temporal information semantically at less computational cost.
Different from these existing methods, our proposed method utilizes spatiotemporal information as much as possible when both extracting deep features and computing saliency maps. More precisely, our method uses STD features computed from the spatiotemporal domain together with STCRF constructed in the spatiotemporal domain to produce accurate saliency maps. Our method thus accurately detects boundaries of salient objects by removing irrelevant small regions.
\section{Proposed Method} \label{section:proposed_method}
\subsection{Overview}
Our goal is to compute a saliency map to accurately segment salient objects in every frame from an input video while fully utilizing information along the temporal dimension. Figure \ref{fig:overview} illustrates the pipeline of our proposed method.
We segment an input video at multiple scale levels and compute a saliency map at each scale level at each frame, and then aggregate all saliency maps at different scale levels at each frame into a final saliency map. This follows our intuition that objects in a video contain various salient scale patterns and an object at a coarser scale level may be composed of multiple parts at a finer scale level.
In this work, we employ the video segmentation method~\cite{Liu-CVPR2011} at multiple scale levels. We first specify the number of initial superpixels to define a scale level. For each scale level, we then segment each frame into initial superpixels using entropy-rate superpixel segmentation~\cite{Liu-CVPR2011}. Similar superpixels at consecutive frames are then grouped and connected across frames to form temporal segments using parametric graph partitioning~\cite{yu-ICCV2015}.
By specifying different numbers of initial superpixels, we obtain temporal segments at multiple scales (we set four numbers to obtain four scale levels in our experiments, as discussed later). We remark that each scale level has a different number of segments, which are defined as (non-overlapping) regions.
The final saliency map is computed by taking the average value of saliency maps over different scale levels. In the following subsections, we explain how to compute a saliency map at a scale level. We remark that a saliency map in this section indicates the saliency map at a scale level unless explicitly stated with "final."
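For concreteness, this per-frame aggregation reduces to a pixel-wise average over scale levels, as in the following Python sketch (for illustration only; the function name and array layout are arbitrary):
\begin{verbatim}
import numpy as np

def final_saliency_map(maps_per_scale):
    # maps_per_scale: list of 2-D arrays, one saliency map per
    # scale level, all for the same frame at the same resolution.
    # The final map is the pixel-wise average over scale levels.
    return np.mean(np.stack(maps_per_scale, axis=0), axis=0)
\end{verbatim}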
\subsection{Spatiotemporal Deep Feature Extraction}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{final_images/img_feature_extraction5.pdf}
\end{center}
\centering
\caption{Spatiotemporal deep (STD) feature extraction for a region.
A region (yellow) in a frame of a video block is fed to the region-based CNN to have the region-based feature of the (yellow) region in the frame.
Region-based features over the frames of the video block are aggregated to have the local feature of the region. On the other hand, the video block is fed to the block-based CNN to have the global feature.
The local feature of the region and the global feature are concatenated to form the spatiotemporal deep (STD) feature of the region.}
\label{fig:feature_extraction}
\end{figure*}
For each region (segment) at each frame, our proposed STD feature is computed by concatenating a local feature and a global feature. The local feature is extracted using a region-based CNN followed by aggregation over frames, while the global feature is computed using a block-based CNN whose input is a sequence of frames of the video. The STD feature extraction for a region is illustrated in Fig.~\ref{fig:feature_extraction}.
\subsubsection{Local Feature Extraction}
A region at each frame, which is defined from a temporal segment at a frame, is fed into a region-based CNN to extract its region-based feature, whose dimension is 4096. As our region-based CNN, we use the publicly available R-CNN model\footnote{R-CNN runs at the original resolution of its input region, while Fast R-CNN~\cite{Girshick-ICCV2015}, Faster R-CNN~\cite{Ren-NIPS2015}, and Mask R-CNN~\cite{Kaiming-ICCV2017} require reducing the resolution of the region to fit their architectures. This resolution reduction may eliminate small regions. We thus used R-CNN.}~\cite{Girshick-CVPR2014} that was pre-trained on the ImageNet ILSVRC-2013 challenge dataset~\cite{Russakovsky-IJCV2015}.
The region-based feature contains the local context of the region but does not contain temporal information because it is computed frame-by-frame. In order to incorporate temporal information, we aggregate the region-based features of a region over a sequence of frames, resulting in a local feature that is consistent over time for the region. It is important to remark that we use only the neighboring frames in which the region of interest is present. Thus, the number of frames used for this aggregation may change depending on the region.
Simply averaging region-based features uniformly over frames is not appropriate because pixel values vary over time due to lossy compression, degrading the correspondence of regions across frames. This degradation increases with larger temporal distances between frames. We therefore linearly combine region-based features at neighboring frames, similarly to~\cite{Nghia-PSIVT2015}, using weights modeled by a Gaussian distribution centered at the frame from which we compute the local feature. With these weights, region-based features at frames with a large temporal distance to the frame of interest contribute less to the local feature of that frame:
the local feature $ F_{\rm L}(i,t)$ of a region $i$ at frame $t$ is extracted by
\begin{eqnarray}
F_{\rm L}(i,t) & = & \frac{1}{\Psi}\sum\limits_{t' = t - k/2}^{t + k/2} {{\cal G}\left( {t'|t,{\sigma ^2}} \right)f(i,t')},
\end{eqnarray}
where
${\cal G}\left( {t'|t,{\sigma^2}} \right)$ is a Gaussian distribution with mean $t$ and standard deviation $\sigma=2$ expressing distribution of temporal weights,
$f(i,t')$ is the region-based feature of region $i$ at frame $t'$, and
$\Psi=\sum_{t' = t - k/2}^{t + k/2}{{\cal G}\left( {t'|t,{\sigma ^2}} \right)}$ (normalizing factor).
$k+1$ is the number of frames where the region $i$ is always present.
In this work, we set $k=16$ by default, because almost all regions at a frame are present in the next (and previous) 8 successive frames. For a region whose span of continuous presence is shorter than 8 frames in the forward or backward direction, we first identify the maximum number of successive frames in which the region is continuously present in each direction and use twice the smaller of the two numbers as $k$ for the region.
For example, if a region in a frame appears 3 frames before and disappears 2 frames after, then we set $k=4\,(=2 \times 2)$ for this region.
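The following Python sketch illustrates this Gaussian-weighted aggregation (for illustration only; \texttt{f} is a placeholder callable returning the 4096-dimensional region-based feature, and $k$ is assumed to be determined per region as described above; since the constant factor of the Gaussian cancels against $\Psi$, unnormalized exponentials suffice):
\begin{verbatim}
import numpy as np

def local_feature(f, i, t, k=16, sigma=2.0):
    # f(i, t') returns the region-based feature of region i at
    # frame t'; k is the per-region window size defined above.
    offsets = np.arange(-(k // 2), k // 2 + 1)
    w = np.exp(-offsets.astype(float)**2 / (2.0 * sigma**2))
    w /= w.sum()  # division by the normalizing factor Psi
    return sum(wi * f(i, t + o) for wi, o in zip(w, offsets))
\end{verbatim}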
\subsubsection{Global Feature Extraction}
To compute a global feature, we feed a video block (a sequence of frames) of a video into a block-based CNN. The global feature obtained in this way inherently accounts for temporal consistency. As our block-based CNN, we employ the C3D model~\cite{Tran-ICCV2015} pre-trained on the Sports-1M dataset~\cite{Karpathy-CVPR2014}, which is known to be effective for extracting spatiotemporal features for action recognition. As an input video block, frame $t$ is expanded in both directions in the temporal domain to obtain a 16-frame sequence, as suggested by~\cite{Tran-ICCV2015}. For each input block, we feed it into the pre-trained C3D model only once and assign the extracted global feature $F_{\rm G}(t)$, with a dimension of 4096, identically to all the regions in the block. This distributes the global context to each region and, at the same time, reduces the computational cost.
Finally, for a region $i$ of a frame $t$, we concatenate its local and global feature vectors to obtain its STD feature vector $F(i,t)$ whose dimension is $4096 \times 2$: $F(i,t)=F_{\rm L}(i,t) \oplus F_{\rm G}(t)$ (cf. Fig.~\ref{fig:feature_extraction}).
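The assembly of the STD feature is thus a simple concatenation, as sketched below (for illustration only):
\begin{verbatim}
import numpy as np

def std_feature(local_feat, global_feat):
    # local_feat: 4096-D local feature F_L(i, t) of region i.
    # global_feat: 4096-D global feature F_G(t) of the 16-frame
    # block around t, shared by all regions in the block.
    return np.concatenate([local_feat, global_feat])  # 8192-D
\end{verbatim}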
\subsection{Saliency Computation Using SpatioTemporal CRF}
\begin{table}[t]
\centering
\caption{Architecture of our F-DNN.}
\label{tab:FDNN}
\small
\begin{tabular}{clc}
\toprule
\textbf{No} & \textbf{Layer} & \textbf{Output Channel} \\ \midrule
0 & STD Feature Input & 8192 \\ \midrule
1 & Fully Connected & 2048 \\
2 & ReLU & 2048 \\
3 & Dropout & 2048 \\
4 & Fully Connected & 2048 \\
5 & ReLU & 2048 \\
6 & Dropout & 2048 \\
7 & Fully Connected & 2048 \\
8 & ReLU & 2048 \\
9 & Dropout & 2048 \\
10 & Fully Connected & 1024 \\
11 & ReLU & 1024 \\
12 & Dropout & 1024 \\
13 & Fully Connected & 1024 \\
14 & ReLU & 1024 \\
15 & Dropout & 1024 \\
16 & Fully Connected & 1024 \\
17 & ReLU & 1024 \\
18 & Fully Connected & 2 \\
\bottomrule
\end{tabular}
\end{table}
A CRF is used to improve the accuracy of the saliency map (particularly at object boundaries) while reducing noise, because it captures the spatial relationships between regions in a frame. We extend the CRF toward the temporal domain so that it can also capture the temporal consistency of regions over frames.
We call our extended CRF the SpatioTemporal CRF (STCRF for short).
\subsubsection{STCRF Graph Construction} \label{section:STCRF_model}
For the temporal segments of a video block, we construct an STCRF graph. Each vertex of the graph represents a region, which is defined from a temporal segment at a frame, in the block. Each edge of the graph, on the other hand, represents the neighboring relationship between regions in space or in time. Considering all neighboring relationships, however, leads to a dense graph, especially when the video volume is large, and the constructed graph becomes impractical in terms of memory consumption and processing time during inference. We therefore employ edges that represent only adjacency relationships (cf. Fig.\,\ref{fig:graphical_model}). Furthermore, we partition the video into a sequence of consecutive blocks so that inference in each block is performed separately.
In the experiments, an input video is decomposed into overlapping blocks with a fixed size where the overlapping rate is 50\%. We note that each block length is equal to 16 frames (see Section \ref{section:ws}). The saliency score of a region is refined by uniformly averaging saliency scores of the region over all the blocks that contain the region. This reduces processing time while keeping accuracy.
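The block decomposition and the score refinement can be sketched as follows (for illustration only; the data structures are arbitrary):
\begin{verbatim}
import numpy as np

def block_ranges(n_frames, block_len=16, overlap=0.5):
    # Fixed-length blocks with a 50% overlapping rate.
    step = max(1, int(block_len * (1.0 - overlap)))
    return [(s, min(s + block_len, n_frames))
            for s in range(0, max(1, n_frames - block_len + 1),
                           step)]

def refine_scores(per_block_scores, n_regions):
    # per_block_scores: one dict {region_id: score} per block;
    # each region's score is uniformly averaged over all blocks
    # that contain the region.
    sums = np.zeros(n_regions)
    counts = np.zeros(n_regions)
    for scores in per_block_scores:
        for r, s in scores.items():
            sums[r] += s
            counts[r] += 1
    return sums / np.maximum(counts, 1)
\end{verbatim}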
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{final_images/img_saliency_computation2.pdf}
\caption{Saliency computation pipeline for a video block based on a graphical model.}
\label{fig:graphical_model}
\end{figure*}
\subsubsection{Energy Function for STCRF} \label{section:energy_function}
We define the energy function of the STCRF so that probabilistic inference is realized by minimizing the function.
The energy function $E$ has a video block (with its temporal segments) $\bm{x}$ as its input.
$E$ is defined by the unary and the binary terms with labels representing foreground/background
$\bm{l}=\{ l_i \in \{0,1\} | i \in {\cal V} \}$
where $l_i$ is the label for region $i$, and ${\cal V}$ is the set of vertices, i.e., regions in ${\bm x}$:
\begin{eqnarray}
E(\bm{l}, \bm{x}; \bm{\theta}) & = &
\sum_{i \in {\cal V}} {\psi_{\rm u} (l_i;\theta_{\rm u} )} +
\sum_{(i,j) \in {\cal E}} {\psi_{\rm b} (l_i,l_j;\theta_{\rm b} )},
\label{equation:energy}
\end{eqnarray}
where ${\psi_{\rm u}}$ and ${\psi_{\rm b}}$ are the unary and binary potentials given below. $\cal E$ is the set of edges of the STCRF graph.
$\bm{\theta} =(\theta_{\rm u}, \theta_{\rm b})$ is the model parameter.
\noindent\textbf{{Unary potential}: }
The unary potential for region $i$ is defined using the label of the region:
\begin{eqnarray}
\psi_{\rm u}\left( l_i ; \theta_{\rm u} \right) & = & \theta_{\rm u}\, {\omega}(F(i,t_i)),
\end{eqnarray}
where $t_i$ is the frame in which region $i$ exists, and $\omega$ is a function estimating the probability of the region being the foreground.
To compute $\omega$, i.e., the probability of the region being the foreground, we employ the DNN proposed by Wang et al.~\cite{LWang-CVPR2015} and modify it for our problem (cf. Table \ref{tab:FDNN}). Namely, right before the last fully connected layer of the original network, we add a dropout layer and a fully connected layer followed by a Rectified Linear Unit (ReLU)~\cite{Nair-ICML2010} layer (Nos. 15, 16, and 17 in Table \ref{tab:FDNN}) to increase the depth of the network. We then appropriately change the number of output channels of the first fully connected layer (Nos. 1, 2, and 3).
Our network, called the Foreground Deep Neural Network (F-DNN for short), thus consists of 7 fully connected layers. Each layer performs a linear transformation followed by the ReLU operator. Dropout operations with a ratio of 0.5 are applied after the ReLU layers during training to avoid overfitting. Starting from the input STD feature with 8192 channels, the number of output channels is gradually reduced to 2048 in the first three fully connected layers and to 1024 in the next three. The last fully connected layer has two output channels, representing the foreground and background classes.
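For concreteness, the architecture in Table \ref{tab:FDNN} could be written in PyTorch as in the following sketch (our implementation uses Caffe; this re-implementation is for illustration only):
\begin{verbatim}
import torch.nn as nn

# Seven fully connected layers with ReLU activations and
# dropout (ratio 0.5), following the table above.
f_dnn = nn.Sequential(
    nn.Linear(8192, 2048), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(2048, 2048), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 2),  # foreground / background
)
\end{verbatim}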
\noindent\textbf{{Binary potential}: }
The binary potential provides a deep-feature-based smoothing term that assigns similar labels to regions with similar deep features. Depending on spatial or temporal adjacency, the potential is formulated differently, with $\theta_{\rm b}$ further separated into $\theta_{\rm bs}$ and $\theta_{\rm bt}$:
\begin{eqnarray}
\psi _{\rm b} \left( {l_i},{l_j};\theta_{\rm b} \right) & = &
\left\{
\begin{array}{cl}
\theta_{\rm bs}\,{\Phi}_{\rm bs}\left( {l_i},{l_j}\right) & (i,j) \in {\cal E}_{s} \\
\theta_{\rm bt}\,{\Phi}_{\rm bt}\left( {l_i},{l_j}\right) & (i,j) \in {\cal E}_{t}
\end{array},
\right.
\end{eqnarray}
where ${\cal E}_{\rm s}$ and ${\cal E}_{\rm t}$ respectively denote the set of edges representing spatial adjacency and that representing temporal adjacency. Note that ${\cal E}={\cal E}_{\rm s}\cup {\cal E}_{\rm t}$ and ${\cal E}_{\rm s} \cap {\cal E}_{\rm t} = \emptyset$.
$\Phi_{\rm bs}$ and $\Phi_{\rm bt}$ denote the spatial and temporal smoothness between two regions:
\begin{eqnarray}
& & \hspace*{-20pt} \Phi_{\rm bs}(l_i, l_j) = \nonumber \\
& & \hspace*{-15pt} (1-\delta_{l_i\, l_j})
D(i,j)^{-1}
\exp \left( -\beta_{s}{ \left\| F(i,t_i) - F(j,t_j) \right\|^2 } \right),
\\
& & \hspace*{-20pt} {\Phi}_{\rm bt}\left(l_i , l_j\right) = \nonumber \\
& & \hspace*{-10pt} \left(1-\delta_{l_i l_j}\right)
\phi(i,j)
\exp \left( -\beta_{t}{ \left\| F(i,t_i) - F(j,t_j) \right\|^2 } \right),
\end{eqnarray}
where
$\delta$ is the Kronecker delta and
$D(i,j)$ is the Euclidean distance between the two centers of regions $i$ and $j$.
$\phi$ is the ratio of the area matched by the optical flow inside the two temporally different
regions~\cite{Papazoglou-ICCV2013}.
$F(i,t_i)$ is the STD feature of region $i$ (which exists in frame $t_i$).
The parameters $\beta_s$ and $\beta_t$ are chosen similarly to~\cite{Rother-SIGGRAPH2004} to ensure the exponential term switches appropriately between high and low contrasts:
\begin{eqnarray}
\beta_{s} & = &
\frac{1}{2} \Big( \sum\limits_{(i,j) \in {\cal E}_{s}} {\left\| F(i,t_i) - F(j,t_j) \right\|^2} \Big)^{- 1}, \\
\beta_{t} & = &
\frac{1}{2} \Big( \sum\limits_{(i,j) \in {\cal E}_{t}} {\left\| F(i,t_i) - F(j, t_j) \right\|^2} \Big)^{ - 1}.
\end{eqnarray}
We remark that to compute the weight $\phi$, we first count the area transformed from a temporal segment (region) at a frame to its corresponding region at the next frame via optical flow and vice versa, and then take the average of ratios of the areas.
In the temporal domain, this weight is better than the Euclidean distance because it is independent of the speed of the motion~\cite{Papazoglou-ICCV2013}. In this work, we employ the deep flow method~\cite{Weinzaepfel-ICCV2013} to transfer pixels in the temporal segment.
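As an illustration, the binary potential for a single edge can be computed as in the following sketch ($D$ and $\phi$ are assumed to be precomputed from the region centers and the optical flow, respectively, and the default $\theta$ values correspond to those reported in Section \ref{section:setting}):
\begin{verbatim}
import numpy as np

def beta(features, edges):
    # Half the inverse of the summed squared STD-feature
    # distances over the given edge set (spatial or temporal).
    total = sum(np.sum((features[i] - features[j]) ** 2)
                for i, j in edges)
    return 0.5 / total

def binary_potential(l_i, l_j, fi, fj, spatial, D=None,
                     phi=None, beta_s=1.0, beta_t=1.0,
                     theta_bs=0.05, theta_bt=1000.0):
    if l_i == l_j:
        return 0.0  # the (1 - Kronecker delta) factor
    d2 = np.sum((fi - fj) ** 2)
    if spatial:
        return theta_bs * np.exp(-beta_s * d2) / D
    return theta_bt * phi * np.exp(-beta_t * d2)
\end{verbatim}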
\subsubsection{Saliency Inference}
Saliency scores for regions are obtained in terms of labels by minimizing the energy function:
\begin{eqnarray}
\bm{\hat{l}} & = & \mathop {\arg \min }\limits_{\bm{l}} E(\bm{l},\bm{x}; \bm{\theta}).
\end{eqnarray}
We minimize $E$ in Eq.~(\ref{equation:energy}) by iterating the Graph Cut method~\cite{Boykov-PAMI2001}, which has been shown effective for CRF-based energy minimization~\cite{Cheng-CGF2015} and is widely used for object segmentation~\cite{Le-DAVIS2017}\cite{Tsai-CVPR2016}.
The inputs are initial label $\bm{l}$, block (with its temporal segments) $\bm{x}$, and model parameter $\bm{\theta}$.
The minimization is then executed in an iterative expectation-maximization~\cite{Moon-SPM1996} fashion until convergence. In each iteration, the Graph Cut algorithm~\cite{Boykov-PAMI2001} is used to solve the min-cut/max-flow problem~\cite{Foulds-2012} on the graph, yielding a new label for each vertex (region). The updated labels are used in the next iteration. After the saliency inference process, we obtain (binary) saliency maps for the frames in $\bm{x}$.
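A sketch of this inference loop, using the PyMaxflow library as one possible min-cut/max-flow solver, is given below (for illustration only; in the full method, the potentials are also re-estimated between iterations):
\begin{verbatim}
import numpy as np
import maxflow  # PyMaxflow

def infer_labels(unary, edges, n_iters=10):
    # unary: (n, 2) label costs; edges: (i, j, w) with the
    # label-independent part of the binary potential (the
    # Kronecker-delta factor is realized by the cut itself).
    labels = np.argmin(unary, axis=1)
    for _ in range(n_iters):
        g = maxflow.Graph[float]()
        nodes = g.add_nodes(unary.shape[0])
        for r in range(unary.shape[0]):
            g.add_tedge(nodes[r], unary[r, 1], unary[r, 0])
        for i, j, w in edges:
            g.add_edge(nodes[i], nodes[j], w, w)
        g.maxflow()
        new = np.array([g.get_segment(nodes[r])
                        for r in range(unary.shape[0])])
        if np.array_equal(new, labels):
            break  # converged
        labels = new
    return labels
\end{verbatim}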
\section{Experimental Settings} \label{section:setting}
\subsection{Benchmark Datasets}
We evaluated the performance of our method on three public benchmark datasets: 10-Clips dataset~\cite{Fukuchi-ICME2009}, SegTrack2 dataset~\cite{Li-ICCV2013}, and DAVIS dataset~\cite{Perazzi-CVPR2016}.
\textbf{The 10-Clips dataset}~\cite{Fukuchi-ICME2009} has ten video sequences, each of which contains a single salient object. Each sequence in the dataset has the spatial resolution of $352\times288$ and consists of about 75 frames.
\textbf{The SegTrack2 dataset}~\cite{Li-ICCV2013}
contains 14 video sequences and was originally designed for video object segmentation. Half of the videos in this dataset have multiple salient objects. This dataset is challenging in that it exhibits background--foreground color similarity, fast motion, and complex shape deformation. Sequences in the dataset consist of about 76 frames with various resolutions.
\textbf{The DAVIS dataset}~\cite{Perazzi-CVPR2016} consists of 50 high-quality video sequences, available at $854 \times 480$ spatial resolution and in Full HD 1080p, with about 70 frames per video, each of which has one single salient object or two spatially connected objects, either with low contrast or overlapping with the image boundary. This is also a challenging dataset because of frequent occurrences of occlusions, motion blur, and appearance changes. In this work, we used only the $854 \times 480$ resolution video sequences.
All the datasets contain manually annotated pixel-wise ground-truth for every frame.
\subsection{Evaluation Criteria} \label{section:metrics}
We evaluated the performance using Precision-Recall Curve (PRC),
F-measure~\cite{Achanta-CVPR2009}, and Mean Absolute Error (MAE).
The first two evaluation metrics are computed based on the overlapping areas between the obtained results and the provided ground-truth. By sweeping a fixed threshold from 0 to 255, pairs of $(Precision, Recall)$ scores are computed and then combined to form a PRC. F-measure is a balanced measurement between $Precision$ and $Recall$, as follows:
\begin{eqnarray}
{F_\beta } & = & \frac{{\left( {1 + {\beta ^2}} \right)Precision \times Recall}}{{{\beta ^2} \times Precision + Recall}}.
\end{eqnarray}
We remark that we set $\beta^2=0.3$ for F-measure, as suggested by~\cite{Achanta-CVPR2009} so that precision is weighted more heavily.
For a given threshold, we binarize the saliency map to compute $Precision$ and $Recall$ at each frame in a video and then take the average over the frames in the video. After that, the mean of these averages over the videos in a dataset is computed. F-measure is computed from the final precision and recall. When binarizing results for the comparison with the ground truth, we used F-Adap~\cite{Jia-ICCV2013}, which uses an adaptive threshold $\theta=\mu+\eta$ where $\mu$ and $\eta$ are the mean value and the standard deviation of the saliency scores of the obtained map, and F-Max~\cite{Borji-TIP2015}, which is the maximum of the F-measure scores over the thresholds from 0 to 255.
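For illustration, F-Adap for a single frame can be computed as follows (the saliency map is assumed to be normalized to $[0,1]$ and the ground truth to be binary):
\begin{verbatim}
import numpy as np

def f_adap(sm, gt, beta2=0.3):
    thr = sm.mean() + sm.std()   # adaptive threshold mu + eta
    binary = sm >= thr
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return ((1 + beta2) * precision * recall
            / (beta2 * precision + recall))
\end{verbatim}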
MAE, on the other hand, is the per-frame average of pixel-wise absolute differences between the ground truth $GT$ and the obtained saliency map $SM$:
\begin{eqnarray}
{\rm MAE} & = &
\frac{1}{{W \cdot H}}\sum\limits_{x = 1}^W {\sum\limits_{y = 1}^H {\left| {SM\left( {x,y} \right) - GT\left( {x,y} \right)} \right|} },
\end{eqnarray}
where $W$ and $H$ are the width and the height of the video frame. We note that MAE is also averaged over each dataset in the same way as F-measure.
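Correspondingly, a per-frame MAE sketch (both maps normalized to $[0,1]$):
\begin{verbatim}
import numpy as np

def mae(sm, gt):
    # Mean of pixel-wise absolute differences over the frame.
    return np.abs(sm.astype(float) - gt.astype(float)).mean()
\end{verbatim}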
\subsection{Implementation Details}
We implemented region-based CNN, block-based CNN, and F-DNN in C/C++ using Caffe~\cite{Jia-MM2014}, and we implemented the other parts in Matlab. All experiments were conducted on a PC with a Core i7 3.6GHz processor, 32GB of RAM, and GTX 1080 GPU.
We remark that the region-based CNN and the block-based CNN were used without any fine-tuning.
To segment a video, we follow \cite{yu-ICCV2015} as described above. We set the number of initial superpixels at each frame as $\{100, 200, 300, 400\}$ to have four scale levels. The other required parameters are set similarly to \cite{yu-ICCV2015}.
For the parameters in STCRF, we empirically set $\bm{\theta}=(\theta_{\rm u}, \theta_{\rm bs}, \theta_{\rm bt})=(50, 0.05, 1000)$. All these parameters are fixed throughout the experiments.
\subsection{Training F-DNN for Foreground Probability Prediction}
\begin{table}[t]
\centering
\caption{Number of videos used in our experiments.}
\label{tab:dataset}
\begin{tabular}{l|cccc}
\toprule
\textbf{Dataset} & \textbf{10-Clips}~\cite{Fukuchi-ICME2009} & \textbf{SegTrack2}~\cite{Li-ICCV2013} & \textbf{DAVIS}~\cite{Perazzi-CVPR2016}& \textbf{Total} \\ \midrule
\textbf{Training} & 6 & 8 & 30 & 44 \\ \midrule
\textbf{Testing} & 4 & 6 & 20 & 30 \\ \bottomrule
\end{tabular}
\end{table}
In training our F-DNN (see Section \ref{section:energy_function}), we used all three datasets together rather than training our F-DNN for each dataset, because each dataset alone is too small to train a reliable model. This approach also prevents the trained model from over-fitting to a specific dataset.
From each video dataset except for the DAVIS dataset, we randomly chose 60\% (in number) of the videos and mixed them into a larger dataset for training, while the remaining videos were used for testing on each dataset (cf. Table \ref{tab:dataset}). For the DAVIS dataset, we used the training set and the testing set as in the DAVIS Benchmark~\cite{Perazzi-CVPR2016}\footnote{\href{http://davischallenge.org/browse.html}{http://davischallenge.org/browse.html}}. We thus used 44 videos for training.
The model was fine-tuned from the network proposed in~\cite{LWang-CVPR2015}, using randomly initialized weights for the new layers. We trained the network for 300k iterations, using Stochastic Gradient Descent (SGD) optimization~\cite{Rumelhart-Neurocomputing1988} with a momentum of 0.9 and a weight decay of 0.005. The size of each mini-batch was set to 500. The base learning rate was initially set to $0.001$ and divided by 10 every 50k iterations.
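In PyTorch terms, these solver settings correspond to the following sketch (our training used Caffe; \texttt{f\_dnn} stands in for the network of Table \ref{tab:FDNN}):
\begin{verbatim}
import torch
import torch.nn as nn

f_dnn = nn.Linear(8192, 2)  # stand-in for the F-DNN
optimizer = torch.optim.SGD(f_dnn.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.005)
# divide the learning rate by 10 every 50k iterations
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=50000, gamma=0.1)
\end{verbatim}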
\section{Experimental Results} \label{section:experiments}
\subsection{Comparison with the State-of-the-Arts}
We compared the performance of our method (denoted by STCRF) with several state-of-the-art methods for salient object detection: LC~\cite{Zhai-MM2006}, LD~\cite{Liu-PAMI2011}, LGFOGR~\cite{Wang-TIP2015}, LRSD~\cite{Xue-ICASSP2012}, RST~\cite{Nghia-PSIVT2015}, SAG~\cite{Wang-CVPR2015}, SEG~\cite{Rahtu-ECCV2010}, STS~\cite{Zhou-CVPR2014}, DCL~\cite{Li-CVPR2016}, DHS~\cite{Liu-CVPR2016}, DS~\cite{Xi-TIP2015}, DSS~\cite{Hou-CVPR2017}, ELD~\cite{Lee-CVPR2016}, MDF~\cite{Li-CVPR2015}, and RFCN~\cite{Wang-ECCV2016}. The compared methods are classified in Table \ref{tab:compared_method}. We remark that we ran the original codes provided by the authors with the recommended parameter settings to obtain the results. We also note that we applied the methods developed for still images to videos frame-by-frame.
\begin{table}[t]
\centering
\caption{Compared state-of-the-art methods and classification.}
\label{tab:compared_method}
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|cc}
\toprule
\textbf{Target} & \textbf{Hand-crafted feature} & \textbf{Deep feature} \\ \midrule
\textbf{Video} & \begin{tabular}[c]{@{}l@{}}LC\cite{Zhai-MM2006},LD\cite{Liu-PAMI2011}, LGFOGR\cite{Wang-TIP2015}, \\ LRSD\cite{Xue-ICASSP2012}, RST\cite{Nghia-PSIVT2015}, SAG\cite{Wang-CVPR2015}, \\ SEG\cite{Rahtu-ECCV2010}, STS\cite{Zhou-CVPR2014}\end{tabular} & None \\ \midrule
\textbf{Image} & None & \begin{tabular}[c]{@{}l@{}}DCL\cite{Li-CVPR2016}, DHS\cite{Liu-CVPR2016}, DS\cite{Xi-TIP2015}, \\ DSS\cite{Hou-CVPR2017}, ELD\cite{Lee-CVPR2016}, MDF\cite{Li-CVPR2015}, \\ RFCN\cite{Wang-ECCV2016}
\end{tabular} \\ \bottomrule
\end{tabular}
}
\end{table}
\begin{figure*}[p]
\centering
\includegraphics[width=1\textwidth]{final_images/img_visual_comparison.pdf}
\caption{Visual comparison of our method against the state-of-the-art methods. From top-left to bottom-right, original video frame and ground-truth are followed by outputs obtained using our method (STCRF), LC\cite{Zhai-MM2006},LD\cite{Liu-PAMI2011}, LGFOGR\cite{Wang-TIP2015}, LRSD\cite{Xue-ICASSP2012}, RST\cite{Nghia-PSIVT2015}, SAG\cite{Wang-CVPR2015}, SEG\cite{Rahtu-ECCV2010}, STS\cite{Zhou-CVPR2014}, DCL\cite{Li-CVPR2016}, DHS\cite{Liu-CVPR2016}, DS\cite{Xi-TIP2015}, DSS\cite{Hou-CVPR2017}, ELD\cite{Lee-CVPR2016}, MDF\cite{Li-CVPR2015}, and RFCN\cite{Wang-ECCV2016}, in this order. Our method surrounded with red rectangles achieves the best results.}
\label{img:visual_comparison}
\end{figure*}
Figure \ref{img:visual_comparison} shows examples of obtained results. Qualitative evaluation confirms that our method produces the best results on each dataset. Our method can handle complex foreground and background with different details, giving accurate and uniform saliency assignment. In particular, object boundaries are clearly kept with less noise, compared with the other methods.
To quantitatively evaluate the obtained results, we first computed PRC and F-measure curves, which are shown in Figs.~\ref{fig:PRC} and \ref{fig:F_curve}.
\begin{figure*}[t]
\centering
\footnotesize
\begin{tabularx}{\textwidth}{*{3}{X}}
\includegraphics[width=1\linewidth]{final_figures/10-Clips_PRC.pdf} &
\includegraphics[width=1\linewidth]{final_figures/SegTrack2_PRC.pdf} &
\includegraphics[width=1\linewidth]{final_figures/DAVIS_PRC.pdf} \\
\centering (a) 10-Clips Dataset &
\centering (b) SegTrack2 Dataset &
\centering (c) DAVIS Dataset \\
\end{tabularx}
\caption{Quantitative comparison of precision-recall curve with state-of-the-art methods under different thresholds. Our method is denoted by STCRF (\textcolor[rgb]{0,0,1}{thick blue}).}
\label{fig:PRC}
\end{figure*}
\begin{figure*}[t]
\centering
\footnotesize
\begin{tabularx}{\textwidth}{*{3}{X}}
\includegraphics[width=1\linewidth]{final_figures/10-Clips_F_Curve.pdf} &
\includegraphics[width=1\linewidth]{final_figures/SegTrack2_F_Curve.pdf} &
\includegraphics[width=1\linewidth]{final_figures/DAVIS_F_Curve.pdf} \\
\centering (a) 10-Clips Dataset &
\centering (b) SegTrack2 Dataset &
\centering (c) DAVIS Dataset \\
\end{tabularx}
\caption{Quantitative comparison of F-measure with state-of-the-art methods under different thresholds. Our method is denoted by STCRF (\textcolor[rgb]{0,0,1}{thick blue}).}
\label{fig:F_curve}
\end{figure*}
It can be seen that our method achieves the highest precision over almost the entire recall range on all the datasets. Especially on the two most challenging datasets (i.e., SegTrack2 and DAVIS), the performance gains of our method over the other methods are more remarkable (results at higher recall values are less important because achieving high recall is easy). When compared with the second best method, i.e., DHS, we see that (1) both methods have comparable results on the 10-Clips dataset, (2) our method is significantly better than DHS on the SegTrack2 dataset, and (3) on the DAVIS dataset, the precision of our method is larger than that of DHS for small recall values (higher binarization thresholds) and smaller for large recall values (lower binarization thresholds). Salient object detection at higher thresholds is more practical and effective than at lower thresholds because, with low thresholds, more pixels are segmented regardless of whether they belong to salient objects or the background.
F-measure indicates that our method significantly outperforms the other methods at every threshold on all the datasets. Since the 10-Clips dataset is the easiest among the three, any method can achieve good results on it, while the other two datasets are challenging, so the effectiveness of methods becomes more discriminative there. Indeed, compared with the second best method (DHS), our method is comparable on the 10-Clips dataset and significantly better on the other datasets.
Table \ref{tab:comparison} shows the evaluations in terms of F-Adap, F-Max, and MAE. Our proposed method achieves the best performance under all the metrics on all the datasets.
In particular, our method outperforms even the second best method (DHS) significantly on the SegTrack2 and DAVIS datasets.
\begin{table}[t]
\centering
\caption{The wall-clock time average for each frame.}
\label{tab:time}
\begin{tabular}{l|cccc}
\toprule
\textbf{Method} & \textbf{Code} & \textbf{Platform} & \textbf{Time (seconds)} & \textbf{FPS} \\ \midrule
\textbf{STCRF} & Matlab & CPU+GPU & \textbf{4.596}& \textbf{0.218} \\
\textbf{STCRF-full} & Matlab & CPU+GPU & \textbf{10.300} & \textbf{0.097} \\
LGFOGR~\cite{Wang-TIP2015} & Matlab & CPU & 16.096 & 0.062 \\
RST~\cite{Nghia-PSIVT2015} & Matlab & CPU & 19.903 & 0.050 \\
SAG~\cite{Wang-CVPR2015} & Matlab & CPU & 17.613 & 0.057 \\
STS~\cite{Zhou-CVPR2014} & Matlab & CPU & 10.924 & 0.092 \\
MDF~\cite{Li-CVPR2015} & Matlab & CPU+GPU & 12.300 & 0.081 \\ \midrule
LD~\cite{Liu-PAMI2011} & Matlab & CPU & 8.318 & 0.120 \\
LRSD~\cite{Xue-ICASSP2012} & Matlab & CPU & 0.755 & 1.325 \\
SEG~\cite{Rahtu-ECCV2010} & Matlab & CPU & 4.856 & 0.206 \\
RFCN~\cite{Wang-ECCV2016} & Matlab & CPU+GPU & 1.840 & 0.543 \\ \midrule
LC~\cite{Zhai-MM2006} & C/C++ & CPU & 0.131 & 7.634 \\
DCL~\cite{Li-CVPR2016} & C/C++ & GPU & 0.183 & 5.464 \\
DHS~\cite{Liu-CVPR2016} & C/C++ & GPU & 0.080 & 12.500 \\
DS~\cite{Xi-TIP2015} & C/C++ & GPU & 0.109 & 9.174 \\
DSS~\cite{Hou-CVPR2017} & C/C++ & GPU & 0.178 & 5.618 \\
ELD~\cite{Lee-CVPR2016} & C/C++ & GPU & 2.030 & 0.493 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t]
\centering
\caption{The wall-clock time average of each step for each frame in the proposed method. Bottlenecks are shown in \textcolor[rgb]{1,0,0}{red}. (The local feature extraction stream and the global feature extraction stream run in parallel.)}
\label{tab:time_step}
\resizebox{1\linewidth}{!}{%
\begin{tabular}{l|l|c}
\toprule
\textbf{Part} & \textbf{Small step} & \textbf{Time (seconds)} \\ \midrule
Optical flow~\cite{Weinzaepfel-ICCV2013} & & \textcolor[rgb]{1,0,0}{1.265} \\ \midrule
Video segmentation~\cite{yu-ICCV2015} & & \textcolor[rgb]{1,0,0}{4.439} \\ \midrule
STD feature extraction & & \\
& Region-based feature extraction & \textcolor[rgb]{1,0,0}{2.323} \\
& Local feature computation & 0.383 \\
& Global feature extraction & 0.027 \\
& (sub-total) & (2.706) \\ \midrule
Saliency computation & & \\
& Unary potential prediction & 0.180 \\
& Binary potential computation & \textcolor[rgb]{1,0,0}{1.461} \\
& Saliency inference & 0.249 \\
& (sub-total) & (1.890) \\ \midrule
\textbf{Total} & & \textbf{10.300} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table*}[t]
\footnotesize
\centering
\caption{Quantitative comparison with state-of-the-art methods, using F-measure (F-Adap and F-Max) (higher is better) and Mean Absolute Errors (MAE) (smaller is better). The best and the second best results are shown in \textcolor[rgb]{0,0,1}{\textbf{blue}} and \textcolor[rgb]{0,0.7,0}{\textbf{green}}, respectively. Our method (STCRF) marked in \textbf{bold} is followed by methods for videos and those for still images.}
\label{tab:comparison}
\begin{tabular}{lccccccccccc}
\toprule
\textbf{Dataset} & \multicolumn{3}{c}{\textbf{10-Clips}} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{3}{c}{\textbf{SegTrack2}} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{3}{c}{\textbf{DAVIS}} \\ \cline{2-4} \cline{6-8} \cline{10-12}
\textbf{Metric} & \multicolumn{1}{c}{\textbf{F-Adap$\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max$\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE$\Downarrow$}} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{F-Adap$\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max$\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE$\Downarrow$}} & \multicolumn{1}{c}{\textbf{}} & \multicolumn{1}{c}{\textbf{F-Adap$\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max$\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE$\Downarrow$}} \\ \midrule
\textbf{STCRF} & \textcolor[rgb]{0,0,1}{\textbf{0.936}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.942}} & \textcolor[rgb]{0,0,1}{\textbf{0.016}} & & \textcolor[rgb]{0,0,1}{\textbf{0.899}} & \textcolor[rgb]{0,0,1}{\textbf{0.919}} & \textcolor[rgb]{0,0,1}{\textbf{0.014}} & & \textcolor[rgb]{0,0,1}{\textbf{0.803}} & \textcolor[rgb]{0,0,1}{\textbf{0.816}} & \textcolor[rgb]{0,0,1}{\textbf{0.033}} \\ \midrule
LC~\cite{Zhai-MM2006} & 0.577 & 0.583 & 0.166 & & 0.244 & 0.306 & 0.173 & & 0.161 & 0.209 & 0.203 \\
LD~\cite{Liu-PAMI2011} & 0.637 & 0.654 & 0.197 & & 0.286 & 0.305 & 0.281 & & 0.242 & 0.246 & 0.302 \\
LGFOGR~\cite{Wang-TIP2015} & 0.629 & 0.667 & 0.207 & & 0.500 & 0.614 & 0.117 & & 0.555 & 0.614 & 0.100 \\
LRSD~\cite{Xue-ICASSP2012} & 0.339 & 0.342 & 0.164 & & 0.438 & 0.438 & 0.102 & & 0.228 & 0.229 & 0.109 \\
RST~\cite{Nghia-PSIVT2015} & 0.827 & 0.831 & 0.055 & & 0.510 & 0.677 & 0.125 & & 0.607 & 0.628 & 0.081 \\
SAG~\cite{Wang-CVPR2015} & 0.755 & 0.777 & 0.117 & & 0.504 & 0.646 & 0.106 & & 0.487 & 0.537 & 0.105 \\
SEG~\cite{Rahtu-ECCV2010} & 0.687 & 0.680 & 0.298 & & 0.388 & 0.418 & 0.321 & & 0.313 & 0.345 & 0.316 \\
STS~\cite{Zhou-CVPR2014} & 0.591 & 0.631 & 0.177 & & 0.471 & 0.583 & 0.147 & & 0.362 & 0.489 & 0.194 \\ \midrule
DCL~\cite{Li-CVPR2016} & \textcolor[rgb]{0,0.7,0}{\textbf{0.935}} & 0.937 & 0.031 & & \textcolor[rgb]{0,0.7,0}{\textbf{0.734}} & 0.750 & 0.060 & & 0.630 & 0.673 & 0.075 \\
DHS~\cite{Liu-CVPR2016} & 0.923 & \textcolor[rgb]{0,0,1}{\textbf{0.947}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.022}} & & 0.733 & \textcolor[rgb]{0,0.7,0}{\textbf{0.762}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.050}} & & \textcolor[rgb]{0,0.7,0}{\textbf{0.738}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.777}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.040}} \\
DS~\cite{Xi-TIP2015} & 0.832 & 0.864 & 0.050 & & 0.636 & 0.725 & 0.069 & & 0.610 & 0.741 & 0.083 \\
DSS~\cite{Hou-CVPR2017} & 0.838 & 0.853 & 0.049 & & 0.662 & 0.681 & 0.054 & & 0.693 & 0.758 & 0.047 \\
ELD~\cite{Lee-CVPR2016} & 0.893 & 0.915 & 0.023 & & 0.611 & 0.675 & 0.065 & & 0.571 & 0.686 & 0.076 \\
MDF~\cite{Li-CVPR2015} & 0.884 & 0.887 & 0.041 & & 0.627 & 0.633 & 0.077 & & 0.670 & 0.673 & 0.067 \\
RFCN~\cite{Wang-ECCV2016} & 0.901 & 0.910 & 0.046 & & 0.716 & 0.737 & 0.062 & & 0.694 & 0.725 & 0.070 \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Computational Efficiency}
We further evaluated the computational time of all the methods.
We compared the running-time average of our method with that of the other methods.
The wall-clock time average for each frame in our method and the compared methods is given in Table \ref{tab:time}.
Our methods are denoted by STCRF for the pipeline without counting optical flow computation and video segmentation, and by STCRF-full for the full pipeline.
We note that all videos were resized to the resolution of $352 \times 288$ for a fair comparison.
Performances of all the methods are compared based on the implementations in C/C++ and Matlab.
We classify all the methods into three categories: Matlab-region-based methods, Matlab-pixel-based methods, and C/C++ based methods.
Since it is obvious that codes implemented in C/C++ run faster than those in Matlab, we cannot directly compare the run-time of all the methods.
However, we see that our method runs at a speed competitive with the others.
Indeed, our method is the fastest among the Matlab-region-based methods.
We remark that Matlab-region-based methods run more slowly than Matlab-pixel-based ones because treating regions individually in a sequential manner and then integrating the results is time-consuming.
It can be seen in Table \ref{tab:time} that in our method, the time required for computing optical flow and video segmentation is a bottleneck: it takes 5.704 ($=10.300-4.596$) seconds per frame. To identify bottleneck steps in our pipeline, we broke down running-time into individual steps in our pipeline (see Table \ref{tab:time_step}).
We note that in our pipeline, the step of region-based feature extraction followed by local feature computation, and the step of global feature extraction run in parallel.
Table \ref{tab:time_step} indicates that region-based feature extraction and binary potential computation are also bottlenecks.
Because the bottleneck steps other than region-based feature extraction are implemented in Matlab,
re-implementing these steps in C/C++ and using CUDA for parallel processing over regions would improve the speed of our method.
We note that speeding up the computation for salient object detection is beyond the scope of this paper.
\subsection{Detailed Analysis of the Proposed Method}\label{section:validation}
To demonstrate the effectiveness of combining local and global features, of utilizing spatiotemporal information in computing the saliency map, and of the multi-scale analysis, we performed experiments under controlled settings and compared the results.
\subsubsection{Effectiveness of Combination of Local and Global Features}
To evaluate the effectiveness of combining local and global features, we compared results using STD features with those using local features alone, as illustrated in Table \ref{tab:feature}.
We see that the combination of local and global features brings larger gains than using local features only.
This can be explained as follows: local features exploit the meaning of an object in terms of saliency but only in a local context, while global features can model the global context of the whole video block; STD features are thus more powerful.
We remark that we also present results using RGB features, just to confirm that deep features outperform RGB features.
\begin{table*}[t]
\centering
\caption{Comparison of STD features and local features. The best results are shown in \textcolor[rgb]{0,0,1}{blue} (higher is better for F-Adap and F-Max, and lower is better for MAE).
}
\label{tab:feature}
\begin{tabular}{lccccccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Used feature}}} & \multicolumn{3}{c}{\textbf{10-Clips}} & & \multicolumn{3}{c}{\textbf{SegTrack2}} & & \multicolumn{3}{c}{\textbf{DAVIS}} \\ \cline{2-4}\cline{6-8}\cline{10-12}
&
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} \\ \midrule
\textbf{STD feature} & \textcolor[rgb]{0,0,1}{\textbf{0.936}} & \textcolor[rgb]{0,0,1}{\textbf{0.942}} & \textcolor[rgb]{0,0,1}{\textbf{0.016}} & & \textcolor[rgb]{0,0,1}{\textbf{0.899}} & \textcolor[rgb]{0,0,1}{\textbf{0.919}} & \textcolor[rgb]{0,0,1}{\textbf{0.014}} & & \textcolor[rgb]{0,0,1}{\textbf{0.803}} & \textcolor[rgb]{0,0,1}{\textbf{0.816}} & \textcolor[rgb]{0,0,1}{\textbf{0.033}} \\
Local feature alone & 0.683 & 0.727 & 0.079 & & 0.692 & 0.780 & 0.043 & & 0.648 & 0.744 & 0.067 \\
RGB feature & 0.882 & 0.913 & 0.044 & & 0.366 & 0.454 & 0.080 & & 0.160 & 0.186 & 0.199 \\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Effectiveness of Spatiotemporal Potential in STCRF}
\begin{table*}[t]
\centering
\caption{Comparison of different potentials in STCRF. The best results are shown in \textcolor[rgb]{0,0,1}{blue} (higher is better for F-Adap and F-Max, and lower is better for MAE). Our complete method is marked in \textbf{bold}.}
\label{tab:potential}
\resizebox{\textwidth}{!}{%
\begin{tabular}{l|cccccccccccccc}
\toprule
\multicolumn{1}{c|}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{3}{c}{\textbf{10-Clips}} & & \multicolumn{3}{c}{\textbf{SegTrack2}} & & \multicolumn{3}{c}{\textbf{DAVIS}} \\ \cline{5-7} \cline{9-11} \cline{13-15}
\multicolumn{1}{c|}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Setting \\ description \end{tabular}}}} &
\multicolumn{1}{c}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}} Unary\\ term \end{tabular}}}} &
\multicolumn{1}{c}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}} Spatial\\ information \end{tabular}}}} &
\multicolumn{1}{c}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Temporal\\ information \end{tabular}}}} &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} \\ \midrule
\textbf{STP} & $\vee$ & $\vee$ & $\vee$ & \textcolor[rgb]{0,0,1}{\textbf{0.936}} & \textcolor[rgb]{0,0,1}{\textbf{0.942}} & \textcolor[rgb]{0,0,1}{\textbf{0.016}} & & \textcolor[rgb]{0,0,1}{\textbf{0.899}} & \textcolor[rgb]{0,0,1}{\textbf{0.919}} & \textcolor[rgb]{0,0,1}{\textbf{0.014}} & & \textcolor[rgb]{0,0,1}{\textbf{0.803}} & \textcolor[rgb]{0,0,1}{\textbf{0.816}} & \textcolor[rgb]{0,0,1}{\textbf{0.033}} \\
SP & $\vee$ & $\vee$ & $\times$ & 0.930 & 0.941 & 0.018 & & 0.871 & 0.918 & 0.017 & & 0.759 & 0.814 & 0.038 \\
TP & $\vee$ & $\times$ & $\vee$ & 0.930 & 0.940 & 0.019 & & 0.859 & 0.901 & 0.019 & & 0.750 & 0.805 & 0.039 \\
U & $\vee$ & $\times$ & $\times$ & 0.876 & 0.940 & 0.044 & & 0.703 & 0.912 & 0.072 & & 0.537 & 0.804 & 0.169 \\
\bottomrule
\end{tabular}
}
\end{table*}
To demonstrate the effectiveness of incorporating spatiotemporal information into the energy function of STCRF, we performed experiments under four different controlled settings.
We changed the binary term: setting $\theta_{\rm bt}=0$ to use spatial information alone (denoted by SP),
setting $\theta_{\rm bs}=0$ to use temporal information alone (denoted by TP), and
setting $\theta_{\rm bt}=\theta_{\rm bs}=0$ to use the unary term alone (denoted by U).
We compared the proposed (complete) method (denoted by STP) with these three baseline methods (cf. Table \ref{tab:potential}).
Table \ref{tab:potential} indicates that STP exhibits the best performance on all the metrics on the three datasets. We see that using both spatial and temporal information effectively works and brings more gains than using spatial information alone or using temporal information alone. This suggests that our method captures spatial contexts in a frame and temporal information over frames to produce saliency maps.
\subsubsection{Effectiveness of Multiple-Scale Approach}
To demonstrate the effectiveness of our multiple-scale approach, we compared methods that use different numbers of scale levels in computing the saliency map.
More precisely, starting with only the coarsest scale level (level 1), we fused finer levels (levels 2, 3, 4) one by one to
compute the saliency map.
The methods are denoted by 1-level, 2-levels, 3-levels, and 4-levels (our complete method).
The results are illustrated in Table \ref{tab:multiscale}. Table \ref{tab:multiscale} shows that the multiple-scale approach outperforms the single-scale approach.
It also indicates that using more scales produces better results: as the number of scales in the saliency computation increases, the results become more accurate.
Table \ref{tab:multiscale} also shows that employing
4 scale levels is sufficient, because 3-levels and 4-levels perform almost identically.
\begin{table*}[t]
\centering
\caption{Comparison of different numbers of scale levels in processing. The best results are shown in \textcolor[rgb]{0,0,1}{blue} (higher is better for F-Adap and F-Max, and lower is better for MAE). Our complete method is marked in \textbf{bold}.}
\label{tab:multiscale}
\begin{tabular}{lccccccccccc}
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{c}{\textbf{10-Clips}} & & \multicolumn{3}{c}{\textbf{SegTrack2}} & & \multicolumn{3}{c}{\textbf{DAVIS}} \\ \cline{2-4}\cline{6-8}\cline{10-12}
\multicolumn{1}{c}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Setting \\ description \end{tabular}}}} &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{F-Max $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} \\ \midrule
1-level & 0.928 & 0.930 & 0.017 & & 0.880 & 0.889 & 0.016 & & 0.750 & 0.757 & 0.038 \\
2-levels & 0.929 & 0.940 & 0.017 & & 0.876 & 0.908 & 0.015 & & 0.763 & 0.800 & 0.035 \\
3-levels & 0.935 & 0.941 & \textcolor[rgb]{0,0,1}{\textbf{0.016}} & & \textcolor[rgb]{0,0,1}{\textbf{0.909}} & 0.912 & \textcolor[rgb]{0,0,1}{\textbf{0.014}} & & 0.798 & \textcolor[rgb]{0,0,1}{\textbf{0.816}} & \textcolor[rgb]{0,0,1}{\textbf{0.033}} \\
\textbf{4-levels} & \textcolor[rgb]{0,0,1}{\textbf{0.936}} & \textcolor[rgb]{0,0,1}{\textbf{0.942}} & \textcolor[rgb]{0,0,1}{\textbf{0.016}} & & 0.899 & \textcolor[rgb]{0,0,1}{\textbf{0.919}} & \textcolor[rgb]{0,0,1}{\textbf{0.014}} & & \textcolor[rgb]{0,0,1}{\textbf{0.803}} & \textcolor[rgb]{0,0,1}{\textbf{0.816}} & \textcolor[rgb]{0,0,1}{\textbf{0.033}} \\
\bottomrule
\end{tabular}
\end{table*}
\subsubsection{Effective Length of Video Block}\label{section:ws}
\begin{table*}[t]
\centering
\caption{Comparison under different lengths of the video block.}
\label{tab:ws}
\begin{tabular}{ccccccccc}
\toprule
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{10-Clips}} & & \multicolumn{2}{c}{\textbf{SegTrack2}} & & \multicolumn{2}{c}{\textbf{DAVIS}} \\ \cline{2-3}\cline{5-6}\cline{8-9}
\multicolumn{1}{c}{\multirow{-2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Length of \\ video block \end{tabular}}}} &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} & &
\multicolumn{1}{c}{\textbf{F-Adap $\Uparrow$}} & \multicolumn{1}{c}{\textbf{MAE $\Downarrow$}} \\ \midrule
$1\,\, (=2^0)$ & 0.934 & 0.017 & & 0.890 & 0.016 & & 0.791 & 0.035 \\
$2\,\, (=2^1)$ & 0.935 & 0.017 & & 0.892 & 0.015 & & 0.792 & 0.034 \\
$4\,\, (=2^2)$ & 0.935 & 0.017 & & 0.895 & 0.015 & & 0.794 & 0.034 \\
$8\,\, (=2^3)$ & 0.936 & 0.017 & & 0.897 & 0.015 & & 0.799 & 0.033 \\
\textbf{$16\,\, (=2^4)$} & \textbf{0.936} & \textbf{0.016} & & \textbf{0.899} & \textbf{0.014} & & \textbf{0.803} & \textbf{0.033} \\
$32\,\, (=2^5)$ & 0.936 & 0.016 & & 0.899 & 0.014 & & 0.803 & 0.032 \\
$64\,\, (=2^6)$ & 0.936 & 0.016 & & 0.899 & 0.014 & & 0.803 & 0.032 \\
\bottomrule
\end{tabular}
\end{table*}
We investigated the effect of the length of the video block fed to STCRF by doubling the window size from 1 to 64: $1, 2, 2^2,\ldots, 2^6$ (cf. Table \ref{tab:ws}).
Table \ref{tab:ws} shows that larger window sizes yield more accurate results; however, the improvement in accuracy saturates around a size of 16. On the other hand, larger window sizes slow down processing because the graphical model becomes larger. To balance accuracy against processing time, we conclude that an appropriate window size for the video block is 16.
\section{Application to Video Object Segmentation} \label{section:application}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\linewidth]{final_images/VOS.pdf}
\caption{Boundary snapping~\cite{Caelles-CVPR2017} based video object segmentation framework using the saliency map.}
\label{fig:VOS}
\end{figure*}
\begin{table*}[t]
\centering
\caption{Quantitative comparison with state-of-the-art video object segmentation methods on the DAVIS dataset, using region similarity, contour accuracy, and overall performance metrics. The best three results are shown in \textcolor[rgb]{0,0,1}{\textbf{blue}}, \textcolor[rgb]{0,0.7,0}{\textbf{green}}, and \textcolor[rgb]{1,0,0}{\textbf{red}}, respectively. Our method, denoted by STCRF*, is marked in \textbf{bold}.}
\label{tab:vos}
\begin{tabular}{lccccccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Methods}}} & \multicolumn{3}{c}{\textbf{Region similarity $(\mathcal{J})$}} & \textbf{} & \multicolumn{3}{c}{\textbf{Contour accuracy $(\mathcal{F})$}} & \textbf{} & \multicolumn{1}{c}{\textbf{Overall performance $(\mathcal{O})$}} \\ \cline{2-4} \cline{6-8} \cline{10-10}
& \textbf{Mean$\Uparrow$} & \textbf{Recall$\Uparrow$} & \textbf{Decay$\Downarrow$} & \textbf{} & \textbf{Mean$\Uparrow$} & \textbf{Recall$\Uparrow$} & \textbf{Decay$\Downarrow$} & \textbf{} & \textbf{\textbf{Mean$\Uparrow$}} \\ \midrule
\textbf{STCRF*} & \textcolor[rgb]{0,0,1}{\textbf{0.714}} & \textcolor[rgb]{0,0,1}{\textbf{0.851}} & \textcolor[rgb]{0,0,1}{\textbf{-0.019}} & & \textcolor[rgb]{0,0,1}{\textbf{0.674}} & \textcolor[rgb]{0,0,1}{\textbf{0.790}} & \textcolor[rgb]{0,0.7,0}{\textbf{-0.019}} & & \textcolor[rgb]{0,0,1}{\textbf{0.694}} \\
DHS*~\cite{Liu-CVPR2016} & \textcolor[rgb]{0,0.7,0}{\textbf{0.701}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.840}} & 0.032 & & \textcolor[rgb]{0,0.7,0}{\textbf{0.656}} & \textcolor[rgb]{0,0.7,0}{\textbf{0.779}} & 0.036 & & \textcolor[rgb]{0,0.7,0}{\textbf{0.679}} \\
\midrule
ACO~\cite{Jang-CVPR2016} & 0.503 & 0.572 & \textcolor[rgb]{0,0.7,0}{\textbf{-0.006}} & & 0.467 & 0.494 & \textcolor[rgb]{0,0,1}{\textbf{-0.022}} & & 0.485 \\
CVOS~\cite{Taylor-CVPR2015} & 0.482 & 0.540 & 0.105 && 0.447 & 0.526 & 0.117 && 0.465 \\
FST~\cite{Papazoglou-ICCV2013} & \textcolor[rgb]{1,0,0}{\textbf{0.558}} & \textcolor[rgb]{1,0,0}{\textbf{0.649}} & \textcolor[rgb]{1,0,0}{\textbf{0.000}} && 0.511 & 0.516 & \textcolor[rgb]{1,0,0}{\textbf{0.029}} && 0.535 \\
KEY~\cite{Lee-ICCV2011} & 0.498 & 0.591 & 0.141 && 0.427 & 0.375 & 0.106 && 0.463 \\
MSG~\cite{Ochs-ICCV2011} & 0.533 & 0.626 & 0.024 && 0.508 & \textcolor[rgb]{1,0,0}{\textbf{0.600}} & 0.051 && 0.521 \\
NLC~\cite{Faktor-BMVC2014} & 0.552 & 0.558 & 0.126 && \textcolor[rgb]{1,0,0}{\textbf{0.523}} & 0.519 & 0.114 && \textcolor[rgb]{1,0,0}{\textbf{0.537}} \\
TRC~\cite{Fragkiadaki-CVPR2012} & 0.473 & 0.493 & 0.083 && 0.441 & 0.436 & 0.129 && 0.457 \\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure*}[h!]
\centering
\includegraphics[width=1\textwidth]{final_images/img_visual_comparison_vos.pdf}
\caption{Visual comparison of our method against the state-of-the-art video object segmentation methods. From left to right, original video frame and ground-truth are followed by outputs obtained using our method (STCRF*), DHS*\cite{Liu-CVPR2016}, ACO\cite{Jang-CVPR2016}, CVOS\cite{Taylor-CVPR2015}, FST\cite{Papazoglou-ICCV2013}, KEY\cite{Lee-ICCV2011}, MSG\cite{Ochs-ICCV2011}, NLC\cite{Faktor-BMVC2014}, and TRC\cite{Fragkiadaki-CVPR2012},
in this order. Our STCRF surrounded with red rectangles achieves the best results.}
\label{img:visual_comparison_vos}
\end{figure*}
Video object segmentation (VOS) is a binary labeling problem aiming to separate foreground objects from the background of a video~\cite{Perazzi-CVPR2016}. On the other hand, salient object detection (SOD) aims to detect and segment salient objects in natural scenes. Although VOS and SOD are different tasks, SOD methods are beneficial for VOS when salient objects are foreground objects in scenes. In this section, we demonstrate the applicability of our proposed method to VOS.
Figure \ref{fig:VOS} illustrates the framework for VOS using the saliency map. In one pass, the output saliency map is binarized using the adaptive threshold mentioned in Section \ref{section:metrics} to obtain the foreground mask. In the other pass, we implemented the object segmentation method based on boundary snapping~\cite{Caelles-CVPR2017}. We first detect contours of foreground objects using CEDN~\cite{Yang-CVPR2016} and then apply the combinatorial grouping method~\cite{Jordi-PAMI2017} to compute the Ultrametric Contour Map (UCM)~\cite{Jordi-PAMI2017}, which represents a hierarchical segmentation. Superpixels are obtained by binarizing the UCM with a threshold $\tau=0.3$. From the foreground mask and the superpixels, we perform majority voting to segment the foreground objects.
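The majority voting can be sketched as follows (for illustration only; the superpixel label map is assumed to come from the binarized UCM):
\begin{verbatim}
import numpy as np

def snap_to_superpixels(fg_mask, superpixels):
    # fg_mask: binary mask from the thresholded saliency map;
    # superpixels: integer label map from the binarized UCM.
    # A superpixel is foreground iff the majority of its pixels
    # fall inside the foreground mask.
    out = np.zeros_like(fg_mask, dtype=bool)
    for sp in np.unique(superpixels):
        region = superpixels == sp
        if fg_mask[region].mean() > 0.5:
            out[region] = True
    return out
\end{verbatim}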
VOS methods are classified into two groups: those requiring an initial object mask at the first frame and those that do not. In the DAVIS Benchmark~\cite{Perazzi-CVPR2016}, the former group is called semi-supervised and the latter unsupervised.
Since an initial object mask becomes a strong prior for accurately segmenting objects in subsequent frames, we chose recent unsupervised methods for a fair comparison. We compared our method with the state-of-the-art saliency method (DHS~\cite{Liu-CVPR2016}) and with recent unsupervised VOS methods: ACO~\cite{Jang-CVPR2016}, CVOS~\cite{Taylor-CVPR2015}, FST~\cite{Papazoglou-ICCV2013}, KEY~\cite{Lee-ICCV2011}, MSG~\cite{Ochs-ICCV2011}, NLC~\cite{Faktor-BMVC2014}, and TRC~\cite{Fragkiadaki-CVPR2012}.
We remark that the two SOD methods (i.e., our method and DHS) segment objects using the framework in Fig. \ref{fig:VOS}. We denote them by STCRF* and DHS*, respectively.
We tested all the methods on the DAVIS dataset~\cite{Perazzi-CVPR2016}, the newest dataset for VOS, and evaluated results using measures in the 2017 DAVIS Challenge~\cite{Jordi-2017} (i.e., region similarity $\mathcal{J}$, contour accuracy $\mathcal{F}$, and overall performance $\mathcal{O}$). For a given error measure, we computed three different statistics as in~\cite{Perazzi-CVPR2016}. They are the mean error, the object recall (measuring the fraction of sequences scoring higher than a threshold $\tau=0.5$), and the decay (quantifying the performance loss (or gain) over time). Note that we used the results in the DAVIS Benchmark~\cite{Perazzi-CVPR2016}\footnote{\href{http://davischallenge.org/soa\_compare.html}{http://davischallenge.org/soa\_compare.html}}
for the compared state-of-the-art VOS techniques.
We also note that we ran the source code of ACO~\cite{Jang-CVPR2016}, which is not included in the DAVIS Benchmark, provided by the authors with the recommended parameter settings.
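For reference, the three statistics for one sequence can be computed as in the following sketch (the decay definition based on the first and last quartiles of frames is one common choice and is our assumption here):
\begin{verbatim}
import numpy as np

def jaccard(pred, gt):
    # Region similarity J: intersection over union.
    union = np.logical_or(pred, gt).sum()
    inter = np.logical_and(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def statistics(scores, tau=0.5):
    # scores: per-frame J (or F) values of one sequence.
    s = np.asarray(scores, dtype=float)
    q = max(1, len(s) // 4)
    return {"mean": s.mean(),
            "recall": (s > tau).mean(),
            "decay": s[:q].mean() - s[-q:].mean()}
\end{verbatim}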
Figure \ref{img:visual_comparison_vos} shows some examples of the obtained results. The quantitative comparison of these methods is shown in Table \ref{tab:vos}, indicating that our proposed method STCRF* exhibits the best performance on all the metrics at all the statistics.
STCRF* achieves 0.714 for $\mathcal{J}(\rm{Mean})$, 0.674 for $\mathcal{F}(\rm{Mean})$, and 0.694 for $\mathcal{O}(\rm{Mean})$, while the best VOS methods achieve 0.558 (for FST~\cite{Papazoglou-ICCV2013}), 0.523 (for NLC~\cite{Faktor-BMVC2014}), and 0.537 (for NLC~\cite{Faktor-BMVC2014}), respectively.
STCRF* outperforms the compared VOS methods by a large margin on all the metrics.
We can thus conclude that our proposed SOD method works even for VOS.
We note that DHS* is second best.
\section{Conclusion} \label{section:conclusion}
Unlike a still image, a video carries temporal information, and how to incorporate this temporal information as effectively as possible is the essential issue in dealing with videos. This paper focused on detecting salient objects from a video and proposed a framework using STD features together with STCRF. Our method takes temporal information in a video into account as much as possible in two different ways, namely, feature extraction and saliency computation. Our proposed STD feature utilizes local and global contexts in both the spatial and temporal domains. The proposed STCRF is capable of capturing the temporal consistency of regions over frames and the spatial relationships between regions.
Our experiments show that the proposed method significantly outperforms state-of-the-art methods on publicly available datasets. We also applied our method to the video object segmentation task, showing that our method outperforms existing unsupervised VOS methods on the DAVIS dataset.
Visual saliency is also used for estimating human gaze~\cite{Itti-PAMI1998}\cite{Harel-NIPS2006}\cite{Hou-CVPR2007}. For salient object detection, object boundaries should be kept as accurately as possible, while for human gaze estimation they need not be. Rather, the gaze fixation point should be precisely identified, and the area near the fixation point is better blurred, e.g., using a Gaussian kernel, to represent saliency.
Applying our method directly to gaze estimation is thus not suitable. However, the idea of combining local and global features
will be interesting even for gaze estimation. Adapting our proposed method to gaze estimation in videos is left for future work.
\section*{Acknowledgment}
The authors are thankful to Gene Cheung for his valuable comments to improve the presentation of this paper.
This work is in part supported by JST CREST (Grant No. JPMJCR14D1) and by Grant-in-Aid for Scientific Research (Grant No. 16H02851) of the Ministry of Education, Culture, Sports, Science, and Technology of Japan.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Deep learning (DL) applications play an increasingly important role in medicine, yet, ``everyone participating in medical image evaluation with machine learning is data starved" \cite{Kohli2017}.
When the same imaging technique is used, e.g. confocal microscopy of a specific tissue type with a matching staining protocol, image analysis networks should be applicable to datasets not seen during training without a substantial drop in performance.
However, due to bias introduced during the acquisition or processing of datasets (`domain shift'), the generalizability of deep neural networks is negatively affected.
If bias cannot be accounted for, models have to be re-trained every time a new dataset becomes available.
In consequence, new data can only be used if a large cohort of labeled samples exists or can be created.
Traditional techniques try to compensate domain shift through normalization or simple image transformations, like an increase of the contrast through histogram equalization~\cite{Gonzalez2007}.
However, as indicated by de Bel et al.\ \cite{DeBel2019}, simple approaches are limited in how much bias they can capture and adjust, leading to insufficient transformations.
Recent evidence suggests that deep generative models could be utilized to modify samples, creating novel images that are close to the reference domain while not altering the original content.
This way, images of a new dataset can be adjusted to the bias of the reference domain as a pre-processing step (`bias transfer'), in order to avoid re-training large image analysis networks.
The goal is that new data of the same modality can be handled correctly and without a large drop in performance.
Naturally, content preservation is of the utmost importance, since hallucination artifacts introduced by the generative models could lead to misdiagnosis.
Reliable bias transfer approaches would also enable the usage of DL in settings that do not allow for frequent creation and retraining of models, such as decision support systems in hospitals.
In this paper, we aim to improve existing generative models to enable stable bias transfer for medical histopathology images.
In addition, we propose guidelines for testing and evaluating bias transfer models in settings with similar transformation goals.
To benchmark the quality of the bias transfer for three state-of-the-art generative models, cycle-consistent generative adversarial networks (cycleGANs) \cite{Zhu2017}, U-Net cycleGANs \cite{DeBel2019}, and Fixed-Point GANs \cite{Choi2018}, we measured the content preservation, target domain adaptation, and impact on image segmentation and classification performance.
To increase the performance of the models, we tested three additional losses designed to improve the quality in terms of content (`MS-SSIM loss')~\cite{Armanious2019}, structure integrity (`structure loss')~\cite{Ma2020} and intensity of transformation (`additional identity loss')~\cite{DeBel2019}.
As a baseline, histogram matching~\cite{Gonzalez2007} for color correction in a decorrelated color space~\cite{Reinhard2001} (`color transfer') is included in our evaluation, which utilizes a single, random image to represent the target domain.
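For reference, this baseline can be sketched as follows; the snippet below is a minimal illustration assuming \texttt{scikit-image}, applying histogram matching per channel in the decorrelated LAB color space, and the actual baseline implementation may differ in details:
\begin{verbatim}
import numpy as np
from skimage.color import rgb2lab, lab2rgb
from skimage.exposure import match_histograms

def color_transfer(source_rgb, target_rgb):
    # Match the channel histograms of `source_rgb` to those of a single,
    # randomly chosen target-domain image in the decorrelated LAB space.
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    matched = match_histograms(src, tgt, channel_axis=-1)
    return np.clip(lab2rgb(matched), 0.0, 1.0)
\end{verbatim}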
\section{Related work}
\label{sec:relatedwork}
In a medical context, most datasets that require bias transfer are unpaired, meaning that a ground truth for the transformed images does not exist.
A well-established approach for learning unpaired image-to-image translations is using CycleGANs~\cite{Zhu2017}.
CycleGANs have already been used for stain transforming renal tissue sections~\cite{DeBel2019} or histological images of breast cancer~\cite{Shaban2019}.
Unfortunately, nearly every paper introduces its own variation of the cycleGAN.
To the best of our knowledge, an extensive comparison of bias transfer algorithms for microscopy images does not exist yet, which makes selecting a fitting approach a non-trivial task.
One common enhancement is using a U-Net structure for the generators~\cite{Manakov2019,DeBel2019}.
In a U-Net, the encoder and decoder structures of the neural network are connected via skip connections between layers of the same size~\cite{Ronneberger2015}.
Thus, the generators can propagate information directly without routing it through the bottleneck, improving the level of detail of the images.
In this paper, we refer to the modified cycleGAN approach as U-Net cycleGAN.
Besides architecture modifications or simple hyperparameter adaptations like changing the learning rate~\cite{DeBel2019,Manakov2019}, or the number of images per batch~\cite{DeBel2019,Shaban2019}, a frequent change is adding additional losses to incorporate demands the network has to fulfill.
Armanious et al.\ used the multi-scale structural similarity index (MS-SSIM)~\cite{Wang2003} in \cite{Armanious2019} as an additional cycle loss between the original and the cycle-reconstructed images.
Their goal was to penalize structural discrepancies between the images.
In their experiments, the additional loss led to sharper results with better textural details.
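One possible realization of this additional cycle loss in TensorFlow (assuming image batches scaled to $[0,1]$; the loss weighting is handled separately) is:
\begin{verbatim}
import tensorflow as tf

def ms_ssim_cycle_loss(original, cycle_reconstructed, max_val=1.0):
    # 1 - MS-SSIM, so that structurally identical images yield zero loss.
    ms_ssim = tf.image.ssim_multiscale(original, cycle_reconstructed,
                                       max_val=max_val)
    return 1.0 - tf.reduce_mean(ms_ssim)
\end{verbatim}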
Another loss has been proposed by de Bel et al.\ in~\cite{DeBel2019}.
They included an additional identity loss that is decreased to zero over the first 20 epochs of training.
The loss is an addition to the original identity loss of cycleGANs~\cite{Zhu2017}.
In their experiments, the loss stabilized the training process and led to faster convergence since it forces the generator to look for transformations close to identity first, effectively shrinking the solution space.
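A sketch of the corresponding decay schedule is given below; the initial weight of $5$ matches the weighting used in our experiments, and the variable names are illustrative:
\begin{verbatim}
def additional_identity_weight(epoch, initial_weight=5.0, decay_epochs=20):
    # Linearly decrease the weight of the extra identity loss to zero
    # over the first `decay_epochs` epochs, as proposed by de Bel et al.
    return initial_weight * max(0.0, 1.0 - epoch / decay_epochs)
\end{verbatim}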
Moreover, Ma et al.\ proposed a loss based on the sub-part of the structural similarity index (SSIM) that only evaluates local structure changes~\cite{Ma2020}.
They proposed the loss to enhance the quality of endoscopy and confocal microscopy images that suffer from ``intensity inhomogeneity, noticeable blur and poor contrast''~\cite{Ma2020}.
The structure loss directly compares the original and transformed images.
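The structure component of SSIM is the locally normalized covariance $s(x,y) = (\sigma_{xy} + c_3)/(\sigma_x \sigma_y + c_3)$; a sketch of the resulting loss for NHWC image batches in $[0,1]$ (the window size and stability constant are illustrative defaults) is:
\begin{verbatim}
import tensorflow as tf

def structure_loss(x, y, window=11, c3=(0.03 ** 2) / 2):
    # Local means, standard deviations, and covariance via average pooling.
    pool = lambda t: tf.nn.avg_pool2d(t, window, strides=1, padding="VALID")
    mu_x, mu_y = pool(x), pool(y)
    sigma_x = tf.sqrt(tf.nn.relu(pool(x * x) - mu_x ** 2))
    sigma_y = tf.sqrt(tf.nn.relu(pool(y * y) - mu_y ** 2))
    sigma_xy = pool(x * y) - mu_x * mu_y
    s = (sigma_xy + c3) / (sigma_x * sigma_y + c3)
    return 1.0 - tf.reduce_mean(s)  # 0 for structurally identical images
\end{verbatim}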
Unfortunately, cycleGANs are designed for transforming between two domains only.
Therefore, transforming multiple domains to the reference domain requires individual models and training runs.
In \cite{Choi2018}, Choi et al.\ proposed StarGANs, which transform between any number of domains with a single generator and discriminator.
The generator produces samples based on the input image and the label of the target domain.
Even if only two domains exist, StarGANs can potentially outperform cycleGAN-based approaches due to the benefit that all data samples are used to train a single generator.
Siddiquee et al.\ proposed Fixed-Point GANs (FPGs), which extend StarGANs with a conditional identity loss, thus reducing hallucination artifacts~\cite{Siddiquee2019}.
If a new dataset could be adjusted by adding a new condition to FPG, bias transfer would be an exceptionally fast solution for the generalizability problem of image analysis networks.
\section{Methodology}
In the following, the overall workflow, as well as the generative models (Subsection \ref{sub:gen}) and our datasets (Subsection \ref{sub:data}) are introduced.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{images/methodology_overview_v2.png}
\caption{Overview of the image analysis pipeline including bias transfer: First, the original input images are downsampled \textbf{a} and transformed by the generative models \textbf{b} trained with or without additional losses \textbf{c}. Afterwards, they are upsampled to the original size \textbf{d}. The transformed images are then used as inputs for the segmentation or classification networks \textbf{e}. $\textrm{NEW}_{k}$ and $\textrm{TAR}_{k}$ refer to the domains of the kidney samples, whereas $\textrm{NEW}_{p}$ and $\textrm{TAR}_{p}$ are the domains of the prostate samples.
}
\label{fig:metho_overview}
\end{figure}
As part of this work, several generative approaches are introduced into an existing image analysis pipeline.
An overview of the complete workflow with bias transfer is shown in Figure~\ref{fig:metho_overview}.
As bias transfer is used as a pre-processing step here, the transformed images must retain the original image size.
However, generative models are usually not designed for images larger than $256\times256$ pixels.
We used the Laplacian pyramid-based approach introduced in~\cite{Engin2018} to downsample the images to $256\times256$ pixels pre- (Figure~\ref{fig:metho_overview} a $\rightarrow$ b), and back to the original size post-transformation (Figure~\ref{fig:metho_overview} c $\rightarrow$ d), which is $1024\times1024$ pixels for the kidney (upper row) and $2048\times2048$ for the prostate images (lower row).
The approach involves replacing the smallest layer of the pyramids with the respective transformed image for upsampling.
Hence, the edges of the original image, which are typically lost at a low resolution, are added back into the image, boosting the quality of the transformed image.
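A sketch of this pyramid-based wrapper using OpenCV on float images is shown below; \texttt{generator} stands for any of the trained models, and \texttt{levels} is $2$ for the kidney and $3$ for the prostate images:
\begin{verbatim}
import cv2

def pyramid_transform(image, generator, levels):
    # Build a Laplacian pyramid: only the smallest level is transformed,
    # and the stored detail (edge) layers are added back when upsampling.
    details, current = [], image
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=current.shape[1::-1])
        details.append(current - up)
        current = down
    current = generator(current)          # bias transfer at 256 x 256
    for detail in reversed(details):      # re-attach high-frequency detail
        current = cv2.pyrUp(current, dstsize=detail.shape[1::-1]) + detail
    return current
\end{verbatim}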
\subsection{Generative approaches}\label{sub:gen}
For this paper, we implemented, tested, and modified three state-of-the-art generative models for image generation, as shown in Figure~\ref{fig:metho_overview}b.
For this purpose, we re-created the original implementation by Zhu et al.\ for cycleGANs~\cite{Zhu2017} in Tensorflow 2.0 with a PatchGAN discriminator that judges $16\times16$ patches per image.
The U-Net cycleGAN architecture has been constructed by adding skip connections to the generators.
For FPG, the generator input consists of an image and the one-hot encoded label of the target domain.
Otherwise, the generator architecture is identical to that of cycleGAN.
Instead of using the discriminator structure described in the original FPG paper, we created a modified version of the cycleGAN discriminator by replacing the original output layer with two outputs, one for predicting the image's domain and one for the authenticity of the individual image patches (real vs.\ generated).
The modified cycleGAN discriminator has fewer trainable parameters and judges more patches per image than the original FPG discriminator.
In~\cite{Isola2017} it has been shown that judging more patches per image leads to images with more detail for cycleGANs.
It seems evident that FPGs could also benefit from this modification.
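One possible layout of this two-headed discriminator in Keras is sketched below; filter counts and depths are illustrative, and four stride-2 convolutions reduce $256\times256$ inputs to a $16\times16$ patch grid:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(shape=(256, 256, 3), n_domains=2, filters=64):
    img = layers.Input(shape=shape)
    x = img
    for i in range(4):  # 256 -> 128 -> 64 -> 32 -> 16
        x = layers.Conv2D(filters * 2 ** i, 4, strides=2, padding="same")(x)
        x = layers.LeakyReLU(0.2)(x)
    patch_out = layers.Conv2D(1, 4, padding="same")(x)   # 16x16 real/fake
    domain = layers.GlobalAveragePooling2D()(x)
    domain_out = layers.Dense(n_domains)(domain)         # domain logits
    return tf.keras.Model(img, [patch_out, domain_out])
\end{verbatim}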
The hyperparameters for cycleGAN and U-Net cycleGAN are mainly based on the original cycleGAN implementation.
The networks are trained for up to 200 epochs with an initial steady learning rate of $0.0005$ (original: $0.0002$) for the first 100 epochs and a linearly decaying learning rate that reaches 0 after 200 epochs.
For FPG, an initial learning rate of $0.0001$ has been used, as proposed by Choi et al.\ \cite{Choi2018}.
We used a batch size of one.
To improve the generative performance and reduce hallucination artifacts of the three models, we included additional terms in the loss functions (see Figure~\ref{fig:metho_overview}c), specifically the additional identity loss $\mathcal{L}_{\textrm{+id}}$~\cite{DeBel2019}, the MS-SSIM loss $\mathcal{L}_{\textrm{+ms-ssim}}$~\cite{Armanious2019}, and the structure loss $\mathcal{L}_{\textrm{+struc}}$~\cite{Ma2020}.
The original losses of (U-Net) cycleGANs are the adversarial loss ($adv$), the cycle-reconstruction loss ($cyc$) and the identity loss ($id$).
For FPGs, the original losses are the cycle-reconstruction loss ($cyc$), the domain-classification loss ($domain$), the gradient penalty loss ($gp$) and the conditional identity loss ($id$).
For our experiments, the losses have been weighted with $\lambda_{\textrm{adv}} =1$, $\lambda_{\textrm{cyc}} =10$, and $\lambda_{\textrm{id}}=10$ (original: $\lambda_{\textrm{id}}=5$) for cycleGAN and U-Net cycleGAN and with $\lambda_{\textrm{cyc}}=\lambda_{\textrm{gp}}= \lambda_{\textrm{id}} = 10$ and $\lambda_{\textrm{domain}}=1$ for FPG.
All additional losses have been weighted with $\lambda=5$.
We added the structure loss and the MS-SSIM loss individually and, for the MS-SSIM loss, in combination with the additional identity loss (`combined losses').
The structure loss was not combined with the other additional losses since the direct comparison of the original and transformed images covers similar objectives as the `combined losses'.
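In code, the weighted generator objective of the (U-Net) cycleGAN variants then amounts to the following sketch, where the individual loss terms are computed as described above:
\begin{verbatim}
LAMBDA_ADV, LAMBDA_CYC, LAMBDA_ID, LAMBDA_EXTRA = 1.0, 10.0, 10.0, 5.0

def generator_objective(adv, cyc, idt, extra_losses=()):
    # `extra_losses` holds the optional MS-SSIM, structure, or
    # additional identity terms, each weighted with lambda = 5.
    total = LAMBDA_ADV * adv + LAMBDA_CYC * cyc + LAMBDA_ID * idt
    return total + LAMBDA_EXTRA * sum(extra_losses)
\end{verbatim}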
Our implementation is publicly available on GitHub.\footnote{\url{https://github.com/imsb-uke/bias-transfer-microscopy}}
\subsection{Data}\label{sub:data}
In this paper, we perform bias transfer for two modalities, IF images of kidney biopsies and H\&E stained tissue microarray (TMA) spots of prostate biopsies.
In Figure~\ref{fig:metho_overview}, the upper row shows the workflow for the kidney and the lower row for the prostate biopsy samples.
Each modality includes two sub-datasets (domains) originating from different hospitals and has a specified transformation direction.
This is due to the fact that bias transfer is applied here to overcome the domain shift between a new and a target domain which has been used for training a segmentation or classification network, as outlined in Figure~\ref{fig:metho_overview}e.
\subsubsection{Kidney biopsies}
The kidney dataset consists of 2D kidney IF images~\cite{Zimmermann2021}.
The images have been used to train a modified U-Net with dual output for the automatic segmentation of the glomeruli and their podocytes, an important cell type inside the glomeruli that is indicative of the health of the kidney~\cite{Zimmermann2021}.
To highlight the glomeruli and podocytes, three different biomarkers were used: DAPI for cell nuclei (blue), WT1 for podocyte cytoplasm (red), and DACH1 for podocyte nuclei (green).
The images originate from two distinct hospitals (two domains: the target domain `$\textrm{TAR}_{k}$' and the new domain `$\textrm{NEW}_{k}$'), which were imaged by two different operators with differing confocal laser scanning microscopes (Nikon A1 Confocal and Zeiss Confocal LSM 800).
In total, subset $\textrm{TAR}_{k}$ contains 269 images and subset $\textrm{NEW}_{k}$ contains 375 images.
Training and evaluating the segmentation requires annotations.
For this purpose, a subset of the images has been annotated by medical experts ($\textrm{TAR}_{k}$: 109, $\textrm{NEW}_{k}$: 90).
Further details on the datasets and their biological meaning can be found in~\cite{Zimmermann2021}.
For bias transfer, the split into training, validation, and test sets was performed randomly, under three constraints: images originating from the same patient (up to 16) remain in the same set; a ratio of approximately 70\% for training and 15\% each for validation and testing is reached; and the validation and test sets consist only of annotated images.
While bias transfer does not require a ground truth, masks are necessary to evaluate the segmentation quality achieved on the transformed images.
Accordingly, the $\textrm{TAR}_{k}$ images were split into 180 for training, 44 for validation, and 45 for testing.
The $\textrm{NEW}_{k}$ dataset was split into 285 images for training, 46 for validation, and 44 for testing.
\subsubsection{Prostate biopsies}
The prostate dataset consists of circular biopsies (spots) originating from radical prostatectomies (RPE) that have been assembled with the help of TMAs.
The images originate from two different hospitals (`$\textrm{TAR}_{p}$' and `$\textrm{NEW}_{p}$').
The $\textrm{TAR}_{p}$ dataset consists of 2866 and the $\textrm{NEW}_{p}$ dataset of 886 images.
Both datasets were created for staging prostate cancer with the help of Gleason patterns, which categorize the shape of the glands.
Tissue can be classified as benign or as Gleason pattern 3, 4, or 5, where a higher number represents more malignant tumor tissue.
The Gleason score (GS) is then calculated as the sum of the most prevalent and the worst occurring patterns in a sample~\cite{Chen2016}.
Unfortunately, the inter-pathologist agreement on GSs is usually low~\cite{Egevad2013}, which is why automated and stable GS prediction is of high clinical value.
We trained an InceptionNet-V3-based classification network, pre-trained on ImageNet~\cite{Deng2009}, on images of the $\textrm{TAR}_{p}$ dataset, reaching a test accuracy of $81.6\%$.
To exclude image background, only the innermost $2048\times2048$ pixels of the spots have been used.
The classification was limited to single Gleason patterns only (`0' = benign, `3+3', `4+4', and `5+5') since TMA spots with two (or more) differing Gleason patterns could potentially contain patterns that occur solely outside of the innermost pixels.
During training, all images are pre-processed with normalization, and the data is augmented with random rotations, shearing, shifting, and flipping.
After pre-processing, the images are resized to $224\times224$ pixels, which is the input size of the InceptionNet.
The $\textrm{NEW}_{p}$ dataset is publicly available~\cite{Arvaniti2018a} and does not include Gleason score annotations.
However, the images have been annotated via segmentation masks to identify areas containing a specific Gleason pattern.
To evaluate the classification performance of our network, we used the segmentation masks to calculate the Gleason scores of the image centers.
We then used the same split into training, validation, and test sets that was defined for the dataset in the original publication~\cite{Arvaniti2018b}.
The training and validation sets have been annotated by one pathologist (pathologist 1) and the test set has been annotated independently by two pathologists (pathologist 1 and pathologist 2), resulting in 506 images (429 with a single Gleason pattern) for training, 133 (111) for validation, and 245 (199 -- pathologist 1, 156 -- pathologist 2) images for testing.
For the $\textrm{TAR}_{p}$ dataset, the split was determined according to a random stratified sampling strategy, resulting in the same label distribution for the training, validation, and test sets.
The resulting split consists of 2000 (1136 with a single Gleason pattern) images for training, 432 (246) images for validation, and 434 (245) images for testing.
All images were annotated by the same pathologist (pathologist 3).
Nonetheless, the label distribution is not uniform: $\textrm{TAR}_{p}$ overrepresents 3+3 samples, whereas $\textrm{NEW}_{p}$ mostly contains 4+4 samples.
\section{Results and discussion}
We based the evaluation of the generative approaches on three factors: content preservation, target domain imitation, and impact on the segmentation or classification performance.
Two metrics have been selected for a quantitative evaluation of the transformation quality: the structural similarity index (SSIM)~\cite{Wang2004} and the Fr\'echet Inception Distance (FID)~\cite{Heusel2017}.
The SSIM is a well-established metric that has been used here to calculate the degradation of structural information (content) between the original (Figure~\ref{fig:metho_overview}a) and the transformed (Figure~\ref{fig:metho_overview}d) input images.
The FID measures the feature-wise distance between two domains.
While bias transfer can be trained on all images, for the prostate dataset, the impact on the classification can only be evaluated on the single Gleason pattern images.
Nonetheless, we decided to evaluate all images regarding the SSIM and FID since the differing annotations given by pathologists 1 and 2 create two non-identical subsets of test images.
For our evaluation, the FID between the target domain and the transformed images is calculated and compared to the FID between the original domains.
A lower FID implies that images are visually closer to the target domain after bias transfer.
Since the FID highly depends on the image content, validation and test scores should be considered separately.
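Given InceptionNet activations for two image sets, the FID reduces to the Fr\'echet distance between Gaussians fitted to the activations, $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}\big(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2}\big)$; a minimal sketch:
\begin{verbatim}
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    # Rows are per-image Inception activations.
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts
        covmean = covmean.real     # caused by numerical noise
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)
\end{verbatim}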
Finally, for the kidney data, the segmentation accuracy pre- and post-bias transfer is compared.
The segmentation network predicts one segmentation mask for the glomerulus as a whole and one for the podocytes (see Figure~\ref{fig:metho_overview}e).
The quality of the segmentation is measured with three Dice scores~\cite{Dice1945}, the pixel-wise segmentation of the glomeruli and podocytes and the object-wise segmentation of the podocytes.
For the prostate data, the classification accuracy and the macro F1 scores pre- and post-bias transfer are compared.
The macro F1 score can give insights into the classification performance for underrepresented classes.
Considering class imbalance is especially important here since the label distributions of $\textrm{TAR}_{p}$ and $\textrm{NEW}_{p}$ do not match.
Every variation has been trained five times with different random seeds, including the baseline.
The training epoch with the lowest generator validation loss for transformations from $\textrm{NEW}$ to $\textrm{TAR}$ has been used for the evaluation of the individual runs.
For each approach, the run with the best results on the validation set has been evaluated on the test set.
\subsection{Results}
\subsubsection{Kidney biopsies}
The SSIM and FID scores for the validation set are visualized in Figure~\ref{fig:boxplots_kidney}.
They indicate that the transformations performed by the U-Net cycleGAN and FPG variations had the highest quality, with U-Net cycleGAN with structure loss leading to the overall best and most stable SSIM scores.
Regarding the FID, U-Net cycleGAN with combined losses largely improved the distance to the target domain for the validation and test sets.
The combination of additional identity and MS-SSIM loss (combined losses) had a stabilizing effect on the training process of cycleGAN and U-Net cycleGAN.
The structure loss had a positive effect on U-Net cycleGAN and FPG; for cycleGAN, however, it led to mode collapse, with the generator producing a single output irrespective of the input.
\begin{figure}[p]
\centering
{
\subfloat[$1-\textrm{SSIM}$]{%
\includegraphics[width=0.47\textwidth]{{images/boxplots/plot_metric_boxplot_kidney_1-SSIM_val_test.png}}
\label{fig:ssim_boxplot_kidney}
}
\subfloat[$\textrm{FID}$]{
\includegraphics[width=0.47\textwidth]{{images/boxplots/plot_metric_boxplot_kidney_FID_val_test.png}}
\label{fig:fid_boxplot_kidney}
}
}
\caption{Boxplots visualizing the transformation metrics $1-\textrm{SSIM}$ (a) and FID (b) for the validation and test sets of the kidney biopsies. The dashed lines highlight the original FID scores and the red dots show the values achieved by the runs that performed best on the validation set for each variation.}
\label{fig:boxplots_kidney}
\end{figure}
\begin{table}[p]
\centering
\caption{Means ($\mu_{test}$) and standard deviations ($\sigma_{test}$) of the Dice scores on the original test sets and the relative performance for the transformed images. `Dice glom.\ pix.' evaluates the pixel-wise segmentation of the glomeruli, and `Dice podo.\ pix.' of the podocytes. `Dice podo.\ obj.' evaluates the object-wise segmentation of the podocytes. Significance ($p<0.05$) is marked with $*$.}
\label{tab:kidney_segmentation_dice}
\begin{tabular}{lS[retain-explicit-plus, table-format=-1.3, table-space-text-post={*} ]S[table-format=1.2]S[retain-explicit-plus,table-format=-1.3, table-space-text-post={*}]S[table-format=1.2]S[retain-explicit-plus, table-format=-1.3, table-space-text-post={*}]S[table-format=1.2]}
\toprule
& \multicolumn{2}{S}{\bfseries{Dice glom. pix.}} & \multicolumn{2}{S}{\bfseries{Dice podo.\ obj.}} & \multicolumn{2}{S}{\bfseries{Dice podo.\ pix.}} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
& {$\mu_{test}$} & {$\sigma_{test}$} & {$\mu_{test}$} & {$\sigma_{test}$} & {$\mu_{test}$} & {$\sigma_{test}$} \\
\midrule
Test $\textrm{TAR}_{k}$ & 0.953 & 0.02 & 0.933 & 0.06 & 0.929 & 0.01 \\
Test $\textrm{NEW}_{k}$ & 0.909 & 0.05 & 0.877 &0.05 & 0.730 & 0.07 \\
\midrule
Color transfer & -0.158 * & 0.32 & -0.163 * & 0.27 & -0.079 & 0.26 \\
\midrule
CycleGAN & -0.002 & 0.06 & -0.053 * & 0.10 & +0.020 & 0.05\\
$\ $ + MS-SSIM & +0.014 & 0.05 & -0.013 & 0.10 & +0.055 * & 0.04 \\
$\ $ + structure & -0.020 & 0.10 & -0.055* & 0.09 & -0.017 & 0.09\\
$\ $ + combined & -0.005 * & 0.05 & -0.031 * & 0.09 & +0.046 & 0.05\\
U-Net CycleGAN & +0.018 * & 0.04 & \bfseries +0.015 & 0.05 & +0.061 * & 0.04 \\
$\ $ + MS-SSIM & +0.020 * & 0.03 & -0.002 & 0.07 & +0.039* & 0.04 \\
$\ $ + structure & +0.017 & 0.04 & -0.023 & 0.11 & +0.064* & 0.05 \\
$\ $ + combined & +0.022* & 0.03 & +0.000 & 0.08 & \bfseries +0.067* & 0.03 \\
Fixed-Point GAN & -0.002 & 0.06 & -0.019 & 0.09 & +0.060* & 0.05 \\
$\ $ + MS-SSIM & \bfseries +0.031* & 0.02 & -0.005 & 0.07 & +0.040* & 0.04\\
$\ $ + structure & +0.022* & 0.04 & -0.007 & 0.07 & +0.055* & 0.04 \\
$\ $ + combined & +0.025* & 0.03 & -0.002& 0.07 & +0.048* & 0.04 \\
\bottomrule
\end{tabular}
\end{table}
The segmentation scores on the test set are shown in Table \ref{tab:kidney_segmentation_dice}.
Here, `Test $\textrm{TAR}_{k}$' only includes images not used for training the segmentation U-Net (21 images).
Color transfer worsened the segmentation scores, despite improving the FID.
As for the SSIM and FID scores, the U-Net cycleGAN and Fixed-Point GAN variations led to the best results.
For those approaches, three out of four variations significantly improved the pixel-level Dice score for the glomeruli and all four for the podocytes.
However, unlike for FPG, not all U-Net cycleGAN variants worsened the object-level Dice score for the podocytes.
Overall, U-Net cycleGANs with combined losses performed the best for the task at hand.
The aforementioned approach produced hallucination artifact-free images that significantly improved the pixel-level segmentation scores of the glomeruli (0.909 to 0.923, $p=0.005$), and podocytes (0.730 to 0.797, $p<0.0001$), due to a strong adaption to the target domain.
An example transformation performed by U-Net cycleGAN and its effect on the segmentation result can be found in~\cite{Zimmermann2021}.
\subsubsection{Prostate biopsies}
For the prostate biopsies, the boxplots in Figure~\ref{fig:boxplots_prostate} show that FPG trained with the structure loss outperformed the other approaches regarding content preservation.
While the overall lowest FID score on the test set was achieved by cycleGAN with structure loss, this result is not reproducible for different random seeds -- the training process is not stable enough.
\begin{figure}[p]
\centering
{
\subfloat[$1-\textrm{SSIM}$]{%
\includegraphics[width=0.47\textwidth]{{images/boxplots/plot_metric_boxplot_prostate_1-SSIM_val_test.png}}
\label{fig:ssim_boxplot_prostate}
}
\subfloat[$\textrm{FID}$]{
\includegraphics[width=0.47\textwidth]{{images/boxplots/plot_metric_boxplot_prostate_FID_val_test.png}}
\label{fig:fid_boxplot_prostate}
}
}
\caption{Boxplots visualizing the transformation metrics $1-\textrm{SSIM}$ (a) and FID (b) for the validation and test sets of the prostate biopsies. The dashed lines highlight the original FID scores and the red dots mark the results achieved by the runs that performed best on the validation set for each variation.}
\label{fig:boxplots_prostate}
\end{figure}
\begin{table}[p]
\centering
\caption{Means ($\mu_{val}$) and standard deviations ($\sigma_{val}$) of the accuracy and macro-weighted F1 scores for the classification of the Gleason scores on the original validation and test sets and the relative performance for the transformed images. For the test set, the predicted Gleason scores are compared to the annotations of two medical professionals ($\mu_{test 1}$ and $\mu_{test 2}$). The best results are bold.}
\label{tab:prostate_classification_metrics}
\begin{tabular}{lS[retain-explicit-plus, table-format=-1.3]S[table-format=1.2]S[retain-explicit-plus,table-format=-1.3]S[retain-explicit-plus,table-format=-1.3]S[retain-explicit-plus, table-format=-1.3]S[table-format=1.2]S[retain-explicit-plus,table-format=-1.3]S[retain-explicit-plus,table-format=-1.3]}
\toprule
& \multicolumn{4}{S}{\bfseries{Accuracy}} & \multicolumn{4}{S}{\bfseries{F1 score}} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-9}
& {$\mu_{val}$} & {$\sigma_{val}$} & {$\mu_{test 1}$} & {$\mu_{test 2}$} & {$\mu_{val}$} & {$\sigma_{val}$} & {$\mu_{test 1}$} & {$\mu_{test 2}$} \\
\midrule
Test $\textrm{TAR}_{p}$ & 0.763 & 0.00 & 0.816 & 0.816 & 0.665 & 0.00 & 0.742 & 0.742 \\
Test $\textrm{NEW}_{p}$ & 0.576 & 0.00 & 0.256 & 0.217 & 0.454 & 0.00 & 0.249 & 0.242 \\
\midrule
Color transfer & -0.117 & 0.01 & -0.105 & -0.108 & -0.178 & 0.03 & -0.096 & -0.107 \\
\midrule
CycleGAN & +0.073 & 0.04 & +0.090 & +0.038 & +0.066 & 0.04 & +0.069 & +0.032 \\
$\ $ + MS-SSIM & +0.063 & 0.06 & +0.065 & +0.012 & +0.086 & 0.04 & +0.050 & +0.009 \\
$\ $ + structure & +0.057 & 0.03 & +0.100 & +0.083 & +0.046 & 0.03 & +0.064 & +0.060 \\
$\ $ + combined & +0.111 & 0.00 & +0.140 & \bfseries +0.141 & \bfseries +0.120 & 0.01 & +0.108 & \bfseries +0.128 \\
U-Net CycleGAN & +0.095 & 0.03 & +0.095 & +0.070 & +0.086 & 0.02 & +0.072 & +0.066 \\
$\ $ + MS-SSIM & +0.099 & 0.02 & +0.135 & \bfseries +0.141 & +0.102 & 0.02 & +0.107 & +0.120 \\
$\ $ + structure & +0.097 & 0.04 & +0.100 & +0.096 & +0.090 & 0.03 & +0.075 & +0.086 \\
$\ $ + combined & +0.064 & 0.03 & +0.120 & +0.083 & +0.065 & 0.03 & +0.082 & +0.067 \\
Fixed-Point GAN & +0.109 & 0.00 & \bfseries +0.145 & +0.128 & +0.106 & 0.01 & \bfseries+0.112 & +0.113 \\
$\ $ + MS-SSIM & +0.117 & 0.00 & +0.090 & +0.051 & +0.106 & 0.00 & +0.068 & +0.054 \\
$\ $ + structure & \bfseries+0.122 & 0.02 & +0.125 & +0.089 & +0.107 & 0.02 & +0.097 & +0.082 \\
$\ $ + combined & +0.117 & 0.01 & +0.140 & +0.096 & +0.109 & 0.00 & +0.111 & +0.086 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
{
\subfloat[Confusion matrices]{
\includegraphics[width=0.31\textwidth]{images/confusion_matrices/tar_p_new_p_1_test_confusion_relative_with_131D.png}
\label{fig:tarp_newp1_test_confusion}
}
\subfloat[FPG + structure]{%
\includegraphics[width=0.31\textwidth]{{images/hall_artifacts/fpg_ma_heatmap_and_image.png}}
\label{fig:fpg_ma_image}
}%
\subfloat[CycleGAN + comb.]{
\includegraphics[width=0.31\textwidth]{{images/hall_artifacts/cyc_com_heatmap_and_image.png}}
\label{fig:cyc_com_image}
}
}
\caption{(a) Relative confusion matrices for the classification of the test sets for $\textrm{TAR}_{p}$ (top) and $\textrm{NEW}_{p}$ annotated by pathologist 1 (bottom), as well as transformed test images with and without hallucination artifacts and their corresponding transformation heatmaps in (b) and (c).}
\label{fig:artifact_examples_prostate}
\end{figure}
When we look at the accuracies achieved on the validation set in Table~\ref{tab:prostate_classification_metrics}, FPG with structure loss achieved the largest improvement (from $0.576$ to $0.698$).
However, this was not reproduced on the test set or for the macro-weighted F1 scores.
Regarding the test accuracies, cycleGAN with combined losses and FPG resulted in the best scores.
While FPG achieved the best accuracy and F1 scores for the images annotated by pathologist 1, cycleGAN with combined losses outperformed all other approaches for the images annotated by pathologist 2.
Yet, the images transformed by cycleGAN with combined losses contained hallucination artifacts, as is reflected in the SSIM scores (Figure~\ref{fig:ssim_boxplot_prostate}).
Unfortunately, for the classification network at hand, a direct link between transformation and classification quality does not seem to exist.
In Figure~\ref{fig:cyc_com_image} an image transformed by cycleGAN with combined losses can be seen.
The transformation adds a purple streak across the upper third of the image, which is also reflected in the heatmap of the changes in pixel intensities through the transformation.
In contrast, the change heatmap of the images transformed by FPG with structure loss indicates that the color intensity of the background was reduced and that small cell structures remained intact (see Figure~\ref{fig:fpg_ma_image}).
Hence, to avoid basing the classification on hallucination artifacts, the best bias-transfer approach cannot be judged based on classification accuracies only.
Another factor that could contribute to the mismatch of transformation and classification quality is the very low accuracy on the original $\textrm{NEW}_{p}$ test images, which is $25.6\%$ for the images annotated by pathologist 1 and $21.7\%$ for pathologist 2, compared to $81.6\%$ achieved on the $\textrm{TAR}_{p}$ test set.
The samples of the two domains were annotated by different pathologists, who might interpret structures disparately, resulting in different Gleason scores.
As a result, a transformation to the target domain might not be sufficient to reach high accuracy on this dataset.
This is amplified by the fact that the classification network has the lowest accuracy for 4+4 samples (see Figure~\ref{fig:tarp_newp1_test_confusion}), which is the most frequent class in $\textrm{NEW}_{p}$.
For the original images of $\textrm{NEW}_{p}$ most of the 4+4 samples were misclassified as 5+5 or 0 (benign) (see Figure~\ref{fig:tarp_newp1_test_confusion}).
As a consequence, an accidental improvement of the classification accuracy due to hallucination artifacts cannot be ruled out.
All those reasons indicate that FPG with structure loss may still be considered the best performing bias transfer for the prostate biopsy samples, despite not resulting in the highest classification scores.
It stably led to improved accuracies and F1 scores across all runs while keeping the balance between not adding hallucination artifacts to the images and imitating the target domain.
\subsection{Discussion}
Currently, bias can be overcome with hand-engineered methods~\cite{DeBel2019} or with deep generative models, which are universal function approximators.
While our datasets do not require a strong transformation, the simple baseline approach was not a sufficient solution.
Like for many simple approaches, the transformation is applied uniformly across the whole image.
Therefore, non-linear bias cannot be captured.
The resulting tinted images hindered the segmentation of the glomeruli and their podocytes, as well as the classification of prostate cancer.
In contrast, nine generative approaches significantly improved the pixel-wise segmentation of the podocytes and six the segmentation of the glomeruli.
The classification accuracies and F1 scores were improved by every variation.
However, as indicated by the varying SSIM scores, not all approaches resulted in artifact-free images.
In our experiments, the widely used cycleGAN architecture did not perform well for bias transfer without any modifications.
The training process was unstable and often led to hallucination artifacts.
This matches reports in previous studies.
Manakov et al.\ described that, in their experiments, vanilla cycleGANs generated blurry images with checkerboard-like artifacts~\cite{Manakov2019}.
CycleGANs were created for general image-to-image transformation.
While goals like identity-transforming images that belong to the target domain and reconstructing the original from a transformed image are already incorporated in the basic model, other objectives have to be added explicitly and balanced carefully to achieve maximum performance for the task at hand.
Otherwise, the instability of the cycleGAN training process can be amplified, leading to complete mode collapse, as we experienced with the kidney biopsies.
FPGs were invented to convert diseased to healthy samples, which requires removing structures from the images.
For bias transfer, however, the content of the image has to stay the same.
Since this is not explicitly safeguarded by FPG, it is not surprising that the additional losses had a positive impact on the transformation.
U-Net cycleGANs, on the other hand, did not benefit as much from the losses as the other approaches.
The structure loss (and the combined losses) did, however, further stabilize the training process, since the transformation goals were explicitly incorporated into the loss function.
In our evaluation, the transformation quality was well reflected in the segmentation scores, but only partially for the classification.
It is particularly important to note that if the best approach is selected solely based on the resulting classification or segmentation scores, hallucination artifacts might be missed.
The artifacts can introduce a new type of bias to the images which could `accidentally' improve the segmentation or classification scores.
Finally, another factor that contributed to the mismatch of transformation quality and classification scores for the prostate biopsies is the inter-annotator variance.
Multiple factors play a role in deciding which of the three architectures should be used to perform bias transfer.
When a large number of domains are available, the training time can become a limiting factor.
Since U-Net cycleGANs only transform between two domains, every target domain requires a separate model and training.
Another common problem is a lack of data.
If few samples exist (e.g. $<100$ per domain), U-Net cycleGANs might have too many trainable parameters to learn an adequate transformation.
Both of these issues would warrant selecting FPGs instead since all domains are used to train a single generator.
The cycleGAN variations did not match the performance of U-Net cycleGAN and FPG.
Therefore, we do not recommend using vanilla cycleGANs for bias transfer in medicine.
Regarding the evaluation of generative approaches, no consensus on adequate evaluation metrics exists.
Here, the complementary metrics SSIM and FID were good indicators of hallucination artifacts and therefore suitable for evaluating bias transfer.
\subsection{Limitations}
The selection of the models and the additional losses in this paper focused on bias transfer for domains with small differences between them, i.e., a shift in staining.
Therefore, the results might not be reproducible for less similar domains.
Additionally, bias transfer has to be performed with caution if the domains have an inherent content bias.
A disease can become part of the `style' if one domain contains more healthy and the other more malformed samples~\cite{Cohen2018}.
\section{Conclusion and outlook}
The goal of this paper was to determine which deep generative approach results in the best bias transfer for medical images originating from IF confocal microscopy of kidney biopsies and H\&E stained microscopy images of prostate biopsies.
The performance on the test set mostly corroborated our findings obtained on the validation data.
U-Net cycleGAN with combined losses and FPG with structure loss had a stable training process and created hallucination artifact-free images that imitated the target domain, improving the segmentation or classification performance.
Our results show that bias transfer for histopathological images benefits from adding structure-based losses since they help with content preservation.
The combination of MS-SSIM loss and additional identity loss is especially helpful if bias transfer is performed for domains that only require small changes.
For medical image datasets with larger differences, e.g.\ a different staining type, the additional identity loss should only be used if it is weighted much lower than the MS-SSIM loss.
Furthermore, additional losses are not a universal solution.
The individual network architectures and approaches have to be adapted to the task at hand, as our differing results for the two modalities have shown.
In future projects, similar datasets from additional sites could be investigated to get a firmer grasp on which transformation goals can be covered by the selected additional losses and which further limitations might prevent using deep learning-based bias transfer for medical datasets.
\subsubsection*{Acknowledgements} This work was supported by DFG (SFB 1192 projects B8, B9 and C3) and by BMBF (eMed Consortia `Fibromap').
\bibliographystyle{splncs04}
\section{Introduction}
Distributed graph algorithms give a formal theoretical framework for studying networks. There is abundant literature on distributed graph algorithms modeling \emph{static} networks, and recently, an increasing amount of work on \emph{dynamic} networks.
Yet, our understanding of the interconnections between the theoretical models and real-world networks is unsatisfactory. That is, we have strong theoretical lower bounds for many distributed settings, both static and dynamic, yet real-world communication networks function satisfactorily.
One approach to bridging the gap between the strong theoretical lower bounds and the behaviour of real-world instances, is the idea of \emph{smoothed analysis}.
Smoothed analysis was first introduced by Spielman and Teng~\cite{SpielmanT09, SpielmanT04}, in an attempt to explain the fact that some problems admit strong theoretical lower bounds, but in practice are solved on a daily basis.
The explanation smoothed analysis suggests for this gap, is that lower bounds are proved using very specific, pathological instances of a problem, which are highly unlikely to happen in practice. They support this idea by showing that some lower bound instances are extremely \emph{fragile}, i.e., a small random perturbation turns a hard instance into an easy one.
Spielman and Teng applied this idea to the simplex algorithm, and showed that, while requiring an exponential time in the \emph{worst case}, if we apply a small random noise to our instance before executing the simplex algorithm on it, the running time becomes polynomial in expectation.
Smoothed analysis was first introduced to the distributed setting in the seminal work of Dinitz \textit{et al.}\xspace~\cite{DFGN18}, who considered distributed dynamic networks. These are networks that changes over time, and capture many real world systems, from Blockchain, through vehicle networks, and to peer-to-peer networks. They are modeled by a sequence of communication graphs, on which the algorithm needs to solve a problem despite the changes in the communication topology.
Adapting the ideas of smoothed analysis in this setting is not an easy task.
First, while the classical smoothed analysis is applied to a single input, in the dynamic setting we deal with an infinite sequence of communication rounds, on ever-changing graphs.
And second, defining the correct perturbation which the input undergoes is more challenging, as the input is discrete, as opposed to the continuous input of the simplex algorithm, where Gaussian noise is a very natural candidate.
To address the above challenges, Dinitz \textit{et al.}\xspace\ fix a \emph{noise parameter} $k>0$, and then, after the adversary picks an infinite sequence of graphs $\set{G_i}$,
they derive a new series of graphs $\set{H_i}$ by perturbing every $G_i$ with an addition or deletion of roughly $k$ random edges.
Specifically, the perturbation ($k$-smoothing) is done by choosing uniformly at random a graph within Hamming distance at most $k$ of $G_i$
(i.e., a graph that is different from $G_i$ by at most $k$ edges), which also has some desired properties (say, connectivity).
Note that in the typical case $\Omega (k)$ edges are perturbed, as most of the graphs with Hamming distance at most $k$ from $G_i$ have Hamming distance $\Omega (k)$ from it.
Using this model, they analyze the problems of flooding, hitting time, and aggregation.
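Operationally, one round of $k$-smoothing can be realized by rejection sampling: draw a uniformly random set of at most $k$ edge slots to flip, and retry until the perturbed graph has the desired property; conditioning on acceptance yields the uniform distribution over the allowed graphs in the Hamming ball. The following Python sketch (with a hypothetical \texttt{is\_allowed} predicate, e.g., connectivity) illustrates this:
\begin{verbatim}
import math, random
from itertools import combinations

def k_smoothing(adv_edges, k, nodes, is_allowed):
    # Sample uniformly from the allowed graphs within Hamming distance
    # at most k of the adversary's graph.  Edges are frozensets {u, v}.
    slots = [frozenset(e) for e in combinations(nodes, 2)]
    n_slots = len(slots)
    # The number of graphs at distance exactly d is C(n_slots, d).
    weights = [math.comb(n_slots, d) for d in range(k + 1)]
    while True:
        d = random.choices(range(k + 1), weights=weights)[0]
        new_edges = adv_edges ^ set(random.sample(slots, d))
        if is_allowed(new_edges):
            return new_edges
\end{verbatim}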
In this paper we present a study of \emph{models of smoothing}, or put differently, a study of models of noise in dynamic networks.
To this end, we focus on one of the three fundamental problems presented in the previous work --- the flooding problem.
In this problem, a source vertex has a message it wishes to disseminate to all other vertices in the network. In each round, every vertex which has the message forwards it to all of its neighbors, until all vertices are informed. First, note that the problem is trivially solvable in static networks, in diameter time. Second, in the dynamic setting, it is easy to construct a sequence of graphs where flooding takes $\Omega(n)$ rounds, even if each of them has a small diameter.
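For concreteness, the flooding process on a dynamic graph can be simulated as follows (a sketch; the dynamic graph is represented as a sequence of adjacency structures):
\begin{verbatim}
def flooding_time(dynamic_graph, source):
    # dynamic_graph: iterable of dicts mapping each vertex to the set
    # of its neighbors in round i; returns the round in which the last
    # vertex becomes informed.
    informed = {source}
    for rounds, adj in enumerate(dynamic_graph, start=1):
        informed |= {v for u in informed for v in adj[u]}
        if len(informed) == len(adj):
            return rounds
    return None  # the message did not reach all vertices
\end{verbatim}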
It is thus not surprising that adding random links between vertices may accelerate the flooding process, and indeed, it was shown in~\cite{DFGN18} that in $k$-smoothed dynamic networks, the complexity of flooding drops to $\tilde{\Theta}(n^{2/3}/k^{1/3})$ (where the $\tilde{\cdot}$ sign hides $\polylog n$ factors). Note the huge improvement, from $\Omega(n)$ to $\tilde{O}(n^{2/3})$, even for $k=1$.
\subsection{Motivation}
The significant step made by Dinitz \textit{et al.}\xspace in introducing smoothed analysis to the distributed dynamic environment left many fascinating questions unanswered. One natural direction of research has to do with applying smoothed analysis for a variety of problems, but here, we take a different approach. In an attempt to understand the essential properties of smoothed analysis in our setting, we focus on questions related to the very basic concept of smoothing and study several models of noise. We outline some of the related questions below.
\subparagraph{The curious gap between $k=0$ and $k=1$} Dinitz \textit{et al.}\xspace show a tight bound of $\tilde{\Theta}(n^{2/3} / k^{1/3})$ for the problem of flooding in dynamic networks with noise $k$. That is, as $k$ decreases, flooding becomes harder, but only up to $\tilde{\Theta}(n^{2/3})$ for a constant $k$. However, when there is no noise at all, a number of rounds \emph{linear} in $n$ is required. That is, there is a curious gap between just a tiny amount of noise and no noise at all, which stands in sharp contrast with the smooth change in the flooding time as a function of $k$, when $k$ ranges from $1$ to $\Omega(n)$. One may wonder if there is a natural way to extend the model presented by Dinitz \textit{et al.}\xspace to bridge this gap.
\subparagraph{An oblivious adversary} The results of Dinitz \textit{et al.}\xspace assume an oblivious adversary. That is, the evolution of the network must be decided by the adversary in advance, without taking into account the noise added by the smoothing process. It is natural to ask what is the power of an \emph{adaptive} adversary compared to an oblivious one.
As smoothed analysis aims to be the middle ground between worst-case and average-case analysis, it is very natural to consider the effect of noise on the strongest possible adversary, i.e., an adaptive adversary.
\subparagraph{Change-dependent noise} The type of noise considered by Dinitz \textit{et al.}\xspace remains constant between rounds, regardless of the topology or the actions of the adversary, which corresponds to ``background noise'' in the network. However, it is very natural to consider cases where the amount of noise itself changes over time.
More specifically, if we think of noise as an artifact of the changes the adversary performs on the system, then it is natural to expect different amounts of noise depending on whether the adversary performs a single change or changes the whole topology of the graph.
Hence, a natural model of noise in a network is one where the added noise is \emph{proportional} to the amount of change attempted by the adversary, that is, $k$ is proportional to the Hamming distance between the current graph and the next one. A different natural definition of change-dependent noise is one where the noise occurs only in the changing edges, and not in the whole graph.
\subsection{Our results}
\begin{table}[tb]
\centering
\begin{tabular*}{\linewidth}{@{}l@{\;\;\;}l@{\;\;}l@{\;\; }l@{}}
\toprule
Model & Upper bound & Lower bound & Reference \\
\toprule
\begin{tabular}{@{}l@{}}
Integer noise\\ Non-adaptive adv.
\end{tabular}
& $O\left(n^{2/3} (1 / k)^{1/3} \log n \right)$ & $\Omega\left(n^{2/3}/k^{1/3}\right)$, for $k \leq \sqrt{n}$
& Dinitz \textit{et al.}\xspace~\cite{DFGN18} \\
\midrule
\begin{tabular}{@{}l@{}}
Fractional noise\\ Non-adaptive adv.
\end{tabular}
& $O\left(\min\set{n, n^{2/3} (\log n / k)^{1/3}}\right)$ & $\Omega\left(\min\set{n,n/k, n^{2/3}/k^{1/3}}\right)$
& Thm.~\ref{thm: fractional UB},~\ref{thm: fractional LB 1},~\ref{thm: fractional LB 2}\\
\midrule
\begin{tabular}{@{}l@{}}
Fractional noise\\ Adaptive adv.
\end{tabular}
& $O\left(\min\set{n, n \sqrt{\log n / k}}\right)$ & $\Omega(\min \set{n, n \log k / k})$
& Thm.~\ref{thm: fractional adaptive UB},~\ref{thm: fractional adaptive LB}\\
\midrule
\begin{tabular}{@{}l@{}}
Proportional noise\\ Non-adaptive adv.
\end{tabular}
& $O\left(n^{2/3} \cdot ( D \log n / \epsilon )^{1/3}\right)$ &
& Thm.~\ref{thm: FP - UB}\\
\midrule
\begin{tabular}{@{}l@{}}
Proportional noise\\ Adaptive adv.
\end{tabular}
& $O(n)$ & $\Omega(n)$
& Thm.~\ref{thm: FP - adaptive LB}\\
\hline
Targeted noise
& $O\left(\min\set{n, D \log n /\epsilon^{D^2}}\right)$
&
$\Omega(n)$, for $D\in\Theta(\log n)$
& Thm~\ref{thm: flooding targeted UB},~\ref{thm: flooding targeted LB}\\
\bottomrule
\end{tabular*}
\caption{Bounds on flooding time in different models of smoothing}
\label{table:results}
\end{table}
To address the above points of interest,
we prove upper and lower bounds, as summarized in Table~\ref{table:results}.
First, we show a natural extension of the previously presented noise model to \emph{fractional} values of $k$.
For $k\geq 1$, our model roughly coincides with the prior model and results, while for $0<k<1$, in our model a single random change occurs in the graph with probability $k$.
In our model, we show a bound of $\tilde{\Theta}(\min \{n^{2/3}/k^{1/3},n\})$ for flooding, for values of $k$ ranging from $k=0$ to $k=\Theta(n)$, even fractional --- see theorems~\ref{thm: fractional UB}, \ref{thm: fractional LB 1}, and~\ref{thm: fractional LB 2}.
The flooding time thus has a clean and continuous behavior, even for the range $0 < k < 1$ which was not studied beforehand, providing a very natural extension of the previous model.
Next, we focus our attention on an adaptive adversary, that can choose the next graph depending on the smoothing occurred in the present one.
Here, we show that the added power indeed makes the adversary stronger, and yet, flooding can be solved in this case faster than in a setting where no smoothing is applied.
Specifically, in theorems~\ref{thm: fractional adaptive UB} and \ref{thm: fractional adaptive LB} we show that in this setting flooding can be done in $\tilde{O}(n/\sqrt{k})$ rounds, and takes at least $\tilde{\Omega}(n / k)$ rounds.
This result presents a strict separation between the power of an adaptive and an oblivious adversaries.
We then turn our attention to a different type of noise, and introduce two new models of \emph{responsive noise} --- noise which depends on the changes the adversary performs.
The goal of studying responsive noise is to better understand systems where the noise is an artifact of the changes, and more changes induce more noise.
We consider two, completely different cases:
if only the \emph{amount} of noise depends on the changes, then the system is less stable than in the prior models, and flooding can be delayed considerably by the adversary.
On the other hand, if the same amount of noise is \emph{targeted} at the changing links themselves, then the system remains stable, and flooding occurs much faster.
In both models, our results surprisingly depend
on a new attribute --- the static diameter of the graph, $D$.
The first model of responsive noise we introduce is the \emph{proportional noise} model, where the noise is randomly spread among the graph edges as before, and its amount is proportional to the number of proposed changes.
We consider two types of adversaries in this setting --- adaptive, and oblivious.
Theorem~\ref{thm: FP - UB} shows that $\tilde{O}(\min \{n^{2/3}D^{1/3}/\epsilon^{1/3},n\})$ rounds are sufficient for flooding in this model with an oblivious adversary.
Here, the static diameter $D$ comes into play, since the adversary can now force change-free rounds: if the adversary does not make any change, no noise occurs, the graph remains intact and no random ``shortcut edges'' are created.
Current lower bounds for flooding with oblivious adversaries seem to require many changes, but in the proportional noise model this results in the addition of many random edges, thus speeding up the flooding process.
In addition, the upper bound suggests that the static diameter $D$ should come into play in the lower bounds, which is not the case in previous constructions.
While we believe our upper bound is tight, proving lower bounds appears to be beyond the reach of current techniques.
In the proportional noise model with adaptive adversary, we encounter another interesting phenomenon.
Adjusting the upper bound analysis in a straightforward manner gives the trivial upper bound of $O(n)$, and for a good reason: an adaptive adversary can slow down the flooding time all the way to $\Omega(n)$, as shown in Theorem~\ref{thm: FP - adaptive LB}.
The adversary for this lower bound makes only a few changes in each round, and only in necessary spots (using its adaptivity). The fact that the noise is proportional boils down to almost no noise occurring, which allows the adversary to maintain control over the network.
The second model of responsive noise we study is that of a \emph{targeted noise}.
Here, the expected amount of noise applied is roughly the same as above, but only the edges that undergo changes are perturbed by the noise. More concretely, each change proposed by the adversary does not happen with some probability $\epsilon$.
The aim here is to model networks where every attempted change may have some probability of failure.
In this setting, the case of an adaptive adversary seems more natural; however, we prove our upper bound for an adaptive adversary, and the lower bound for an oblivious one, thus handling the harder cases of both sides.
Our upper bound shows significant speedup in flooding on graphs with constant static diameter --- $O(\log n)$ rounds suffice for flooding with targeted noise.
This phenomenon holds even for higher static diameters:
Theorem~\ref{thm: flooding targeted UB} gives an upper bound of $O((D\log_{1/\epsilon} n ) / \epsilon^{D^2})$ rounds for flooding.
This improves upon the trivial $O(n)$ whenever $D = O(\sqrt{\log_{1/\epsilon} n})$. Finally, in Theorem~\ref{thm: flooding targeted LB} we show that for larger static diameter, $D = \Theta(\log n)$, the number of rounds required for flooding is~$\Omega(n)$.
\subparagraph{Our techniques}
Analyzing our new models of smoothing requires new and more global techniques. While we adopt the results of \cite{DFGN18} for changes in a single round, our models introduce new technical challenges, as they require analysis spanning multiple rounds.
The main technical challenge in proving upper bounds comes from the fact that one cannot even guarantee that noise occurs at every round, and when it does, the amount of noise is not fixed throughout the execution of the whole algorithm.
This requires us to conduct a much more global analysis, taking into account sequences of rounds with a varying amount of noise, instead of analyzing a single round at a time.
In several cases, the static diameter~$D$ appears as a parameter in our results and analysis: in our model, the adversary can force change-free rounds where no noise occurs, in which case flooding happens on a static graph.
In an attempt to study the exact limitations of our noise models, we present specifically crafted lower bound networks for each model.
Note that in many models our adversary is adaptive, potentially making it much stronger. This makes our lower bound more strategy-based, and not merely a fixed instance of a dynamic graph.
We revise the original flooding lower bound of Dinitz \textit{et al.}\xspace, in order to make it more applicable to other models. We present a more detailed and rigorous proof that indeed implies tight bounds in most of our models, and specifically, when considering adaptive adversaries.
\subsection{Related work}
Smoothed analysis was introduced by Spielman and Teng~\cite{SpielmanT09,SpielmanT04} in relation to using pivoting rules in the simplex algorithm. Since then, it has received much attention in sequential algorithm design; see, e.g., the survey in~\cite{SpielmanT09}. The first application of smoothed analysis to the distributed setting is due to Dinitz \textit{et al.}\xspace~\cite{DFGN18}, who apply it to the well studied problems of aggregation, flooding and hitting time in dynamic networks, as described above.
Chatterjee \textit{et al.}\xspace~\cite{abs-1911-02628} considered the problem of a minimum spanning tree construction in a distributed setting of a synchronous network, where the smoothing is in the form of random links added in each round, which can be used for communication but not as a part of the tree.
The field of distributed algorithm design for dynamic networks has received considerable attention in recent years~\cite{Michail16}.
Most of the works considered models of adversarial changes~\cite{KuhnLO10,AugustinePRU12, DuttaPRSV13,GhaffariLN13, HaeuplerK11, KuhnOM11,BambergerKM19, DCKPS2020},
while others considered random changes, or random interactions between the vertices~\cite{CaiSZ20,ClementiST15,DenysyukR14,GasieniecS18,KowalskiM2020}.
Yet, as far as we are aware, only the two aforementioned works take the smoothed analysis approach in this context.
\section{Preliminaries}
\subsection{Dynamic graphs}
All graphs are of the form $G=(V,E)$, where $V=[n]=\{1,\ldots,n\}$ is the set of vertices.
A dynamic graph $\mathcal{H}$ is a sequence $\mathcal{H}=(G_1,G_2,\ldots)$ of graphs, $G_i=(V,E_i)$ on the same set of vertices, which can be finite or infinite.
The \emph{diameter} of a (static) graph is the largest distance between a pair of vertices in it, and the \emph{static diameter} of a dynamic graph $\mathcal{H}$ is the minimal upper bound $D$ on all the diameters of the graphs in the sequence $\mathcal{H}$.
We study synchronous distributed algorithms in this setting, where the time is split into rounds, and in round $i$ the vertices communicate over the edges of the graph $G_i$.
Given two graphs $G=(V,E)$ and $G'=(V,E')$ on the same set of vertices, we denote $G-G'=(V,E\setminus E')$.
We also denote $G\oplus G'=(V,E\oplus E')$, where $E\oplus E'$ is the set of edges appearing in exactly one of the graphs.
The size of set $E\oplus E'$ is called the \emph{Hamming distance between $G$ and $G'$}, and is denoted by $|G\oplus G'|$.
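For concreteness, these set operations are straightforward to express in code; the following Python sketch (illustrative only, with our own naming) computes $G\oplus G'$ and the Hamming distance for graphs represented as sets of edges:
\begin{verbatim}
# Graphs as sets of frozenset edges (illustrative naming).
E1 = {frozenset({1, 2}), frozenset({2, 3})}
E2 = {frozenset({1, 2}), frozenset({1, 3})}

sym_diff = E1 ^ E2          # edges appearing in exactly one graph
hamming  = len(sym_diff)    # |G (+) G'| = 2 here
\end{verbatim}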
\subsection{Models of noise}
Our smoothed analysis is based on three models of noise, defined in this section.
For a single step, we recall the definition of $t$-smoothing~\cite{DFGN18}, and add the new notion of $\epsilon$-targeted smoothing, for a parameter $0 < \epsilon < 1$.
At each step, we think of $G_{\mathrm{old}}$ as the current graph, and of $G_{\mathrm{adv}}$ as the future graph suggested by the adversary.
The actual future graph, $G_{\mathrm{new}}$, will be a modified version of $G_{\mathrm{adv}}$, randomly chosen as a function of $G_{\mathrm{old}}$ and $G_{\mathrm{adv}}$.
In addition, we consider the set $\Gcal_{\mathrm{allowed}}$ of \emph{allowed graphs} for a specific problem.
For flooding, this is just the set of all connected graphs.
\hide{
We start with two auxiliary definition. Let $0<\epsilon<1$ and $k\in\mathbb{R}_{+}$ be two parameters and $G_{\mathrm{old}}$, $G_{\mathrm{adv}}$ two graphs.
We think of $G_{\mathrm{old}}$ as the current graph, and of $G_{\mathrm{adv}}$ as the future graph suggested by the adversary.
The actual future graph, $G_{\mathrm{new}}$, will be a modified version of $G_{\mathrm{adv}}$, randomly chosen as a function of $G_{\mathrm{old}}$ and $G_{\mathrm{adv}}$.
In addition, we consider the set $\Gcal_{\mathrm{allowed}}$ of \emph{allowed graphs} for a specific problem.
For flooding, this is just the set of all connected graphs.
}
\hide{
We start by defining a single smoothing step, extending the definition of $k$-smoothing from~\cite{DFGN18} to allow also targeted noise.
}
\begin{definition}
Let $0\leq\epsilon\leq 1$ and $t \in \mathbb N$ be two parameters, $\Gcal_{\mathrm{allowed}}$ a family of graphs, and $G_{\mathrm{old}}$ and $G_{\mathrm{adv}}$ two graphs in $\Gcal_{\mathrm{allowed}}$.
\begin{itemize}
\item
A $t$-smoothing of $G_{\mathrm{adv}}$
is a graph $G_{\mathrm{new}}$ which is picked uniformly at random from all the graphs of $\Gcal_{\mathrm{allowed}}$ that are at Hamming distance at most $t$ from $G_{\mathrm{adv}}$.
The parameter $t$ is called the \emph{noise parameter.}
\item
An $\epsilon$-targeted smoothing of a graph $G_{\mathrm{adv}}$ with respect to $G_{\mathrm{old}}$ is a graph $G_{\mathrm{new}}$ which is constructed from $G_{\mathrm{adv}}$ by adding to it each edge of $G_{\mathrm{old}}-G_{\mathrm{adv}}$ independently at random with probability $\epsilon$, and removing each edge of $G_{\mathrm{adv}}-G_{\mathrm{old}}$ with the same probability.
If the created graph is not in $\Gcal_{\mathrm{allowed}}$, the process is repeated.
\end{itemize}
\end{definition}
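For intuition, an $\epsilon$-targeted smoothing step is easy to sketch in Python (illustrative only; the names are ours, and the membership test in $\Gcal_{\mathrm{allowed}}$ is an assumed callback). Sampling a $t$-smoothing uniformly from a Hamming ball intersected with $\Gcal_{\mathrm{allowed}}$ is more involved, so we only sketch the targeted variant:
\begin{verbatim}
import random

def targeted_smoothing(E_old, E_adv, eps, allowed):
    # eps-targeted smoothing of G_adv w.r.t. G_old: toggle each
    # recently changed edge back with probability eps; resample
    # if the result is not an allowed graph.
    while True:
        E_new = set(E_adv)
        for e in E_old - E_adv:      # edges the adversary removed
            if random.random() < eps:
                E_new.add(e)
        for e in E_adv - E_old:      # edges the adversary added
            if random.random() < eps:
                E_new.discard(e)
        if allowed(E_new):
            return E_new
\end{verbatim}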
We are now ready to define the three types of smoothing for dynamic graphs. The first extends the definition of~\cite{DFGN18} to non-integer values $k\in\mathbb{R}_{+}$, and the other two incorporate noise that depends on the latest modifications in the graph.
For a positive real number $x$, we define the random variable $\round_{p}(x)$, which takes the value $\ceil{x}$ with probability $x - \floor{x}$ (the fractional part of $x$), and $\floor{x}$ otherwise.
\begin{definition}
\label{def: smoothed_dynamic_network}
Let $0\leq\epsilon\leq 1$ and $k\in\mathbb{R}_{+}$ be two parameters, and $\Gcal_{\mathrm{allowed}}$ a family of graphs.
Let $\mathcal{H}=(G_1,G_2,\ldots)$ be a dynamic graph, i.e., sequence of ``temporary'' graphs. Let $G'_0$ be a graph (if $G'_0$ is not explicitly specified, we assume $G'_0=G_1$).
\begin{itemize}
\item
A $k$-smoothed dynamic graph is the dynamic graph $\mathcal{H}'=(G'_0,G'_1,G'_2,\ldots)$ defined from $\mathcal{H}$, where
for each round $i>0$, $G'_i$ is the $t_i$-smoothing of $G_i$, where $t_i=\round_{p}(k)$.
\item
An $\epsilon$-\emph{proportionally} smoothed dynamic graph $\mathcal{H}'=(G'_0,G'_1,G'_2,\ldots)$ is defined from $\mathcal{H}$, where
for each round $i>0$, $G'_i$ is the $t_i$-smoothing of $G_i$,
where $t_i=\round_{p}(\epsilon \cdot \size{G'_{i-1}\oplus G_{i}})$.
\item
An $\epsilon$-\emph{targeted} smoothed dynamic graph is the dynamic graph $\mathcal{H}'=(G'_0,G'_1,G'_2,\ldots)$ defined iteratively from $\mathcal{H}$, where
for each round $i>0$, $G'_i$ is the $\epsilon$-targeted smoothing of $G_i$ w.r.t.~$G'_{i-1}$.
\end{itemize}
All the above definitions also have \emph{adaptive} versions, where each $\mathcal{H}$ and $\mathcal{H}'$ are defined in parallel: the graph $G_i$ is chosen by an adversary after the graphs $G_0,G_1,G_2,\ldots,G_{i-1}$ and $G'_0,G'_1,G'_2,\ldots,G'_{i-1}$ are already chosen, and then $G'_i$ is defined as above.
We use adaptive versions for the first two definitions.
\end{definition}
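To illustrate Definition~\ref{def: smoothed_dynamic_network}, the randomized rounding $\round_{p}$ and the per-round noise parameter of $\epsilon$-proportional smoothing can be sketched as follows (Python; the naming is ours):
\begin{verbatim}
import math, random

def round_p(x):
    # ceil(x) with probability equal to the fractional part of x,
    # floor(x) otherwise
    frac = x - math.floor(x)
    return math.ceil(x) if random.random() < frac else math.floor(x)

def proportional_noise_param(eps, E_prev_smoothed, E_adv):
    # t_i = round_p(eps * |G'_{i-1} (+) G_i|)
    return round_p(eps * len(E_prev_smoothed ^ E_adv))
\end{verbatim}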
\subsection{Auxiliary lemmas}
We use two basic lemmas, which help in analyzing the probability of the noise adding or removing certain edges from a given set of potential edges.
These lemmas address a single round; we apply them with a different parameter in each round.
The lemmas appeared as Lemmas~4.1 and 4.3 in~\cite{DFGN18},
where they were used with the same parameter throughout the process.
\begin{lemma}
\label{lem: LB hitting a set of edges}
There exists a constant $c_1>0$ such that the following holds.
Let $G_{\mathrm{adv}} \in \Gcal_{\mathrm{allowed}}$ be a graph, and $\emptyset \neq S \subseteq \binom{[n]}{2}$ a set of potential edges.
Let $t\in\mathbb{N}$ be an integer such that $t\leq n/16$ and $t\cdot \size{S} \leq n^2/2$.
If $G_{\mathrm{new}}$ is a $t$-smoothing of $G_{\mathrm{adv}}$, then the probability that $G_{\mathrm{new}}$ contains at least one edge from $S$ is at least $c_1 \cdot t \size{S} / n^2$.
\end{lemma}
\begin{lemma}
\label{lem: UB adding from a set of edges}
There exists a constant $c_2>0$ such that the following holds.
Let $G_{\mathrm{adv}} \in \Gcal_{\mathrm{allowed}}$ be a graph, and let $S \subseteq \binom{[n]}{2}$ be a set of potential edges such that $S \cap E_{\mathrm{adv}} = \emptyset$. Let $t\in\mathbb{N}$ be such that $t\leq n/16$.
If $G_{\mathrm{new}}$ is a $t$-smoothing of $G_{\mathrm{adv}}$, then the probability that $G_{\mathrm{new}}$ contains at least one edge from $S$ is at most $c_2 \cdot t \size{S} / n^2$.
\end{lemma}
\subsection{Probability basics}
In all our upper bound theorems, we show that flooding succeeds with high probability (w.h.p.), i.e., with probability at least $1-n^{-c}$ for a constant $c$ of choice.
For simplicity, we prove success probability $1-n^{-1}$, but this can be amplified by increasing the flooding time by a multiplicative constant factor.
Our lower bounds show that flooding does not succeed with probability more than $1/2$, so it certainly does not succeed w.h.p.
We will also use the following Chernoff bound (see, e.g.~\cite{MitzenmacherU05}):
\begin{lemma}
Suppose $X_1,\dots,X_n$ are independent Bernoulli random variables, with $\Expc{X_i} = \mu$ for every $i$.
Then for any $0 \leq \delta \leq 1$:
\[\Pr\left[ \sum_{i=1}^{n} X_i \leq (1-\delta)\mu n \right] \leq e^{ - \delta^2 \mu n / 2}.\]
\end{lemma}
We usually apply this bound with $\delta = 0.9$.
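For instance, with $\delta = 0.9$ the bound reads $\Pr\left[\sum_{i=1}^{n} X_i \leq 0.1\mu n\right] \leq e^{-0.405\mu n}$, which is the form we need when arguing that a constant fraction of the expected noisy rounds indeed occurs.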
\hide{
We add another lemma, to show that sometimes that chance of not hitting a set $S_1$, but hitting a different set $S_2$ can relate to the sizes of said sizes.
\begin{lemma}[Lemma 4.3 in~\cite{DFGN18}]
\label{lem: UB adding from a set of edges}
There exists a constant $c_4>0$ such that the following holds.
Let $G_{\mathrm{old}},G_{\mathrm{adv}}$ be two graphs on the same set $[n]=\{1,\ldots,n\}$ of vertices and $\emptyset\neq S\subseteq \binom{[n]}{2}$ a set of potential edges such that $S \bigcap E_{\mathrm{adv}} = \emptyset$.
Let $0<\epsilon<1$ and $k$ be parameters such that for $t=\round_{p}(\max \set{k, \epsilon \cdot \size{G_{\mathrm{old}}-G_{\mathrm{adv}}}})$ we have $t\leq n/16$.
If $G_{\mathrm{new}}$ is an $(\epsilon,k)$-smoothing of $G_{\mathrm{adv}}$ w.r.t.~$G_{\mathrm{old}}$, then the probability that $G_{\mathrm{new}}$ containing at least one edge from $S_2$, and no edges from $S_1$ is at most: $c_2 t \size{S} / n^2$.
\end{lemma}
\begin{proof}
We denote by $E_2$ the event that $G_{\mathrm{new}}$ contains at least one edge of $S_2$ and by $E_1$ the event of $G_{\mathrm{new}}$ having no edges from $S_1$. We also focus on a specific spanning tree $T$ of size $n-1$ edges, and denote by $E_T$ the probability of the noise not hitting $T$ is at least $1/2$. For $t < n/16$ noise, we know that $\Pr[E_T] \geq 1/2$ (see original proof of Lemma~\ref{lem: LB hitting a set of edges}), and therefore
$$\Pr[E_1 \wedge E_2 | E_T] \leq \frac{\Pr[E_1 \wedge E_2 \wedge E_t]}{\Pr[E_T]} \leq 2\Pr[E_1 \wedge E_2]$$
\end{proof}
}
\section{Flooding in Dynamic Networks}
\label{sec: background noise}
\iffalse
\Gcomment{
Structure of the results
\begin{itemize}
\item Change $(\epsilon, k)$ definition to handle fractional noise.
\item Consider noise between 0 and 1. We aim to close the curious gap which they have between $k = 1$ and $k=0$. We show that their bounds continue smoothly in this interval as well. For fractional noise $\alpha \in (0,1)$, we show a $\Theta(n^{2/3}/\alpha^{1/3})$ bound.
\item Adaptivity. Their bounds assume an oblivious adversary (give formal definition). We consider the case of an adaptive adversary and show this indeed makes the problem harder. We give a bound of $\Theta(n / \sqrt{k})$.
\item Responsive noise. Here we consider what happens when the noise depends on the actions of the adversary. We model this in two ways: responsive and targeted responsive (think of better names). We ignore background noise, and actually allow no noise when there are no changes. Surprisingly, this leads to higher lower bounds for the problem in the non targeted case. We get a bound of $\Theta(n^{2/3}D^{1/3}/\epsilon^{1/3})$. While targeted noise greatly weakens the opponent. Here we get an upper bound of $O(D \log n / \epsilon^{D^2})$. And when $D=\log n$ we get a lower bound of $\Omega(n)$.
\end{itemize}
}
\fi
\hide{
\begin{theorem}
For an $(\epsilon,k)$-smoothed dynamic graph, flooding takes $\Theta(n^{2/3} / k^{1/3})$ rounds, when $k\geq 1$.
\end{theorem}
\begin{proof}
The upper bound from~\cite{DFGN18} holds, as in each round we have noise of magnitude \emph{at least} $k$. Upper bound from Seth's paper still holds. Lower bound: take spooling graph, but instead of connecting center to center, connect center to a vertex on the edge of the star. Now the number of changes is constant. Also we clean any edges between the two starts incurred due to noise. This requires more actions (due to multiplicative noise) but this sum converges and is bounded by $3k$. The rest of the proof should be similar to the original.
\end{proof}
}
\subsection{Fractional amount of noise}
We address an interesting phenomenon which occurs in~\cite{DFGN18}: by merely adding a single noisy edge per round, the worst-case flooding time dramatically drops from $\Theta(n)$ to only $\Theta(n^{2/3})$.
To explain this gap, we present a natural notion of \emph{fractional} noise: if the amount of noise $k$ is not an integer, we use an integer close to $k$, chosen via the function~$\round_{p}(k)$.
An easy consequence of the result in \cite{DFGN18} is that whenever $k > 1$, the analysis does not change by much, and the same asymptotic bound holds. This leaves open the large gap between $k=0$ (no noise) and $k=1$. We address this question in the next two theorems, showing a smooth transition in the worst-case flooding time between these two values of $k$.
Next, we revise the upper bound in~\cite{DFGN18} to handle fractional values of $k$ and values smaller than $1$. In addition, we give a somewhat cleaner argument, which carries three additional advantages: (1) it easily extends to adaptive adversaries; (2) it extends well to other models, as seen in subsequent sections; (3) it decreases the multiplicative logarithmic factor, dropping the asymptotic gap between the upper and lower bounds to only $O(\log^{1/3} n)$.
Hence, the proof of the next theorem serves as a prototype for later proofs, of upper bounds for adaptive adversaries and proportional noise.
\begin{theorem}
\label{thm: fractional UB}
Fix a real number $0 < k \leq n/16$. For any $k$-smoothed dynamic graph, flooding takes $O\left(\min\set{n, n^{2/3} (\log n / k)^{1/3}}\right)$ rounds, w.h.p.
\end{theorem}
Note that an $n$-round upper bound is trivial (using the connectivity guarantee), and that whenever $k \leq c \cdot \log n / n$ for some constant $c$, the term $n$ is the smaller one, and we need not prove anything more.%
\footnote{This is rather intuitive, as if $k$ is very small (e.g., $k \leq c/n$), when considering $\delta n$ rounds for small enough $\delta$, no noise occurs. It is known that when no noise occurs, a linear amount of rounds is required for flooding. This intuition is formalized in the lower bound later in this section.}
We therefore focus on the case $k > c \cdot \log n / n$, for which we show an upper bound of $O(n^{2/3}(\log n / k)^{1/3})$.
\begin{proof}
Fix $u$ to be the starting vertex, and $v$ to be some other vertex. We show that within $R = 3r$ rounds, the message is flooded to $v$ with probability at least $1-n^{-2}$ (over the random noise). We split the $3r$ rounds into three phases as follows.
\begin{enumerate}
\item First $r$ rounds, for spreading the message to a large set of vertices we call $U$.
\item Next $r$ rounds, for transferring the message from the set $U$ to another large set $V$ to be defined next.
\item Last $r$ rounds, for ``converging'' and passing the message from the potential set $V$ to the specific vertex $v$.
\end{enumerate}
We now quantify the message flooding in the three phases.
\subparagraph{After the first $r$ rounds, the message has reached many vertices.}
Let $U_i$ denote the set of vertices that have been flooded at time $i$. Using merely the fact that the graph is connected at all times, and assuming the flooding process is not over, we know that at each round a new vertex receives the message, implying $\size{U_{i+1}} \geq \size{U_i} + 1$.
Hence, $\size{U_r} \geq r$ (with probability~$1$).
\subparagraph{In the last $r$ rounds, many vertices can potentially flood the message to $v$.}
We denote by $V_i$ the set of vertices from which $v$ can receive the message within the last $i$ rounds ($R-i+1, \dots, R$).
Formally, fix the sequence of graphs chosen by the oblivious adversary,
and apply the random noise on these graphs to attain the last $r$ graphs of our \emph{smoothed} dynamic graph.
Define $V_0=\set{v}$, and for $i>0$, define $V_i$ as the union of $V_{i-1}$ with the neighbors of it in $G_{R-i+1}$.
That is, $V_i$ is defined analogously to $U_i$ but with the opposite order of graphs.
We point out that we are dealing with a stochastic process: each $V_i$ is a random variable that depends on the noise in the last $i$ rounds. Still, the connectivity guarantees that $\size{V_{i+1}} \geq \size{V_{i}} + 1$. Therefore, we have $\size{V_{r}} \geq r$ (with probability $1$).%
\footnote{Note that here we strongly rely on the obliviousness of the adversary: with an adaptive adversary, one cannot define $V_r$ properly before the end of the execution of the algorithm, as the adversary's choices were not yet made. The case of an adaptive adversary is discussed in the next section.}
\subparagraph{The middle rounds.}
The above processes define two randomly chosen sets, $U_{r}$ and $V_{r}$, each of size at least $r$. If $U_{r} \cap V_{r} \neq \emptyset$, then we are done as $v$ is guaranteed to receive the message even if we ignore the middle rounds.
Otherwise, we consider the set $S = U_{r} \times V_{r}$ of potential edges, and show that one of them belongs to our smoothed dynamic graph in at least one of the middle graphs, with probability at least $1 - n^{-2}$, for the right value of $r$.
Let us consider separately the two cases: $k \geq 1$ (note that non-integer $k$ was not handled before), and the case $0 < k <1$.
\subparagraph{The case of $k \geq 1$.}
In this case, we essentially claim the following: at each and every round, the amount of noise is either $\floor{k}$ or $\ceil{k}$. Applying Lemma~\ref{lem: LB hitting a set of edges} for each such round, the probability of not adding any edge from $S$ in a given round is at most
\[\left(1 - c_1 \floor{k} \size{S} / n^2 \right) \leq \left(1 - c_1 k r^2 / 2n^2 \right),\]
where the inequality follows from $\floor{k} \geq k/2$ and $\size{S} = \size{U_{r}} \cdot \size{V_{r}} \geq r^2$.
Thus, the probability of not adding any edge from $S$ in any of these $r$ noisy rounds is at most
\[\left(1 - c_1 k r^2 / 2n^2 \right)^{r} \leq e^{-c_1 r^3 k /(2n^2)},\]
which is upper bounded by $n^{-2}$, whenever $r \geq n^{2/3} \cdot \left[(4 \log n)/(c_1 k)\right]^{1/3}$.
\subparagraph{The case of $0 < k < 1$.}
Note that here we can no longer use $\floor{k} \geq k/2$, and so we turn to a somewhat more global analysis which guarantees that ``enough'' rounds produce a (single) noisy edge:
at each round we essentially add a noisy edge with probability $k < 1$, and otherwise do nothing. Since we have $r$ rounds, and in each of them noise occurs independently, we can use a standard Chernoff bound to say that with all but probability at most $e^{-0.4kr}$, we have $(k/10)r$ rounds in which a noisy edge was added.\footnote{The expectation over all rounds is $kr$ noisy edges.}
We later bound this term. For each of the $(k/10)r$ rounds we again apply Lemma~\ref{lem: LB hitting a set of edges} to claim that no edge from $S$ was added, with probability at most
\[\left(1 - c_1 \size{S} / n^2 \right) \leq \left(1 - c_1 r^2 / n^2 \right),\]
where the inequality follows again from $\size{S} = \size{U_{r}} \cdot \size{V_{r}} \geq r^2$.
Thus, the probability of not adding any edge from $S$ in any of these $(k/10)r$ noisy rounds is upper bounded by
\[\left(1 - c_1 r^2 / n^2 \right)^{(k/10) r} \leq e^{-c_1 k r^3 /(10n^2)},\]
which is upper bounded by $1/(2n^2)$ whenever $r \geq n^{2/3} \cdot \left[(10\log n)/(c_1 k)\right]^{1/3}$.
Recalling that we only deal with the case $k \geq c\cdot \log n / n$, choosing the constant $c$ according to the constant $c_1$, we also get $r \geq 10\log n / k$. This allows us to bound the error of the Chernoff inequality: $e^{-0.4kr} \leq e^{-3 \log n} \leq 1/(2n^2)$. Union bounding on both possible errors, we know that with probability at least $1-n^{-2}$ the vertex $v$ is informed after $3r$ rounds.
\medskip
To conclude, we use $r = n^{2/3} \cdot \left[(10\log n)/(c_1 k)\right]^{1/3}$. For any value $k \geq c \cdot \log n / n$, and for any realization of $U_{r}$ and $V_{r}$, the message has been passed to $V_{r}$ within the $r$ middle rounds, with probability at least $1-n^{-2}$. So $v$ received the message after $R = 3r$ rounds with the same probability.
Taking a union bound over all the vertices implies that $R$ rounds are enough to flood to the whole network with probability at least $1-1/n$.
\end{proof}
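As an aside (this plays no role in the proofs), the flooding process analysed above is easy to simulate; the following is a minimal Python sketch, where \texttt{adversary} and \texttt{smooth} are assumed user-supplied callbacks (e.g., the smoothing steps sketched earlier):
\begin{verbatim}
def flood_time(n, adversary, smooth, source=0, limit=None):
    # Synchronous flooding over a smoothed dynamic graph; edges are
    # frozensets. adversary(i, E_prev) returns the tentative G_i,
    # smooth(E_prev, E_adv) applies the round's noise.
    informed = {source}
    E_prev = adversary(0, set())
    for i in range(1, limit or 10 * n):
        E_adv = adversary(i, E_prev)
        E_cur = smooth(E_prev, E_adv)
        informed |= {v for e in E_cur if e & informed for v in e}
        if len(informed) == n:
            return i
        E_prev = E_cur
    return None  # flooding did not finish within the limit
\end{verbatim}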
We also show a matching lower bound, restructuring the proof of the lower bound in~\cite{DFGN18}. We note that their proof actually gives a lower bound of $\Omega\left(\min\set{n/k, n^{2/3}/k^{1/3}}\right)$ that applies to any $k < n/16$ (and is indeed tight for $k = O(\sqrt{n})$). We restructure the analysis of the lower bound to fit the different constructions given in the following sections, as follows.
First, we consider the first $R$ rounds, for some parameter $R$, and show inductively that the following two main claims continue to hold for every round $i < R$ with very high probability.
\begin{itemize}
\item Some pre-defined set of vertices stays uninformed.
\item Given the above, the \emph{expected} number of informed vertices in total does not grow rapidly.
\end{itemize}
The growth (in the expected number of informed vertices) follows a multiplicative-additive progression, potentially becoming an exponential growth (with a small exponent).
We choose $R$ such that the expected number of informed vertices is upper bounded by $\delta n$, and use Markov's inequality to show that with high probability the flooding is not yet over after $R$ rounds. We then apply a union bound over all the inductive steps, where each has a small probability of failing the inductive argument. Altogether, we show that with high probability, the flooding is not over after $R$ rounds.
In Appendix~\ref{Sec: appendix thm proof}, we use an argument along those lines in order to prove the following extensions of the lower bound from~\cite{DFGN18} to non-integer values and to fractional values (where $k < 1$).
\begin{theorem}
\label{thm: fractional LB 1}
Fix $1 \leq k \leq n/16$ (not necessarily an integer). For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega\left(\min\set{n/k, n^{2/3}/k^{1/3}}\right)$ rounds.
\end{theorem}
In particular, whenever $k = O(\sqrt{n})$, the dominant term is $\Omega(n^{2/3}/k^{1/3})$, matching the upper bound (up to logarithmic factors).
\begin{theorem}
\label{thm: fractional LB 2}
Fix $0 < k < 1$. For any $k$-smoothed dynamic graph, for the flooding process to succeed with probability at least $1/2$, it must run for $\Omega\left(\min\set{n, n^{2/3}/k^{1/3}}\right)$ rounds.
\end{theorem}
\subsection{Adaptive vs. oblivious adversary}
Here we note that the results of \cite{DFGN18} are for the case of an oblivious (non-adaptive) adversary.
We extend our results (in the more general, fractional noise regime) to the adaptive case, obtaining bounds that differ from the ones in \cite{DFGN18}.
Interestingly, our results show a separation between adaptive and oblivious adversaries for a wide range of network noise: for constant $k$, in particular, we get a separation between the adaptive case, where no constant amount of noise speeds up the flooding below $\Omega(n)$, and the oblivious case, where $k = \omega(1/n)$ already speeds up the flooding to $o(n)$ rounds.
\begin{theorem}
\label{thm: fractional adaptive UB}
Fix $0 < k \leq n/16$, not necessarily an integer. For any $k$-smoothed \emph{adaptive} dynamic graph, flooding takes $O(\min\set{n, n \sqrt{\log n / k}})$ rounds, w.h.p.
\end{theorem}
The proof follows an argument similar to that of Theorem~\ref{thm: fractional UB}, where the process is broken into three phases: $u$ spreads the message to $U_r$, which carries it to $V_r$, which is sure to deliver it to the destination $v$. The major difference here is that we can no longer rely on the last phase, since an adaptive adversary \emph{sees} the informed vertices and is able to act accordingly. We thus give more focus to the second phase, analyzing the chance of a direct noisy edge between the informed set $U_r$ and a destination vertex $v$.
\begin{proof}
First, note that in any case $n-1$ rounds are enough for flooding, as we still have a connectivity guarantee in each and every round. This means that for $k = O(\log n)$, the theorem does not improve upon the trivial upper bound. In particular, in the following we can disregard the case of $k < 1$.
We use a similar argument as in the proof of Theorem~\ref{thm: fractional UB}.
We note that the previous proof relied on an oblivious adversary when we statistically analysed the middle rounds, for each \emph{fixed} value of $V_r$ (which depends on the last $r$ rounds).
Assuming adaptivity, one can no longer use this analysis, and so we turn to a simpler one: we consider only two phases, the first $r$ rounds and the last $r$ rounds. First, connectivity assures us that enough vertices learn the message, and then we analyse the probability that one of them connects directly to the goal vertex $v$ by a noisy edge. As before, the first phase gives us $\size{U_r} \geq r$ with probability $1$. Next, we analyse the second phase, which corresponds to the middle rounds of the previous proof.
\subparagraph{The second phase.}
Note that if $v \in U_{r}$, then we are done. Otherwise, we consider the set $S = U_{r} \times \set{v}$ of potential edges, and show that one of them belongs to our smoothed dynamic graph in at least one of the intermediate graphs, with high probability. We use the fact that for $0 < k < 1$ the assertion is trivial, and analyse the case of $k \geq 1$.
In this case, as before, we apply Lemma~\ref{lem: LB hitting a set of edges} for each round in this phase, and conclude that the probability of not adding any edge from $S$ is at most
\[1 - c_1 \floor{k} \size{S} / n^2 \leq
1 - c_1 k r / 2n^2 ,\]
where the inequality follows from $\floor{k} \geq k/2$ and $\size{S} = \size{U_{r}}$.
Thus, the probability of not adding any edge from $S$ in any of these $r$ noisy rounds is upper bounded by
\[\left(1 - c_1 k r / 2n^2 \right)^{r} \leq e^{-c_1 r^2 k /(2n^2)},\]
which is upper bounded by $n^{-2}$, for $r = 2n \cdot \sqrt{\log n/(c_1 k)}$.
\medskip
Using this value for $r$, for any value $k \geq 1$, and realization of $U_{r}$, the message has been passed to $v$ with probability at least $1 - n^{-2}$, within the whole $R=2r$ rounds of phase one and two. A union bound over all vertices $v \neq u$ implies that $R$ rounds are enough to flood to the whole network with probability at least $1-1/n$.
\end{proof}
When $k$ is a constant, the above result is tight, which we prove by adjusting the oblivious dynamic network from Theorem~\ref{thm: fractional LB 1} to use the adversary's adaptivity, as follows.
The basic structure of the hard instance for flooding is achieved by roughly splitting the graph into \emph{informed} and \emph{uninformed} vertices. In order to ensure low diameter, all the uninformed vertices are connected as a star, centered at a changing uninformed vertex called a \emph{head}, which in turn connects them to a star composed of the informed vertices.
The key idea in the analysis of this process is that the head at each round should not have become informed via noisy edges at an earlier round, as in that case it would immediately propagate the message to all the uninformed vertices, completing the flooding.
An oblivious adversary picks a sequence of heads at the beginning of the process, and this invariant is kept since the probability that any of the selected heads gets informed too early is low. However, after roughly $n^{2/3}$ rounds, the probability that a selected head is informed becomes too high.
An adaptive adversary, on the other hand, can continue crafting a hard instance graph for a linear number of rounds --- in each round, the adversary knows which vertices are uninformed and can pick one of them as the new head, thus overcoming this obstacle.
\iffalse
The main idea is as follows: the hard instance for an oblivious adversary consists of two star graph connected through a bridge, roughly splitting the graph to \emph{informed} and \emph{uninformed} vertices.
As each point in time, a different vertex is put at the center of the star graph of uninformed vertices; this vertex is called a \emph{head}.
Within $R$ rounds, there will only be $R$ different potential heads. A key in the analysis is the chance of such potential \emph{head} getting the message prematurely, as this event would immediately end the flooding process within $1$ round.
Using adaptivity, an adversarial network can easily pick and choose the next head to be a \emph{currently uninformed} vertex, thus overcoming this obstacle, attaining much power over its oblivious counterpart.
\fi
\begin{theorem}
\label{thm: fractional adaptive LB}
Fix $0 < k \leq n/16$, not necessarily an integer. For any $k$-smoothed \emph{adaptive} dynamic graph, for the flooding process to succeed with probability at least $1/2$ it must run for $\Omega(\min \set{n, n \log k / k})$ rounds.
\end{theorem}
\begin{proof}
We start by formally defining a hard case for flooding with noise.
The basis of our definition is the \emph{spooling graph}, which was defined by Dinitz \textit{et al.}\xspace~\cite{DFGN18}:
at time $i$, the edges are $\set{(j,i)}_{j=1}^{i-1}$, a star around the vertex~$i$ (this will be the informed spool),
and the edges $\set{(i+1,j)}_{j=i+2}^{n}$, a star around the vertex $i+1$, which will be the right spool, with vertex $i+1$ crowned as the \emph{head}. The spools are then connected using the additional edge $(i, i+1)$.
We define the \emph{adaptive spooling graph} as follows: at first, $E_1 = ([n]\setminus\set{2}) \times \set{2}$. After the first round, $2$ learns the message and is moved to the left (informed) spool. We denote by $I_i$ the set of informed vertices at the end of round $i$, and by $u_i$ the lowest index of an uninformed vertex at the end of round $i$; formally, $u_i = \min\set{\bar{I}_i}$. We define the adaptive spooling graph at time $i+1$ to have the edge set
\[E_{i+1} = \set{1} \times (I_i\setminus \set{1}) \ \cup \ \set{(1,u_i)} \ \cup \ \set{u_i} \times (\bar{I}_i \setminus \set{u_i}).\]
In this graph, essentially, $1$ is connected to all already-informed vertices, and also to the next head $u_i$, which is connected to all other uninformed vertices. The static diameter stays $3$ at each iteration, but now we are promised that at iteration $i+1$, the new head $u_{i}$ is uninformed at the beginning of said round.
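For concreteness, the edge set of the adaptive spooling graph can be written as follows (an illustrative Python sketch; the naming is ours):
\begin{verbatim}
def adaptive_spooling_edges(n, informed):
    # informed = I_i (contains vertex 1); vertices are 1..n.
    u = min(v for v in range(1, n + 1) if v not in informed)
    E  = {frozenset({1, v}) for v in informed if v != 1}
    E |= {frozenset({1, u})}          # vertex 1 feeds the new head
    E |= {frozenset({u, v}) for v in range(1, n + 1)
          if v not in informed and v != u}
    return E
\end{verbatim}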
We next show that the \emph{expected} number of informed vertices cannot grow by much.
\begin{claim}
\label{claim: expected informed vertices}
When taking expectation over the noise we add, we have
$$\Expc{\size{I_{i+1}}} \leq (1+c_2 k/n)\cdot \Expc{\size{I_{i}}} + 1 .$$
\end{claim}
Note that $I_i$ does not depend on the noise of rounds $i+1,\dots,R$, and so the left side takes expectation over $i+1$ rounds of noise, and the right side over $i$ rounds.
Intuitively, the claim states that the expected growth in the number of \emph{informed} vertices is bounded by an additive-multiplicative progression (i.e., at each step the amount grows multiplicatively and then a constant is added). This is true as the noisy edges induce a multiplicative growth, and connectivity forces at least one additional vertex (in fact, $u_i$ itself) to receive the message. The proof of the claim is deferred to Appendix~\ref{Sec: appendix thm proof}.
Using the claim, we analyze the progression of $A_i = \Expc{\size{I_i}}$, splitting into two cases:
\begin{enumerate}
\item For $A_i \leq n/(c_2 k)$, the additive term is larger, and so in this range $A_{i+1} \leq A_i + 2$.
\item For $A_i \geq n/(c_2 k)$, the multiplicative term overcomes and we get $A_{i+1} \leq (1 + 2c_2 k/n)A_i$.
\end{enumerate}
Note that we also have $A_{i+1} \geq A_i + 1$, using connectivity, showing these bounds on the progression are rather tight.
We next split into cases: if $k < 1/c_2$, the additive term controls the process for $\Theta(n)$ rounds, giving us $A_{n/20} \leq n/10$.
Otherwise, the multiplicative term would come into play.
We denote the number of additive rounds by $r_0$: the minimal index such that $A_{r_0} \geq n/(c_2 k)$.
Since $A_0 = 1$ (only the vertex $1$ knows the message at the beginning), and we have good bounds on the progression, we conclude $r_0 = \Theta(n/k)$.
Next, we allow additional $r_1$ rounds in which the multiplicative term is dominant. We get
\[A_{r_0 + r_1} \leq A_{r_0} (1+2c_2 k/n)^{r_1} \leq \Theta(n/k) (1+2c_2 k/n)^{r_1}.\]
Taking $r_1 = \delta(n\log k / k)$ with small enough $\delta$, we have $A_{r_0+r_1} = \Expc{\size{I_{r_0+r_1}}}\leq n/10$.
In both cases, there is a round~$R$ with $\Expc{\size{I_R}} \leq n/10$.
By Markov's inequality,
strictly less than $n$ vertices are informed after $R$ rounds, w.p. at least~$0.9$.
Note that for the first case $R = \Theta(n)$, and for the second case $R = r_0 + r_1 = O(n\log k /k)$, concluding the proof.
\end{proof}
\section{Responsive noise}
\label{sec: responsive noise}
In this section we consider \emph{responsive} noise in the dynamic network, where the noise incurred in each round relates to the changes made in that round.
We consider this model as a complement to the one discussed in the previous section: the parameter $k$ of ``noise per round'' is fit to model some internal background noise in the system.
However, in a more realistic model we would expect the noise to behave differently in times of major changes in the network, as opposed to rounds where no changes were made at all.
To this end, we introduce two variants of responsive noise: one is \emph{proportional} noise, and the other is \emph{targeted} noise.
In the first, the number of noisy edges relates to the number of changes that occurred in the last step.
In the second variant, we expect the noise to specifically ``target'' newly modified edges. This could correspond to an adversary with a somewhat limited ability to change the graph.
In the responsive model, an interesting phenomenon occurs: an adversarial network can choose to stay still, in order to force a round where no noise occurs.
For the flooding problem, this strength is limited: whenever the static diameter of each round is upper bounded by $D$, the waiting game can last at most $D-1$ rounds. We show how this model affects the analysis of the upper bound, and yet again incorporate the phenomenon described above to devise a non-trivial lower bound.
\subsection{Proportional noise}
\begin{theorem}
\label{thm: FP - UB}
Fix $\epsilon > 0$, and consider an $\epsilon$-proportionally smoothed dynamic graph with static diameter at most $D$. If no round of noise invokes more than $n/16$ changes, the flooding process finishes after $O(n^{2/3} \cdot ( D \log n / \epsilon )^{1/3})$ rounds, w.h.p.
\end{theorem}
The proof resembles the three-phase proof of Theorem~\ref{thm: fractional UB}, and handles the ``waiting game'' mentioned above. Given static diameter $D$, an adversarial network can only stay intact for $D-1$ consecutive rounds; thus, in the second phase, at least a $1/D$ fraction of the rounds introduce one changed edge (or more), implying that w.h.p. noise is incurred in an $\Omega(\epsilon/D)$ fraction of the rounds.
\begin{proof}
We start by noting that if $n \leq c \cdot D \log n / \epsilon$, for some constant $c$, this bound is larger than the trivial $O(n)$ guaranteed by connectivity and we are done. Next, we follow a similar argument as the one in the proof of Theorem~\ref{thm: fractional UB}.
For the oblivious case, we had three phases: the first $r$ rounds and the last $r$ rounds always guarantee connectivity, and so we have the random sets $U_r, V_r$ such that $\size{U_r}, \size{V_r} \geq r$ with probability $1$. If the sets intersect we are done, so assume they do not, and denote $S = U_r \times V_r$. Note that $\size{S} \geq r \cdot r = r^2$.
Note that the size of $S$ is guaranteed even though its realization depends on the first and last rounds. Since the adversary is not adaptive, we can turn to analysing the chance of adding a noisy edge from the set $S$ during the $r$ middle rounds. We note that the adversary can now play the waiting game in order to force rounds where no noise is invoked.
\subparagraph{The middle rounds.}
As we are promised a static diameter of at most $D$, we know that if the network does not change for $D$ consecutive rounds, the flooding is guaranteed to finish successfully. Therefore, we can assume that within the $r$ middle rounds, at least once every $D$ rounds there is a round where changes happen, and therefore potential noise might occur.
This guarantees us $r/D$ such rounds overall,
where in each of them noise occurs independently with probability at least $\epsilon$ (corresponding to the amount of noise when only a single change is made).
We apply a standard Chernoff bound to say that with all but probability at most $e^{-0.4\epsilon r / D}$, we have $\epsilon r/(10D)$ rounds in which at least one noisy edge was added.%
\footnote{The expectation over all rounds is at least $\epsilon r / D$ rounds with noisy edges.}
We later bound this term.
\hide
{
Using the same argument as in the proof of Theorem~\ref{thm: fractional UB} (for the case $k < 1$) we get by Chernoff bound that with $90\%$, at least $\epsilon r / (10D)$ rounds invoked noise of at least a single edge.
}
We write $t_i$ for the amount of noise at each such round, for which we know $1 \leq t_i \leq n/16$, using the premise. Now, one can safely apply Lemma~\ref{lem: LB hitting a set of edges} with $S$ for each such round, to upper bound the probability of not adding any such potential edge:
\[\left(1 - c_1 t_i\size{S} / n^2 \right) \leq \left(1 - c_1 r^2 / 2n^2 \right),\]
where the inequality simply follows from $t_i \geq 1$ and $\size{S} \geq r^2$.
Thus, the probability of not adding any edge from $S$ in any of the $\epsilon r/(10D)$ noisy rounds is upper bounded by
\[\left(1 - c_1 r^2 / 2n^2 \right)^{\epsilon r/(10D)} \leq e^{-c_1 r^3 \epsilon /(20D n^2)},\]
which is upper bounded by $1/(2n^2)$, whenever $r \geq 20 n^{2/3} \cdot \left[D \log n / (c_1\epsilon) \right]^{1/3}$.
\medskip
We now recall our assumption that $n \geq c \cdot D \log n /\epsilon$, for the right constant $c$ (as a function of the constant $c_1$). In this case, we also get $r \geq 10 D\log n / \epsilon$, for which the probability of too few noisy rounds is bounded by $e^{-0.4\epsilon r / D} \leq e^{- 3 \log n} \leq 1/(2n^2)$.
By a union bound we get the following: using $r = 20 n^{2/3} \cdot \left[D \log n / (c_1\epsilon) \right]^{1/3}$, for any $D \leq \epsilon n /(c \log n)$, and any realization of $U_r, V_r$, the message has been passed to $V_r$ within the $r$ middle rounds with probability at least $1 - n^{-2}$, and so after $R = 3r$ rounds, $v$ received the message with the same probability. Using a union bound over all vertices in the network, we conclude that $R$ rounds suffice for flooding with probability at least $1-1/n$.
\hide
{
To conclude, using $r = 20 n^{2/3} D^{1/3}/ (c_1\epsilon)^{1/3}$: for any realization of $U_{r}, V_{r}$, the message has been passed to $V_r$ during the middle rounds (and therefore to $v$ by the end of all rounds) with probability at least $1/2$.
By repeating the process for $2\log n$ times, we have failure probability of $n^{-2}$ for each non-source vertex $v$. A union bound over all such vertices implies that $R\log n$ rounds are enough to flood to the whole network with probability at least $1-1/n$.
}
\end{proof}
Adjusting the same argument for an \emph{adaptive} network, using only two phases (same as Theorem~\ref{thm: fractional adaptive UB}), would give an upper bound of $O(n \cdot \sqrt{D \log n / \epsilon})$, which in this case does not improve upon the trivial $O(n)$ promised by connectivity alone.
We proceed to show that this is tight: in the proportional noise model, the changes in the network can be so subtle that they invoke little noise, and asymptotically the flooding barely speeds up at all. We show the following lower bound, and note that the graph sequence constructed in the proof has constant $D$, where it is hardest to manipulate.
\begin{theorem}
\label{thm: FP - adaptive LB}
Fix $\epsilon \leq 1/5$.
There exists an $\epsilon$-proportionally smoothed dynamic graph with an adaptive adversary, where a flooding process must run for $\Omega(n)$ rounds in order to succeed with probability at least $1/2$.
\end{theorem}
In the current, adaptive model, the spooling graph no longer serves as a slow-flooding graph: changing the center of the uninformed star takes $\Omega(n)$ edge changes in early rounds, and the proportional noise invoked by these changes would be devastating (e.g., for~$\epsilon = \Omega(1)$).
Instead, in the following proof, the adversary relies on its adaptivity in order to maintain control over the network. This is done using a sequence of graphs that require only $O(1)$ changes at each round.
\begin{proof}
We modify the spooling graph in a way that only few edges are changed at each step. Let $G_1,\dots, G_{n-1}$ be the sequence of graphs over $[n]$, each having the edge set
\[E_i = \set{(1,j)\mid j\leq i} \cup \set{(j,n)\mid j \geq i}.\]
Note that $G_i$ is connected through the vertex $i$. These graphs are essentially two star graphs, one around $1$ and one around $n$, connected through one other vertex (a different one in each time frame). We associate the star centered at $1$ with the set of \emph{informed} vertices and the star centered at $n$ with the set of \emph{uninformed} vertices (as $1$ is the vertex from which the flooding begins).
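For concreteness, a Python sketch of this edge set (illustrative only; the naming is ours):
\begin{verbatim}
def two_star_edges(n, i):
    # G_i: a star around 1 (informed side) and a star around n
    # (uninformed side), joined through the bridge vertex i.
    E  = {frozenset({1, j}) for j in range(2, i + 1)}  # (1,j), j <= i
    E |= {frozenset({j, n}) for j in range(i, n)}      # (j,n), j >= i
    return E
\end{verbatim}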
Ideally, we would like to say that at each time connectivity is preserved via the vertex $i$, and that as soon as it receives the message, it is cut off from the uninformed part of the graph. However, the added noise would impair this argument. As we wish to use a low-maintenance dynamic graph, we cannot afford to replace the center. Therefore, we surgically cut off from the uninformed star all the vertices that learned the message via edges added by noise.
This forces us to incorporate the concept of \emph{adaptivity} into the lower bound.
Let $G_1$ be as before. We define the rest of the graphs in the sequence adaptively. Let $I_i$ be the set of vertices that received the message by the end of round $i$, and let $\bar{I}_i = V \setminus I_i$ be the set of uninformed vertices at that time. We define the connecting vertex of the next graph as $u_{i+1} = \min(\bar{I}_i)$. The graph $G_{i+1}$ in the sequence is then defined using
\[E_{i+1} = \set{(1,j)\mid j\in I_i} \cup \set{(j,n)\mid j\in\bar{I}_i} \cup \set{(1,u_{i+1})}.\]
Consider the first $R = \delta n$ rounds of the process, for small enough $\delta$. We show that as long as no edge of $I_i \times \set{n}$ is added by noise at any round $i+1$ (an event we later show to have small probability), we have:
\begin{enumerate}
\item The number of changes required at each round is at most a constant.
\item At each point in time, $\size{I_j} \leq 3j$.
\end{enumerate}
For the first item, note that $2 \leq \size{G'_{i-1}\oplus G_{i}} \leq 5$. Indeed, in the first round we only change $2$ edges, replacing $(u_1,n)$ by $(1,u_2)$.
By induction, for any round $i > 1$, if noise was invoked in the previous round, then since $5 \epsilon \leq 1$, at most one extra edge was changed in $G'_{i-1}$. There are three options for the toggled edge:
\begin{itemize}
\item If an edge was added by noise, connecting an informed vertex with some uninformed vertex $w \neq n$, we remove that edge, cut off the edge $(w,n)$ as well, and add $(1,w)$ instead.\footnote{Leaving $w$ connected through the vertex $1$, rather than through the vertex that sent it the message, keeps our low diameter intact. A more freely defined graph with no diameter guarantees could simply remove $(w,n)$ and leave $w$ connected to the informed spool.}
\item If an edge was added by noise, connecting two uninformed vertices, we cut it off immediately (rather than analysing its effect later, when one of them becomes informed).
\item If any other edge was connected (or disconnected) by noise, we simply revert the change, to keep our graph in line with its definition.
\end{itemize}
Either way, we need to add two more changes (if not done already above) which consist of removing the edge $(u_{i-1},n)$ and adding the edge $(1,u_i)$ instead. In total, the number of changes in the next round stays between $2$ and $5$, proving the induction step.
For the second item, note that the number of noisy edges produced at each round is at most $1$, which means that at most $2$ new vertices learn the message in each round, including the one connecting the graph. This means that $\size{I_i} \leq \size{I_{i-1}} + 2$, and hence $\size{I_j} \leq 1 + 2j \leq 3j$.
As both of the above claims hold as long as $n$ has not yet learnt the message, we are left to show that with high probability, within $R = \delta n$ rounds, this is indeed the case. We union bound over all $R$ rounds: as long as it has not happened yet, the probability of $n$ getting the message in the next round is that of a noisy edge connecting $I_i$ to the vertex $n$.
We apply Lemma~\ref{lem: UB adding from a set of edges} for each round $i < R$, using $S_i = I_i \times \set{n}$, and assuming $n$ is not yet informed. Thus, the probability is at most:
\[c_2 \size{S_i} / n^2 \leq c_2 \cdot 3R / n^2\]
since $\size{S_i} = \size{I_i} \leq 3i \leq 3R$, and the number of noisy edges is at most $1$. Applying a union bound over all $R$ rounds, the probability of vertex $n$ being prematurely informed is at most
\[3c_2 \cdot R^2 / n^2 \leq 3c_2 \delta^2.\]
For small enough $\delta$, this is at most $0.1$, so with probability at least $0.9$ after $R$ rounds we indeed have at most $3R < n$ informed vertices, which means the flooding is not yet over.
\end{proof}
\subsection{Targeted noise}
For targeted noise, this new phenomenon is counterbalanced by the limited ability of the adversary to make changes. We can think of targeted noise as a sort of ``slow down'': the network repeatedly tries to modify some of the edges, eventually succeeding, but only after a number of rounds has passed.
In this last model we show just how strong the waiting game can be: for a graph with constant static diameter (which makes the waiting game obsolete), flooding takes $O(\log n)$ rounds, with high probability. However, for larger values of $D$, the same analysis quickly fails. For $D = \Theta\left(\sqrt{\log n}\right)$ we get the trivial bound of $O(n)$. We finish by showing an explicit construction with static diameter $D = \Theta\left(\log n\right)$ that relies strongly on the waiting game and admits a lower bound of $\Omega(n)$ rounds.
\begin{theorem}
\label{thm: flooding targeted UB}
For any $\epsilon$-targeted smoothed dynamic graph with static diameter $D$, flooding can be done in $O(D \log n /\epsilon^{D^2})$ rounds, w.h.p.
\end{theorem}
Note that for $D = o(\sqrt{\log_{1/\epsilon} n})$, this improves upon the trivial fact that $n$ rounds are enough (as at each round, one new vertex must be informed, since the graph is connected). Specifically, for a constant static diameter, we get $O(\log n)$ rounds.
\begin{proof}
Fix the starting vertex $v_0$ which is informed.
First, we show that the probability of a vertex $v\neq v_0$ staying uninformed after $D$ rounds is at most $1-\epsilon^{D^2}$.
Consider $D$ consecutive graphs $G_1,\ldots, G_D$ in the smoothed graph. As the static diameter is at most $D$ in every round, there exists a path $P_v$ from $v_0$ to $v$ in $G_1$, whose length is at most $D$. Since we deal with targeted smoothing, each edge of $P_v$ that exists in a graph $G_i$ for some $1\leq i<D$ exists in $G_{i+1}$ with probability either $1$ or $\epsilon$. So, for each such $i$, if all the edges of $P_v$ exist in $G_i$ then they all exist in $G_{i+1}$ with probability at least $\epsilon^D$.
Hence, the probability of the path $P_v$ existing in all $D$ graphs is at least $(\epsilon^D)^D = \epsilon^{D^2}$.
Fix a positive integer $t$, and consider $t$ consecutive sequences of $D$ graphs each.
On each sequence, we apply the above claim, and conclude that the probability of $v$ staying uninformed after these $tD$ rounds is at most
\[(1-\epsilon^{D^2})^t \leq e^{-t\cdot \epsilon^{D^2}}.\]
For $t = (c+1)\log n \cdot (1/\epsilon)^{D^2}$ sequences, the probability of $v$ not being informed after $tD$ rounds is at most $n^{-(c+1)}$. A union bound over all $n-1$ vertices that need to be informed implies that $tD$ rounds suffice to fully flood the network with probability at least $1-n^{-c}$.
\end{proof}
The above theorem implies that if the static diameter of all graphs in the sequence is small, roughly $O(\sqrt{\log n})$, flooding is fast.
Next, we show that this is almost tight: if the diameter is in $\Omega(\log n)$, flooding cannot be done faster than in non-smoothed graphs.
\begin{theorem}
\label{thm: flooding targeted LB}
For every constant $0<\epsilon<1$,
there is a value $D\in \Theta(\log n)$
and an $\epsilon$-targeted smoothed dynamic graph
such that with high probability,
the diameter of the graph is $D$ and flooding on it takes $n-1$ rounds.
\end{theorem}
In order to prove this theorem, we present the \emph{dynamic cassette graph} (see Figure~\ref{fig: cassette}).
Fix $\epsilon$ and $n$ as in the theorem statement, and let $t= \floor{c\log_{1/\epsilon} n}$ for a constant $c$ of choice.
The dynamic cassette graph on vertices $V = \set{0,\dots, n-1}$ is the dynamic graph $\mathcal{H}= (G_1, \dots, G_n)$, where $G_i=(V,E_i)$ is defined by
\begin{align*}
E_i = & \set{(j,j+1)\mid 0\leq j< n-1} \cup \\
& \set{(0,jt)\mid 1\leq j\leq\floor{(i-1)/t}} \cup
\set{(jt,n-1)\mid \floor{(i-1)/t}+2\leq j\leq \floor{(n-2)/t}}
.
\end{align*}
\begin{figure}
\centering
\includegraphics[scale=0.3]{cassette-new.png}
\caption{The cassette graph, $G^t_j$, where $j=kt$ and $n$ is some multiple of $t$.
}
\label{fig: cassette}
\end{figure}
This graph is the path on $n$ vertices, with some additional edges connecting the first and last vertices to vertices in the set $\set{jt \mid 1\leq j\leq (n-2)/t}$; these will be referred to as \emph{shortcut vertices}, and the additional edges to them, \emph{shortcut edges}.
At the first graph, $G_1$, all shortcut vertices but the first are connected to the last vertex, $n-1$.
Then, one by one, the shortcut vertices disconnect from $n-1$, and soon after --- connect to $0$.
At each time interval $[(j-1)t+1,jt]$, all the shortcut vertices with index strictly smaller than $jt$ are connected to the vertex $0$, and all those with index strictly higher than $jt$ are connected to~$n-1$.
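For concreteness, the edge set $E_i$ of the cassette graph can be generated as follows (an illustrative Python sketch; the naming is ours):
\begin{verbatim}
def cassette_edges(n, t, i):
    # G_i of the dynamic cassette graph on vertices 0..n-1.
    E = {frozenset({j, j + 1}) for j in range(n - 1)}   # the path
    k = (i - 1) // t
    E |= {frozenset({0, j * t}) for j in range(1, k + 1)}
    E |= {frozenset({j * t, n - 1})
          for j in range(k + 2, (n - 2) // t + 1)}
    return E
\end{verbatim}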
Consider the \emph{smoothed cassette graph} $\mathcal{H}'$, i.e., the $\epsilon$-targeted smoothed dynamic graph derived from $\mathcal{H}$.
The dynamic graph $\mathcal{H}'$ can be interpreted as undergoing the following process:
during each time interval $[(j-1)t+1,jt]$, the adversary repeatedly tries to add a new edge $(0,(j-1)t)$, and remove the edge $((j+1)t, n-1)$.
The targeted noise creates a slowdown that might prevent this from happening right away,
yet for the right value of $t$, both changes indeed happen by the end of the time interval w.h.p. We state the following claim and direct the reader to Appendix~\ref{Sec: appendix thm proof} for a full proof.
\begin{claim}
\label{claim: cassette specific shortcut edges whp}
For each $2\leq j\leq (n-2)/t$, the smoothed graph $G'_{jt}$
does not contain the edge $(0,(j-1)t)$ with probability at most $n^{-c}$,
and contains the edge $((j+1)t, n-1)$ with the same probability.
\end{claim}
If the edge $(0,(j-1)t)$ exists in the smoothed graph $G'_{jt}$,
then it also exists in all later graphs $G'_{j'}$ with $j'>jt$.
Similarly, if the edge $((j+1)t, n-1)$ does not exist in this graph, it also does not appear in later graphs.
A union bound thus extends the last claim as follows.
\begin{claim}
\label{claim: cassette all shortcut edges whp}
For each $2\leq j\leq (n-2)/t$, the smoothed graph $G'_{jt}$ and all later graphs
contain the edge $(0,(j-1)t)$ with probability at least $1-n^{-c+1}$,
and all these graphs do not contain the edge $((j+1)t, n-1)$ with the same probability.
\end{claim}
Using this claim, Theorem~\ref{thm: flooding targeted LB} can easily be proven.
\begin{proof}[Proof of Theorem~\ref{thm: flooding targeted LB}]
Consider the smoothed dynamic cassette graph $\mathcal{H}'$. We start by analyzing its diameter.
Let $G'_{j'}$ be a graph in $\mathcal{H}'$, and pick a $j$ such that $jt\leq j'< (j+1)t$.
By Claim~\ref{claim: cassette all shortcut edges whp}, the graph $G'_{jt}$ and all later graphs contain the edge $(0,(j-1)t)$ with probability at least $1-n^{-c+2}$.
In addition, all the graphs $G_1,\ldots,G_{(j+1)t-1}$ contain the edge $((j+2)t, n-1)$ (and note that this is not a probabilistic claim).
The distance between every two shortcut vertices is $t$, so the distance from every vertex in the graph to the closest shortcut vertex is at most $t/2$.
Each shortcut vertex is directly connected to either $0$ or $n-1$ w.h.p., except for $jt$, which is connected to both by a path of length $t+1$.
Finally, between $0$ and $n-1$ there is a path of length $2t+2$, through $(j-1)t$ and $(j+1)t$.
Let us bound the length of a path between two vertices $i,i'$ (with upper bounds on each part): from $i$ to its closest shortcut vertex ($t/2$ hops), to $0$ or $n-1$ ($t+1$ hops), possibly to the other vertex among $0$ and $n-1$ ($2t+2$ hops), to the shortcut vertex closest to $i'$ ($t+1$ hops), and to $i'$ ($t/2$ hops).
This sums to a path of length at most $5t+4=\Theta(\log n)$.
A more detailed analysis can reduce this to roughly $3t$ hops.
For the flooding time, we use Claim~\ref{claim: cassette all shortcut edges whp} again, but now for the edges $((j+1)t, n-1)$ that do not appear in $G'_{jt}$ and all later graphs w.h.p.
A simple induction on $j'=0,\ldots,n-1$ shows that after $j'$ rounds, i.e. in graph $G_{j'}$, only vertices $0,\ldots,j'$ are informed.
The base case is trivial. For the step, the only edges connecting informed vertices to uninformed vertices are $(j',j'+1)$, and edges from $0$ to shortcut vertices, which have the form $(0,tj)$ with $tj\leq j'$; this follows from the construction and always holds.
The only other type of possible edges connecting informed and uninformed vertices are of the form $(jt,n-1)$, with $jt\leq j'$. However, the claim implies that w.h.p., by round $j'$ none of these edges exist.
Hence, before round $n-1$ not all vertices are informed, and flooding takes $n-1$ rounds w.h.p.
\end{proof}
\hide{end of real content}
\iffalse
\subsection{Aggregation}
\begin{theorem}
\label{thm: aggregation lower bound}
For every algorithm $A$, there exists a dynamic pairing graph $\mathcal{H}$ s.t. with probability at least $1/2$, $A$'s aggregation factor is $\Omega(n)$ times worse than the offline optimal aggregation factor achievable in a $(\epsilon,k)$-smoothed version of $\mathcal{H}$. Where $k \leq n/ (32\log^2 n)$.
\end{theorem}
\Acomment{As for $D$ before, what is exactly the condition on $K$ here?}
\begin{proof}
Take Seth's lower bound and repeat every step $\log n$ times to get rid of the multiplicative noise.
\end{proof}
\subsection{Consensus}
\fi
\section{Introduction}
The set of all continuous functions defined on the interval $I$ is
denoted by $C(I)$. For any $f\in C([0,1])$, the corresponding
\emph{Bernstein operators} are defined as follows:
$$B_n(f,x):=\sum_{k=0}^nf(\frac{k}{n})p_{n,k}(x),$$
where \begin{align*}p_{n,k}(x):={n \choose k}x^k(1-x)^{n-k}, \
k=0,1,2,\ldots,n, \ x\in[0,1].\end{align*}
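For illustration, $B_n(f,x)$ is straightforward to evaluate numerically; a minimal Python sketch (the naming is ours):
\begin{verbatim}
from math import comb

def bernstein(f, n, x):
    # B_n(f, x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# bernstein(lambda u: u * u, 50, 0.3) returns 0.0942, matching the
# identity B_n(t^2, x) = x^2 + x(1 - x)/n at x = 0.3, n = 50.
\end{verbatim}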
Approximation properties of Bernstein operators have been studied
extensively (see \cite{Berens}, \cite{Della},
\cite{Totik}-\cite{Lorentz}, \cite{Yu}-\cite{X. Zhou}, for example).
In order to approximate functions with singularities, Della
Vecchia et al. \cite{Della} and Yu-Zhao \cite{Yu} introduced certain
\emph{modified Bernstein operators}. Throughout the paper,
$C$ denotes a positive constant, independent of $n$ and $x$, which
may differ at each occurrence.\\
Let
\begin{align*}
w(x)=x^\alpha(1-x)^\beta,\ \alpha,\ \beta \geqslant0,\ \alpha +
\beta >0,\ 0 \leqslant x \leqslant 1.
\end{align*}
and
\begin{align*}
C_w:= \{{f \in C((0,1)) :\lim\limits_{x\longrightarrow
1}(wf)(x)=\lim\limits_{x\longrightarrow 0}(wf)(x)=0 }\}.
\end{align*}
The \emph{norm} in $C_w$ is defined by
$\|wf\|_{C_w}:=\|wf\|=\sup\limits_{0\leqslant x\leqslant
1}|(wf)(x)|$. Define
\begin{align*}
W_{w,\lambda}^{r}:= \{f \in C_w:f^{(r-1)} \in A.C.((0,1)),\
\|w\varphi^{r\lambda}f^{(r)}\|<\infty\}.
\end{align*}
For $f \in C_w$, define the \emph{weighted modulus of smoothness} by
\begin{align*}
\omega_{\varphi^\lambda}^{r}(f,t)_w:=\sup_{0<h\leqslant
t}\{\|w\triangle_{h\varphi^\lambda}^{r}f\|_{[16h^2,1-16h^2]}+\|w\overrightarrow{\triangle}_{h}^{r}f\|_{[0,16h^2]}+\|w\overleftarrow{\triangle}_{h}^{r}f\|_{[1-16h^2,1]}\},
\end{align*}
where
\begin{align*}
\Delta_{h\varphi}^{r}f(x)=\sum_{k=0}^{r}(-1)^{k}{r \choose
k}f(x+(\frac r2-k)h\varphi(x)),\\
\overrightarrow{\Delta}_{h}^{r}f(x)=\sum_{k=0}^{r}(-1)^{k}{r
\choose k}f(x+(r-k)h),\\
\overleftarrow{\Delta}_{h}^{r}f(x)=\sum_{k=0}^{r}(-1)^{k}{r \choose
k}f(x-kh),
\end{align*}
and $\varphi(x)=\sqrt{x(1-x)}$. Della Vecchia \emph{et al.} first
introduced $B_{n}^{\ast}(f,x)$ and ${\bar{B}}_{n}(f,x)$ in
\cite{Della}, where their properties are studied. Among other
results, they prove that
\begin{align*}
\|w(f-B_{n}^{\ast}(f))\|\leqslant
C\omega_{\varphi}^{2}(f,n^{-1/2}),\ f\in C_{w},\\
\|{\bar{w}}(f-{\bar{B}_{n}(f)})\|\leqslant
\frac{C}{n^{3/2}}\sum_{k=1}^{[\sqrt{n}]}k^{2}\omega_{\varphi}^{2}(f,\frac{1}{k})_{\bar{w}}^{\ast},\
f\in C_{\bar{w}},
\end{align*}
where $w(x)=x^{\alpha}(1-x)^{\beta},\ \alpha,\ \beta\geqslant 0,\
\alpha+\beta>0,\ 0\leqslant x \leqslant1.$ In \cite{S. Yu}, for any
$\alpha,\ \beta>0,\ n\geqslant 2r+\alpha+\beta$, there hold
\begin{align*}
\|wB_{n,r}^{\ast}(f)\|\leqslant C\|wf\|,\ f\in C_{w},\\
\|w(B_{n,r}^{\ast}(f)-f)\|\leqslant \left\{
\begin{array}{lrr}
{\frac{C}{n^{r}}} (\|wf\|+\|w\varphi^{2r}f^{(2r)}\|), f\in
W_{w}^{2r}, \\
C(\omega_{\varphi}^{2r}(f,n^{-1/2})_{w}+n^{-r}\|wf\|), f\in C_{w}.
\end{array}
\right.\\
\|w\varphi^{2r}B_{n,r}^{\ast(2r)}(f)\|\leqslant \left\{
\begin{array}{lrr}
Cn^{r}\|wf\|, f\in C_{w}, \\
C(\|wf\|+\|w\varphi^{2r}f^{(2r)}\|), f\in W_{w}^{2r}.
\end{array}
\right.
\end{align*}
and for $0< \gamma <2r,$
$$\|w(B_{n,r}^{\ast}(f)-f)\|=O(n^{-\gamma/2}) \Longleftrightarrow
\omega_{\varphi}^{2r}(f,t)_{w}=O(t^{\gamma}).$$
Ditzian and Totik \cite{Totik} extended this method of combinations
and defined the following combinations of Bernstein operators:
\begin{align*}
B_{n,r}(f,x):=\sum_{i=0}^{r-1}C_{i}(n)B_{n_i}(f,x),
\end{align*}
with the conditions
\begin{enumerate}[(a)]
\item $n=n_0<n_1< \cdots <n_{r-1}\leqslant
Cn,$\\
\item $\sum_{i=0}^{r-1}|C_{i}(n)|\leqslant C,$\\
\item
$\sum_{i=0}^{r-1}C_{i}(n)=1,$\\
\item $\sum_{i=0}^{r-1}C_{i}(n)n_{i}^{-k}=0$,\ for $k=1,\ldots,r-1$.
\end{enumerate}
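For instance (a standard illustration, not taken from \cite{Totik}),
for $r=2$ one may choose $n_0=n$ and $n_1=2n$; conditions (c) and (d)
then force $C_0(n)=-1$ and $C_1(n)=2$, so that
$$B_{n,2}(f,x)=2B_{2n}(f,x)-B_{n}(f,x),$$
while (a) and (b) hold with $C=2$ and $C=3$, respectively.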
\section{The main results}
Now, we can define our \emph{new combinations of Bernstein
operators} as follows:
\begin{align}
B^\ast_{n,r}(f,x):=B_{n,r}(F_n,x)=\sum_{i=0}^{r-1}C_{i}(n)B_{n_i}(F_{n},x),\label{s1}
\end{align}
where $C_{i}(n)$ satisfy the conditions (a)--(d); for details we
refer to \cite{S. Yu}. Our main results are the following:
\begin{thrm}\label{t1} If \ $\alpha, \ \beta >0,$ for any $f \in
C_w,$ we have
\begin{align}
\|wB^{\ast(r)}_{n,r-1}(f)\| \leqslant Cn^{r}\|wf\|.\label{s2}
\end{align}
\end{thrm}
\begin{thrm}\label{t2} For any $\alpha, \ \beta >0,\ 0
\leqslant \lambda \leqslant 1,$ we have
\begin{align}
|w(x)\varphi^{r\lambda}(x)B^{\ast(r)}_{n,r-1}(f,x)|\leqslant \left\{
\begin{array}{lrr}
Cn^{r/2}\max\{n^{r(1-\lambda)/2},\varphi^{r(\lambda-1)}(x)\}\|wf\|, && f\in C_w, \\
C\|w\varphi^{r\lambda}f^{(r)}\|,&& f\in W_{w,\lambda}^{r}.\label{s3}
\end{array}
\right.
\end{align}
\end{thrm}
\begin{thrm}\label{t3} For $f\in C_w,\ \alpha, \ \beta>0,\ \alpha_0 \in(0,r), \ 0
\leqslant \lambda \leqslant 1,$ we have
\begin{align}
w(x)|f(x)-B^\ast_{n,r-1}(f,x)|=O((n^{-{\frac
12}}\varphi^{-\lambda}(x)\delta_n(x))^{\alpha_0})
\Longleftrightarrow
\omega_{\varphi^\lambda}^r(f,t)_w=O(t^{\alpha_0}).\label{s4}
\end{align}
\end{thrm}
\section{Lemmas}
\begin{lem}(\cite{Zhou}) For any non-negative real $u$ and $v$, we
have
\begin{align}
\sum_{k=1}^{n-1}({\frac kn})^{-u}(1-{\frac
kn})^{-v}p_{n,k}(x)\leqslant Cx^{-u}(1-x)^{-v}.\label{s5}
\end{align}
\end{lem}
\begin{lem}(\cite{Della}) If $\gamma \in \mathbb R,$ then
\begin{align}
\sum_{k=0}^n|k-nx|^\gamma p_{n,k}(x) \leqslant Cn^{\frac
\gamma2}\varphi^\gamma(x).\label{s6}
\end{align}
\end{lem}
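For $\gamma=2$, (\ref{s6}) reduces to the classical variance identity
for the binomial distribution,
$$\sum_{k=0}^n(k-nx)^2 p_{n,k}(x)=nx(1-x)=n\varphi^2(x),$$
so that (\ref{s6}) holds with $C=1$ in this case.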
\begin{lem} For any $f\in W_{w,\lambda}^{r},$ $0 \leqslant \lambda
\leqslant 1$ and $\alpha,\ \beta
>0$, we have
\begin{align}
\|w\varphi^{r\lambda}F_{n}^{(r)}\| \leqslant
C\|w\varphi^{r\lambda}f^{(r)}\|.\label{s7}
\end{align}
\end{lem}
\begin{proof} By symmetry, we only prove the result for $x\in
(0,1/2]$; the other cases can be handled similarly. When $x \in
(0,1/n],$ by \cite{Totik}, we have
\begin{align*}
|L_r^{(r)}(f,x)| \leqslant C|\overrightarrow{\Delta}_{\frac
1r}^{r}f(0)| \ \leqslant \ Cn^{-{\frac r2}+1}\int_0^{\frac rn}u^{\frac r2}|f^{(r)}(u)|du\\
\leqslant Cn^{-{\frac
r2}+1}\|w\varphi^{r\lambda}f^{(r)}\|\int_0^{\frac rn}u^{\frac
r2}w^{-1}(u)\varphi^{-r\lambda}(u)du.
\end{align*}
So
\begin{align*}
|w(x)\varphi^{r\lambda}(x)F_{n}^{(r)}(x)| \leqslant
C\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
If $x\in [{\frac 1n},{\frac 2n}],$ we have
\begin{align*}
|w(x)\varphi^{r\lambda}(x)F_{n}^{(r)}(x)| \leqslant |w(x)\varphi^{r\lambda}(x)f^{(r)}(x)| + |w(x)\varphi^{r\lambda}(x)(f(x)-F_{n}(x))^{(r)}|\\
:= I_1 + I_2.
\end{align*}
For $I_2,$ we have
\begin{align*}
f(x)-F_{n}(x) = (\psi(nx-1)+1)(f(x)-L_r(f,x)).\\
w(x)\varphi^{r\lambda}(x)|(f(x)-F_{n}(x))^{(r)}|\leqslant Cw(x)\varphi^{r\lambda}(x)\sum_{i=0}^rn^i|(f(x)-L_r(f,x))^{(r-i)}|.
\end{align*}
By \cite{Totik}, then
\begin{align*}
|(f(x)-L_r(f,x))^{(r-i)}|_{[{\frac 1n},{\frac 2n}]} \leqslant
C(n^{r-i}\|f-L_r\|_{[{\frac 1n},{\frac 2n}]} +
n^{-i}\|f^{(r)}\|_{[{\frac 1n},{\frac 2n}]}),\ 0\leqslant i\leqslant r.
\end{align*}
Now, we estimate
\begin{align}
I:=w(x)\varphi^{r\lambda}(x)|f(x)-L_r(f,x)|.\label{s8}
\end{align}
By Taylor expansion, we have
\begin{align}
f({\frac in})=\sum_{u=0}^{r-1}\frac{({\frac
in}-x)^u}{u!}f^{(u)}(x)+{\frac 1{(r-1)!}}\int_{x}^{{\frac
in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds,\label{s9}
\end{align}
It follows from (\ref{s9}) and the identity
\begin{align*}
\sum\limits_{i=1}^{r}({\frac in})^{v}l_{i}(x)=x^v,\ v=0,1,\cdots,r-1.
\end{align*}
that
\begin{align*}
L_r(f,x)=\sum_{i=1}^{r}\sum_{u=0}^{r-1}\frac{({\frac
in}-x)^u}{u!}f^{(u)}(x)l_{i}(x)+{\frac
1{(r-1)!}}\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds\nonumber\\
=f(x)+C\sum_{u=1}^{r-1}f^{(u)}(x)(\sum_{v=0}^{u}C_{u}^{v}(-x)^{u-v}\sum_{i=1}^{r}({\frac in})^{v}l_{i}(x))\nonumber\\
+\ \ {\frac 1{(r-1)!}}\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac
in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds,
\end{align*}
which implies that
$${w(x)}\varphi^{r\lambda}(x)|f(x)-L_r(f,x)|={\frac 1{(r-1)!}}{w(x)}\varphi^{r\lambda}(x)\left|\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds\right|.$$
Here $|l_{i}(x)|\leqslant C$ for $x\in [0,{\frac 2n}],\
i=1,2,\cdots,r$. Since $\frac{|{\frac in}-s|^{r-1}}{w(s)}\leqslant
\frac{|{\frac in}-x|^{r-1}}{w(x)}$ for $s$ between ${\frac in}$ and
$x$, we obtain
\begin{align*}
w(x)\varphi^{r\lambda}(x)|f(x)-L_r(f,x)|\leqslant Cw(x)\varphi^{r\lambda}(x)\sum_{i=1}^{r}\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}|f^{(r)}(s)|ds\nonumber\\
\leqslant C\varphi^{r\lambda}(x)\|w\varphi^{r\lambda}f^{(r)}\|\sum_{i=1}^{r}\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}\varphi^{-r\lambda}(s)ds\nonumber\\
\leqslant {\frac {C}{n^r}}\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
Thus
\begin{align*}
I \leqslant C\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
So, we get
\begin{align*}
I_2 \leqslant C\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
Above all, we have
\begin{align*}
|w(x)\varphi^{r\lambda}(x)F_{n}^{(r)}(x)| \leqslant
C\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
\end{proof}
\begin{lem} If $f\in W_{w,\lambda}^{r},$ $0 \leqslant \lambda \leqslant 1$
and $\alpha,\ \beta
>0$, then
\begin{align}
|w(x)(f(x)-L_r(f,x))|_{[0,{\frac 2n}]} \leqslant C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}f^{(r)}\|.\label{s10}\\
|w(x)(f(x)-R_r(f,x))|_{[1-{\frac 2n},1]} \leqslant C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}f^{(r)}\|.\label{s11}
\end{align}
\end{lem}
\begin{proof} By Taylor expansion, we have
\begin{align}
f({\frac in})=\sum_{u=0}^{r-1}\frac{({\frac
in}-x)^u}{u!}f^{(u)}(x)+{\frac 1{(r-1)!}}\int_{x}^{{\frac in}}({\frac
in}-s)^{r-1}f^{(r)}(s)ds,\label{s12}
\end{align}
It follows from (\ref{s12}) and the identity
\begin{align*}
\sum\limits_{i=1}^{r}({\frac in})^{v}l_{i}(x)=x^v,\
v=0,1,\ldots,r-1,
\end{align*}
that
\begin{align*}
L_r(f,x)=\sum_{i=1}^{r}\sum_{u=0}^{r-1}\frac{({\frac
in}-x)^u}{u!}f^{(u)}(x)l_{i}(x)+{\frac
1{(r-1)!}}\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds\nonumber\\
=f(x)+C\sum_{u=1}^{r-1}f^{(u)}(x)(\sum_{v=0}^{u}C_{u}^{v}(-x)^{u-v}\sum_{i=1}^{r}({\frac in})^{v}l_{i}(x))\nonumber\\
+\ \ {\frac 1{(r-1)!}}\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac
in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds,
\end{align*}
which implies that
$$w(x)|f(x)-L_r(f,x)|={\frac 1{(r-1)!}}w(x)\left|\sum_{i=1}^{r}l_{i}(x)\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}f^{(r)}(s)ds\right|.$$
Here $|l_{i}(x)|\leqslant C$ for $x\in [0,{\frac
2n}],\ i=1,2,\cdots,r$. Since $\frac{|{\frac in}-s|^{r-1}}{w(s)}\leqslant
\frac{|{\frac in}-x|^{r-1}}{w(x)}$ for $s$ between ${\frac in}$ and
$x$, we obtain
\begin{align*}
w(x)|f(x)-L_r(f,x)|\leqslant Cw(x)\sum_{i=1}^{r}\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}|f^{(r)}(s)|ds\nonumber\\
\leqslant C{\frac
{\varphi^r(x)}{\varphi^{r\lambda}(x)}}\|w\varphi^{r\lambda}f^{(r)}\|\sum_{i=1}^{r}\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}\varphi^{-r}(s)ds\nonumber\\
\leqslant C{\frac
{\delta_n^r(x)}{\varphi^{r\lambda}(x)}}\|w\varphi^{r\lambda}f^{(r)}\|\sum_{i=1}^{r}\int_{x}^{{\frac in}}({\frac in}-s)^{r-1}\varphi^{-r}(s)ds\nonumber\\
\leqslant C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
The proof of (\ref{s11}) can be done similarly.
\end{proof}
\begin{lem}(\cite{S. Yu}) For every $\alpha,\ \beta>0,$ we have
\begin{align}
\|wB^\ast_{n,r-1}(f)\| \leqslant C\|wf\|.\label{s13}
\end{align}
\end{lem}
\begin{lem}(\cite{Wang}) If \
$\varphi(x)=\sqrt{x(1-x)},\ 0 \leqslant \lambda \leqslant 1,\ 0
\leqslant \beta \leqslant 1,$ then
\begin{align}
\int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}} \cdots \int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}\varphi^{-r\beta}(x+\sum_{k=1}^ru_k)du_1
\cdots du_r \leqslant Ch^r\varphi^{r(\lambda-\beta)}(x).\label{s14}
\end{align}
\end{lem}
\section{Proof of Theorems}
\subsection{Proof of Theorem \ref{t1}}
By symmetry, in what follows we will always assume that $x \in
(0,{\frac 12}].$ It is sufficient to prove the validity for
$B_{n,r-1}(F_n,x)$ instead of $B^\ast_{n,r-1}(f,x).$ When $x\in
(0,{\frac {1}{n}}),$ we have
\begin{align*}
|w(x)B_{n,r-1}^{\ast(r)}(f,x)|\leqslant w(x)\sum_{i=0}^{r-2}{\frac
{n_{i}!}{({n_{i}-r})!}}\sum_{k=0}^{n_i-r}|\overrightarrow{\Delta}_{\frac
1{n_i}}^{r}F_{n}{(\frac k{n_i})}|p_{n_i-r,k}(x)\nonumber\\
\leqslant
Cw(x)\sum_{i=0}^{r-2}n_{i}^{r}\sum_{k=0}^{n_i-r}|\overrightarrow{\Delta}_{\frac
1{n_i}}^{r}F_{n}{(\frac k{n_i})}|p_{n_i-r,k}(x)\nonumber\\
\leqslant
Cw(x)\sum_{i=0}^{r-2}n_{i}^{r}\sum_{k=0}^{n_i-r}\sum_{j=0}^{r}C_{r}^{j}|F_{n}({\frac
{k+r-j}{n_i}})|p_{n_i-r,k}(x)\nonumber\\
\leqslant
Cw(x)\sum_{i=0}^{r-2}n_{i}^{r}\sum_{j=0}^{r}C_{r}^{j}|F_{n}({\frac
{r-j}{n_i}})|p_{n_i-r,0}(x)\nonumber\\
+ \
Cw(x)\sum_{i=0}^{r-2}n_{i}^{r}\sum_{j=0}^{r}C_{r}^{j}|F_{n}({\frac
{n_{i}-j}{n_i}})|p_{n_i-r,n_i-r}(x)\nonumber\\
+ \
Cw(x)\sum_{i=0}^{r-2}n_{i}^{r}\sum_{k=1}^{n_i-r-1}\sum_{j=0}^{r}C_{r}^{j}|F_{n}({\frac
{k+r-j}{n_i}})|p_{n_i-r,k}(x)\nonumber\\
:=H_1 +H_2 + H_3.
\end{align*}
We have
\begin{align*}
H_1\leqslant
Cw(x)\|wf\|\sum_{i=0}^{r-2}n_{i}^{r}w^{-1}(\frac {1}{n_i})p_{n_i-r,0}(x)\\
\leqslant C\|wf\|\sum_{i=0}^{r-2}n_{i}^{r}(n_ix)^{\alpha}(1-x)^{n_i-r}\\
\leqslant Cn^{r}\|wf\|.
\end{align*}
When $1 \leqslant k \leqslant n_i-r-1,$ we have $1 \leqslant k+r-j
\leqslant n_i-1,$ and thus
\begin{align*}
{\frac {w({\frac {k}{n_i-r}})}{w({\frac {k+r-j}{n_i}})}}=({\frac
{n_i}{n_i-r}})^{\alpha+\beta}({\frac {k}{k+r-j}})^\alpha({\frac
{n_i-r-k}{n_i-r-k+j}})^\beta \leqslant C.
\end{align*}
Therefore, by (\ref{s5}), we have
\begin{align*}
H_3 \leqslant
Cw(x)\|wF_n\|\sum_{i=0}^{r-2}n_{i}^{r}\sum_{k=1}^{n_i-r-1}\sum_{j=0}^{r}{\frac
{1}{w({\frac {k+r-j}{n_i}})}}p_{n_i-r,k}(x)\\
\leqslant
Cw(x)\|wF_n\|\sum_{i=0}^{r-2}n_{i}^{r}\sum_{k=1}^{n_i-r-1}{\frac
{1}{w({\frac {k}{n_i-r}})}}p_{n_i-r,k}(x)\\
\leqslant Cn^r\|wF_n\| \ \leqslant \ Cn^r\|wf\|.
\end{align*}
Similarly, we can get $H_2\leqslant Cn^{r}\|wf\|$. So
\begin{align}
|w(x)B^{\ast(r)}_{n,r-1}(f,x)| \leqslant Cn^{r}\|wf\|,\ x\in
(0,{\frac {1}{n}}).\label{s15}
\end{align}
When $x\in [{\frac {1}{n}},{\frac 12}],$ according to \cite{Totik},
we have
\begin{align*}
|w(x)B_{n,r-1}^{\ast(r)}(f,x)|\\
=|w(x)B_{n,r-1}^{(r)}({F_{n}},x)|\\
\leqslant w(x)(\varphi^{2}(x))^{-r}\sum_{i=0}^{r-2}\sum_{j=0}^{r}|Q_{j}(x,n_i)|n_{i}^{j}\sum_{k=0}^{n_i}|(x-{\frac kn_{i}})^{j}F_{n}({\frac kn_{i}})|p_{n_i,k}(x),
\end{align*}
where $Q_{j}(x,n_i)=(n_ix(1-x))^{[{\frac {r-j}{2}}]}$. Since
$(\varphi^{2}(x))^{-r}Q_{j}(x,n_i)n_{i}^{j}\leqslant
C(n_{i}/\varphi^{2}(x))^{\frac {r+j}{2}},$ we have
\begin{align}
|w(x)B_{n,r-1}^{\ast(r)}(f,x)|\leqslant
Cw(x)\sum_{i=0}^{r-2}\sum_{j=0}^{r}(\frac
{n_{i}}{\varphi^{2}(x)})^{\frac
{r+j}{2}}\sum_{k=0}^{n_{i}}|(x-{\frac
kn_{i}})^{j}F_{n}({\frac kn_{i}})|p_{n_i,k}(x)\nonumber\\
\leqslant C\|wF_n\|w(x)\sum_{i=0}^{r-2}\sum_{j=0}^{r}(\frac
{n_{i}}{\varphi^{2}(x)})^{\frac {r+j}{2}}\sum_{k=0}^{n_i}{\frac
{|x-{\frac {k}{n_{i}}|^{j}}}{w({\frac
{k^\ast}{n_i})}}}p_{n_i,k}(x),\label{s16}
\end{align}
where $k^\ast =1$ for $k=0,$ $k^\ast =n_i-1$ for $k=n_i$ and $k^\ast
=k$ for $0<k<n_i.$ Note that
\begin{align*}
w^2(x){\frac {p_{n_i,0}(x)}{w^2({\frac {1}{n_{i}}})}} \leqslant
C(n_ix)^{2\alpha}(1-x)^{n_i} \leqslant C,
\end{align*}
and
\begin{align*}
w^2(x){\frac {p_{n_i,n_i}(x)}{w^2(1-{\frac {1}{n_{i}}})}} \leqslant
Cn_i^\beta x^{n_i} \leqslant C{\frac {n_i^\beta}{2^{n_i}}} \leqslant
C.
\end{align*}
By (\ref{s5}), we have
\begin{align}
\sum_{k=0}^{n_i}{\frac {1}{w^2({\frac
{k^\ast}{n_{i}}})}}p_{n_i,k}(x) \leqslant Cw^{-2}(x).\label{s17}
\end{align}
Now, applying Cauchy's inequality, by (\ref{s6}) and (\ref{s17}), we
have
\begin{align*}
\sum_{k=0}^{n_i}{\frac {{|x-{\frac {k}{n_{i}}|^{j}}}}{{w({\frac
{k^\ast}{n_i}})}}}p_{n_i,k}(x) \leqslant
({\sum_{k=0}^{n_i}{|x-{\frac
{k}{n_{i}}|^{2j}}}p_{n_i,k}(x)})^{1/2}({\sum_{k=0}^{n_i}{\frac
{1}{{w^2({\frac {k^\ast}{n_i}})}}}p_{n_i,k}(x)})^{1/2}\\
\leqslant Cn_i^{-j/2}\varphi^j(x)w^{-1}(x).
\end{align*}
Substituting this to (\ref{s16}), we have
\begin{align}
|w(x)B^{\ast(r)}_{n,r-1}(f,x)| \leqslant Cn^{r}\|wf\|,\ x\in [{\frac
{1}{n}},{\frac 12}].\label{s18}
\end{align}
We get Theorem \ref{t1} by (\ref{s15}) and (\ref{s18}). $\Box$
\subsection{Proof of Theorem \ref{t2}}
(1) When $f\in C_w$, we proceed as follows:\\~\\
\textit{Case 1.} If $0\leqslant \varphi(x)\leqslant {\frac
{1}{\sqrt{n}}}$, by $(\ref{s2})$, we have
\begin{align}
|w(x)\varphi^{r\lambda}(x)B_{n,r-1}^{\ast(r)}(f,x)|\leqslant
Cn^{-r\lambda/2}|w(x)B_{n,r-1}^{\ast(r)}(f,x)|\leqslant
Cn^{r(1-\lambda/2)}\|wf\|.\label{s19}
\end{align}
\textit{Case 2.} If $\varphi(x)> {\frac {1}{\sqrt{n}}}$, we have
\begin{align*}
|B_{n,r-1}^{\ast(r)}(f,x)|=|B_{n,r-1}^{(r)}(F_{n},x)|\nonumber\\
\leqslant(\varphi^{2}(x))^{-r}\sum_{i=0}^{r-2}\sum_{j=0}^{r}|Q_{j}(x,n_i)C_{i}(n)|n_{i}^{j}\sum_{k=0}^{n_i}|(x-{\frac
kn_{i}})^{j}F_{n}({\frac kn_{i}})|p_{n_i,k}(x),
\end{align*}
$Q_{j}(x,n_i)=(n_ix(1-x))^{[{\frac {r-j}{2}}]},$ and
$(\varphi^{2}(x))^{-r}Q_{j}(x,n_i)n_{i}^{j}\leqslant
C(n_i/\varphi^{2}(x))^{\frac {r+j}{2}}$.\\ So
\begin{align}
|w(x)\varphi^{r\lambda}(x)B_{n,r-1}^{\ast(r)}(f,x)|\nonumber\\
\leqslant
Cw(x)\varphi^{r\lambda}(x)\sum_{i=0}^{r-2}\sum_{j=0}^{r}({\frac
{n_{i}}{\varphi^2(x)}})^{\frac {r+j}{2}}\sum_{k=0}^{n_i}|(x-{\frac
kn_{i}})^{j}F_{n}({\frac kn_{i}})|p_{n_i,k}(x)\nonumber\\
\leqslant Cn^{\frac r2}\varphi^{r(\lambda-1)}(x)\|wf\|.\label{s20}
\end{align}
Combining (\ref{s19}) and (\ref{s20}) proves the first inequality.\\
(2) When $f\in W_{w,\lambda}^{r},$ we have
\begin{align}
B_{n,r-1}^{(r)}(F_{n},x)=\sum_{i=0}^{r-2}C_{i}(n)n_{i}^{r}\sum_{k=0}^{n_{i}-r}\overrightarrow{\Delta}_{\frac
1{n_{i}}}^{r}F_{n}({\frac kn_{i}})p_{n_i-r,k}(x).\label{s21}
\end{align}
If $0<k<n_{i}-r,$ we have
\begin{align}
|\overrightarrow{\Delta}_{\frac 1{n_{i}}}^{r}F_{n}({\frac kn_{i}})
|\leqslant Cn_{i}^{-r+1}\int_{0}^{\frac
{r}{n_{i}}}|F_{n}^{(r)}({\frac kn_{i}}+u)|du,\label{s22}
\end{align}
If $k=0,$ we have
\begin{align}
|\overrightarrow{\Delta}_{\frac 1{n_{i}}}^{r}F_{n}(0)|\leqslant
C\int_{0}^{\frac {r}{n_{i}}}u^{r-1}|F_{n}^{(r)}(u)|du,\label{s23}
\end{align}
Similarly
\begin{align}
|\overrightarrow{\Delta}_{\frac 1{n_{i}}}^{r}F_{n}({\frac
{n_{i}-r}{n_{i}}})|\leqslant Cn_i^{-r+1}\int_{1-{\frac
{r}{n_{i}}}}^{1}(1-u)^{\frac r2}|F_{n}^{(r)}(u)|du.\label{s24}
\end{align}
By (\ref{s21})-(\ref{s24}), we have
\begin{align}
|w(x)\varphi^{r\lambda}(x)B_{n,r-1}^{\ast(r)}(f,x)|\nonumber\\\leqslant
C{w(x)}\varphi^{r\lambda}(x)\|w\varphi^{r\lambda}F_n^{(r)}\|\sum_{i=0}^{r-2}\sum_{k=0}^{n_{i}-r}(w\varphi^{r\lambda})^{-1}(\frac
{k^\ast}{n_i})p_{n_i-r,k}(x),\label{s25}
\end{align}
where $k^\ast =1$ for $k=0,$ $k^\ast =n_i-r-1$ for $k=n_i-r$ and
$k^\ast =k$ for $0<k<n_i-r.$ By (\ref{s5}), we have
\begin{align}
\sum_{k=0}^{n_i-r}(w\varphi^{r\lambda})^{-1}(\frac
{k^\ast}{n_i})p_{n_i-r,k}(x) \leqslant
C(w\varphi^{r\lambda})^{-1}(x).\label{s26}
\end{align}
Combining (\ref{s25}), (\ref{s26}) and (\ref{s7}) gives
\begin{align*}
|w(x)\varphi^{r\lambda}(x)B_{n,r-1}^{\ast(r)}(f,x)|\leqslant
C\|w\varphi^{r\lambda}f^{(r)}\|.
\end{align*}
This proves the second inequality of Theorem \ref{t2}. $\Box$
\subsection{Proof of Theorem \ref{t3}}
\subsubsection{The direct theorem} We know
\begin{align}
F_n(t)=F_n(x)+F'_n(x)(t-x) + \cdots + {\frac{1}{(r-1)!}}\int_x^t
(t-u)^{r-1}F_n^{(r)}(u)du,\label{s27}\\
B_{n,r-1}((\cdot-x)^k,x)=0, \ k=1,2,\cdots,r-1.\label{s28}
\end{align}
According to the definition of $W_{w,\lambda}^{r},$ \ for any $g \in
W_{w,\lambda}^{r},$ we have
$B^\ast_{n,r-1}(g,x)=B_{n,r-1}(G_{n}(g),x),$ and
$w(x)|G_{n}(x)-B_{n,r-1}(G_{n},x)|=w(x)|B_{n,r-1}(R_r(G_n,t,x),x)|,$
where $R_r(G_n,t,x)=\int_x^t (t-u)^{r-1}G^{(r)}_n(u)du.$ Since
${\frac {|t-u|^{r-1}}{w(u)}} \leqslant {\frac
{|t-x|^{r-1}}{w(x)}}$ for $u$ between $t$ and $x$, we have
\begin{align}
w(x)|G_{n}(x)-B_{n,r-1}(G_{n},x)| \leqslant
C\|w\varphi^{r\lambda}G^{(r)}_n\|w(x)B_{n,r-1}({\int_x^t{\frac
{|t-u|^{r-1}}{w(u)\varphi^{r\lambda}(u)}du,x}})\nonumber\\
\leqslant
C\|w\varphi^{r\lambda}G^{(r)}_n\|w(x)(B_{n,r-1}(\int_x^t{\frac
{|t-u|^{r-1}}{\varphi^{2r\lambda}(u)}}du,x))^{\frac 12}\cdot
\nonumber\\
(B_{n,r-1}(\int_x^t{\frac {|t-u|^{r-1}}{w^2(u)}}du,x))^{\frac
12}.\label{s29}
\end{align}
Moreover,
\begin{align}
\int_x^t{\frac {|t-u|^{r-1}}{\varphi^{2r\lambda}(u)}}du \leqslant
C{\frac {|t-x|^r}{\varphi^{2r\lambda}(x)}},\ \int_x^t{\frac
{|t-u|^{r-1}}{w^2(u)}}du \leqslant {\frac
{|t-x|^r}{w^2(x)}}.\label{s30}
\end{align}
By (\ref{s6}), (\ref{s29}) and (\ref{s30}), we have
\begin{align}
w(x)|G_{n}(x)-B_{n,r-1}(G_{n},x)| \leqslant
C\|w\varphi^{r\lambda}G^{(r)}_n\|\varphi^{-r\lambda}(x)B_{n,r-1}
(|t-x|^r,x)\nonumber\\
\leqslant Cn^{-\frac r2}{\frac
{\varphi^{r}(x)}{\varphi^{r\lambda}(x)}}\|w\varphi^{r\lambda}G^{(r)}_n\|\nonumber\\
\leqslant Cn^{-\frac r2}{\frac
{\delta_n^r(x)}{\varphi^{r\lambda}(x)}}\|w\varphi^{r\lambda}G^{(r)}_n\|\nonumber\\
= C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}G^{(r)}_n\|.\label{s31}
\end{align}
By (\ref{s7}), (\ref{s10}), (\ref{s11}) and (\ref{s31}), for $g \in
W_{w,\lambda}^{r}$ we obtain
\begin{align}
w(x)|g(x)-B^\ast_{n,r-1}(g,x)| \leqslant w(x)|g(x)-G_{n}(g,x)| +
w(x)|G_{n}(g,x)-B^\ast_{n,r-1}(g,x)|\nonumber\\
\leqslant |w(x)(g(x)-L_r(g,x))|_{[0,{\frac 2n}]} +
|w(x)(g(x)-R_r(g,x))|_{[1-{\frac 2n},1]}\nonumber\\ +\ \ C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}G^{(r)}_n\|\nonumber\\
\leqslant C(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}g^{(r)}\|.\label{s32}
\end{align}
For $f \in C_w,$ choosing a suitable $g \in W_{ w,\lambda}^{r}$ and
using (\ref{s13}) and (\ref{s32}), we get
\begin{align*}
w(x)|f(x)-{B^\ast_{n,r-1}(f,x)}| \leqslant w(x)|f(x)-g(x)| +
w(x)|{B^\ast_{n,r-1}(f-g,x)}| \\+
w(x)|g(x)-{B^\ast_{n,r-1}(g,x)}|\\
\leqslant C(\|w(f-g)\|+(\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})^r\|w\varphi^{r\lambda}g^{(r)}\|)\\
\leqslant C\omega_{\varphi^\lambda}^r(f,\frac
{\delta_n(x)}{\sqrt{n}\varphi^\lambda(x)})_w. \Box
\end{align*}
\subsubsection{The inverse theorem} We define the weighted main-part
modulus for $D=\mathbb R_+$ by
\begin{align*}
\Omega_{\varphi^\lambda}^r(C,f,t)_w = \sup_{0<h \leqslant
t}\|w\Delta_{{h\varphi}^\lambda}^rf\|_{[Ch^\ast,\infty]},\\
\Omega_{\varphi^\lambda}^r(1,f,t)_w =
\Omega_{\varphi^\lambda}^r(f,t)_w.
\end{align*}
where $C>2^{1/\beta(0)-1},\ \beta(0)>0$ and $h^\ast$ is given by
\begin{align*}
h^\ast= \left\{
\begin{array}{lrr}
(Ar)^{1/(1-\beta(0))}h^{1/(1-\beta(0))}, && 0 \leqslant \beta(0) <1,
\\
0, && \beta(0) \geqslant 1.
\end{array}
\right.
\end{align*}
The main-part $K$-functional is given by
\begin{align*}
K_{r,\varphi^\lambda}(f,t^r)_w=\sup_{0<h \leqslant
t}\inf_g\{\|w(f-g)\|_{[Ch^\ast,\infty]}+t^r\|w\varphi^{r\lambda}g^{(r)}\|_{[Ch^\ast,\infty]}\},
\end{align*}
where the infimum is taken over all $g$ with $g^{(r-1)} \in
A.C.((Ch^\ast,\infty))$. By \cite{Totik}, we have
\begin{align}
C^{-1}\Omega_{\varphi^\lambda}^r(f,t)_w \leqslant
\omega_{\varphi^\lambda}^{r}(f,t)_w \leqslant
C\int_0^t{\frac {\Omega_{\varphi^\lambda}^r(f,\tau)_w}{\tau}}d\tau,\label{s33} \\
C^{-1}K_{r,\varphi^\lambda}(f,t^r)_w \leqslant
\Omega_{\varphi^\lambda}^r(f,t)_w \leqslant
CK_{r,\varphi^\lambda}(f,t^r)_w.\label{s34}
\end{align}
\begin{proof} Let $\delta>0.$ By (\ref{s34}), we can choose $g$ such
that
\begin{align}
\|w(f-g)\| \leqslant C\Omega_{\varphi^\lambda}^r(f,\delta)_w,\
\|w\varphi^{r\lambda}g^{(r)}\| \leqslant
C\delta^{-r}\Omega_{\varphi^\lambda}^r(f,\delta)_w.\label{s35}
\end{align}
Then
\begin{align}
|w(x)\Delta_{h\varphi^\lambda}^rf(x)| \leqslant
|w(x)\Delta_{h\varphi^\lambda}^r(f(x)-{B^\ast_{n,r-1}(f,x)})|+|w(x)\Delta_{h\varphi^\lambda}^rB^\ast_{n,r-1}(f-g,x)|\nonumber\\
+\ |w(x)\Delta_{h\varphi^\lambda}^r{B^\ast_{n,r-1}(g,x)}|\nonumber\\
\leqslant \sum_{j=0}^rC_r^j(n^{-\frac
12}{\frac {\delta_n(x+({\frac r2}-j)h\varphi^\lambda(x))}{\varphi^\lambda(x+({\frac r2}-j)h\varphi^\lambda(x))}})^{\alpha_0}\nonumber\\
+\ \ \int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}\cdots \int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}w(x){B^{\ast(r)}_{n,r-1}(f-g,x+\sum_{k=1}^ru_k)}du_1\cdots
du_r\nonumber\\
+\ \ \int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}\cdots \int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}w(x){B^{\ast(r)}_{n,r-1}(g,x+\sum_{k=1}^ru_k)}du_1\cdots
du_r\nonumber\\
:= J_1+J_2+J_3.\label{s36}
\end{align}
Obviously
\begin{align}
J_1 \leqslant C(n^{-\frac 12}\varphi^{-\lambda}(x)\delta_n(x))^{\alpha_0}.\label{s37}
\end{align}
By (\ref{s2}) and (\ref{s35}), we have
\begin{align}
J_2 \leqslant Cn^r\|w(f-g)\|\int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac {h\varphi^\lambda(x)}{2}}\cdots
\int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}du_1 \cdots du_r\nonumber\\
\leqslant Cn^rh^r\varphi^{r\lambda}(x)\|w(f-g)\|\nonumber\\
\leqslant
Cn^rh^r\varphi^{r\lambda}(x)\Omega_{\varphi^\lambda}^r(f,\delta)_w.\label{s38}
\end{align}
By the first inequality of (\ref{s3}) with $\lambda=1$, together with
(\ref{s14}) and (\ref{s35}), we also have
\begin{align}
J_2 \leqslant Cn^{\frac r2}\|w(f-g)\|\int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac {h\varphi^\lambda(x)}{2}} \cdots
\int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}\varphi^{-r}(x+\sum_{k=1}^ru_k)du_1 \cdots du_r\nonumber\\
\leqslant Cn^{\frac r2}h^r\varphi^{r(\lambda-1)}(x)\|w(f-g)\|\nonumber\\
\leqslant Cn^{\frac
r2}h^r\varphi^{r(\lambda-1)}(x)\Omega_{\varphi^\lambda}^r(f,\delta)_w.\label{s39}
\end{align}
By the second inequality of (\ref{s3}) and (\ref{s35}), we have
\begin{align}
J_3 \leqslant C\|w\varphi^{r\lambda}g^{(r)}\|w(x)\int_{-{\frac
{h\varphi^\lambda(x)}{2}}}^{\frac {h\varphi^\lambda(x)}{2}} \cdots
\int_{-{\frac {h\varphi^\lambda(x)}{2}}}^{\frac
{h\varphi^\lambda(x)}{2}}{w^{-1}(x+\sum_{k=1}^ru_k)}\varphi^{-r\lambda}(x+\sum_{k=1}^ru_k)du_1 \cdots du_r\nonumber\\
\leqslant Ch^r\|w\varphi^{r\lambda}g^{(r)}\|\nonumber\\
\leqslant
Ch^r\delta^{-r}\Omega_{\varphi^\lambda}^r(f,\delta)_w.\label{s40}
\end{align}
Now, by (\ref{s36})-(\ref{s40}), we get
\begin{align*}
|w(x)\Delta_{h\varphi^\lambda}^rf(x)| \leqslant C\{(n^{-\frac
12}\delta_n(x))^{\alpha_0} + h^r(n^{-\frac
12}\delta_n(x))^{-r}\Omega_{\varphi^\lambda}^r(f,\delta)_w +
h^r\delta^{-r}\Omega_{\varphi^\lambda}^r(f,\delta)_w\}.
\end{align*}
When $n \geqslant 2,$ we have
\begin{align*}
n^{-\frac 12}\delta_n(x) < (n-1)^{-\frac 12}\delta_{n-1}(x)
\leqslant \sqrt{2}n^{-\frac 12}\delta_n(x),
\end{align*}
Given $\delta$, we choose $x$ and $n \in \mathbb N$ so that
\begin{align*}
n^{-\frac 12}\delta_n(x) \leqslant \delta < (n-1)^{-\frac
12}\delta_{n-1}(x),
\end{align*}
Therefore
\begin{align*}
|w(x)\Delta_{h\varphi^\lambda}^rf(x)| \leqslant C\{\delta^{\alpha_0}
+ h^r\delta^{-r}\Omega_{\varphi^\lambda}^r(f,\delta)_w\}.
\end{align*}
By the Berens--Lorentz lemma in \cite{Totik}, we get
\begin{align}
\Omega_{\varphi^\lambda}^r(f,t)_w \leqslant
Ct^{\alpha_0}.\label{s41}
\end{align}
So, by (\ref{s41}), we get
\begin{align*}
\omega_{\varphi^\lambda}^{r}(f,t)_w \leqslant C\int_0^t{\frac
{\Omega_{\varphi^\lambda}^r(f,\tau)_w}{\tau}}d\tau =
C\int_0^t\tau^{\alpha_0-1}d\tau = Ct^{\alpha_0}.
\end{align*}
\end{proof}
\section{Introduction}
I. K. Daugavet \cite{Daugavet} proved in 1963 that all compact operators $T$
on $C[0,1]$ fulfill the norm identity
\begin{equation*}
\norm{\mathrm{Id} + T} = 1 + \norm{T},
\end{equation*}
which has become known as the \emph{Daugavet equation}. C. Foia\c{s} and
I. Singer \cite{FoiasSingerPointsDiffusion} extended this result to all
weakly compact operators on $C(K)$ where $K$ is a compact space without
isolated points. Shortly afterwards G. Ya. Lozanovski{\u\i}
\cite{LozanovskiiAlmostIntegralOperators} showed that the Daugavet equation
holds for all compact operators on $L^1[0,1]$ and J. R. Holub
\cite{HolubDaugavetsEquationL1} extended this result to all weakly compact
operators on $L^1(\mu)$ where $\mu$ is a $\sigma$-finite non-atomic measure.
V. M. Kadets, R. V. Shvidkoy, G. G. Sirotkin, and D. Werner
\cite{KadetsShvidkoySirotkinWernerDaugavetProperty} proved that the validity
of the Daugavet equation for weakly compact operators already follows from the
corresponding statement for operators of rank 1. This result led to the
following definition: A Banach space $X$ is said to have the \emph{Daugavet
property}, if every operator $T: X \rightarrow X$ of rank 1 satisfies the
Daugavet equation. During the studies of ultraproducts
\cite{BilikKadetsShvidkoyWernerNarrowOperatorsDaugavetPrUltraproducts}
and quotients \cite{KadetsShepelskaWernerQuotientsDaugavetProperty} of Banach
spaces with the Daugavet property a weaker notion was introduced. Let $X$ be a
Banach space and let $Y$ be a subspace of $X^*$. We say that $X$ has the
\emph{Daugavet property with respect to $Y$}, if the Daugavet equation holds
true for every rank-one operator $T: X \rightarrow X$ of the form
$T=y^* \otimes x$ where $x \in X$ and $y^* \in Y$. A Banach space $X$ is
called an \emph{almost Daugavet space} or a space with the \emph{almost
Daugavet property}, if it has the Daugavet property with respect to some
norming subspace $Y \subset X^*$. Recall that a subspace $Y \subset X^*$ is
said to be norming, if for every $x \in X$
\begin{equation*}
\sup_{y^* \in S_Y} \left| y^*(x) \right| = \norm{x}.
\end{equation*}
The space $\ell^1$ is an example of an almost Daugavet space that does not
have the Daugavet property.
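As a simple illustration of the Daugavet equation (our example),
take $X=C[0,1]$ and the rank-one operator $Tf=f(0)\mathbf 1$, where
$\mathbf 1$ is the constant function one. Then $\norm{T}=1$, and
evaluating at $f=\mathbf 1$ gives
\begin{equation*}
\norm{\mathrm{Id} + T} \geq \norm{\mathbf 1 + \mathbf 1}_\infty = 2
= 1 + \norm{T},
\end{equation*}
while the triangle inequality yields the reverse estimate, so $T$
satisfies the Daugavet equation.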
Separable almost Daugavet spaces can be characterized using a kind of inner
measure of non-compactness of the unit sphere. We call a set $F$ an inner
$\varepsilon$-net for $S_X$, if $F \subset S_X$ and for every $x \in S_X$ there
is a $y \in F$ with $\norm{x - y} \leq \varepsilon$. Then the \emph{thickness}
$T(X)$ of a Banach space $X$ is defined by
\begin{equation*}
T(X) = \inf \left\{ \varepsilon>0 : \text{there exists a finite
inner $\varepsilon$-net for $S_X$} \right\}.
\end{equation*}
R. Whitley \cite{WhitleySizeUnitSphere} introduced this parameter and showed
that $1 \leq T(X) \leq 2$ if $X$ is infinite-dimensional, in particular that
$T(\ell^p) = 2^{1/p}$ for $1 \leq p < \infty$ and $T(C(K)) = 2$, if $K$ has no
isolated points. It was shown by V. Kadets, V. Shepelska, and D. Werner that
a separable Banach space $X$ is an almost Daugavet space if and only if
$T(X)=2$ \cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Theorem 1.1}.
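To sketch why $T(\ell^1)=2$ (the following elementary argument is
ours, in the spirit of Whitley's computations): given a finite set
$F=\{y^{(1)}, \dotsc, y^{(m)}\} \subset S_{\ell^1}$ and
$\varepsilon > 0$, choose $l$ so large that
$\sum_{j \geq l} |y^{(i)}_j| \leq \varepsilon$ for $i = 1, \dotsc, m$.
Then for the unit vector $e_l$
\begin{equation*}
\norm{y^{(i)} - e_l} = \sum_{j \neq l} |y^{(i)}_j| + |y^{(i)}_l - 1|
\geq (1 - \varepsilon) + (1 - \varepsilon) = 2 - 2\varepsilon,
\end{equation*}
so no finite subset of $S_{\ell^1}$ is an inner $\varepsilon'$-net for
$\varepsilon' < 2 - 2\varepsilon$.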
Almost Daugavet spaces contain $\ell^1$
\cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Corollary 3.3} and are
considered ``big''. It is therefore an interesting question which subspaces of
an almost Daugavet space inherit the almost Daugavet property. The most general
result in this direction is that a closed subspace $Z$ of a separable almost
Daugavet space $X$ has the almost Daugavet property as well, if the quotient
space $X/Z$ contains no copy of $\ell^1$
\cite{LueckingSubspacesAlmostDaugavet}*{Theorem 2.5}.
Let us consider an infinite, compact abelian group $G$ with its Haar measure
$m$. Since $G$ has no isolated points and since $m$ has no atoms, the spaces
$C(G)$ and $L^1(G)$ have the Daugavet property. Using the group structure of
$G$, we can translate functions that are defined on $G$ and look at closed,
translation-invariant subspaces of $C(G)$ or $L^1(G)$. These subspaces can be
described via subsets $\Lambda$ of the dual group $\widehat G$ and are of the
form
\begin{align*}
C_\Lambda(G) &= \left\{ f \in C(G): \text{spec} f \subset \Lambda
\right\}\\
L^1_\Lambda(G) &= \left\{ f \in L^1(G): \text{spec} f \subset \Lambda
\right\},
\shortintertext{where}
\text{spec}f &= \left\{ \gamma \in \widehat G: \hat f(\gamma) \neq
0 \right\}.
\end{align*}
We are going to characterize the sets $\Lambda \subset \widehat G$ such that
$C_\Lambda(G)$ and $L^1_\Lambda(G)$ are of thickness 2. If $G$ is metrizable,
this leads to a characterization of the translation-invariant subspaces of
$C(G)$ and $L^1(G)$ which have the almost Daugavet property.
\section{Translation-invariant subspaces of $C(G)$}
Let us start with translation-invariant subspaces of $C(G)$. We will show that
$T(C_\Lambda(G)) = 2$ if and only if $\Lambda$ is an infinite subset of
$\widehat G$. We will split the proof into various cases that depend on the
structure of $G$. For this reason we recall some definitions and results
concerning abelian groups and compact abelian groups.
Let $G$ be an abelian group with identity element $e_G$. A subset $E$ of $G$ is
said to be \emph{independent}, if $x_1^{k_1} \cdots x_n^{k_n} =e_G$ implies
$x_1^{k_1} = \cdots = x_n^{k_n} = e_G$ for every choice of distinct points
$x_1, \dotsc, x_n \in E$ and integers $k_1, \dotsc, k_n$. The \emph{order}
$o(x)$ of an element $x \in G$ is the smallest positive integer $m$ such that
$x^m = e_G$. If no such $m$ exists, $x$ is said to have infinite order.
Let $\mathbb T$ be the \emph{circle group}, i.e., the multiplicative group of
all complex numbers with absolute value one. If $G$ is a compact abelian group,
we denote the identity element of $\widehat G$, which coincides with the
function identically equal to one, by $\mathbf{1}_{G}$. Linear combinations of
elements of $\widehat G$ are called \emph{trigonometric polynomials} and for
every $\Lambda \subset \widehat G$ the space
$T_\Lambda(G) = \operatorname{lin} \Lambda$ of trigonometric polynomials with
spectrum contained in $\Lambda$ is dense in $C_\Lambda(G)$.
Let $H$ be a closed subgroup of $G$. The \emph{annihilator} of $H$ is defined
by
\begin{equation*}
H^\perp = \left\{ \gamma \in \widehat{G}: \gamma(x) = 1 \text{ for all $x
\in H$} \right\}
\end{equation*}
and is therefore a closed subgroup of $\widehat G$. We have that
$\widehat H = \widehat G/H^\perp$ and that $\widehat{G/H} = H^\perp$
\cite{RudinFourierAnalysis}*{Theorem 2.1.2}.
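For example, let $G=\mathbb T$, so that $\widehat G = \mathbb Z$ via
$\gamma_n(z) = z^n$, and let $H$ be the subgroup of $m$th roots of
unity. Then
\begin{equation*}
H^\perp = \left\{ n \in \mathbb Z: z^n = 1 \text{ for all $z \in H$}
\right\} = m\mathbb Z,
\end{equation*}
which is consistent with $\widehat{G/H} = H^\perp$, since
$G/H \cong \mathbb T$ via $z \mapsto z^m$ and
$\widehat{\mathbb T} \cong \mathbb Z \cong m\mathbb Z$.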
If $(G_j)_{j \in J}$ is a family of abelian groups, we define their
\emph{direct product} (or their \emph{complete direct sum}) by
\begin{align*}
\prod_{j \in J} G_j &= \left\{ f : J \rightarrow \bigcup_{j \in J} G_j:
f(j) \in G_j \right\}
\shortintertext{and define the group operation coordinatewise. Their
\emph{direct sum} is the subgroup}
\bigoplus_{j \in J} G_j &= \left\{ f \in \prod_{j \in J} G_j :
f(j) = e_{G_j} \text{ for all but finitely many $j \in J$} \right\}.
\end{align*}
If all $G_j$ coincide with $G$, we write $G^J$ or $G^{(J)}$ for the direct
product or the direct sum. We denote by $p_{G_j}$ the projection from
$\prod_{j \in J} G_j$ onto $G_j$. If we consider groups of the form $\mathbb
Z^{\mathbb N}$, $\mathbb Z^{(\mathbb N)}$ or $\mathbb Z^n$, we denote by $p_1, p_2, \dotsc$ the
corresponding projections onto $\mathbb Z$. If all $G_j$ are compact, then
$\prod_{j \in J} G_j$ is a compact abelian group as well and its dual group is
given by $\bigoplus_{j \in J} \widehat{G_j}$
\cite{RudinFourierAnalysis}*{Theorem 2.2.3}.
\begin{prop}
\label{finiteTorus}
Let $A$ be a compact abelian group, set $G = \mathbb T \oplus A$, and let
$\Lambda$ be a subset of $\widehat G = \mathbb Z \oplus \widehat A$. If
$p_\mathbb Z[\Lambda]$ is infinite, then $T(C_\Lambda(G)) = 2$.
\begin{proof}
Fix $f_1, \dotsc, f_n \in S_{C_\Lambda(G)}$ and $\varepsilon > 0$.
We have to
find $g \in S_{C_\Lambda(G)}$ with $\norm{f_k + g}_\infty \geq 2 -
\varepsilon$ for $k = 1, \dotsc, n$.
Every $f_k$ is uniformly continuous and therefore
there exists $\delta >0$ such that for $k = 1, \dotsc, n$ and
all $a \in A$
\begin{equation*}
|\varphi - \vartheta| \leq \delta \Longrightarrow
\left| f_k(\mathrm e^{\mathrm i \varphi}, a) -
f_k(\mathrm e^{\mathrm i \vartheta}, a) \right| \leq \varepsilon.
\end{equation*}
Since $p_\mathbb Z[\Lambda]$ contains infinitely many elements, we can
pick $s \in p_\mathbb Z[\Lambda]$ with $2|s|\delta \geq 2\pi$. By our
choice of $s$, we get for all $\vartheta \in [0, 2\pi]$
\begin{equation}
\label{finiteTorusEquation1}
\left\{ \mathrm e^{\mathrm i s \varphi}:
|\varphi - \vartheta| \leq \delta \right\}
= \left\{ \mathrm e^{\mathrm i \varphi}:
|\varphi - s\vartheta| \leq |s| \delta \right\}
= \mathbb T.
\end{equation}
Choose $g \in \Lambda$ with $p_\mathbb Z(g) = s$ and fix $k \in
\{1,\dotsc, n\}$. There exists
$(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)}) \in G$ with
\begin{equation*}
\left| f_k(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)}) \right|
= 1,
\end{equation*}
since $f_k \in S_{C_\Lambda(G)}$. By
(\ref{finiteTorusEquation1}), we can pick $\varphi^{(k)} \in
\mathbb R$ with
\begin{gather*}
\left| \varphi^{(k)} - \vartheta^{(k)} \right| \leq \delta\\
\shortintertext{and}
\mathrm e^{\mathrm i s \varphi^{(k)}} =
\frac{f_k(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)})}
{g(1,a^{(k)})}.
\end{gather*}
Note that the right-hand side of the last equation has absolute
value 1 because $g$ is a character of $G$. Consequently,
\begin{equation*}
g(\mathrm e^{\mathrm i \varphi^{(k)}}, a^{(k)}) =
g(\mathrm e^{\mathrm i \varphi^{(k)}}, e_A) g(1, a^{(k)}) =
e^{\mathrm i s \varphi^{(k)}} g(1, a^{(k)}) =
f_k(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)}).
\end{equation*}
Finally,
\begin{align*}
\norm{f_k + g}_\infty &\geq
\left| f_k(\mathrm e^{\mathrm i \varphi^{(k)}}, a^{(k)}) +
g(\mathrm e^{\mathrm i \varphi^{(k)}}, a^{(k)}) \right|\\
&\geq \left| g(\mathrm e^{\mathrm i \varphi^{(k)}}, a^{(k)}) +
f_k(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)}) \right|
- \left| f_k(\mathrm e^{\mathrm i \varphi^{(k)}}, a^{(k)}) -
f_k(\mathrm e^{\mathrm i \vartheta^{(k)}}, a^{(k)}) \right|\\
&\geq 2 - \varepsilon. \qedhere
\end{align*}
\end{proof}
\end{prop}
\begin{prop}
\label{infiniteTorus}
Let $A$ be a compact abelian group, set $G = \mathbb T^\mathbb N \oplus
A$, and let $\Lambda$ be a subset of $\widehat G = \mathbb Z^{(\mathbb N)}
\oplus \widehat A$. If we find arbitrarily large $l \in \mathbb N$ with
$p_l[\Lambda] \neq \{0\}$, then $T(C_\Lambda(G)) = 2$.
\begin{proof}
Fix $f_1, \dotsc, f_n \in S_{C_\Lambda(G)}$. Since $T_\Lambda(G)$ is
dense in $C_\Lambda(G)$, we may assume without loss of
generality that $f_1, \dotsc, f_n$ are trigonometric polynomials.
We are going to find $g \in S_{C_\Lambda(G)}$ with $\norm{f_k +
g}_\infty = 2$ for $k = 1,\dotsc, n$.
Setting $\Delta = \bigcup_{k=1}^n \text{spec}f_k$, we get a finite
subset of $\Lambda$ because every $f_k$ is a trigonometric
polynomial and therefore has a finite spectrum. Consequently, there
exists $l_0 \in \mathbb N$ with $p_l[\Delta] = \{0\}$ for all
$l > l_0$ and the evaluation of $f_1, \dotsc, f_n$ at a point
$(t_1,t_2, \dotsc, a) \in G$ just depends on the coordinates
$t_1, \dotsc, t_{l_0}$ and $a$.
By assumption, we can find $l > l_0$ and $g \in \Lambda$ with
$s = p_l(g) \neq 0$. Fix $k \in \{1,\dotsc, n\}$. There exists
$x^{(k)} = (t^{(k)}_1,t^{(k)}_2, \dotsc, a^{(k)}) \in G$ with
$|f_k(x^{(k)})|=1$ since $f_k \in S_{C(G)}$. Pick $u^{(k)} \in
\mathbb T$ with
\begin{equation*}
(u^{(k)})^s = \frac{f_k(x^{(k)})}
{g(t^{(k)}_1, \dotsc, t^{(k)}_{l-1}, 1, t^{(k)}_{l+1},
t^{(k)}_{l+2}, \dotsc, a^{(k)})}.
\end{equation*}
Note that the right-hand side of the last equation has absolute
value 1 because $g$ is a character of $G$. With the same reasoning
as at the end of the proof of Proposition \ref{finiteTorus} we
get that
\begin{equation*}
g(t^{(k)}_1, \dotsc, t^{(k)}_{l-1}, u^{(k)}, t^{(k)}_{l+1},
t^{(k)}_{l+2}, \dotsc, a^{(k)})=f_k(x^{(k)}).
\end{equation*}
Finally,
\begin{align*}
\norm{f_k + g}_\infty &\geq
\left|(f_k + g)(t^{(k)}_1, \dotsc, t^{(k)}_{l-1}, u^{(k)},
t^{(k)}_{l+1}, t^{(k)}_{l+2}, \dotsc, a^{(k)}) \right|\\
&= 2 \left| f_k(x^{(k)}) \right| = 2. \qedhere
\end{align*}
\end{proof}
\end{prop}
\begin{lem}
\label{Lemma1}
Let $\varepsilon > 0$ and $z_1, \dotsc, z_n \in \{ z \in \mathbb C :
|z| \leq 1 \}$ with
\begin{equation*}
\left| \sum_{k = 1}^n z_k \right| \geq n(1 - \varepsilon).
\end{equation*}
Then
\begin{equation*}
|z_k| \geq 1 - n\varepsilon \quad \text{and} \quad
|z_k - z_l| \leq 2n \sqrt \varepsilon
\end{equation*}
for $k, l = 1, \dotsc, n$.
\begin{proof}
The first assertion is an easy consequence of the triangle
inequality.
For fixed $k, l \in \{1, \dotsc, n\}$ we have
\begin{align*}
\Re z_k \overline{z_l} &= \Re \sum_{\mathclap{s,t = 1}}^n
z_s \overline{z_t} - \Re \sum_{\mathclap{\substack{s,t=1\\
(s,t) \neq (k,l)}}}^n z_s \overline{z_t} =
\left| \sum_{k = 1}^n z_k \right|^{2} -
\Re \sum_{\mathclap{\substack{s,t = 1\\(s,t) \neq (k,l)}}}^n
z_s \overline{z_t}\\
&\geq n^2(1 - \varepsilon)^2 - (n^2 - 1) =
1 - 2n^2\varepsilon + n^2\varepsilon^2\\
&\geq 1 - 2n^2\varepsilon.
\end{align*}
Using this inequality, we get
\begin{align*}
|z_k - z_l|^2 &= |z_k|^2 + |z_l|^2 -2\Re z_k \overline{z_l}\\
&\leq 2 - 2(1 - 2n^2\varepsilon) = 4n^2\varepsilon. \qedhere
\end{align*}
\end{proof}
\end{lem}
The following lemma shows that if we are given $n$ subsets of the closed
unit disk, each of which misses a circular sector with central angle at
least $\frac{2\pi}{n}$, then we can rotate these $n$ subsets so that
their intersection becomes empty.
\begin{lem}
\label{Lemma2}
Let $W_1, \dotsc, W_n \subset \{ z \in \mathbb C : |z| \leq 1 \}$.
Suppose that for every $k \in \{1, \dotsc, n\}$ there
exist $\varphi_k \in [0, 2\pi]$ and $\vartheta_k \in [\frac{2\pi}{n},
2\pi]$ with
\begin{equation*}
W_k \cap \{ r \mathrm e^{\mathrm i \alpha}: r \in [0,1],
\alpha \in [\varphi_k, \varphi_k + \vartheta_k] \} = \emptyset.
\end{equation*}
Then there exist $t_1, \dotsc, t_n \in \mathbb T$ with
\begin{equation*}
\bigcap_{k = 1}^n t_k W_k = \emptyset.
\end{equation*}
\begin{proof}
Setting for $k = 1, \dotsc, n$ (with $\vartheta_0 = 0$)
\begin{gather*}
t_k = \mathrm e^{\mathrm i \sum_{l = 0}^{k - 1}\vartheta_l}
\mathrm e^{-\mathrm i \varphi_k},
\shortintertext{we get}
t_k W_k \cap
\left\{ r \mathrm e^{\mathrm i \alpha}:
r \in [0,1], \alpha \in
\left[ \sum_{l = 0}^{k - 1}\vartheta_l,
\sum_{l = 0}^{k}\vartheta_l \right] \right\} = \emptyset.
\end{gather*}
Fix $\alpha \in [0,2\pi]$ and $r \in [0,1]$.
Since $\sum_{k = 1}^n \vartheta_k \geq 2\pi$,
there exists $k \in \{1, \dotsc, n\}$ with
\begin{equation*}
\alpha \in \left[ \sum_{l = 0}^{k - 1}\vartheta_l,
\sum_{l = 0}^{k}\vartheta_l \right].
\end{equation*}
Consequently, $r \mathrm e^{\mathrm i \alpha}$ does not belong to
$t_k W_k$ and $\bigcap_{k = 1}^n t_k W_k = \emptyset$.
\end{proof}
\end{lem}
\begin{lem}
\label{Lemma3}
Let $\varepsilon, \delta > 0$, let $W \subset \{ z \in \mathbb C:
1 - \delta \leq |z| \leq 1 \}$, and set $W_\varepsilon =
\{ z \in \mathbb C: \text{there exists $w \in W$ with
$|w-z| \leq \varepsilon$} \}$. Suppose that there exists $\vartheta
\in [0,2\pi]$ such that for every $\varphi \in [0,2\pi]$
\begin{equation*}
W_\varepsilon \cap
\{ r \mathrm e^{\mathrm i \alpha}: r \in [0,1], \alpha \in
[\varphi, \varphi + \vartheta] \} \neq \emptyset.
\end{equation*}
Then $W$ is a $(2\varepsilon + \delta + \vartheta)$-net for $\mathbb T$.
\begin{proof}
Fix $\mathrm e^{\mathrm i \varphi} \in \mathbb T$. We have to find
$w \in W$ with $|w- e^{\mathrm i \varphi}| \leq 2\varepsilon + \delta
+ \vartheta$.
By assumption, there exist $s \mathrm e^{\mathrm i \beta} \in
W_\varepsilon \cap \{ r \mathrm e^{\mathrm i \alpha}: r \in [0,1],
\alpha \in [\varphi, \varphi + \vartheta] \}$ and $w \in W$ with
$|w-s \mathrm e^{\mathrm i \beta}| \leq \varepsilon$. It is easy to
see that $s \geq 1 - \delta - \varepsilon$. Finally,
\begin{align*}
|w - \mathrm e^{\mathrm i \varphi}| &\leq
|w-s \mathrm e^{\mathrm i \beta}| +
|s \mathrm e^{\mathrm i \beta} -
s \mathrm e^{\mathrm i \varphi}|+
|s\mathrm e^{\mathrm i \varphi} -
\mathrm e ^{\mathrm i \varphi}|\\
&\leq \varepsilon + \vartheta + (\delta +\varepsilon)
= 2\varepsilon + \delta + \vartheta. \qedhere
\end{align*}
\end{proof}
\end{lem}
\begin{prop}
\label{ProductOfFiniteGroups}
Let $A$ be a compact abelian group, let $(G_l)_{l \in \mathbb N}$ be a
sequence of finite abelian groups, set $G = \prod_{l = 1}^\infty G_l
\oplus A$, and let $\Lambda$ be an infinite subset of $\widehat G =
\bigoplus_{l = 1}^\infty \widehat{G_l} \oplus \widehat A$. If
$p_{\widehat A}[\Lambda]$ is a finite set, then $T(C_\Lambda(G)) = 2$.
\begin{proof}
The beginning is almost like in the proof of Proposition
\ref{infiniteTorus}.
Fix $f_1, \dotsc, f_n \in S_{C_\Lambda(G)}$ and $\varepsilon > 0$.
Since $T_\Lambda(G)$ is dense in $C_\Lambda(G)$, we may assume
without loss of generality that $f_1, \dotsc, f_n$ are trigonometric
polynomials. We have to find $g \in S_{C_\Lambda(G)}$ with
$\norm{f_k + g}_\infty \geq 2 - \varepsilon$ for $k = 1, \dotsc, n$.
Setting $\Delta = \bigcup_{k = 1}^n \text{spec}f_k$, we get a finite
subset of $\Lambda$ because every $f_k$ is a trigonometric
polynomial and therefore has a finite spectrum. Consequently, there
exists $l_0 \in \mathbb N$ with $p_{\widehat{G_l}}[\Delta] =
\{ \mathbf{1}_{G_l} \}$ for all $l > l_0$ and the evaluation of $f_1,
\dotsc, f_n$ at a point $(x_1, x_2, \dotsc, a) \in G$ just depends on
the coordinates $x_1, \dotsc, x_{l_0}$ and $a$.
Since $\widehat{G_1}, \dotsc, \widehat{G_{l_0}}$ and $p_{\widehat A}
[\Lambda]$ are finite sets and $\Lambda$ is an infinite set, there
exist an infinite subset $\Lambda_0$ of $\Lambda$ and elements
$\gamma_1 \in \widehat{G_1}, \dotsc, \gamma_{l_0} \in
\widehat{G_{l_0}}, \gamma_A \in \widehat A$ with
$p_{\widehat{G_l}}[\Lambda_0] = \{\gamma_l\}$ for $l = 1, \dotsc, l_0$
and $p_{\widehat A}[\Lambda_0] = \{\gamma_A\}$. In other words, all
elements of $\Lambda_0$ coincide in the first $l_0$ coordinates of
$\bigoplus_{l = 1}^\infty \widehat{G_l}$ and in the coordinate that
corresponds to $\widehat A$. We can also assume that $\Lambda_0$ is a
Sidon set because every infinite subset of $\widehat G$ contains an
infinite Sidon set \cite{RudinFourierAnalysis}*{Example 5.7.6.(a)}.
(Recall that $\Lambda_0$ is said to be a \emph{Sidon set}, if there
exists a constant $C > 0$ such that $\sum_{\gamma \in \Lambda_0}
|\hat f (\gamma)| \leq C \norm{f}_\infty$ for all $f \in
T_{\Lambda_0}(G)$.) So if $\{ \lambda_1, \lambda_2, \dotsc \}$ is an
enumeration of $\Lambda_0$, then $(\lambda_s)_{s \in \mathbb N}$ is
equivalent to the canonical basis of $\ell^1$.
Set $\gamma = (\overline{\gamma_1}, \dotsc,
\overline{\gamma_{l_0}}, \mathbf{1}_{G_{l_0 + 1}},
\mathbf{1}_{G_{l_0 + 2}}, \dotsc, \overline{\gamma_A}) \in
\widehat G$. The sequence $(\gamma \lambda_s)_{s \in \mathbb N}$ is
still equivalent to the canonical basis of $\ell^1$ and we have for
every character $\gamma \lambda_s$ that $p_{\widehat A}
(\gamma\lambda_s) = \mathbf{1}_A$ and $p_{\widehat G_l}
(\gamma\lambda_s) = \mathbf{1}_{G_l}$ for $l = 1, \dotsc, l_0$. Thus
the evaluation of $\gamma \lambda_1, \gamma \lambda_2, \dotsc$ at a
point $(x_1, x_2, \dotsc, a) \in G$ does not depend on the coordinates
$x_1, \dotsc, x_{l_0}$ and $a$.
Choose $n_0 \in \mathbb N$ with $\frac{2\pi}{n_0} \leq
\frac \varepsilon 3$ and $\delta \in (0,1)$ with $4n_0 \sqrt \delta
\leq \frac \varepsilon 3$. By James's $\ell^1$ distortion theorem
\cite{AlbiacKaltonTopics}*{Theorem 10.3.1}, there is a normalized
block basis sequence $(g_s)_{s \in \mathbb N}$ of
$(\gamma \lambda_s)_{s \in \mathbb N}$ with
\begin{equation*}
(1 - \delta) \sum_{s = 1}^\infty |z_s| \leq
\norm{\sum_{s = 1}^\infty z_s g_s}_\infty \leq
\sum_{s = 1}^\infty |z_s|
\end{equation*}
for any sequence of complex numbers $(z_s)_{s \in \mathbb N}$ with
finite support. It follows that for every $n_0$-tuple
$(z_1, \dotsc, z_{n_0}) \in \mathbb T^{n_0}$ there is $x \in G$ with
\begin{equation*}
\left| \sum_{s = 1}^{n_0} z_s g_s(x) \right| \geq n_0 (1 - \delta).
\end{equation*}
Using Lemma \ref{Lemma1}, we have for $s,t = 1, \dotsc, n_0$
\begin{equation*}
|g_s(x)| \geq 1 - n_0 \delta \quad \text{and} \quad
|z_s g_s(x) - z_t g_t(x)| \leq 2n_0 \sqrt \delta.
\end{equation*}
Setting for $s = 1, \dotsc, n_0$
\begin{align*}
W_s &= g_s[G] \cap \{ z \in \mathbb C : |z| \geq 1 - n_0 \delta \}
\shortintertext{and}
\widetilde{W_s} &=\{ z \in \mathbb C : \text{there exists
$w \in W_s $ with $|w-z| \leq 2n_0 \sqrt\delta$} \},
\end{align*}
we conclude that for every tuple $(z_1, \dotsc, z_{n_0}) \in
\mathbb T^{n_0}$
\begin{equation*}
\bigcap_{s = 1}^{n_0} z_s \widetilde{W_s} \neq \emptyset.
\end{equation*}
By Lemma \ref{Lemma2}, there is $s_0 \in \{1, \dotsc, n_0\}$ such that
for any $\varphi \in [0,2\pi]$
\begin{equation*}
\widetilde{W_{s_0}} \cap
\left\{ r \mathrm e^{\mathrm i \alpha}: r \in [0,1], \alpha \in
\left[ \varphi,\varphi + \frac{2\pi}{n_0} \right] \right\}
\neq \emptyset.
\end{equation*}
It follows from Lemma \ref{Lemma3} and our choice of $n_0$ and
$\delta$ that $W_{s_0}$ is an $\varepsilon$-net for $\mathbb T$.
The function $g = \overline{\gamma}g_{s_0}$ is by construction a
normalized trigonometric polynomial with spectrum contained in
$\Lambda$. Fix $k \in \{1, \dotsc, n\}$. There exists
$x^{(k)} = (x^{(k)}_1, x^{(k)}_2, \dotsc, a^{(k)}) \in G$ with
$|f_k(x^{(k)})| = 1$. By our choice of $g_{s_0}$ we can find
$y^{(k)} = (y^{(k)}_1, y^{(k)}_2, \dotsc, b^{(k)}) \in G$ with
\begin{equation*}
\left| \gamma(x^{(k)})f_k(x^{(k)})
- g_{s_0}(y^{(k)}) \right| \leq \varepsilon.
\end{equation*}
Note that $\gamma(x^{(k)})f_k(x^{(k)}) \in \mathbb T$ since $\gamma$ is
a character. We therefore get
\begin{align*}
\norm{f_k + g}_\infty &= \norm{\gamma f_k + g_{s_0}}_\infty\\
&\geq \left| (\gamma f_k + g_{s_0})
(x^{(k)}_1, \dotsc, x^{(k)}_{l_0}, y^{(k)}_{l_0+1},
y^{(k)}_{l_0+2}, \dotsc, a^{(k)}) \right|\\
&= \left| \gamma(x^{(k)})f_k(x^{(k)})
+ g_{s_0}(y^{(k)}) \right|\\
&\geq 2\left| \gamma(x^{(k)})f_k(x^{(k)}) \right| -
\left| \gamma(x^{(k)})f_k(x^{(k)}) - g_{s_0}(y^{(k)}) \right|\\
&\geq 2 - \varepsilon. \qedhere
\end{align*}
\end{proof}
\end{prop}
\begin{lem}
\label{Lemma4}
Let $G$ be a compact abelian group and let $\gamma \in \widehat G$.
\begin{enumerate}[\upshape(a)]
\item If $o(\gamma) = m$, then $\gamma[G] =
\{ \mathrm e^{2\pi \mathrm i \frac k m}: k = 0, \dotsc, m-1 \}$,
i.e., the image of $G$ under $\gamma$ is the set of the $m$th
roots of unity.
\item If $o(\gamma) = \infty$, then $\gamma[G] = \mathbb T$.
\end{enumerate}
\begin{proof}
If $o(\gamma) = m$, we have $\gamma(x)^m = 1$ for every $x \in G$. Thus
every element of $\gamma[G]$ is an $m$th root of unity. Setting
$n = |\gamma[G]|$, it follows from Lagrange's theorem that
$\gamma(x)^n = 1$ for every $x \in G$. Therefore $n \geq m$ and
$\gamma[G]$ has to coincide with $\{ \mathrm e^{2\pi \mathrm i \frac k
m}: k = 0, \dotsc, m - 1 \}$.
The set $\gamma[G]$ is a compact and therefore closed subgroup of
$\mathbb T$. Since all proper closed subgroups of $\mathbb T$ are
finite \cite{MorrisPontryaginDualityStructureLCAGroups}*{Corollary 3,
p. 28}, we have $\gamma[G] = \mathbb T$, if $o(\gamma) = \infty$.
\end{proof}
\end{lem}
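For instance, if $G = \mathbb Z/m\mathbb Z$ and
$\gamma(k) = \mathrm e^{2\pi \mathrm i k/m}$, then $o(\gamma) = m$ and
$\gamma[G]$ consists precisely of the $m$th roots of unity, as in (a);
for $G = \mathbb T$ and $\gamma(z) = z$ we have $o(\gamma) = \infty$
and $\gamma[G] = \mathbb T$, as in (b).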
\begin{theo}
\label{GeneralTheorem}
Let $G$ be a compact abelian group and let $\Lambda$ be an infinite subset
of $\widehat G$. Then $T(C_\Lambda(G)) = 2$.
\begin{proof}
We start like in the proofs of Proposition \ref{infiniteTorus} and
\ref{ProductOfFiniteGroups}.
Fix $f_1, \dotsc, f_n \in S_{C_\Lambda(G)}$ and $\varepsilon > 0$.
Since $T_\Lambda(G)$ is dense in $C_\Lambda(G)$, we may assume
without loss of generality that $f_1, \dotsc, f_n$ are trigonometric
polynomials. We have to find $g \in S_{C_\Lambda(G)}$ with
$\norm{f_k + g}_\infty \geq 2 - \varepsilon$ for $k = 1, \dotsc, n$.
Setting $\Delta = \bigcup_{k = 1}^n \text{spec}f_k$, we get a finite
subset of $\Lambda$ because every $f_k$ is a trigonometric
polynomial and therefore has a finite spectrum.
We can assume, by passing to a countably infinite subset if
necessary, that $\Lambda$ is countable. Hence $\Gamma = \langle
\Lambda \rangle$, the group generated by $\Lambda$, is a countable
subgroup of $\widehat G$.
Let $M$ be a maximal independent subset of $\Gamma$ and let $
\Gamma_1 = \langle M \rangle$ be the subgroup of $\Gamma$ that is
generated by $M$. Defining inductively $\Gamma_l =
\{ \gamma \in \Gamma: \gamma^l \in \Gamma_{l - 1}\}$ for $l = 2, 3,
\dotsc$, we get an increasing sequence $(\Gamma_l)_{l \in \mathbb N}$
of subgroups of $\Gamma$. Since $M$ is a maximal independent subset of
$\Gamma$, we have that $\bigcup_{l = 1}^\infty \Gamma_l = \Gamma$.
Furthermore, every $\Gamma_l$ is a direct sum of cyclic groups
\cite{FuchsInfiniteAbelianGroupsI}*{Corollary 18.4}. We distinguish
two cases depending on whether or not there exists $\Gamma_l$ that
contains $\Delta$ and infinitely many elements of $\Lambda$.
\emph{First case:} Suppose that there exists $l_0 \in \mathbb N$ such
that $\Delta \subset \Gamma_{l_0}$ and $\Lambda_0 = \Gamma_{l_0}\cap
\Lambda$ is an infinite set.
By our choice of $\Gamma_{l_0}$, the functions $f_1, \dotsc, f_n$ and
all characters $\gamma \in \Lambda_0$ are constant on the cosets
of $G/(\Gamma_{l_0})^\perp$ and can therefore be considered as
functions and characters on $G_0 = G/(\Gamma_{l_0})^\perp$. (To
simplify notation, we continue to write $f_1, \dotsc, f_n$.) Note that
$\Gamma_{l_0}$ is the dual group of $G_0$. Since $\Gamma_{l_0}$ is
a direct sum of cyclic groups, there exists a sequence
$(\widehat{G_s})_{s \in \mathbb N}$ of finite abelian groups such that
$\Gamma_{l_0} = \mathbb Z^{(\mathbb N)} \oplus
\bigoplus_{s = 1}^\infty \widehat{G_s}$ or $\Gamma_{l_0} =
\mathbb Z^{n_0} \oplus \bigoplus_{s = 1}^\infty \widehat{G_s}$ for
adequate $n_0 \in \mathbb N$. Hence $G_0 = \mathbb T^{\mathbb N}
\oplus \prod_{s = 1}^\infty G_s$ or $G_0 = \mathbb T^{n_0} \oplus
\prod_{s = 1}^\infty G_s$. Let $p_1, p_2, \dotsc$ be the projections
from $\Gamma_{l_0}$ onto $\mathbb Z$.
If there exists $s_0 \in \mathbb N$ such that $p_{s_0}[\Lambda_0]$
contains infinitely many elements or if there exist arbitrarily large
$s \in \mathbb N$ with $p_s[\Lambda_0] \neq \{0\}$, then
$T(C_{\Lambda_0}(G_0)) = 2$ by Proposition \ref{finiteTorus} or
\ref{infiniteTorus}. Otherwise $p_{\mathbb Z ^{(\mathbb N)}}
[\Lambda_0]$ (or $p_{\mathbb Z ^{n_0}}[\Lambda_0]$) is a finite set and
$T(C_{\Lambda_0}(G_0)) = 2$ by Proposition \ref{ProductOfFiniteGroups}.
So we can find $\tilde g \in S_{C_{\Lambda_0}(G_0)}$ with
$\norm{f_k + \tilde g}_\infty \geq 2 - \varepsilon$ for $k = 1,
\dotsc, n$. Setting $g=\tilde g \circ \pi$ where
$\pi$ is the canonical map from $G$ onto $G_0 = G/(\Gamma_{l_0})^\perp$, we
get $\norm{f_k + g}_\infty \geq 2 - \varepsilon$ for $k = 1,\dotsc,
n$.
\emph{Second case:} Suppose that there exist arbitrarily large
$l \in \mathbb N$ with $(\Gamma_l \setminus \Gamma_{l - 1}) \cap
\Lambda \neq \emptyset$.
Fix $l_0 \in \mathbb N$ with $\Delta \subset \Gamma_{l_0}$ and choose
$l_1 \in \mathbb N$ with $l_1 > l_0^2$, $\frac{2\pi}{l_1} \leq
\varepsilon$ and $(\Gamma_{l_1} \setminus \Gamma_{l_1 - 1}) \cap
\Lambda \neq \emptyset$. By our choice of $\Gamma_{l_0}$, the
functions $f_1, \dotsc, f_n$ are constant on the cosets of
$G/(\Gamma_{l_0})^\perp$ and therefore
\begin{equation}
\label{GeneralTheoremEquation1}
f_k(xy) = f_k(x) \quad (x \in G, y \in (\Gamma_{l_0})^\perp)
\end{equation}
for $k = 1, \dotsc, n$. Pick $g \in (\Gamma_{l_1} \setminus
\Gamma_{l_1 - 1}) \cap \Lambda$ and denote by $\tilde g$ the restriction
of $g$ to $(\Gamma_{l_0})^\perp$. What can we say about the order of
$\tilde g$? Since $(\Gamma_{l_0})^{\perp \perp} = \Gamma_{l_0}$, we
have for every $m \in \mathbb N$ that $\tilde g ^m =
\mathbf 1_{(\Gamma_{l_0})^\perp}$ if and only if
$g^m \in \Gamma_{l_0}$.
Suppose that $\tilde g ^m = \mathbf 1_{(\Gamma_{l_0})^\perp}$ for
some $2 \leq m \leq l_0$. Then $\tilde g ^{ml_0} =
\mathbf 1_{(\Gamma_{l_0})^\perp}$ as well and
$g^{ml_0} \in \Gamma_{l_0}$.
Consequently, $g \in \Gamma_{ml_0}$ because $g^{ml_0} \in \Gamma_{l_0}
\subset \Gamma_{ml_0 - 1}$. But this contradicts our choice of $g$ and
$l_1$ because $l_1 > ml_0$. Assuming that $\tilde g^m =
\mathbf 1_{(\Gamma_{l_0})^\perp}$ for some $l_0 < m < l_1$ leads to the same
contradiction. The order of $\tilde g$ is therefore at least $l_1$.
By our choice of $l_1$ and by Lemma \ref{Lemma4}, we get that
$\tilde g[(\Gamma_{l_0})^\perp]$ is an $\varepsilon$-net for $\mathbb
T$.
Fix now $k \in \{1, \dotsc, n\}$ and choose $x^{(k)} \in G$ with
$|f_k(x^{(k)})| = 1$ and $y^{(k)} \in (\Gamma_{l_0})^\perp$ with
\begin{equation}
\label{GeneralTheoremEquation2}
\left| f_k(x^{(k)}) - g(x^{(k)})\tilde g (y^{(k)}) \right|
\leq \varepsilon.
\end{equation}
Note that $g$ is a character and hence $g(x^{(k)}) \in \mathbb T$.
Using (\ref{GeneralTheoremEquation1}) and
(\ref{GeneralTheoremEquation2}), we get
\begin{align*}
\norm{f_k + g}_\infty &\geq \left| f_k(x^{(k)}y^{(k)}) +
g(x^{(k)}y^{(k)}) \right|\\
&= \left| f_k(x^{(k)}) + g(x^{(k)})\tilde g (y^{(k)}) \right|\\
&\geq 2 \left| f_k(x^{(k)}) \right| - \left| f_k(x^{(k)}) -
g(x^{(k)})\tilde g (y^{(k)}) \right|\\
&\geq 2 - \varepsilon. \qedhere
\end{align*}
\end{proof}
\end{theo}
\begin{cor}
Let $G$ be a metrizable, compact abelian group and let $\Lambda$ be a
subset of $\widehat G$. The space $C_\Lambda(G)$ has the almost Daugavet
property if and only if $\Lambda$ contains infinitely many elements.
\begin{proof}
Every almost Daugavet space is infinite-dimensional and so the
condition is necessary.
If $G$ is a metrizable, compact abelian group, then $\widehat G$ is
countable \cite{RudinFourierAnalysis}*{Theorem 2.2.6} and $C(G)$ is
separable. Since for separable Banach spaces the almost Daugavet
property can be characterized via the thickness
\cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Theorem 1.1},
it is sufficient to prove that $T(C_\Lambda(G)) = 2$. But this is
given by Theorem \ref{GeneralTheorem}.
\end{proof}
\end{cor}
\section{Subspaces of $L$-embedded spaces}
To deal with translation-invariant subspaces of $L^1(G)$ we will consider a
more general class of Banach spaces. A linear projection $P$ on a Banach space
$X$ is called an \emph{$L$-projection}, if
\begin{equation*}
\norm{x}=\norm{Px} + \norm{x-Px} \quad (x \in X).
\end{equation*}
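For instance (a standard example), on $X = L^1(\mu)$ every measurable
set $A$ induces the $L$-projection $Pf = \chi_A f$, since
\begin{equation*}
\norm{f} = \int_A |f| \,\mathrm d\mu + \int_{A^c} |f| \,\mathrm d\mu
= \norm{Pf} + \norm{f - Pf}.
\end{equation*}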
A closed subspace of $X$ is called an \emph{$L$-summand}, if it is the range of
an $L$-projection, and $X$ is called \emph{$L$-embedded}, if $X$ is an
$L$-summand in $X^{**}$. Classical examples of $L$-embedded spaces are
$L^1(\mu)$-spaces, preduals of von Neumann algebras, and the Hardy space $H^1$
\cite{HarmandWernerMideale}*{Example IV.1.1}.
Using the principle of local reflexivity, it is easy to see that a
non-reflexive, $L$-embedded space has thickness 2. We will strengthen this
and will show that every non-reflexive subspace of an $L$-embedded space has
thickness 2. Let us recall the following result from the theory of
$L$-embedded spaces \cite{HarmandWernerMideale}*{claim in the proof of Theorem
IV.2.7}.
\begin{prop}
\label{PropositionNearAccumulationPoint}
Let $X$ be an $L$-embedded space with $X^{**} = X\oplus_1 X^s$,
let $\varepsilon$
be a number with $0 < \varepsilon < \frac 1 4$, and let
$(y_l)_{l \in \mathbb N}$ be a sequence in $X$ with
\begin{equation*}
(1 - \varepsilon)\sum_{l = 1}^\infty |a_l| \leq
\norm{\sum_{l = 1}^\infty a_l y_l} \leq
\sum_{l = 1}^\infty |a_l|
\end{equation*}
for any sequence of scalars $(a_l)_{l \in \mathbb N}$ with
finite support. Then there exists $x_s \in X^s$ such that
\begin{equation*}
1 - 4\sqrt \varepsilon \leq \norm{x_s} \leq 1
\end{equation*}
and for all $\delta > 0$, all $x_1^*, \dotsc, x_n^* \in X^{**}$ and all
$l_0 \in \mathbb N $ there is $l \geq l_0$ with
\begin{equation*}
|x_s(x_k^*) - x_k^*(y_l)| \leq 3 \sqrt \varepsilon \norm{x_k^*} + \delta
\quad (k = 1, \dotsc, n).
\end{equation*}
In other words, there is $x_s \in X^s$ which is ``close'' to a weak*
accumulation point of $(y_l)_{l \in \mathbb N}$.
\end{prop}
\begin{theo}
\label{LEmbedded}
Let $X$ be an $L$-embedded space with $X^{**} = X \oplus_1 X^s$ and let
$Y$ be a closed subspace of $X$ which is not reflexive. Then $T(Y) = 2$.
\begin{proof}
Fix $x_1, \dotsc, x_n \in S_Y$ and $\varepsilon > 0$. We have to find
$y \in S_Y$ with $\norm{x_k + y} \geq 2 - \varepsilon$ for
$k = 1, \dotsc, n$.
Choose $\delta > 0$ with $7 \sqrt \delta + 2 \delta \leq \varepsilon$.
Every non-reflexive subspace of $X$ contains a copy of $\ell^1$
\cite{HarmandWernerMideale}*{Corollary IV.2.3} and by James's $\ell^1$
distortion theorem \cite{AlbiacKaltonTopics}*{Theorem 10.3.1} there is
a sequence $(y_l)_{l \in \mathbb N}$ in $Y$ with
\begin{equation*}
(1 - \delta)\sum_{l = 1}^\infty |a_l| \leq
\norm{\sum_{l = 1}^\infty a_l y_l} \leq
\sum_{l = 1}^\infty |a_l|
\end{equation*}
for any sequence of scalars $(a_l)_{l \in \mathbb N}$ with
finite support. Let $x_s \in X^s$ be ``close'' to a weak* accumulation
point of $(y_l)_{l \in \mathbb N}$ as in Proposition
\ref{PropositionNearAccumulationPoint}. Since $X^{**} = X \oplus_1
X^s$, we have for $k = 1, \dotsc, n$
\begin{equation*}
\norm{x_k + x_s} = \norm{x_k} + \norm{x_s} \geq 2 - 4\sqrt \delta.
\end{equation*}
Thus there exist functionals $x_1^*, \dotsc, x_n^* \in S_{X^*}$ with
\begin{align*}
|x_k^*(x_k) + x_s(x_k^*)| &\geq 2 - 4 \sqrt \delta - \delta
\shortintertext{and $l \in \mathbb N$ with}
|x_s(x_k^*) - x_k^*(y_l)| &\leq 3 \sqrt \delta + \delta
\end{align*}
for $k = 1,\dotsc, n$.
Fix $k \in \{1, \dotsc, n\}$. Using the last two inequalities leads to
\begin{align*}
\norm{x_k + y_l} &\geq |x_k^*(x_k) + x_k^*(y_l)|\\
&\geq |x_k^*(x_k) + x_s(x_k^*)| - |x_s(x_k^*) - x_k^*(y_l)|\\
&\geq (2 - 4\sqrt \delta - \delta) - (3 \sqrt \delta + \delta)\\
&\geq 2 - \varepsilon. \qedhere
\end{align*}
\end{proof}
\end{theo}
\begin{cor}
Let $X$ be an $L$-embedded space and let $Y$ be a separable, closed
subspace of $X$. If $Y$ is not reflexive, then $Y$ has the almost
Daugavet property.
\begin{proof}
The space $Y$ has thickness 2 by Theorem \ref{LEmbedded} and this is
for separable spaces equivalent to the almost Daugavet property
\cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Theorem 1.1}.
\end{proof}
\end{cor}
Let us use this result in the setting of translation-invariant subspaces of
$L^1(G)$. Suppose that $G$ is a compact abelian group, $\Lambda$ a subset of
its dual group $\widehat G$ and $0 < r < p < \infty$. The set $\Lambda$ is
said to \emph{be of type} $(r,p)$, if there is a constant $C > 0$ such that
\begin{equation*}
\norm{f}_p \leq C \norm{f}_r
\end{equation*}
for every $f \in T_\Lambda(G)$. In other words, if $\norm{\,\cdot\,}_r$ and
$\norm{\,\cdot\,}_p$ are equivalent on $T_\Lambda(G)$. We say furthermore that
$\Lambda$ is a \emph{$\Lambda(p)$ set}, if $\Lambda$ is of type $(r,p)$ for
some $r < p$.
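Note that $\norm{f}_r \leq \norm{f}_p$ holds automatically for $r < p$ by
Jensen's inequality, the Haar measure of $G$ being a probability measure;
the content of the definition is thus the reverse estimate. For example,
every finite set $\Lambda$ is of type $(r,p)$ for all $0 < r < p < \infty$,
since $T_\Lambda(G)$ is then finite-dimensional and any two of the
functionals $\norm{\,\cdot\,}_r$, $0 < r < \infty$, are equivalent on it.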
\begin{cor}
Let $G$ be a metrizable, compact abelian group and let $\Lambda$ be a
subset of $\widehat G$. The space $L^1_\Lambda(G)$ has the almost Daugavet
property if and only if $\Lambda$ is not a $\Lambda(1)$ set.
\begin{proof}
Every almost Daugavet space contains a copy of $\ell^1$
\cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Corollary 3.3} and
is therefore not reflexive. So the condition is necessary because
$L^1_\Lambda(G)$ is reflexive if and only if $\Lambda$ is a
$\Lambda(1)$ set \cite{HareElementaryProofLambdap}*{Corollary}.
If $G$ is a metrizable, compact abelian group, then $\widehat G$ is
countable \cite{RudinFourierAnalysis}*{Theorem 2.2.6} and
$L^1(G)$ is separable. If $\Lambda$ is not a $\Lambda(1)$ set, then
$L^1_\Lambda(G)$ is not reflexive and $T(L^1_\Lambda(G)) = 2$
by Theorem \ref{LEmbedded}. But this is for separable spaces equivalent
to the almost Daugavet property
\cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*{Theorem 1.1}.
\end{proof}
\end{cor}
\section{Remarks}
The almost Daugavet property is strictly weaker than the Daugavet
property for translation-invariant subspaces of $C(G)$ or $L^1(G)$. If we
set $\Lambda = \{ 3^n : n \in \mathbb N \}$, then $\Lambda$ is a Sidon set. So
$C_\Lambda(\mathbb T)$ is isomorphic to $\ell^1$, has the
Radon-Nikod\'ym property and therefore fails the Daugavet property.
But $\Lambda$ is an
infinite set and $C_\Lambda(\mathbb T)$ has the almost Daugavet property.
Analogously, $L^1_{\mathbb N}(\mathbb T)$ is isomorphic to the Hardy space
$H^1_0$, therefore has the Radon-Nikod\'ym property and fails the Daugavet
property. But $\mathbb N$ is not a $\Lambda(1)$ set and $L^1_{\mathbb N}(\mathbb T)$ has
the almost Daugavet property.
We say that a Banach space $X$ has the fixed point property, if given any
non-empty, closed, bounded and convex subset $C$ of $X$, every non-expansive
mapping $T: C \rightarrow C$ has a fixed point. Here $T$ is non-expansive,
if $\norm{Tx - Ty}\leq \norm{x - y}$ for all $x,y \in C$. By considering
$C = \{ (x_n)_{n \in \mathbb N} \in S_{\ell^1}: x_n \geq 0 \text{ for all } n \}$
and the right shift operator $T(x_1, x_2, \dotsc) = (0, x_1, x_2, \dotsc)$,
an isometric self-map of $C$ whose fixed point would have to satisfy
$x_1 = 0$ and $x_{n+1} = x_n$ for all $n$, and hence be $0 \notin C$, it can
be shown that $\ell^1$ does not have the fixed point property
\cite{DowlingLennardNonreflexiveSubspaceL1FailsFixedPointProperty}*
{Theorem 1.2}. This counterexample can be transferred to all Banach spaces
that contain an \emph{asymptotically isometric copy of $\ell^1$}. A Banach
space $X$ is said to contain an asymptotically isometric copy of $\ell^1$,
if there is a null sequence $(\varepsilon_n)_{n \in \mathbb N}$ in $(0,1)$
and a sequence $(x_n)_{n \in \mathbb N}$ in $X$ such that
\begin{equation*}
\sum_{n = 1}^\infty (1 - \varepsilon_n)|a_n| \leq
\norm{\sum_{n = 1}^\infty a_nx_n} \leq \sum_{n = 1}^\infty |a_n|
\end{equation*}
for any sequence of scalars $(a_n)_{n \in \mathbb N}$ with finite support.
Every Banach space $X$ with $T(X) = 2$ contains an asymptotically isometric
copy of $\ell^1$ \cite{KadetsShepelskaWernerThicknessAlmostDaugavet}*
{implicitly in the proof of Propositions 3.2 and 3.4}. So Theorem
\ref{LEmbedded} gives another proof of the fact that every non-reflexive
subspace of $L^1[0,1]$ or more generally every non-reflexive subspace of an
$L$-embedded space fails the fixed point property (cf. \citelist{
\cite{DowlingLennardNonreflexiveSubspaceL1FailsFixedPointProperty}*
{Theorem 1.4}\cite{PfitznerNoteAsymptoticallyIsometricCopiesl1c0}*
{Corollary 4}}).
\section*{Acknowledgements}
This is part of the author's Ph.D. thesis, written under the supervision of
D. Werner at the Freie Universit\"at Berlin.
\begin{bibdiv}
\begin{biblist}
\bib{AlbiacKaltonTopics}{book}{
author={Albiac, Fernando},
author={Kalton, Nigel J.},
title={Topics in Banach Space Theory},
series={Graduate Texts in Mathematics},
volume={233},
publisher={Springer-Verlag},
place={New York},
date={2006},
}
\bib{BilikKadetsShvidkoyWernerNarrowOperatorsDaugavetPrUltraproducts}
{article}{
author={Bilik, Dmitriy},
author={Kadets, Vladimir},
author={Shvidkoy, Roman},
author={Werner, Dirk},
title={Narrow operators and the Daugavet property for ultraproducts},
journal={Positivity},
volume={9},
date={2005},
number={1},
pages={45\ndash 62},
}
\bib{Daugavet}{article}{
author={Daugavet, I. K.},
title={A property of completely continuous operators in the space
$C$},
journal={Uspekhi Mat. Nauk},
volume={18},
date={1963},
number={5 (113)},
pages={157\ndash 158},
}
\bib{DowlingLennardNonreflexiveSubspaceL1FailsFixedPointProperty}
{article}{
author={Dowling, P. N.},
author={Lennard, C. J.},
title={Every nonreflexive subspace of $L_1[0,1]$ fails the fixed
point property},
journal={Proc. Amer. Math. Soc.},
volume={125},
date={1997},
number={2},
pages={443\ndash 446},
}
\bib{FoiasSingerPointsDiffusion}{article}{
author={Foia{\cb {s}}, Ciprian},
author={Singer, Ivan},
title={Points of diffusion of linear operators and almost diffuse
operators in spaces of continuous functions},
journal={Math. Z.},
volume={87},
date={1965},
pages={434\ndash 450},
}
\bib{FuchsInfiniteAbelianGroupsI}{book}{
author={Fuchs, L{\'a}szl{\'o}},
title={Infinite Abelian Groups. Vol. I},
series={Pure and Applied Mathematics},
volume={36-I},
publisher={Academic Press},
place={New York},
date={1970},
}
\bib{HareElementaryProofLambdap}{article}{
author={Hare, Kathryn E.},
title={An elementary proof of a result on $\Lambda (p)$ sets},
journal={Proc. Amer. Math. Soc.},
volume={104},
date={1988},
number={3},
pages={829\ndash 834},
}
\bib{HarmandWernerMideale}{book}{
author={Harmand, Peter},
author={Werner, Dirk},
author={Werner, Wend},
title={$M$-Ideals in Banach Spaces and Banach Algebras},
series={Lecture Notes in Mathematics},
volume={1547},
publisher={Springer-Verlag},
place={Berlin},
date={1993},
}
\bib{HolubDaugavetsEquationL1}{article}{
author={Holub, James R.},
title={Daugavet's equation and operators on $L^1(\mu )$},
journal={Proc. Amer. Math. Soc.},
volume={100},
date={1987},
number={2},
pages={295\ndash 300},
}
\bib{KadetsShepelskaWernerQuotientsDaugavetProperty}{article}{
author={Kadets, Vladimir},
author={Shepelska, Varvara},
author={Werner, Dirk},
title={Quotients of Banach spaces with the Daugavet property},
journal={Bull. Pol. Acad. Sci. Math.},
volume={56},
date={2008},
number={2},
pages={131\ndash 147},
}
\bib{KadetsShepelskaWernerThicknessAlmostDaugavet}{article}{
author={Kadets, Vladimir},
author={Shepelska, Varvara},
author={Werner, Dirk},
title={Thickness of the unit sphere, $\ell _1$-types, and the almost
Daugavet property},
journal={Houston J. Math.},
volume={37},
date={2011},
number={3},
pages={867\ndash 878},
}
\bib{KadetsShvidkoySirotkinWernerDaugavetProperty}{article}{
author={Kadets, Vladimir M.},
author={Shvidkoy, Roman V.},
author={Sirotkin, Gleb G.},
author={Werner, Dirk},
title={Banach spaces with the Daugavet property},
journal={Trans. Amer. Math. Soc.},
volume={352},
date={2000},
number={2},
pages={855\ndash 873},
}
\bib{LozanovskiiAlmostIntegralOperators}{article}{
author={Lozanovski{\u \i }, G. Ya.},
title={On almost integral operators in $KB$-spaces},
journal={Vestnik Leningrad. Univ.},
volume={21},
date={1966},
number={7},
pages={35\ndash 44},
}
\bib{LueckingSubspacesAlmostDaugavet}{article}{
author={L{\"u}cking, Simon},
title={Subspaces of almost Daugavet spaces},
journal={Proc. Amer. Math. Soc.},
volume={139},
date={2011},
number={8},
pages={2777\ndash 2782},
}
\bib{MorrisPontryaginDualityStructureLCAGroups}{book}{
author={Morris, Sidney A.},
title={Pontryagin Duality and the Structure of Locally Compact
Abelian Groups},
series={London Mathematical Society Lecture Note Series},
volume={29},
publisher={Cambridge University Press},
place={Cambridge},
date={1977},
}
\bib{PfitznerNoteAsymptoticallyIsometricCopiesl1c0}{article}{
author={Pfitzner, Hermann},
title={A note on asymptotically isometric copies of $l^1$ and $c_0$},
journal={Proc. Amer. Math. Soc.},
volume={129},
date={2001},
number={5},
pages={1367\ndash 1373},
}
\bib{RudinFourierAnalysis}{book}{
author={Rudin, Walter},
title={Fourier Analysis on Groups},
series={Wiley Classics Library},
note={Reprint of the 1962 original},
publisher={John Wiley \& Sons Inc.},
place={New York},
date={1990},
}
\bib{WhitleySizeUnitSphere}{article}{
author={Whitley, Robert},
title={The size of the unit sphere},
journal={Canadian J. Math.},
volume={20},
date={1968},
pages={450\ndash 455},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Conclusion}
In this paper, we have studied mean-payoff parity games, with an application
to finding permissive strategies in parity games with penalties. In particular,
we have established that mean-penalty parity games are not harder to solve than
mean-payoff parity games: for both kinds of games, the value problem is in
${\NP\cap\coNP}$ and can be solved by an exponential algorithm that becomes
pseudo-polynomial when the number of priorities is bounded.
One complication with both kinds of games is that optimal
strategies for \Pl1 require infinite memory, which makes it
hard to synthesise these strategies. A suitable alternative to
optimal strategies is given by \emph{$\epsilon$-optimal} strategies,
which approximate the value of the game up to an error of at most~$\epsilon$.
Since finite-memory $\epsilon$-optimal strategies are
guaranteed to exist \cite{BCHJ09},
a challenge for future work is to modify our algorithms so that
they compute not only the values of the game but also a finite-memory
$\epsilon$-optimal (multi-)\dbr strategy for \Pl1.
\subsubsection*{Acknowledgement}
We thank an anonymous reviewer for pointing out the polynomial reduction
from mean-penalty parity games to mean-payoff parity games, which has
simplified the proof that mean-penalty parity games are in \NP.
\section{Introduction}
Games extend the usual semantics of finite automata from one to several players,
thus making it possible to model interactions between agents acting on the
progression of
the automaton. This has proved very useful in computer science, especially for
the formal verification of open systems interacting with their
environment~\cite{Tho02}. In~this setting, the aim is to synthesise a
controller under which the system behaves according to a
given specification, whatever the environment does.
Usually, this is modelled as a game between two players:
Player~1 represents the controller and Player~2 represents the environment.
The goal is then to find a~\emph{winning strategy} for Player~1, \ie
a~recipe stating how the system should react to any possible action of
the environment, in order to meet its specification.
In this paper, we consider \emph{multi-strategies} (or
\emph{non-deterministic strategies}, \cf~\cite{BJW02,BDMR09}) as a
generalisation of strategies: while strategies select only one possible action
to be played in response to the behaviour of the environment, multi-strategies
can retain several possible actions. Allowing several moves provides a~way to
cope with errors (\eg, actions being disabled for a short period, or timing
imprecisions in timed games).
Another quality of multi-strategies is their ability to be combined
with other multi-strategies, yielding a refined multi-strategy, which
is ideally winning for all of the original specifications.
This offers a modular approach for solving games.
Classically, a strategy is more \emph{permissive} than another one if it
allows more behaviours. Under this notion, there does not need to exist
a most permissive winning strategy~\cite{BJW02}. Hence, we~follow
a~different approach, which is of a~quantitative nature: we provide
a~\emph{measure} that specifies \emph{how} permissive a given multi-strategy
is.
In~order to do so, we~consider \emph{weighted games}, where each edge is
equipped with a weight, which we treat as a \emph{penalty} that is
incurred when disallowing this edge. The penalty of a multi-strategy
is then defined to be the average sum of penalties incurred in each step
(in the limit). The lower this penalty is, the more permissive is the
given multi-strategy. Our~aim is to find one of the most permissive
multi-strategies achieving a given objective.
We deal with multi-strategies by transforming a game with penalties into a
\emph{mean-payoff game}~\cite{EM79,ZP96} with classical (deterministic)
strategies.
A~move in the latter game corresponds to a set of moves in the former, and is
assigned a (negative) \emph{reward} depending on the penalty of the
original move. The~penalty of a multi-strategy in the original game
equals the opposite of the payoff achieved by the corresponding strategy
in the mean-payoff game.
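As a first illustration of this correspondence (anticipating the formal
construction): if a state of \Pl1 has exactly two outgoing edges, carrying
penalties $p_1$ and~$p_2$, then the mean-payoff game offers three moves at
this state, one for each non-empty set of retained edges, with reward $0$
(both edges retained), $-p_2$ (only the first edge retained) and $-p_1$
(only the second edge retained).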
In~previous work, Bouyer et al.~\cite{BDMR09} introduced the notion of
penalties and showed how to compute permissive strategies \wrt reachability
objectives. We extend the study of~\cite{BDMR09} to parity
objectives. This is a significant extension because parity
objectives can express infinitary specifications.
Using the above transformation, we~reduce the problem of finding
a most permissive strategy in a parity game with penalties to that of
computing an
optimal strategy in a \emph{mean-payoff parity game}, which combines a
mean-payoff objective with a parity objective.
While mean-payoff parity games have already been
studied~\cite{CHJ05,BCHJ09,CD10}, we~propose a new proof that these
games are determined and that both players have optimal strategies.
Moreover, we prove that the second player does not only have an
optimal strategy with finite memory, but one that uses no memory at all.
Finally, we provide a new algorithm for computing the values of a mean-payoff
parity game, which is faster than the best known algorithms
for this problem; the running time is exponential in the number of priorities
and polynomial in the size of the game graph and the largest absolute weight.
In the second part of this paper, we present our results on parity games
with penalties. In particular, we prove the existence of most permissive
multi-strategies, and we show that the existence of a multi-strategy
whose penalty is less than a given threshold can be decided in
${\NP\cap\coNP}$. Finally, we adapt our deterministic algorithm
for mean-payoff parity games to parity games with penalties. Our
algorithm computes the penalties of a most permissive multi-strategy
in time exponential in the number of priorities and polynomial in the
size of the game graph and the largest penalty.
\iffull\else
Due to space restrictions, most proofs are omitted in this
extended abstract; they can be found in the full version of
this paper \cite{rr-lsv-11-17}.
\fi
\subsubsection{Related work}
Penalties as we use them were defined in~\cite{BDMR09}. Other notions of
permissiveness have been defined in~\cite{BJW02,PR05}, but these notions
have the drawback that a most permissive strategy might not exist.
Multi-strategies have also been used for different purposes in~\cite{Lut08}.
The parity condition goes back to~\cite{EJ91,Mos91} and is fundamental
for verification.
Parity games admit optimal memoryless strategies for both players,
and the problem of deciding the winner is in ${\NP\cap\coNP}$.
As of this writing, it is not known whether parity games can be
solved in polynomial time; the best known algorithms run in time
polynomial in the size of the game graph but exponential in the number
of priorities.
Another fundamental class of games are games with quantitative objectives.
Mean-payoff games, where the aim is
to maximise the average weight of the transitions taken in a play,
are also in $\NP\cap\coNP$ and admit memoryless
optimal strategies \cite{EM79,ZP96}. The same is true for
\emph{energy games}, where the aim is to always keep the sum of the weights
above a given threshold \cite{CdAHS03,BFLMS08}.
In~fact, parity games can easily be reduced to mean-payoff
or energy games~\cite{Jurdzinski98}.
Finally, several game models mixing several qualitative or quantitative
objectives have recently appeared in the literature: apart from mean-payoff
parity games, these include generalised parity games~\cite{CHP07},
energy parity games~\cite{CD10} and lexicographic mean-payoff (parity)
games~\cite{BCHJ09} as~well~as generalised energy and
mean-payoff games~\cite{CDHR10}.
\section{Mean-payoff parity games}
\label{sec-mpp}
\iffull
In this first part of the paper, we show that mean-payoff parity games
are determined, that both players have
optimal strategies, that for \Pl2 even memoryless strategies suffice,
and that the value problem for mean-payoff parity games is in $\NP\cap\coNP$.
\else
In this section, we establish that mean-payoff parity games
are determined, that both players have
optimal strategies, that for \Pl2 even memoryless strategies suffice,
and that the value problem for mean-payoff parity games is in $\NP\cap\coNP$.
\fi
Furthermore, we present a deterministic algorithm which computes
the values in time exponential in the number of priorities, and runs
in pseudo-polynomial time
when the number of priorities is bounded.
\iffull
\subsection{Definitions}
\fi
Formally, a \emph{mean-payoff parity game} is a tuple $\G=(G,\col)$, where
$G$~is a~weighted game graph, and $\col\colon \V{all}\to\bbN$ is a priority
function assigning a \emph{priority} to every state.
A play $\rho=\rho(0)\rho(1)\cdots$ is \emph{parity-winning} if the minimal
priority occurring infinitely often in $\rho$ is even, \ie, if
$\min\{\col(q)\mid q\in\Inf(\rho)\}\equiv 0\pmod 2$. All notions that we
have defined for weighted game graphs carry over to mean-payoff parity games.
In~particular, a~play of~$\G$ is just a play of~$G$ and a strategy for \Pli
in~$\G$ is nothing but a strategy for \Pli in~$G$. Hence, we write
$\Out^{\G}(\sigma,q)$ for $\Out^{G}(\sigma,q)$, and so on. As for
weighted games graphs, we often omit the superscript if
$\G$~is clear from the context.
Finally, for a mean-payoff parity game~$\G=(G,\col)$ and a subarena~$S$
of~$G$, we~write $\G\restrict S$ for the mean-payoff parity game
$(G\restrict S, \col\restrict S)$.
We say that a mean-payoff parity game $\G=(G,\col)$ is a \emph{mean-payoff
game} if $\col(q)$ is even for all $q\in\V{all}$. In particular, given
a weighted game graph~$G$, we~obtain a mean-payoff game by assigning
priority~$0$ to all states. We denote this game by~$(G,0)$.
\iffull
If $\col(\V{all})\subseteq\{0,1\}$, then we say that $\G$~is a
\emph{mean-payoff B\"uchi game}; if $\col(\V{all})\subseteq\{1,2\}$, we
call it a \emph{mean-payoff co-B\"uchi game}. Hence, in a B\"uchi game \Pl1
needs to visit the set $\col^{-1}(0)$ infinitely often, whereas in a
co-B\"uchi game he has to visit the set $\col^{-1}(1)$ only
finitely often.
\fi
\iffull
For a play $\rho$ of~$\G$, we define its \emph{payoff} as
\[
\payoff^{\G}(\rho)=\begin{cases}
\displaystyle\liminf_{n\to\infty}\payoff^{\G}_n(\rho)
& \text{if $\rho$ is parity-winning,} \\
-\infty & \text{otherwise,}
\end{cases}
\]
where for $n\in\bbN$
\[
\payoff^{\G}_n(\rho)=\begin{cases}
\displaystyle\frac{1}{n}\sum^{n-1}_{i=0}\weight(\rho(i),\rho(i+1))
& \text{if $n>0$,} \\
-\infty & \text{if $n=0$.}
\end{cases}
\]
\else
For a play $\rho$ of a mean-payoff parity game $\G$ that is parity-winning,
its \emph{payoff} is defined as\pagebreak[0]
\[
\payoff^{\G}(\rho)=\liminf_{n\to\infty}
\frac{1}{n}\sum^{n-1}_{i=0}\weight(\rho(i),\rho(i+1))\,;\]
if $\rho$~is not parity-winning, we set $\payoff^{\G}(\rho)\coloneqq-\infty$.
\fi
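For instance, if $\rho$~fulfils the parity condition and eventually
alternates between two transitions of weights $1$ and~$3$, then the average
weights converge and $\payoff^{\G}(\rho)=2$; the lower limit in the
definition only matters when the average weights fail to converge.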
If $\sigma$~is a strategy for \Pl1 in~$\G$, we define its \emph{value}
from $q_0\in\V{all}$ as
\iffull
\[\val^{\G}(\sigma,q_0)
=\inf\nolimits_{\tau}\payoff^{\G}(\rho(\sigma,\tau,q_0))
=\inf\{\payoff^{\G}(\rho)\mid\rho\in\Out^{\G}(\sigma,q_0)\},\]
where $\tau$~ranges over all strategies of \Pl2 in~$\G$.
\else
$\val^{\G}(\sigma,q_0)=\inf_{\rho\in\Out^{\G}(\sigma,q_0)}\payoff^{\G}(\rho)$.
\fi
Analogously,
\iffull
the value of a strategy~$\tau$ for \Pl2 from~$q_0$ is defined as
\[\val^{\G}(\tau,q_0)
=\sup\nolimits_{\sigma}\payoff^{\G}(\rho(\sigma,\tau,q_0))
=\sup\{\payoff^\G(\rho)\mid\rho\in\Out^\G(\tau,q_0)\},\]
where $\sigma$~ranges over all strategies of \Pl1 in~$\G$.
\else
the value $\val^{\G}(\tau,q_0)$ of a strategy~$\tau$ for \Pl2 is defined
as the supremum of $\payoff^{\G}(\rho)$ over all $\rho\in\Out^\G(\tau,q_0)$.
\fi
The \emph{lower} and \emph{upper value} of a state~$q_0\in\V{all}$
are defined by
\iffull
\begin{align*}
\lowval^{\G}(q_0)=\sup\nolimits_{\sigma}\val^{\G}(\sigma,q_0)
&& \text{and} &&
\upval^{\G}(q_0)=\inf\nolimits_{\tau}\val^{\G}(\tau,q_0),
\end{align*}
\else
$\lowval^{\G}(q_0)=\sup_{\sigma}\val^{\G}(\sigma,q_0)$ and
$\upval^{\G}(q_0)=\inf_{\tau}\val^{\G}(\tau,q_0)$,
\fi
respectively.
Intuitively, $\lowval^{\G}(q_0)$ and $\upval^{\G}(q_0)$ are the maximal
(respectively minimal) payoff that \Pl1 (respectively \Pl2) can ensure
(in the limit). We~say that a~strategy~$\sigma$ of \Pl1 is \emph{optimal
from~$q_0$} if $\val^{\G}(\sigma,q_0)=\lowval^{\G}(q_0)$. Analogously,
we~call a strategy~$\tau$ of \Pl2 optimal from~$q_0$ if
$\val^{\G}(\tau,q_0)=\upval^{\G}(q_0)$. A~strategy is
\emph{(globally) optimal} if it is optimal from every state
$q\in\V{all}$.
It is easy to see that $\lowval^{\G}(q_0)\leq\upval^{\G}(q_0)$. If
$\lowval^{\G}(q_0)=\upval^{\G}(q_0)$, we say that $q_0$~has a \emph{value},
which we denote by $\val^\G(q_0)$.
\iffull
In the next section, we will see that mean-payoff games are
\emph{determined}, \ie, that every state has a value.
The \emph{value problem} is the following decision problem:
Given a mean-payoff parity game~$\G$ (with integral weights), a
designated state $q_0\in\V{all}$, and a number $x\in\bbQ$, decide
whether $\val^{\G}(q_0)\geq x$.
\fi
\begin{example}\label{ex:inf-mem}
Consider the mean-payoff parity game~$\G$ depicted in \cref{fig:inf-mem},
where a state or an edge is labelled with its priority, respectively
weight; all states belong to \Pl1.
Note that $\val^{\G}(q_1)=1$ since \Pl1 can delay visiting~$q_2$
longer and longer while still ensuring that this vertex is seen
infinitely often. However, there is no finite-memory strategy
that achieves this value.
\iffull\par
Let $\sigma$ be a finite-memory
strategy of \Pl1 in~$\G$, and let $\rho$ be the
unique play of~$\G$ that starts in~$q_1$ and is consistent
with~$\sigma$. Assume furthermore that $\rho$~visits~$q_2$
infinitely often (otherwise $\val^{\G}(\sigma,q_1)=-\infty$). Then
$\rho={q_1}^{k_1}q_2 {q_1}^{k_2} q_2\cdots$, where each
$k_i\in\bbN\setminus\{0\}$. Since $\sigma$~is a finite-memory strategy,
there exists $m\in\bbN$ such that $k_i\leq m$ for all $i\in\bbN$.
Hence, $\val^{\G}(\sigma,q_1)=\payoff(\rho)\leq m/(m+1)<1$.
\fi
\end{example}
\begin{figure}
\centering
\input{fig_inf-mem.tex}
\caption{\label{fig:inf-mem}A mean-payoff parity game for which infinite
memory is necessary}
\end{figure}
\iffull
\subsection{Strategy complexity}
\fi
\iffull
It follows from Martin's determinacy theorem \cite{Martin75} that
mean-payoff parity games are determined.
\else
It follows from Martin's determinacy theorem \cite{Martin75} that
mean-payoff parity games are determined, \ie, that every state has
a value.
\fi
Moreover, Chatterjee et
al.~\cite{CHJ05} gave an algorithmic proof for the existence of optimal
strategies. Finally, it can be shown that for every
$x\in\bbR\cup\{-\infty\}$ the set
${\{\rho\in\V{}^\omega\mid\payoff(\rho)\geq x\}}$ is closed under
\emph{combinations}. By Theorem~4 in \cite{Kopczynski06}, this property
implies that \Pl2 even has a memoryless optimal strategy.
\iffull
We give here a purely inductive proof of these facts that does not
rely on Martin's theorem.
We start by proving that \Pl1 has an optimal strategy in games where
\Pl2 is absent.
\begin{lemma}
\label{lem:mpp-pl1}
Let $\G$ be a mean-payoff parity game with $\V2=\emptyset$.
Then \Pl1 has an optimal strategy in~$\G$.
\end{lemma}
\begin{proof}
It suffices to construct for each $q_0\in\V{}$ a strategy~$\sigma$ with
$\val^{\G}(\sigma,q_0)\geq\val^{\G}(q_0)$.
If $\val^{\G}(q_0)=-\infty$, we can choose an arbitrary
strategy~$\sigma$. Otherwise, by the definition of $\val^{\G}(q_0)$, for
each $\epsilon>0$ there exists a play $\rho_\epsilon\in\Out^\G(q_0)$ with
$\payoff(\rho_\epsilon)\geq\val^{\G}(q_0)-\epsilon$. Consider the sets
$\Inf(\rho_\epsilon)$ of states occurring infinitely often
in~$\rho_\epsilon$. Since there are only finitely many such sets, we
can find a set $P\subseteq\V{all}$ such that for each $\epsilon>0$
there exists $0<\epsilon'<\epsilon$ with $P=\Inf(\rho_{\epsilon'})$.
Let $q_{\min}\in P$ be a vertex of lowest priority. (This priority must be
even since each~$\rho_\epsilon$ fulfils the parity condition).
Let $\sigma_1$ be an optimal memoryless strategy in the mean-payoff
game $\G_P=(G\restrict P,0)$ (the strategy~$\sigma_1$ just leads the
play to a simple cycle with maximum average weight),
and let $\sigma_2$ be the memoryless attractor strategy in the game~$\G_P$
that ensures a visit to~$q_{\min}$ from all states $q\in P$; we
extend both strategies to a strategy in~$\G$ by combining them with a
memoryless attractor strategy for~$P$. (In particular, $\sigma_2$~enforces
a visit to~$q_{\min}$ from~$q_0$.)
Note that $\val^{\G_P}(q)\geq\val^{\G}(q_0)$ for all $q\in P$
since each of the plays~$\rho_{\epsilon'}$ visits each vertex in~$P$
and has payoff $\geq\val^{\G}(q_0)-\epsilon'$.
\Pl1's optimal strategy~$\sigma$ is played in rounds: in the $i$th
round, \Pl1 first forces a visit to~$q_{\min}$ by playing according
to~$\sigma_2$; once $q_{\min}$~has been visited, \Pl1 plays
$\sigma_1$ for $i$~steps before proceeding to the next round.
Note that $\val^{\G_P}(\sigma,q_{\min})=
\val^{\G_P}(\sigma_1,q_{\min})$. Moreover, the unique play
$\rho\in\Out^{\G}(\sigma,q_0)$ satisfies $q_{\min}\in\Inf(\rho)\subseteq P$
and therefore fulfils the parity condition. To sum up, we have
$\val^{\G}(\sigma,q_0)=
\val^{\G}(\sigma,q_{\min})=
\val^{\G_P}(\sigma,q_{\min})=
\val^{\G_P}(\sigma_1,q_{\min})=
\val^{\G_P}(q_{\min})\geq\val^{\G}(q_0)$.\qed
\end{proof}
Using \cref{lem:mpp-pl1}, we can prove that mean-payoff parity games are not
only determined, but also that \Pl1 has an optimal strategy and that \Pl2 has
a memoryless optimal strategy.
We use the \emph{loop factorisation} technique (\cf~\cite{Zielonka04}):
Let $\gamma$ be a play prefix and let $\hat{q}\in\V{all}$.
The \emph{loop factorisation of~$\gamma$ relative to~$\hat{q}$} is the unique
factorisation of the form $\gamma=\gamma_0\gamma_1\cdots\gamma_l$, where
$\gamma_0$ does not contain $\hat{q}$, and each factor $\gamma_i$,
$1\leq i\leq l$,
is of the form $\gamma_i=\hat{q}\cdot\gamma_i'$ where $\gamma_i'$ does not
contain~$\hat{q}$.
Analogously, for a play $\rho$ which has infinitely many occurrences
of~$\hat{q}$ the \emph{loop factorisation of~$\rho$ relative to~$\hat{q}$} is
the unique factorisation $\rho=\gamma_0\gamma_1\cdots$ where each $\gamma_i$
has the same properties as in the above case.
For a state~$\hat{q}$ with $m$~successors,
$\hat{q}E=\{q_1,\ldots,q_m\}$,
we define an operator $\pi_i\colon\V{all}^*\to\V{all}^*$ for each
$1\leq i\leq m$ by setting
\[
\pi_i(\gamma)\coloneqq\begin{cases}
\gamma & \text{if either $\gamma=\hat{q}q_i\gamma'$ for some $\gamma'\in\V{all}^*$ or $\gamma=q_i=\hat{q}$,} \\
\epsilon & \text{otherwise.}
\end{cases}
\]
The operator~$\pi_i$ induces another operator $\Pi_i\colon
\V{all}^*\to\V{all}^*$ by setting
\[\Pi_i(\gamma)=\pi_i(\gamma_0)\pi_i(\gamma_1)\cdots\pi_i(\gamma_l),\]
where $\gamma=\gamma_0\gamma_1\cdots\gamma_l$ is the loop factorisation
of~$\gamma$ relative to~$\hat{q}$. The operator~$\Pi_i$~operates on play
prefixes, but it can easily be extended to operate on infinite plays with
infinitely many occurrences of~$\hat{q}$.
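As an illustration, suppose that $\hat{q}E=\{q_1,q_2\}$ and that
$\gamma=(\hat{q}q_1\alpha)(\hat{q}q_2\beta)(\hat{q}q_1\delta)$ with
$\alpha,\beta,\delta\in(\V{all}\setminus\{\hat{q}\})^*$. The three bracketed
factors constitute the loop factorisation of~$\gamma$ relative to~$\hat{q}$
(here $\gamma_0=\epsilon$), and
\[\Pi_1(\gamma)=\hat{q}q_1\alpha\,\hat{q}q_1\delta
\qquad\text{and}\qquad
\Pi_2(\gamma)=\hat{q}q_2\beta\,;\]
thus $\Pi_i$~extracts from~$\gamma$ precisely those loops in which the play
moved from~$\hat{q}$ to~$q_i$.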
\else
In the full version of this paper~\cite{rr-lsv-11-17},
we~give a purely inductive proof of determinacy and the
existence of (memoryless) optimal strategies.
We thus have the following theorem.
\fi
\begin{theorem}
\label{thm:mpp-main}
Let $\G$ be a mean-payoff parity game.
\begin{enumerate}
\item $\G$~is determined;
\item \Pl1 has an optimal strategy in $\G$;
\item \Pl2 has a memoryless optimal strategy in $\G$.
\end{enumerate}
\end{theorem}
\iffull
\begin{proof}
We proceed by an induction over the size of
$S\coloneqq\{q\in\V2\mid \abs{qE}>1\}$, the set of all
\Pl2 states with more than one successor.
If $S=\emptyset$, all statements follow from \cref{lem:mpp-pl1}.
Let 1.--3.\ be fulfilled for all games with $\abs{S}<n$ and let $\G=(G,\col)$
be a mean-payoff parity game with $\abs{S}=n$. We prove that the statements
also hold for~$\G$. Let $\hat{q}\in S$ with $\hat{q}E=\{q_1,\ldots,q_m\}$.
For each $1\leq j\leq m$, we define a new game $\G_j=(G_j,\col)$ by setting
$E_j=E\setminus(\{\hat{q}\}\times \V{all})\cup\{(\hat{q},q_j)\}$, and
$G_j=(\V1,\V2,E_j,\weight\restriction E_j)$.
Note that the induction hypothesis applies to each~$\G_j$.
\Wlg assume that $\val^{\G_1}(\hat{q})\leq\val^{\G_j}(\hat{q})$ for all
$1\leq j\leq m$.
We will construct a memoryless strategy~$\tau$ for \Pl2 and a strategy~$\sigma$
for \Pl1 such that $\val^{\G}(\tau,q_0)\leq\val^{\G_1}(q_0)$ and
$\val^{\G}(\sigma,q_0)\geq\val^{\G_1}(q_0)$ for every $q_0\in\V{all}$.
Hence,
\begin{align*}
\val^{\G_1}(q_0) \leq \val^{\G}(\sigma,q_0) \leq \lowval^{\G}(q_0) \leq \upval^{\G}(q_0) \leq \val^{\G}(\tau,q_0) \leq \val^{\G_1}(q_0),
\end{align*}
and all these numbers are equal. In particular, we have
$\val^{\G}(q_0)=\lowval^{\G}(q_0)=\upval^{\G}(q_0)$,
$\val^{\G}(\sigma,q_0)=\val^{\G}(q_0)$ and $\val^{\G}(\tau,q_0)
=\val^{\G}(q_0)$,
which proves 1.--3.
By the induction hypothesis, \Pl2 has a memoryless optimal strategy~$\tau$
in~$\G_1$.
Clearly, $\tau$~is also a memoryless strategy for \Pl2 in~$\G$, and
$\val^{\G}(\tau,q_0)=\val^{\G_1}(\tau,q_0)=\val^{\G_1}(q_0)$ for all
$q_0\in\V{all}$.
It remains to construct a strategy~$\sigma$ for \Pl1 in~$\G$ such that
$\val^{\G}(\sigma,q_0)\geq\val^{\G_1}(q_0)$ for all $q_0\in\V{all}$.
First, we devise a strategy~$\hat{\sigma}$ such that
$\val^{\G}(\hat{\sigma},\hat{q})\geq\val^{\G_1}(\hat{q})$.
If $\val^{\G_1}(\hat{q})=-\infty$, we can take an arbitrary strategy.
Hence, assume that $\val^{\G_1}(\hat{q})$ is finite.
By the induction hypothesis, for each $j=1,\ldots,m$ there exists a
strategy~$\sigma_j$ for \Pl1 in~$\G_j$ with
$\val^{\G_j}(\sigma_j,\hat{q})=\val^{\G_j}(\hat{q})$.
We define $\hat{\sigma}$ to be the \emph{interleaving strategy}, given by
\begin{align*}
\hat{\sigma}(\gamma)=\hat{\sigma}(\gamma_0\cdots\gamma_l)=
\begin{cases}
\sigma_1(\Pi_1(\gamma)) & \text{if $\gamma_l=\hat{q}q_1\gamma'$
for some $\gamma'\in\V{all}^*$,} \\
\qquad\vdots & \qquad\vdots \\
\sigma_m(\Pi_m(\gamma)) & \text{if $\gamma_l=\hat{q}q_m\gamma'$
for some $\gamma'\in\V{all}^*$,}
\end{cases}
\end{align*}
for all play prefixes~$\gamma$ whose loop factorisation relative to~$\hat{q}$
equals $\gamma_0\cdots\gamma_l$.
We~claim that $\val^{\G}(\hat{\sigma},\hat{q})\geq\val^{\G_1}(\hat{q})$.
Let $\rho\in\Out^\G(\hat{\sigma},\hat{q})$. If $\rho$~has only finitely many
occurrences of $\hat{q}$, then $\rho$~is equivalent to a play in~$\G_j$ that is
consistent with~$\sigma_j$ for some~$j$. Since
$\val^{\G_j}(\hat{q})\geq\val^{\G_1}(\hat{q})$ and $\sigma_j$ is
optimal, $\payoff(\rho)\geq\val^{\G_1}(\hat{q})$, and we are done.
Otherwise, consider the loop factorisation $\rho=\gamma_0\gamma_1\cdots$ and
set
\[
\Gamma=\{j\in \{1,\ldots,m\}\mid \text{$\gamma_i\cdot\hat{q}$ is a loop in
$\G_j$ for infinitely many~$i\in\bbN$}\}.
\]
Since the mean-payoff parity condition is prefix-independent, we can
assume \wlg that every loop in~$\rho$ is
a loop in~$\G_j$ for some $j\in\Gamma$. For each $j\in\Gamma$, denote by
$\rho_j=\Pi_j(\rho)$ the corresponding play in~$\G_j$. By definition
of~$\hat{\sigma}$, we have $\rho_j\in\Out^{\G_j}(\sigma_j,\hat{q})$ for
each $j\in\Gamma$. Since $\val^{\G_1}(\hat{q})$ is finite
and $\val^{\G_1}(\hat{q})\leq\val^{\G_j}(\hat{q})$, each~$\rho_j$ fulfils
the parity condition. As the minimal priority occurring infinitely often
in~$\rho$ also occurs infinitely often in one~$\rho_j$, this implies that
$\rho$~fulfils the parity condition.
We claim that for each $n>0$, $\payoff_n(\rho)$ is a weighted average of
$\payoff_{n_j}(\rho_j)$ for some $n_j>0$. To see this, consider the
loop factorisation $\gamma'_0\cdots\gamma'_k$ of $\rho[0,n]$. (Note
that $\gamma'_i=\gamma_i$ for all $i<k$.) For each $j\in\Gamma$, set
\[
n_j=\begin{cases}
\abs{\Pi_j(\rho[0,n])} -1 &
\text{if $\gamma'_k$ is a history of~$\G_j$ and either
$\gamma'_k\neq\hat{q}$ or $q_j=\hat{q}$,} \\
\abs{\Pi_j(\rho[0,n])} & \text{otherwise.}
\end{cases}
\]
Intuitively, $n_j$~is the number of transitions in~$\rho[0,n]$ that
correspond to a transition in~$\rho_j$. Hence,
\[\{(\rho(i),\rho(i+1))\mid 0\leq i<n\}=\bigcup_{j\in\Gamma}
\{(\rho_j(i),\rho_j(i+1))\mid 0\leq i<n_j\}.\]
In particular, $\sum_{j\in\Gamma} n_j=n$ and $\sum_{j\in\Gamma}
n_j/n=1$. We have
\begin{align*}
\payoff_n(\rho)
&= \frac{1}{n}\,\sum_{i=0}^{n-1}\weight(\rho(i),\rho(i+1)) \\
\noalign{\pagebreak[1]}
&= \frac{1}{n}\sum_{\substack{j\in\Gamma \\ n_j>0}}\sum_{i=0}^{n_j-1}
\weight(\rho_j(i),\rho_j(i+1)) \\
\noalign{\pagebreak[1]}
&= \sum_{\substack{j\in\Gamma \\ n_j>0}}\frac{n_j}{n}\cdot\frac{1}{n_j}
\sum_{i=0}^{n_j-1}\weight(\rho_j(i),\rho_j(i+1)) \\
\noalign{\pagebreak[1]}
&= \sum_{\substack{j\in\Gamma \\ n_j>0}}
\frac{n_j}{n}\cdot\payoff_{n_j}(\rho_j).
\end{align*}
Since a weighted average is always bounded from below by the minimum element,
we can conclude that
\[\payoff_n(\rho)\geq
\min_{\substack{j\in\Gamma \\ n_j>0}}\payoff_{n_j}(\rho_j)\geq
\min_{j\in\Gamma}\payoff_{n_j}(\rho_j).\]
Taking the lower limit on both sides, we obtain
\begin{align*}
\payoff(\rho)
&= \liminf_{n\to\infty}\payoff_n(\rho) \\
\noalign{\pagebreak[1]}
&\geq\liminf_{n\to\infty}\min_{j\in\Gamma}\payoff_{n_j}(\rho_j) \\
\noalign{\pagebreak[1]}
&=\min_{j\in\Gamma}\liminf_{n\to\infty}\payoff_{n_j}(\rho_j) \\
\noalign{\pagebreak[1]}
&=\min_{j\in\Gamma}\liminf_{n_j\to\infty}\payoff_{n_j}(\rho_j) \\
\noalign{\pagebreak[1]}
&=\min_{j\in\Gamma}\payoff(\rho_j).
\end{align*}
Since each~$\rho_j$ is consistent with~$\sigma_j$ and $\sigma_j$~is
optimal, we have $\payoff(\rho_j)\geq\val^{\G_j}(\hat{q})
\geq\val^{\G_1}(\hat{q})$ for each $j\in\Gamma$ and therefore also $\payoff(\rho)\geq\val^{\G_1}(\hat{q})$. Since this holds for
all $\rho\in\Out^{\G}(\hat{\sigma},\hat{q})$, we can conclude that
$\val^{\G}(\hat{\sigma},\hat{q})\geq\val^{\G_1}(\hat{q})$.
Finally, we construct a strategy~$\sigma$ for \Pl1 in~$\G$ such that
$\val^{\G}(\sigma,q_0)\geq\val^{\G_1}(q_0)$ for all $q_0\in\V{all}$.
Let
\begin{align*}
\sigma(\gamma)=\begin{cases}
\sigma_1(\gamma) & \text{if $\hat{q}$ does not occur in $\gamma$,} \\
\hat{\sigma}(\hat{q}\gamma_2) & \text{if $\gamma=\gamma_1\hat{q}\gamma_2$
with $\gamma_1\in(\V{all}\setminus\{\hat{q}\})^*$.}
\end{cases}
\end{align*}
Then for each play $\rho\in\Out^{\G}(\sigma,q_0)$ where $\hat{q}$ does not
occur, we have $\payoff^\G(\rho)=\payoff^{\G_1}(\rho)\geq\val^{\G_1}(\sigma_1,q_0)=
\val^{\G_1}(q_0)$.
If $\hat{q}$ occurs in at least one play consistent with~$\sigma$, then in
the game~$\G_1$ (where $\sigma_1$ is optimal), we have
$\val^{\G_1}(q_0)=\val^{\G_1}(\sigma_1,q_0)\leq\val^{\G_1}(\hat{q})$.
Hence, for each play $\rho\in\Out^{\G}(\sigma,q_0)$ where $\hat{q}$ occurs
(say at position~$j$), we have
$\payoff^\G(\rho)=\payoff^{\G}(\rho[j,\infty))\geq
\val^{\G}(\hat{\sigma},\hat{q})\geq\val^{\G_1}(\hat{q})\geq\val^{\G_1}(q_0)$.
Altogether we have $\payoff^\G(\rho)\geq\val^{\G_1}(q_0)$ for every play
$\rho\in\Out^{\G}(\sigma,q_0)$ and therefore
$\val^{\G}(\sigma,q_0)\geq\val^{\G_1}(q_0)$.\qed
\end{proof}
\fi
\iffalse
\begin{proof}[Sketch]
We use a technique that has already been employed
several times to prove the existence of
(memoryless) optimal strategies (see \eg \cite{Zielonka04,CD10}).
The idea is to proceed via an induction over the number of \Pl2 states
that have more than one successor; \cref{lem:mpp-pl1} establishes the
base case. Now, if we have a state $\hat{q}\in\V{2}$ with successors
$q_1,\ldots,q_m$, where $m\geq 2$, we can build $m$~new games
$\G_1,\ldots,\G_m$, where $\G_i$ results from~$\G$ by removing
all transitions out of~$\hat{q}$ except the one to~$q_i$. By the
induction hypothesis, these games are determined, and \Pl1 has
an optimal strategy $\sigma_i$ in each $\G_i$. \Wlg, we can assume
that $\val^{\G_1}(\hat{q})$ is minimal. By the induction hypothesis,
\Pl2 has a memoryless optimal strategy in~$\G_1$. Clearly,
$\tau$~is also a memoryless strategy in~$\G$ such that
$\val^{\G}(\tau,q)=\val^{\G_1}(q)$ for all $q\in\V{}$, which proves that
$\upval^{\G}(q)\leq\val^{\G_1}(q)$.
We~then construct from $\sigma_1,\dots,\sigma_m$ a strategy~$\sigma$
for \Pl1 such that $\val^{\G}(\sigma,q)\geq\val^{\G_1}(q)$ for all $q\in\V{}$,
which proves that $\lowval^{\G}(q)\geq\val^{\G_1}(q)$ for all $q\in\V{}$.
It~follows that $\lowval^{\G}(q)=\upval^{\G}(q)=\val^{\G_1}(q)$ for
all $q\in \V{}$ and that $\sigma$ and~$\tau$ are optimal.\qed
\end{proof}
\fi
A consequence of the proof of
\iffull
\cref{lem:mpp-pl1,thm:mpp-main}
\else
\cref{thm:mpp-main}
\fi
is that
each value of a~mean-payoff parity game is either $-\infty$ or equals one of
the values of a~mean-payoff game played on the same weighted graph (or
a~subarena of it). Since optimal memoryless strategies exist in
mean-payoff games~\cite{EM79}, the values of a~mean-payoff game with integral
weights are rational numbers of the form~$r/s$ with
$\abs{r}\leq\abs{\V{}}\cdot W$ and $\abs{s}\leq\abs{\V{}}$.
Consequently, this property holds for the (finite) values of a mean-payoff
parity game as well.
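Concretely, when both players fix memoryless strategies in a mean-payoff
game, the resulting play is eventually periodic and traces a simple cycle
$q_1\cdots q_s q_1$ with $s\leq\abs{\V{}}$; its payoff is the mean weight
\[\frac 1 s\Bigl(\weight(q_s,q_1)+\sum^{s-1}_{i=1}\weight(q_i,q_{i+1})\Bigr),\]
a fraction $r/s$ whose numerator~$r$ is a sum of $s$~integral weights, whence
$\abs{r}\leq s\cdot W\leq\abs{\V{}}\cdot W$.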
\iffull
While \cref{ex:inf-mem} demonstrates that an optimal strategy of \Pl1
requires infinite memory in general, this is not the case for
mean-payoff co-B\"uchi games, where both players have memoryless
optimal strategies. This can be seen by applying Theorem~2 of
\cite{GZ04} or by an inductive proof, which we provide here.
\begin{theorem}\label{thm:mpp-buechi-optimal}
Let $\G$ be a mean-payoff co-B\"uchi game. Then \Pl1 has a memoryless
optimal strategy from every state $q_0\in\V{all}$.
\end{theorem}
\begin{proof}
The proof is by induction over the number $\abs{\V{}}=n$ of states in~$\G$.
For $n=1$, the statement is trivially fulfilled.
Now let $n>1$, $q_0\in\V{}$, and assume that the statement is true for all
games with less than $n$~states. Define
$\V{}'=\V{}\setminus\Attr_2(\col^{-1}(1))$. If $\V{}'=\emptyset$, then \Pl2
can force visiting~$\col^{-1}(1)$ infinitely often by playing a
memoryless attractor strategy. Hence, $\val^{\G}(q_0)=-\infty$, and
every memoryless strategy of \Pl1 is optimal.
In the following, assume that $\V{}'\neq\emptyset$.
Consider the game $\G'\coloneqq\G\restrict \V{}'$,
which is a mean-payoff game, and set
\[S\coloneqq\{q\in\V{}'\mid\val^{\G'}(q)\geq\val^{\G}(q_0)\}.\]
Note that $S$~is a trap for \Pl2 both in~$\G'$ and in~$\G$
(since $\V{}'$~is a 2-trap in~$\G$).
We claim that $S\neq\emptyset$.
Towards a contradiction, assume that $S=\emptyset$, \ie,
$\val^{\G'}(q)<\val^{\G}(q_0)$ for all $q\in\V{}'$, and let $\tau$
be an optimal memoryless strategy for \Pl2 in~$\G'$. We extend $\tau$ to
a strategy in~$\G$ by combining it with a memoryless attractor strategy
for $\col^{-1}(1)$ on $\Attr_2(\col^{-1}(1))$. Let
$\rho\in\Out^{\G}(\tau,q_0)$ and $m\coloneqq\max_{q\in\V{}'}\val^{\G'}(q)$.
Either $\rho$ visits $\Attr_2(\col^{-1}(1))$ and therefore also
$\col^{-1}(1)$ infinitely often, in which case
$\payoff(\rho)=-\infty<m$, or
$\rho[i,\infty)$ is a play of $\G'$ for some $i\in\bbN$, in which case
$\payoff(\rho)=\payoff(\rho[i,\infty))\leq\val^{\G'}(\rho(i))\leq m$.
Hence, $\val^{\G}(q_0)\leq\val^{\G}(\tau,q_0)\leq m<\val^{\G}(q_0)$, a
contradiction.
Now, let $\sigma'$ be a memoryless optimal strategy of \Pl1 in~$\G'$.
By the definition of~$S$, we have $\val^{\G'}(\sigma',q)\geq\val^\G(q_0)$
for all ${q\in S}$. Moreover, $\sigma'$~induces a memoryless
strategy~$\sigma_S$ in $\G\restrict S$ such that
$\val^{\G\restrict S}(\sigma_S,q)=\val^{\G'}(\sigma',q)\geq
\val^\G(q_0)$ for all $q\in S$. Let $A=\Attr_1^\G(S)$.
We extend~$\sigma_S$ to a memoryless strategy~$\sigma_A$
in~$\G\restrict A$ by combining it with a memoryless attractor
strategy for~$S$ on~$A\setminus S$. It follows that
$\val^{\G\restrict A}(\sigma_A,q)\geq
\val^\G(q_0)$ for all $q\in\Attr_1(S)$.
If $q_0\in\Attr_1(S)$, we are done. Otherwise, $q_0\in T\coloneqq
\V{all}\setminus A$. Since $S\neq\emptyset$, the game
$\G\restrict T$ has fewer states than~$\G$, and by the induction hypothesis,
\Pl1 has a memoryless optimal strategy~$\sigma_T$ from~$q_0$
in~$\G\restrict T$. Note that, since $T$~is a trap for \Pl1, we~have
$\val^{\G\restrict T}(\sigma_T,q_0)=\val^{\G\restrict T}(q_0)
\geq\val^{\G}(q_0)$.
Let $\sigma$ be the union of $\sigma_A$ and~$\sigma_T$,
which is a memoryless strategy in~$\G$. We claim that $\sigma$~is optimal
from~$q_0$ in~$\G$. Let $\rho\in\Out^{\G}(\sigma,q_0)$. If $\rho$~stays
in~$T$, it is consistent with~$\sigma_T$ and must have
payoff at least $\val^{\G\restrict T}(\sigma_T,q_0)\geq\val^{\G}(q_0)$.
Otherwise, there exists $i\in\bbN$ such that $\rho(i)\in A$ and
$\rho[i,\infty)$ is consistent with~$\sigma_A$, which implies
$\payoff(\rho)=\payoff(\rho[i,\infty))\geq
\val^{\G\restrict A}(\sigma_A,\rho(i))\geq\val^\G(q_0)$.\qed
\end{proof}
\fi
\iffull
\subsection{Computational complexity}
In this section, we prove that the value problem for mean-payoff parity
games lies in $\NP\cap\coNP$. Although this has already been proved by
Chatterjee and Doyen~\cite{CD10}, our proof has the advantage that it
works immediately on mean-payoff parity games,
and not on energy parity games as in \cite{CD10}.
\else
We now turn towards the computational complexity of mean-payoff parity
games.
Formally, the \emph{value problem} is the following decision problem:
Given a~mean-payoff parity game~$\G$ (with integral weights), a
designated state $q_0\in\V{all}$, and a number $x\in\bbQ$, decide
whether $\val^{\G}(q_0)\geq x$. By \cref{thm:mpp-main}, to decide whether
$\val^{\G}(q_0)<x$, we can guess a memoryless strategy~$\tau$ for \Pl2
and check whether $\val^{\G}(\tau,q_0)<x$. It follows from a result of Karp~\cite{Karp78} that the latter check can be carried out in polynomial
time. Hence, the value problem belongs to $\coNP$.
\fi
\iffull
In order to put the value problem for mean-payoff parity games into \coNP,
we first show that the value can be decided in polynomial time in games
where \Pl2 is absent.
\begin{proposition}\label{prop:mpp-pl1-ptime}
The problem of deciding, given a mean-payoff parity game~$\G$ with $\V{2}=\emptyset$, a state $q_0\in\V{all}$, and $x\in\bbQ$,
whether $\val^{\G}(q_0)\geq x$, is in~\PTime.
\end{proposition}
\begin{proof}
Deciding whether $\val^{\G}(q_0)\geq x$ is achieved by
\cref{alg:pl1-value},
\begin{algorithm}
\vspace*{.8ex}
\begin{tabbing}
\hspace*{1em}\=\hspace{1em}\=\hspace{1em}\=\hspace{1em}\=
\hspace{1em}\=\hspace{1em}\= \kill
\emph{Input:} mean-payoff parity game $\G$ with~$\V2=\emptyset$,
$q_0\in\V{}$, $x\in\bbQ$. \\
\emph{Output:} whether $\val^{\G}(q_0)\geq x$. \\[\medskipamount]
$G'=G\restriction\{q\in\V{all}\mid q\text{ is reachable from }q_0\}$ \\
\+\textbf{for each} even $p\in\col(\V{all})$ \textbf{do} \\
$G_p=G'\restriction\{q\in\V{all}\mid \col(q)\geq p\}$ \\
decompose $G_p$ into SCCs \\
\+\textbf{for each} SCC~$C$ of~$G_p$ with $p\in\col(C)$ \textbf{do} \\
compute maximum cycle weight~$w$ in $C$ \\
\-\textbf{if} $w\geq x$ \textbf{then accept} \\
\-\textbf{done} \\
\textbf{done} \\
\textbf{reject}
\end{tabbing}
\vspace*{-2ex}
\caption{\label{alg:pl1-value}A polynomial-time algorithm for
deciding the value of a state in a one-player mean-payoff parity game}
\end{algorithm}
which employs as subroutines Tarjan's linear-time
algorithm~\cite{Cormen-etal09} for SCC decomposition and Karp's
polynomial-time algorithm~\cite{Karp78} for computing the
minimum\slash maximum cycle weight
(\ie the minimum\slash maximum average weight on a cycle) in a given
strongly connected graph.
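For concreteness, let us recall Karp's characterisation of the maximum cycle
weight: fixing a source state~$s$ in a strongly connected graph with
$n$~states and writing $D_k(q)$ for the maximum total weight of a walk of
length exactly~$k$ from~$s$ to~$q$ (with $D_k(q)=-\infty$ if no such walk
exists), the maximum average weight on a cycle equals
\[\max_{\substack{q \\ D_n(q)>-\infty}}\;
\min_{\substack{0\leq k<n \\ D_k(q)>-\infty}}
\frac{D_n(q)-D_k(q)}{n-k}\,,\]
and all values $D_k(q)$ can be computed by dynamic programming in time
$O(n\cdot\abs{E})$; the minimum cycle weight is obtained dually.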
The algorithm is sound: If the algorithm accepts, then there is an even
priority~$p$ and a reachable SCC~$C$ in~$G_p$ with $p\in\col(C)$
that has maximum cycle weight $w\geq x$.
We construct a strategy~$\sigma$ for \Pl1 with $\val^{\G}(\sigma,q_0)=w$.
Let $q\in C$ be a state with priority~$p$. Since $q$~is reachable from~$q_0$ and
$C$~is strongly connected, both $q_0$ and~$C$ lie inside $\Attr_1(\{q\})$.
Let $\sigma_q$ be the memoryless attractor strategy for~$\{q\}$. Now, since
$w$~is the maximum cycle weight in~$C$, there exists a simple cycle
$\gamma=q_1\cdots q_n q_1$ in~$C$ with cycle weight~$w$.
We construct a (memoryless) strategy $\sigma_\gamma$ on~$C$ by setting
$\sigma_\gamma(q_n)=q_1$ and
$\sigma_\gamma(q_i)=q_{i+1}$ for every $1\leq i<n$; this strategy is
extended to the whole game by combining it with an attractor
strategy for $\{q_1,\ldots,q_n\}$.
The strategies~$\sigma_q$ and~$\sigma_\gamma$ are then combined to a
strategy~$\sigma$, which is played in rounds:
in the $i$th round, \Pl1 first forces a visit
to~$\col^{-1}(p)\cap C$ by playing according to~$\sigma_q$; once
$\col^{-1}(p)\cap C$~has been reached, \Pl1 plays~$\sigma_\gamma$
for $i$~steps before proceeding to the next round.
Note that $\sigma$~fulfils the parity condition because $q$~is visited
infinitely often and all other priorities that appear infinitely often obey
$\col(q)\geq p$.
Finally, the payoff of $\rho(\sigma,q_0)$ equals the cycle weight of~$\gamma$,
\ie, $\val^{\G}(q_0)\geq\val^{\G}(\sigma,q_0)=w\geq x$.
The algorithm is complete: Assume that $\val^{\G}(q_0)=v\geq x$
and let $\rho\in\Out^\G(q_0)$ be a play with $\payoff^{\G}(\rho)=v$;
such a play exists due to \cref{lem:mpp-pl1}. Consider the set
$\Inf(\rho)$ and let
$p=\min\col(\Inf(\rho))$ (which is even since $\payoff(\rho)$ is finite).
Since $\Inf(\rho)$ is strongly connected,
$\Inf(\rho)\subseteq C$ for an SCC~$C$ of~$G_p$ with $p\in\col(C)$.
Since optimal memoryless strategies exist in mean-payoff games,
there exists a simple cycle with average weight $\geq v$ in~$C$. Hence the
algorithm accepts.
Since SCC decomposition and maximum cycle weight computation both take
polynomial time, the whole algorithm runs in polynomial time.\qed
\end{proof}
It follows from \cref{thm:mpp-main,prop:mpp-pl1-ptime} that the value problem
for mean-payoff parity games is in \coNP: to~decide whether $\val^{\G}(q_0)<x$,
a nondeterministic algorithm can guess a memoryless strategy~$\tau$ for \Pl2
and check whether $\val^{\G}(\tau,q_0)<x$ in polynomial time.
\fi
\begin{corollary}\label{cor:mpp-conp}
The value problem for mean-payoff parity games is in \coNP.
\end{corollary}
\iffull
Following ideas from~\cite{CD10},
we prove that the value problem is not only in \coNP, but also in \NP.
The core of \cref{alg:mpp-np}
\begin{algorithm}[t]
\vspace*{.8ex}
\begin{tabbing}
\hspace*{1em}\=\hspace{1em}\=\hspace{1em}\=\hspace{1em}\=
\hspace{1em}\=\hspace{1em}\= \kill
\emph{Input:} mean-payoff parity game $\G$, state $q_0\in\V{}$,
$x\in\bbQ$ \\[\medskipamount]
\textbf{guess} 2-trap~$T$ in~$\G$ with $q_0\in T$ \\
$\Verify(T)$\iffull \\ \else; \fi
\textbf{accept} \\[\medskipamount]
\+\textbf{procedure} $\Verify(S)$ \\
\+\textbf{if} $S\neq\emptyset$ \textbf{then} \\
$p\coloneq\min\{\col(q)\mid q\in S\}$ \\
\+\textbf{if} $p$~is even \textbf{then} \\
\textbf{guess} memoryless strategy~$\sigma_{\rmM}$ for \Pl1 in $G\restrict S$ \\
\textbf{if} $\val^{(G\restrict S,0)}(\sigma_{\rmM},q)<x$ for some $q\in S$
\textbf{then reject} \\
$\Verify(S\setminus\Attr_1^{\G\restrict S}(\col^{-1}(p)))$ \-\\
\+\textbf{else} \\
\textbf{guess} 2-trap $T\neq\emptyset$ in
$\G\restrict (S\setminus\Attr_2^{\G\restrict S}(\col^{-1}(p)))$ \\
$\Verify(T)$;
$\Verify(S\setminus \Attr_1^{\G\restrict S}(T))$ \-\\
\textbf{end if} \-\\
\textbf{end if} \-\\
\textbf{end procedure}
\end{tabbing}
\vspace*{-2ex}
\caption{\label{alg:mpp-np}A nondeterministic algorithm for deciding
the value of a state in a mean-payoff parity game}
\end{algorithm}
is the procedure $\Verify$ that on input~$S$ checks whether the
value of all states in the game $\G\restrict S$ is at least~$x$.
If the least priority~$p$ in~$S$ is even, this is witnessed by a
strategy in the mean-payoff game $(G\restrict S,0)$ that ensures
payoff~$\geq x$ and the fact that the values of all states in
the game
$\G\restrict S\setminus\Attr_1^{\G\restrict S}(\col^{-1}(p))$
are greater than~$x$, which we can check by calling $\Verify$
recursively. If, on the other hand, the least priority~$p$ in~$S$
is odd, then $\val^{\G\restrict S}(q)\geq x$ for all $q\in S$ is
witnessed by a 2-trap~$T$ inside
$S\setminus\Attr_2^{\G\restrict S}(\col^{-1}(p))$ such that both
the values in the game $\G\restrict T$ and the values in the
game $\G\restrict S\setminus\Attr_1^{\G\restrict S}(T)$ are
bounded from below by~$x$; the latter two properties can again be
checked by calling $\Verify$ recursively.
The correctness of the algorithm relies on the following two lemmas.
\begin{lemma}\label{lemma:mpp-even}
Let $\G$ be a mean-payoff parity game with least priority~$p$
even, $T=\V{}\setminus\Attr_1(\col^{-1}(p))$, and $x\in\bbR$.
If $\val^{(G,0)}(q)\geq x$ for all $q\in\V{}$ and
$\val^{\G\restrict T}(q)\geq x$ for all $q\in T$, then
$\val^{\G}(q)\geq x$ for all $q\in\V{}$.
\end{lemma}
\begin{proof}
Assume that $\val^{(G,0)}(q)\geq x$ for all $q\in\V{}$ and
$\val^{\G\restrict T}(q)\geq x$ for all $q\in T$, and let $q^*\in\V{}$.
By \cref{thm:mpp-main}, it suffices to show that for every memoryless
strategy~$\tau$ of \Pl2 there exists a strategy~$\sigma$ of \Pl1 such that
$\payoff(\rho(\sigma,\tau,q^*))\geq x$.
Hence, assume that $\tau$~is a memoryless strategy of \Pl2 in~$\G$.
Moreover, let $\sigma_\rmM$~be a memoryless strategy for \Pl1 in~$(G,0)$ with
$\val^{(G,0)}(\sigma_\rmM,q)\geq x$ for all $q\in\V{}$, let $\sigma_T$~be a
strategy for \Pl1 in~$\G\restrict T$ with
$\val^{\G\restrict T}(\sigma_T,q)\geq x$ for all $q\in T$, and let
$\sigma_\rmA$ be a memoryless attractor strategy of \Pl1 on
$\Attr_1(\col^{-1}(p))$ that ensures to reach $\col^{-1}(p)$.
We combine these three strategies to a new strategy~$\sigma$, which is
played in rounds. In the $k$th round, the strategy behaves as follows:
\begin{enumerate}
\item while the play stays inside~$T$, play $\sigma_T$;
\item as soon as the play reaches $\Attr_1(\col^{-1}(p))$,
switch to strategy~$\sigma_\rmA$ and play~$\sigma_\rmA$ until the
play reaches $\col^{-1}(p)$;
\item when the play reaches $\col^{-1}(p)$, play $\sigma_\rmM$ for exactly
$k$~steps and proceed to the next round.
\end{enumerate}
Let $\rho\coloneqq\rho(\sigma,\tau,q^*)$. To complete the proof, we need to
show that $\payoff(\rho)\geq x$.
We distinguish whether $\rho$~visits $\Attr_1(\col^{-1}(p))$ infinitely often
or not.
In the first case, we divide~$\rho$ into $\rho=\gamma_0\gamma_1\gamma_2\cdots$
where each $\gamma_i=\gamma_i^T\gamma_i^\rmA\gamma_i^\rmM$ consists of a
part consistent with~$\sigma_T$ (thus staying inside~$T$), a part consistent
with~$\sigma_\rmA$ (thus staying in $\Attr_1(\col^{-1}(p))$),
and one that starts with a state in~$\col^{-1}(p)$ and is consistent
with~$\sigma_\rmM$.
Since $\tau$~is a memoryless strategy, there can only be $\abs{T}$~many
different~$\gamma_i^T$, and the length of each~$\gamma_i^T$ is bounded by some
constant~$k$.
Since each~$\gamma_i^\rmA$ is consistent with an attractor strategy, the length
of each~$\gamma_i^\rmA$ is bounded by $\abs{\V{}}$.
Hence, the length of $\gamma_i^\rmM$ grows unboundedly while the
length of~$\gamma_i^T\gamma_i^\rmA$ is bounded. Therefore,
$\liminf_{n\to\infty}\payoff_n(\rho)
=\liminf_{n\to\infty}\payoff_n(\gamma_1^\rmM\gamma_2^\rmM\cdots)$.
Since $\val^{(G,0)}(\sigma_\rmM,q)\geq x$ for all $q\in\V{}$ and
priority~$p$ is visited infinitely often, we have
$\payoff(\rho)=\liminf_{n\to\infty}\payoff_n(\rho)\geq x$.
In the second case, $\rho=\gamma\cdot\rho'$, where $\rho'$~is a play of
$\G\restrict T$ that is consistent with~$\sigma_T$. Hence,
$\payoff(\rho)=\payoff(\rho')\geq\val^{\G\restrict T}(\sigma_T,\rho'(0))
\geq x$.\qed
\end{proof}
\begin{lemma}\label{lemma:mpp-odd}
Let $\G$ be a mean-payoff parity game with least priority~$p$
odd, $T=\V{}\setminus\Attr_2(\col^{-1}(p))$, and $x\in\bbR$.
If $\val^{\G}(q)\geq x$ for some $q\in\V{}$, then $T\neq\emptyset$ and
$\val^{\G\restrict T}(q)\geq x$ for some $q\in T$.
\end{lemma}
\begin{proof}
Let $q^*\in\V{}$ be a state with $\val^{\G}(q^*)\geq x$.
If $T=\emptyset$, then $\Attr_2(\col^{-1}(p))=\V{}$ and there is a
memoryless attractor strategy~$\tau$ for \Pl2 in~$\G$ that ensures to
visit~$\col^{-1}(p)$ infinitely often. This implies
$\val^{\G}(\tau,q^*)=-\infty$, a contradiction to
$\val^{\G}(q^*)\geq x$. Thus $T\neq\emptyset$.
Now assume that $\val^{\G\restrict T}(q)<x$ for all $q\in T$, and let $\tau$
be a (\wlg memoryless) strategy for \Pl2 in~$\G\restrict T$ that ensures
$\val^{\G\restrict T}(\tau,q)<x$ for all $q\in T$.
We~extend~$\tau$ to a strategy~$\tau'$ in~$\G$ by combining it with a
memoryless
attractor strategy for $\col^{-1}(p)$ on the states in $\V{}\setminus T$.
Let $\rho\in\Out^{\G}(\tau',q^*)$. Either $\rho$~reaches $\col^{-1}(p)$
infinitely often, in which case $\payoff^{\G}(\rho)=-\infty$, or there is a
position~$i$ from which onwards $\rho$~stays in~$T$, in which case
$\payoff^{\G}(\rho)=\payoff^{\G\restrict T}(\rho[i,\infty))\leq
\val^{\G\restrict T}(\tau,\rho(i))$. In any case,
$\val^{\G}(\tau',q^*)\leq\max_{q\in T}\val^{\G\restrict T}(\tau,q)<x$,
a~contradiction to $\val^{\G}(q^*)\geq x$.\qed
\end{proof}
Finally, \cref{alg:mpp-np} runs in polynomial time because
the value of a memoryless strategy in a mean-payoff game can be
computed in polynomial time \cite{ZP96} and because recursive
calls are limited to disjoint subarenas.
\begin{theorem}\label{thm:mpp-np}
The value problem for mean-payoff parity games is in \NP.
\end{theorem}
\begin{proof}
We claim that \cref{alg:mpp-np} is a nondeterministic polynomial-time
algorithm for the value problem. To analyse the running time, denote by
$T(n)$ the worst-case running time of the procedure $\Verify$ on a
subarena~$S$ of size~$n$. Since the value of a memoryless strategy for
\Pl1 in a mean-payoff game can be computed in polynomial time
\cite{ZP96} and attractor computations take linear time, there exists a
polynomial $f\colon\bbN\times\bbN\to\bbN$ such that the numbers
$T(n)$ satisfy the following recurrence:
\begin{align*}
T(1) &\leq f(\size{G},\size{x}), \\
T(n) &\leq \max_{1\leq k<n} T(k)+T(n-k)+f(\size{G},\size{x})\,.
\end{align*}
Solving this recurrence, we get that
$T(n)\leq (2n-1)f(\size{G},\size{x})$ for all $n\geq 1$
(by induction: writing $f$ for $f(\size{G},\size{x})$, we have
$(2k-1)f+(2(n-k)-1)f+f=(2n-1)f$),
again a polynomial. Consequently, the algorithm runs in polynomial time.
To prove the correctness of the algorithm, we need to prove that the
algorithm is both sound and complete.
We start by proving soundness: If the
algorithm accepts its input, then $\val^{\G}(q_0)\geq x$. In fact, we
prove the following stronger statement. We say that $\Verify(S)$
\emph{succeeds} if the procedure terminates without rejection
(for at least one sequence of guesses).
\begin{claim*}
Let $S\subseteq\V{}$.
If $S$~is a subarena of~$\G$ and $\Verify(S)$ succeeds,
then $\val^{\G\restrict S}(q)\geq x$ for all $q\in S$.
\end{claim*}
Assume that the claim is true and that the algorithm accepts its input.
Then there exists a 2-trap~$T$ with $q_0\in T$ such that
$\val^{\G\restrict T}(q)\geq x$ for all $q\in T$. Since $T$~is a 2-trap,
it follows that $\val^{\G}(q_0)\geq x$.
To prove the claim, we proceed by induction over the cardinality
of~$S$. If $\abs{S}=0$, the claim is trivially fulfilled. Hence,
assume that $\abs{S}>0$ and that the claim is true for all sets
$S'\subseteq\V{}$ with $\abs{S'}<\abs{S}$. Let
$p=\min\{\col(q)\mid q\in S\}$. We distinguish two cases:
\begin{enumerate}
\item The minimal priority~$p$ is even. Since
$\Verify(S)$ succeeds, there exists
a memoryless strategy~$\sigma_{\rmM}$ of \Pl1 in $\G\restrict S$ such
that $\val^{(G\restrict S,0)}(\sigma_{\rmM},q)\geq x$ for all $q\in S$,
\ie $\val^{(G\restrict S,0)}(q)\geq x$ for all $q\in S$.
Let $A=\Attr_1^{\G\restrict S}(\col^{-1}(p))$. Since $\Verify(S)$ succeeds, so
does $\Verify(S\setminus A)$. Hence, by the induction hypothesis,
$\val^{\G\restrict (S\setminus A)}(q)\geq x$ for all $q\in S\setminus A$.
By \cref{lemma:mpp-even}, these two facts imply that
$\val^{\G\restrict S}(q)\geq x$ for all $q\in S$.
\item The minimal priority~$p$ is odd. Since $\Verify(S)$ succeeds,
there exists a 2-trap $T\neq\emptyset$ in
$\G\restrict(S\setminus\Attr_2^{\G\restrict S}(\col^{-1}(p)))$ such that
both $\Verify(T)$ and $\Verify(S\setminus\Attr_1^{\G\restrict S}(T))$ succeed.
Let $A=\Attr_1^{\G\restrict S}(T)$. By the induction hypothesis, \Pl1
has a strategy~$\sigma_T$ in $\G\restrict T$ such that
$\val^{\G\restrict T}(\sigma_T,q)\geq x$ for all $q\in T$ and a
strategy~$\sigma_S$ in
$\G\restrict S\setminus A$ such that
$\val^{\G\restrict S\setminus A}(\sigma_S,q)\geq x$
for all $q\in S\setminus A$. We extend~$\sigma_T$ to a
strategy~$\sigma_A$ in $\G\restrict A$ such that
$\val^{\G\restrict A}(\sigma_A,q)\geq x$ for all $q\in A$ by
combining~$\sigma_T$ with a suitable attractor strategy.
By playing~$\sigma_S$ as long as the play
stays in $S\setminus A$ and switching
to~$\sigma_A$ as soon as the
play enters~$A$, \Pl1 can ensure that
$\val^{\G\restrict S}(q)\geq x$ for all $q\in S$.
\end{enumerate}
Finally, we prove that the algorithm is complete:
if $\val^{\G}(q_0)\geq x$, then the algorithm accepts the
input $\G,q_0,x$.
Since the set $\{q\in\V{}\mid \val^{\G}(q)\geq x\}$ is a trap for
\Pl2, it suffices to prove the following claim.
\begin{claim*}
Let $S\subseteq\V{}$. If $S$~is a subarena of~$\G$ and
$\val^{\G\restrict S}(q)\geq x$ for all $q\in S$, then
$\Verify(S)$ succeeds.
\end{claim*}
As with the previous claim, we prove this claim by induction over the
cardinality of~$S$. Clearly, $\Verify(S)$ succeeds if $\abs{S}=0$.
Hence, assume that $\abs{S}>0$ and that the claim is correct for
all sets $S'\subseteq\V{all}$ with $\abs{S'}<\abs{S}$. Moreover,
assume that $S$~is a subarena of~$\G$ such that
$\val^{\G\restrict S}(q)\geq x$ for all $q\in S$ (otherwise the
claim is trivially fulfilled). Again, we distinguish whether
$p\coloneq\min\{\col(q)\mid q\in S\}$ is even or odd.
\begin{enumerate}
\item The minimal priority~$p$ is even. Since $\val^{\G\restrict S}(q)\geq x$
for all $q\in S$, also $\val^{(G\restrict S,0)}(q)\geq x$ for all $q\in S$,
which is witnessed by a memoryless strategy~$\sigma_{\rmM}$. Let
$A=\Attr_1^{\G\restrict S}(\col^{-1}(p))$. Since $S\setminus A$~is a
1-trap and $\val^{\G\restrict S}(q)\geq x$ for all $q\in S$, we must also
have $\val^{\G\restrict (S\setminus A)}(q)\geq x$ for all $q\in S\setminus A$.
Hence, by the induction hypothesis, $\Verify(S\setminus A)$ succeeds.
Therefore, in order to succeed, $\Verify(S)$ only needs to guess a suitable
memoryless strategy~$\sigma_{\rmM}$.
\item The minimal priority~$p$ is odd.
Let $A\coloneq\Attr_2^{\G\restrict S}(\col^{-1}(p))$.
We claim that $\Verify(S)$ succeeds if it guesses
$T\coloneq\{q\in S\setminus A\mid\val^{\G\restrict (S\setminus A)}(q)\geq x\}$.
By \cref{lemma:mpp-odd}, the set $T$~is nonempty. Note that $T$~is a
2-trap and that
$\val^{\G\restrict T}(q)\geq x$ for all $q\in T$.
Hence, by the induction hypothesis, $\Verify(T)$~succeeds.
It remains to be shown that $\Verify(S\setminus\Attr_1^{\G\restrict S}(T))$
succeeds as well. Note that
$S\setminus\Attr_1^{\G\restrict S}(T)$~is a 1-trap, which together with
$\val^{\G\restrict S}(q)\geq x$ for all $q\in S$ implies that
$\val^{\G\restrict (S\setminus\Attr_1^{\G\restrict S}(T))}(q)\geq x$
for all $q\in S\setminus\Attr_1^{\G\restrict S}(T)$.
Hence, the induction hypothesis yields that
$\Verify(S\setminus\Attr_1^{\G\restrict S}(T))$ succeeds.\qed
\end{enumerate}
\end{proof}
\else
Via a reduction to \emph{energy parity games}, Chatterjee and Doyen
\cite{CD10} recently proved that the value problem for mean-payoff parity
games is in \NP. Hence, these games do not seem harder than parity
or mean-payoff games, which also come with a value problem in
$\NP\cap\coNP$.
\begin{theorem}[Chatterjee-Doyen]\label{thm:mpp-np}
The value problem for mean-payoff parity games is in \NP.
\end{theorem}
\fi
\iffull
\subsection{A deterministic algorithm}
\else
\subsubsection{A deterministic algorithm}
\fi
\label{sec-det}
\iffull
In this section, we present
\else
We now present
\fi
a deterministic algorithm for computing the values of
a~mean-payoff parity game, which runs faster than all
known algorithms for solving these games. Algorithm~$\Solve$
\begin{algorithm}[t]
\vspace*{.8ex}
\begin{tabbing}
\hspace*{1em}\=\hspace{1em}\=\hspace{1em}\=\hspace{1em}\=
\hspace{1em}\=\hspace{1em}\= \kill
\textbf{Algorithm} $\Solve(\G)$ \\[\medskipamount]
\emph{Input:} mean-payoff parity game $\G=(G,\col)$ \\
\emph{Output:} $\val^\G$ \\[\medskipamount]
\textbf{if} $\V{}=\emptyset$ \textbf{then return} $\emptyset$ \\
$p\coloneq\min\{\col(q)\mid q\in\V{}\}$ \\
\+\textbf{if} $p$~is even \textbf{then} \\
$g\coloneq\SolveMP(G,0)$ \\
\textbf{if} $\col(q)=p$ for all $q\in\V{}$ \textbf{then return} $g$ \\
$T\coloneq \V{}\setminus\Attr_1^{\G}(\col^{-1}(p))$;
$f\coloneq\Solve(\G\restrict T)$ \\
$x\coloneq\min(f(T)\cup g(\V{}))$;
$A\coloneq\Attr_2^{\G}(f^{-1}(x)\cup g^{-1}(x))$ \\
\textbf{return} $(\V{}\to\bbR\cup\{-\infty\}\colon q\mapsto x)\sqcup
\Solve(\G\restrict \V{}\setminus A)$ \-\\
\+\textbf{else} \\
$T\coloneq \V{}\setminus\Attr_2^{\G}(\col^{-1}(p))$ \\
\textbf{if} $T=\emptyset$ \textbf{then return}
$(\V{}\to\bbR\cup\{-\infty\}\colon q\mapsto-\infty)$ \\
$f\coloneq\Solve(\G\restrict T)$; $x\coloneq\max f(T)$;
$A\coloneq\Attr_1^{\G}(f^{-1}(x))$ \\
\textbf{return} $(\V{}\to\bbR\cup\{-\infty\}\colon q\mapsto x)\sqcap
\Solve(\G\restrict\V{}\setminus A)$ \-\\
\textbf{end if}
\end{tabbing}
\vspace*{-2ex}
\end{algorithm}
is based on the classical algorithm for solving parity
games, due to Zielonka~\cite{Zielonka98}.
The algorithm employs as a subprocedure an algorithm $\SolveMP$ for
solving mean-payoff games.
By \cite{ZP96}, such an algorithm can be implemented to run in time
$\Oh(n^3\cdot m\cdot W)$ for a game with $n$~states and $m$~edges.
We denote by $f \sqcup g$ and
$f\sqcap g$ the pointwise maximum, respectively minimum, of two
(partial) functions $f,g\colon\V{}\to\bbR\cup\{\pm\infty\}$
(where $(f\sqcup g)(q)=(f\sqcap g)(q)=f(q)$ if $g(q)$~is undefined).
The algorithm works as follows: If the least priority~$p$ in~$\calG$ is
even, the algorithm first identifies the least value of~$\G$ by computing
the values of the mean-payoff game
$(G,0)$ and (recursively) the values of the game
${\G\restrict\V{}\setminus\Attr_1(\col^{-1}(p))}$, and taking their
minimum~$x$. All states from where \Pl2 can enforce a visit to a
state with value~$x$ in one of these two games must have value~$x$
in~$\G$.
In~the remaining subarena, the values can be computed by calling
$\Solve$ recursively. If~the least priority is odd,
we can similarly compute the greatest value of~$\G$ and
proceed by recursion.
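For concreteness, the following Python sketch mirrors this recursion.
It is only an illustration, not a reference implementation: we assume a
game object with a state set \texttt{G.states()}, a priority map
\texttt{G.col}, a restriction method \texttt{G.restrict}, and helpers
\texttt{attr} and \texttt{solve\_mp} computing attractors and the values
of the mean-payoff game $(G,0)$; the operators $\sqcup$ and~$\sqcap$
become pointwise \texttt{max}/\texttt{min} over dictionaries.
\begin{verbatim}
def solve(G):
    if not G.states():
        return {}
    p = min(G.col[q] for q in G.states())
    P = {q for q in G.states() if G.col[q] == p}
    if p % 2 == 0:
        g = solve_mp(G)                      # values of (G, 0)
        if P == G.states():
            return g
        T = G.states() - attr(G, 1, P)
        f = solve(G.restrict(T))
        x = min(set(f.values()) | set(g.values()))
        A = attr(G, 2, {q for q in G.states()
                        if f.get(q) == x or g[q] == x})
        rest = solve(G.restrict(G.states() - A))
        return {q: x if q in A else max(x, rest[q])
                for q in G.states()}
    T = G.states() - attr(G, 2, P)
    if not T:
        return {q: float('-inf') for q in G.states()}
    f = solve(G.restrict(T))
    x = max(f.values())
    A = attr(G, 1, {q for q in T if f[q] == x})
    rest = solve(G.restrict(G.states() - A))
    return {q: x if q in A else min(x, rest[q]) for q in G.states()}
\end{verbatim}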
\iffull\else
The correctness of the algorithm relies on the following two lemmas.
\begin{lemma}\label{lemma:mpp-even}
Let $\G$ be a mean-payoff parity game with least priority~$p$
even, $T=\V{}\setminus\Attr_1(\col^{-1}(p))$, and $x\in\bbR$.
If $\val^{(G,0)}(q)\geq x$ for all $q\in\V{}$ and
$\val^{\G\restrict T}(q)\geq x$ for all $q\in T$, then
$\val^{\G}(q)\geq x$ for all $q\in\V{}$.
\end{lemma}
\begin{lemma}\label{lemma:mpp-odd}
Let $\G$ be a mean-payoff parity game with least priority~$p$
odd, $T=\V{}\setminus\Attr_2(\col^{-1}(p))$, and $x\in\bbR$.
If $\val^{\G}(q)\geq x$ for some $q\in\V{}$, then $T\neq\emptyset$ and
$\val^{\G\restrict T}(q)\geq x$ for some $q\in T$.
\end{lemma}
\fi
\begin{theorem}\label{thm:mpp-det}
The values of a mean-payoff parity game with $d$~priorities
can be computed in time
$\Oh(\abs{\V{}}^{d+2}\cdot \abs{E}\cdot W)$.
\end{theorem}
\begin{proof}
We claim that $\Solve$ computes, given a mean-payoff
parity game~$\G$, the function $\val^\G$ in the given time bound.
Denote by $T(n,m,d)$ the worst-case running time of the algorithm
on a game with $n$~states, $m$~edges and
$d$~priorities.
Note that, if $\G$~has only one priority, then there are no recursive
calls to $\Solve$. Since attractors can be computed in time $\Oh(n+m)$ and
the running time of $\SolveMP$ is $\Oh(n^3\cdot m\cdot W)$,
there exists a constant~$c$ such that the numbers
$T(n,m,d)$ satisfy the following recurrence:
\begin{align*}
T(1,m,d) &\leq c, \\
T(n,m,1) &\leq c\cdot n^3\cdot m\cdot W, \\
T(n,m,d) &\leq T(n-1,m,d-1) + T(n-1,m,d) + c\cdot n^3\cdot m\cdot W\,.
\end{align*}
\iffull
We claim that $T(n,m,d)\leq c\cdot (n+1)^{d+2}\cdot m\cdot W\in
\Oh(n^{d+2}\cdot m\cdot W)$.
The claim is clearly true if $n=1$. Hence, assume that $n\geq 2$ and that
the claim is true for all lower values of~$n$. If $d=1$, the claim follows
from the second inequality. Otherwise,
\begin{align*}
T(n,m,d) &\leq T(n-1,m,d-1) + T(n-1,m,d) + c\cdot n^3\cdot m\cdot W \\
&\leq c\cdot n^{d+1}\cdot m\cdot W+c\cdot n^{d+2}\cdot m\cdot W+
c\cdot n^3\cdot m\cdot W \\
&\leq c\cdot (n^{d+1}+n\cdot n^{d+1}+n^{d+1})\cdot m\cdot W \\
&\leq c\cdot ((n+1)^{d+1}+n\cdot (n+1)^{d+1})\cdot m\cdot W \\
&= c\cdot (n+1)^{d+2}\cdot m\cdot W
\end{align*}
\else
Solving this recurrence, we get that
$T(n,m,d)\leq c\cdot (n+1)^{d+2}\cdot m\cdot W$,
which proves the claimed time bound.
\fi
It remains to be proved that the algorithm is correct, \ie that
$\Solve(\G)=\val^\G$.
We prove the claim by induction over the number of states. If there
are no states, the claim is trivial. Hence, assume that
$\V{}\neq\emptyset$ and that the
claim is true for all games with less than $\abs{\V{}}$~states.
Let $p\coloneq\min\{\col(q)\mid q\in\V{}\}$. We only consider the case that
$p$~is even. If $p$~is odd, the proof is similar, but relies on
\cref{lemma:mpp-odd} instead of \cref{lemma:mpp-even}.
Let $T$, $f$, $g$, $x$ and~$A$ be defined as in the corresponding case
of the algorithm, and let $f^*=\Solve(\G)$.
If $\col(\V{})=\{p\}$, then $f^*=g=\val^{(G,0)}=
\val^{\G}$, and the claim is fulfilled. Otherwise,
by the definition of~$x$ and applying the induction hypothesis to
the game $\G\restrict T$, we have $\val^{(G,0)}(q)\geq x$ for
all $q\in\V{}$ and $\val^{\G\restrict T}(q)=f(q)\geq x$ for all $q\in T$.
Hence, \cref{lemma:mpp-even} yields that $\val^{\G}(q)\geq x$
for all $q\in\V{}$. On the other hand, from any state $q\in A$ \Pl2 can
play an attractor strategy to $f^{-1}(x)\cup g^{-1}(x)$, followed by an
optimal strategy in the game $\G\restrict T$, respectively in the
mean-payoff game $(G,0)$, which ensures that \Pl1's payoff
does not exceed~$x$.
Hence, $\val^{\G}(q)=x=f^*(q)$ for all $q\in A$.
Now, let $q\in\V{}\setminus A$. We already know that $\val^{\G}(q)
\geq x$. Moreover, since $\V{}\setminus A$ is a 2-trap, applying
the induction hypothesis to the game $\G\restrict\V{}\setminus A$ yields
$\val^{\G}(q)\geq\val^{\G\restrict\V{}\setminus A}(q)=
\Solve(\G\restrict\V{}\setminus A)(q)$.
Hence, $\val^{\G}(q)\geq f^*(q)$. To see that $\val^{\G}(q)\leq f^*(q)$,
consider the strategy~$\tau$ of \Pl2 that mimics an optimal strategy
in $\G\restrict\V{}\setminus A$ as long as the play stays in
$\V{}\setminus A$
and switches to an optimal strategy in~$\G$ as soon as the
play reaches~$A$. We have $\val^{\G}(\tau,q)\leq
\max\{\val^{\G\restrict\V{}\setminus A}(q),x\}=f^*(q)$.\qed
\end{proof}
Algorithm~$\Solve$ is faster and conceptually simpler than the original
algorithm proposed for solving mean-payoff parity games~\cite{CHJ05}.
Compared to the recent algorithm proposed by Chatterjee and Doyen~\cite{CD10},
which uses a reduction to energy parity games and runs in time
$\Oh(\abs{\V{}}^{d+4}\cdot \abs{E}\cdot d\cdot W)$, our algorithm has three
main advantages: 1.~it is faster; 2.~it operates directly on mean-payoff
parity games, and 3.~it is more flexible since it computes the values
exactly instead of just comparing them to an integer threshold.
\section{Mean-penalty parity games}
\label{sec-mpep}
\iffull
In this second part of the paper,
\else
In this section,
\fi
we define multi-strategies and \emph{mean-penalty parity games}.
We~reduce these games to mean-payoff parity games, show that
their value problem is in $\NP\cap\coNP$, and propose
a deterministic algorithm for computing the values, which runs
in pseudo-polynomial time if the number of priorities is bounded.
\iffull
\subsection{Definitions}
\fi
Syntactically, a~\emph{mean-penalty parity game} is a mean-payoff
parity game with non-negative weights, \ie a tuple $\G=(G,\col)$, where
$G=(\V1,\V2,E,\weight)$~is a weighted game graph with $\weight\colon E
\to \bbR^{\geq 0}$ (or $\weight\colon E\to\bbN$ for algorithmic
purposes), and $\col\colon \V{all}\to\bbN$ is a
priority function assigning a priority to every state.
As~for mean-payoff parity games, a~play~$\rho$ is
parity-winning if the minimal priority occurring infinitely often
($\min\{\col(q)\mid q\in\Inf(\rho)\}$) is even.
Since we are interested in controller synthesis, we define
multi-strategies only for \Pl1 (who represents the system).
Formally, a~\emph{multi-strategy} (for \Pl1) in~$\G$ is a function
$\sigma\colon \V{all}^* \V{1} \to\pow(\V{all})\setminus\{\emptyset\}$
such that $\sigma(\gamma q)\subseteq qE$ for all $\gamma\in\V{}^*$
and $q\in\V{1}$.
A~play~$\rho$ of~$\G$ is \emph{consistent} with a multi-strategy~$\sigma$
if $\rho(k+1)\in\sigma(\rho[0,k])$ for all $k\in\bbN$ with $\rho(k)\in\V{1}$,
and we denote by $\Out^{\G}(\sigma,q_0)$ the set of all
plays~$\rho$ of~$\G$ that are consistent with~$\sigma$ and start in
$\rho(0)=q_0$.
Note that, unlike for deterministic strategies, there is, in general,
no unique play consistent with a multi-strategy~$\sigma$ for \Pl1 and a
(deterministic) strategy~$\tau$ for \Pl2 from a given initial state. Finally,
note that every deterministic strategy can be viewed as a multi-strategy.
Let $\G$ be a mean-penalty parity game, and let $\sigma$ be a multi-strategy.
We~inductively define $\pnlty^{\G}_{\sigma}(\gamma)$ (the
\emph{total penalty} of~$\gamma$ \wrt~$\sigma$) for all $\gamma\in\V{}^*$
by setting $\pnlty^{\G}_{\sigma}(\epsilon)=0$ as well as
$\pnlty^{\G}_{\sigma}(\gamma q)=\pnlty^{\G}_{\sigma}(\gamma)$ if
$q\in\V2$ and
\begin{equation*}
\pnlty^{\G}_{\sigma}(\gamma q)=
\pnlty^{\G}_{\sigma}(\gamma)
+\sum_{\mathmakebox[0.8cm][c]{q'\in qE\setminus\sigma(\gamma q)}}
\weight(q,q')
\end{equation*}
if $q\in\V1$.
Hence, $\pnlty^{\G}_{\sigma}(\gamma)$ is the total weight of transitions
blocked by~$\sigma$ along~$\gamma$.
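For illustration, the following Python sketch (with illustrative
identifiers: \texttt{G.V1} the set of \Pl1 states, \texttt{G.E} the
successor map, \texttt{G.w} the weight map, and \texttt{sigma} a
multi-strategy returning the set of allowed successors) computes this
quantity for a finite history:
\begin{verbatim}
def total_penalty(G, sigma, gamma):
    # Sum, over the Player-1 positions of the history gamma, of
    # the weights of the transitions that sigma blocks there.
    pen = 0
    for i, q in enumerate(gamma):
        if q in G.V1:
            allowed = sigma(gamma[:i + 1])   # sigma(gamma[0, i])
            pen += sum(G.w[(q, qq)] for qq in G.E[q] - allowed)
    return pen
\end{verbatim}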
The \emph{mean penalty} of an infinite play~$\rho$ is then defined as
the average penalty that is incurred along this play in the limit,
\ie
\begin{equation*}
\pnlty^{\G}_{\sigma}(\rho)=
\begin{cases}\displaystyle
\limsup_{n\to\infty}\tfrac{1}{n}\pnlty^{\G}_{\sigma}(\rho[0,n)) &
\text{if $\rho$~is parity-winning,} \\
\infty & \text{otherwise.}
\end{cases}
\end{equation*}
The mean penalty of a strategy~$\sigma$ from a given initial state~$q_0$
is defined as the supremum over the mean penalties of all plays that
are consistent with~$\sigma$, \ie
\begin{equation*}
\pnlty^{\G}(\sigma,q_0)=
\sup\{\pnlty^{\G}_{\sigma}(\rho)\mid\rho\in\Out^{\G}(\sigma,q_0)\}.
\end{equation*}
The \emph{value} of a state~$q_0$ in a mean-penalty parity game
$\G$ is the least mean penalty that a multi-strategy of \Pl1 can achieve, \ie
$\val^{\G}(q_0)=\inf_{\sigma}\pnlty^{\G}(\sigma,q_0)$, where $\sigma$~ranges
over all multi-strategies of \Pl1. A~multi-strategy~$\sigma$ is
called \emph{optimal} if
$\pnlty^\G(\sigma,q_0)=\val^\G(q_0)$ for all $q_0\in Q$.
Finally, the \emph{value problem for mean-penalty parity games} is the
following decision problem:
Given a mean-penalty parity game $\G=(G,\col)$,
an initial state $q_0\in\V{all}$, and a
number $x\in\bbQ$, decide whether $\val^{\G}(q_0)\leq x$.
\begin{figure}
\begin{floatrow}
\ffigbox[.3\textwidth]{%
\centering
\input fig_penalty.tex
}{\caption{\label{fig:mpep}A mean-penalty parity game}}
\ffigbox[.6\textwidth]{%
\centering
\input fig_penalty-red.tex
}{\caption{\label{fig:mppofmpep}The corresponding mean-payoff parity game}}
\end{floatrow}
\end{figure}
\begin{example}\label{ex:mpep}
\cref{fig:mpep} represents a mean-penalty parity game. Note that
weights of transitions out of \Pl2 states are not indicated as they are
irrelevant for the mean penalty.
In this game, \Pl1 (controlling circle states) has to regularly \emph{block}
the self-loop if~he wants to enforce infinitely many visits to the
state with priority~$0$.
This comes with a penalty of~$2$. However, the multi-strategy that
blocks no transition can safely be played for an arbitrary number of steps.
Hence, \Pl1 can win with mean penalty~$0$ (but with infinite memory), by
blocking the self-loop once every $k$~moves, where $k$~grows with the number
of visits to~$q_2$.
\end{example}
\iffull
\subsection{Strategy complexity}
\fi
\label{sec:mean-penalty_mean-payoff}
In order to solve mean-penalty games, we reduce them to
mean-payoff parity games.
We~construct from a given mean-penalty parity game~$\G$ an exponential-size
mean-payoff parity game~$\G'$, similar to~\cite{BDMR09} but with an added
priority function.
Formally,
for a mean-penalty parity game $\G=(G,\col)$ with game graph
$G=(\V1,\V2,E,\weight)$, the game graph $G'=(\V1',\V2',E',\weight')$
of the corresponding mean-payoff parity game~$\G'$ is defined as follows:
\begin{itemize}
\item $\V1'=\V1$ and $\V2'=\V2\cup\bar{\V{all}}$, where
$\bar{\V{}}\coloneq\{(q,F)\mid q\in\V{all},\ \emptyset\neq F\subseteq qE\}$;
\item $E'$ is the (disjoint) union of three kinds of transitions:
\begin{enumerate}[(1)]
\item transitions of the form $(q,(q,F))$ for each $q\in\V1$ and
$\emptyset\neq F\subseteq qE$,
\item transitions of the form $(q,(q,\{q'\}))$ for each $q\in\V2$
and $q'\in qE$,
\item transitions of the form $((q,F),q')$ for each $q'\in F$;
\end{enumerate}
\item the weight function $\weight'$ assigns~$0$ to transitions of type~(2)
and~(3), but
$\weight'(q,(q,F))=-2\sum_{q'\in qE\setminus F}\weight(q,q')$
to transitions of type~(1).
\end{itemize}
Finally, the priority function~$\col'$
of~$\G'$ coincides with~$\col$ on~$\V{all}$ and assigns priority
$M\coloneq\max\{\col(q) \mid q\in \V{all}\}$
to all states in~$\bar{\V{all}}$.
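For concreteness, a Python sketch of this construction reads as follows
(illustrative encoding as before; \texttt{col} is the priority map of~$\G$):
\begin{verbatim}
from itertools import chain, combinations

def nonempty_subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r)
                               for r in range(1, len(s) + 1))

def blowup(G, col):
    # Build the mean-payoff parity game G' from the mean-penalty
    # parity game G (a sketch).
    M = max(col.values())
    V1p, V2p = set(G.V1), set(G.V2)
    Ep, wp, colp = {}, {}, dict(col)
    for q in G.V1 | G.V2:
        if q in G.V1:
            Fs = [frozenset(F) for F in nonempty_subsets(G.E[q])]
        else:
            Fs = [frozenset({qq}) for qq in G.E[q]]
        Ep[q] = set()
        for F in Fs:
            bar = (q, F)                 # intermediate Player-2 state
            V2p.add(bar)
            colp[bar] = M
            Ep[q].add(bar)               # transitions of type (1)/(2)
            wp[(q, bar)] = (-2 * sum(G.w[(q, qq)]
                                     for qq in G.E[q] - F)
                            if q in G.V1 else 0)
            Ep[bar] = set(F)             # transitions of type (3)
            for qq in F:
                wp[(bar, qq)] = 0
    return V1p, V2p, Ep, wp, colp
\end{verbatim}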
\begin{example}
\cref{fig:mppofmpep} depicts the
mean-payoff parity game obtained from the mean-penalty parity game
from \cref{ex:mpep}, depicted in \cref{fig:mpep}.
\end{example}
\noindent
The correspondence between~$\G$ and~$\G'$ is expressed in the following lemma.
\begin{lemma}
\label{lem:value_equivalence}
Let $\G$ be a mean-penalty parity game,
$\G'$~the corresponding mean-payoff parity game,
and $q_0\in Q$.
\begin{enumerate}
\item For every multi-strategy~$\sigma$ in~$\G$ there exists a
strategy~$\sigma'$ for \Pl1 in~$\G'$ such that
$\val(\sigma',q_0)\geq-\pnlty(\sigma,q_0)$.
\item For every strategy~$\sigma'$ for \Pl1 in~$\G'$ there exists
a multi-strategy~$\sigma$ in~$\G$ such that
$\pnlty(\sigma,q_0)\leq-\val(\sigma',q_0)$.
\item $\val^{\G'}(q_0)=-\val^{\G}(q_0)$.
\end{enumerate}
\end{lemma}
\iffull
\begin{proof}
Clearly, 3.~is implied by 1.~and 2., and we only need to prove the
first two statements. To prove 1., let $\sigma$~be a multi-strategy
in~$\G$. For a play prefix $\gamma=q_0(q_0,F_0)\cdots q_n(q_n,F_n)$
in~$\G'$, let $\tilde{\gamma}\coloneq q_0\cdots q_n$ be the
corresponding play prefix in~$\G$. We set
$\sigma'(\gamma q)=(q,F)$ if $q\in\V{1}$ and
$\sigma(\tilde{\gamma} q)=F$.
Clearly, for each
$\rho'\in\Out(\sigma',q_0)$ there exists a play
$\rho\in\Out(\sigma,q_0)$ with
$-\pnlty_\sigma(\rho)=\payoff(\rho')$
(namely $\rho(i)=\rho'(2i)$ for all ${i\in\bbN}$). Hence,
\begin{align*}
\val^{\G'}(\sigma',q_0)
&=\inf\{\payoff(\rho')\mid\rho'\in\Out(\sigma',q_0)\} \\
&\geq\inf\{-\pnlty_\sigma(\rho)\mid\rho\in\Out(\sigma,q_0)\} \\
&=-\sup\{\pnlty_\sigma(\rho)\mid\rho\in\Out(\sigma,q_0)\} \\
&=-\pnlty(\sigma,q_0)\,.
\end{align*}
To prove 2., let $\sigma'$ be a strategy for \Pl1 in~$\G'$. For a play prefix
$\gamma=q_0\cdots q_n$ in~$\G$, we inductively define the corresponding play
prefix $\tilde{\gamma}$ in~$\G'$ by setting
$\tilde{q}=q$ and $\tilde{\gamma q}=
\tilde{\gamma}\cdot\sigma'(\tilde{\gamma})\cdot q$. We set
$\sigma(\gamma)=F$ if $\sigma'(\tilde{\gamma})=(q,F)$.
For each $\rho\in\Out(\sigma,q_0)$ there exists a play
$\rho'\in\Out(\sigma',q_0)$ with $\pnlty_\sigma(\rho)=-\payoff(\rho')$,
namely the play~$\rho'$ defined by $\rho'(2i)=\rho(i)$ and
\begin{align*}
\rho'(2i+1)=\begin{cases}
(\rho(i),\sigma(\rho[0,i])) & \text{if $\rho(i)\in\V1$,} \\
(\rho(i),\{\rho(i+1)\}) &\text{if $\rho(i)\in\V2$,}
\end{cases}
\end{align*}
for all $i\in\bbN$. Hence,
\begin{align*}
\pnlty(\sigma,q_0)
&=\sup\{\pnlty_\sigma(\rho)\mid\rho\in\Out(\sigma,q_0)\} \\
&\leq\sup\{-\payoff(\rho')\mid\rho'\in\Out(\sigma',q_0)\} \\
&=-\inf\{\payoff(\rho')\mid\rho'\in\Out(\sigma',q_0)\} \\
&=-\val^{\G'}(\sigma',q_0)\,. \tag*{\qed}
\end{align*}
\end{proof}
\fi
It~follows from \cref{thm:mpp-main,lem:value_equivalence} that
every mean-penalty parity game admits an optimal multi-strategy.
\begin{corollary}
In every mean-penalty parity game, \Pl1 has an optimal
multi-strategy.
\end{corollary}
We now show that \Pl2 has a memoryless optimal strategy of a special kind
in the mean-payoff parity game derived from a mean-penalty parity game.
This puts the value problem for mean-penalty parity games into \coNP, and is
also a crucial point in the proof of \cref{lemma:value_equivalence_2} below.
\begin{lemma}
\label{lem:total_order}
Let $\G$ be a mean-penalty parity game and $\G'$ the corresponding
mean-payoff parity game. Then in~$\G'$ there is a memoryless optimal
strategy~$\tau'$ for \Pl2 such that for every $q\in\V{all}$ there
exists a total order $\leq_q$ on the set~$qE$ with
$\tau'((q,F))=\min_{\leq_q}F$ for every state $(q,F)\in\bar{\V{all}}$.
\end{lemma}
\iffull
\begin{proof}
\else
\begin{proof}[Sketch]
\fi
Let $\tau$~be a memoryless optimal strategy for \Pl2 in~$\G'$. For a
state~$q$, we consider the set~$qE$ and order it in the following way.
We inductively define $F_1=qE$, $q_i=\tau((q,F_i))$ and
$F_{i+1}=F_i\setminus\{q_i\}$ for every $1\leq i\leq\abs{qE}$. Note
that $\{q_1,\ldots,q_{\abs{qE}}\}=qE$. We set
$q_1\leq_q q_2\leq_q\cdots\leq_q q_{\abs{qE}}$ and define a new
memoryless strategy~$\tau'$ for \Pl2 in ~$\G'$ by
$\tau'((q,F))=\min_{\leq_q}F$ for $(q,F)\in\bar{\V{all}}$
and $\tau'(q)=\tau(q)$ for all $q\in\V2$.
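In code, the order~$\leq_q$ is obtained by repeatedly querying~$\tau$
(an illustrative sketch):
\begin{verbatim}
def order_from(tau, q, succs):
    # Derive q_1 <=_q q_2 <=_q ... from a memoryless Player-2
    # strategy tau by repeatedly removing the successor it picks.
    F, order = set(succs), []
    while F:
        qi = tau((q, frozenset(F)))   # tau((q, F)) is some q' in F
        order.append(qi)
        F.remove(qi)
    return order
\end{verbatim}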
\iffull
To prove the lemma, we have to show that $\tau'$~is at least as good
as~$\tau$ and thus optimal.
Let $q_0\in\V{all}$ and $\rho'\in\Out(\tau',q_0)$. We construct
a play $\rho\in\Out(\tau,q_0)$ with
$\payoff(\rho)\geq\payoff(\rho')$ in the following way. For every
position~$i$ with $\rho'(i)=(q,F')$, let $F=\{q'\in qE\mid
\tau'((q,F'))\leq_q q'\}$ (then $\tau((q,F))=\tau'((q,F'))$ by the
definition of~$\tau'$) and set $\rho(i)=(q,F)$. For every other
position~$i$, let $\rho(i)=\rho'(i)$. Note that
$\rho\in\Out(\tau,q_0)$ and
$\min\col(\Inf(\rho))=\min\col(\Inf(\rho'))$. Moreover,
we have $F'\subseteq F$ and therefore
$\weight'(q,(q,F'))\leq\weight'(q,(q,F))$ whenever
$\rho'(i)=(q,F')$ and $\rho(i)=(q,F)$
(because weights in~$\G$ are nonnegative).
Hence, $\payoff(\rho)\geq\payoff(\rho')$.
Since $\rho'$~was chosen arbitrarily, it follows that
\begin{align*}
\val(\tau,q_0)
&=\sup\{\payoff(\rho)\mid\rho\in\Out(\tau,q_0)\} \\
&\geq\sup\{\payoff(\rho')\mid\rho'\in\Out(\tau',q_0)\} \\
&=\val(\tau',q_0)\,.
\end{align*}
Hence, $\tau'$~is optimal.\qed
\else
It can be shown that $\val(\tau',q_0)\leq \val(\tau,q_0)$ for
all $q_0\in Q$, which proves that~$\tau'$ is optimal.\qed
\fi
\end{proof}
\iffull
\subsection{Computational complexity}
\fi
In order to put the value problem for mean-penalty parity games into
${\NP\cap\coNP}$, we propose a more sophisticated reduction
from mean-penalty parity games to mean-payoff parity games, which
results in a polynomial-size mean-payoff parity game. Intuitively,
in a state $q\in\V{1}$ we ask \Pl1 \emph{consecutively} for each
outgoing transition whether he wants to block that transition.
If he allows a transition, then \Pl2 has to decide whether she wishes
to explore this transition. Finally, after all transitions have been
processed in this way, the play proceeds along the \emph{last}
transition that \Pl2 has desired to explore.
Formally, let us fix a mean-penalty parity game $\G=(G,\col)$ with
game graph $G=(\V{1},\V{2},E,\weight)$, and denote by
$k\coloneq\max\{\abs{qE}\mid q\in\V{}\}$ the maximal out-degree of a state.
Then the polynomial-size mean-payoff
parity game~$\G''$ has vertices of the form $q$ and~$(q,a,i,m)$, where
$q\in\V{}$, $a\in\{\select,\allow,\block\}$, $i\in\{1,\dots,k+1\}$ and
$m\in\{0,\ldots,k\}$;
vertices of the form $q$ and $(q,\select,i,m)$ belong to \Pl1, while
vertices of the form $(q,\allow,i,m)$ or $(q,\block,i,m)$ belong
to \Pl2.
To describe the transition structure of~$\G''$, let $q\in\V{}$ and
assume that $qE=\{q_1,\ldots,q_k\}$ (a state may occur more than once in
this list). Then the following transitions
originate in a state of the form $q$ or~$(q,a,i,m)$:
\begin{enumerate}
\item a transition from $q$ to~$(q,\select,1,0)$ with weight~$0$,
\item for all $1\leq i\leq k$ and $0\leq m\leq k$ a transition from
$(q,\select,i,m)$ to $(q,\allow,i,m)$ with weight~$0$,
\item if $q\in\V{1}$ then for all $1\leq i\leq k$ and $0\leq m\leq k$
a transition from $(q,\select,i,m)$ to $(q,\block,i,m)$ with
weight~$0$, \emph{except} if $i=k$ and $m=0$;
\item for all $0\leq m\leq k$ a transition from $(q,\select,k+1,m)$ to~$q_m$
with weight~$0$ (where $q_0$~can be chosen arbitrarily),
\item for all $1\leq i\leq k$ and $0\leq m\leq k$ a transition from
$(q,\allow,i,m)$ to $(q,\select,i+1,i)$ with weight~$0$,
\item for all $1\leq i\leq k$ and $1\leq m\leq k$ a transition from
$(q,\allow,i,m)$ to $(q,\select,i+1,m)$ with weight~$0$,
\item for all $1\leq i\leq k$ and $0\leq m\leq k$ a transition from
$(q,\block,i,m)$ to $(q,\select,i+1,m)$ with weight
$-2(k+1)\cdot\weight(q,q_i)$.
\end{enumerate}
Finally, the priority of a state $q\in\V{}$ equals the
priority of the same state in~$\G$, whereas all states of the
form $(q,a,i,m)$ have priority $M=\max\{\col(q)\mid q\in\V{}\}$.
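The following Python sketch enumerates, for a single state~$q$, the
(reachable) gadget edges of this construction; \texttt{succs} is the
enumeration $q_1,\dots,q_k$ of~$qE$, padded with repetitions to the
maximal out-degree~$k$, and all identifiers are illustrative:
\begin{verbatim}
def gadget_edges(q, succs, w, in_V1):
    k = len(succs)
    E = [(q, ("sel", q, 1, 0), 0)]                      # type (1)
    for i in range(1, k + 1):
        for m in range(k + 1):
            sel, al = ("sel", q, i, m), ("al", q, i, m)
            E.append((sel, al, 0))                      # type (2)
            if in_V1 and not (i == k and m == 0):       # type (3)
                bl = ("bl", q, i, m)
                E.append((sel, bl, 0))
                E.append((bl, ("sel", q, i + 1, m),     # type (7)
                          -2 * (k + 1) * w[(q, succs[i - 1])]))
            E.append((al, ("sel", q, i + 1, i), 0))     # type (5)
            if m >= 1:
                E.append((al, ("sel", q, i + 1, m), 0)) # type (6)
    for m in range(k + 1):                              # type (4)
        E.append((("sel", q, k + 1, m), succs[max(m, 1) - 1], 0))
    return E
\end{verbatim}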
\begin{example}
For the game of \cref{fig:mpep}, this transformation would yield the game
depicted in \cref{fig:mpeptopolympp}.
\begin{figure}
\centering
\input fig_penalty-poly.tex
\caption{The game~$\calG''$ associated with the game~$\calG$ of
\cref{fig:mpep}}
\label{fig:mpeptopolympp}
\end{figure}
In this picture, $\rma$, $\rmb$
and~$\rmc$ stand for \emph{allow}, \emph{block} and \emph{choose}
(\ie the $\select$~states), respectively; zero weights are omitted.
\end{example}
It is easy to see that the game~$\calG''$ has polynomial size and
can, in fact, be constructed in polynomial time from the given
mean-penalty parity game~$\calG$. The following lemma relates
the game~$\calG''$ to the mean-payoff parity game~$\calG'$ of
exponential size constructed
\iffull
in \cref{sec:mean-penalty_mean-payoff}
\else
earlier
\fi
and to the original game~$\calG$.
\begin{lemma}
\label{lemma:value_equivalence_2}
Let $\G$ be a mean-penalty parity game,
$\G'$~the corresponding mean-payoff parity game of exponential size,
$\G''$~the corresponding mean-payoff parity game of polynomial size,
and $q_0\in Q$.
\begin{enumerate}
\item For every multi-strategy~$\sigma$ in~$\G$ there exists a
strategy~$\sigma'$ for \Pl1 in~$\G''$ such that
$\val(\sigma',q_0)\geq-\pnlty(\sigma,q_0)$.
\item For every strategy~$\tau$ for \Pl2 in~$\G'$ there exists
a strategy~$\tau'$ for \Pl2 in~$\G''$ such that
$\val(\tau',q_0)\leq\val(\tau,q_0)$.
\item $\val^{\G''}(q_0)=-\val^{\G}(q_0)$.
\end{enumerate}
\end{lemma}
\iffull
\begin{proof}
To prove 1.,
let $\sigma$~be a multi-strategy in~$\G$. For any play prefix
$\gamma$ in~$\G''$, let $\tilde{\gamma}$ be the projection to states
in~$\G$ (\ie all states of the form $(q,a,i,m)$ are omitted).
Assuming that $q_1,\ldots,q_k$ is the enumeration of~$qE$ used in the
definition of~$\G''$,
we set $\sigma'(\gamma\cdot(q,\select,i,m))=(q,\allow,i,m)$ if (and only if)
either $q\in\V{1}$ and $q_i\in\sigma(\tilde{\gamma})$ or
$q\in\V{2}$.
It is easy to see that for each
$\rho'\in\Out(\sigma',q_0)$ there exists a play
$\rho\in\Out(\sigma,q_0)$ with
$-\pnlty_\sigma(\rho)=\payoff(\rho')$. Hence,
\begin{align*}
\val(\sigma',q_0)
&=\inf\{\payoff(\rho')\mid\rho'\in\Out(\sigma',q_0)\} \\
&\geq\inf\{-\pnlty_\sigma(\rho)\mid\rho\in\Out(\sigma,q_0)\} \\
&=-\sup\{\pnlty_\sigma(\rho)\mid\rho\in\Out(\sigma,q_0)\} \\
&=-\pnlty(\sigma,q_0)\,.
\end{align*}
To prove 2., let $\tau$ be a strategy for \Pl2 in~$\G'$. By
\cref{lem:total_order}, there exists a memoryless strategy~$\tau^*$ for
\Pl2 in~$\G'$ such that $\val(\tau^*,q_0)\leq\val(\tau,q_0)$
and for all $q\in\V{}$ there exists a total order~$\leq_q$
on~$qE$ with $\tau^*((q,F))=\min_{\leq_q} F$ for all $(q,F)\in\bar{\V{}}$.
We define a memoryless strategy~$\tau'$ for \Pl2 in~$\G''$ as follows:
Assume that $q_1,\ldots,q_k$ is the enumeration of~$qE$ used in the
definition of~$\G''$. Then we set
$\tau'((q,\allow,i,m))=(q,\select,i+1,i)$ if (and only if)
one of the following three conditions is fulfilled:
1.~$m=0$, or 2.~$q\in\V{1}$ and $q_i\leq_q q_m$, or
3.~$q\in\V{2}$ and $\tau^*(q)=(q,\{q_i\})$.
Now it is easy to see that for each $\rho'\in\Out(\tau',q_0)$
there exists a play $\rho\in\Out(\tau^*,q_0)$ with
$\payoff(\rho)=\payoff(\rho')$. Hence,
\begin{align*}
\val(\tau',q_0)
&=\sup\{\payoff(\rho')\mid\rho'\in\Out(\tau',q_0)\} \\
&\leq\sup\{\payoff(\rho)\mid\rho\in\Out(\tau^*,q_0)\} \\
&=\val(\tau^*,q_0) \\
&\leq\val(\tau,q_0)\,.
\end{align*}
Finally, we prove~3. It follows from 1.\ that
$\val^{\G''}(q_0)\geq-\val^{\G}(q_0)$, and it follows
from 2.\ that $\val^{\G''}(q_0)\leq\val^{\G'}(q_0)$.
But $\val^{\G'}(q_0)=-\val^{\G}(q_0)$ by
\cref{lem:value_equivalence}, and therefore
$\val^{\G''}(q_0)=-\val^{\G}(q_0)$.\qed
\end{proof}
\fi
Since the mean-payoff parity game~$\G''$ can be computed from~$\G$ in polynomial
time, we obtain a polynomial-time many-one reduction from the
value problem for mean-penalty parity games
to the value problem for mean-payoff parity games.
By \cref{cor:mpp-conp,thm:mpp-np},
the latter problem belongs to ${\NP\cap\coNP}$.
\begin{theorem}
\label{thm:value-penalty-NPcoNP}
The value problem for mean-penalty parity games belongs to
${\NP\cap\coNP}$.
\end{theorem}
\iffull
\subsection{A deterministic algorithm}
\else
\subsubsection{A deterministic algorithm}
\fi
Naturally, we can use the polynomial translation from mean-penalty parity
games to mean-payoff parity games to solve mean-penalty parity games
deterministically. Note that the mean-payoff parity game~$\G''$
derived from a mean-penalty parity game has $\Oh(\abs{\V{}}\cdot k^2)$
states and $\Oh(\abs{\V{}}\cdot k^2)$ edges, where $k$~is the maximum
out-degree of a state in~$\G$; the number of priorities
remains constant.
Moreover, if weights are given in integers and $W$~is the highest absolute
weight in~$\G$, then the highest absolute weight in~$\G''$ is $\Oh(k\cdot W)$.
Using \cref{thm:mpp-det}, we thus obtain a deterministic algorithm for
solving mean-penalty parity games that runs in time
$\Oh(\abs{\V{}}^{d+3}\cdot k^{2d+7}\cdot W)$.
If $k$~is a constant, the running time is
$\Oh(\abs{\V{}}^{d+3}\cdot W)$, which is acceptable. In the
general case however, the best upper bound on~$k$ is the number of
states, and we get an algorithm that runs in time
$\Oh(\abs{\V{}}^{3d+10}\cdot W)$. Even if the number of priorities
is small, this running time would not be acceptable in practical
applications.
The goal of this section is to show
that we can do better; namely we will give an algorithm that runs
in time $\Oh(\abs{\V{}}^{d+3}\cdot\abs{E}\cdot W)$, independently of
the maximum out-degree. The~idea is as follows: we~use
Algorithm~$\Solve$ on the mean-payoff parity game~$\G'$
of exponential size, but we show that we can run it \emph{on~$\G$}, \ie,
by handling the extra states of~$\G'$ symbolically during the computation.
As a first step, we adapt the pseudo-polynomial algorithm
by Zwick and Paterson~\cite{ZP96} to compute
the values of a mean-penalty parity game with a trivial parity
objective.
\begin{lemma}\label{lemma-ZP}
The values of a mean-penalty parity game with priority
function $\col\equiv 0$ can be computed in time
$\Oh(\abs{\V{all}}^4\cdot \abs E\cdot W)$.
\end{lemma}
\iffull
\begin{proof}
Let $\G=(G,\col)$ with
$G=(Q_1,Q_2,E,\weight)$, and $\G'=(G',\col')$ with
$G'=(Q'_1,Q'_2,E',\weight')$.
For a state~$q\in \V{all}'$, we~let $v_0(q)=0$, and
for~$k>0$, we~define
\[
v_k(q) = \begin{cases}
\displaystyle
\max_{q'\in qE'} \weight'(q,q') + v_{k-1}(q') &\text{if $q\in \V1'$,} \\
\displaystyle
\min_{q'\in qE'} \weight'(q,q') + v_{k-1}(q') &\text{if $q\in \V2'$.}
\end{cases}
\]
If $q\in\V{all}$, then the definition of~$\G'$ yields that
\[
v_k(q) = \begin{cases}\displaystyle
\max_{\emptyset\neq F\subseteq qE} \weight'(q,(q,F)) +
\min_{q'\in F} v_{k-2}(q') & \text{if $q\in\V{1}$,} \\
\displaystyle
\min_{q'\in qE} v_{k-2}(q') & \text{if $q\in\V{2}$.}
\end{cases}
\]
In the first case, a~na\"ive computation would require the examination
of an exponential
number of transitions. In order to avoid this blow-up, we~use the same
idea as in the proof of Lemma~\ref{lem:total_order}:
Let $qE=\{q_1,\dots,q_r\}$ be
sorted in such a way that $i\leq j$ implies $v_{k-2}(q_i) \leq
v_{k-2}(q_j)$. Since $\weight'(q,(q,F)) \leq \weight'(q,(q,F'))$
if $F\subseteq F'$, we~have
\[
v_k(q) = \max_i \weight'(q,(q,\{q_i,\ldots,q_r\})) +
v_{k-2}(q_i).
\]
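In code, one double step of this iteration, restricted to~$\V{}$, reads
as follows (a sketch; sorting the successors replaces the enumeration of
exponentially many sets~$F$):
\begin{verbatim}
def step(G, v_prev):
    # Compute v_k on the original states from v_{k-2}.
    v = {}
    for q in G.V1 | G.V2:
        succs = sorted(G.E[q], key=lambda qq: v_prev[qq])
        if q in G.V2:
            v[q] = v_prev[succs[0]]
        else:
            blocked = 0.0            # total weight of q_1..q_{i-1}
            best = float('-inf')
            for qq in succs:         # allow F = {q_i, ..., q_r}
                best = max(best, -2 * blocked + v_prev[qq])
                blocked += G.w[(q, qq)]
            v[q] = best
    return v
\end{verbatim}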
Hence the restriction of the sequence $v_{2k}$ to~$\V{}$ can be computed in
time $\Oh(k\cdot\abs E)$. Now, despite the exponential size of~$\G'$, the length
of a simple cycle in~$\G'$ is at most $2\abs{\V{}}$. Hence, Theorem~2.2
in~\cite{ZP96} becomes
\[
2k\cdot \val^{\G'}(q) - 4\abs{\V{}}\cdot W' \leq v_{2k}(q) \leq
2k\cdot \val^{\G'}(q) +4\abs{\V{}}\cdot W'
\]
for all $q\in\V{}$,
where $W'$~is the maximal absolute weight in~$\G'$. Since
$W'\leq\abs{\V{}}\cdot 2W$, it follows from~\cite{ZP96} that
$\val^{\G}={-\val^{\G'}}\restrict\V{}$ can be computed in
time $\Oh(\abs{\V{}}^4\cdot \abs E \cdot W)$.
\qed
\end{proof}
\fi
\iffull
Now, given a mean-penalty parity game~$\G$ with associated mean-payoff parity
game~$\G'$ and a set~$T$ of states of~$\G$, we~define
\begin{align*}
\rebar[\G]{T} &= T\cup \{(q,F)\in \bar{\V{all}} \mid F\subseteq T\}; \\
\drebar[\G]{T} &= T\cup \{(q,F)\in \bar{\V{all}} \mid F\cap T \not=\emptyset\}.
\end{align*}
We usually omit the superscript~$\G$ when it is clear from the
context.
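In code (with \texttt{bar\_states} the set $\bar{\V{all}}$ of~$\G'$;
a sketch):
\begin{verbatim}
def rebar(T, bar_states):
    T = set(T)
    return T | {(q, F) for (q, F) in bar_states if F <= T}

def drebar(T, bar_states):
    T = set(T)
    return T | {(q, F) for (q, F) in bar_states if F & T}
\end{verbatim}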
\begin{lemma}
If $S$~is a subarena of~$\G$, then $\rebar{S}$ and $\drebar{S}$ are
subarenas of~$\G'$.
\end{lemma}
\begin{proof}
Assume that $S$~is a subarena of~$\G$, and pick a
state~$q$ in~$\rebar{S}$. If $q\in Q$, then it also
belongs to~$S$ and, as a state of~$\G$, has a
successor~$q'$ in~$S$. Then $\rebar{S}$ contains $(q,\{q'\})$, which
is a successor of~$q$. If $q$~belongs to~$\bar{\V{all}}$, then
$qE'\subseteq S$ by definition of $\rebar{S}$; hence it~has at least
one successor in~$S$. A~similar argument shows that $\drebar{S}$ is also a
subarena of~$\G'$. \qed
\end{proof}
\begin{lemma}\label{lemma-setbar}
Let $\G$ be a mean-penalty parity game
with associated mean-payoff parity
game~$\G'$, and let $A,B\subseteq\V{all}$. Then
\begin{align*}
\rebar{A\cap B} &= \rebar A\cap \rebar B, &
\rebar{A\cup B} &\supseteq \rebar A \cup \rebar B, \\
\drebar{A\cup B} &= \drebar A\cup \drebar B, &
\drebar{A\cap B} &\subseteq \drebar A \cap \drebar B, \\
\rebar{\V{}\setminus A} &= \V{}'\setminus\drebar{A}, &
\drebar{\V{}\setminus A} &= \V{}'\setminus\rebar{A}\,.
\end{align*}
\end{lemma}
\begin{proof}
Straightforward.\qed
\end{proof}
\begin{lemma}\label{lem:attr}\label{lemma-attr}
Let $\G$ be a mean-penalty parity game
with associated mean-payoff parity
game~$\G'$, and let $F\subseteq\V{}$. Then
\begin{align*}
\rebar{\Attr_1^\G(F)} &=\Attr_1^{\G'}(F)=\Attr_1^{\G'}(\rebar{F}), \\
\drebar{\Attr_2^\G(F)} &=\Attr_2^{\G'}(F)=\Attr_2^{\G'}(\drebar{F})\,.
\end{align*}
\end{lemma}
\begin{proof}
We only prove the first statement; the second can be proved using similar
arguments. Clearly, $\Attr_1^{\G'}(F)=\Attr_1^{\G'}(\rebar{F})$,
so we only need to prove that $\rebar{\Attr^{\G}_1(F)}=\Attr_1^{\G'}(F)$.
First pick $q\in\rebar{\Attr^{\G}_1(F)}$. If $q\in\V{}$, then
the attractor strategy for reaching~$F$ can be mimicked in~$\G'$,
and therefore $q\in\Attr_1^{\G'}(F)$. On the other hand, if
$q\in\bar{\V{}}$, then all successors of~$q$ lie in $\Attr^{\G}_1(F)$
and therefore also in $\Attr_1^{\G'}(F)$. Hence, $q\in\Attr_1^{\G'}(F)$.
Now pick $q\in\Attr_1^{\G'}(F)$. If $q\in\V{}$, then the attractor strategy
for reaching~$F$ yields a multi-strategy~$\sigma$ in~$\G$ such that all
plays $\rho\in\Out^{\G}(\sigma,q)$ visit~$F$. Hence,
$q\in\Attr_1^{\G}(F)\subseteq\rebar{\Attr_1^{\G}(F)}$. On the other hand,
if $q\in\bar{\V{}}$, then all successors of~$q$ lie in
$\V{}\cap\Attr_1^{\G'}(F)$ (since $q$~is a \Pl2 state) and therefore also
in $\Attr_1^{\G}(F)$. Hence, $q\in\rebar{\Attr_1^{\G}(F)}$.\qed
\end{proof}
\fi
Algorithm~$\SSolve$ is our algorithm for computing the values
\begin{algorithm}
\vspace*{.8ex}
\begin{tabbing}
\hspace*{1em}\=\hspace{1em}\=\hspace{1em}\=\hspace{1em}\=
\hspace{1em}\=\hspace{1em}\= \kill
\textbf{Algorithm} $\SSolve(\G)$ \\[\medskipamount]
\emph{Input:} mean-penalty parity game $\G=(G,\col)$ \\
\emph{Output:} $\val^\G$ \\[\medskipamount]
\textbf{if} $\V{}=\emptyset$ \textbf{then return} $\emptyset$ \\
$p\coloneq\min\{\col(q)\mid q\in\V{}\}$ \\
\+\textbf{if} $p$~is even \textbf{then} \\
$g\coloneq\SSolveMP(G,0)$ \\
\textbf{if} $\col(q)=p$ for all $q\in\V{}$ \textbf{then return} $g$ \\
$T\coloneq\V{}\setminus\Attr_1^{\G}(\col^{-1}(p))$;
$f\coloneq\SSolve(\G\restrict T)$ \\
$x\coloneq\max(f(T)\cup g(\V{}))$;
$A\coloneq\Attr_2^{\G}(f^{-1}(x)\cup g^{-1}(x))$ \\
\textbf{return} $(\V{}\to\bbR\cup\{\infty\}\colon q\mapsto x)\sqcap
\SSolve(\G\restrict\V{}\setminus A)$ \-\\
\+\textbf{else} \\
$T\coloneq\V{}\setminus\Attr_2^{\G}(\col^{-1}(p))$ \\
\textbf{if} $T=\emptyset$ \textbf{then return}
$(\V{}\to\bbR\cup\{\infty\}\colon q\mapsto\infty)$ \\
$f\coloneq\SSolve(\G\restrict T)$; $x\coloneq\min f(T)$;
$A\coloneq\Attr_1^{\G}(f^{-1}(x))$ \\
\textbf{return} $(\V{}\to\bbR\cup\{\infty\}\colon q\mapsto x)\sqcup
\SSolve(\G\restrict\V{}\setminus A)$ \-\\
\textbf{end if}
\end{tabbing}
\vspace*{-2ex}
\end{algorithm}
of a mean-penalty parity game. The algorithm employs as a subroutine
an algorithm $\SSolveMP$ for computing the values of a mean-penalty parity
game with a trivial priority function (see \cref{lemma-ZP}).
Since $\SSolveMP$ can be implemented to run in time
$\Oh(\abs{\V{}}^4\cdot\abs{E}\cdot W)$, the running time of
the procedure $\SSolve$ is $\Oh(\abs{\V{}}^{d+3}\cdot\abs{E}\cdot W)$.
Notably, the algorithm runs in polynomial time if the number of priorities
is bounded and we are only interested in the average \emph{number} of
edges blocked by a strategy in each step (\ie if all weights are
equal to~$1$).
\begin{theorem}\label{thm:mpep-det}
The values of a mean-penalty parity
game with $d$~priorities can be computed in time
$\Oh(\abs{\V{}}^{d+3}\cdot \abs{E}\cdot W)$.
\end{theorem}
\iffull
\begin{proof}
From \cref{lemma-ZP} and with the same runtime analysis as in the
proof of \cref{thm:mpp-det}, we get that $\SSolve$ runs in time
$\Oh(\abs{\V{}}^{d+3}\cdot \abs{E}\cdot W)$. We now prove that the algorithm
is correct, by proving that there is a correspondence between the values
the algorithm computes on a mean-penalty parity game~$\G$ and the values
computed by Algorithm~$\Solve$ on the mean-payoff parity game~$\G'$.
More precisely, we show that $\Solve(\G')\restrict\V{}=-\SSolve(\G)$.
The correctness of the algorithm thus follows from
\cref{lem:value_equivalence}, which states that
${\val^{\G'}}\restrict\V{}=-\val^{\G}$.
The proof is by induction on the number of states in~$\G$.
The~result holds trivially if
$\V{}=\emptyset$. Otherwise, assume that the result is true for
all games with less than $\abs{\V{}}$~states and let
$p=\min\{\col(q)\mid q\in\V{}\}$. By
construction, $p$~is also the minimal priority in~$\G'$. We
only consider the case that $p$~is even; the other case is proved
using the same arguments.
Write $g'$, $T'$, $f'$, $x'$ and~$A'$ for the items
computed by $\Solve$ on~$\G'$,
while $g$, $T$, $f$, $x$ and~$A$ are the corresponding items
computed by $\SSolve$ on~$\G$.
Then $g'(q)=-g(q)$ for all~$q\in\V{}$, and $g'((q,F))=\min_{q'\in F} g'(q')$
for all $(q,F)\in\bar{Q}$ (since such states belongs to \Pl2).
If $\G$~has only one priority, the result follows. Otherwise, by
\cref{lemma-setbar,lemma-attr}, we have $T'=\drebar T$.
However, any state~$(q,F)\in T'$ that is not a state of the
game $(\calG\restrict T)'$ has no predecessor in~$\G'\restrict T'$:
if $q\in T'$ then $q\in T\cap\V{1}$ and $qE\setminus T\neq\emptyset$,
\ie $qE\cap\Attr_1(\col^{-1}(p))\neq\emptyset$; but then
$q\in\Attr_1(\col^{-1}(p))$ and thus $q\notin T$, a contradiction.
It~follows that $\Solve(\G'\restrict T')\restrict
T=\Solve((\G\restrict T)')\restrict T$.
Now, since $T$~is a strict subset of~$\V{}$, the induction hypothesis
applies,
so that $f'(t)=-f(t)$ for all~$t\in T$. It~follows that $x'=-x$.
Let $S\coloneq\V{}\setminus A$ and $S'\coloneq\V{}'\setminus A'$. By
\cref{lemma-attr}, $A'=\drebar A$, and by \cref{lemma-setbar},
$S'=\rebar{S}$. Again, any state
$(q,F)\in S'$ that is not a state of the
game $(\G\restrict S)'$ has no predecessor in
$\G'\restrict S'$. Hence,
$\Solve(\G'\restrict S')\restrict S
=\Solve((\G\restrict S)')\restrict S$.
Applying the induction
hypothesis to the game $\G\restrict S$, we~get that
$\Solve((\G\restrict S)')\restrict S=-\SSolve(\G\restrict S)$,
and the result follows for~$\G$.
\qed
\end{proof}
\else
\begin{proof}[Sketch]
From \cref{lemma-ZP} and with the same runtime analysis as in the
proof of \cref{thm:mpp-det}, we get that $\SSolve$ runs in time
$\Oh(\abs{\V{}}^{d+3}\cdot \abs{E}\cdot W)$. To prove that the
algorithm is correct, we show that there is a correspondence between
the values the algorithm computes on a mean-penalty parity game~$\G$ and
the values computed by Algorithm~$\Solve$ on the mean-payoff parity
game~$\G'$.
More precisely, we show that $\Solve(\G')\restrict\V{}=-\SSolve(\G)$.
The correctness of the algorithm thus follows from
\cref{lem:value_equivalence}, which states that
${\val^{\G'}}\restrict\V{}=-\val^{\G}$.\qed
\end{proof}
\fi
\section{Preliminaries}
\label{sec-prelim}
A \emph{weighted game graph} is a tuple $G=(\V1,\V2,E,\weight)$,
where $\V{all}\coloneqq\V1\cupdot\V2$ is a finite set of
\emph{states},
$E\subseteq\V{all}\times\V{all}$ is the \emph{edge} or
\emph{transition relation}, and
$\weight\colon E\to\bbR$ is a function assigning a \emph{weight} to every
transition.
When weighted game graphs are subject to algorithmic processing, we
assume that these weights are integers; in~this case, we
set $W\coloneq\max\{1,\abs{\weight(e)}\mid e\in E\}$.
\iffull\par
Moreover, we define the \emph{size} of~$G$, denoted by~$\size{G}$, as
$\abs{\V{}}+\abs{E}\cdot\lceil\log_2 W\rceil$.
(Up to a linear factor, $\size{G}$~is the length of a
binary encoding of~$G$).
In the same spirit, the size~$\size{x}$
of a rational number~$x$ equals the total length of the binary
representations of its numerator and its denominator.
\fi
For $q\in\V{all}$, we write $qE$ for the set
$\{q'\in\V{all}\mid(q,q')\in E\}$ of
all successors of~$q$. We~require that $qE\neq\emptyset$ for all states
$q\in\V{all}$.
A subset $S\subseteq\V{all}$ is a \emph{subarena} of~$G$ if
$qE\cap S\neq\emptyset$ for all states $q\in S$.
If $S\subseteq\V{all}$ is a subarena of~$G$, then we can restrict~$G$ to
states in~$S$, in which case we obtain the weighted game graph
$G\restriction S\coloneq
(\V1\cap S,\V2\cap S,E\cap(S\times S),\weight\restriction S\times S)$.
A \emph{play} of~$G$ is an infinite sequence $\rho=\rho(0)\rho(1)\cdots\in
\V{}^\omega$ of states such that $(\rho(i),\rho(i+1))\in E$ for all $i\in\bbN$.
We denote by $\Out^{\G}(q)$ the set of all plays~$\rho$ with $\rho(0)=q$
and by~$\Inf(\rho)$ the set of states occurring infinitely often in~$\rho$.
A \emph{play prefix} or a \emph{history} $\gamma=\gamma(0)\gamma(1)\cdots
\gamma(n)\in \V{}^+$ is a finite, nonempty prefix of a play.
\iffull
For a play or a history~$\rho$ and $j<k\in\bbN$, we denote by
$\rho[j,k)\coloneqq\rho[j,k-1]\coloneqq\rho(j)\cdots\rho(k-1)$ its infix that
starts at position~$j$ and ends at position $k-1$; the play's suffix
$\rho(j)\rho(j+1)\cdots$ is denoted by $\rho[j,\infty)$.
\else
For a play or a history~$\rho$ and $j<k\in\bbN$, we denote by
$\rho[j,k)\coloneqq\rho[j,k-1]\coloneqq\rho(j)\cdots\rho(k-1)$ its infix
starting at position~$j$ and ending at position~${k-1}$.
\fi
\paragraph{Strategies}
A \emph{(deterministic) strategy} for \Pli in~$G$ is a function
$\sigma\colon \V{all}^* \V{i}\to\V{all}$ such that
$\sigma(\gamma q)\in qE$ for all $\gamma\in\V{}^*$ and $q\in\V{i}$.
A~strategy~$\sigma$ is \emph{memoryless} if
$\sigma(\gamma q)=\sigma(q)$ for all $\gamma\in\V{}^*$ and $q\in\V{i}$.
More generally, a strategy~$\sigma$ is \emph{finite-memory} if the equivalence
relation~${\sim}\subseteq\V{}^*\times\V{}^*$, defined by $\gamma_1\sim\gamma_2$
if and only if $\sigma(\gamma_1\cdot\gamma)=\sigma(\gamma_2\cdot\gamma)$ for
all $\gamma\in\V{}^*\V{i}$, has finite index.
We say that a play~$\rho$ of~$G$ is \emph{consistent} with a
strategy~$\sigma$
for \Pli if $\rho(k+1)=\sigma(\rho[0,k])$ for all $k\in\bbN$
with $\rho(k)\in\V{i}$, and denote by
$\Out^{G}(\sigma,q_0)$ the set of all plays~$\rho$ of~$G$ that are
consistent with~$\sigma$ and start in $\rho(0)=q_0$.
Given a~strategy~$\sigma$ of \Pl1, a strategy~$\tau$
of \Pl2, and a state $q_0\in\V{all}$, there exists a unique play
$\rho\in\Out^G(\sigma,q_0)\cap\Out^G(\tau,q_0)$, which we denote by
$\rho^G(\sigma,\tau,q_0)$.
\paragraph{Traps and attractors}
Intuitively, a subarena $T\subseteq\V{all}$ of states is a \emph{trap} for
one of the two players if the other player can enforce that the play stays
in this set.
Formally, a~trap for \Pl2 (or simply a $2$-trap) is a subarena
$T\subseteq\V{all}$ such that
$qE\subseteq T$ for all states $q\in T\cap \V{2}$, and
$qE\cap T\neq\emptyset$ for all $q\in T\cap \V{1}$. A~trap for \Pl1
(or $1$-trap) is defined analogously.
\iffull
Note that if $T$~is a trap for \Pli in $G\restrict S$ and
$S$~is a trap for \Pli in~$G$, then $T$~is also a trap
for \Pli in~$G$.
\fi
If $T\subseteq\V{all}$ is not a trap for \Pl1, then \Pl1 has a strategy
to reach a position in $\V{all}\setminus T$. In~general, given a
subset $S\subseteq\V{all}$, we~denote by $\Attr_1^{G}(S)$ the set of states
from where \Pl1 can force a visit to~$S$.
\iffull
This set can be characterised as the limit of the
sequence $(A^i)_{i\in\bbN}$ defined by $A^0 = S$ and
\[
A^{i+1} = A^i\cup \{q\in\V{1}\mid qE\cap A^i\neq\emptyset\} \cup
\{q\in\V{2}\mid qE\subseteq A^i\}\,.
\]
From every state in~$\Attr_1^{G}(S)$, \Pl1 has a memoryless strategy~$\sigma$
that guarantees a visit to~$S$ in at most~$\abs{\V{all}}$ steps: the
strategy chooses for each state $q\in (A^i\setminus A^{i-1})\cap \V{1}$ a state
$p\in qE\cap A^{i-1}$ (which decreases the distance to~$S$ by~$1$).
We~call the set $\Attr_1^{G}(S)=\bigcup_{i\in\bbN}A^i$ the \emph{$1$-attractor
of~$S$} and $\sigma$ an \emph{attractor strategy for~$S$}.
\else
From every state in~$\Attr_1^{G}(S)$, \Pl1 has a memoryless strategy~$\sigma$
that guarantees a visit to~$S$ in at most~$\abs{\V{all}}$ steps.
We~call the set $\Attr_1^{G}(S)$ the \emph{$1$-attractor
of~$S$} and $\sigma$ an \emph{attractor strategy for~$S$}.
\fi
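The fixpoint characterisation above translates directly into code; the
following Python sketch (illustrative identifiers, with \texttt{G.E} the
successor map and \texttt{G.V1}, \texttt{G.V2} the partition of the states)
computes the attractor naively in time $\Oh(\abs{\V{}}\cdot\abs{E})$, and a
linear-time variant would additionally maintain, for each opposing state, a
counter of successors not yet in the attractor:
\begin{verbatim}
def attr(G, player, S):
    # Least fixpoint: start from S and add states from which
    # `player` can force the play into the current set.
    A = set(S)
    mine = G.V1 if player == 1 else G.V2
    changed = True
    while changed:
        changed = False
        for q in (G.V1 | G.V2) - A:
            if ((q in mine and G.E[q] & A)
                    or (q not in mine and G.E[q] <= A)):
                A.add(q)
                changed = True
    return A
\end{verbatim}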
The \emph{$2$-attractor} of a set~$S$, denoted by $\Attr_2^{G}(S)$,
and attractor strategies for \Pl2 are defined symmetrically.
Notice that for any set~$S$, the~set $\V{}\setminus\Attr_1^G(S)$ is a $1$-trap,
and if $S$~is a subarena ($2$-trap), then $\Attr_1^G(S)$ is also a
subarena ($2$-trap).
Analogously, $\V{}\setminus\Attr_2^G(S)$ is a $2$-trap, and if $S$~is a
subarena ($1$-trap), then $\Attr_2^G(S)$ is also a subarena ($1$-trap).
\paragraph{Convention}
We often drop the superscript~$G$ from the expressions defined above,
if no confusion arises, \eg by writing $\Out(\sigma,q_0)$ instead of
$\Out^{G}(\sigma,q_0)$.
\section{The Flock vision}
In this section, we present our vision for {\em Flock\xspace}, a reference architecture to support the canonical data science lifecycle for EGML applications. {\em Flock} is our vehicle to explore assumptions (\S~\ref{sec:vantagepoint}), discover open problems and validate initial solutions (\S~\ref{sec:problems}).
We start from a key observation: {\em Machine Learning models are software artifacts derived from data.} The resulting dual nature of software artifacts and derived data provides us with a useful lens to understand the role of the DB community in the EGML revolution.
The lifecycle shown in Figure~\ref{fig:flock} begins with a (typically) offline phase, where a data scientist (and more and more frequently any software engineer) gathers data from multiple data sources, transforms them via several featurization steps and models reality using learning algorithms. Today, this phase is very manual and sadly closer to a black art than an engineering discipline. Looking at ML as software, we expect the ML and Software Engineering communities to provide us with automation~\cite{automl_book}, tooling, and engineering best practices---ML will become an integral part of the DevOps lifecycle. Looking at ML models as derived data, the DB community must address data discovery, access control and data sharing, curation, validation, versioning and provenance~(\S~\ref{sec:datamanagement}). Moreover, today's prevalent abstraction for data science is imperative Python code orchestrating data-intensive processing steps, each performed within a native library. This suggests that end-to-end ML pipelines can be approached as inherently optimizable dataflows (\S~\ref{sec:inference}).
The second stage in the lifecycle is entered when a model is selected and is ready to be deployed. Using the models-as-software lens, deployment consists of packaging the entire inference pipeline (model + all data preprocessing steps) in a way that preserves the exact behavior crafted by the data scientist in the training environment. Next, deployment involves finding a suitable hosting infrastructure for scoring the model. Today's best practice is to package models in costly containers and hope that enough of the environment is preserved to ensure correctness\footnote{This is optimistic (e.g., is floating point precision guaranteed when running a container across Linux/Windows, x64/ARM?)}. Recall that in EGML settings individual decisions could be very consequential (e.g., loan acceptance, or choice of medical treatment), so ``average model accuracy'' is not a sufficient validation metric. Switching to our models-as-data lens, we observe that they must be subject to GDPR-style scrutiny, and their storage and querying/scoring must be secured and auditably tracked---e.g., \cite{carlini2018secret} has shown how to extract credit card numbers memorized in a trained neural network language model. Also, privacy and fairness implications must be handled carefully. Moreover, as the underlying data evolves, models need to be updated. To retain consistency for complex applications, multiple models might have to be updated transactionally. DBMSs have long provided these types of enterprise features for operational data, and we propose to extend them to support model scoring. While this was our primary motivation, our early experiments suggest that in-DB model scoring actually allows us to deliver $5\times$ to $24\times$ speedups over standalone state of the art solutions!
Model predictions usually come in the form of single numbers or vectors of numbers (e.g.,\ the probability of each class in a classification problem). To act on a prediction, it must be transformed to domain terms (e.g.,\ the name of the winning class). But actions are typically more nuanced and involve policies that encode business constraints and might actually override a model's prediction under certain circumstances. Systematizing this policy space is important, as we have discussed in~\cite{floratou2017dhalion}.
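As a toy illustration (all identifiers and thresholds here are
hypothetical, not an actual Flock API), such a policy layer might look
as follows:
\begin{verbatim}
def act(features, model):
    # Translate a raw model score into a domain action, with a
    # hand-written business rule that can override the model.
    p_default = model.predict(features)   # assumed scoring call
    if features["is_existing_vip_customer"]:
        return "approve"                  # business constraint wins
    return "approve" if p_default < 0.25 else "manual_review"
\end{verbatim}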
Besides basic orchestration (of what happens when) in the project lifecycle, management and governance of data and models throughout is vital. Access to a deployed model must be controlled, similar to how access to data or a view is controlled in a DBMS. Provenance here plays a key role and has two distinct applications---looking at models as software artifacts, we must be able to verify them or debug them, even as they evolve due to re-training. From a model-as-data viewpoint, we must be able to determine how a model was derived and from which snapshot of (training) data, in order to interpret the predictions and answer questions such as whether and why certain decisions were biased.
This leads to the need for pervasive and automated tracking of provenance from training through deployment to scoring (\S~\ref{sec:datamanagement}).
Given this context, we argue that: 1) {\em ML development/training will happen in the Cloud}; 2) {\em Models must be stored and scored in managed environments such as a DBMS}; and 3) {\em Provenance needs to be collected across all phases}.
\section{The vantage point}
\label{sec:vantagepoint}
\noindent Our perspective on {\em what Enterprise-grade ML (EGML) will look like in 10 years} is shaped by multiple inputs.
\vspace{2mm}
{\noindent \em \large First-hand experience.} Collectively, the authors of this paper have extensive experience in using ML technologies in production settings, e.g., content recommenders~\cite{yahoo-coke}, spam filters~\cite{yahoo-spam}, big data learning optimizers~\cite{costLearner,cardLearner,raqo-full,raqo}, ML-based performance debuggers, Azure cloud optimizations based on customer load predictions, self-tuning streaming systems, and auto-tuning infrastructures for SQL~Server internals. Many of us have also been working on systems for ML technologies, including big data infrastructure \cite{hydra,hadoop,netco}, ML toolkits~\cite{ml.net,nimbus-ml,onnx-runtime}, and the systems that orchestrate it all in the cloud~\cite{azureml}.
This experience has led to one key insight: {\bf ``An ML model is software derived from data''}. This means that ML presents characteristics that are typical of software (e.g., it requires rich and new CI/CD pipelines), and of data (e.g., the need to track lineage)---hence, the database community is well positioned to play a key role in EGML, but much work is needed. We discuss some of the problems we tackle in \S~\ref{sec:problems}.
A second---and painfully clear---observation is that {\bf the actual model development represents less than 20\% of most data science project lifecycles}~\cite{kagglesurvey}. The majority of the time is spent on collecting and cleaning the data, with the remainder spent on operationalizing the best model.
\vspace{2mm}
{\noindent \em \large Conversations with enterprises.}
We have engaged with many large, sophisticated enterprises, including: (i) a financial institution seeking to streamline its loan approval process, (ii) a marketing firm identifying which customers to target for promotions, (iii) a sports company predicting athletes' performance, (iv) a health insurance agency aiming to predict patient recidivism, and (v) a large automotive company modeling recalls, customer satisfaction, and marketing.
A key learning from these conversations is that compared to ``unicorn'' ML applications like web search, these enterprise applications are characterized by smaller teams with domain expertise rather than deep algorithmic or systems expertise. On the other hand, their platform requirements are much more stringent around auditing, security, privacy, fairness, and bias.\footnote{This is not intended to suggest that unicorn applications do not share these requirements; rather, enterprise teams want off-the-shelf platforms that have a much higher level of support built-in, whereas unicorn teams have typically built everything from scratch.} This is particularly true for regulated industries. Existing ML technologies are not ready to support these applications in a safe, cost-effective manner. We also learned that it is increasingly common for a data science project to produce a large number of localized/specialized models, rather than a single uber model---e.g., a scam-detection model per ATM vs.\ a single global model. This creates novel challenges in model management, deployment, tracking, (transactional) rolling forward/backward among model versions, etc. In short: {\bf Typical applications of ML are built by smaller, less experienced teams, yet they have more stringent demands}.
\vspace{2mm}
{\noindent \em \large GitHub analysis.}
To get a feel for trends in the broader data science community, we {\em downloaded and analyzed $>4$ million public Python notebooks from GitHub}, plus hundreds of thousands of data science pipelines from within Microsoft~\cite{dsonds}. Over 70\% of the notebooks focused on ML. Moreover, we analyzed hundreds of versions of popular Python packages. The details of this analysis are beyond the scope of this paper, but we make a few key observations. Figure~\ref{fig:coverage} shows the fraction of notebooks that would be completely supported if we only covered the K most popular packages (for varying values of K). The shift between 2017 and 2019 suggests that the field is still expanding quickly (many more packages) but also that we are seeing an initial convergence (a few packages are becoming dominant). For example, \texttt{numpy}, \texttt{pandas} and \texttt{sklearn} are solidifying their position. We also observe very limited adoption of solutions for testing/CI-CD/model tracking (MLFlow~\cite{mlflow} is still not very popular despite its relevance to EGML).
Overall this suggests that {\bf systems aiming to support EGML must provide broad coverage, but can focus on optimizing a core set of ML packages.}
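Schematically, the coverage metric of Figure~\ref{fig:coverage} can be computed as in the following minimal Python sketch (the notebook-to-imports data shown here is hypothetical):
\begin{verbatim}
# Minimal sketch: fraction of notebooks fully supported when only
# the K most popular packages are covered.
from collections import Counter

notebooks = [                      # imports per notebook (toy data)
    {"numpy", "pandas"},
    {"numpy", "sklearn"},
    {"pandas", "rare_pkg"},
]
counts = Counter(p for nb in notebooks for p in nb)

def coverage(k):
    top_k = {p for p, _ in counts.most_common(k)}
    covered = sum(nb <= top_k for nb in notebooks)  # set inclusion
    return covered / len(notebooks)

for k in (1, 2, 3):
    print(k, round(coverage(k), 2))
\end{verbatim}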
\begin{figure}
\centering\includegraphics[width=0.9\columnwidth]{figures/coverage_annotated.pdf}
\vspace{-2mm}
\caption{Notebook coverage (\%) for top-K packages.}
\label{fig:coverage}
\vspace{-2mm}
\end{figure}
\input{related-systems.tex}
\input{related-work.tex}
\section{Introduction}
\label{sec:intro}
Machine learning (ML) has proven itself in high-value consumer
applications such as search ranking, recommender systems and spam
detection~\cite{yahoo-spam, yahoo-coke}. These applications are built and
operated by large teams of experts, and run on massive dedicated
infrastructures.\footnote{ML.NET~\cite{ml.net} alone took dozens of
engineers over a decade.} The (exorbitant) human and hardware costs are
well justified by multi-billion dollar paydays. This approach is
entirely impractical when it comes to the (ongoing) mainstream adoption
of ML.
\interfootnotelinepenalty=10000
Enterprises in every industry are developing strategies for digitally
transforming their business at every level. The core idea is to
continuously monitor all aspects of the business, actively interpret the
observations using advanced data analysis---including ML---and integrate
the learnings into appropriate actions that improve business outcomes. We
predict that in the next 10~years, hundreds of thousands of small teams
will build millions\footnote{From our analysis on \textgreater4M Python notebooks from public GitHub repositories~\cite{dsonds}, we conservatively estimate that 10\% of the world's developers will use ML in
the next 10 years---totalling 20M engineering years.} of ML-infused applications---most just moderately remunerative,
but with huge collective value.
When it comes to leveraging ML in {\em enterprise} applications,
especially in regulated environments, the level of scrutiny for data
handling, model fairness, user privacy, and debuggability will be
substantially higher than in the first wave of ML applications. Consider
the healthcare domain: ML models may be trained on sensitive medical data,
and make predictions that determine patient treatments---copying CSV
files on a laptop and maximizing {\em average} model accuracy just
doesn't cut it! We refer to this new class of applications as {\em Enterprise Grade Machine Learning (EGML).}
In this paper, we speculate on how ML and database systems will evolve to
support EGML over the next several years. Database management systems
(DBMSs) are the repositories for high-value data that demands security,
fine-grained access control, auditing, high-availability, etc. Over the last 30~years, whenever
a new data-related technology has gained sufficient adoption, inevitably
DBMS vendors have sought to absorb the technology into their mainstream
products. Examples include Object-Oriented~\cite{oodbms}, XML~\cite
{xmldbms}, and Big Data~\cite{SQLServer2019} technologies. Indeed, ML is
no exception if we consider SQL Server Analysis Services~\cite{ssas}, SQL
Server R and Python integration~\cite{ssml}, and Big Query support for
ML~\cite{bigquery-ml}. {\em Is the future then that ML will be
assimilated by the DBMS?}
\begin{figure*}
\centering\includegraphics[width=1.9\columnwidth]{figures/flock}
\vspace{-2mm}
\caption{Flock\xspace reference architecture for a canonical data science lifecycle. }
\label{fig:flock}
\vspace{-2mm}
\end{figure*}
We believe that this is too simplistic, and understanding the path forward
requires a more careful look at the various aspects of EGML, which we divide
into three main categories: {\em model development/training}, {\em model
scoring} and {\em model management/governance}.
\indent \sstitle{Train in the Cloud.} First, we are witnessing an ongoing
revolution in frameworks for training an increasingly broad range of ML model
classes. Their very foundations are still undergoing rapid
development~\cite{chen2018neural}. Often, these developments happen in
conjunction with innovations in hardware. The rapidly expanding community of
data scientists who train models are developing sophisticated environments for
managing and supporting the iterative process of data exploration, feature
engineering, model training, model selection, model deployment, etc.---prominent examples include~\cite{mlflow, azureml}. These large, complex, evolving infrastructures are a
good fit with managed cloud service infrastructure. Moreover, model training
requires centralized data, is characterized by spiky resource usage, and
benefits from access to the latest hardware. This leads us to believe that model
training and development will happen in either private or public clouds.
\indent \sstitle{Score in the DBMS.} Second, while the models may be centrally
trained, the resulting inference pipelines will be deployed everywhere: in the
cloud, on-prem, and on edge devices to make inferences (``scoring'') where the
data is. This raises the question of whether doing inference on data stored in a
DBMS can be done as an extension of the query runtime, without the need to
exfiltrate the data. We strongly believe this can and must be supported. It
appears likely that the most widely studied or promising families of models can
be uniformly represented \cite{mlflow,onnx}. Given a particular model we can
express how to score it (i.e., perform inference) on an input using an appropriate algebra, and compile these algebraic structures into highly optimized code for different
execution environments and hardware~\cite{tvm}. Taken together, these
observations suggest that we need to consider how to incorporate ML scoring as a
foundational extension of relational algebra, and an integral part of SQL query
optimizers and runtimes---we present a concrete proposal in
\S~\ref{sec:inference}.
\indent \sstitle{Governance everywhere.} Third, we believe that all data,
including deployed models---models are, in fact, best thought of as
derived data---and the inferences made using them, will need to be
robustly governed. The deployment of ML models and their use in
decision making via inference
leads to many significant challenges in governance. For example,
regulations such as GDPR~\cite{gdpr} and concerns such as model bias and
explainability motivate tracking provenance all the way from data used
for training through to decisions based on scoring of trained
models. In turn, this requires efficient support for versioning data.
While the ML community is focused on improvements in algorithms and
training infrastructures, we see massive need for the DB community to
step up in the areas of secure data access, version management, and
provenance tracking and governance---we discuss initial work in this area
in \S~\ref{sec:datamanagement}.
In summary, the future is likely {\em cloudy with a high chance of DBMS, and governance throughout.} We describe how our vision is shaped by customer conversations, data and market analysis, and our direct experience as well as present several open problems. We conclude by highlighting promising initial results from a few of the solutions that we are working on.
\section{Open Problems \& Advances}\label{sec:problems}
The vision for EGML we presented is an exciting one and poses many challenging problems. We summarize some key challenges below and present some of our ongoing work.
We focus on two categories that require attention from the DB community and are not well understood: 1) the systems support required to go from a trained model to decisions, and 2) data management for ML.
\subsection{From Model to Decision: Inference}
\label{sec:inference}
Much attention has been given to learning algorithms and efficient model training, but models only have value insofar as they are used for inference, to create insights and make decisions. This typically involves a complicated setup of containers for deploying the trained model (as executable code), with applications invoking them via HTTP/REST calls. Further, the containerized code often extends model inference with the implementation of complex application-level policies.
While this containerized approach offers a desirable decomposition of the problem between models and the applications using them, it has significant drawbacks: (1) Many applications use more than one model, with each model applied to the outcome of some (potentially different) data processing step. These assemblies of models and preprocessing steps should be updated atomically. (2) It seems unlikely that this solution will fit the scenarios emerging from the millions of applications we expect in this space (e.g., latency-sensitive decisions and large batch predictions are poorly served). (3) Mixing application-level policies and inference logic makes it hard to separate and measure the impact of the two.
We believe that models should be represented as first-class data types in a DBMS. This will naturally address (1) by allowing database transactions to be used for updating multiple deployed models. To address (2), we believe inference/scoring should be viewed as an extension of relational query processing, and argue for moving model inference close to the data and performing it in-DBMS, without external calls for common types of models. Naturally, this calls for a separation of inference from application-level logic. We present a clean framework for (3) after we briefly summarize our early results on in-DBMS inference. A more in-depth discussion on inference appears in~\cite{raven}.
\vspace{2mm}
{\noindent \em \large In-DBMS inference.} While in-DBMS inference appears desirable, a key question arises: {\em Can in-DBMS model inference perform as well as standalone dedicated solutions?}
To this end, several recent works~\cite{madlib,lara,laradb,levelheaded} in the database community explore how linear and relational algebra can be co-optimized. To carry this investigation further, we integrated ONNX Runtime~\cite{onnx-runtime} within SQL Server and developed an in-database cross-optimizer between SQL and ML to enable optimizations across hybrid relational and ML expressions~\cite{raven}. Further, we observe that practical end-to-end prediction pipelines are composed of a larger variety of operators (e.g., featurizers such as text encoding and models such as decision trees) often assembled in Python. We leverage static analysis to derive an intermediate representation (IR) amenable to optimization. The list of optimizations we have been exploring is therefore more comprehensive than prior work, and includes the following:
%
\begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt]
\item predicate-based model pruning, which uses selections from the SQL subsection of the query to simplify the ML subsection;
\item model-projection pushdown, which automatically prunes (projects out) unused input feature-columns exploiting model-sparsity (see the sketch after this list);
\item model clustering that compiles simplified models for different parts of the input based on data statistics;
\item model inlining, which transforms ML operators to relational ones (similar in spirit to~\cite{froid});
\item physical operator selection based on statistics, available runtime (SQL/ONNX/Python UDFs~\cite{spexternalsql}) and hardware (CPU,~GPU).
\end{itemize}
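To illustrate, the following minimal Python sketch shows the idea behind model-projection pushdown for a sparse linear model (table and column names are hypothetical): columns whose learned weight is zero cannot affect the score, so they can be projected out of the upstream scan.
\begin{verbatim}
# Minimal sketch of model-projection pushdown: prune feature
# columns that a sparse linear model provably ignores.
import numpy as np

feature_cols = ["age", "income", "zip_code", "tenure"]
weights = np.array([0.7, 1.3, 0.0, 0.0])   # sparse trained model

used = [c for c, w in zip(feature_cols, weights) if abs(w) > 1e-9]
sql = f"SELECT {', '.join(used)} FROM customers"
print(sql)   # -> SELECT age, income FROM customers
\end{verbatim}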
\vspace{2mm}
In Figure~\ref{fig:inference-eval} we present two key results: (1) performance of integrating ONNX Runtime (ORT) within SQL Server, both through in-process execution (Raven) and out-of-process (Raven ext.); and (2) the benefit of cross-optimizations, including model inlining and predicate-based model pruning. The results show that the SQL Server integration provides up to a $5.5\times$ speedup over standalone ORT (due to automatic parallelization of the inference task in SQL Server) and up to $24\times$ from our combined optimizations. Early results indicate that in-DBMS inference is very promising.
\begin{figure}[t!]
\includegraphics[width=0.32\textwidth]{figures/sonnx2}
\hspace{0ex}\includegraphics[width=0.15\textwidth,trim={2.6cm 0 0 0},clip]{figures/inline}
\vspace{-2mm}
\caption{Benefits of integrating ONNX Runtime in SQL Server (left) and of cross-optimizations (right---red bar shows speedup due to model inlining, blue bar also includes predicate-based model pruning).
\label{fig:inference-eval}}
\vspace{-2mm}
\end{figure}
\vspace{2mm}
{\noindent \em \large Bridging the model-application divide.}
ML applications need to transform the model predictions into actionable decisions in the application domain. However, the mathematical output of the model is rarely the only parameter considered before a decision is made. In real deployment scenarios, business rules and constraints are important factors that need to be taken into account before any action is taken. As a concrete example, we have built models to automate the selection of parallelism for large big data jobs to avoid resource wastage (in the context of Cosmos clusters~\cite{morpheus}). While models are generally accurate, they occasionally predict resource requirements in excess of the amounts allowed by user-specified limits. Business rules expressed as policies then override the model.
Obviously, business rules and requirements can vary between different applications and environments. To that end, we employ a generic and extensible module~\cite{floratou2017dhalion} that takes as input user-defined \textit{policies} which introduce various business constraints on top of EGML workloads. The module continuously monitors the output of the ML models and applies the specified policies before taking any further action in the application domain. It also maintains the system state and the actions taken over time, allowing us to easily debug and explain the system's actions. Finally, it makes sure that the actions happen in a transactional way, rolling back in case of failures when needed. Overall, this closes the loop between model and application, providing the visibility necessary for both debugging and end-to-end accountability.
Next, we discuss the requirements for managing data for ML.
\subsection{Data Management for ML}
\label{sec:datamanagement}
\vspace{2mm}
{\noindent \em \large Data Discovery, Access and Versioning.} One of the main challenges ML practitioners face today revolves around data access and discovery. Training data commonly contains tabular data, but also images, video or other sensor data. This gives rise to a predominantly file-based workflow. Only a small fraction of the $>4$~million notebooks we analyzed makes use of a database access library. This is surprising, as the vast majority of the pipelines ultimately use Pandas~\cite{pandas}, a structured DataFrame library, to interact with this data. This state of the art is deeply unsatisfying: {\em Data Discovery support is virtually non-existent}. This is especially troubling as data augmentation is one of the best strategies to improve a model.
Worse, \emph{data versioning is largely unsolved in this paradigm:} A model is the result of both its training code and the training data. Both need to be versioned. And file versioning technologies fail to address key needs of data versions; they often can only represent a deletion via a history rewrite. More fundamentally, files are not the atomic unit of training data: an individual data point may be stored in a file, but equally likely, many files represent one data point, or one file contains many data points.
Hence, we believe that {\bf there is an open need for queryable data abstractions, lineage-tracking and storage technology that can cover heterogeneous, versioned, and durable data.}
\vspace{2mm}
{\noindent \em \large Model Management.} We have argued that ML models are software artifacts created from data, and must be secured, tracked and managed on par with other high-value data. DBMSs provide a convenient starting point thanks to their support for enterprise-grade features such as security, auditability, versioning, high availability, etc. To be clear, we are not suggesting that all data management needs to be inside a relational DBMS; indeed, we see a trend towards comprehensive data management suites that span all of a user's data across one or more repositories. Our point is that managing models should be treated on par with how high-value data is managed, whether in a DBMS (the most widely available option currently) or in emerging cross-repository managed environments.
\begin{figure}
\centering\includegraphics[width=\columnwidth]{figures/prov}
\vspace{-2mm}
\caption{Data science pipeline}
\label{fig:ds}
%
\vspace{-1em}
\end{figure}
\vspace{2mm}
{\noindent \em \large Model Tracking and Provenance.} Models are software of consequence. Their genesis needs to be tracked. To achieve that, the full \emph{provenance} of a model must be known for debugging/auditing.
We need to capture not only the code that trained the model, but also the (training) data that went into it, together with its full, tamper-proof lineage. There are multiple industry efforts to capture the inner training loop of this lineage~\cite{mlflow, kubeflow}. This must be expanded to the full lineage, and also automated to achieve the scale we expect.
Figure~\ref{fig:ds} shows an example of a data science pipeline. The data scientist first selects a subset of patient-related data stored in a relational database and saves them in a CSV file. This file can then be accessed by a Python script to train a machine learning model. This pipeline involves different tools and languages (SQL, Python) and data formats (SQL tables, flat files).
Tracking provenance across such pipelines can enable various important applications such as model linting, compliance testing, and impact analysis. In this particular example, the label used to train the model (TARGET column) is also part of the feature set. Automatically analyzing the Python script to identify the provenance relationships between the columns of the dataset and the trained model can help identify such mistakes. Similar techniques can also be used to detect compliance violations such as usage of PII information by a machine learning model. In our example, the dataset contains sensitive patient data that include the SSN and ID columns. However, these columns are dropped from the Python script before training the machine learning model, and as a result the pipeline is compliant with the regulations. Finally, automatic tracking of provenance can help identify the set of applications that are affected by changes in the original data sources. For example, one can use such information to detect which machine learning models will be impacted when a column of the dataset in the database is dropped because it contains corrupted data.
Enabling such applications by automatically capturing provenance information across such pipelines is challenging:
\begin{itemize}
\item \sstitle{C1. Provenance data model.} Data elements in EGML workloads are polymorphic (e.g., tables, columns, rows, ML models, and hyper-parameters) with inherent temporal dimensions (e.g., a model may have multiple versions, one for each re-run of a training pipeline). As such, and in contrast to traditional data models of provenance over DBMSs, EGML workloads dictate polymorphic and temporal provenance data models. Such data models are hard to design, capture, maintain, and query.
\item \sstitle{C2. Provenance capture.} EGML workloads typically span multiple systems and runtimes (e.g., a Python script may fetch data from multiple databases to train a model). These systems might have different architectures and programming constructs (e.g., declarative vs. imperative interfaces). Extracting a meaningful provenance data model in this setting requires different capture techniques tailored specifically to each system and runtime.
\item \sstitle{C3. Provenance across disparate systems.} Even if we capture provenance on top of each system and runtime in isolation, we still need to combine this information across systems (e.g., if we change a column in a database, models trained in Python that depend on this column may need to be invalidated and retrained). Hence, EGML workloads require protocols for consolidating and communicating the provenance information across systems.
\end{itemize}
\sstitle{Our initial solution.} Our solution consists of three major modules: the \textit{SQL Provenance module}, the \textit{Python Provenance module} and the \textit{Catalog}. The Catalog (we use Apache Atlas~\cite{atlas}) stores all the provenance information and acts as the bridge between the SQL and the Python Provenance modules. It allows us to capture end-to-end provenance across different systems---hence, it provides a principled way to address \textbf{C3}. Next, we present the high-level architecture and some preliminary experimental results (full papers with the detailed description of the system design and the associated algorithms behind the SQL and Python Provenance modules are under preparation).
\vspace{2mm}
\sstitle{Provenance in SQL.} Our SQL provenance module currently focuses on capturing coarse-grained provenance under two modes, traditionally referred to as eager and lazy. Under eager provenance capture, given a query, the module parses it to extract coarse-grained provenance information (i.e., input tables and columns that affected the output, with connections modelled as a graph). Under lazy provenance capture, the module gets as input the query log of the database and constructs the provenance data model, only this time accounting for the whole query history. Under both modes, the module populates the Catalog accordingly.
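As a concrete illustration of eager capture, the following minimal Python sketch extracts the coarse-grained inputs of a query, using the open-source {\tt sqlglot} parser as a stand-in for the parsing infrastructure described next (the query and all names are hypothetical):
\begin{verbatim}
# Minimal sketch of eager coarse-grained provenance capture.
import sqlglot
from sqlglot import exp

query = ("SELECT p.age, p.ssn FROM patients p "
         "JOIN visits v ON p.id = v.pid")
tree = sqlglot.parse_one(query)

inputs = {
    "tables":  sorted({t.name for t in tree.find_all(exp.Table)}),
    "columns": sorted({c.sql() for c in tree.find_all(exp.Column)}),
}
# Provenance edges (inputs -> query) would be pushed to the
# Catalog at this point.
print(inputs)
\end{verbatim}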
To scale across databases, the parsing module utilizes Apache Calcite~\cite{calcite} that provides parsers and adapters across databases---hence, provides us a way towards addressing \textbf{C2}. For cases where Apache Calcite cannot parse queries, we specialize to the parser of the corresponding engine. Furthermore, note that all data stored in the Catalog is versioned (e.g., an \texttt{INSERT} to a table results in a new version of the table in the provenance data model)---hence, we address the temporal aspect of \textbf{C1}. The table below shows the provenance capture performance (latency and provenance graph size) for queries generated out of all templates in TPC-H and TPC-C:
\vspace{-3mm} %
\begin{table}[h]
\centering
\small
\begin{tabular}{| c|c|c|c| }
\hline
\textbf{Dataset} & \textbf{\#Queries} & \textbf{Latency} & \textbf{Size(nodes+edges)}\\
\hline
TPC-H & 2,208 & 110s & 22,330\\ \hline
TPC-C & 2,200 & 124s & 34,785 \\
\hline
\end{tabular}
\label{t:sqlperf}
\end{table}
\vspace{-1.5mm}
These early findings indicate that a) the per-query capture latency can be significant and b) the provenance data model can become substantially large in size (e.g., a table with as many versions as the number of insertions that have happened to it). For these reasons, we develop optimized capture techniques, through compression and summarization, which are essential towards addressing \textbf{C1}.
\vspace{2mm}
\sstitle{Provenance in Python.} The Python provenance module parses scripts and automatically identifies the lines of code that correspond to feature extraction and model training using a combination of standard static analysis techniques and a knowledge base of ML APIs that we maintain. Through this process, we are able to identify which Python variables correspond to models, hyperparameters, model features and metrics. We can also track the transformations performed on these variables and eventually connect them with the datasets used to generate training data. The Python provenance module accesses the Catalog to collect the output of the SQL provenance module and eventually connect the datasets used in the Python scripts to the columns of one or more DBMS tables.
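The static-analysis idea can be sketched in a few lines of Python (the ``knowledge base'' and the analyzed script are hypothetical toy versions; {\tt ast.unparse} requires Python 3.9+): walk the script's AST and flag calls whose attribute name appears in the knowledge base of ML training APIs; column-level tracking of the call arguments then allows mistakes such as the TARGET column appearing among the features to be caught.
\begin{verbatim}
# Minimal sketch: find model-training calls via static analysis.
import ast

TRAINING_APIS = {"fit", "fit_transform"}   # toy knowledge base

script = """
model = LogisticRegression()
model.fit(df[['age', 'TARGET']], df['TARGET'])
"""

for node in ast.walk(ast.parse(script)):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr in TRAINING_APIS):
        print(f"training call at line {node.lineno}: "
              f"{ast.unparse(node.func)}")
\end{verbatim}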
\vspace{-1.5mm}
\begin{table}[h]
\centering
\small
\begin{tabular}{| c|c|c|c| }
\hline
\textbf{Dataset} & \textbf{\#Scripts} & \textbf{\%Models} & \textbf{\%Training Datasets}\\
& & \textbf{ Covered} & \textbf{Covered}\\ \hline
Kaggle & 49 &$95\%$ & $61\%$ \\ \hline
Microsoft & 37 & $100\%$ & $100\%$ \\
\hline
\end{tabular}
\label{tab:pythoncoverage}
\end{table}
\vspace{-1.5mm}
The table above shows the coverage currently achieved by the provenance module on the Kaggle dataset~\cite{kaggle} and a Microsoft-internal dataset of scripts deployed in production. In this experiment, we evaluate how often the module correctly identifies ML models and training datasets in the Python scripts.
\section{Conclusion and Call to Action}
\label{sec:conclusions}
We live in interesting times. Database architectures are undergoing major transformations to leverage the elasticity of clouds, and a combination of increased regulatory pressures and data sprawl is forcing us to rethink data governance more broadly. Against this backdrop, the rapid adoption of ML in enterprises raises foundational questions at the intersection of model training, inference and governance, and we believe the DB community needs to play a significant role in shaping the future.
\section*{Acknowledgements}
We thank Doris Xin, Mohammad Hossein Namaki and Yi Zhang for their contributions in Sections~\ref{sec:inference} and \ref{sec:datamanagement}---full-length papers on each system contribution are ongoing.
\bibliographystyle{abbrv}
|
1,941,325,220,404 | arxiv | \section{Introduction\label{sec:intro}}
The rapid time variability of transient black hole X-ray binaries (BHXRBs) provides a unique tool to understand physical processes like accretion, disk geometry, etc., in the vicinity of the central engine. During X-ray outbursts, BHXRBs undergo various spectral state transitions, namely (i) low/hard (LH), (ii) thermal or high soft (HS), (iii) very high state (VHS)/steep power law (SPL), and (iv) hard and soft intermediate (HIMS and SIMS) states \citep{Bel00, Rem06}. The HS state spectrum generally consists of non-thermal flux $\textless$ 25\% of the total emission, while the thermal emission from the accretion disk (around the compact object) constitutes the remaining 75\%, with either weak or no quasi-periodic oscillations (QPOs) seen in the light curve. On the other hand, the VHS/SPL exhibits peculiar properties such as the disk flux varying from 35\% to 80\%, a photon index ($\Gamma$) $\ge$ 2.4 and a luminosity (L) $\textgreater$ 10\% of the Eddington luminosity (L$_{Ed}$). Low-frequency QPOs are often observed in this state. The LH state is characterized by the disk flux contributing $\textless$ 20\%, the non-thermal flux dominating at $\ge$ 80\%, a spectral index of 1.4 $\le$ $\Gamma$ $\le$ 2.1, and a luminosity L $\leq$ 1\% of L$_{Ed}$; sometimes low-frequency QPOs ($\textless$ 0.1 Hz) are observed \citep{Rem06}.
\citet{van89} suggests that the QPO frequencies range from $\sim$mHz to kHz ($\leqslant$1.2 kHz). Based on the peak frequency ($\nu$), QPOs are categorized into low-frequency QPOs (LF--QPOs) and high-frequency QPOs (HF--QPOs). In the case of BHXRBs, the LF--QPOs are characterized by 0.1 $\le$ $\nu$ $\le$ 30 Hz. Using the definition of the Quality factor\footnote{The Q--factor is defined as the ratio between the QPO centroid frequency and the full width at half maximum (FWHM) of the QPO peak.} (also referred to as the Q--factor), representing the broadness of a QPO peak \citep{Belloni2014}, the LF--QPOs are further sub-categorized into three types: Type--A, Type--B and Type--C \citep{Hom01, Rem02}. Type--A LF--QPOs are characterized by a weak (a few percent rms amplitude) and broad peak (Q--factor $\leq$ 3) around 7--9 Hz, usually observed on top of weak red noise \citep{Hom01, Cas05}. Type--B LF--QPOs represent a relatively strong ($\sim$4$-$7\% rms amplitude) and narrow peak (Q--factor $\sim$5--7) between 5$-$7 Hz, usually associated with weak red noise (a few percent rms) \citep{Mot11}. The type--C LF--QPOs are characterized by a strong (up to $\sim$21\% rms amplitude) and narrow peak (Q--factor $\sim$5--12) with a variable centroid frequency between 0.1--15 Hz, superposed on a strong flat-top noise \citep{Wij99, Motta15}.
Different models have been proposed to explain the origin and physical nature of QPOs in X-ray binaries (XRBs). The study of LF--QPOs provides an indirect way to understand the accretion flow around the compact object in XRBs. The relativistic precession model (RPM), based on general relativity and proposed by \citet{Ste98}, explains the origin and evolution of LF--QPOs and a few HF--QPOs in neutron star X-ray binaries. According to this model, QPOs originate as a result of the nodal precession, periastron precession, and Keplerian motion of a luminous blob of material in the accretion flow around the compact object. \citet{Ing09} extended the model proposed by \citet{Ste98}, considering the complete inner flow instead of a luminous blob. The authors tried to demonstrate the origin of LF--QPOs and the noise linked to them. Later, \citet{Motta14b} extended this model to black hole X-ray binaries.
The Galactic microquasar GRS 1716--249 (or GRO J1719--24 or Nova Oph 1993) was first detected on 1993 September 25, independently and simultaneously by the BATSE instrument aboard CGRO \citep{Har93} and the SIGMA telescope aboard GRANAT \citep{Bal93}. The optical counterpart was discovered by \citet{del94} and \citet{Mas96} as a low mass main-sequence star of spectral type K (or later). The mass of the companion star is found to be $\sim 1.6 M_\odot$, and the system has an orbital period of $\sim$ 14.7 hr. The compact object is believed to be a stellar-mass black hole of mass $\sim 4.9 M_\odot$, located at a distance of $\sim$ 2.0--2.8 kpc. The only observed historical major outburst from GRS 1716--249 coincided with its discovery in 1993, and was later studied in detail by \citet{van96} to investigate the nature of its temporal variability. During the entire 80 days of its outburst, the authors witnessed a QPO at $\nu\sim$ 0.04 Hz at the beginning of the observations, which gradually shifted to 0.3 Hz at the end. A constant phase lag of $0.072\pm0.010$ radian was also observed in the frequency range 0.02$-$0.20 Hz. \citet{van99} observed that the $\le$ 1 Hz QPO was similar to type II burst profiles of Rapid Bursters associated with neutron star accretors. This variability feature suggests that the origin of the $\le$ 1 Hz QPO is independent of the nature of the accretor, and that the $\sim$ 0.04 Hz QPO originates from thermal viscous instabilities in the accretion disk surrounding the black hole.
A study of the energy spectra of GRS 1716--249 during the low hard state was presented by \citet{Rev96}, applying different models for the spectral fitting, such as Comptonization and optically thin thermal bremsstrahlung. \citet{Lin05} carried out a detailed study of spectral variability and the soft $\gamma$-ray flux during a 1000 day period in 1993 -- 1995.
Using Monitor of All-sky X-ray Image\footnote{\url{http://maxi.riken.jp/top/index.html}} ({\it MAXI}) data, \citet{Negoro2016} reported the first detection of GRS 1716--249 on 18 December 2016 after $\sim$ 23 years. During the {\it MAXI} observation on 21 December 2016, the source was observed with a photon index of $1.62\pm0.06$ \citep{Masumitsu2016}. {\it Chandra} observations on 06 February 2017 revealed GRS 1716--249 to be in the hard spectral state with a photon index ($\Gamma) \sim 1.53$ \citep{Miller2017}. Based on {\it Swift} observations on 27 March and 2 April 2017, \citet{Armas2017} found that the source was transiting to the soft state. However, {\it Swift} observations on 2017, May 05 and 11 confirmed that the source had returned to the hard state \citep{Bassi2017}. \citet{Bassi2019} studied the 2016--2017 outburst of GRS 1716--249 in the radio and the X-ray bands. Those authors reported that GRS 1716--249 underwent a failed outburst because the source never exhibited the canonical high-soft spectral state during this outburst. The radio -- X-ray correlation shows that the source is positioned at the radio-quiet `outlier' branch \citep{Bassi2019}.
This investigation aims to present a spectro-temporal study of GRS 1716--249 during its recent outburst in 2016--2017, using two different observations performed on 2017 April 7 and 10 by the {\it Swift}/XRT and {\it NuSTAR} observatories. {\it NuSTAR} offers pile-up free performance (up to $\sim$100 mCrab), high energy resolution ($\sim$400 eV at 10 keV) and excellent calibration. This instrument provides a rare opportunity to study relativistically skewed iron line profiles and to constrain the inner disk radius with high precision \citep{Harrison2013, Miller2013, Pahari2015, Parker2015}. This paper is organized as follows. Section $\S$2 presents a detailed description of the observations and data reduction methodologies. Section $\S$3 describes our analysis and results for the aforesaid outburst. The last section, $\S$4, presents our discussion and conclusions on the major obtained results.
\begin{figure*}
\centering
\includegraphics[scale=0.34,angle=-90]{Images/MAXI_GRS_1716_2-20_and_HR2_OND_24May2019.eps}
\includegraphics[scale=0.34,angle=-90]{Images/SWIFT_BAT_Light_Curve_24May2019.eps}\vspace{2.5em}
\caption{{\bf Left: } One-day averaged light-curve (Top panel) from {\it MAXI} (2.0$-$20.0 keV) along with hardness ratio (Bottom panel) to display the long-term variations in GRS 1716--249. {\bf Right: } The {\it Swift}/BAT light curve in the energy range 15.0-50.0 keV. The blue vertical lines indicate the times of the observations used for the present study. Both observations are highlighted with the red star symbol in the {\it MAXI} and {\it Swift}/BAT plots, and represent the observed flux measured by {\it MAXI} and {\it Swift}/BAT.}
\label{fig:fig1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.33,angle=-90]{Images/Light_07Apr2017_E1_E2_E_24May2019.eps}
\includegraphics[scale=0.33,angle=-90]{Images/Light_10Apr2017_E1_E2_E_24May2019.eps}\vspace{1.5em}
\caption{{\bf Left:} {\it NuSTAR} light curves for epoch E1, binned over 120 s, in three different energy bands. The top, middle and bottom panels correspond to the energy bands 3.0-30.0, 30.0-80.0, and 3.0-80.0 keV, respectively. {\bf Right:} Same as the left panel, but for epoch E2 (see Table \ref{tab:tab1} for details of the data). Counts from both FPMA and FPMB are added to improve the $S/N$ ratio.}
\label{fig:fig2}
\end{figure*}
\section{Observation and Data Reduction \label{sec:ObsRedcut}}
\subsection{{\it NuSTAR} FPMA and FPMB \label{subsec:NuSTAR}}
During the 2016--2017 outburst of GRS 1716--249, {\it NuSTAR} observed this source on 2017 April 07 and 10 (Obs. ID: 90202055002 and 90202055004). The {\it NuSTAR} data were acquired using the two focal plane module telescopes (FPMA and FPMB). The net effective on-source exposure times after detector dead-time corrections are $\sim$18 ks and $\sim$16 ks, respectively. We refer to the two epochs as E1 and E2 in chronological order. The details of the observations can be found in Table \ref{tab:tab1}.
\begin{figure*}
\centering
\includegraphics[scale=0.32,angle=-90]{Images/Pds_07Apr2017_P8_A+B_LN_Rev7_24May2019.eps}
\includegraphics[scale=0.32,angle=-90]{Images/Pds_10Apr2017_P2_A+B_LN_Rev7_24May2019.eps}\vspace{3em}
\caption{PDS of the light curves in the 3.0$-$80.0 keV band for epochs E1 (a) and E2 (b), respectively. The bottom panels show the residuals for the same. The red dashed and the blue dotted lines in each plot represent the broad continuum and QPO (peaked at 1.20 Hz for E1 and 1.55 Hz for E2) fitted with a Lorentzian model. The solid black line indicates the total model. The observed QPOs have a significance of 6.9 and 8.1 $\sigma$, respectively.}
\label{fig:fig3}
\end{figure*}
We utilized the standard {\it NuSTAR} data analysis software ({\tt NUSTARDAS} v1.7.1) included in {\tt HEASOFT} v6.23 along with the calibration database {\tt CALDB VERSION 20180419}. We used the {\tt nupipeline} task (version 0.4.6, release date: 2016-10-06) for filtering event files and for making depth corrections for both telescopes. A circular region of radius 100 arcsec centered on GRS 1716--249 was used to extract the source events. For extracting the background events, a circular region of the same size as that of the source, with its center 5 arcmin away from the center of the source, was chosen to avoid contamination by the source. Science products, e.g., light curves, energy spectra, response matrix files (rmfs) and auxiliary response files (arfs), for both telescopes (FPMA and FPMB) were generated using the {\tt NUPRODUCTS} task. Light curves from both telescopes were merged to increase the signal-to-noise ratio. To minimize systematic effects, the energy spectra from both detectors were modeled simultaneously.
\begin{table*}
\centering
\caption{Details of our {\it Swift}/XRT and {\it NuSTAR} observations of GRS 1716--249.}
\begin{center}
\scalebox{0.8}{%
\begin{tabular}{ |l|c|c|c|c|c|c|c|c| }
\hline
\hline
Instrument & Observation & Observation & Observation & MJD & Epoch & Effective & Count rate & Observation \\
Name & ID & Start date & Start time & & Flag & exposure & (cts s$^{-1}$) & mode \\
& & (DD-MM-YYYY) & (hh:mm:ss) & & & time (ks) & & \\
\hline
{\it NuSTAR}/FPMA, FPMB & 90202055002 & 07-04-2017 & 14:26:09 & 57850 & E1 & $\sim$18 & $338\pm12$ & SCIENCE \\
{\it NuSTAR}/FPMA, FPMB & 90202055004 & 10-04-2017 & 16:36:09 & 57853 & E2 & $\sim$16 & $325\pm10$ & SCIENCE \\
\hline
{\it Swift}/XRT & 00034924029 & 07-04-2017 & 08:50:41 & 57850 & E1 & $\sim$1.6 & $130\pm8$ & WT \\
{\it Swift}/XRT & 00034924031 & 10-04-2017 & 21:12:42 & 57853 & E2 & $\sim$1.9 & $145\pm8$ & WT \\
\hline
\hline
\end{tabular}}\\
\end{center}
\label{tab:tab1}
\end{table*}
\subsection{{\it Swift}/XRT}
The {\it Swift}/X-ray Telescope (XRT) observations on 2017, April 07 and 10 were utilized for the present study. These observations were either exactly simultaneous (E2; Total Exp. = 1.9 ks) or nearly simultaneous (E1; Total Exp. = 1.6 ks; 5 hrs difference from {\it NuSTAR} pointing) with the {\it NuSTAR} observations discussed in \S\ref{subsec:NuSTAR}. During both epochs (E1 and E2), XRT operated in windowed timing (WT) mode. For more details about the observations, please refer to Table \ref{tab:tab1}.
Standard procedures as suggested by the instrument team were followed for the filtering and screening. {\it Swift}/XRT data were reduced using the {\tt xrtpipeline} (version 0.13.2, release date: 2015-01-20). The background-subtracted average count rates during E1 and E2 in WT mode were found to be $\sim$130 and $\sim$145 counts sec$^{-1}$, respectively. According to \citet{Romano2006}, the specified photon pile-up limit of the WT mode data is $\sim$100 counts sec$^{-1}$. Therefore, before extracting any scientific product, the possibility of pile-up in the data was tested and corrected following the prescriptions by \citet{Romano2006}. Specifically, this was done by investigating the spectral distortion due to pile-up and subsequently removing an appropriate bright portion from the center of the source image to restore the spectrum. Following this procedure, the most suitable pile-up free source region used for our analysis is a rectangular region of 108$\times$36 arcsec$^2$, with 21.6$\times$21.6 arcsec$^2$ removed from the center. The background region was chosen to be a similar rectangular region, except 5 arcmin away from the center along the image strip. The end products, namely the energy spectra and light curves corresponding to the source and background regions, were extracted using the {\tt XSELECT} (V2.4d) ftool. Afterwards, the {\tt XRTMKARF} tool was utilized for generating the ARF files using the source spectra and exposure map. The resulting files were then used along with the proper RMF files from the recently updated {\tt CALDB} for further spectral analysis.
\section{Analysis and Results}
\subsection{Temporal analysis\label{sec:tempStudy}}
The publicly available one-day averaged {\it MAXI} light curve in the energy range 2.0$-$20.0 keV is plotted in the left panel of Fig. \ref{fig:fig1}, with the hardness ratio as a function of time plotted at the bottom. The hardness ratio is defined as the ratio of count rates in the 4.0$-$10.0 keV and 2.0$-$4.0 keV energy bands \citep{Mo13}. The one-day averaged {\it Swift}/BAT light curve in the energy band 15.0$-$50.0 keV is shown in the right panel of Fig. \ref{fig:fig1}. The long-term light curves from {\it MAXI} (2.0$-$20.0 keV) and BAT (15.0$-$50.0 keV) clearly display the overall rising and declining trends in flux. The average count rates over the two epochs E1 and E2 show a small variation, as presented in Table \ref{tab:tab1}. For {\it NuSTAR} (3.0$-$80.0 keV) the average count rate during E1 is slightly higher than during E2, whereas the reverse trend is seen for the average XRT fluxes.
\begin{figure*}
\centering
\includegraphics[scale=0.34,angle=-90]{Images/Time_QPO_Freq_Q-factor_Sigma_07Apr2017_24May2019.eps}
\includegraphics[scale=0.34,angle=-90]{Images/Time_QPO_Freq_Q-factor_Sigma_10Apr2017_24May2019.eps}\vspace{1.5em}
\caption{{\bf Left:} Temporal evolution of the QPO frequency (top panel), Quality factor (middle panel) and QPO significance ($\sigma$; bottom panel) derived from the {\it NuSTAR} observation dated 07 April 2017. {\bf Right:} Same as in the left panel, but for the {\it NuSTAR} observation on 10 April 2017. The QPO frequency clearly shows significant variations in the two epochs, whereas the Q-factor remains constant within uncertainties. The significance of the QPO varies, but is always above 4$\sigma$ (4--7$\sigma$ for E1 \& 4--8$\sigma$ for E2).}
\label{fig:fig4}
\end{figure*}
Fig. \ref{fig:fig2} shows the 120-sec-binned light curve corresponding to E1 and E2 for different energy bands. It indicates periodic variations of $\sim$ 5800 sec, as derived from folding the light curve over a range of periods and searching for the maximum chi-square as a function of the period in both observations. However, this could be an artifact of the satellite orbital motion. In order to claim a periodicity on this time-scale, longer observations and rigorous investigations of possible systematics are needed, which are beyond the scope of our present work.
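The essence of this period search is captured by the following minimal Python sketch, in which synthetic data stand in for the {\it NuSTAR} light curve and the trial-period grid is illustrative:
\begin{verbatim}
# Minimal sketch of an epoch-folding chi-square period search.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 60000, 120.0)                    # 120-s bins
rate = 330 + 15*np.sin(2*np.pi*t/5800) + rng.normal(0, 5, t.size)
err = np.full_like(rate, 5.0)

def fold_chi2(period, nbins=16):
    idx = ((t % period) / period * nbins).astype(int)
    chi2 = 0.0
    for b in range(nbins):
        m = idx == b
        if not m.any():
            continue
        prof = rate[m].mean()                     # folded profile bin
        sig = err[m].mean() / np.sqrt(m.sum())    # error on the mean
        chi2 += (prof - rate.mean())**2 / sig**2
    return chi2

trial = np.linspace(4000, 8000, 400)
best = trial[np.argmax([fold_chi2(p) for p in trial])]
print(f"best-fit period ~ {best:.0f} s")
\end{verbatim}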
To characterize the variability in GRS 1716--249 during both epochs E1 and E2, power density spectra (PDS) were generated. They are displayed in Fig. \ref{fig:fig3}. PDS are generated using the Fourier transform of the light curves. The peaks in the PDS correspond to the presence of periodic signals in the light curves. The {\tt powspec} tool, distributed as part of the {\tt XRONOS} sub-package of {\tt HEASOFT}, is used to generate the PDS. The {\it NuSTAR} detectors are affected by dead-time, which is $\approx$ 2.5 ms \citep{Harrison2013, Bachetti2015}. However, we generated PDS for FPMA and FPMB separately, which are not dead-time corrected. The presence of dead-time affects the contribution of white noise in the PDS and sometimes affects the overall shape of the QPO, but there is no effect on the QPO peak frequency. A distortion of the overall QPO shape affects the Q-factor and significance, but the variation is only significant at higher count rates, above $\sim$600 counts/sec \citep{Bachetti2015}. We observed a QPO in both detectors. The PDSs were fitted with a model consisting of two Lorentzians added together. The first, broader Lorentzian component represents the continuum, whereas the second Lorentzian was used to fit the QPO. The significance of the QPOs was estimated as the ratio of the area under the Lorentzian profile peaking at the QPO frequency (shown with the blue dotted line in Fig. \ref{fig:fig3}) to the 1$\sigma$ negative error in the area estimated by the model. The recipe suggested by \citet{Vau05} was adopted to calculate the confidence level of the detected QPO and to reject low significance peaks.
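The fitting procedure can be sketched as follows (synthetic, Leahy-like powers stand in for the {\tt powspec} products used in the actual analysis, and all starting values are illustrative):
\begin{verbatim}
# Minimal sketch: fit a broad (continuum) + narrow (QPO)
# Lorentzian pair to a power density spectrum.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, norm, nu0, width):
    # 'norm' is the total area; 'width' is the FWHM.
    return norm*(width/(2*np.pi)) / ((nu - nu0)**2 + (width/2)**2)

def model(nu, n1, w1, n2, nu2, w2):
    return lorentzian(nu, n1, 0.0, w1) + lorentzian(nu, n2, nu2, w2)

rng = np.random.default_rng(1)
nu = np.linspace(0.02, 20, 500)
truth = model(nu, 2.0, 3.0, 0.5, 1.2, 0.15)
power = truth * rng.chisquare(2, nu.size) / 2   # Leahy-like scatter

popt, pcov = curve_fit(model, nu, power, p0=[1, 2, 0.3, 1.2, 0.2])
nu_qpo, fwhm = popt[3], popt[4]
print(f"QPO at {nu_qpo:.2f} Hz, Q-factor = {nu_qpo/fwhm:.1f}")
# The QPO significance then follows from the QPO normalization
# (area) divided by its 1-sigma negative error, as described above.
\end{verbatim}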
In Table \ref{tab:tab3}, we present the derived broad Lorentzian knee frequency, QPO frequency, Q-factor, its significance and $\chi^{2}$/dof for various segments of the {\it NuSTAR} light curves for both epochs in columns 3, 4, 5, 6 and 7, respectively. The temporal evolution of these parameters is shown in Fig. \ref{fig:fig4}. It is evident from Fig. \ref{fig:fig4} that there is a hint of variations in the QPO peak frequency ($\nu_{QPO}$) during both epochs, and we measured a significant change of the QPO peak frequency between the two epochs. During E1, it rises consistently from 1.13 Hz to 1.23 Hz in $\approx$ 6 hrs and afterward decreases to $\approx$ 1.11 Hz within the next $\approx$ 3 hrs, before rising again as evident from the last segment. Although the Q-factor shows some signature of variations, it always remains within the 1$\sigma$ uncertainty. The significance of the QPO detection ($\sigma_{QPO}$) was always above 4. The beginning of E2 witnessed a higher value of $\nu_{QPO}$ of 1.45 Hz, which remained constant within 1$\sigma$ for approximately 2.5 hours. The next four segments indicate a rising trend for the rest of the observations. The Q-factor seems to be constant throughout E2. We also find that the QPO detection is always significant at $\ge 4 \, \sigma$. Note that the two epochs E1 and E2 are separated by approximately 2.6 days. Therefore, there is an indication that on average $\nu_{QPO}$ is drifting towards higher frequencies with temporary reversals of the trend in between. However, the Q-factor shows a small hint of variations with a mean value $\approx$ 8 within 2$\sigma$ errors.
Type--C QPOs are strong (rms amplitude $\sim$ 21\%), narrow (Q--factor $\sim$ 4--12), with centroid frequency varying within 0.1$-$15 Hz, and superimposed on a strong flat-top noise \citep{Cas05, Pahari2014}. As displayed in Fig. \ref{fig:fig4} and evident from the PDS fitting parameters listed in Table \ref{tab:tab3}, the QPO frequency varies in the range 1.11--1.67 Hz, the Q-factor ranges within $\sim$ 5--12 and a broad Lorentzian knee frequency is also present in our observations. This confirms the observed low-frequency QPO to be of type--C. The PDS plots displayed in Fig. \ref{fig:fig3} also confirm the same type of low-frequency QPO. Both PDS are Leahy-normalized and noise subtracted.
\begin{figure*}
\centering
\includegraphics[scale=0.33,angle=-90]{Images/Ratio_XRT_NuSTAR_07Apr2017_24May2019.eps}
\includegraphics[scale=0.33,angle=-90]{Images/Ratio_XRT_NuSTAR_10Apr2017_24May2019.eps}\vspace{1.5em}
\caption{{\bf Left:} Ratio of jointly fitted Swift/XRT (black dots) and {\it NuSTAR} FPMA (red star)/FPMB (blue square) spectra for epoch E1. The spectra are fitted using a simple phenomenological model {\tt TBabs$\times$(diskbb+nthComp)} for observation E1 in the energy range 0.5$-$79 keV. A broad emission line feature in the energy range 5$-$8 keV and a dip around 11 keV are significantly detected. Around 1 and 30 keV, a hump-like excess is also observed. {\bf Right:} Same as on the left, but for epoch E2.}
\label{fig:fig5}
\end{figure*}
\subsection{Spectral analysis\label{sec:specStudy}}
To understand the spectral characteristics of GRS 1716--249 during the epochs E1 and E2, a simultaneous fit of the broad-band spectrum from {\it NuSTAR}/FPMA, FPMB (3.0$-$78.0 keV) and {\it Swift}/XRT (0.5$-$8.0 keV) was performed using {\tt XSPEC} version 12.10.0 \citep{Arn96}. Before employing more complex models, the spectral fitting was first attempted with individual standard continuum models like {\tt diskbb}, {\tt nthComp} and {\tt powerlaw}. These fits generally result in large reduced chi-square values ($\chi^2_\nu \gg 2$; i.e., greater than the acceptable limit).
An additional constant multiplicative term was incorporated in the models, utilizing the {\tt CONSTANT} model inbuilt in {\tt XSPEC}, to adjust the factors related to the cross-instrument calibration uncertainties. This constant is kept fixed at 1 for {\it Swift}/XRT, and the spectra were fitted by keeping it free for FPMA and FPMB. In this way, the best-fit constants for FPMA and FPMB represent the relative cross-instrument calibration factors with respect to {\it Swift}/XRT. This relative cross-instrument factor for E1 was estimated as $1.16\pm0.02$ ($\sim$16\%) for both FPMA and FPMB, while for E2 it was found to be $1.08\pm0.01$ ($\sim$8\%) for both telescopes. The recommended acceptable range for the cross-instrument calibration factor between {\it Swift} and {\it NuSTAR} is 3$-$20 \% \citep{Mad15, Mar17}. Hence, our values clearly fall within the acceptable range as recommended in the literature.
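A minimal PyXspec sketch of this joint fit is given below; the file names are hypothetical, the responses are assumed to be already associated with the spectra, and grouping and systematic-error settings are omitted:
\begin{verbatim}
# Minimal sketch: joint XRT + NuSTAR fit with an untied
# cross-calibration constant (PyXspec interface to XSPEC).
from xspec import AllData, AllModels, Model, Fit

AllData("1:1 xrt.pha 2:2 fpma.pha 3:3 fpmb.pha")
AllData(1).ignore("**-0.5 8.0-**")          # XRT band
for i in (2, 3):
    AllData(i).ignore("**-3.0 78.0-**")     # NuSTAR band

Model("constant*TBabs*(diskbb+nthComp)")
m1, m2, m3 = AllModels(1), AllModels(2), AllModels(3)
m1.constant.factor = 1.0                    # XRT as reference
m1.constant.factor.frozen = True
for m in (m2, m3):                          # FPMA/FPMB constants free
    m.constant.factor.link = ""             # untie from group 1
    m.constant.factor.frozen = False

Fit.perform()
print(m2.constant.factor.values[0], m3.constant.factor.values[0])
\end{verbatim}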
\begin{figure*}
\centering
\includegraphics[scale=0.32,angle=-90]{Images/Spectra_07-April-2017_R6_24May2019.eps}
\includegraphics[scale=0.32,angle=-90]{Images/Spectra_10April2017_R6_24May2019.eps}\vspace{2em}
\caption{Swift/XRT (black dots) and {\it NuSTAR} FPMA (red star)/FPMB (blue square) unfolded spectra of GRS 1716--249 along with the best fit model components {\tt (TBabs$\times$(diskbb+relxill(lp)Cp))}, and residuals for epoch E1 ({\bf Left panel}) and epoch E2 ({\bf Right panel}). a$^{*}$ is fixed at 0.998.}
\label{fig:fig6}
\end{figure*}
To consistently explain the observed broad-band spectrum, two-component models such as {\tt TBabs$\times$(diskbb+nthComp)}, {\tt TBabs$\times$(diskbb+powerlaw)} and {\tt TBabs$\times$(nthComp+powerlaw)} were further fitted to the data. The observed spectra at both epochs (E1 \& E2) were found to be reasonably well described by the {\tt TBabs$\times$(diskbb+nthComp)} model, i.e., the combination of a multi-colored disk blackbody component \citep[{\tt diskbb:}][]{Mitsuda1984, Makishima1986} and a thermal Comptonization model \citep[{\tt nthComp:}][]{Zd96,Zy99}, modulated by Galactic absorption by neutral hydrogen \citep[{\tt TBabs:}][]{Wil00}. We found excesses around 1.3, 6.4 and 30 keV, as shown in Fig. \ref{fig:fig5}. An excess around 6.4 keV is indicative of an Fe emission line from the accretion disk, while the observed residuals around 30 keV indicate a Compton hump, which, together with the relativistically broadened Fe K$\alpha$ line, is typical for Compton reflection. A soft excess around 1.3 keV is typically observed when the surface of the accretion disk is ionized. In this case, electron scattering becomes important along with the re-emission of emission lines, causing a soft excess \citep{Ross1993, Ross2005, Garcia2013}. We found that the red wing of the Fe emission line extends down to 6 keV while the blue wing stretches close to 8 keV, with a dip around 11 keV, as shown in Fig. \ref{fig:fig5}.
Therefore, we replaced the {\tt nthComp} model by the self-consistent, broadband reflection model {\tt relxilllpCp} from the {\tt relxill} \citep[v1.0.2 :][]{Garcia2014, Dauser2014} model family, to account for the relativistic reflection spectrum. Thus, our best fit model is {\tt TBabs$\times$(diskbb+relxilllpCp)}. The {\tt relxilllpCp} model is based on a lamp-post geometry of the corona, which is believed to be the illuminating source. In the lamp-post geometry, the corona is treated as a point source positioned at a height h, on the black hole spin axis above the accretion disk. The {\tt relxilllpCp} model uses the thermal Comptonization model {\tt nthComp} as the input continuum. In the case of the {\tt relxilllpCp} model, the value of the reflection fraction (Refl$_{frac}$) can be self-consistently determined, depending on the values of the lamp-post height (h), the black hole spin parameter (a$^{*}$) and the inner accretion disk radius (R$_{in}$) through ray-tracing calculations. The evaluation of Refl$_{frac}$ by the model itself helps to reduce the parameter space and eventually constrains the geometry of the system \citep{Dauser2014}.
While fitting the average spectra for both epochs using the best-fit model (i.e., {\tt TBabs$\times$(diskbb+relxilllpCp)}), we fixed the value of the outer radius of the accretion disk at 400 r$_{g}$, where r$_{g}$ is the gravitational radius (r$_{g} \equiv$ GM/c$^{2}$). We simultaneously fitted for a$^{*}$ and R$_{in}$, but those two parameters are degenerate, and the effective inner accretion disk radius is controlled by both a$^{*}$ and the disk inclination angle. In the case of a non-rotating black hole (a$^{*}$ $\approx$ 0), R$_{ISCO} \equiv$ 6r$_{g}$. However, in the case of a Kerr black hole, considering the co-rotating case where the accretion disk is rotating in the same direction as the compact object, R$_{ISCO} \equiv$ r$_{g}$ \citep{Bardeen1972, Thorne1974}. During the spectral fitting for both E1 and E2, the spin parameter (a$^{*}$) approached the hard upper limit of 0.998. Therefore, we have frozen the value of a$^{*}$ at 0.998 and kept R$_{in}$ free to vary. For a black hole with a$^{*}$ $\approx$ 0.998, R$_{ISCO}$ $\approx$ 1.24 r$_{g}$ \citep{Bardeen1972, Thorne1974}.
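For reference, the quoted values follow from the expressions of \citet{Bardeen1972} for a co-rotating disk, as in this minimal Python sketch:
\begin{verbatim}
# Minimal sketch: ISCO radius (in units of r_g) vs. spin a*,
# following Bardeen et al. (1972), prograde case.
import numpy as np

def r_isco(a):
    z1 = 1 + (1 - a**2)**(1/3)*((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3*a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1)*(3 + z1 + 2*z2))

print(r_isco(0.0))     # -> 6.0
print(r_isco(0.998))   # -> ~1.24
\end{verbatim}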
Additionally, to determine the real nature of the system i.e., whether the compact object is favouring a low spin or a truncated disk, we fixed the value of R$_{in}$ at the ISCO and allowed the spin parameter to vary. We found that a$^{*}$ continued to saturate at the maximum value and the 2$\sigma$ uncertainty on the spin parameter provided a lower bound of 0.73. It is observed that the $\chi^{2}$/dof is 949.04/906 if we freeze a$^{*}$ at its maximum value (0.998), whereas it is 1010.72/906 when we fix R$_{in}$ at the ISCO. This implies that the system prefers a truncated disk over a low spin.
During our analysis for both epochs, we were unable to constrain the electron temperature (kT$_{e}$). Therefore, we fixed it at a value of 400 keV. From our spectral fittings, the iron abundance (A$_{Fe}$) was found to be 0.79$^{+0.11}_{-0.05}$ times the solar abundance for E1. However, for E2 the value of the iron abundance is found to be 1.01$^{+0.34}_{-0.08}$ times the solar abundance. Apart from the above-mentioned parameters, we determined the following reflection and continuum parameters using the {\tt relxilllpCp} model: the lamp-post height (h), the power-law index ($\Gamma$), R$_{in}$, the ionization index of the accretion disk (log $\xi$) and the flux due to reflection.
Even after using the best fit model ({\tt TBabs$\times$(diskbb+relxilllpCp)}), we have observed a small excess around 6-7 keV in the XRT spectra. The main reason for this excess could be cross-calibration errors. The position of the line is better constrained by \textit{NuSTAR} spectrum in comparison to the \textit{Swift}/XRT spectrum, due to the high signal-to-noise ratio for \textit{NuSTAR}, owing to its excellent sensitivity in this energy range. The iron line profiles observed by \textit{Swift}/XRT and \textit{NuSTAR} do not match perfectly for the same reason, and hence identical constraints for the Gaussian profile of the line for both instruments cannot be obtained by simultaneous fitting. Therefore, a small excess around 6--7 keV is visible in the \textit{Swift}/XRT spectrum. The equivalent widths of the 6.4 keV line derived from simultaneous fitting, corresponding to the epochs (E1) and (E2), are found to be $0.222\pm0.001$ keV and $0.173\pm0.001$ keV, respectively.
The fitted spectra, along with the different model components and residuals, for both epochs (E1 \& E2) are presented in the left and right panels of Fig. \ref{fig:fig6}, respectively (see Table \ref{tab:tab2} for the best-fit parameters). For the broad-band continuum, we used the {\tt TBabs} \citep{Wil00} model inbuilt in {\tt XSPEC} to account for the Galactic neutral hydrogen column density. The most recent abundance model \citep[{\it aspl}:][]{Aspl2009}, inbuilt in {\tt XSPEC}, was adopted as the abundance input for the {\tt TBabs} model. The best-fit absorption column density, $n_H$ (in units of 10$^{22}$ cm$^{-2}$), was found to be 0.62$^{+0.02}_{-0.02}$ and 0.59$^{+0.01}_{-0.01}$ for the epochs E1 and E2, respectively. These values show a significant excess over the Galactic value in the direction of the source as estimated by the online tools of the LAB Survey\footnote{\url{https://www.astro.uni-bonn.de/hisurvey/profile/index.php}} \citep{Kalberla2005} (n$_{H} = 0.26 \times 10^{22}$ cm$^{-2}$), implying the presence of intrinsic excess absorption in this BHXRB system.
The disk temperature (T$_{in}$ of the {\tt diskbb} model) shows a small variation, but the ionization index of the accretion disk (log $\xi$) as well as the lamp-post height (h), as estimated from the broad-band spectral fitting at E1 and E2, do not show any significant change. The power-law index ($\Gamma$) increases from 1.768$^{+0.013}_{-0.006}$ (E1) to 1.812$^{+0.006}_{-0.006}$ (E2). This increment in $\Gamma$ suggests that the source is drifting towards the soft state. The obtained inner disk radii R$_{in}$ for E1 and E2 are 14.78$^{+5.93}_{-14.78}$ r$_{g}$ and 12.88$^{+5.06}_{-3.58}$ r$_{g}$, respectively. The disk ionization (log $\xi$) is high for both observations, with values of 3.07$^{+0.06}_{-0.03}$ and 3.19$^{+0.28}_{-0.07}$ for E1 and E2, respectively. The best-fit lamp-post height (h) is 23.77$^{+16.02}_{-10.84}$ and 12.75$^{+7.30}_{-5.11}$ in units of r$_{g}$ for E1 and E2, respectively. The variation in h is small and within the 2$\sigma$ error bars. The 2$\sigma$ errors on all the above-mentioned quantities were calculated using the {\tt error} command in {\tt XSPEC}.
\begin{figure*}
\centering
\includegraphics[scale=0.33,angle=-90]{Images/Comb_chi_Rin_E1_E2_24May2019.eps}
\includegraphics[scale=0.33,angle=-90]{Images/Comb_chi_Incl_E1_E2_24May2019.eps}\vspace{2em}
\caption{Variations of $\Delta\chi^{2}$ as a function of the inner accretion disc radius (R$_{in}$, in units of R$_{ISCO}$, determined from the {\tt relxilllpCp} model) and the disk inclination angle (i). {\bf Left panel:} variation of $\Delta\chi^{2}$ with inner disc radius as observed in epoch E1 (black solid line) and E2 (red dotted line). From epoch E1 to E2, within the 3$\sigma$ uncertainty, the inner disk radius decreased from 11.92$^{+8.62}_{-11.92}$ (R$_{ISCO}$) to 10.39$^{+9.51}_{-3.02}$ (R$_{ISCO}$). {\bf Right panel:} variation of $\Delta\chi^{2}$ as a function of the disk inclination angle. The inclination angle is found to be 54.19$^{+7.43}_{-12.48}$ $^{\circ}$ and 46.59$^{+11.78}_{-10.38}$ $^{\circ}$ for E1 and E2, respectively, within 3$\sigma$ significance. a$^{*}$ is fixed at 0.998.}
\label{fig:fig7}
\end{figure*}
In order to constrain the inner radius of the accretion disc from our best-fit model {\tt (TBabs$\times$(diskbb+relxilllpCp))}, we determined $\Delta\chi^{2}$ using the {\tt steppar} command in {\tt XSPEC}. The variation of the resulting $\Delta\chi^{2}$, while stepping the inner disc radius between 1 R$_{ISCO}$ and 23 R$_{ISCO}$ for epochs E1 and E2, is illustrated in the left panel of Fig. \ref{fig:fig7}. The 2$\sigma$ and 3$\sigma$ significance levels are shown by the horizontal lines. Within the 3$\sigma$ bounds, the values of the inner disc radius for the epochs E1 and E2 are found to be 11.92$^{+8.62}_{-11.92}$ R$_{ISCO}$ and 10.39$^{+9.51}_{-3.02}$ R$_{ISCO}$, or 14.78$^{+10.69}_{-14.78}$ r$_{g}$ and 12.88$^{+8.50}_{-3.74}$ r$_{g}$, respectively. For the first epoch (E1), only an upper bound on R$_{in}$ is obtained. Similarly, we have also constrained the disk inclination angle, as shown in the right panel of Fig. \ref{fig:fig7}. The value of the inclination angle is found to be 54.19$^{+7.43}_{-12.48}$ $^{\circ}$ and 46.59$^{+11.78}_{-10.38}$ $^{\circ}$ for E1 and E2, respectively, within the 3$\sigma$ bounds.
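For reference, the $\Delta\chi^{2}$ thresholds corresponding to the significance levels quoted above, for a single parameter of interest, can be obtained as in the following minimal sketch (assuming the {\tt scipy} library; this is an illustration, not the code used in the analysis):
\begin{verbatim}
# Delta-chi^2 thresholds for one parameter of interest, matching the
# 2-sigma and 3-sigma horizontal lines of Fig. 7.
from scipy.stats import chi2, norm

for n_sigma in (2, 3):
    p = 2.0 * norm.cdf(n_sigma) - 1.0   # two-sided Gaussian probability
    print(n_sigma, chi2.ppf(p, df=1))   # -> ~4.0 and ~9.0
\end{verbatim}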
The unabsorbed total fluxes corresponding to the different continuum components (disk and Comptonized emission) changed significantly from E1 to E2; for example, the total flux increased by $\sim5$ \%, after proper error propagation. As mentioned earlier, the source was in a relatively softer state at E2 in comparison to E1. The detailed parameters obtained from modeling the broad-band X-ray spectrum (for both epochs E1 \& E2) are summarized in Table \ref{tab:tab2}. We observed significant changes in $\Gamma$, the overall spectrum during E2 being softer than during E1. This suggests a continuous evolution of $\Gamma$ during the spectral state change of GRS 1716--249 from E1 to E2.
Aiming to study this behavior, the overall {\it NuSTAR} light curves during E1 and E2 were subdivided into 8 and 7 fragments, respectively. The same segments were used for investigating the QPOs and for the spectral fitting. Due to the unavailability of data at low energies ($<$3.0 keV) during these fragments, we have modeled only the 3.0$-$79.0 keV band. Absorption by neutral hydrogen modifies the continuum at low energies, and in our observations the disk component is weak, extending only up to $\sim$3.5 keV; hence, the {\it NuSTAR} data cannot constrain the disk contribution. Therefore, we kept $n_H$ and T$_{in}$ fixed to the values obtained from fitting the overall average spectrum (listed in Table \ref{tab:tab2}) for both epochs E1 and E2. Furthermore, we can assume that certain parameters do not change significantly over a timescale as small as 2 ks. Therefore, we also fixed R$_{in}$ and A$_{Fe}$ to their respective values obtained from fitting the overall average spectrum of epochs E1 and E2, as detailed in Table \ref{tab:tab2}.
The results from the above-mentioned time-resolved spectral fitting are shown in Fig. \ref{fig:fig8} and are tabulated in Table \ref{tab:tab3}. Note that the error bars shown in Fig. \ref{fig:fig8} represent 2$\sigma$ errors. The spectral parameters $\Gamma$ and log $\xi$, derived for both E1 and E2, show a hint of variation within 2$\sigma$ significance. $\Gamma$ shows an overall softening from epoch E1 to E2. On the other hand, the lamp-post height (h) is almost constant during both epochs (E1 \& E2) within the 2$\sigma$ uncertainties. When we compare the spectral parameters obtained from the time-resolved and average spectral fits, we see some variation in the values of $\Gamma$ and log $\xi$: some $\Gamma$ values lie above the average values, whereas the log $\xi$ values lie below the average value. This variation in $\Gamma$ and log $\xi$ could be due to the difference in the energy range between the average spectrum and the time-resolved spectra. For the average spectrum, we performed a fit in the energy range 0.5--79.0 keV, whereas we carried out the time-resolved spectral fitting in the energy range 3.0--79.0 keV, as we do not have \textit{Swift}/XRT data for each time interval. A systematically lower log $\xi$ corresponds to a larger $\Gamma$ to produce a similar fit. These two parameters are anti-correlated to some degree and hence, if $\Gamma$ systematically increases, the ionization parameter will decrease \citep{Garcia2013, Choudhury2017}.
\subsection{Correlation Study}
The correlations between the model parameters derived from the time-resolved spectroscopy and the QPO frequency ($\nu_{QPO}$) were also studied. The Pearson correlation test was performed to quantify the correlation, using the following definition:
\begin{itemize}
\item Pearson Correlation coefficient \citep{Pearson1920}
\begin{equation}
r = \frac{\Sigma(x-\bar{x})(y-\bar{y})}{\sqrt{\Sigma(x-\bar{x})^2}\sqrt{\Sigma(y-\bar{y})^2}}
\end{equation}
where $\bar{x}$ and $\bar{y}$ are the means of the two series $x$ and $y$.
\end{itemize}
The p-value (significance level) of the correlation is determined from the t-statistic, given by
\begin{equation}
t = \frac{r}{\sqrt{1-r^2}}\sqrt{n-2}
\end{equation}
As interpreted from the t-distribution with $n-2$ degrees of freedom, p-values $p \le 0.05$ indicate a significant correlation.
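For illustration, the two statistics above can be computed as in the following minimal sketch (assuming the {\tt numpy} and {\tt scipy} libraries; a sketch of the formulas above, not the code used in the analysis):
\begin{verbatim}
import numpy as np
from scipy import stats

def pearson_r_pvalue(x, y):
    # Pearson r and two-sided p-value from the t-test defined above
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xm, ym = x - x.mean(), y - y.mean()
    r = (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())
    t = r / np.sqrt(1.0 - r**2) * np.sqrt(n - 2)
    p = 2.0 * stats.t.sf(abs(t), df=n - 2)
    return r, p
\end{verbatim}
Applied to the 15 ($\nu_{QPO}$, $\Gamma$) pairs of Table \ref{tab:tab3}, this should reproduce a coefficient close to the quoted r = 0.70.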
The above-mentioned test reveals a strong correlation between $\Gamma$ and $\nu_{QPO}$, as displayed in Fig. \ref{fig:fig9}. The corresponding Pearson coefficient is also shown in the plot. A strong positive correlation is observed for $\Gamma - \nu_{QPO}$, with r = 0.70 (p = 8.78 $\times 10^{-4}$).
\begin{figure*}
\centering
\includegraphics[scale=0.34,angle=-90]{Images/Time_Gamma_logxi_Refl_07Apr17_Rev_24May2019.eps}
\includegraphics[scale=0.34,angle=-90]{Images/Time_Gamma_logxi_Refl_10Apr17_Rev_24May2019.eps}\vspace{1.5em}
\caption{Comparative study of the time-resolved energy spectroscopy between E1 (left) and E2 (right). The top panel shows the variation of the photon index ($\Gamma$), the second panel presents the variation of the accretion disk ionization parameter (log $\xi$) and the third panel shows the variation of the lamp-post height (h). a$^{*}$ is fixed at 0.998.}
\label{fig:fig8}
\end{figure*}
\begin{table*}
\centering
\caption{Spectral parameters obtained through time-resolved analysis of {\it NuSTAR} data}
\begin{center}
\scalebox{0.85}{
\begin{tabular}{ |l|l|c|c|c|c|c|c|c|c|c|c| }
\hline
\hline
\multicolumn{12}{|c|}{Epoch E1} \\
\hline
\hline
Time$^\dagger$ & Exp. & $\nu_{\mathrm{knee}}$ & $\nu_{QPO}$ & Q-factor & $\sigma_{QPO}$ & $\chi^{2}$/dof & h & $\Gamma$ & log $\xi$ & $\chi^{2}$/dof & Total \\
(min) & (ks) & (Hz) & (Hz) & & & (timing) & (GM/c$^{2}$) & & (log[erg cm s$^{-1}$]) & (spectral) & Flux$^{\dagger\dagger}$\\
\hline
\hline
926.76 & 2.1 & 0.13$^{+0.02}_{-0.02}$ & 1.13$^{+0.01}_{-0.01}$ & 11.57$^{+2.84}_{-2.73}$ & 6.9 & 49.11/42 & 24.19$^{+6.89}_{-4.39}$ & 1.785$^{+0.011}_{-0.022}$ & 2.88$^{+0.22}_{-0.09}$ & 813.44/722 & 12.95$^{+0.04}_{-0.05}$\\
989.54 & 2.2 & 0.15$^{+0.02}_{-0.02}$ & 1.18$^{+0.01}_{-0.01}$ & 8.28$^{+1.91}_{-1.67}$ & 6.4 & 52.88/42 & 28.06$^{+9.68}_{-5.63}$ & 1.802$^{+0.009}_{-0.010}$ & 2.77$^{+0.07}_{-0.05}$ & 764.64/722 & 12.81$^{+0.04}_{-0.04}$\\
1086.59 & 2.2 & 0.15$^{+0.02}_{-0.03}$ & 1.18$^{+0.02}_{-0.01}$ & 8.02$^{+1.81}_{-1.66}$ & 6.3 & 47.88/42 & 23.18$^{+7.76}_{-4.55}$ & 1.803$^{+0.012}_{-0.013}$ & 2.81$^{+0.12}_{-0.07}$ & 675.19/722 & 12.79$^{+0.04}_{-0.04}$\\
1183.08 & 2.2 & 0.12$^{+0.02}_{-0.02}$ & 1.19$^{+0.02}_{-0.02}$ & 5.71$^{+1.41}_{-1.16}$ & 4.4 & 53.81/42 & 21.91$^{+7.34}_{-4.21}$ & 1.798$^{+0.012}_{-0.013}$ & 2.78$^{+0.10}_{-0.06}$ & 738.31/722 & 12.80$^{+0.04}_{-0.04}$\\
1280.21 & 2.2 & 0.14$^{+0.02}_{-0.02}$ & 1.23$^{+0.02}_{-0.02}$ & 9.22$^{+3.21}_{-3.24}$ & 4.5 & 49.11/42 & 23.36$^{+7.11}_{-4.26}$ & 1.799$^{+0.012}_{-0.012}$ & 2.86$^{+0.15}_{-0.08}$ & 791.99/722 & 12.71$^{+0.04}_{-0.04}$\\
1376.69 & 2.3 & 0.15$^{+0.02}_{-0.02}$ & 1.17$^{+0.01}_{-0.01}$ & 7.78$^{+1.71}_{-1.41}$ & 6.8 & 39.56/42 & 24.19$^{+8.26}_{-6.21}$ & 1.768$^{+0.024}_{-0.016}$ & 3.09$^{+0.10}_{-0.07}$ & 757.02/722 & 12.83$^{+0.04}_{-0.02}$\\
1472.66 & 2.2 & 0.14$^{+0.02}_{-0.02}$ & 1.11$^{+0.02}_{-0.02}$ & 5.85$^{+1.42}_{-1.15}$ & 5.9 & 52.01/42 & 23.57$^{+7.93}_{-4.58}$ & 1.790$^{+0.013}_{-0.012}$ & 2.86$^{+0.19}_{-0.09}$ & 720.65/722 & 12.88$^{+0.04}_{-0.04}$\\
1569.67 & 2.2 & 0.16$^{+0.02}_{-0.02}$ & 1.16$^{+0.01}_{-0.01}$ & 9.12$^{+2.10}_{-1.77}$ & 6.6 & 51.95/42 & 17.62$^{+4.15}_{-2.77}$ & 1.781$^{+0.014}_{-0.008}$ & 2.93$^{+0.16}_{-0.14}$ & 751.44/722 & 12.84$^{+0.04}_{-0.04}$\\
\hline
\hline
\multicolumn{12}{|c|}{Epoch E2} \\
\hline
\hline
5384.85 & 2.2 & 0.15$^{+0.02}_{-0.02}$ & 1.45$^{+0.02}_{-0.03}$ & 8.52$^{+1.99}_{-2.32}$ & 5.5 & 48.97/44 & 11.86$^{+1.40}_{-1.30}$ & 1.817$^{+0.017}_{-0.017}$ & 2.98$^{+0.14}_{-0.19}$ & 729.60/722 & 12.03$^{+0.04}_{-0.04}$\\
5439.51 & 2.3 & 0.11$^{+0.02}_{-0.03}$ & 1.45$^{+0.01}_{-0.01}$ & 10.90$^{+2.14}_{-1.81}$ & 8.1 & 56.15/44 & 10.97$^{+1.55}_{-1.17}$ & 1.824$^{+0.014}_{-0.020}$ & 2.88$^{+0.22}_{-0.12}$ & 819.55/722 & 11.95$^{+0.04}_{-0.04}$\\
5535.86 & 2.3 & 0.20$^{+0.02}_{-0.02}$ & 1.47$^{+0.02}_{-0.02}$ & 7.68$^{+1.57}_{-1.48}$ & 7.7 & 41.35/44 & 29.87$^{+13.31}_{-9.66}$ & 1.782$^{+0.017}_{-0.015}$ & 3.46$^{+0.05}_{-0.11}$ & 744.96/722 & 11.88$^{+0.04}_{-0.04}$\\
5632.99 & 2.2 & 0.18$^{+0.02}_{-0.02}$ & 1.53$^{+0.01}_{-0.02}$ & 10.14$^{+2.48}_{-2.25}$ & 6.4 & 50.05/44 & 11.91$^{+1.91}_{-1.39}$ & 1.855$^{+0.014}_{-0.018}$ & 2.83$^{+0.21}_{-0.09}$ & 823.90/722 & 11.79$^{+0.04}_{-0.04}$\\
5729.58 & 2.3 & 0.16$^{+0.02}_{-0.02}$ & 1.65$^{+0.02}_{-0.03}$ & 7.38$^{+2.11}_{-1.63}$ & 5.5 & 45.20/44 & 11.42$^{+1.26}_{-1.19}$ & 1.841$^{+0.023}_{-0.015}$ & 3.08$^{+0.46}_{-0.24}$ & 700.51/722 & 11.60$^{+0.04}_{-0.04}$\\
5825.97 & 2.2 & 0.21$^{+0.02}_{-0.02}$ & 1.67$^{+0.03}_{-0.02}$ & 9.38$^{+3.58}_{-3.40}$ & 4.4 & 50.62/44 & 21.95$^{+14.05}_{-7.08}$ & 1.812$^{+0.011}_{-0.008}$ & 3.46$^{+0.11}_{-0.10}$ & 791.98/722 & 11.61$^{+0.04}_{-0.04}$\\
5923.12 & 2.0 & 0.24$^{+0.02}_{-0.02}$ & 1.60$^{+0.05}_{-0.03}$ & 15.05$^{+14.19}_{-15.05}$ & 2.1 & 52.05/44 & 21.35$^{+15.08}_{-6.76}$ & 1.813$^{+0.012}_{-0.009}$ & 3.41$^{+0.10}_{-0.10}$ & 714.37/722 & 11.56$^{+0.04}_{-0.04}$\\
\hline
\hline
\label{tab:tab3}
\end{tabular}}\\
{\bf $\dagger$} : Time since MJD 57850.0; {\bf $\dagger\dagger$} : Flux in the energy band 3.0--79.0 keV, in units of 10$^{-9}$ ergs cm$^{-2}$ s$^{-1}$ \\
Exp. : Exposure time; $\nu_{\mathrm{knee}}$ : Knee frequency of the broad Lorentzian noise component\\
$\nu_{QPO}$ : Peak frequency of QPO; $\sigma_{QPO}$ : Significance of detection\\
$\Gamma$ : Photon Index; log $\xi$ : Accretion disk ionization parameter\\
h : the lamp-post height; a$^{*}$ is fixed at 0.998
\end{center}
\end{table*}
\begin{table*}
\centering
\caption{Model parameters from simultaneous {\it Swift}/XRT (0.5--8.0 keV) and {\it NuSTAR} (3.0--79.0 keV) spectral fitting. The model which provides the best fit is {\tt TBabs$\times$(diskbb+relxilllpCp)}; it resulted in $\chi^2$/dof = 949.04/906 and 1097.55/906 for E1 and E2, respectively. 2$\sigma$ errors are quoted. Fig. \ref{fig:fig6} shows the fitted energy spectra along with the model constituents and residuals for both observations.}
\begin{center}
\scalebox{1.0}{%
\begin{tabular}{ |l|l|l|l| }
\hline
\hline
Component & Parameter & Epoch & Epoch \\
& & E1 & E2 \\
\hline
\hline
TBabs & N$_{H}$ ($\times 10^{22}$ cm$^{-2}$) & 0.62$^{+0.02}_{-0.02}$ & 0.59$^{+0.01}_{-0.01}$\\
\hline
diskbb & T$_{in}$(keV) & 0.70$^{+0.06}_{-0.04}$ & 0.59$^{+0.02}_{-0.02}$ \\
& Norm. & 132.18$^{+53.73}_{-49.11}$ & 437.70$^{+105.49}_{-89.48}$ \\
\hline
relxilllpCp & h (GM/c$^{2}$) & 23.77$^{+16.02}_{-10.84}$ & 12.75$^{+7.30}_{-5.11}$ \\
& a$^{*}$ (cJ/GM$^{2}$) & 0.998$^f$ & 0.998$^f$ \\
& i (degrees) & 54.19$^{+6.77}_{-7.89}$ & 46.59$^{+3.83}_{-5.85}$ \\
& R$^{\dagger}_{in}$ (R$_{ISCO}$) & 11.92$^{+4.78}_{-11.92}$ & 10.39$^{+4.08}_{-2.89}$ \\
& $\Gamma$ & 1.768$^{+0.013}_{-0.006}$ & 1.812$^{+0.006}_{-0.006}$ \\
& log $\xi$ (log[erg cm s$^{-1}$]) & 3.07$^{+0.06}_{-0.03}$ & 3.19$^{+0.28}_{-0.07}$\\
& A$_{Fe}$ (solar) & 0.79$^{+0.11}_{-0.05}$ & 1.01$^{+0.34}_{-0.08}$ \\
& Norm. ($\times$ 10$^{-2}$) & 3.11$^{+0.56}_{-0.32}$ & 3.17$^{+0.85}_{-0.34}$ \\
\hline
& F$_{Total}$ ($\times$ 10$^{-9}$ ergs cm$^{-2}$ s$^{-1}$) & 14.85$_{-0.16}^{+0.16}$ & 15.59$_{-0.14}^{+0.14}$ \\
& F$_{diskbb}$ ($\times$ 10$^{-9}$ ergs cm$^{-2}$ s$^{-1}$) & 0.49$_{-0.05}^{+0.05}$ & 0.89$_{-0.07}^{+0.08}$ \\
& F$_{relxill}$ ($\times$ 10$^{-9}$ ergs cm$^{-2}$ s$^{-1}$) & 14.40$_{-0.15}^{+0.15}$ & 14.88$_{-0.14}^{ +0.14}$ \\
\hline
& $\chi^{2}$/dof & 949.04/906 & 1097.55/906 \\
\hline
\hline
\label{tab:tab2}
\end{tabular}}\\
{\bf $\dagger$} : Inner disk radius;
{\bf $^f$} tags imply that the specific parameter was frozen to these values while fitting the spectra\\
{\bf $-$} : Note that all the errors are calculated using the {\tt error} command in {\tt XSPEC}.\\
{\bf $-$} : The flux values shown in this table are unabsorbed and calculated for the energy band 0.5$-$79.0 keV.
\end{center}
\end{table*}
\section{Discussion and Conclusions \label{sec:discussion}}
In this work, we present a broad-band X-ray study of the black hole candidate X-ray binary GRS 1716--249 during its 2016--2017 outburst in the energy range 0.5$-$79.0 keV, using {\it Swift}/XRT and {\it NuSTAR} FPMA and FPMB data on two different occasions. The joint spectral analysis shows the presence of a broad iron line and a reflection hump around 30 keV, which can be well modeled with the state-of-the-art relativistic reflection model {\tt relxilllpCp}. We constrained the inner disk radius for both epochs and found that the inner disk tends to move inward with an increase in the mass accretion rate. A low-frequency QPO is observed at $1.20\pm0.04$ Hz during the first epoch E1, and the QPO frequency shifted to $1.55\pm0.04$ Hz in the second epoch E2. The time-resolved analysis reveals a hint of variation in the QPO frequency during the second epoch (E2). We observed a strong positive correlation between the QPO frequency and the power-law index.
The current work reports for the first time the {\it NuSTAR} detection of a low-frequency QPO, at $\sim 1.20\pm0.04$ Hz, in the LMXB GRS 1716--249. Earlier studies of the 1993 outburst of GRS 1716--249 detected a QPO at $\sim$0.04 Hz which slowly drifted to 0.3 Hz by the end of the observation \citep{van96}, but there were no QPO detections at frequencies $\ge$1 Hz.
We observed that the QPO frequency ($\nu_{QPO}$) increases significantly as the source moves from epoch E1 to E2 and the total flux follows the same trend, which shows a positive correlation between the flux and $\nu_{QPO}$ \citep{Ingram2011}. A positive correlation has also been observed between the photon-index and $\nu_{QPO}$. The changing QPO frequency could be connected to the inner edge of the accretion disk \citep{Takizawa1997}. If the total flux is connected to the mass accretion rate, this signifies that the inner edge of the accretion disk may move inward with an increasing mass accretion rate.
The correlation between $\Gamma$ and $\nu_{QPO}$ is clearly demonstrated by the time-resolved spectroscopy over both epochs (E1 \& E2). The gradual increase of $\Gamma$ over the total 15 segments (Table \ref{tab:tab3}) from E1 to E2 signifies a gradual decrease of the non-thermal emission. The cross-correlation study between $\nu_{QPO}$ and the power-law index $\Gamma$ (Fig. \ref{fig:fig9}) clearly establishes a scenario where the drift of $\nu_{QPO}$ towards higher frequencies and the increase in $\Gamma$ correspond to a decrease of the non-thermal flux (96.97 \% to 95.45 \%) with an increase of the disk flux (3.30 \% to 5.71 \%), as evident from Table \ref{tab:tab2}. Therefore, it is evident that the Comptonizing plasma is diminishing as the source evolves towards the soft state.
A strong correlation between $\Gamma$ and $\nu_{QPO}$ has already been observed in a number of X-ray binaries, for example 4U 1608$-$52 and 4U 0614$+$091 \citep{Kaaret1998}, XTE J1550$-$564 and GRO J1655$-$40 \citep{Sobczak2000}, Cyg X-1 \citep{Shaposhnikov2006} and Cyg X-2 \citep{Titarchuk2007}. \citet{Sobczak2000} observed a positive correlation for XTE J1550$-$564 and a negative correlation for GRO J1655$-$40. The authors explained that an increase in the mass accretion rate increases the QPO frequency, and the contribution from the power-law should be more than 20 \% for QPOs to be present. \citet{Sobczak2000} have also suggested that the opposite correlation in XTE J1550$-$564 and GRO J1655$-$40 could be due to different regions of QPO generation in the two sources. To explain the correlations, \citet{Titarchuk2004} proposed the transition layer (TL) model. According to this model, a compact bounded coronal region is formed as a natural consequence of the adjustment of the Keplerian disk flow to the innermost sub-Keplerian boundary conditions near the central region. It ultimately ends up forming a TL between the adjustment radius and the innermost boundary. However, this mechanism is unable to produce the inclination-dependent QPOs obtained by \citet{Motta15}.
The observed correlation can be successfully explained on the basis of the Lense-Thirring (LT) precession model \citep{Ste98, Ing09} and the truncated disc model \citep{Esi97, Pou97, Done2007}. When the mass accretion rate grows rapidly, the truncation radius of the disk slowly moves towards the compact object, leading to an increase of the LT precession frequency. The QPO frequency and its evolution are governed by the size of, as well as the fluctuations in, the truncation radius (an increase in QPO frequency corresponds to a decrease of the truncation radius) \citep{Motta17}. The spectral changes during the source transition can also be explained in this framework. When the mass accretion rate increases, the truncated disk gradually moves inward towards the compact object. The disk component becomes stronger and causes greater cooling of the hot inner flow by the cool photons from the disk, resulting in a soft spectrum \citep{Motta15, Zhang2015}.
In our spectral analysis, we can estimate the inner radius of the accretion disk (R$_{in}$) in two different ways. The {\tt relxilllpCp} model directly provides R$_{in}$, which is found to be 14.78$^{+10.69}_{-14.78}$ r$_{g}$ and 12.88$^{+8.50}_{-3.74}$ r$_{g}$ for epochs E1 and E2, respectively. Alternatively, R$_{in}$ can be calculated from the normalization of the {\tt diskbb} model. Using the disc inclination obtained from our spectral analysis, taking the mass to be 4.9 M$_\odot$ \citep{Mas96} and the distance to be 2.4 kpc \citep{del94}, R$_{in}$ corresponding to epochs E1 and E2 was found to be 0.66$^{+0.13}_{-0.13}$ r$_{g}$ and 1.12$^{+0.12}_{-0.13}$ r$_{g}$, respectively. We find that the R$_{in}$ values calculated from the normalization of the {\tt diskbb} model are smaller than the values estimated from the {\tt relxilllpCp} model. One possible reason for this discrepancy is that the {\tt relxilllpCp} model is not fully self-consistent, as it uses the {\tt nthComp} model with the seed photon temperature (T$_{bb}$) fixed at 0.05 keV. This value of T$_{bb}$ is uncharacteristically low for a stellar-mass black hole accretion disk. As a result, a higher temperature component becomes necessary to represent a part of the existing cold disk. Secondly, \citet{Merloni2000} carried out a critical analysis of the usual interpretation of the multicolor disc model parameters for black hole candidates in terms of the inner radius and temperature of the accretion disc, and reported that the {\tt diskbb} model underestimates the inner disk radius. \citet{Merloni2000} suggested that when the disk contribution is very low and the spectrum is mostly dominated by non-thermal photons, the radius inferred by {\tt diskbb} is inaccurate, and explained that it is very difficult to determine the exact shape of the accretion disk spectrum in such a case. \citet{Kubota1998} suggested a correction factor to improve the value of R$_{in}$ calculated from {\tt diskbb}.
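The second estimate follows from the standard {\tt diskbb} normalization, N = (R$_{in}$[km]/D$_{10\,\rm kpc}$)$^{2}$ cos i. A minimal sketch of the conversion to gravitational radii is given below; it deliberately ignores the spectral hardening and inner-boundary corrections discussed above, so it only approximately reproduces the quoted values.
\begin{verbatim}
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33     # cgs units

def rin_from_diskbb(norm, incl_deg, d_kpc, m_bh):
    # Apparent inner radius from N = (R_in[km]/D_10kpc)^2 cos(i),
    # expressed in units of the gravitational radius r_g = GM/c^2
    r_km = np.sqrt(norm / np.cos(np.radians(incl_deg))) * d_kpc / 10.0
    r_g  = G * m_bh * Msun / c**2 / 1e5       # r_g in km
    return r_km / r_g

print(rin_from_diskbb(132.18, 54.19, 2.4, 4.9))   # epoch E1
print(rin_from_diskbb(437.70, 46.59, 2.4, 4.9))   # epoch E2
\end{verbatim}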
The presence of a type-C QPO and the broad-band spectral fitting at both epochs (E1 \& E2) confirm that the power-law component dominates over the disk contribution, indicating the intermediate state of GRS 1716--249. Here, the soft photons from the accretion disk are Comptonized to higher energies by a hot corona of thermal electrons through the inverse Compton effect \citep{Haardt1991, Haardt1993}. The two epochs show significant spectral changes, leading to a successive softening of the source, as inferred from the broad-band spectra at E1 and E2. The broad Fe-K$\alpha$ line at 6.4 keV is an indication of a spinning black hole \citep{Miller2009}. The observed iron line extends from nearly 6 to 8 keV, along with a dip around 11 keV, showing that both wings of the Fe-K$\alpha$ line are stretched. It is believed that the broadening of the red wing might be due to Doppler and General Relativity (GR) effects, while the blue wing is elongated because of scattering of the photons within the hot inner flow \citep{Fabian2000, Miller2007, Miller2017}.
Reflection features have already been detected in a number of X-ray binaries. \citet{Done1999} detected significant relativistic smearing of the reflected spectrum in Cyg X--1 and, using the X-ray spectrum, simultaneously constrained the ionization state and the amount of relativistic smearing. They analyzed observations from three different missions, namely the {\it EXOSAT} GSPC, {\it Ginga} and {\it ASCA}. They implemented the {\tt PEXRIV} model in {\tt XSPEC} to fit the reflection spectrum, while the {\tt DISCLINE} model in {\tt XSPEC} was used to account for the relativistic smearing. They fitted the spectrum for different values of the inclination angle and iron abundance; during their entire analysis, the photon index ranged from 1.44 to 1.91. They also reported $\xi$ and R$_{in}$ for various observations. Our results are found to be consistent with theirs: during our observations, $\Gamma$ varies from 1.77 to 1.86 (Tables \ref{tab:tab2} and \ref{tab:tab3}). On many occasions, the values of the inner disk radius and accretion disk ionization parameter reported by \citet{Done1999} are found to be close to the values observed in this work.
In a separate study, \citet{Miller2002} resolved the Fe K$\alpha$ line region in Cyg X-1 through spectral analysis. The {\it Chandra} X-ray Observatory detected the source in an intermediate spectral state during these observations. The authors observed a narrow line around 6.42 keV, along with a broad line feature at $\sim$ 5.82 keV and a smeared edge at 7.3 keV. These results support our findings, as that study reported a photon index around 1.8, close to the value we found during E1, and the value of R$_{in}$ reported there is similar to the value of R$_{in}$ found during E1 in this work. Two distinct lines and a smeared edge are not observed in our study, either because of the superior spectral resolution of {\it Chandra} in comparison to {\it NuSTAR} \citep{Canizares2000, Harrison2013}, or because they may not be present in GRS 1716--249.
Recently, there have been several attempts to find a correlation between QPOs and iron lines to constrain the origin of the QPOs \citep{Miller2005, Ingram2015}. The basic idea behind such attempts is that both the QPOs and the Fe line possibly originate from the inner part of the accretion disk; therefore, they can provide independent signatures of the geometry of the inner disk. It has been found that the variation in the QPO frequency and the QPO phase is correlated with the variation of the iron line centroid energy \citep{Ingram2016, Ingram2017}. These studies support a geometric origin of the QPOs. However, our ability to explore any such correlation between the QPO and the Fe line is restricted by the limited extent of our observations. In future work, we plan to use better timing and multi-epoch data from {\it ASTROSAT} \citep{Antia2017, Verdhan2017}, spanning several months, along with our current results, to study the QPOs with rigorously updated physical models.
\begin{figure}
\centering
\includegraphics[scale=0.32,angle=-90]{Images/QPO_Freq_gamma_07_and_10Apr17_Comb_24May2019.eps}\vspace{1.5em}
\caption{Variation of the photon-index ($\Gamma$) with QPO frequency. The solid line shows the best-fit linear dependence. The Pearson correlation coefficient ($r$) for $\nu_{QPO}-\Gamma$ is 0.70 (p = 8.78$\times$10$^{-4}$).}
\label{fig:fig9}
\end{figure}
\section*{Acknowledgements}
This research has made use of software and/or data obtained through the High Energy Astrophysics Science Archive Research Center (HEASARC) online service, provided by the NASA/Goddard Space Flight Center, and the {\it Swift} data center. We thank the {\it NuSTAR} team for making the {\it NuSTAR} data public. {\it MAXI} data were obtained from the {\it MAXI} team, RIKEN, JAXA. The authors gratefully acknowledge the anonymous referee for constructive comments that improved the paper. The authors are also thankful to the PIs for proposing these observations. JC and SC are also grateful to Prof. H. M. Antia, Prof. A. R. Rao and Prof. S. Bhattacharyya for constructive discussions about diagnostics and interpretations.
\section{Introduction}
The discovery of the Higgs boson with a mass around 126 GeV, reported by the CMS and ATLAS collaborations \cite{Higgsd1,Higgsd2}, opens up a new era in understanding the origin of electroweak (EW) symmetry breaking. However, questions regarding the theory behind the observed spin-0 particle still need to be addressed. Even though the recent experimental results obtained in the $ZZ$ \cite{ZZ1,ATLAS:2013nma}, $WW$ \cite{WW1,ATLAS:2013wla}, $b\bar{b}$ \cite{Chatrchyan:2013zna,ATLAS:2012aha}, $\tau\tau$ \cite{Chatrchyan:2014nva,tau2}, and $\gamma\gamma$ \cite{CMS:2014ega,ATLAS:2013oma} decay channels are compatible with the Standard Model (SM), there is still room for theories beyond the SM that can accommodate more than one Higgs boson with a non-standard Higgs structure. These models are motivated by problems of the SM such as the naturalness of the Higgs mass and the lack of a dark matter candidate.
Supersymmetric models remain among the best motivated extensions of the SM. The Constrained Minimal Supersymmetric Standard Model (CMSSM) \cite{cMSSM} is a well studied model with a minimal set of parameters and a dark matter candidate. Recent studies \cite{cmssmstd,cmssmfit} have shown that, within CMSSM, it is difficult to generate a Higgs boson with mass around 126 GeV consistent with all experimental constraints from colliders as well as with the observed dark matter relic abundance and muon anomalous magnetic moment. Indeed the measured Higgs boson mass can be achieved only for large values of the CMSSM dimensional parameters, $m_0$ and $m_{1/2}$. The experimentally viable regions of parameter space result in a multi-TeV sparticle spectrum that generates a fine-tuning $<0.1\%$ \cite{cmssmfinet}.
In the general MSSM, the desired Higgs mass can be achieved with the help of radiative corrections for a large mixing parameter, $A_t$, which in turn generates a large splitting between the two physical stops \cite{mssmsd}, and/or large stop soft squared masses. It was shown in \cite{mssmft} that the MSSM parameter regions allowed by the experimental data require tuning smaller than $1\%$, depending on the definition of fine-tuning. Such a serious fine-tuning can be alleviated by having additional tree-level contributions to the Higgs mass, given that in MSSM the tree-level lightest Higgs is restricted to be lighter than $m_Z$, so that sizable quantum corrections are no longer required. In order to have additional contributions to the tree-level lightest Higgs mass, one can extend the MSSM field content by adding a singlet \cite{NMSSMft} and/or a triplet \cite{Espinosa:1991wt,Espinosa:1991gr,DiChiara:2008rg,chargedH,tripletft2,Huitu:1997rr,Zhang:2008jm,Kang:2013wm} chiral superfield(s).
Another advantage of singlet and triplet extensions of MSSM concerns CP symmetry breaking. Any softly broken low energy supersymmetric theory provides general soft breaking terms with complex phases which are necessary to explain the baryon asymmetry of the universe along with the CKM matrix of the SM \cite{baryon}. However, such explicit CP violation scenarios can lead to overproduction of CP violation that is stringently constrained by electric dipole moments (EDMs) \cite{EDMs}. This overproduction problem can be naturally evaded by breaking CP symmetry spontaneously. In the case of MSSM, spontaneous CP-violation is not feasible even at higher orders because of the existing experimental bounds on the Higgs masses \cite{SCPVMSSM}. The spontaneous CP violation can be achieved in the extended models with new singlet \cite{SCPVsinglet} or triplet superfield(s) \cite{SCPVtriplet}.
In light of fine-tuning considerations as well as the motivation of having spontaneous CP violation, here we consider the Triplet Extended Supersymmetric Standard Model (TESSM)\cite{Espinosa:1991wt,Espinosa:1991gr}. The model we consider here possesses a $Y=0$ SU(2) triplet chiral superfield along with the MSSM field content, where the extended Higgs sector generates additional tree-level contributions to the light Higgs mass and moreover may enhance the light Higgs decay rate to diphoton \cite{DiChiara:2008rg,tessm1,Delgado:2012sm,Delgado:2013zfa}.
To assess the viability of TESSM for the current experimental data, we perform a goodness of fit analysis, by using the results from ATLAS, CMS, and Tevatron on Higgs decays to $ZZ,WW,\gamma\gamma,\tau\tau,b\bar{b}$, as well as the measured $B_s\to X_s \gamma$ branching ratio, for a total of 59 observables. Several similar fits have been performed for MSSM \cite{Arbey:2012dq,Bechtle:2012jw,Djouadi:2013uqa,Buchmueller:2013rsa} and for NMSSM \cite{Gunion:2012zd,D'Agnolo:2012mj}, but to the best of our knowledge no such goodness of fit analysis of TESSM is present in the literature. As free parameters we use Higgs coupling coefficients associated with each SM field, as well as two extra parameters that take into account the contribution of the non-SM charged and coloured particles of TESSM to the loop induced Higgs decays to diphoton and digluon, respectively. As explained later in the text, in the viable region of the TESSM parameter space the $W$ and $Z$ bosons have a SM-like coupling to the light Higgs, and, in the same region, the upper and lower components of EW SM fermion doublets have coupling coefficients which are ultimately functions only of $\tan\beta$, the ratio between the vacuum expectation value(s) (vev) of the up and down Higgs doublets. The total number of free parameters of TESSM for the fit we perform is therefore reduced to just three, plus one to fit the $\mathcal{B}r(B_s\to X_s \gamma)$ data.
An important result of the fit is that, for viable data points in the TESSM parameter space, we observe not only an enhancement of the Higgs decay to diphoton, as previously observed in \cite{DiChiara:2008rg,Delgado:2012sm,Delgado:2013zfa,Arina:2014xya}, but also a suppression of the same decay rate. This is due to the fact that we also scan negative values of mass and coupling parameters, for which the light chargino mass and its coupling to the light Higgs can have the same sign. This, as is the case for the top quark, produces a destructive interference between the $W$ and triplino-like chargino contributions to the Higgs decay to diphoton.
In this article we also consider the low energy observable $\mathcal{B}r(B_s\to X_s \gamma)$ to constrain the model and improve the relevance of the fit we perform. In general, the $B$ meson observables, {\it e.g.} $\mathcal{B}r(B_s\to X_s \gamma)$ and $\mathcal{B}r(B_s\to \mu^+\mu^-)$, are used to set constraints on the parameter space of theories beyond the SM. It has been shown that, for low values of $\tan{\beta}$, the flavour bounds obtained from $\mathcal{B}r(B_s\to X_s\gamma)$ are relevant, while the constraints from $\mathcal{B}r(B_s\to \mu^+\mu^-)$ play a decisive role only for $\tan\beta \mathrel{\hbox{\rlap{\hbox{\lower5pt\hbox{$\sim$}}}\hbox{$>$}}} 10$ \cite{Btomumu}. As we focus on the low $\tan{\beta}$ region ($\mathrel{\mathpalette\@versim<}$10), given that the contribution of the triplet field to the Higgs mass grows as $\sin2\beta$, we study here only $\mathcal{B}r(B_s\to X_s \gamma)$. In \cite{tessm1} we already considered this constraint in the context of the lightest charged Higgs and the lightest chargino, as they dominantly contribute to the decay. Here we improve our analysis by considering the contributions from all charged Higgses and charginos at next-to-leading order (NLO).
The rest of the paper is organized as follows. In the next Section, we give a brief description of the model. In Section~\ref{HmConst} we discuss the minimum of the TESSM scalar potential, which leads to an extra contribution to the tree-level lightest Higgs mass. In the same Section we describe the method we use to evaluate numerically the radiative corrections and find data points with a Higgs mass around 126 GeV that satisfy the current direct search limits on new particles. Section~\ref{finetuningTESSM} is devoted to the discussion of the fine-tuning associated to viable data points in TESSM. By running the dimensionless couplings with two loop beta functions, we show that there is a tension between the requirement of perturbativity at high scales and the possibility to reduce the amount of fine-tuning typical of MSSM. In Section~\ref{Hphy} we consider the Higgs decay modes, especially the Higgs decay to two photons, for which our results partially differ from the ones obtained previously. In Section~\ref{btsgsec} we present the results of the calculation of $\mathcal{B}r(B_s\to X_s \gamma)$ at NLO in TESSM. Section~\ref{fitsect} is dedicated to the goodness of fit analysis of TESSM considering different experimental constraints from LHC and Tevatron along with $\mathcal{B}r(B_s\to X_s \gamma)$. In Section~\ref{concsec} we finally offer our conclusions.
\section{The Model} \label{modintr}
The field content of TESSM is the same as that of the MSSM with an additional field in the adjoint of SU$(2)_L$, the triplet chiral superfield $\hat T$, with zero hypercharge ($Y=0$), where the scalar component $T$ can be written as
\be
T=\left(\begin{array}{cc}\frac{1}{\sqrt{2}} T^0 & T^+ \\T^- & -\frac{1}{\sqrt{2}}T^0\end{array}\right)\ .
\ee
The renormalizable superpotential of TESSM includes only two extra terms as compared to MSSM, given that the cubic triplet term is zero:
\be
W_{\rm TESSM}=\mu_T {\rm Tr}(\hat T \hat T) +\mu_D \hat H_d\!\cdot\! \hat H_u + \lambda \hat H_d\!\cdot\! \hat T \hat H_u + y_t \hat U \hat H_u\!\cdot\! \hat Q - y_b \hat D \hat H_d\!\cdot\! \hat Q- y_\tau \hat E \hat H_d\!\cdot\! \hat L\ ,
\label{SP}
\ee
where "$\cdot$" represents a contraction with the Levi-Civita symbol $\epsilon_{ij}$, with $\epsilon_{12}=-1$, and a hatted letter denotes the corresponding superfield. Note that the triplet field couples to the Higgs doublets through the coupling $\lambda$. The soft terms corresponding to the superpotential above and the additional soft masses can be written similarly\footnote{We use the common notation using a tilde to denote the scalar components of superfields having a SM fermion component.} as
\bea\label{softV}
V_S&=&\left[\mu_T B_T {\rm Tr}(T T) +\mu_D B_D H_d\!\cdot\! H_u + \lambda A_T H_d\!\cdot\! T H_u + y_t A_t \tilde{t}^*_R H_u\!\cdot\! \tilde{Q}_L + h.c.\right] \nonumber\\
& & + m_T^2 {\rm Tr}(T^\dagger T) + m_{H_u}^2 \left|H_u\right|^2 + m_{H_d}^2 \left|H_d\right|^2 + \ldots \ ,
\eea
where we have included only the top squark cubic term, among those in common with MSSM\footnote{The neglected cubic terms are not necessary for phenomenological viability in the analysis we perform in this work.}, and wrote explicitly the squared soft mass terms only for the three scalar fields with neutral components. In the following we assume all the coefficients in the Higgs sector to be real, so as to conserve CP symmetry. We moreover choose real vevs for the scalar neutral components, so as to correctly break the EW symmetry SU$(2)_L\times$ U$(1)_Y$:
\be\label{vev}
\langle T^0\rangle=\frac{v_T}{\sqrt{2}}\ ,\quad\langle H_u^0\rangle=\frac{v_u}{\sqrt{2}}\ ,\quad\langle H_d^0\rangle=\frac{v_d}{\sqrt{2}}\ ,
\ee
which generate the EW gauge bosons masses
\be
m_W^2=\frac{1}{4} g_L^2 \left( v^2 + 4 v_T^2 \right)\ ,\quad m_Z^2=\frac{1}{4} \left( g_Y^2+ g_L^2 \right) v^2\ ,\quad v^2=v_u^2+v_d^2\ .
\ee
From these masses we find that there is a non-zero tree-level contribution to the EW $\alpha_eT$ parameter \cite{Peskin:1991sw,Burgess:1993vc}:
\be
\alpha_e T=\frac{\delta m_W^2}{m_W^2}=\frac{4 v_T^2}{v^2}\ ,
\ee
with $\alpha_e$ being the fine structure constant. The measured value of the Fermi coupling $G_F$ and the upper bound on the EW parameter $T$ ($T\leq 0.2$ at 95\% CL) \cite{Beringer:1900zz} then impose
\be
v_w^2=v^2+4 v_T^2=\left(\rm 246~GeV\right)^2\ ,\quad v_T \mathrel{\hbox{\rlap{\hbox{\lower5pt\hbox{$\sim$}}}\hbox{$<$}}} 5~{\rm GeV}\ .
\ee
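Numerically, the bound on $v_T$ follows directly from the expression for $\alpha_e T$ above; a minimal check, assuming $\alpha_e \simeq 1/128$ at the EW scale, is:
\begin{verbatim}
# Minimal check of the triplet vev bound, assuming alpha_e ~ 1/128
# at the EW scale and T <= 0.2 at 95% CL.
from math import sqrt

alpha_e, T_max, v_w = 1.0 / 128.0, 0.2, 246.0
v_T_max = 0.5 * v_w * sqrt(alpha_e * T_max)  # from alpha_e*T = 4 v_T^2/v^2
print(v_T_max)                               # -> ~4.9 GeV
\end{verbatim}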
Such a small value of the triplet vev evidently does not allow the triplet extension to solve the MSSM $\mu$ problem. Thus, the $\mu_D$ term is defined separately in the superpotential Eq. \eqref{SP}. Given that the triplet vev can still generate small differences in the light Higgs couplings to SM particles as compared to MSSM, throughout this paper we take a small but non-zero fixed value for $v_T$:
\be
v_T= 3\sqrt{2} ~{\rm GeV}\ .
\ee
Having defined a viable EW symmetry breaking minimum, in the next Section we proceed to determine the mass spectrum of TESSM.
\section{Higgs Mass \& Direct Search Constraints} \label{HmConst}
After EW symmetry breaking, the stability conditions for the full potential are defined by
\bea
\partial_{a_i} V|_{\rm vev}&=&0\ ,\quad V=V_D+V_F+V_S\ , \quad \langle a_i\rangle = v_i \ , \quad i=u,d,T\ ; \nonumber\\
H^0_u&\equiv& \frac{1}{\sqrt{2}}\left(a_u+i b_u \right)\ , \quad H^0_d\equiv \frac{1}{\sqrt{2}}\left(a_d+i b_d \right)\ , \quad T^0\equiv \frac{1}{\sqrt{2}}\left(a_T+i b_T \right)\ ,
\eea
where $V_D$ and $V_F$ are the $D$ and $F$ terms of the potential, respectively, while $V_S$ is given in Eq.~\eqref{softV}, and $a_i$ and $b_i$ are both real. The conditions above allow one to determine three of the Lagrangian free parameters:
\bea\label{stabV}
m_{H_u}^2&=&-\mu _D^2-\frac{g_Y^2+g_L^2}{8} \left(v_u^2-v_d^2\right)+B_D \mu _D \frac{v_d}{v_u}-\frac{\lambda^2}{4} \left(v_d^2+v_T^2\right)+\lambda \left(\mu
_D-\left(\frac{A_T}{2}+\mu _T\right)\frac{v_d}{v_u}\right) v_T\ ,\nonumber\\
m_{H_d}^2&=&-\mu _D^2+\frac{g_Y^2+g_L^2}{8} \left(v_u^2-v_d^2\right)+B_D \mu _D \frac{v_u}{v_d}-\frac{\lambda^2}{4} \left(v_u^2+v_T^2\right)+\lambda \left(\mu
_D-\left(\frac{A_T}{2}+\mu _T\right)\frac{v_u}{v_d}\right) v_T\ ,\nonumber\\
m_T^2&=&-\frac{\lambda ^2}{4} \left(v_d^2+v_u^2\right)-2 \mu _T \left(B_T+2 \mu _T\right)+\lambda \left(\mu _D \frac{v_d^2+v_u^2}{2 v_T}-
\left(\frac{A_T}{2}+\mu _T\right) \frac{v_d v_u}{v_T}\right) \ .
\eea
A simple condition that the remaining parameters have to satisfy for successful EW symmetry breaking is obtained by requiring the trivial vacuum at the origin to be unstable. By taking all the vevs to be zero, the requirement that one of the eigenvalues of ${\cal M}^2_{h^{0}}$, the neutral scalar squared mass matrix given in Eq.~\eqref{mns}, be negative, gives the condition
\be
B_D^2>\mu _D^2 \left(\frac{m_{H_d}^2}{\mu _D^2}+1\right) \left(\frac{m_{H_u}^2}{\mu _D^2}+1\right)\ .
\ee
When the condition above is satisfied, one can derive an important bound on the mass of the lightest neutral Higgs: given that the smallest eigenvalue of a $3\times 3$ Hermitian positive definite matrix, in this case ${\cal M}^2_{h^{0}}$, cannot be greater than the smaller eigenvalue of either of the $2\times 2$ submatrices on the diagonal, in the limit of large $B_D$ one obtains \cite{Espinosa:1991wt,Espinosa:1991gr}
\be\label{mhbnd}
m^2_{h^0_1}\leq m_Z^2 \left( \cos{2\beta} + \frac{\lambda^2}{g_Y^2+g_L^2} \sin{2\beta} \right)\ ,\quad \tan\beta=\frac{v_u}{v_d}\ .
\ee
The result in Eq.~\eqref{mhbnd} shows the main advantage and motivation of TESSM over MSSM: for $\tan\beta$ close to one and a large $\lambda$ coupling it is in principle possible in TESSM to generate the experimentally measured light Higgs mass already at tree-level \cite{DiChiara:2008rg}, which would imply no or negligible Fine-Tuning (FT). Indeed $\lambda\sim1$ and $\tan\beta\sim 1$ already saturate the bound in Eq.~\eqref{mhbnd}. Such a large value of $\lambda$, though, in general becomes nonperturbative at the GUT scale, and therefore also for TESSM, like for MSSM, radiative corrections are necessary to generate a light Higgs mass equal to 125.5~GeV \cite{Aad:2012tfa,Chatrchyan:2012ufa}.
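As a quick numerical illustration of the bound in Eq.~\eqref{mhbnd}, assuming $g_L\simeq 0.65$ and $g_Y\simeq 0.36$ at the EW scale:
\begin{verbatim}
# Tree-level upper bound on the light Higgs mass, Eq. (mhbnd),
# assuming g_L ~ 0.65 and g_Y ~ 0.36 at the EW scale.
from math import sqrt, sin, cos, atan

mZ, gY, gL = 91.19, 0.36, 0.65

def mh_bound(lam, tanb):
    b = atan(tanb)
    return mZ * sqrt(cos(2*b) + lam**2 / (gY**2 + gL**2) * sin(2*b))

print(mh_bound(1.0, 1.0))  # -> ~123 GeV: lambda ~ 1, tan(beta) ~ 1
                           #    nearly saturates the measured value
\end{verbatim}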
\subsection{One Loop Potential}
The one loop contribution to the scalar masses is obtained from the Coleman-Weinberg potential \cite{Coleman:1973jx}, given by
\begin{align}\label{VCW}
V_{\rm CW}=\frac{1}{64\pi^2}{\rm STr}\left[ \mathcal{M}^4
\left(\log\frac{\mathcal{M}^2}{\mu_r^2}-\frac{3}{2}\right)\right],
\end{align}
where $\mathcal{M}^2$ are field-dependent mass matrices in which the fields are not replaced with their vevs nor the soft masses with their expressions at the EW vacuum, $\mu_r$ is the renormalization scale, and the supertrace includes a factor of $(-1)^{2J}(2J+1)$, with the spin degrees of freedom appropriately summed over. The corresponding one loop contribution to the neutral scalar mass matrix, $\Delta{\cal M}^2_{h^{0}}$, is given by \cite{Elliott:1993bs,DiChiara:2008rg}
\begin{align}
(\Delta\mathcal{M}^2_{h^0})_{ij}
&=\left.\frac{\partial^2 V_{\rm{CW}}(a)}{\partial a_i\partial a_j}\right|_{\rm{vev}}
-\frac{\delta_{ij}}{\langle a_i\rangle}\left.\frac{\partial V_{\rm{CW}}(a)}{\partial a_i}\right|_{\rm{vev}}
\label{1Lmha}\\
&=\sum\limits_{k}\frac{1}{32\pi^2}
\frac{\partial m^2_k}{\partial a_i}
\frac{\partial m^2_k}{\partial a_j}
\left.\ln\frac{m_k^2}{\mu_r^2}\right|_{\rm{vev}}
+\sum\limits_{k}\frac{1}{32\pi^2}
m^2_k\frac{\partial^2 m^2_k}{\partial a_i\partial a_j}
\left.\left(\ln\frac{m_k^2}{\mu_r^2}-1\right)\right|_{\rm{vev}}
\nonumber\\
&\quad-\sum\limits_{k}\frac{1}{32\pi^2}m^2_k
\frac{\delta_{ij}}{\langle a_i\rangle}
\frac{\partial m^2_k}{\partial a_i}
\left.\left(\ln\frac{m_k^2}{\mu_r^2}-1\right)\right|_{\rm{vev}}\ ,\quad i,j=u,d,T\ ;
\label{1Lmh}
\end{align}
where the second term in Eq.~\eqref{1Lmha} takes into account the shift in the minimization conditions, and $\{m^2_k\}$ is the set of eigenvalues of the field dependent mass matrices, which for the reader's convenience are given in the Appendix~\ref{massmapp}. Though the supertrace expressions are dropped in Eq.\eqref{1Lmh} for simplicity, the proper coefficient for each mass eigenvalue is taken into account in the calculation. Given that we include terms mixing the gauginos and higgsinos in the neutralino mass matrix, the mass matrices that enter Eq.\eqref{1Lmh} through their eigenvalues can be as large as $5\times 5$: to simplify the task of finding the one loop mass of the neutral scalars, we evaluate the derivatives in Eq.~\eqref{1Lmh} numerically at randomly assigned values for the independent parameters and for finite, though small, differentials $\Delta a_i$ around their respective vevs $v_u,v_d,v_T$, at a renormalization scale $\mu_r=m_Z$. For each randomly chosen point in the TESSM parameter space we check that, by changing the size of $\Delta a_i$ relative to $v_i$, the values of the neutral scalar masses are stable within a 0.1\% error or less.
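A minimal sketch of this numerical procedure, for a generic callable potential V(a) and including the tadpole-shift term of Eq.~\eqref{1Lmha}, is the following (an illustration, not the code actually used):
\begin{verbatim}
import numpy as np

def one_loop_mass_matrix(V, vevs, rel=1e-4):
    # Central finite differences for Eq. (1Lmha): Hessian of the
    # one loop potential minus the tadpole-shift term; stability is
    # checked by varying rel, as described in the text.
    v = np.asarray(vevs, float)
    n, h = len(v), rel * np.abs(v)
    M2 = np.zeros((n, n))
    for i in range(n):
        ei = np.eye(n)[i] * h[i]
        dVi = (V(v + ei) - V(v - ei)) / (2 * h[i])  # first derivative
        for j in range(n):
            ej = np.eye(n)[j] * h[j]
            M2[i, j] = (V(v + ei + ej) - V(v + ei - ej)
                        - V(v - ei + ej) + V(v - ei - ej)) / (4 * h[i] * h[j])
        M2[i, i] -= dVi / v[i]                      # tadpole shift
    return M2
\end{verbatim}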
To evaluate the phenomenological viability of TESSM we proceed by scanning randomly the parameter space for points that give the correct light Higgs mass while satisfying the constraints from direct searches of non-SM particles. The region of parameter space that we scan is defined by:
\bea\label{pscan}
&&1\leq t_{\beta }\leq 10\ ,\ 5 \,\text{GeV}\leq \left|\mu _D,\mu _T\right|\leq 2 \,\text{TeV}\ ,\ 50 \,\text{GeV}\leq \left|M_1,M_2\right|\leq 1 \,\text{TeV}\ ,\nonumber\\
&& \left| A_t,A_T,B_D,B_T\right|\leq 2 \,\text{TeV}\ ,\ 500 \,\text{GeV}\leq m_Q,m_{\tilde{t}},m_{\tilde{b}}\leq 2 \,\text{TeV}\ ,
\eea
with the last three being, respectively, the left- and right-handed squark squared soft masses. The value of $\lambda$ at each random point in the parameter space is determined by matching the lightest Higgs mass at one loop to 125.5~GeV: the matching is achieved by an iterative process that starts by assigning an initial random value $\left|\lambda\right|\leq 2$ to calculate the one loop contribution to the lightest Higgs mass $m_{h^0_1}^2$, solving for the value of $\lambda$ in the tree level contribution needed to match the measured light Higgs mass, using this value of $\lambda$ in place of the initial random value to calculate $m_{h^0_1}^2$, and repeating the process until $\lambda$ remains constant after the next iteration. We imposed no constraint on the sign of $\lambda$. The remaining free parameters of TESSM are of little relevance for the observables we consider in the rest of this paper (Higgs production and decay rates and $B_s\rightarrow X_s\gamma$ branching ratio), and can therefore be considered to be fixed to values consistent with the current experimental limits on new physics. Having implemented the setup outlined above, we scan randomly the parameter space defined in Eq.~\eqref{pscan} and collect 13347 points that satisfy the constraints
\bea
m_{h_1^0}=125.5\pm 0.1\, {\rm GeV}\ ;\ m_{A_{1,2}},\ m_{\chi^0_{1,2,3,4,5}}&\geq & 65\,{\rm GeV}\ ;\nonumber\\
m_{h^0_{1,2}} , m_{h^\pm_{1,2,3}}, m_{\chi^\pm_{1,2,3}}\geq 100\,{\rm GeV} \ ;\ m_{\tilde{t}_{1,2}},m_{\tilde{b}_{1,2}}&\geq & 650\,{\rm GeV}\ .
\eea
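The iterative determination of $\lambda$ described above can be sketched as follows, with hypothetical helpers {\tt m2\_loop(lam)}, returning the one loop contribution to $m_{h^0_1}^2$ at the given parameter point, and {\tt lam\_from\_tree(m2)}, returning the value of $\lambda$ that reproduces a given tree-level mass squared:
\begin{verbatim}
def match_lambda(m2_loop, lam_from_tree, lam, target=125.5,
                 tol=1e-4, itmax=100):
    # Fixed-point iteration: adjust lambda until the tree-level plus
    # one loop light Higgs mass matches the target value.
    for _ in range(itmax):
        lam_new = lam_from_tree(target**2 - m2_loop(lam))
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = lam_new
    return None  # iteration did not converge for this point
\end{verbatim}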
The experimental bounds \cite{Beringer:1900zz} on the mass of pseudoscalars and neutralinos are actually less tight than the ones above, but we prefer to avoid in this general study the phenomenological complications of invisible decays of the light Higgs, which are though relevant for dark matter \cite{Arina:2014xya}. In Section~\ref{finetuningTESSM} we impose additional, coupling dependent constraints on the heavy neutral Higgses. Before doing that, in the next Section we take up the task of studying the running of the coupling constants at high energy, and require that those couplings stay perturbative all the way up to $\Lambda_{\rm UV}$, a UV scale suitable for TESSM. This requirement, in turn, imposes a limit on the minimum amount of FT that TESSM can achieve.
\section{Perturbativity vs Fine-Tuning}
\label{finetuningTESSM}
In the parameter space scan we allow $\lambda$ to take up absolute values larger than 1, given that these generate a light Higgs mass that can easily match 125.5~GeV already at tree-level. Such large couplings, though, can easily diverge to infinity at high scales, making the perturbative treatment of the model inconsistent. We therefore calculate the two loop beta functions for the dimensionless couplings of the superpotential and the gauge couplings ($y_t,y_b,y_{\tau },\lambda ,g_3,g_2=g_L,g_1=\sqrt{5/6}\,g_Y$), for the first time for TESSM, and run each coupling from the renormalization scale $\mu_r=m_Z$ to the GUT scale, $\Lambda_{\rm GUT}=2\times 10^{16}$ GeV. Our results for two loop beta functions are presented in Appendix \ref{betas}.
For phenomenologically viable points, $y_t$ and $\lambda$ are the largest couplings at the $M_Z$ scale. It is important to notice that the one and two loop contributions to $y_t$ and $\lambda$ in general have numerically opposite signs close to the nonperturbative limit, so it happens that, rather than diverging to infinity, the couplings reach a fixed point somewhere above 2$\pi$. Given that this fixed point is an artifact of the truncated perturbative series arising close to the non-perturbative limit, we discard viable points for which any of the couplings reaches a value larger than $2\pi$ at $\Lambda_{\rm GUT}$. Because of the cancellation between the one loop and two loop contributions, $\lambda$ becomes non-perturbative at a value slightly larger than the corresponding value obtained with the one loop beta functions. Among the 13347 viable points collected with the random scan described in the previous section, only 7332, or about half, retain perturbativity at the GUT scale. Among these points, the maximum value of $\left|\lambda\right|$ is 0.85 (0.84 at one loop). Given that most of the viable perturbative points feature a value of $\left|\lambda\right|$ which is fairly smaller than 0.85, it is important to assess the amount of FT of TESSM at each of these points, and whether this represents an improvement over MSSM.
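Schematically, the running and the perturbativity check proceed as in the following sketch, for a generic callable returning the beta functions (loop factors included); this illustrates the procedure, and is not the code actually used:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def runs_perturbative(beta, g0, mZ=91.19, Lambda=2e16, gmax=2*np.pi):
    # Integrate dg/dlogQ = beta(g) from m_Z to the GUT scale;
    # beta(g) returns the (two loop) beta functions, loop factors included.
    sol = solve_ivp(lambda t, g: beta(g), (np.log(mZ), np.log(Lambda)),
                    g0, rtol=1e-8)
    return bool(np.all(np.abs(sol.y[:, -1]) < gmax))  # |g| < 2 pi at GUT
\end{verbatim}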
A simple estimate of FT in supersymmetry (SUSY) is given by the logarithmic derivative of the EW vev $v_w$ with respect to the logarithm of a given model parameter $\mu_p$ \cite{Ellis:1986yg,Barbieri:1987fn}: this represents the change of $v_w$ for a 100\% change in the given parameter, as defined below:
\be
\text{FT}\equiv \frac{\partial \log v_w^2}{\partial \log \mu_p ^2 \left(\Lambda\right) }\ ,\quad\mu_p ^2 \left(\Lambda\right) =\mu_p ^2 \left(M_Z\right)+\frac{\beta _{\mu_p ^2} }{16 \pi ^2} \log \left(\frac{\Lambda}{M_Z}\right)\ ,\quad
\beta_{\mu_p^2}=16 \pi^2 \frac{d\mu_p^2}{d\log Q}\ ,
\ee
where in parenthesis is the renormalisation scale of $\mu_p$. In MSSM $v_w$ shows its strongest dependence on $m_{H_u}^2$, which therefore produces also the largest value of FT: this is understandable given that the physical light Higgs is mostly of up type. The value of FT in $m_{H_u}^2$, which we calculate by deriving the one loop beta function of $m_{H_u}^2$, indeed happens to be largest in TESSM as well \footnote{The expression for the FT in $m_{H_d}^2$ becomes non-analytical at $\lambda\sim 0.5$, where there is a pole: excluding the vicinity of this point, for which FT is ill-defined, the largest values of FT indeed are associated to $m_{H_u}^2$.}:
\bea\label{FTdef}
{\rm FT} &=&\frac{\log\left(\Lambda /M_Z\right)}{16\pi \partial_{v_w^2}m_{H_u}^2}\left(6 y_t^2 A_t^2+3 \lambda^2 A_T^2+3 \lambda ^2 m_{H_d}^2+3 \lambda ^2 m_T^2+3 \lambda ^2 m_{H_u}^2-2 g_Y^2 M_1^2-6 g_L^2 M_2^2\right.\\
&+& 6 m_Q^2 y_t^2 + \left.6 m_{\tilde{t}}^2 y_t^2+6 m_{H_u}^2 y_t^2+ g_Y^2 \left(3 m_{\tilde{b}}^2-m_{H_d}^2-3 m_L^2+3 m_Q^2-6 m_{\tilde{t}}^2+m_{H_u}^2+3 m_{\tilde{\tau}}^2\right)\right) ,\nonumber
\eea
where the derivative in the denominator acts on the expression of $m_{H_u}^2$, Eqs.~\eqref{stabV}.
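As a cross-check of Eq.~\eqref{FTdef}, the definition of FT can also be evaluated numerically, given a hypothetical routine {\tt vw2(mu2)} that returns the EW vev squared obtained by solving the stability conditions for a given high-scale parameter:
\begin{verbatim}
import math

def fine_tuning(vw2, mu2, eps=1e-3):
    # FT = d log(v_w^2) / d log(mu_p^2), via central differences
    # in log(mu2); vw2(mu2) is a hypothetical EW-vacuum solver.
    up = vw2(mu2 * math.exp(eps))
    dn = vw2(mu2 * math.exp(-eps))
    return (math.log(up) - math.log(dn)) / (2.0 * eps)
\end{verbatim}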
In Fig.~\ref{lambdaFT1} we present the value of FT evaluated at $\Lambda_{\rm GUT}$, where in blue are the perturbative points, for which no dimensionless coupling exceeds $2\pi$ in absolute value, in yellow are 102 points that are non-perturbative only at one loop, while in red are the nonperturbative points, as determined by the same criterion: it is clear that while values of $\lambda(M_Z)\sim 1$ indeed produce smaller FT, these large values also drive TESSM into a non-perturbative regime. Noticeably, for $\lambda$ values larger than 1 the tree-level mass of the light Higgs easily exceeds 125.5~GeV, in which case a large quantum correction, which drives up FT, is actually necessary to cancel the excess in mass. It is important to point out that when $\lambda\mathrel{\hbox{\rlap{\hbox{\lower5pt\hbox{$\sim$}}}\hbox{$<$}}} 0.2$ it is possible to obtain small FT as long as $t_{\beta}$ is large.
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{lambdaFT.png}\hspace{0.1cm}
\caption{FT as a function of the triplet coupling $\lambda$: in (red) blue are the (non-perturbative) perturbative points, for which (some) no coupling exceeds $2\pi$ at $\Lambda_{\rm GUT}=2\times 10^{16}$ GeV. In yellow are the points which are perturbative for the two loop but not for the one loop beta functions.}
\label{lambdaFT1}
\end{figure}
For $\lambda(M_Z)\sim 1$, the coupling remains perturbative up to scales much higher than the one of $O({\rm TeV})$ tested at LHC. Taking a cutoff scale as high as the GUT scale is indeed less justifiable for TESSM than for MSSM, given that the triplet in the particle content spoils the unification of the gauge couplings at $\Lambda_{\rm GUT}$. Moreover, possible UV completions that generate spontaneous SUSY breaking in TESSM might well also alter the running of $\lambda$. For these reasons, in the following analysis we choose a less restrictive cutoff scale, $\Lambda_{\rm UV}=10^4$ TeV, which is approximately the highest scale tested experimentally through flavor observables \cite{Beringer:1900zz}. Among the 13347 scanned viable data points, 11244 retain perturbativity at $\Lambda_{\rm UV}$, featuring $|\lambda|\leq 1.34$. In Fig.~\ref{lambdaFT2} we plot the FT associated to each of these viable points as a function of $\tan\beta$, with a colour code showing the corresponding value of $|\lambda|$. Values of $\tan\beta$ close to 1 can be reached only for large values of $|\lambda|$ (greater than about 0.8), where the corresponding FT can be considerably smaller than for small values of $|\lambda|$, naively associated with MSSM-like phenomenology. In the same large $|\lambda|$ region, many data points suffer from large FT because $m_{h^0_1}$ at tree-level is actually much larger than 125.5 GeV, and so a large quantum correction is needed to achieve the right light Higgs mass value. For intermediate values of $|\lambda|$ (between about 0.5 and 0.8), small $\tan\beta$ solutions also exist in a few cases, but they lead to large FT. This is understandable because either $|\lambda|$ is large enough to generate most of the 125.5~GeV light Higgs mass at tree-level, or the stops need to be very heavy to compensate for the smallness of $\tan\beta$, which in turn increases FT.
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{tanbFT.png}\hspace{1cm}
\includegraphics[width=0.12\textwidth]{lambdacscale.png}\hspace{0.1cm}
\caption{FT as a function of $\tan\beta$: the region of small $\tan\beta$ and small FT is accessible only for values of $|\lambda|>0.8$.}
\label{lambdaFT2}
\end{figure}
This pattern is shown in Fig.~\ref{lambdaFT3}, where FT is plotted both as a function of the heavier stop mass and of $A_t$. It is interesting to notice that the viable region of small $\left|A_t\right|$ and small FT, like that of small $\tan\beta$, is accessible only for large values of $|\lambda|$, greater than about 0.8, where $m_{\tilde{t}_2}$ can be large. For small values of $|\lambda|$, $\left|A_t\right|$ needs to be large to generate the measured $m_{h_1^0}$.
\begin{figure}[htb]
\includegraphics[width=0.46\textwidth]{mstFT.png}\hspace{1cm}
\includegraphics[width=0.46\textwidth]{AtFT.png}\hspace{0.1cm}
\caption{FT as a function, respectively, of the heavier stop mass $m_{\tilde{t}_2}$ (left panel) and the cubic stop coupling $A_t$ (right panel). Interestingly, for values of $|\lambda|>0.8$ it often happens that the tree-level light Higgs mass exceeds 125.5 GeV by a large amount, in which case another large but negative stop contribution, which generates a large FT, is required for viability. We also notice that the region of small $\left|A_t\right|$ is viable exclusively for values of $|\lambda|>0.8$, therefore opening up a region inaccessible to MSSM.}
\label{lambdaFT3}
\end{figure}
In the next Section we define the couplings relevant for light Higgs physics at LHC in terms of a set of coupling coefficients and SM-like couplings, and introduce an additional coupling coefficient of the heavy Higgses necessary to rescale the direct search constraint on the mass of a heavy SM Higgs. Equipped with these tools we then perform a goodness of fit analysis using the current experimental data.
\section{Higgs Physics at LHC}\label{Hphy}
Among the light Higgs production and decay channels, the only processes for which the non-SM particles become relevant are the gluon-gluon fusion and the decay to $\gamma\gamma$. The total contribution of non-SM particles to these loop-induced processes can be simply accounted for in the effective Lagrangian by adding a coloured and a charged scalar, respectively labeled $\Sigma$ and $S$, with masses much larger than 125.5~GeV. The couplings of these scalars and of the SM particles to the light Higgs can be expressed by rescaling the corresponding SM-like coupling by a coefficient. The light Higgs linear coupling terms that mimic the TESSM contributions to Higgs physics at LHC can therefore be written as\footnote{A similar parametrization of non-SM particles contributions to loop processes has been used in \cite{Antola:2013fka}.}
\bea
{\cal{L}}_{\textrm{eff}} &=& a_W\frac{2m_W^2}{v_w}hW^+_\mu W^{-\mu}+a_Z\frac{m_Z^2}{v_w}hZ_\mu Z^\mu
-\sum_{\psi=t,b,\tau}a_\psi\frac{m_\psi}{v_w}h\bar{\psi}\psi\nonumber \\
&&-a_\Sigma\frac{2m_\Sigma^2}{v_w}h\Sigma^* \Sigma-a_S\frac{2m_S^2}{v_w}hS^+ S^-.
\label{efflagr}
\eea
The experimental results are expressed in terms of the signal strengths, defined as
\be
\hat{\mu}_{ij}=\frac{\sigma_{\textrm{tot}}{\textrm{Br}}_{ij}}{\sigma_{\textrm{tot}}^{\textrm{SM}} \textrm{Br} ^{\textrm{SM}}_{ij}}\ ,\quad \sigma_{\rm tot}=\sum_{\Omega=h,qqh,\ldots}\!\epsilon_\Omega\sigma_{\Omega} \ ,
\label{LHCb}\ee
where $\textrm{Br}_{ij}$ is the light Higgs branching ratio into the $ij$ particles, $\sigma_\Omega$ the production cross section of the given final state $\Omega$, and $\epsilon_\Omega$ is the corresponding efficiency, which for inclusive searches is equal to 1. The production cross sections and decay rates for tree-level processes in TESSM are straightforwardly derived from Eqs.~(\ref{efflagr}, \ref{LHCb}) by rescaling the corresponding SM result with the squared coupling coefficient of the final particles being produced. For loop induced processes the calculation is more involved. By using the formulas given in \cite{Gunion:1989we} we can write\footnote{In the Eqs.~(\ref{hgamgam}, \ref{hgluglu}) we drop all the labels of $h$ given that these formulas apply generically to any SM-like Higgs particle.}
\be\label{hgamgam}
\Gamma_{h\rightarrow \gamma\gamma}= \frac{\alpha_e^2 m_{h}^3}{256 \pi^3 v_w^2}\left| \sum_i N_i e^2_i a_i F_{i} \right|^2,
\ee
where the index $i$ is summed over the SM charged particles plus $S^\pm$, $N_i$ is the number of colours, $e_i$ the electric charge in units of the electron charge, and the factors $F_{i}$ are defined by
\bea\label{dpam}
F_{W}&=&\left[2+3 \tau_{W}+3 \tau_{W}\left( 2-\tau_{W} \right) f(\tau_{W})\right]\,;\nonumber\\
F_{\psi}&=&-2 \tau_{\psi}\left[1+\left( 1-\tau_{\psi} \right) f(\tau_{\psi})\right]\ ,\quad \psi=t,b,\tau,c\ ;\nonumber\\
F_{S}&=&\tau_{S}\left[ 1-\tau_{S} f(\tau_{S})\right] ,\ \, \tau_{i}=\frac{4 m_i^2}{m_{h}^2}\,,
\eea
with
\bea
f(\tau_{i})=\left\{
\begin{array}{ll} \displaystyle
\arcsin^2\sqrt{1/\tau_{i}} & \tau_{i}\geq 1 \\
\displaystyle -\frac{1}{4}\left[ \log\frac{1+\sqrt{1-\tau_{i}}}
{1-\sqrt{1-\tau_{i}}}-i\pi \right]^2 \hspace{0.5cm} & \tau_{i}<1
\end{array} \right. .
\label{eq:ftau}
\eea
In the limit of heavy $S^\pm$, one finds
\be
F_S=-\frac{1}{3}\ .
\label{fwps}\ee
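As a concrete illustration, the loop functions above are straightforward to evaluate numerically; the following minimal Python sketch (illustrative only, not the analysis code used in this work) implements $f(\tau)$ and the factors $F_W$, $F_\psi$ and $F_S$, and verifies the heavy-particle limits $F_W\to7$, $F_\psi\to-4/3$ and $F_S\to-1/3$:
\begin{verbatim}
import numpy as np

def f_tau(tau):
    # Loop function f(tau) defined above; real branch for tau >= 1,
    # complex branch otherwise.
    if tau >= 1.0:
        return np.arcsin(np.sqrt(1.0 / tau))**2
    x = np.sqrt(1.0 - tau)
    return -0.25 * (np.log((1.0 + x) / (1.0 - x)) - 1j * np.pi)**2

def F_W(tau):    # spin-1 (W boson) factor
    return 2.0 + 3.0 * tau + 3.0 * tau * (2.0 - tau) * f_tau(tau)

def F_psi(tau):  # spin-1/2 (fermion) factor
    return -2.0 * tau * (1.0 + (1.0 - tau) * f_tau(tau))

def F_S(tau):    # charged-scalar factor
    return tau * (1.0 - tau * f_tau(tau))

# Heavy-particle limits: F_W -> 7, F_psi -> -4/3, F_S -> -1/3
for tau in (10.0, 100.0, 10000.0):
    print(tau, F_W(tau), F_psi(tau), F_S(tau))
\end{verbatim}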
We account for the contribution to Higgs decays to diphoton of the charged non-SM particles in TESSM by defining
\be\label{aSd}
a_S \equiv -3 \left[ \sum^3_{i=1} \left(F_{h_i^\pm}+F_{\chi_i^\pm}\right)+\sum^2_{j=1} \left(\frac{4}{3} F_{\tilde{t}_j}+\frac{1}{3} F_{\tilde{b}_j}\right)\right]\ ,
\ee
where the functions $F$ for scalars and fermions are given by Eqs.~\eqref{dpam} after proper relabelling. Similarly to the two photon decay, the light Higgs decay rate to two gluons is given by
\be\label{hgluglu}
\Gamma_{h\rightarrow g g}= \frac{\alpha_s^2 m_{h}^3}{128 \pi^3 v_w^2}\left| \sum_i a_iF_{i} \right|^2\ ,\quad i=t,b,c,\Sigma\ ,
\ee
where the functions $F$ are given by Eqs.~\eqref{dpam} with proper relabelling. An overall factor accounting for the next to leading order QCD contributions \cite{Djouadi:2005gi} is independent of the coupling coefficients in Eq.~\eqref{efflagr}, and so it cancels out in the corresponding ratio of branching ratios in Eq.~\eqref{LHCb}. Similarly to the coupling coefficient $a_S$, to account for the contribution of non-SM particles of TESSM to the light Higgs decay into two gluons, we define $a_\Sigma$ as
\be\label{aSigmad}
a_\Sigma \equiv -3 \sum^2_{j=1} \left(F_{\tilde{t}_j}+F_{\tilde{b}_j}\right)\ .
\ee
To rescale the lower limit on the mass of the heavy neutral Higgs we also calculate $a^\prime_g$, the ratio of the TESSM decay rate of $h^0_2$ to a gluon pair to that of a SM-like Higgs of mass $m_{h^0_2}$,
\be
a^\prime_g\equiv\frac{\Gamma_{h^0_2\rightarrow g g}}{\Gamma^{SM}_{h\rightarrow g g}}\ .
\ee
This is still determined by Eqs.~(\ref{hgluglu}, \ref{aSigmad}), evaluated for the coupling coefficients and mass of $h^0_2$, rather than $h_1^0$, and then divided by the corresponding SM result. The most stringent limit on the mass of a heavy SM-like Higgs, $m_{h^0}>770$ GeV, comes from gluon-gluon fusion Higgs production, subsequently decaying to $ZZ$ \cite{CMS:2013ada}. Assuming $h^0_2$ to decay on-shell and to be much heavier than twice the $W$ mass, the production rate by gluon-gluon fusion scales like the inverse of the Higgs squared mass, with a branching ratio to vector bosons greater than 0.8 for a SM-like Higgs \cite{Djouadi:2005gi}. Making the further simplifying assumption that the corresponding branching ratio of $h_2^0$ is equal to unity, which makes the constraint more stringent, we impose
\be\label{Hconst}
a^\prime_g \frac{\left(770\ {\rm GeV}\right)^2}{m^2_{h^0_2}}<0.8\ .
\ee
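For orientation, for a fully SM-like effective gluon coupling, $a^\prime_g=1$, Eq.~\eqref{Hconst} requires $m_{h^0_2}>770/\sqrt{0.8}~{\rm GeV}\approx 861$ GeV, while a suppressed $a^\prime_g$ relaxes the bound accordingly. A minimal sketch (with purely illustrative input values):
\begin{verbatim}
def passes_heavy_higgs_bound(a_g_prime, m_h2):
    # Rescaled direct-search bound of Eq. (Hconst); m_h2 in GeV.
    return a_g_prime * (770.0 / m_h2)**2 < 0.8

print(passes_heavy_higgs_bound(1.0, 860.0))  # False: just below the bound
print(passes_heavy_higgs_bound(1.0, 870.0))  # True: bound satisfied
\end{verbatim}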
We evaluate Eq.~\eqref{Hconst} for each viable data point, and find it to hold for 10957 out of the 11244 viable data points that already satisfy perturbativity constraints. At each of these remaining viable points we then evaluate Eq.~\eqref{hgamgam}. In doing so we make sure that the fermion mass parameter of each mass eigenstate appears with a negative sign in the Lagrangian, given that this is the convention we use in deriving Eq.~\eqref{hgamgam} \cite{Gunion:1989we}; if that is not the case, we apply a phase rotation to the corresponding fermionic mass eigenstate to flip the sign of its mass operator. In Fig.~\ref{H2gamma1} we show the value of the Higgs decay rate to diphoton for TESSM relative to the SM one, as a function of $\text{sign}\left(\mu_D\right)\times M_2$, the soft wino mass parameter times the sign of the superpotential doublet mass parameter. The colour code, given in Fig.~\ref{lambdaFT2}, shows the $\left|\lambda\right|$ value corresponding to each plotted data point. Possible experimental evidence for a suppression or enhancement of the SM Higgs decay rate to diphoton would point decisively, within TESSM, to an opposite or same sign of $M_2$ relative to $\mu_D$, respectively, together with likely large values of $\lambda$, depending on how large the deviation from the SM prediction is. These two mass parameters contribute to the lightest chargino mass, on which the Higgs decay rate to diphoton depends strongly.
\begin{figure}[htb]
\includegraphics[width=0.46\textwidth]{M2H2gamma.png}\hspace{1cm}
\includegraphics[width=0.46\textwidth]{mchH2gamma.png}\hspace{0.1cm}
\caption{Higgs decay rate to diphoton of the TESSM relative to the SM as a function, respectively, of ${\rm sign} (\mu_D)\times M_2$ (left panel) and of the lightest chargino mass $m_{\chi_1^\pm}$ (right panel). For opposite signs of $M_2$ and $\mu_D$, most of the viable points feature a suppression of the Higgs decay rate to diphoton as compared to the SM rate. The suppression or enhancement of the decay rate increases with decreasing $m_{\chi_1^\pm}$.}
\label{H2gamma1}
\end{figure}
As Fig.~\ref{H2gamma1} (right panel) shows, a small mass for the lightest chargino produces a sizable contribution to the decay rate to two photons, as expected, but this contribution can be either constructive or destructive with respect to the one from the $W$ boson: the latter result seems to be in disagreement with results that appeared in previous works on the same triplet extension of MSSM that we study here \cite{DiChiara:2008rg,Delgado:2012sm,Delgado:2013zfa,Arina:2014xya}.
It turns out that the constructive interference is a result of the choice to scan only a specific region of parameter space (positive fermion mass parameters and $\lambda$ coupling), for which the mass term of the mostly triplino-like chargino and the coupling to the light Higgs, unlike for the top quark, have opposite signs. By way of comparison with \cite{DiChiara:2008rg,Delgado:2012sm,Delgado:2013zfa,Arina:2014xya} we scan the parameter space again for viable points within the region defined below
\bea\label{ps}
&&1\leq t_{\beta }\leq 10\ ,\ 0\leq \lambda \leq 1\ ,\ 5 \,\text{GeV}\leq \mu _D,\mu _T\leq 250\,\text{GeV}\ ,\ 50 \,\text{GeV}\leq M_1,M_2\leq 300 \,\text{GeV}\ ,\nonumber\\
&& A_t=A_T=B_T=0\, ,\, 0\,\leq B_D\leq 2 \,\text{TeV}\ ,\ 500 \,\text{GeV}\leq m_Q,m_{\tilde{t}},m_{\tilde{b}}\leq 2 \,\text{TeV}
\eea
which roughly corresponds to (and exceeds) the region scanned in \cite{Arina:2014xya}, and apply again the perturbativity constraints (no coupling larger than $2\pi$ at $\Lambda_{\rm UV}$) as well as the lower bound on $m_{h^0_2}$, Eq.~\eqref{Hconst}. One key difference from previous calculations is that, among the non-SM particles, we include in the decay rate to diphoton the contributions of all the third generation SM and non-SM charged particles, without making any assumption on the coupling coefficients or masses of these particles. The result of the scan of this region of the TESSM parameter space is shown in Fig.~\ref{H2gammachk}. It is clearly consistent with previous results, as it shows that in this region of the parameter space only an enhancement, which becomes comparatively large for large positive values of $\lambda$, is possible.
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{tanbH2gammachk.png}\hspace{0.1cm}
\caption{Higgs decay rate to diphoton in the TESSM relative to the SM as a function of $\tan\beta$ for viable data points scanned only in the positive region of the mass parameters and of the couplings, with a generally small light chargino mass: in this region only an enhancement of the SM decay rate is observed.}
\label{H2gammachk}
\end{figure}
In the next Section we calculate a low-energy flavour observable, $\mathcal{B}r(B_s\to X_s \gamma)$, which provides a strong constraint on the absolute size of $\lambda$.
\section{$\mathcal{B}r(B_s\to X_s \gamma)$ in TESSM}\label{btsgsec}
Besides the constraints obtained from Higgs decay channels, low-energy observables also provide stringent constraints on the parameter space of new physics beyond the SM. In particular, the parameter space of MSSM-like models with minimal or general flavour mixings in the sfermion sector has been investigated in great detail with the help of $B$-physics observables \cite{Bphysics}. Recently, it has been pointed out in Ref.~\cite{Btomumu} that the branching ratio of the flavour changing decay $B_s\rightarrow X_s \gamma$ plays a very important role in constraining the viable parameter space of MSSM, especially for low $\tan\beta$, whereas the flavour bounds obtained from the branching ratio of $B_s\rightarrow \mu^+ \mu^-$ become relevant only for large values of $\tan\beta$ ($\mathrel{\hbox{\rlap{\hbox{\lower5pt\hbox{$\sim$}}}\hbox{$>$}}} 10$). Since we limit our phenomenological study of TESSM to the low $\tan\beta$ region, it is sufficient to consider only the $B_s\rightarrow X_s \gamma$ decay for the rest of the analysis.
For any model, the branching ratio of $B_s\to X_s\gamma$ can be calculated via the effective Hamiltonian approach described by the generic structure
\be
\mathcal{H}_{eff}= \frac{G_F}{\sqrt{2}}V^*_{ts}V_{tb}\sum_i C_{i}(\mu_r)Q_{i}(\mu_r)\ ,
\label{effham}
\ee
where $V_{ij}$ are the entries of the CKM matrix, $C_{i}$ the Wilson coefficients, $\mu_r$ the renormalization scale, and $Q_i$ the relevant dimension 6 local operators. Here the Wilson coefficients can be written in the following form
\bea\label{wilson}
C_i (\mu_r)&=& C_i^{(0){\rm SM} } (\mu_r)+C_i^{(0)h_i^{\pm}} (\mu_r)+C_i^{(0){\rm SUSY}} (\mu_r)\nonumber\\
&+&\frac{\alpha_s(\mu_r)}{4 \pi}\left( C_i^{(1){\rm SM}} (\mu_r)+C_i^{(1)h_i^{\pm}} (\mu_r)+C_i^{(1){\rm SUSY}} (\mu_r)\right), \nonumber
\eea
where $C_i^{(0)}$ stands for the leading order (LO) corrections to the Wilson coefficients while $C_i^{(1)}$ represents the next to leading order (NLO) effects. In particular, for $C_{i}^{(0){\rm SUSY}}$ we only consider the corrections from one-loop chargino diagrams, in $C_i^{(1){\rm SUSY}}$ we include the two-loop contributions of the three charginos and the gluino \cite{NLOSUSYbsg}, while those of the three charged Higgses are given by $C_i^{(1)h_i^{\pm}}$.
Similarly, the leading and next to leading order contributions from the SM at the $M_W$ scale can be obtained from \cite{LOSMCH}. For the charged Higgs contributions, Ref.~\cite{LOSMCH} can be used as a starting point, where one needs to replace the charged Higgs-quark couplings of the MSSM with the ones in TESSM: given that the latter possesses three physical charged Higgses, their contributions are summed over. After the total contribution at the $M_W$ scale is obtained, Ref.~\cite{wilsonrun} can be used as a guideline to calculate the Wilson coefficients at the desired scale $\mu_r$. Here we emphasize that even though there is a greater number of particles that contribute to $B_s\rightarrow X_s \gamma$, it is still possible to get some suppression in the corresponding branching ratio, compared with the MSSM one, because of the lack of triplet couplings to the SM fermions.
In other words, the physical charged Higgses and charginos with triplet components give a suppressed contribution, as compared to their MSSM counterparts, to the rare $B$ decays to $ X_s\gamma$.
For the numerical analysis we calculate, at the next to leading order (NLO) and within TESSM, the values of $\mathcal{B}r(B_s\to X_s\gamma)$ corresponding to each of the 10957 viable data points, featuring perturbativity up to $\Lambda_{\rm UV}=10^4$ TeV, defined in Sections~\ref{finetuningTESSM}, \ref{Hphy}.
In Fig.~\ref{bsgmu} we plot $\mathcal{B}r(B_s\to X_s\gamma)$ as a function of $\mu_D$, and we use the colour code defined in Fig.~\ref{lambdaFT2} to represent different values of $\lambda$.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{muvsbsg_lastNLO.png}
\caption{Values of $\mathcal{B}r(B_s\to X_s\gamma$) associated to each viable data point as a function of $\mu_D$, where the NLO SUSY effects are taken into account. The yellow band shows the viable region at the $2\sigma$ CL around the experimental value of $\mathcal{B}r(B_s\to X_s\gamma)$.}\label{bsgmu}
\end{figure}
For small $|\mu_D|$, the contribution coming from the chargino with a mass mostly proportional to $\mu_D$ is non-negligible and, depending on the sign of $A_t$, this contribution increases or diminishes the total contribution to the $B_s\to X_s\gamma$ branching ratio. We observe that for values of $|\mu_D|\mathrel{\mathpalette\@versim>}$ 1 TeV a majority of data points fall within $\pm 2 \sigma$ of the experimental value, $\mathcal{B}r(B_s\to X_s\gamma)_{exp}=\left(3.55\pm0.24\pm0.09\right)\times 10^{-4}$, with $|\lambda|$ being generally small. It is relevant to point out that at LO $\mathcal{B}r(B_s\to X_s\gamma)$ is symmetric with respect to the sign of $\mu_D$, while at NLO there is a clear preference for the positive sign of $\mu_D$.
In order to understand this low $\lambda$ preference we investigate the effect of the mass as well as the structure of the lightest charged Higgs on $\mathcal{B}r(B_s\to X_s\gamma)$. A large majority (93\%) of the viable data points with small $|\lambda|$ ($|\lambda|\leq 0.6$) features a larger triplet than doublet component of the lightest charged Higgs mass eigenstate. This in turn produces a suppression of the $B_s\rightarrow X_s \gamma$ branching ratio, given that the triplet field gives no contribution at NLO to the $B_s\rightarrow X_s \gamma$ decay because it lacks direct couplings to quarks. For large $\lambda$ values, the $B_s\to X_s\gamma$ branching ratio falls within $2\sigma$ of the experimental value only for $m_{h_1^{\pm}}\mathrel{\mathpalette\@versim>} 700$ GeV, since the negative contribution of $h^\pm_1$ to the branching ratio becomes smaller in absolute value as $m_{h_1^{\pm}}$ increases.
Next we illustrate the $\tan \beta$ dependence of $\mathcal{B}r(B_s\to X_s\gamma)$, plotted in Fig.~\ref{bsgtan}. For values of $\tan\beta$ close to 10, corresponding to small values of $\lambda$, about half of the data points feature a $\mathcal{B}r(B_s\to X_s\gamma)$ prediction within $\pm2\sigma$ of the experimental value, while the other half generates a suppressed branching ratio.
\begin{figure}
\centering
\includegraphics[width=0.46\textwidth]{tanbvsbsg_lastNLO.png}
\caption{The values of $\mathcal{B}r(B_s\to X_s\gamma$) for the allowed data points as a function of $\tan\beta$. The yellow band represents the viable region at $2\sigma$ CL around the experimental value of $\mathcal{B}r(B_s\to X_s\gamma)$.}\label{bsgtan}
\end{figure}
For low $\tan\beta$ values, corresponding to large $\lambda$, the $\mathcal{B}r(B_s\to X_s\gamma)$ values associated with the viable data points sit mostly below the lower $2\sigma$ bound, and for no point does the prediction actually match the experimental value. It seems that the very large $\lambda$ values favoured by FT, as discussed in Section~\ref{finetuningTESSM}, are severely constrained by the $B_s\to X_s\gamma$ branching ratio. This clear preference of the experiment for smaller values of $|\lambda|$ is unwelcome, given that, as shown in Section~\ref{finetuningTESSM}, values of $|\lambda|$ close to 1 can greatly reduce the amount of FT. On the other hand there are other observables, like the Higgs decay rate to diphoton, which prefer large values of $|\lambda|$, and can therefore tip the balance in favour of low FT. In the next Section we perform a goodness of fit analysis on the Higgs physics observables detailed in Section~\ref{Hphy} as well as $\mathcal{B}r(B_s\to X_s\gamma)$.
\section{Goodness of Fit to LHC Data}
\label{fitsect}
To determine the experimentally favoured values of the free parameters $a_W,a_Z,a_u,a_d,a_S,a_\Sigma$, we minimize the quantity
\be
\chi^2=\sum_i\left(\frac{{\cal O}_i^{\textrm{exp}}-{\cal O}_i^{\textrm{th}}}{\sigma_i^{\textrm{exp}}}\right)^2,
\label{chi2}\ee
where the $\sigma_i^{\textrm{exp}}$ represent the experimental uncertainties, while the observables ${\cal O}_i^{\textrm{exp}}$ correspond to the signal strengths, defined by Eq.~\eqref{LHCb}, for Higgs decays to $ZZ$, $W^+W^-$, $\tau^+\tau^-$, $b\bar{b}$, as well as all the topologies of decays to $\gamma\gamma$, respectively measured by ATLAS \cite{ATLAS:2013nma,ATLAS:2013wla,ATLAS:2012dsy,ATLAS:2012aha,ATLAS:2013oma} and CMS \cite{CMS:xwa,CMS:bxa,CMS:utj,CMS:2014ega}, and by Tevatron for decays to $W^+W^-$ and $b\bar{b}$ \cite{Aaltonen:2013kxa}. Because of the smallness of the triplet vev, $v_T$, and the relatively large mass of the lightest neutral Higgs, the values of $a_Z$ and $a_W$ for the viable data points are very close to one ($\sim 0.997$). We therefore set $a_W=a_Z=1$ in the $\chi^2$ function defined in Eq.~\eqref{chi2}. Moreover, given that $a_u$ and $a_d$ are correlated through $\tan\beta$, in the minimization of $\chi^2$ with free coupling coefficients we also set $a_u=a_d=a_f$. The free coupling coefficients $a_f$, $a_S$, and $a_\Sigma$ produce a minimum of $\chi^2$ defined by
\bea\label{optimusprime}
\chi^2_{min}/d.o.f.&=&0.98\, ,\ d.o.f.=55\,,\ p\left(\chi^2>\chi^2_{min}\right)=51\%\, ,\nonumber\\
\hat{a}_f&=&1.03\,,\ \hat{a}_S=-2.30\,,\ \hat{a}_\Sigma=-0.04\ .
\eea
By way of comparison, we determine the corresponding results for the SM, which has no free parameters:
\be
\chi^2_{min}/d.o.f.=0.96\, ,\ d.o.f.=58\,,\ p\left(\chi^2>\chi^2_{min}\right)=56\%\, .
\ee
One can define an approximate expression for $\chi^2$ around its minimum by assuming that the deviations of the free coupling coefficients from their optimal values (denoted by a hat in Eq.~\eqref{optimusprime} and below) are small compared with their respective uncertainties \cite{Giardino:2013bma}:
\be
\Delta \chi^2=\chi^2-\chi^2_{min}=\delta^T \rho^{-1} \delta\,,\ \delta^T=\left( \frac{a_f-\hat{a}_f}{\sigma_f}, \frac{a_S-\hat{a}_S}{\sigma_S}, \frac{a_\Sigma-\hat{a}_\Sigma}{\sigma_\Sigma} \right)\ ,
\ee
with
\be
\sigma_f=0.165\,,\ \sigma_S=2.79\,,\ \sigma_\Sigma=0.431\,,\ \rho=
\left(\begin{array}{ccc}1 & -0.6 & -0.685 \\-0.6 & 1 & 0.785 \\ -0.685 & 0.785 & 1\end{array}\right)\ ,
\ee
where the uncertainties are explicitly defined to correspond to $\Delta\chi^2=1$.
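For reference, this quadratic approximation can be evaluated directly from the quoted uncertainties and correlation matrix; a minimal sketch (the test point is arbitrary and purely illustrative):
\begin{verbatim}
import numpy as np

a_hat = np.array([1.03, -2.30, -0.04])   # optimal (a_f, a_S, a_Sigma)
sigma = np.array([0.165, 2.79, 0.431])
rho = np.array([[ 1.0,   -0.6,  -0.685],
                [-0.6,    1.0,   0.785],
                [-0.685,  0.785, 1.0  ]])
rho_inv = np.linalg.inv(rho)

def delta_chi2(a):
    # Quadratic approximation: Delta chi^2 = delta^T rho^-1 delta
    d = (np.asarray(a) - a_hat) / sigma
    return d @ rho_inv @ d

# Arbitrary test point, purely for illustration:
print(delta_chi2([1.0, -2.0, 0.0]))
\end{verbatim}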
In calculating $\chi^2$ for the TESSM viable data points we include also the $\mathcal{B}r(B_s\to X_s \gamma)$ observable. Assuming a total of four free parameters ($a_f,a_S,a_\Sigma$, plus one more to fit $\mathcal{B}r(B_s\to X_s \gamma)$), the viable data point featuring minimum $\chi^2$ has
\be
\chi^2_{min}/d.o.f.=1.01\, ,\ d.o.f.=55\,,\ p\left(\chi^2>\chi^2_{min}\right)=46\%\ .
\ee
This result should be compared with the SM one for the same set of observables:
\be
\chi^2_{min}/d.o.f.=0.99\, ,\ d.o.f.=59\,,\ p\left(\chi^2>\chi^2_{min}\right)=50\%\, .
\ee
We notice that the goodness of fit of TESSM is comparable to, although smaller than, that of the SM. It is important, however, to realize that the quoted $p$ values are only indicative of the viability of TESSM and SM relative to one another, given that the chosen set of observables, besides $\mathcal{B}r(B_s\to X_s \gamma)$, tests only the linear Higgs sector of the Lagrangian. In Figs.~\ref{aSafaSigmaaf} we plot the 68\%, 95\%, 99\% CL viable regions (respectively in green, blue, and yellow) on the planes $a_S-a_f$ and $a_\Sigma-a_f$, each intersecting the optimal point (blue star) defined in Eq.~\eqref{optimusprime}. On the same respective planes we also plot the coupling coefficient values corresponding to each viable data point, determined numerically from the Lagrangian without any approximation, for which we plot together the values of $a_u$ (grey dots) and $a_d$ (black dots) along the $a_f$ dimension. While $a_\Sigma$, and even more so $a_u$, seem to be underconstrained by the current data, about half of the scanned data points stretch outside the 68\% CL region along the $a_S$ direction, and a few $a_d$ values lie outside the 99\% CL region.
\begin{figure}[htb]
\includegraphics[width=0.46\textwidth]{aSaf.png}\hspace{1cm}
\includegraphics[width=0.46\textwidth]{aSigmaaf.png}\hspace{0.1cm}
\caption{Viable regions at the 68\%, 95\%, 99\% CL in the coupling coefficients $a_S,a_f$ (left panel) and $a_\Sigma,a_f$ (right panel) planes passing through the optimal point (blue star), together with the values of $a_u$ (grey) and $a_d$ (black) associated with each viable point.}
\label{aSafaSigmaaf}
\end{figure}
In Fig.~\ref{aSaSigma} we plot the 68\%, 95\%, 99\% CL viable regions (respectively in green, blue, and yellow) on the plane $a_S-a_\Sigma$ intersecting the optimal point (blue star) defined in Eq.~\eqref{optimusprime}, together with the corresponding coupling coefficient values for each viable data point (black). No viable data point matches the optimal values, as the bulk of the data points deviates from them by about $1\sigma$ along the $a_S$ axis. While $a_S$ seems to be still underconstrained, we can expect the viable regions to shrink considerably with the next run of the LHC at 14 TeV, in which case the constraint on $a_S$ might become relevant if the optimal values do not change considerably.
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{aSaSigma.png}\hspace{0.1cm}
\caption{Viable regions at the 68\%, 95\%, 99\% CL in the coupling coefficients $a_S,a_\Sigma$ plane passing through the optimal point (blue star), together with the corresponding value (black) associated with each viable point.}
\label{aSaSigma}
\end{figure}
Finally, in Fig.~\ref{chi2FT} we plot the FT for each data point, with the colour code for the absolute value of $\lambda$ defined in Fig.~\ref{lambdaFT2}, as a function of its $\chi^2$ value, which includes the contribution of $\mathcal{B}r(B_s\to X_s \gamma$) as defined in Eq.~\eqref{chi2}. As we can see from Fig.~\ref{bsgtan}, small $|\lambda|$ values are more likely to satisfy the $\mathcal{B}r(B_s\to X_s \gamma)$ experimental bound. It is important to notice that large absolute values of $\lambda$ are not able to improve the fit to the current Higgs physics data enough to compensate for the poor fit to $\mathcal{B}r(B_s\to X_s \gamma$). The situation, though, has already changed considerably with the latest CMS data \cite{CMS:2014ega}, which have increased the significance of the enhancement of the Higgs decay to diphoton, favouring large $|\lambda|$ values. In a scenario in which both ATLAS and CMS confirm this enhancement with smaller uncertainty in the next LHC run, TESSM would achieve a goodness of fit comparable to that of MSSM, with possibly a considerably smaller amount of FT.
\begin{figure}[htb]
\centering
\includegraphics[width=0.46\textwidth]{chi2FT.png}\hspace{0.1cm}
\includegraphics[width=0.12\textwidth]{lambdacscale.png}\hspace{0.1cm}
\caption{FT as a function of $\chi^2$, with the colour code associated with the absolute value of $\lambda$. Mostly because of the deviation of the TESSM prediction for $\mathcal{B}r(B_s\to X_s \gamma)$ from the measured value, the goodness of fit worsens for points featuring large values of $\lambda$, which are also those that can generally achieve the smallest FT values.}
\label{chi2FT}
\end{figure}
\section{Conclusions}\label{concsec}
In this article we studied the phenomenology of the Triplet Extended Minimal Supersymmetric Standard Model, or TESSM, by first working out the neutral scalar masses at one loop using the Coleman-Weinberg potential and evaluating numerically the derivatives with respect to the neutral scalar fields. We performed a scan of the parameter space and found around 13000 points that satisfy direct search constraints besides producing the observed SM mass spectrum. Among these data points, we have shown that for large absolute values of the triplet coupling $\lambda$ it is possible to reach a smaller value of Fine-Tuning (FT) than for MSSM. Moreover, for large values of $|\lambda|$ it is possible to access regions of small $\tan\beta$ and/or small cubic stop coupling $A_t$, which are not accessible within MSSM with stop masses at the TeV scale.
To check that the couplings remain perturbative at the given UV scale, which we chose to be equal to $10^4$ TeV, the highest scale tested through flavour observables, we calculated the full two-loop beta functions and required all the dimensionless couplings to be smaller than $2\pi$: some of the points which would be non-perturbative at one loop order indeed feature perturbativity at two loop order.
To determine the phenomenological viability of TESSM we performed a goodness of fit analysis by comparing the TESSM predictions with 59 observables, comprising the $B_s\to X_s \gamma$ branching ratio, which we calculated at the next to leading order, as well as the light Higgs decays to $WW$, $ZZ$, $\tau\tau$, $b\bar{b}$, and all the topologies of $\gamma\gamma$, with experimental data from ATLAS, CMS, and Tevatron. A new result we obtained is the possibility of a suppression of the Higgs decay to diphoton, generated mostly for values of $M_2$, the wino soft mass, with sign opposite to that of $\mu_D$, the superpotential mass of the two Higgs doublets.
For large absolute values of $\lambda$ TESSM generates a large suppression or enhancement of the loop induced Higgs decay rate to diphoton. We find though that for large $|\lambda|$, or equivalently small $\tan\beta$, the values of $\mathcal{B}r(B_s\to X_s \gamma$) are always suppressed, with a deviation from the experimental value beyond $2\sigma$ for about half of the viable data points. The $\mathcal{B}r(B_s\to X_s \gamma$) values for small $|\lambda|$ instead feature both suppression and enhancement as compared with the measured value, with about half of the viable data points deviating less than $2\sigma$ from the experimental value. As a consequence, the goodness of fit of the 59 observables generally improves for smaller values of $|\lambda|$, for which the role of the triplet fields becomes less relevant in increasing the light Higgs mass and enhancing or suppressing the light Higgs decay to diphoton. The situation, though, has already changed considerably with the latest CMS data \cite{CMS:2014ega}, which has increased the significance of the enhancement of the Higgs decay to diphoton, favouring large $|\lambda|$ values. It is expected that the coming run of LHC will help the experiments to improve the accuracy of the Higgs branching ratios measurements. If the excess in the diphoton channel remains the same, the goodness of fit for TESSM would become comparable to that for MSSM, with FT in TESSM likely much smaller than in MSSM.
\section*{Acknowledgements}
The authors kindly thank Victor Martin-Lozano for useful discussions and for checking independently that a suppression of the Higgs decay rate indeed arises for suitable values of the free parameters. The authors acknowledge support from the Academy of Finland (Project Nro 137960). The work of ASK is also supported by the Finnish Cultural Foundation.
\label{sec:introduction}
The development of efficient computational techniques and the growing
availability of computing power over the past three decades have made it
possible to simulate the evolution of representative cosmological
volumes at high enough resolution to follow the formation of cosmic
structures over many orders of magnitude in mass.
One of the best established and most robust results from this
programme is the characterization of the density structure of dark
matter (DM) halos in equilibrium whose spherically averaged density
profile, $\rho(r)$, is nearly universal in shape and obeys simple
scaling relations \citep{Navarro1996a,Navarro1997}. The functional
form of this ``NFW'' radial profile is independent of mass, formation
redshift, and cosmological parameters and has the form:
\begin{equation}
\frac{\rho(r)}{\rhocr} = \frac{\delta_{\rm{c}}}{\left(r/r_{\rm{s}}\right)\left(1+r/r_{\rm{s}}\right)^2},
\label{eq:nfw}
\end{equation}
where $\rhocr$ is the critical density of the Universe,
$\delta_{\rm{c}}$ a characteristic density and $r_{\rm{s}}$
a characteristic radius. \cite{Navarro1997} showed that these two scale
parameters are strongly correlated and that the characteristic density
is proportional to the density of the universe at the time when the
halo was assembled. This proportionality constant or, equivalently,
the proportionality constant between halo mass and concentration has
been studied by many authors \citep[e.g.][]{AvilaReese1999, Jing2000,
Bullock2001, Eke2001, Zhao2003,Neto2007, Duffy2008, Gao2008, Navarro2010,
Ludlow2014, Dutton2014}. The validity of the model is well
established and a physical understanding of the universality of the
profile is beginning to emerge \citep{Ludlow2013,Correa2014,Correa2015}.
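For illustration, the profile of Eqn.~\ref{eq:nfw} and the mass it encloses (obtained by integrating the profile analytically) can be evaluated as in the following minimal Python sketch (the values of $\delta_{\rm{c}}$ and $r_{\rm{s}}$ are hypothetical):
\begin{verbatim}
import numpy as np

RHO_CRIT = 127.5  # critical density at z=0, in Msun / kpc^3

def nfw_density(r, delta_c, r_s):
    # Spherically averaged NFW density profile
    x = r / r_s
    return RHO_CRIT * delta_c / (x * (1.0 + x)**2)

def nfw_enclosed_mass(r, delta_c, r_s):
    # Mass within radius r, from the analytic integral of the profile
    x = r / r_s
    return (4.0 * np.pi * RHO_CRIT * delta_c * r_s**3
            * (np.log(1.0 + x) - x / (1.0 + x)))

# Hypothetical halo with r_s = 30 kpc and delta_c = 1e4:
for r in (10.0, 30.0, 100.0, 300.0):
    print(r, nfw_density(r, 1.0e4, 30.0),
          nfw_enclosed_mass(r, 1.0e4, 30.0))
\end{verbatim}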
The nearly scale-free behaviour induced by gravity applies only to
halos made entirely of DM. In practice, halos of mass above $\sim 10^9
~\msun$ participate in the process of galaxy formation. The cooling
and dissipation of gas in these halos introduces a characteristic
scale that breaks self-similarity \citep{White1978, WhiteFrenk1991}
and the subsequent formation of stars can deepen the potential well and
modify the structure of the halo in this region.
One of the early models of the effects of baryon collapse on the
structure of a halo, making use of adiabatic invariants, concluded
that halos would become denser in their centres
\citep{Blumenthal1986}. These simple models, however, were later
shown not to match hydrodynamic simulations and led to a more general
framework for calculating adiabatic contraction based on the average
radial distribution of particles \citep{Gnedin2004,Gustafsson2006}.
The parameters of this model, however, have been shown to depend on
halo mass, redshift and on the details of the hydrodynamic simulation,
making analytical descriptions of adiabatic contraction complex and
uncertain \citep{Duffy2010}.
Baryons, however, can also produce the opposite effect and induce
expansion rather than contraction of the halo. Using idealized
hydrodynamic simulations, \cite{Navarro1996b} showed that the rapid
expulsion of gas that had previously cooled to very high density near
the centre of a halo could generate a central core. Subsequent work
using cosmological simulations has confirmed and extended this result
\citep[e.g.][]{Read2005, Dehnen2005,
Mashchenko2006,Governato2010,PontzenGovernato2012,Teyssier2013,Martizzi2013}.
The structure of the inner halo is often used as a test of the
\lcdm paradigm \citep[e.g.][]{Sand2002,Gilmore2007}. Such tests,
however, tend to compare observations of halos which have galaxies
within them with results from simulations of pure dark matter halos
\citep{Newman2013a}. For the tests to be meaningful, accurate and
reliable calculations of how baryons affect the structure of the halos
are essential. Such calculations are also of major importance for
efforts to detect DM experimentally, either directly in the laboratory,
or indirectly through the products of particle decay or annihilation.
Simulating the evolution of the visible components of the universe is
a much more complex task than simulating the evolution of the DM
because baryons undergo a variety of astrophysical processes many of
which are relatively poorly understood. The resolution that is
attainable even with the largest computers today is insufficient for
an {\em ab initio} calculation of most of these processes which, as a
result, need to be treated through parametrized ``subgrid'' models
added to the coupled hydrodynamical and gravitational evolution
equations. These models describe the effects of radiative cooling,
star formation, feedback from energy liberated during the evolution of
stars and supermassive black holes growing at the centres of
galaxies. Simulations that include some or all of these processes have
shown that significant changes can be induced in the total masses of
halos \citep{Sawala2013, Sawala2014, Cusworth2013, Velliscig2014,
Vogelsberger2014} and in their inner structure
\citep[e.g.][]{Gnedin2004, Pedrosa2009, Duffy2010,
PontzenGovernato2012, Brook2012, DiCintio2014}.
In this paper we investigate how baryon physics modifies the structure
of DM halos in the Evolution and Assembly of Galaxies and their
Environment (\eagle) cosmological hydrodynamical simulations \citep{Schaye2014}. An
important advantage of these simulations is that they give a good
match to the stellar mass function and to the distribution of
galaxy sizes over a large range of stellar masses
($(10^{8} - 10^{11.5})~\msun$). Furthermore, the
relatively large volume of the reference \eagle simulation provides a
large statistical sample to derive the halo mass function in the mass
range $(10^9-10^{14})~\msun$ and to investigate the radial density
profiles of halos more massive than $10^{11}\msun$.
This paper is organised as follows. In Section~\ref{sec:simulation} we
introduce the simulations and describe the selection of halos. In
Section~\ref{sec:HaloCounts} we focus on the change in the mass of
halos induced by baryon processes and the effect this has on the
halo mass function. In Section~\ref{sec:HaloProfile} we analyse the
radial density profile of the halos and decompose them according to
their different constituents. We fit the total matter profile with a
universal formula that accounts for deviations from the NFW
profile and show that the best fit parameters of these fits correlate with
the mass of the halo. Our main results are summarized in
Section~\ref{sec:conclusion}. All our results are restricted to redshift $z=0$ and
all quantities are given in physical units (without factors of $h$).
\section{The simulations}
\label{sec:simulation}
The simulations analysed in this paper were run as part of a Virgo
Consortium project called the Evolution and Assembly of Galaxies and
their Environment (\eagle; \citealt{Schaye2014}). The \eagle project
consists of simulations of \lcdm cosmological volumes with sufficient
size and resolution to model the formation and evolution of galaxies
of a wide range of masses, and also include a counterpart set of dark
matter-only simulations of these volumes. The galaxy formation
simulations include the correct proportion of baryons and model gas
hydrodynamics and radiative cooling. State-of-the-art subgrid models
are used to follow star formation and feedback processes from both
stars and AGN. The parameters of the subgrid model have been
calibrated to match certain observables as detailed in
\cite{Schaye2014}. In particular, the simulations reproduce the
observed present-day stellar mass function, galaxy sizes and many
other properties of galaxies and the intergalactic medium remarkably
well. These simulations also show the correct trends with redshift of
many galaxy properties \citep{Schaye2014, Furlong2014}.
The simulations were run using an extensively modified version of the
code \gadget-3 \citep{Springel2008}, which is essentially a more
computationally efficient version of the public code \gadget-2
described in detail by \cite{Springel2005}. \gadget uses a Tree-PM
method to compute the gravitational forces between the \nbody
particles and implements the equations of hydrodynamics using Smooth
Particle Hydrodynamics \citep[SPH,][]{Monaghan1992, Price2010}.
The \eagle version of \gadget-3 uses an SPH implementation called
\anarchy (Dalla Vecchia in prep.), which is based on the general
formalism described by \cite{Hopkins2012}, with improvements to the
kernel functions \citep{Dehnen2012} and viscosity terms
\citep{Cullen2010}. This new implementation of SPH alleviates some of
the problems associated with modelling contact
discontinuities and fluid instabilities. As discussed by
Dalla Vecchia (in prep.), the new formalism improves on the treatment of
instabilities associated with cold flows and filaments and on the
evolution of the entropy of hot gas in halos. The timestep limiter
of \cite{Durier2012} is applied to ensure good energy conservation
everywhere, including regions disturbed by violent feedback due to
supernovae and AGN. The impact of this new hydrodynamics scheme on
our galaxy formation model is discussed by Schaller et al. (in prep.).
The analysis in this paper focusses on two simulations: the
Ref-L100N1504 simulation introduced by \cite{Schaye2014}, which is the
largest \eagle simulation run to date, and its counterpart dark
matter-only simulation, DM-L100N1504. To investigate smaller mass
halos and test for convergence in our results we also analyse the
higher resolution Recal-L025N0752 simulation (and its dark
matter-only counterpart) in which some of the sub-grid physics
parameters were adjusted to ensure that this calculation also
reproduces the observed galaxy stellar mass function, particularly at
the low-mass end, as discussed by \cite{Schaye2014}. We will refer to
the two simulations with baryon physics as ``\eagle'' simulations and
to the ones involving only dark matter as ``\dmonly'' simulations.
The main \eagle simulation models a cubic volume of side-length
$100~\rm{Mpc}$ with $1504^3$ gas and $1504^3$ dark matter particles to
redshift $z=0$. A detailed description of the initial conditions is
given in \cite{Schaye2014}. Briefly, the starting redshift was
$z=127$; the initial displacements and velocities were calculated
using second order Lagrangian perturbation theory with the method of
\cite{Jenkins2010}; the linear phases were taken from the public
multiscale Gaussian white noise field, Panphasia \citep{Jenkins2013};
the cosmological parameters were set to the best fit \lcdm values
given by the \emph{Planck-1} data \citep{Planck2013}:
$\left[\Omega_{\rm{m}}, \Omega_{\rm{b}},
\Omega_\Lambda,h,\sigma_8,n_s\right] = \left[0.307, 0.04825, 0.693,
0.6777, 0.8288, 0.9611\right]$; and the primordial mass fraction of
$\mathrm{He}$ was set to $0.248$. These choices lead to a dark matter
particle mass of $9.70\times10^6\msun$ and an initial gas particle
mass of $1.81\times10^6\msun$. We use a comoving softening of
$2.66~\rm{kpc}$ at early times, which freezes at a maximum physical
value of $700~\rm{pc}$ at $z=2.8$. The Recal-L025N0752 simulation
follows $752^3$ gas and $752^3$ DM particles in a $25~\rm{Mpc}$ volume
assuming the same cosmological parameters. This implies a DM particle
mass of $1.21\times10^6\msun$ and an initial gas mass of
$2.26\times10^5\msun$. The softening is $1.33~\rm{kpc}$ initially and
reaches a maximum physical size of $350~\rm{pc}$ at $z=0$.
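As a simple consistency check (a minimal sketch, illustrative only), the quoted particle masses follow directly from the cosmological parameters, the box size and the particle counts:
\begin{verbatim}
# Recover the quoted particle masses of the L100N1504 runs from the
# cosmological parameters; a consistency check, not simulation code.
Om, Ob = 0.307, 0.04825
rho_crit = 127.5                    # Msun / kpc^3 at z=0
L_kpc, N = 100.0e3, 1504            # box size and particles per dimension

M_total = Om * rho_crit * L_kpc**3  # total matter mass in the volume
m_gas = (Ob / Om) * M_total / N**3  # initial gas particle mass
m_dm  = (1.0 - Ob / Om) * M_total / N**3
m_dmo = M_total / N**3              # DMONLY particle mass

print(m_dm, m_gas, m_dmo)           # ~9.7e6, ~1.8e6, ~1.15e7 Msun
\end{verbatim}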
The \dmonly simulations, DM-L100N1504 and DM-L025N0752, follow
exactly the same volume as \eagle, but with only $1504^3$ and $752^3$
collisionless dark matter particles, each of mass
$1.15\times10^7\msun$ and $1.44\times10^6\msun$, respectively. All
other cosmological and numerical parameters are the same as in the
\eagle simulation.
\subsection{Baryonic physics}
\label{ssec:baryonic_physics}
The baryon physics in our simulation correspond to the \emph{Ref}
\eagle model. The model, fully described in \cite{Schaye2014}, is
summarized here for completeness.
Star formation is implemented following \cite{Schaye2008}. A
polytropic equation of state, $P\propto \rho^{4/3}$, sets a lower
limit to the gas pressure. The star formation rate per unit mass
of the gas particles is computed from the gas pressure using an
analytical formula designed to reproduce the observed
Kennicutt-Schmidt law \citep{Kennicutt1998} in disk galaxies \citep{Schaye2008}.
Gas
particles are converted into stars stochastically. The threshold in
hydrogen density required to form stars is metallicity dependent with
lower metallicity gas having a higher threshold, thus capturing the
metallicity dependence of the $\rm{HI}-\rm{H}_2$ phase transition
\citep{Schaye2004}.
The stellar initial mass function is assumed to be that of
\cite{Chabrier2003} in the range $0.1\msun$ to $100\msun$ with each
particle representing a single age stellar population. After
$3\times10^7~\rm{yrs}$ all stars with an initial mass above $6\msun$
are assumed to explode as supernovae. The energy from these explosions
is transferred as heat to the surrounding gas. The temperature of an
appropriate amount of surrounding gas is raised instantly by
$10^{7.5}~\rm{K}$. This heating is implemented stochastically on one
or more gas particles in the neighbourhood of the explosion site
\citep{DallaVecchia2012}. This gas, once heated, remains coupled in a
hydrodynamic sense with its SPH neighbours in the ISM, and therefore
exerts a form of feedback locally that can affect future star
formation and radiative cooling.
The energy injected into the gas corresponds to $10^{51}~\rm{erg}$ per
supernovae times a dimensionless efficiency factor, $f_{\rm E}$,
that depends on the local gas metallicity and density. The
construction of $f_{\rm E}$ and its impact on galaxy formation is discussed
thoroughly by \cite{Schaye2014} and \cite{Crain2014}. For a gas of metallicity,
$Z$, and hydrogen
number density, $n_{\rm{H}}$, the efficiency in the reference model
is:
\begin{equation}
f_{\rm E} = 0.3 + 2.7 S\left(X;w\right),\nonumber
\end{equation}
where $w = 2/\ln10$,
\begin{equation}
X = 3.35\left(\frac{Z}{0.1Z_\odot}\right)\left(
\frac{0.1~\rm{cm}^{-3}}{n_{\rm{H}}}
\right),\nonumber
\end{equation}
and $S(X;w)$ is a convenient sigmoid function which varies between
0 and 1, and which we will need again in the following section.
We define the sigmoid function for $x\geq0$, $w>0$ as
\begin{equation}
S(X;w) = \frac{X^w}{1+X^w}.
\label{sigmoid}
\end{equation}
As $X$ varies from zero to infinity, the sigmoid function $S(X;w)$
smoothly varies between 0 and 1, taking the value of
$\frac{1}{2}$ when the argument $X=1$. The parameter $w$ controls the
rapidity of the transition between the asymptotes.
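For illustration, the following minimal Python sketch (not the simulation code itself) implements the sigmoid of Eqn.~\ref{sigmoid} and the resulting efficiency $f_{\rm E}$ as given above:
\begin{verbatim}
import numpy as np

def sigmoid(X, w):
    # S(X; w) = X^w / (1 + X^w)
    return X**w / (1.0 + X**w)

def f_E(Z_over_Zsun, n_H):
    # Feedback efficiency of the Ref model; Z in solar units,
    # n_H in particles per cm^3.
    X = 3.35 * (Z_over_Zsun / 0.1) * (0.1 / n_H)
    return 0.3 + 2.7 * sigmoid(X, 2.0 / np.log(10.0))

# f_E interpolates between 0.3 (X -> 0) and 3.0 (X -> infinity):
for Z, n in [(0.01, 1.0), (0.1, 0.1), (1.0, 0.01)]:
    print(Z, n, f_E(Z, n))
\end{verbatim}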
Besides energy from star formation, the star particles also
release metals into the ISM through three evolutionary channels: type
Ia supernovae, winds and supernovae from massive stars, and AGB
stars, using the method discussed in \cite{Wiersma2009b}. The yields for each
process are taken from
\cite{Portinari1998}, \cite{Marigo2001} and \cite{Thielemann2003}. Following
\cite{Wiersma2009a}, the abundances
of the eleven elements that dominate the cooling rates are tracked.
These are used to compute element-by-element dependent cooling rates
in the presence of the Cosmic Microwave Background and the
ultraviolet and X-ray backgrounds from galaxies and quasars according
to the model of \cite{Haardt2001}.
For halos whose masses first exceed $M_{\rm{FOF}} = 10^{10}
h^{-1}\msun$ ($\approx1500$ dark matter particles, see section
\ref{ssec:halo_definition}), black hole (BH) sink particles are placed
at the centre of the halos. The BHs are then allowed to grow through
gas accretion and by merging with other BHs using methods based on
those introduced by \cite{Springel2005b} and \cite{Booth2009}. The
gas surrounding a BH is accreted at a rate given by the Bondi-Hoyle
formula \citep{Bondi1944} unless the viscous timescale of the gas
around the BH is larger than the Bondi time, in which case the
accretion rate is reduced by a factor proportional to the cube of the
ratio of the local sound speed and the rotation velocity
\citep{RosasGuevara2013}. For a BH of mass, $M_{\rm{BH}}$, surrounded
by gas at density, $\rho$, velocity with respect to the BH, $v$, and
sound speed, $c_{\rm{s}}$, the accretion rate is:
\begin{equation}
\dot m_{\rm{BH}} = \frac{4\pi G^2
M_{\rm{BH}}^2\rho}{\left(c_{\rm{s}}^2+v^2\right)^{3/2}}\cdot\left\{
\begin{array}{ccl}
\frac{1}{C_{\rm{visc}}}\left(\frac{c_{\rm{s}}}{V_\phi}\right)^3& \rm{if} &
C_{\rm{visc}}V_\phi^3>c_{\rm{s}}^3\\
1& \rm{if} & C_{\rm{visc}}V_\phi^3\leq c_{\rm{s}}^3
\end{array}
\right., \nonumber
\end{equation}
where $V_\phi$ is the circular speed of the gas at the Bondi radius
and $C_{\rm{visc}} = 2\pi$ in the reference simulation.
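The accretion rate above is straightforward to implement; the following minimal sketch (illustrative only, in SI units, and not the simulation code itself) encodes the Bondi-Hoyle rate with the angular-momentum suppression factor:
\begin{verbatim}
import math

G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2

def mdot_bh(M_bh, rho, v, c_s, V_phi, C_visc=2.0 * math.pi):
    # Accretion rate (kg/s) for a BH of mass M_bh (kg) in gas of
    # density rho (kg/m^3), relative velocity v, sound speed c_s and
    # circular speed V_phi at the Bondi radius (all in m/s).
    bondi = (4.0 * math.pi * G**2 * M_bh**2 * rho
             / (c_s**2 + v**2)**1.5)
    if C_visc * V_phi**3 > c_s**3:
        # Viscous time-scale exceeds the Bondi time: suppress the rate
        return bondi * (c_s / V_phi)**3 / C_visc
    return bondi
\end{verbatim}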
Feedback due to AGN activity is implemented in a similar way to the
feedback from star formation described above. The fraction of the
accreted rest mass energy liberated by accretion is
$\epsilon_r=0.1$, and the heating efficiency of this liberated energy
(i.e. the fraction of the energy that couples to the gas phase) is
$\epsilon_f=0.15$. Gas particles receiving AGN feedback energy are
chosen stochastically and their temperature is raised by
$10^{8.5}~\rm{K}$.
These models of supernova and AGN feedback are extensions of the
models developed for the Virgo Consortium projects \textsc{owls}
\citep{Schaye2010} and \textsc{gimic} \citep{Crain2009}. The values
of the parameters were constrained by matching key observables of the
galaxy population including the observed $z\approx0$ galaxy stellar
mass function, galaxy sizes and the relation between black hole and
stellar mass \citep{Crain2014}.
\subsection{Halo definition and selection}
\label{ssec:halo_definition}
Halos were identified using the Friends-of-Friends (FOF) algorithm on
all dark matter particles adopting a dimensionless linking length,
$b=0.2$ \citep{Davis1985}. We then applied the \subfind algorithm,
which is built into \gadget-3 \citep{Springel2001, Dolag2009}, to
split the FOF groups into self-bound substructures. A sphere is grown
outwards from the potential minimum of the dominant subgroup out to a
radius where the mean interior density equals a target value. This
target value is conventionally defined in units of the critical
density, $\rhocr(z)={3H^2(z)}/{8\pi G}$. With our choice of cosmology,
at $z=0$ we have $\rhocr = \rhocr(0) = 127.5~\msun~\rm{kpc}^{-3}$. A
halo of mass, $M_{\rm{X}}$, is then defined as all the mass within the
radius, $R_{\rm{X}}$, for which
\begin{equation}
\frac{3M_{\rm{X}}}{4\pi R_{\rm{X}}^3} = \rm{X}\rhocr(z)
\end{equation}
Commonly used values are $\rm{X}=200$, $500$ and $2500$,
leading to the definition of the mass, $M_{200}$, and the radius,
$R_{200}$, and similar definitions for other values of $\rm{X}$.
In the particular case of the virial radius, $R_{\rm vir}$, one can use the
spherical top-hat collapse model to derive the value of $\rm{X}$
\citep{Eke1996}. We use the
fitting formula given by \cite{Bryan1998}:
\begin{equation}
\rm{X} = 18\pi^2+82\left(\Omega_{\rm{m}}(z) -1\right) -
39\left(\Omega_{\rm{m}}(z) -1\right)^2,
\end{equation}
where
\begin{equation}
\Omega_{\rm{m}}(z)=\Omega_{\rm{m}}\left(1+z\right)^3\left(\frac{H_0}{H(z)}
\right)^2,
\end{equation}
and $H(z)$ is the value of the Hubble parameter at redshift $z$ which, in a flat
Universe, is
\begin{equation}
H(z) = H_0\sqrt{\Omega_{\rm{m}}(1+z)^3+\Omega_\Lambda}.
\end{equation}
In the case of the \emph{Planck1} cosmology, at $z=0$, $\rm{X}=102.1$,
giving $M_{\rm{vir}} = M_{102}$ and $R_{\rm{vir}} = R_{102}$.
We define the circular velocity, $V_{\rm{X}}$, as
\begin{equation}
V_{\rm{X}} = \sqrt{\frac{GM_{\rm{X}}}{R_{\rm{X}}}}.
\end{equation}
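These definitions can be evaluated directly. The following minimal Python sketch (illustrative only; the numerical value of $G$ in $\rm{kpc}\,(\rm{km}/\rm{s})^2\,\msun^{-1}$ is the standard one) recovers $\rm{X}=102.1$ at $z=0$ for the \emph{Planck-1} parameters and computes $R_{200}$ and $V_{200}$ for a halo of given mass:
\begin{verbatim}
import math

Om, Ol, H0 = 0.307, 0.693, 67.77   # H0 in km/s/Mpc
rho_crit0 = 127.5                  # Msun / kpc^3
G_kpc = 4.302e-6                   # kpc (km/s)^2 / Msun

def H(z):
    return H0 * math.sqrt(Om * (1.0 + z)**3 + Ol)

def Omega_m(z):
    return Om * (1.0 + z)**3 * (H0 / H(z))**2

def X_vir(z):
    # Bryan & Norman (1998) fitting formula
    x = Omega_m(z) - 1.0
    return 18.0 * math.pi**2 + 82.0 * x - 39.0 * x**2

def R_X(M, X, z=0.0):
    # Radius (kpc) enclosing a mean density of X * rho_crit(z)
    rho = rho_crit0 * (H(z) / H0)**2
    return (3.0 * M / (4.0 * math.pi * X * rho))**(1.0 / 3.0)

def V_X(M, X, z=0.0):
    # Circular velocity (km/s) at R_X
    return math.sqrt(G_kpc * M / R_X(M, X, z))

print(X_vir(0.0))                  # ~102.1 for the Planck-1 cosmology
print(R_X(1.0e12, 200.0), V_X(1.0e12, 200.0))
\end{verbatim}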
We only consider halos with more than $200$ particles within
$R_{200}$, implying a limit, $M_{200} \gtrsim 2.5\times10^8\msun$, in
our joint analysis of the two \eagle\ simulations. For specific properties
that depend on the internal structure of the halo we adopt more
conservative limits as described in section \ref{sec:HaloProfile}.
\subsection{Matching halos between the two simulations}
\label{ssec:halo_matching}
The \eagle and \dmonly simulations start from identical
Gaussian density fluctuations. Even at $z=0$ it is possible,
in most cases, to identify matches between halos in the two
simulations. These matched halos are comprised of matter that
originates from the same spatial locations at high redshift in the two
simulations. In practice, these identifications are made by matching
the particle IDs in the two simulations, as the values of the IDs
encode the Lagrangian coordinates of the particles in the same way in
both simulations.
For every FOF group in the \eagle simulation, we select the 50 most bound
dark matter particles. We then locate those particles in the \dmonly
simulation. If more than half of them are found in a single FOF group
in the \dmonly simulation, we make a link between those two halos. We then
repeat the procedure by looping over FOF groups in the \dmonly
simulation and
looking for the position of their counterparts in the \eagle simulation. More
than $95\%$ of the halos with $M_{200} > 2\times 10^{10}\msun$ can be
matched bijectively, with the fraction reaching unity for halos above
$7\times 10^{10}\msun$ in the L100N1504 volumes. Similarly, $95\%$ of
the halos with $M_{200} > 3\times10^9\msun$ can be matched bijectively in
the L025N0752 volumes.
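Schematically, the matching can be expressed as follows (a minimal sketch; the data structures are hypothetical placeholders rather than those of our actual pipeline):
\begin{verbatim}
def match_halos(most_bound_ids, fof_of_particle, n_ids=50,
                threshold=0.5):
    # most_bound_ids: halo -> IDs of its 50 most bound DM particles
    # fof_of_particle: particle ID -> FOF group in the other simulation
    links = {}
    for halo, ids in most_bound_ids.items():
        counts = {}
        for pid in ids[:n_ids]:
            grp = fof_of_particle.get(pid)
            if grp is not None:
                counts[grp] = counts.get(grp, 0) + 1
        best = max(counts, key=counts.get) if counts else None
        if best is not None and counts[best] > threshold * n_ids:
            links[halo] = best
        else:
            links[halo] = None
    return links

# A pair (h1, h2) is matched bijectively if links_ab[h1] == h2
# and links_ba[h2] == h1.
\end{verbatim}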
\section{Halo masses and content}
\label{sec:HaloCounts}
Previous work comparing the masses of halos in cosmological galaxy
formation simulations with matched halos in counterpart dark
matter-only simulations have found strong effects for all but the most
massive halos \citep[e.g.][]{Cui2012, Sawala2013}. \cite{Sawala2013}
found that baryonic effects can reduce the masses of halos by up to
$25\%$ for halo masses (in the dark matter only simulation) below
$10^{13}\msun$. (They did not include AGN feedback in their
simulation.) A similar trend was observed at even higher masses by
\cite{Martizzi2013}, \cite{Velliscig2014}, \cite{Cui2014} and
\cite{Cusworth2013} using a variety of subgrid models for star
formation and stellar and AGN feedback. All these authors stress that
their results depend on the details of the subgrid implementation
used. This is most clearly shown in \cite{Velliscig2014}, where the
amplitude of this shift in mass is shown explicitly to depend on the
subgrid AGN feedback heating temperature, for example. Hence, it is
important to use simulations that have been calibrated to reproduce
the observed stellar mass function.
In this section we find that similar differences to those seen before
occur between halo masses in the \eagle and \dmonly models. These
differences are of particular interest because \eagle reproduces well
a range of low-redshift observables of the galaxy population
such as masses, sizes and star formation rates \citep{Schaye2014},
although the properties of clusters of galaxies are not reproduced as
well as in the Cosmo-\textsc{owls} simulation \citep{LeBrun2014} analysed
by \cite{Velliscig2014}.
\subsection{The effect of baryon physics on the total halo mass}
\label{ssec:HaloMass}
In this section we compare the masses of halos in the \eagle and
\dmonly simulations combining our simulations at two different
resolutions. To minimise any possible biases due to incomplete
matching between the simulations, we only consider halos above
$3\times10^{9}\msun$ (in \dmonly), since these can be matched
bijectively to their counterparts in more than $95\%$ of cases.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/massRatioTotalGeom_100_z0p0}
\caption{The ratio of the masses of the matched halos in the \eagle
and \dmonly simulations. The red squares show values for individual
halos and the black filled circles values binned by \dmonly halo
mass. Halos with $M_{200}^{\rm DMO} < 10^{10.1}\msun$ are extracted
from the higher resolution, L025N0752, simulation. The binned points are
the geometric
average of the individual ratios with the error bars at
$M_{200}^{\rm DMO} < 10^{10.1}\msun$ indicating the uncertainty
arising from the low number of halos in the high-resolution
simulation. The black dashed lines placed above and below the black
points show the geometrical $1\sigma$ scatter for each bin. The
lower horizontal grey dotted line indicates the universal dark
matter fraction $f_{\rm DM} = 1 - f_{\rm b} = (\Omega_{\rm{m}} -
\Omega_{\rm{b}}) / \Omega_{\rm{m}} = 0.843$. The upper dotted line
marks unity. The green solid line is the function of
Eqn.~\ref{eq:meFit} fitted to the binned ratios. The vertical dotted
lines mark the values of the fitting parameters $M_{12}$ and
$M_{23}$.}
\label{fig:MatchedHalos}
\end{figure}
Fig.~\ref{fig:MatchedHalos} shows the ratio of $M_{200}$ for matched
halos in the \eagle and \dmonly simulations as a function of $M_{200}$
in the \dmonly simulation. The black filled circles correspond to the
geometric mean of the ratios in each logarithmically spaced mass
bin. The choice of a geometric mean is motivated simply by the fact
that its reciprocal is the geometric mean of
$M_{200}^{\rm{DMO}}/M_{200}^{\rm{EAGLE}}$, which is also a quantity of
interest.
The halos in \eagle are typically lighter than their \dmonly
counterparts. There appear to be three distinct regimes in
Fig.~\ref{fig:MatchedHalos}. At the low mass end,
$M_{200}<5\times10^{10}~\msun$,
$M_{200}^{\rm{EAGLE}}/M_{200}^{\rm{DMO}}$ drops to $\sim0.72$. This
is less than one minus the universal baryon fraction, $f_{\rm DM}$, so not
only have the baryons been removed but the dark matter has also been
disturbed. The reduction in mass due to the loss of baryons lowers the
value of $R_{200}$ and thus the value of $M_{200}$. However, this
reduction in radius is not the sole cause for the reduction in halo
mass: the amount of mass within a fixed physical radius is actually
lower in the simulation with baryons because the loss of baryons,
which occurs early on, reduces the growth rate of the halo
\citep{Sawala2013}. At higher masses, stellar feedback becomes less
effective, but AGN feedback can still expel baryons and the ratio
rises to a plateau of $\sim0.85$ between
$M_{200}^{\rm{DMO}}=10^{12}~\msun$ and $5\times10^{12}~\msun$.
Finally, for the most massive halos ($M_{200} > 10^{14}~\msun$) not
even AGN feedback can eject significant amounts of baryons from the
halos and the mass ratio asymptotes to unity.
\cite{Sawala2013} proposed a fitting function to the ratio of
$M_{200}$ in simulations with and without baryons from the
\textsc{gimic} project \citep{Crain2009}. Their study focused mostly
on lower-mass objects and subhalos, but included enough large halos to
sample the high-mass end of the relation. Their four parameter fitting
function can be written as:
\begin{equation}
\frac{M_{200}}{M_{200}^{\rm{DMO}}} = a +
(b-a)S\left(\frac{M_{200}^{\rm{DMO}}}{M_t};w\right),
\label{eq:tillFit}
\end{equation}
where $S$ is a sigmoid function that varies smoothly between 0 and 1,
and is defined in Eqn.~\ref{sigmoid}. The best-fit parameter values in
\cite{Sawala2013} are:
$(a,b,\log_{10}(M_t/\msun),w)$ = $(0.69,0.98,11.6,0.79)$. The values of $a$
and $b$ correspond to the low- and high-mass asymptotes, respectively.
\cite{Velliscig2014} used a similar fitting function to summarise the
results of their study, again with four parameters, which can be
written as:
\begin{equation}
\frac{M_{200}}{M_{200}^{\rm{DMO}}} = a\left( \frac{b}{a}
\right)^{S\left(M_{200}^{\rm{DMO}}/M_t;w\right)},
\label{eq:marcoFit}
\end{equation}
where exactly the same sigmoid function is used to interpolate between the two
asymptotic values, $a$ and $b$, but now in a geometric rather
than arithmetic fashion. The functional forms of
Eqns.~\ref{eq:tillFit} and \ref{eq:marcoFit} are virtually
identical as, in practice, the ratio $b/a$ is never far from unity.
It is quite clear, however, from Fig.~\ref{fig:MatchedHalos} that a
single sigmoid function does not reproduce the behaviour we observe
particularly well: the ratio shows three, not two, distinct plateaux.
The simulations used by \cite{Sawala2013} did not include AGN feedback
and so did not show the change in mass arising from this form of
feedback. In contrast, the simulations used by \cite{Velliscig2014}
did not have sufficient numerical resolution to see the asymptotic
low-mass behaviour determined by stellar feedback.
To fit our results, we use a double sigmoid:
\begin{eqnarray}
\frac{M_{200}}{M_{200}^{\rm{DMO}}} = r_1 & + &
(r_2-r_1)S\left(\frac{M_{200}^{\rm{DMO}}}{M_{12}};t_{12}\right)
\nonumber\\
& + &
(r_3-r_2)S\left(\frac{M_{200}^{\rm{DMO}}}{M_{23}};t_{23}\right),
\label{eq:meFit}
\end{eqnarray}
where the seven parameters can be interpreted as follows: $r_1$, $r_2$
and $r_3$ are the values of the ratios corresponding to the three
distinct plateaux; the mass scales, $M_{12}$ and $M_{23}$, are the
mid-points between regimes 1 and 2, and 2 and 3 respectively;
and the parameters, $t_{12}$ and $t_{23}$, control the rapidity of
each transition.
The green curve in Fig.~\ref{fig:MatchedHalos} shows the best fitting
curve to the black binned data points. The fit was obtained by a
least-squares minimisation for all seven parameters assuming Poisson
uncertainties for each mass bin. Adopting a constant error instead
gives very similar values for all parameters. The values of the two
transition masses, $M_{12}$ and $M_{23}$, are shown as vertical dotted
lines in Fig.~\ref{fig:MatchedHalos}. The best-fitting parameters are
given in Table \ref{tab:bestFit}. Note that the value of $r_3$ is, as
expected, very close to unity.
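For illustration, a minimal Python sketch of this fit is given below.
The explicit logistic form assumed here for the sigmoid,
$S(x;w)=\left(1+x^{-w}\right)^{-1}$, stands in for the definition of
Eqn.~\ref{sigmoid}, and the input data are placeholders for the binned
geometric means.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, w):
    # Smooth step from 0 to 1, logistic in ln(x). The exact form used
    # in the text may differ -- this is an assumed variant.
    return 1.0 / (1.0 + x**(-w))

def mass_ratio(m200_dmo, r1, r2, r3, log_m12, log_m23, t12, t23):
    # Double sigmoid of Eqn. (meFit): three plateaux r1, r2, r3 with
    # transitions at masses M12 and M23 of widths t12 and t23.
    x12 = m200_dmo / 10.0**log_m12
    x23 = m200_dmo / 10.0**log_m23
    return (r1 + (r2 - r1) * sigmoid(x12, t12)
               + (r3 - r2) * sigmoid(x23, t23))

# Least-squares fit to binned ratios (synthetic placeholder data).
m_bins = np.logspace(9.5, 14.0, 20)          # M200^DMO bin centres [Msun]
ratio  = mass_ratio(m_bins, 0.73, 0.84, 1.0, 11.35, 13.2, 1.7, 2.4)
p0 = (0.7, 0.85, 1.0, 11.5, 13.0, 2.0, 2.0)  # initial guess
popt, pcov = curve_fit(mass_ratio, m_bins, ratio, p0=p0)
perr = np.sqrt(np.diag(pcov))                # 1-sigma uncertainties
\end{verbatim}
Per-bin Poisson uncertainties, as used for the fit in the text, could be
supplied through the \texttt{sigma} argument of \texttt{curve\_fit}.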
\begin{table}
\caption{Best fitting parameters to the
black points in Fig.~\ref{fig:MatchedHalos} using Eqn.~\ref{eq:meFit},
and their uncertainties, which are taken to be the square roots of the
diagonal elements of the covariance matrix of the least-squares fitting
procedure.}
\label{tab:bestFit}
\begin{center}
\begin{tabular}{|c|r|r|}
Parameter & Value & $1\sigma$ fit uncertainty\\
\hline
$r_1$ & $0.7309 $&$\pm 0.0014 $ \\
$r_2$ & $0.8432 $&$\pm 0.0084 $ \\
$r_3$ & $1.0057 $&$\pm 0.0024 $ \\
$\log_{10}(M_{12}/\msun)$ & $11.33 $&$\pm 0.003 $ \\
$\log_{10}(M_{23}/\msun)$ & $13.19 $&$\pm 0.029 $ \\
$t_{12}$ & $1.721 $&$\pm 0.045 $ \\
$t_{23}$ & $2.377 $&$\pm 0.18 $ \\
\hline
\end{tabular}
\end{center}
\end{table}
The value of the first transition mass, $M_{12}=10^{11.35}\msun$, is
similar to that reported by \cite{Sawala2013} who found
$M_t=10^{11.6}\msun$ for the \textsc{gimic} simulations. The second
transition, $M_{23}=10^{13.2}\msun$, is located well below the range
of values found by \cite{Velliscig2014} ($10^{13.7}\msun$
-$10^{14.25}\msun$). However, as \cite{Schaye2014} have shown, the AGN
feedback in the few rich clusters formed in the \eagle volume may not
be strong enough, as evidenced by the fact that this simulation
overestimates the gas fractions in clusters, whereas the
$400~\rm{Mpc}/h$ Cosmo-\textsc{owls} simulation used by
\cite{Velliscig2014} reproduces these observations
\citep{LeBrun2014}.
A simulation with stronger AGN feedback, \eagle-AGNdT9, which gives a
better match to the group gas fractions and X-ray luminosities than
\eagle, was discussed by \cite{Schaye2014}. Applying the same halo
matching procedure to this simulation and its collisionless dark
matter-only counterpart, we obtain slightly different values for the
best-fitting parameters of Eqn.~\ref{eq:meFit}. The difference is
mainly in the parameters, $M_{23}$ and $t_{23}$, which describe the
high-mass end of the double-sigmoid function. In this model, the
transition occurs at
$\log_{10}\left(M_{23}/\msun\right)=13.55\pm0.09$, closer to the
values found by \cite{Velliscig2014}. The width of the transition,
however, is poorly constrained, $t_{23}=3.0\pm12.7$, due to the small
number of halos (only eight with
$M_{200,\rm{DMO}}>2\times10^{13}\msun$) in this simulation, which had
only an eighth of the volume of the reference simulation.
As \cite{Velliscig2014} did, we provide a fit to the scatter in the
log of the ratio about the mean relation, valid over the range where
appropriately constraining data are available:
\begin{equation}
\sigma\left(\log_{10}(M_{200}^{\rm{DMO}})\right) = 0.044 - 0.015 \log_{10}
\left(\frac{M_{200}^{\rm{DMO}}}{10^{12}\msun}\right).
\label{eq:Myfitscatter}
\end{equation}
The scatter is about 10\% for a halo mass of $10^{12} \msun$ and
decreases with mass. The slope in the relation is
approximately a factor of two greater than that found for the AGN models
of \cite{Velliscig2014}.
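Evaluating Eqn.~\ref{eq:Myfitscatter} directly (a short Python sketch)
recovers the quoted numbers:
\begin{verbatim}
import numpy as np

def scatter_dex(m200_dmo):
    # Eqn. (Myfitscatter): scatter in log10 of the mass ratio, in dex.
    return 0.044 - 0.015 * np.log10(m200_dmo / 1e12)

# 0.044 dex at 1e12 Msun corresponds to 10**0.044 - 1 ~ 0.107,
# i.e. the quoted ~10 per cent scatter.
print(scatter_dex(1e12))
\end{verbatim}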
\subsection{The halo mass function}
\label{ssec:HMF}
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/HMF_100_z0p0}
\caption{Top panel: the abundance of halos at $z=0$ as a function of
the mass, $M_{200}$, in the \eagle (red curve, lower line) and
\dmonly (green curve, upper line) simulations. The high resolution volume is
used for $M_{200}^{\rm DMO} <
10^{10.1}\msun$. The resolution limits for both simulations are indicated by
the vertical dashed lines on the left, and the number of halos in sparsely
populated bins is given above the Poisson error bars.
Bottom panel: the ratio of the mass functions in the \eagle and \dmonly
simulations.}
\label{fig:HMF}
\end{figure}
The effect of baryons on the halo mass function can be seen in
Fig.~\ref{fig:HMF}. The red and green lines in the top panel show the
mass functions in the \eagle and \dmonly simulations. The ratio of
the two functions (bottom panel) shows an almost constant shift over
most of the plotted mass range, $M_{200} / \msun = 10^9 -
10^{13}$, as expected from Fig.~\ref{fig:MatchedHalos}. The relatively small
volume of the \eagle simulation does
not sample the knee of the halo mass function well, but extrapolating
the fit to the mass ratios of Eqn.~\ref{eq:meFit} to higher masses,
together with results from previous studies \citep{Cusworth2013,
Martizzi2013, Velliscig2014}, suggests that the differences vanish
for the most massive objects. Studies that rely on galaxy clusters to
infer cosmological parameters will need to take account of the effects
of the baryons, particularly for clusters of mass $M_{200} \lesssim
10^{14}\msun$.
\subsection{Baryonic and stellar fractions in the \eagle simulation}
\label{ssec:baryon_fraction}
We have shown in the previous subsection that for all but the most
massive examples, halo masses are systematically lower when baryonic
processes are included. In this subsection we examine the baryonic
content of halos in the \eagle simulation. We restrict our analysis to
the L100N1504 volume.
Fig.~\ref{fig:stellarFraction} shows the mass fractions of baryons and
stars within $R_{200}$ as a function of the halo mass, $M_{200}$, in
the \eagle simulation. The baryon fraction increases with halo mass
and approaches the universal mean value, $f_{\rm b}^{\rm{univ}} \equiv
\Omega_{\rm{b}}/\Omega_{\rm{m}}$, for cluster mass halos. The gas is
the most important baryonic component in terms of mass over the entire
halo mass range. At a much lower amplitude everywhere, the stellar
mass fraction peaks around a halo mass scale of $2\times10^{12}\msun$
where star formation is at its least inefficient.
The baryon fractions are much lower than the universal value for all but the
most
massive halos. For Milky Way sized halos, we find $f_{\rm b} /
f_{\rm b}^{\rm{univ}} \approx 0.35$. It is only for group and cluster sized
halos, whose deeper gravitational potentials are able to retain most of
the baryons even in the presence of powerful AGN, that the baryon
fraction is close to $f_{\rm b}^{\rm{univ}}$. The baryon fractions of the
halos extracted from the \eagle-AGNdT9 model (which provides a better
match to X-ray luminosities; \citealt{Schaye2014}) are presented in
Appendix~\ref{ssec:AGN_changes}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/baryonFractionAll_100_z0p0}
\caption{Baryon fraction, $f_{\rm b}=M_{\rm b}/M_{200}$ (top panel), and stellar
fraction, $f_*=M_*/M_{200}$ (bottom panel), within $R_{200}$ as a
function of $M_{200}$. The right-hand axis gives the fractions in
units of the universal mean value, $f_{\rm b}^{\rm{univ}}=0.157$. The
solid circles in the top panel and the stars in the bottom panel
show the mean value of the fractions binned by mass. The dashed
lines above and below these symbols show the \rms\ width of each bin
with more than three objects. The stellar fractions are reproduced
as grey stars in the top panel.}
\label{fig:stellarFraction}
\end{figure}
The stellar mass fraction is never more than a few percent. At the
peak, around $M_{200}\approx2\times10^{12}\msun$, it reaches a value
of $\sim0.023$. Multiplying the stellar fraction by the halo mass
function leads to an approximate stellar mass function, which is close
to the actual one (published in \citealt{Schaye2014}), after a fixed
aperture correction is applied to mimic observational measurements.
As may be seen in both panels, there is significant scatter in the
baryonic and stellar fractions, with variations of a factor of a few
possible for individual halos.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/baryonFractionCentreAll_100_z0p0}
\caption{Same as Fig.~\ref{fig:stellarFraction} but for the mass
contained within 5\% of $R_{200}$. Note the different scale on the
ordinate axis. The dotted horizontal lines mark one and two times
the universal baryon fraction.}
\label{fig:stellarFractionCentre}
\end{figure}
While the baryonic and stellar fractions are low within $R_{200}$,
they are much higher in the inner regions of halos as shown in
Fig.~\ref{fig:stellarFractionCentre}, where these fractions are now
plotted within $0.05R_{200}$, a scale commensurate with the sizes of
galaxies both in \eagle and in the real universe. Within this radius
the fractions rise above the cosmic mean for halos in the mass range
$5\times10^{11}\msun<M_{200}<2\times10^{13}\msun$. The central parts
of these halos are strongly dominated by the baryons. In agreement
with observations of the nearby universe, the most important
contribution to the mass on these scales is from stars rather than
gas. Another notable feature is that the most massive halos are
baryon poor in their central regions, reflecting the regulation by AGN
feedback.
\section{Halo profiles}
\label{sec:HaloProfile}
In this section we explore the effects of baryons on halo profiles
restricting the analysis to halos with more than $5000$ particles
within $R_{\rm vir}$, which corresponds to a halo mass of about
$5\times10^{10} \msun$ in the L100N1504 simulation and $6\times10^{9}
\msun$ in the L025N0752 simulation. The stellar masses found in the
\eagle simulation for halos of this mass are consistent with
observational expectations based on abundance matching
\citep{Schaye2014}. Halos smaller than this typically have fewer than
one hundred star particles, which \cite{Schaye2014} showed to be a
necessary criterion for many applications. This limit of 5000 in the
number of particles is intermediate between those used in other
studies. It is similar to the number adopted by \cite{Ludlow2013} and
lower than the number adopted by \cite{Neto2007} and
\cite{Duffy2008,Duffy2010} ($10000$ particles), but higher than the
number adopted by \cite{Gao2008,Dutton2014} ($3000$ particles) or
\cite{Maccio2007} ($250$ particles). There are $22867$ halos with at
least $5000$ particles in the Ref-L100N1504 \eagle simulation and
$2460$ in the Recal-L025N0752 simulation.
We define \emph{relaxed} halos as those where the separation between
the centre of the potential and the centre of mass is less than
$0.07R_{\rm vir}$, as proposed by \cite{Maccio2007}. \cite{Neto2007}
used this criterion, and also imposed limits on the substructure
abundance and virial ratio. \cite{Neto2007} found that the first
criterion was responsible for rejecting the vast majority of unrelaxed
halos. Their next most discriminating criterion was the amount of mass
in substructures. In common with \cite{Gao2008}, here we use stacked
profiles. Hence, individual substructures, which can be important when
fitting individual halos, have a smaller effect on the average
profile. We therefore do not use a substructure criterion to reject
halos. Our relaxed sample includes $13426$ halos in the L100N1504
simulation and $1590$ in the L025N0752 simulation. We construct the
stacked halos by coadding halos in a set of contiguous bins of width
$\Delta \log_{10}(M_{200}) = 0.2$.
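The selection and stacking described above amount to a few lines of
code; the sketch below (Python) uses hypothetical variable names for the
halo catalogue entries.
\begin{verbatim}
import numpy as np

def is_relaxed(centre_pot, centre_mass, r_vir):
    # Maccio et al. (2007) criterion: keep halos whose offset between
    # the centre of potential and centre of mass is below 0.07 R_vir.
    offset = np.linalg.norm(np.asarray(centre_mass)
                            - np.asarray(centre_pot))
    return offset < 0.07 * r_vir

def stack_index(m200, log_m_min=10.0, dlog=0.2):
    # Contiguous logarithmic mass bins of width 0.2 dex for stacking.
    return int(np.floor((np.log10(m200) - log_m_min) / dlog))
\end{verbatim}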
The density and mass profiles of each halo and of the stacked halos
are obtained using the procedure described by \cite{Neto2007}. We
define a set of concentric contiguous logarithmically spaced spherical
shells of width $\Delta\log_{10}(r)=0.078$, with the outermost bin
touching the virial radius, $R_{\rm vir}$. The sum of the masses of
the particles in each bin is then computed for each component (dark
matter, gas, stars, black holes) and the density is obtained by
dividing each sum by the volume of the shell.
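A minimal sketch of this binning procedure, assuming the particle radii
and masses of a single component are available as arrays, is:
\begin{verbatim}
import numpy as np

def shell_density_profile(r, m, r_vir, dlog=0.078, n_bins=32):
    # Logarithmically spaced spherical shells of width 0.078 dex with
    # the outermost edge touching R_vir; n_bins is an assumed choice.
    edges = r_vir * 10.0**(-dlog * np.arange(n_bins + 1))[::-1]
    m_shell, _ = np.histogram(r, bins=edges, weights=m)
    volume = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    r_mid = np.sqrt(edges[1:] * edges[:-1])  # geometric bin centres
    return r_mid, m_shell / volume
\end{verbatim}
Particles interior to the innermost edge are simply ignored by the
histogram; in practice the number of bins sets how far inwards the
profile extends.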
\subsection{Resolution and convergence considerations}
\label{ssec:resolution_test}
Determining the minimum radius above which the results are robust and
reliable is non-trivial. For DM-only simulations, \cite{Gao2008}
showed that the best fit NFW profiles are sensitive to this choice and
it is, therefore, important to estimate this minimum converged radius
accurately. For DM-only simulations the thorough resolution study of
\cite[][P03]{Power2003} suggests a convergence radius, $R_{P03}$, based
on the two-body relaxation timescale of particles orbiting in the
gravitational potential well. This criterion can be written
as:
\begin{equation}
0.6 \leq
\frac{\sqrt{200}}{8}\sqrt{\frac{4\pi\rhocr}{3m_{\rm{DM}}}}
\frac{\sqrt{N(<R_{P03})}}{\ln N(<R_{P03})}\,R_{P03}^{3/2},
\label{eq:P03}
\end{equation}
where $N(<r)$ is the number of particles of mass, $m_{\rm{DM}}$,
within radius $r$.
While this criterion could be applied to the \dmonly simulation, the
situation for the \eagle simulation is more complex since, as
discussed by \cite{Schaye2014}, the concept of numerical convergence
for the adopted subgrid model is itself ill defined. One option would
be simply to apply the P03 criterion, which is appropriate for the
\dmonly simulation, to both simulations. Alternatively, we could
apply the criterion to the dark matter component of the halos in the
baryon simulation or to all the collisionless species (stars, dark
matter and black holes). Neither of these options is fully
satisfactory but, in practice, they lead to similar estimates for
$R_{P03}$. For the smallest halos of the L100N1504 simulation considered in
this section, we find $R_{P03} \approx 5.1~\rm{kpc}$ whereas for the
largest clusters we obtain $R_{P03} \approx 3.5~\rm{kpc}$.
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/StackedProfiles_resolutionTest_z0p0}
\caption{From left to right: the density, mass and circular velocity
profiles of a stack of the 44 relaxed halos of mass $10^{11}\msun$
at $z=0$ that are present in both the L025N0752 simulation (lines)
and the L025N0376 simulation (symbols). Profiles of total
matter (green), dark matter (black), gas (blue) and the stellar
component (red) are shown for both resolutions. The vertical dashed
and dotted lines show the resolution limits, $r_{\rm{c}}$, derived
from our modified P03 criterion for the L025N0376 and L025N0752
simulations respectively; data points are only shown at radii larger
than the Plummer equivalent force softening. The dark matter, total
matter and stellar profiles are well converged even at radii smaller
than $r_{\rm c}$, indicating that this convergence criterion is very
conservative when relaxed halos in a narrow mass range are averaged
together. Convergence is much poorer for the subdominant gas
distribution at large radii.}
\label{fig:profilesResolution}
\end{figure*}
The original P03 criterion ensures that the mean density internal to
the convergence radius, $\bar\rho = 3M(r<R_{P03}) / 4\pi R_{P03}^3$,
is within $10\%$ of the converged value obtained in a simulation of
much higher resolution. As the magnitude of the differences between
the \eagle and \dmonly profiles that we see are significantly larger
than 10\% typically, we can relax the P03 criterion somewhat.
Reanalysing their data, we set the coefficient on the left-hand side
of Eqn.~\ref{eq:P03} to $0.33$, which ensures a converged value of the
mean interior density at the $20\%$ level. With this definition, our
minimal convergence radius $r_{\rm{c}}$ takes values between
$4~\rm{kpc}$ and $2.9~\rm{kpc}$ for halos with $M_{200} \sim
10^{11}\msun$ up to $M_{200}\sim 10^{14}\msun$. Similarly, in the
L025N0752 simulation our modified criterion gives $r_{\rm
c}\approx1.8~\rm{kpc}$. Note that despite adopting a less
conservative criterion than P03, the values of $r_{\rm{c}}$ are always
greater than the Plummer equivalent softening length where the force
law becomes Newtonian, $2.8\epsilon = 0.7~\rm{kpc}$ in the L100N1504
simulation and $0.35~\rm{kpc}$ in the L025N0752 simulation.
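The convergence radius can be obtained by finding the smallest radius at
which Eqn.~\ref{eq:P03} (with the relaxed coefficient) is satisfied. A
Python sketch, assuming consistent units for $\rhocr$ and $m_{\rm DM}$
and particle radii sorted in ascending order:
\begin{verbatim}
import numpy as np

def convergence_radius(r_sorted, m_dm, rho_crit, coeff=0.33):
    # Smallest radius satisfying the modified P03 criterion;
    # coeff=0.6 recovers the original Power et al. (2003) limit,
    # coeff=0.33 the relaxed 20 per cent criterion adopted here.
    n = np.arange(1, len(r_sorted) + 1)   # N(<r) for the sorted radii
    lhs = np.zeros_like(r_sorted)
    lhs[1:] = (np.sqrt(200.0) / 8.0
               * np.sqrt(4.0 * np.pi * rho_crit / (3.0 * m_dm))
               * np.sqrt(n[1:]) / np.log(n[1:]) * r_sorted[1:]**1.5)
    converged = lhs >= coeff
    return r_sorted[np.argmax(converged)] if converged.any() else np.inf
\end{verbatim}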
The validity of our adopted convergence criterion can be tested
directly by comparing results from our simulations at two different
resolutions. Specifically, we compare our two simulations of $(25~
\rm{Mpc})^3$ volumes, L025N0752, and L025N0376 which has the same
initial phases as L025N0752 but the resolution of the reference,
L100N1504, simulation. In the language of \cite{Schaye2014}, this is
a \emph{weak} convergence test since the parameters of the subgrid
models have been recalibrated when increasing the resolution.
Fig.~\ref{fig:profilesResolution} shows the stacked profiles of the
$44$ relaxed halos of mass $10^{11}\msun$ present in both the
L025N0376 and L025N0752 simulations. This mass bin contains enough
halos for the stacks not to be dominated by Poisson noise and the
halos are large enough to contain more than $5000$ particles in the
lower resolution simulation. The three panels show density, contained
mass and circular velocity profiles respectively, using symbols for
the default resolution and lines for the higher resolution
simulation. As may be seen, the stacked dark matter and total matter
profiles are very well converged over most of the radial range, both
in terms of the integral quantities, $M(r)$ and $V_{\rm c}(r)$, and in
terms of the differential quantity, $\rho(r)$. The dashed and dotted
vertical lines show the convergence radius, $r_{\rm c}$, for the
default and high resolution simulations respectively, computed
following the procedure described above.
The dark matter and total matter profiles converge well down to much
smaller radii than $r_{\rm c}$ implying that this limit is very
conservative. This is a consequence of comparing stacked rather than
individual halos since the stacks tend to average deviations arising
from the additional mass scales represented in the high resolution
simulation. We conclude from this analysis that the total matter and
dark matter profiles of stacked halos are well converged in our
simulations and that we can draw robust conclusions about their
properties for $r>r_{\rm c}$ in both the L100N1504 and L025N0752
simulations.
The gas profiles in these simulations display a much poorer level of
convergence. The disagreement between the two simulations increases at
radii larger than $r_{\rm c}$. However, since the mass in gas is
negligible at all radii and at all halo masses, the poor convergence
of the gas profiles does not affect our conclusions regarding the dark
and total matter profiles. We defer the question of the convergence of
gaseous profiles to future studies and simulations.
\subsection{Stacked halo density and cumulative mass of relaxed halos}
\label{ssec:density_profiles}
Having established a robust convergence criterion for stacked halos we
now analyse their profiles extracting halos of mass
$M_{200}\geq10^{11}\msun$ from the L100N1504 simulation and halos of
mass $10^{10}\msun \leq M_{200} \leq 10^{11}\msun$ from the
L025N0752 simulation.
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/StackedProfiles_mixed_z0p0}
\caption{From left to right: the density, mass and circular velocity
profiles for stacks of relaxed halos in different mass bins at
$z=0$. From top to bottom: bins centred on
$M_{200}\approx10^{10}\msun$, $10^{11}\msun$, $10^{12}\msun$,
$10^{13}\msun$ and $10^{14}\msun$. Profiles of the total matter
(green diamonds), dark matter (black squares), gas (blue circles)
and stellar component (red stars) are shown for the halos extracted
from the \eagle simulation. Profiles extracted from halos of
similar mass in the \dmonly simulation are shown with a magenta
solid line on all panels. The \rms scatter of the total profile is
shown as a green shaded region. The vertical dashed line shows the
(conservative) resolution limit, $r_{\rm{c}}$, introduced in the
previous subsection; data are only shown at radii larger than the
force softening. The number of halos in each mass bin is indicated
in the middle panel of each row. The density profiles have been
multiplied by $r^2$ and normalized to reduce the dynamic range of
the plot and to enable easier comparisons between different halo
masses. Note that following the analysis of
Section~\ref{ssec:HaloMass}, matched halos are not guaranteed to
fall into the same mass bin. The oscillations seen in the profiles
of the two highest mass bins, which have only a few examples, are
due to the object-to-object scatter and the presence of
substructures.}
\label{fig:profilesComponent}
\end{figure*}
Fig.~\ref{fig:profilesComponent} shows the stacked profiles for five
different halo mass bins. The left-hand column shows that the DM is
the dominant component of the density of halos of all masses outside
about one percent of $R_{200}$. Inside this radius the stellar
component begins to contribute and even dominate in the case of halos
with mass $\gtrsim10^{12}\msun$. Considering only the baryonic
matter, the inner radii are dominated by stars, but gas dominates
outside of $\sim0.1R_{200}$, as we already saw in
Fig.~\ref{fig:stellarFraction}. In halos of Milky Way size
($M_{200}\sim10^{12}\msun$) the density profile of the gas is roughly
isothermal with $\rho(r)\propto r^{-2}$. The stars exhibit a steep
profile, $\rho(r)\propto r^{-3} - r^{-4}$, in the region where this is
resolved ($r>r_{\rm{c}}$). The resolution of our simulations is not
sufficient to enable the discussion of the stellar profile in the
central part of the galaxies, within $\sim3~\rm{kpc}$ of the centre of
potential.
The shapes of the dark matter profiles in the \eagle simulation are
typically very close to those obtained in the \dmonly simulation. The
profiles depart from the \dmonly shape in halos with
$M_{200}\gtrsim10^{12}\msun$, where the slope in the inner regions
(below $0.1R_{200}$) is slightly steeper. This indicates that some
contraction of the dark matter has taken place, presumably induced by
the presence of baryons in the central region.
The {\em total} density profiles of the \eagle halos also closely
resemble those of the \dmonly simulation. This follows because the DM
dominates over the baryons at almost all radii. In halos with a
significant stellar fraction, the total profile is dominated by the
stars within $\sim0.01R_{200}$. This creates a total inner profile
that is steeper than in the \dmonly simulations. The stellar
contribution is dominant only in the first few kiloparsecs almost
independently of the halo mass. Given that \dmonly halos have
profiles similar to an NFW profile, this implies that the total
profile will be closer to an NFW for more massive halos because the
stars will only be important inside a smaller fraction of the virial
radius. This is most clearly seen in the $10^{14}\msun$ halo where
the profile is dominated by the DM and follows the NFW form down to
$\sim0.01R_{200}$. Similarly, in the smallest halos,
$M_{200}\approx10^{10}\msun$, the baryon content is so low that the
total matter profile behaves almost exactly like the dark matter
profile and is hence in very good agreement with dark matter-only
simulations.
It is also interesting to note the absence in our simulations of DM
cores of size $0.5-2~\rm{kpc}$ such as have been claimed in
simulations of individual halos of various masses, assuming different
subgrid models and, in some cases, different techniques for solving
the hydrodynamical equations \citep[e.g.][]{Navarro1996b, Read2005,
Mashchenko2006, PontzenGovernato2012, Teyssier2013, Martizzi2013,
Arraki2014,
PontzenGovernato2014,Trujillo2015,Murante2015,Onorbe2015}, even
though such cores would have been resolved in our highest resolution
simulations. As first shown by \cite{Navarro1996b}, density cores can
be generated by explosive events in the central regions of halos when
gas has become self-gravitating. Our simulations include violent
feedback processes but these are not strong enough to generate a core
or even a systematic flattening of the inner DM profile on resolved
scales. We cannot, of course, rule out the possibility that the
central profile could be modified even with our assumed subgrid model
in higher resolution simulations.
\subsection{Halo circular velocities}
\label{ssec:rotation_curves}
The right-hand column of Fig.~\ref{fig:profilesComponent} shows the
rotation curves. Those for Milky Way mass halos display a flat profile
at radii greater than $10~\rm{kpc}$ as observed in our galaxy and
others \citep[e.g.][]{Reyes2011}. The dominant contribution of the DM
is clearly seen here. The stellar component affects only the first few
kiloparsecs of the rotation curve. The rotation curves of halos with a
significant ($>0.01$) stellar fraction (i.e. halos with
$M_{200}>3\times10^{11}\msun$) have a higher amplitude than the
corresponding \dmonly stacked curves at small radii
$r\lesssim10~\rm{kpc}$. The combination of the stellar component and
contraction of the inner dark matter halo leads to a maximum rotation
speed that is $\approx30\%$ higher in the \eagle simulation compared
to that in \dmonly.
To assess whether the circular velocity profiles for the galaxies in the
\eagle simulation are realistic, we compare them to a sample of
observed disc galaxies. We use the data from \cite{Reyes2011}, who
observed a sample of 189 spiral galaxies and used $\rm{H}\alpha$ lines
to measure the circular speeds. From their SDSS $r-$band
magnitudes and $g-r$ colours, we derive the stellar masses of their
galaxies using the $M_*/L$ scaling relation of \cite{Bell2003}. We
apply a $-0.1~\rm{dex}$ correction to adjust these stellar mass
estimates from their assumed `diet Salpeter' IMF to our adopted
\cite{Chabrier2003} IMF, and apply the correction from
\cite{Dutton2011} to convert our masses to the MPA/JHU definitions
(see \citealt{McCarthy2012} for details).
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/rotationCurvesRelaxed}
\caption{Simulated circular velocity curves and observed spiral galaxy
rotation curves in different stellar mass bins. The green diamonds
with error bars correspond to the total circular velocity and
the \rms scatter around the mean. The black squares,
red stars and blue circles represent the mean contributions of dark
matter, star and gas particles respectively. The dashed vertical line
is the conservative resolution limit, $r_{\rm{c}}$. The background brown
curves are the best-fit $\rm{H}\alpha$ rotation curves extracted from
\citet{Reyes2011}. We plot their data
up to their $i-$band measured isophotal $R_{80}$ radii.
}
\label{fig:rotationCurves}
\end{figure*}
In Fig.~\ref{fig:rotationCurves} we show the rotation curves of our
sample of relaxed halos binned by the stellar mass contained within an
aperture of $30~\rm{kpc}$, as used by \cite{Schaye2014} who already
compared the predicted maximum circular velocities to
observations. The simulated galaxies match the observations
exceptionally well, both in terms of the shape and the normalisation
of the curves. For all mass bins up to $M_*<10^{11}\msun$, the \eagle
galaxies lie well within the scatter in the data. Both the shape and
the amplitude of the rotation curves are reproduced in the
simulation. The scatter appears to be larger in the real than in the
simulated population, particularly in the range $10.5 < \log_{10}
M_*/\msun < 10.75$ (lower left panel), but the outliers in the data
might be affected by systematic errors \citep{Reyes2011} arising, for
instance, from the exact position of the slit used to measure spectral
features or from orientation uncertainties.
The rotation curves for the highest stellar mass bin in the
simulation, $M_* >10^{11}\msun$, show a clear discrepancy with the
data. Although the general shape of the curves is still consistent,
the normalisation is too high. Part of this discrepancy might be due
to the selection of objects entering into this mass bin. The data
refer to spiral galaxies, whereas no selection besides stellar mass
has been applied to the sample of simulated halos. This highest mass
bin is dominated by elliptical objects in \eagle. Selecting
spiral-like objects (in a larger simulation) may well change the
results at these high stellar masses. A more careful measurement of
the rotation velocities in the simulations in a way that is closer to
observational estimates (e.g. by performing mock observations of
stellar emission lines) might also reduce the discrepancies. We defer
this more careful comparison to future work.
At all masses beyond the convergence radius the dominant contribution
to the rotation curve comes from the dark matter. For the highest mass
bins the stellar contribution is very important near the centre and
this is crucial in making the galaxy rotation curves relatively flat.
As already seen in the previous figure, the contribution of gas is
negligible.
\subsection{An empirical universal density profile}
\label{ssec:profiles_fit}
It is well known that the density profiles of relaxed halos extracted
from dark matter only simulations are well fit by the NFW profile
(Eqn.~\ref{eq:nfw}) at all redshifts down to a few percent of the
virial radius \citep{Navarro1997,Bullock2001, Eke2001, Navarro2004,
Shaw2006, Maccio2007, Neto2007, Duffy2008, Ludlow2013, Dutton2014}.
The total matter profiles shown in Fig.~\ref{fig:profilesComponent}
for the \eagle simulation follow the NFW prediction in the outer
parts, but the inner profile is significantly steeper than the NFW
form, which has an inner slope ($\rho(r\rightarrow0) = r^{-\eta}$ with
$\eta\approx1$). The deviations from an NFW profile can be quite
large on small scales.
To show this, we fit the total mass profiles using the fitting
procedure defined by \cite{Neto2007}. We fit an NFW profile to the
stacked profiles over the radial range $[0.05,1]R_{\rm vir}$, shown
respectively as blue dashed curves and filled circles in
Fig.~\ref{fig:profiles}. This choice of minimum radius is larger than
the conservative convergence radius given by the version of the \cite{Power2003}
criterion that we adopted in the previous section. As described in
Section~\ref{ssec:density_profiles}, the bins are spherical and spaced
logarithmically in radius.
The \cite{Neto2007} fit is performed by minimizing a $\chi^2$
expression with two free parameters, $r_{\rm{s}}$ and
$\delta_{\rm{c}}$, characterising the NFW profile, over a set of $N_{\rm b}
(=17)$ radial bins. We use the Levenberg-Marquardt method to
minimize the \rms deviation, $\sigma_{\rm{fit}}$, between the binned
logarithmic densities $\rho_{\rm{i}}$ and the NFW profile
$\rho_{\rm{NFW}}$:
\begin{equation}
\sigma_{\rm{fit}}^2 = \frac{1}{N_{\rm{b}}-1} \sum_{i=1}^{N_{\rm{b}}}
\left(\log_{10}\rho_{\rm{i}} -
\log_{10}\rho_{\rm{NFW}}(\delta_{\rm{c}},r_{\rm{s}})\right)^2.
\label{eq:chi2}
\end{equation}
Note that the bins are weighted equally.
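A minimal Python sketch of this minimisation, assuming the binned
densities are expressed in units of $\rhocr$ and restricted to the range
$[0.05,1]R_{\rm vir}$:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def log_rho_nfw(r, delta_c, r_s):
    # NFW profile in units of rho_cr: delta_c / [(r/r_s)(1 + r/r_s)^2].
    x = r / r_s
    return np.log10(delta_c / (x * (1.0 + x)**2))

def fit_nfw(r_bins, rho_bins):
    # Equally weighted log-space residuals of Eqn. (chi2); the two
    # parameters are fitted in log10 to keep them positive.
    def residuals(p):
        return np.log10(rho_bins) - log_rho_nfw(r_bins,
                                                10**p[0], 10**p[1])
    sol = least_squares(residuals, method="lm",
                        x0=[4.0, np.log10(0.2 * r_bins.max())])
    sigma_fit2 = np.sum(sol.fun**2) / (len(r_bins) - 1)
    return 10**sol.x[0], 10**sol.x[1], sigma_fit2
\end{verbatim}
The \texttt{method="lm"} option selects the Levenberg-Marquardt
algorithm mentioned above; the initial guesses are illustrative only.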
\begin{figure*}
\includegraphics
[width=\textwidth]{Figures/Total_EAGLE_Profiles_Relaxed_mixed_z0p0}
\caption{Stacked density profiles of the total mass normalized by the
average $R_{200}$ radius and scaled by $r^2$ for halos of different
masses. The filled circles are the data points used to fit an NFW
profile following \citet{Neto2007}, i.e. radial bins above
$0.05R_{\rm vir}$; data points below this radius are shown using
fainter symbols. The blue
dashed lines correspond to the NFW fit to the filled circles, while
the brown lines correspond to an Einasto profile fit to all radial
bins down to the convergence radius, $r_{\rm{c}}$. The red solid
line is the best-fit profile given by Eqn.~\ref{eq:densityProfile},
which includes an NFW contribution for the outer parts of the halos
and an additional contribution around the centre to model the
baryons. The best-fitting parameters for each mass bins are given in
Table~\ref{tab:bestFitParameters}.}
\label{fig:profiles}
\end{figure*}
The best-fit profile for each stacked halo mass bin is shown in
Fig.~\ref{fig:profiles} as a blue dashed line. The NFW profile is a
very good fit to the filled circles, confirming that the outer parts
of the halos are well described by this profile within $R_{200}$.
However, the NFW profile is clearly a poor fit at small radii
($r\lesssim0.05R_{\rm vir}$) for halos with a significant stellar
mass, i.e. for halos above $\sim3\times10^{11}\msun$, as expected from
Fig.~\ref{fig:profilesComponent}, due to the increased contribution of
the stars and the subsequent contraction of the DM profile. For halo
masses above $10^{12}\msun$, the discrepancy between the NFW
prediction and the actual total mass density profile reaches factors
of two close to the resolution limit.
When multiplied by $r^2$, the NFW profile reaches a maximum at
$r=r_{\rm{s}}$. For $M_{200}>3\times10^{11}\msun$ the profiles do not
display a single sharp maximum but rather a broad range of radii at
almost constant $r^2\rho(r)$, i.e. a quasi isothermal profile. For
$M_{200} \gtrsim3\times10^{13}\msun$, the difference is even more
striking as a second maximum appears at small radii. We will explore
alternative fitting formulae in what follows, but it is clear that a
fitting formula describing the most massive halos will require several
parameters to work well.
In their detailed study, \cite{Navarro2004} explored the use of a more
general class of profiles, where the slope varies with radius as a
power law. This alternative profile was originally introduced by
\cite{Einasto1965} to model old stellar populations in the Milky Way,
and so \cite{Navarro2004} called it the ``Einasto profile'':
\begin{equation}
\rho(r) = \rho_{-2}
\exp\left[-\frac{2}{\alpha}\left(\left(\frac{r}{r_{-2}}\right)^\alpha -
1\right)\right],
\label{eq:Einasto}
\end{equation}
which can be rewritten as
\begin{equation}
\frac{\rm{d}\ln \rho(r)}{\rm{d}\ln r} = -2\left(\frac{r}{r_{-2}} \right)^\alpha,
\end{equation}
to highlight that the slope is a power-law of radius.
\cite{Navarro2004} showed that halos in \dmonly simulations are
typically better fit by the Einasto profile and that the value of the
power law parameter, $\alpha\approx0.17$, can be used across the whole simulated
halo mass
range. This was confirmed by \cite{Gao2008} and \cite{Duffy2008} who found a
weak
dependence of $\alpha$ on the peak-height parameter. \cite{Gao2008}
demonstrated that the Einasto profile is more robust
to choices of the minimal converged radius, $r_{\rm{c}}$, improving the
quality of the fit.
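For reference, a direct transcription of Eqn.~\ref{eq:Einasto} and its
logarithmic slope (Python):
\begin{verbatim}
import numpy as np

def rho_einasto(r, rho_m2, r_m2, alpha=0.17):
    # Einasto profile (Eqn. Einasto); alpha ~ 0.17 is the typical
    # value found for DM-only halos.
    return rho_m2 * np.exp(-2.0 / alpha * ((r / r_m2)**alpha - 1.0))

def log_slope_einasto(r, r_m2, alpha=0.17):
    # d ln(rho) / d ln(r) = -2 (r / r_m2)^alpha: a power law of radius.
    return -2.0 * (r / r_m2)**alpha
\end{verbatim}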
In the case of our sample of halos, the additional freedom to change
the slope of the power law describing the density profile helps
improve the fit. We use the same procedure as in the NFW case to find
the best-fitting parameters $(r_{-2}, \rho_{-2}, \alpha)$ but instead
of using only the radial bins with $r>0.05R_{\rm vir}$, we use all
bins with $r > r_{\rm{c}}$. The number of bins used is now a function
of the halo mass. The resulting best-fit profiles are displayed in
Fig.~\ref{fig:profiles} as solid yellow lines. The fits are slightly
better than in the NFW case simply because the rolling power law
allows for a wider peak in $r^2\rho(r)$, but the Einasto profile is
clearly unable to capture the complex behaviour seen in the profiles
of the highest mass bins. The better fit quality is only incidental.
Furthermore, if we had used the full range of radial bins for the NFW
fitting procedure, we would have obtained similar fits as the two
functions are very similar. Similarly, restricting the Einasto fit to
the bins with $r>0.05R_{\rm vir}$ yields a best fit profile (and
$\sigma_{\rm{fit}}$) almost identical to the NFW ones shown by the
dashed blue lines.
Clearly, in the presence of baryons, neither the NFW nor the Einasto
profile faithfully represents the inner matter density profile. As
Fig.~\ref{fig:profilesComponent} showed, the inner profile is shaped
by both a substantial stellar contribution and the contraction of the
dark matter associated with the elevated baryon fraction towards the
centre. We find that the total profile can be fit everywhere by the
following formula:
\begin{equation}
\frac{\rho(r)}{\rhocr} =
\frac{\delta_{\rm{c}}}{\left(r/r_{\rm{s}}\right)\left(1 + r/r_{\rm{s}}\right)^2}
+
\frac{\delta_{\rm{i}}}{\left(r/r_{\rm{i}}\right)\left(1+\left(r/r_{\rm{i}}\right)^2\right)}.
\label{eq:densityProfile}
\end{equation}
The first term is the NFW profile, which we have shown gives a good
fit to the outer, DM-dominated profile. The second term is
NFW-like in that it shares the same asymptotic behaviour at small and
large radii and has a slope of $-2$ at its scale radius,
$r=r_{\rm{i}}$. Its transition between the asymptotic slope regimes of
$-1$ and $-3$ is sharper than for the NFW profile, causing it to rise a
factor of two above a corresponding NFW profile with the same scale
radius and asymptotic behaviour. We have found by trial and error that
this makes it particularly suitable for describing the excess over an
NFW profile seen in the central regions of the \eagle halos.
We fit this profile using all the radial bins down to our resolution
limit, $r_{\rm{c}}$. We rewrite expression (\ref{eq:chi2}) using our
new profile and minimize $\sigma_{\rm{fit}}$ leaving the four
parameters $(r_{\rm{s}}, \delta_{\rm{c}}, r_{\rm{i}},
\delta_{\rm{i}})$ free. The resulting fits are displayed in
Fig.~\ref{fig:profiles} as red solid lines. The values of the
best-fitting parameters are given in
Table~\ref{tab:bestFitParameters}. The fit is clearly of a much better
quality than the NFW and Einasto formulas for the same set of radial
bins.
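The corresponding four-parameter fit follows the same pattern as the NFW
fit sketched above (Python; radii in kpc and densities in units of
$\rhocr$ are assumed, and the initial guesses are illustrative only):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def rho_total(r, delta_c, r_s, delta_i, r_i):
    # Eqn. (densityProfile) in units of rho_cr: an NFW outer term plus
    # an NFW-like inner term with a sharper transition around r_i.
    nfw = delta_c / ((r / r_s) * (1.0 + r / r_s)**2)
    inner = delta_i / ((r / r_i) * (1.0 + (r / r_i)**2))
    return nfw + inner

def fit_total(r_bins, rho_bins):
    # All four parameters fitted in log10 over every bin with r > r_c;
    # "lm" requires at least as many bins as parameters.
    def residuals(p):
        return np.log10(rho_bins) - np.log10(rho_total(r_bins,
                                                       *10.0**p))
    sol = least_squares(residuals, x0=[4.0, 1.5, 5.5, 0.4],
                        method="lm")
    return 10.0**sol.x  # delta_c, r_s, delta_i, r_i
\end{verbatim}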
For the lowest mass halos ($M_{200}<6\times10^{10}\msun$), this new profile does
not provide a better $\sigma_{\rm
fit}$ than a standard NFW profile does. This is expected since the baryons have
had little impact on their inner
structure. The values of $r_{\rm i}$ and $\delta_{\rm i}$ are, hence, not
constrained by the fits. For these low mass
stacks, we only provide the best-fitting NFW parameters in
Table~\ref{tab:bestFitParameters} instead of the parameters
of our alternative profile.
The different features of the simulated halos are well captured by the
additional component of our profile. We will demonstrate in the next
sections that the additional degrees of freedom can be recast as
physically meaningful quantities and that these are closely correlated
with the halo mass. As in the case of the NFW profile, this implies
that this new profile is effectively a one parameter fit, where the
values of all the four parameters depend solely on the mass of the
halo. It is worth mentioning that this profile also reproduces the
trends in the radial bins below the resolution limit $r_{\rm{c}}$.
\begin{table*}
\begin{minipage}{137mm}
\caption{Best-fit parameters of the profile
  (Eqn.~\ref{eq:densityProfile}) for each stack of relaxed halos as
  plotted in Fig.~\ref{fig:profiles}. The tabulated values correspond
  to the black circles plotted in Figs.~\ref{fig:new_concentration},
  \ref{fig:coreSize} and \ref{fig:coreMass}. The first column gives the
  centre of the mass bin used for each stack and the last column the
  number of halos in each of the stacks. The concentration, $c_{200}$,
  and inner profile mass, $M_{\rm{i}}$, are defined, respectively, by
  Eqns.~\ref{eq:defconc} and~\ref{eq:M_i}. For the halo stacks in the
  lowest mass bins, the profile of Eqn.~\ref{eq:densityProfile} does
  not provide a better fit than a standard NFW profile; for these we
  give only the best-fitting NFW parameters.}
\label{tab:bestFitParameters}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|}
$M_{200}~[\msun]$ & $R_{200}~[\rm{kpc}]$ & $r_s~[\rm{kpc}]$ & $c_{200}~[-]$ & $\delta_c~[-]$ & $r_i~[\rm{kpc}]$ & $\delta_i~[-]$ & $M_i~[\msun]$ & $N_{\rm{halo}}$\\
\hline
$  1\times10^{10}$ & $ 45.4$ & $  4.2$ & $10.7$ & $5.2\times10^{4}$ & $\textendash$ & $\textendash$ & $\textendash$ & $362$\\
$1.6\times10^{10}$ & $ 52.8$ & $  4.8$ & $11.0$ & $5.5\times10^{4}$ & $\textendash$ & $\textendash$ & $\textendash$ & $231$\\
$2.5\times10^{10}$ & $ 61.4$ & $  5.7$ & $10.7$ & $5.2\times10^{4}$ & $\textendash$ & $\textendash$ & $\textendash$ & $153$\\
$  4\times10^{10}$ & $ 70.8$ & $  6.7$ & $10.5$ & $  5\times10^{4}$ & $\textendash$ & $\textendash$ & $\textendash$ & $96$\\
$6.3\times10^{10}$ & $ 83.5$ & $  9.8$ & $ 8.5$ & $2.7\times10^{4}$ & $2.01$ & $1.25\times10^{5}$ & $5.66\times10^{8}$ & $96$\\
$  1\times10^{11}$ & $ 97.4$ & $ 11.7$ & $ 8.3$ & $2.5\times10^{4}$ & $2.23$ & $1.53\times10^{5}$ & $9.44\times10^{8}$ & $2412$\\
$1.6\times10^{11}$ & $113.7$ & $ 14.1$ & $ 8.0$ & $2.3\times10^{4}$ & $2.38$ & $2.12\times10^{5}$ & $1.58\times10^{9}$ & $1657$\\
$2.5\times10^{11}$ & $132.6$ & $ 17.2$ & $ 7.7$ & $2.1\times10^{4}$ & $2.59$ & $2.85\times10^{5}$ & $2.74\times10^{9}$ & $1119$\\
$  4\times10^{11}$ & $154.3$ & $ 20.6$ & $ 7.5$ & $1.9\times10^{4}$ & $2.56$ & $4.75\times10^{5}$ & $4.45\times10^{9}$ & $681$\\
$6.3\times10^{11}$ & $180.3$ & $ 25.7$ & $ 7.0$ & $1.6\times10^{4}$ & $2.61$ & $7.28\times10^{5}$ & $7.17\times10^{9}$ & $457$\\
$  1\times10^{12}$ & $208.8$ & $ 31.7$ & $ 6.6$ & $1.4\times10^{4}$ & $2.78$ & $9.22\times10^{5}$ & $ 1.1\times10^{10}$ & $282$\\
$1.6\times10^{12}$ & $244.7$ & $ 38.3$ & $ 6.4$ & $1.3\times10^{4}$ & $2.89$ & $1.18\times10^{6}$ & $1.58\times10^{10}$ & $180$\\
$2.5\times10^{12}$ & $286.3$ & $ 44.3$ & $ 6.5$ & $1.4\times10^{4}$ & $2.73$ & $1.72\times10^{6}$ & $1.94\times10^{10}$ & $126$\\
$  4\times10^{12}$ & $332.4$ & $ 54.2$ & $ 6.1$ & $1.3\times10^{4}$ & $2.65$ & $2.17\times10^{6}$ & $2.23\times10^{10}$ & $83$\\
$6.3\times10^{12}$ & $386.6$ & $ 68.6$ & $ 5.6$ & $1.1\times10^{4}$ & $2.55$ & $2.85\times10^{6}$ & $2.63\times10^{10}$ & $60$\\
$  1\times10^{13}$ & $455.2$ & $ 73.0$ & $ 6.2$ & $1.4\times10^{4}$ & $2.26$ & $ 4.2\times10^{6}$ & $ 2.7\times10^{10}$ & $29$\\
$1.6\times10^{13}$ & $534.3$ & $ 95.3$ & $ 5.6$ & $1.1\times10^{4}$ & $2.82$ & $3.16\times10^{6}$ & $3.95\times10^{10}$ & $27$\\
$2.5\times10^{13}$ & $631.4$ & $130.0$ & $ 4.9$ & $7.7\times10^{3}$ & $2.13$ & $6.81\times10^{6}$ & $3.65\times10^{10}$ & $5$\\
$  4\times10^{13}$ & $698.9$ & $124.6$ & $ 5.6$ & $1.1\times10^{4}$ & $2.81$ & $4.32\times10^{6}$ & $5.31\times10^{10}$ & $8$\\
$6.3\times10^{13}$ & $838.1$ & $141.7$ & $ 5.9$ & $1.2\times10^{4}$ & $2.73$ & $5.23\times10^{6}$ & $5.87\times10^{10}$ & $4$\\
$  1\times10^{14}$ & $964.7$ & $188.1$ & $ 5.1$ & $8.9\times10^{3}$ & $0.909$ & $1.05\times10^{8}$ & $4.38\times10^{10}$ & $1$\\
\hline
\end{tabular}
\end{minipage}
\end{table*}
For completeness, we give the analytic expressions for both the
enclosed mass, $M(r<R)$, and the gravitational potential, $\Phi(r)$,
for the empirical profile of Eqn.~\ref{eq:densityProfile},
\begin{eqnarray}
M(r<R) &=& 2\pi\rhocr\Bigg(2\delta_{\rm{c}} r_{\rm{s}}^3
\left[\ln\left(1+\frac{R}{r_{\rm{s}}}\right)-\frac{R}{R+r_{\rm{s}}}\right]
\nonumber \\
&& + \delta_{\rm{i}}
r_{\rm{i}}^3\ln\left(1+\frac{R^2}{r_{\rm{i}}^2}\right)\Bigg),
\label{eq:massProfile}
\end{eqnarray}
and
\begin{eqnarray}
\Phi(r)&=&-4\pi G \rhocr \Bigg(\frac{\delta_{\rm{c}} r_{\rm{s}}^3}{r}
\ln\left(1+\frac{r}{r_{\rm{s}}}\right) \nonumber \\
&&+\,\delta_{\rm{i}}r_{\rm{i}}^2\left[\frac{\pi}{2}-\arctan\left(\frac{r}{r_{\rm{i}}}\right) +
\frac{r_{\rm{i}}}{2r}\ln\left(1+\frac{r^2}{r_{\rm{i}}^2}\right)\right]\Bigg).
\label{eq:potentialProfile}
\end{eqnarray}
The expressions for an NFW profile are recovered by setting $\delta_{\rm{i}}=0$.
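A direct transcription of these expressions (Python; the unit system,
with $G$ in $\rm{kpc}\,(\rm{km}/\rm{s})^2\,\msun^{-1}$, is an assumed
choice) makes the NFW limit explicit:
\begin{verbatim}
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / Msun, an assumed unit convention

def mass_enclosed(R, delta_c, r_s, delta_i, r_i, rho_crit):
    # Eqn. (massProfile); setting delta_i = 0 recovers the NFW result.
    nfw = 2.0 * delta_c * r_s**3 * (np.log(1.0 + R / r_s)
                                    - R / (R + r_s))
    inner = delta_i * r_i**3 * np.log(1.0 + R**2 / r_i**2)
    return 2.0 * np.pi * rho_crit * (nfw + inner)

def potential(r, delta_c, r_s, delta_i, r_i, rho_crit):
    # Eqn. (potentialProfile).
    nfw = delta_c * r_s**3 / r * np.log(1.0 + r / r_s)
    inner = delta_i * r_i**2 * (np.pi / 2.0 - np.arctan(r / r_i)
            + r_i / (2.0 * r) * np.log(1.0 + r**2 / r_i**2))
    return -4.0 * np.pi * G * rho_crit * (nfw + inner)
\end{verbatim}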
Finally, we stress that while this function provides an excellent fit
to the results over the range of applicability the second term should
not be interpreted as a description of the stellar profile. Rather,
the second term models a combination of the effect of all components,
including the contraction of the dark matter, and is only valid above
our resolution limit which is well outside the stellar half-mass
radius. Higher-resolution simulations, with improved subgrid models,
would be needed to model accurately the stars and gas in these very
inner regions.
\subsection{Dark matter density profile}
\label{ssec:DM_profiles}
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/DMONLY_Profiles_Relaxed_mixed_z0p0}
\caption{Stacked density profiles of the \dmonly halos normalized by
the average $R_{200}$ radius and scaled by $r^2$ for a selection of
masses. The filled circles are the data points used to fit an NFW
profile following \citet{Neto2007}. The vertical line shows the
resolution limit. Data points are only shown at radii larger than the
Plummer-equivalent
softening ($2.8\epsilon=0.7~\rm{kpc}$). The blue dashed and solid brown lines
correspond, respectively, to the
best-fit NFW and Einasto profiles to the filled circles. Only one halo
contributes to the right hand
panel.}
\label{fig:DMONLY_profile}
\end{figure*}
It is interesting to see whether the radial distribution of dark
matter is different in the \dmonly and \eagle simulations. In this
subsection we look at the density profiles of just the DM in both the
\dmonly and \eagle simulations. In Fig.~\ref{fig:DMONLY_profile} we
show the profiles of the stacked halos extracted from the \dmonly
simulation for different halo mass bins. The dark matter outside
$0.05R_{\rm{vir}}$ is well fit by the NFW profile, in agreement with
previous work. The yellow curves show the best fit Einasto profile,
and in agreement with many authors \citep{Navarro2004, Gao2008,
Dutton2014} we find that the Einasto fit, with one extra parameter,
provides a significantly better fit to the inner profile.
\begin{figure*}
\includegraphics[width=\textwidth]{Figures/DM_EAGLE_Profiles_Relaxed_mixed_z0p0}
\caption{Stacked density profiles of the dark matter component of the
\eagle halos normalized by the average $R_{200}$ radius and scaled
by $r^2$ for a selection of halo masses. The green dash dotted line
represents the total mass profile (from Fig.~\ref{fig:profiles}).
The vertical line shows the resolution limit. Data points are only shown at
radii larger than the Plummer-equivalent
softening ($2.8\epsilon=0.7~\rm{kpc}$). The blue dashed
lines and solid brown lines correspond, respectively, to the
best-fit NFW and Einasto profiles to the filled circles.}
\label{fig:EAGLE_DM_profile}
\end{figure*}
We show the stacked DM density profiles for the \eagle
simulation in Fig.~\ref{fig:EAGLE_DM_profile} together with NFW
and Einasto fits to the density at $0.05 \leq r/R_{\rm{vir}} \leq 1$. For
the radii beyond $0.05R_{\rm{vir}}$ the NFW profile provides a
good fit. The Einasto profile fits are better in the
inner regions, but for the middle two mass bins
($10^{12}\msun$ and $10^{13}\msun$), the DM profile
rises significantly above the Einasto fit. This rise coincides with a
more pronounced feature in the total mass profile. The peak of
the central stellar mass fraction occurs at this same halo mass
scale, as shown in Fig.~\ref{fig:stellarFractionCentre}.
We conclude that the DM components of our simulated halos in both the
\dmonly and \eagle simulations are well described by an NFW profile
for radii $[0.05R_{200}-R_{200}]$. For the \dmonly simulation an
Einasto profile provides a better fit than an NFW profile at smaller
radii. However, for the \eagle simulation neither an NFW nor the
Einasto profile provide a particularly good fit inside $0.05R_{\rm
vir}$ for halos in the $10^{12}\msun$ and $10^{13}\msun$ mass bins,
where the contribution of stars to the inner profile is maximum. For
less massive and more massive halos than this both functions give
acceptable fits.
In their detailed study of ten simulated galaxies from the MaGICC
project \citep{Stinson2013}, \cite{DiCintio2014} fitted
$(\alpha,\beta,\gamma)$-profiles \citep{Jaffe1983} to the DM profiles
of haloes in the mass range $10^{10}\msun \leq M_{\rm
vir}\leq10^{12}\msun$ and studied the dependence of the parameters
on the stellar fraction. We leave the study of the DM profiles in the
\eagle halos to future work but we note that although in the small
halo regime, $M_{200}\leq10^{12}\msun$, an
$(\alpha,\beta,\gamma)$-profile may be a good fit, the profiles of our
most massive halos, $M_{200}\geq10^{13}\msun$, show varying slopes
down to small radii, $r\leq0.05R_{\rm vir}$, and are unlikely to be
well fit by such a function as was already suggested by \cite{DiCintio2014}.
\subsection{Halo concentrations}
\label{ssec:concentrations}
The concentration of a halo, $c_{\rm{X}}$, is conventionally defined
by the ratio, $c_{\rm{X}} =R_{\rm{X}}/r_{\rm conc}$, where
$R_{\rm{X}}$ is the radius within which mean internal density is
$X\rhocr$, and $r_{\rm conc}$ is the radius at which the spherically
averaged density profile (assumed monotonic) obeys
\begin{equation}
\frac{{\rm d}\ln\rho(r)}{{\rm d}\ln r} = -2.
\label{eq:defconc}
\end{equation}
For an NFW profile, $r_{\rm conc}=r_{\rm{s}}$, while for an Einasto
profile $r_{\rm conc}=r_{-2}$. We set $\rm{X}=200$.
Previous work \citep{Navarro1997, AvilaReese1999, Jing2000,
Bullock2001, Eke2001, Zhao2003,Neto2007, Maccio2007, Duffy2008, Gao2008,
Dutton2014} has shown that the concentration and the mass of relaxed
halos are anticorrelated (at $z=0$), and follow a power law of the
form
\begin{equation}
c_{200} = A\left(\frac{M_{200}}{ 10^{14} h^{-1} \msun}\right)^B,
\label{eq:massConcentration}
\end{equation}
where $A\approx5$ and $B\approx-0.1$. The best-fit values of these
parameters are sensitive to the cosmological parameters, particularly
to the values of $\sigma_8$ and $\Omega_{\rm{m}}$
\citep[e.g.][]{Duffy2008,Dutton2014}. The value of $c_{200}$ at
redshift zero is linked to the background density of the Universe at
the time of formation of the halo \citep{Navarro1997, Ludlow2013}
which is affected by $\sigma_8$ and $\Omega_{\rm{m}}$. Higher values
of these parameters lead to earlier halo formation times at a given
mass and therefore higher concentrations. The concentrations of
individual halos of a given mass scatter about the median value with
an approximately log-normal distribution \citep{Jing2000, Neto2007}.
The amplitude of this scatter decreases with halo mass \citep{Neto2007}.
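Once $r_{\rm s}$ has been fitted, the concentration and the power-law
relation are straightforward to evaluate (Python sketch; the parameter
values are those obtained below for the \dmonly stacks, and $h=0.6777$
is an assumed Planck-like value):
\begin{verbatim}
def c200_from_fit(r200, r_s):
    # For an NFW profile r_conc = r_s (Eqn. defconc),
    # hence c200 = R200 / r_s.
    return r200 / r_s

def c200_of_mass(m200_msun, A=5.22, B=-0.099, h=0.6777):
    # Power-law mass-concentration relation (Eqn. massConcentration);
    # the pivot mass is 1e14 Msun/h, so convert from Msun.
    return A * (m200_msun / (1e14 / h))**B
\end{verbatim}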
\begin{figure}
\includegraphics
[width=\columnwidth]{Figures/MassConcentrationNFWRelaxed_merged_z0p0}
\caption{Halo concentration, $c_{200}$, as a function of mass
$M_{200}$. The top panel shows the \dmonly simulation fit with the
canonical NFW profile over the range $[0.05-1]R_{\rm vir}$. The
middle panel shows the same fit applied to the total matter density
profiles of the \eagle halos. The bottom panel shows the same fit
to just the dark matter in the \eagle halos. The faint coloured
points in each panel are the values for individual halos and the
black circles the values for the stacked profiles in each mass
bin. Halos and stacks with $M_{200}<6\times10^{10}\msun$ are taken from the
L025N0752 simulation whilst the higher mass objects have been extracted from
the L100N1504 simulation.
The solid black line is the best-fit power law
(Eqn.~\ref{eq:massConcentration}) to the solid black circles. The
best-fit parameters are shown in each panel. The best-fit power law
to the \dmonly halos is repeated in the other panels as a dashed
line. The red dashed line on the first panel is the best-fit
relation from \citet{Dutton2014}.}
\label{fig:concentration}
\end{figure}
While Eqn.~\ref{eq:defconc} implicitly defines $r_{\rm conc}$, it is
impractical to apply a differential measure of the density to
determine the concentrations of individual halos, even in simulations,
because the density profiles are noisy and sensitive to the presence
of substructures. In practice, the concentration is determined by
fitting the spherically averaged density profile over a range of radii
encompassing $r_{\rm{s}}$ with a model. This approach only works if
the model provides a good description of the true halo profile over
the fitted range. We have shown in Section~\ref{ssec:profiles_fit}
that the density profiles of halos in both the \eagle and DMO
simulations are well described by an NFW profile over the range
$[0.05-1]R_{\rm vir}$, so we fit an NFW model over this range.
Fig.~\ref{fig:concentration} shows the NFW concentration of relaxed
halos as a function of halo mass for the \dmonly and \eagle
simulations. The top panel shows the \dmonly simulation. The black
line is the best fit power law of Eqn.~\ref{eq:massConcentration} to
the solid black circles (corresponding to the stacks containing at
least five halos) using Poissonian errors for each bin. We have
verified that fitting individual halos (faint green circles in the
same figure) returns essentially the same values of $A$ and
$B$. Table~\ref{tab:massConcentration} lists the best-fitting values
of these parameters. It is worth mentioning that the best-fitting
power laws fit the halo stacks in the simulations equally well.
\begin{table}
\caption{Best fitting parameters and their $1\sigma$ uncertainty for the
mass-concentration relation (Eqn.~\ref{eq:massConcentration}) of
the stacks of relaxed halos. The values correspond to those shown in
the legends in Fig.~\ref{fig:concentration}. From top to bottom: NFW fit to the
\dmonly halos, NFW fit to the total mass of the \eagle halos, and NFW
fit to the dark matter component of the \eagle halos. All profiles were fit over
the radial range $[0.05-1]R_{\rm vir}$. The uncertainties are
taken to be the square roots of the diagonal elements of the covariance
matrix of the least-squares fitting procedure.}
\label{tab:massConcentration}
\begin{center}
\begin{tabular}{|l|c|c|}
Fit & $A$ & $B$\\
\hline
$c_{200, \rm{DMO}}$ & $ 5.22\pm0.10 $ & $ -0.099\pm0.003 $ \\
$c_{200, \rm{tot}, \rm{NFW}}$ & $5.28\pm0.33$ & $-0.087\pm0.009 $ \\
$c_{200, \rm{DM}, \rm{NFW}}$ & $5.70\pm0.24 $ & $-0.074\pm0.006$ \\
\end{tabular}
\end{center}
\end{table}
The mass-concentration relation of \cite{Dutton2014} is shown as a red
dashed line in the top panel of Fig.~\ref{fig:concentration}. This
fit is based on a series of \dmonly cosmological simulations of a
\lcdm model very similar to ours with the cosmological parameters
values taken from the \cite{Planck2013} data. Using several
volumes at different resolutions, they were able to determine the
concentration-mass relation over the range $10^{10}\msun < M_{200} <
1.5\times10^{15}\msun$ at $z=0$. Fitting an NFW model to estimate the
concentration, as we do here, they obtained
\begin{equation}
c_{200} = 5.05 \left(\frac{M_{200}}{ 10^{14}h^{-1}\msun}\right)^{-0.101},\nonumber
\end{equation}
which agrees well with our results.
Not unexpectedly, given the sensitivity of the concentration to
changes in the cosmological parameters, the values for the fit we
obtain for the \dmonly simulation are significantly different from
those reported by \cite{Neto2007}, \cite{Maccio2007} and
\cite{Duffy2008}. Compared to the latter, the slope ($B$) is
steeper and the normalisation ($A$) is higher. This change can be
attributed mainly to changes in the adopted cosmological
parameters $(\sigma_8,\Omega_{\rm{m}})$ which were $(0.796,0.258)$ in
\cite{Duffy2008} and $(0.8288,0.307)$ here.
The second panel of Fig.~\ref{fig:concentration} shows the
concentrations for the total matter density profiles of the \eagle
simulation obtained using the same fitting procedure. The best-fitting
parameters for the mass-concentration relation are given in the
second line of Table~\ref{tab:massConcentration}. Both the amplitude
and slope are consistent with the values for the \dmonly
simulation. As discussed in Section \ref{ssec:HaloMass}, matched halos
in the \dmonly and \eagle simulations have, on average, a lower mass
in the \eagle simulation. For the smallest halos, the average ratio is as low
as $0.72$. Because of this shift in mass, some difference in the
concentration-mass relation might be expected between the two
simulations but, since the value of the slope is small and
$0.72^{-0.1} \simeq 1.03$, the effect on the amplitude is also
small. A consequence of the shift in $M_{200}$ is that the ratio of
$R_{200}$ values for matched halos is
$R_{200}^{\rm{EAGLE}}/R_{200}^{\rm{DMO}} \simeq0.9$. In
Fig.~\ref{fig:scale_ratios} we show that the mean ratio of
$r_{\rm{s}}^{\rm{EAGLE}}/r_{\rm{s}}^{\rm{DMO}}$ for matched relaxed
halos is also slightly below unity, so the net effect of those two
shifts is that the concentrations are very similar in both
simulations.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/scaleLengthRatio_100_z0p0}
\caption{Ratio of NFW scale radii, $r_{\rm{s}}$, in matched relaxed
halos in the \dmonly and \eagle simulations. The black points are
placed at the geometric mean of the ratios in each mass bin.}
\label{fig:scale_ratios}
\end{figure}
Finally, the bottom panel of Fig.~\ref{fig:concentration} shows the
concentration of the DM only component of \eagle halos. We
fit an NFW profile in the same way as for the total matter profiles in
the panels above. As would be expected from the analysis of
Fig.~\ref{fig:profiles} and the fact that the outer parts of the
dark halos are well described by the NFW profile, the same trend
with mass can be seen as for the \dmonly simulation. The best-fitting
power law to the mass-concentration relation is given at the bottom of
Table~\ref{tab:massConcentration}. The values of the parameters are
again close to the ones obtained for both the \eagle and the \dmonly
simulations.
We stress that the agreement between the \eagle and \dmonly simulations breaks
down if we include radii smaller than $0.05R_{\rm{vir}}$ in the fit. Hence, the
mass-concentration relation given for \eagle in Table~\ref{tab:massConcentration}
should only be used to infer the density profiles beyond $0.05R_{\rm{vir}}$.
\subsection{Best-fit parameter values for the new density profile}
\label{ssec:fit_parameters}
We showed in Section~\ref{ssec:profiles_fit} that the density profiles
of halos in the \eagle simulation are not well fit by an NFW profile
in the inner regions, and we proposed Eqn.~\ref{eq:densityProfile} as
a new fitting formula for these profiles. This new profile has two
lengthscales, $r_{\rm{s}}$ and $r_{\rm{i}}$, where the former
describes the NFW-like outer parts of the halo, and the latter the
deviations from NFW in the inner regions. For lower-mass halos these
two lengths become similar, so both terms of the profile can contribute
significantly to the density at all radii. We can still define the
concentration of a halo in this model as $R_{200}/r_{\rm{s}}$, but we
would expect to obtain a different mass-concentration relation from
that for the dark matter-only case. Fig.~\ref{fig:new_concentration}
shows this relation for relaxed \eagle halos. The anticorrelation seen
when fitting an NFW profile is still present and we can use the same
power-law formulation to describe the mass-concentration relation of
our halo stacks. The values of the best-fit parameters, given in the
figure, differ significantly from those obtained using the NFW fits
listed in Table~\ref{tab:massConcentration}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/MassConcentrationNewRelaxed_merged_z0p0}
\caption{Halo concentration, $c_{200}$, as a function of mass,
$M_{200}$, for the total matter density profiles of the \eagle
simulation using the fitting function of
Eqn.~\ref{eq:densityProfile} and the $r_{\rm{s}}$ parameter to
define the concentration, $c_{200} = R_{200}/r_{\rm{s}}$. The
coloured points are for individual halos and the black circles for the
stacked profiles in each mass bin. The solid black line is the
best-fit power law (Eqn.~\ref{eq:massConcentration}) to the solid
black circles. The best-fit values are given in the legend at the
top right. The dashed line shows the best fitting power law to the
halos extracted from the \dmonly simulation fitted using an NFW
profile.}
\label{fig:new_concentration}
\end{figure}
We now consider the two remaining parameters of the profile described
by Eqn.~\ref{eq:densityProfile}. The inner component is characterized
by two quantities, a scale radius, $r_{\rm{i}}$, and a density
contrast, $\delta_{\rm{i}}$. We stress that this inner profile
should not be interpreted as the true underlying model of the galaxy
at the centre of the halo. It is an empirical model that describes the
deviation from NFW due to the presence of stars and some contraction
of the dark matter. The profiles have been fit with the procedure
described in Section~\ref{ssec:profiles_fit}, using all radial bins
with $r>r_{\rm{c}}$.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/MassCoreRadiusRelaxed_merged_z0p0}
\caption{The characteristic radius, $r_{\rm{i}}$, of the central
component as a function of halo mass (Eqn.~\ref{eq:densityProfile})
for halos in the \eagle simulation. The red squares correspond to
all the halos fitted individually and the overlaying black circles
to the stacked halos in each mass bin. Stacks containing fewer than
three objects are shown as open circles. The minimum Plummer-equivalent softening
length ($\epsilon=0.7~\rm{kpc}$) is indicated by the grey dashed
line at the bottom of the figure. The average value of the stacks
with more than three objects is indicated by a solid black line.}
\label{fig:coreSize}
\end{figure}
The dependence of the $r_{\rm{i}}$ scale radius on the halo mass is
shown in Fig.~\ref{fig:coreSize}. The radius $r_{\rm{i}}$ is roughly
constant over the entire halo mass range in the simulation. The
scatter is large at all masses, but there is a weak trend with mass in
the low-mass regime. This regime is, however, difficult to study as
may be seen in the first few panels of Fig.~\ref{fig:profiles}: for
the smallest halos, the effects due to baryons are small and the
profile is thus closer to NFW than for the higher-mass bins.
The empirical profile (Eqn.~\ref{eq:densityProfile}) tends towards an
NFW profile as $\delta_{\rm{i}}\rightarrow0$ or
$r_{\rm{i}}\rightarrow0$. We find that, for the smallest halos, there
is a degeneracy between these two parameters and the values of
$r_{\rm{i}}$ and $\delta_{\rm{i}}$ can be changed by an order of
magnitude (self-consistently) without yielding a significantly
different $\sigma_{\rm{fit}}$ value. This is not a failure of the
method but rather a sign that the baryonic effects on the profile
shape become negligible for the lowest-mass halos, at least for the
range of radii resolved in this study.
Rather than working with the $\delta_{\rm{i}}$ and $r_{\rm{i}}$
parameters, we can combine them into a single parameter that reflects
the additional mass contained in the central parts of the halo over
and above that from the NFW component. Integrating the inner
profile up to $r_{\rm{i}}$, we can obtain an estimate of this
additional mass which we define as:
\begin{equation}
M_{\rm{i}} = (2\pi\ln2)\rhocr r_{\rm{i}}^3\delta_{\rm{i}} \approx 4.355\rhocr r_{\rm{i}}^3\delta_{\rm{i}}
\label{eq:M_i}.
\end{equation}
If $r_{\rm{i}}$ were really constant, then $M_{\rm{i}}$ would simply
be a proxy for $\delta_{\rm{i}}$.
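For completeness, the normalisation in Eqn.~\ref{eq:M_i} follows from
a direct integration. Writing the inner term of
Eqn.~\ref{eq:densityProfile} explicitly as
$\rho_{\rm{i}}(r)=\delta_{\rm{i}}\rhocr\,
[(r/r_{\rm{i}})(1+(r/r_{\rm{i}})^2)]^{-1}$ (the NFW-like form with a
sharper bend that is consistent with the quoted coefficient), and
substituting $x\equiv r/r_{\rm{i}}$, the mass enclosed within
$r_{\rm{i}}$ is
\begin{equation}
M_{\rm{i}} = 4\pi\delta_{\rm{i}}\rhocr r_{\rm{i}}^3
\int_0^1\frac{x\,{\rm d}x}{1+x^2}
= (2\pi\ln2)\,\rhocr r_{\rm{i}}^3\delta_{\rm{i}},\nonumber
\end{equation}
since $\int_0^1 x\,(1+x^2)^{-1}\,{\rm d}x = \frac{1}{2}\ln2$ and
$2\pi\ln2\approx4.355$.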
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/MassCoreMassRelaxed_merged_z0p0}
\caption{The mass, $M_{\rm{i}}$, defined in Eqn.~\ref{eq:M_i}, as a
function of halo mass, $M_{200}$. The red squares correspond to the
individual halos and the overlaying black circles to the stacked
profiles. The green solid line is the stellar mass - halo mass
relation from the \eagle simulation \citep{Schaye2014}.}
\label{fig:coreMass}
\end{figure}
The mass, $M_{\rm{i}}$, is shown in Fig.~\ref{fig:coreMass} as a
function of the halo mass, $M_{200}$. The black points corresponding
to the stacked profiles lie in the middle of the relation for
individual halos. The mass, $M_{\rm{i}}$, increases with halo
mass. For halos with $M_{200}\lesssim10^{12}\msun$, the fraction,
$M_{\rm{i}}/M_{200}$, increases with $M_{200}$ highlighting that the
effect of the baryons is more important for the bigger halos. This
could have been expected by a careful inspection of
Fig.~\ref{fig:stellarFractionCentre}, which shows that the central
stellar and baryonic fractions peak at
$M_{200}\approx10^{12}\msun$. For larger halos, the
$M_{200}$-$M_{\rm{i}}$ relation flattens, reflecting the decrease in
stellar fractions seen at the centre of the largest \eagle halos.
To confirm this conjecture, we plot the stellar mass - halo mass
relation for the \eagle simulation as a solid green line in the same
figure \citep{Schaye2014}\footnote{Note that the \eagle simulation
reproduces abundance matching results \citep{Schaye2014}.}.
Neglecting the two highest mass bins (open circles), the similarity
between this relation and our somewhat arbitrary definition of
$M_{\rm{i}}$ seems to indicate that the stellar mass of the halos is
related to this parameter. The definition of the mass, $M_{\rm{i}}$,
could also be modified to take into account other properties of the
galaxy in the halo. We could, for instance, include the galaxy size
(half-stellar mass radius or half-light radius, for example) instead
of $r_{\rm{i}}$ in the definition of $M_{\rm{i}}$. It would then be
interesting to see how this newly defined mass correlates with the
galaxy's stellar mass.
\subsection{A non-parametric estimate of the concentration}
\label{ssec:enc_ratio}
The definition of concentration assumes that the halos are well fit by
an NFW (or other) profile. This is the case for our sample of halos
down to radii $\sim0.05R_{\rm vir}$ and, since $r_{\rm{s}} > 0.05 R_{\rm vir}$
for almost all cases of interest, we can safely compute the
concentration of these halos. It is nevertheless worthwhile measuring
a proxy for the concentration which does not rely on a specific
parametrization of the profile. This is also more convenient for
observational purposes, where a large range of radii is not always
available to perform a fit. A simpler estimator of the concentration
can then help constrain the models without making assumptions about
the exact shape of the profile. This is particularly true for X-ray
observations because it is difficult to detect X-ray emission all the
way to the virial radius.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/nonParam_Relaxed_100_z0p0}
\caption{The average ratio of the $R_{500}$ and $R_{2500}$ radii as a
function of halo mass, $M_{500}$, for both the \eagle (red squares)
and \dmonly (green circles) simulations. The error bars represent the
$1\sigma$ scatter in the population. To ease the reading of the
plot, the points with error bars have been artificially displaced by
$0.02\rm{dex}$ towards the left and right for the \eagle and \dmonly
results respectively. The black dashed line shows the expected
relation for an NFW profile with the concentration-mass relation
determined for the \eagle simulation in
Section~\ref{ssec:concentrations}.}
\label{fig:nonParametric}
\end{figure}
Such an estimator is given by the ratio of spherical over-density
radii $R_{500}/R_{2500}$ \citep[e.g.][]{Duffy2010}. Both of these
quantities can be obtained without assuming anything about the slope
and functional form of the matter density profile. We show the value
of this ratio as a function of the spherical enclosed mass, $M_{500}$,
in Fig.~\ref{fig:nonParametric}. The \eagle and \dmonly simulations
show the same trends and the differences between them are smaller than
the scatter between individual halos. As could already be seen from
the profiles in Figs.~\ref{fig:profilesComponent} and
\ref{fig:profiles}, the effect of modelling the baryons is
concentrated at small radii, well within $R_{2500}$.
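For reference, the expected NFW value of this ratio at fixed
concentration (the dashed line in Fig.~\ref{fig:nonParametric}) can
be obtained by a short numerical inversion of the enclosed-mass
profile; the sketch below is illustrative and assumes only the
standard NFW form.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def m_nfw(x):
    # dimensionless NFW enclosed mass, x = r / r_s
    return np.log(1.0 + x) - x / (1.0 + x)

def x_delta(delta, c200):
    # radius (in units of r_s) where the mean enclosed density equals
    # delta * rho_cr, given that it equals 200 rho_cr at x = c200
    target = (delta / 200.0) * m_nfw(c200) / c200**3
    return brentq(lambda x: m_nfw(x) / x**3 - target, 1e-3, c200)

def r500_over_r2500(c200):
    return x_delta(500.0, c200) / x_delta(2500.0, c200)

print(r500_over_r2500(5.0))  # ~2.2 for a typical concentration
\end{verbatim}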
\subsection{Limitations of the subgrid model}
\label{ssec:missing_physics}
The convergence test in subsection \ref{ssec:resolution_test}
demonstrated that the simulation results of interest here are
converged at radii, $r>r_{\rm c}$ (given by a modified version of the
criterion proposed by \citealt{Power2003}) and that even at smaller
radii the profiles of stacked halos remain remarkably similar when the
resolution is increased by a factor of $8$ in particle mass. A halo of
mass $M_{200}\approx10^{11}\msun$ is then resolved with
$\mathcal{O}(10^5)$ particles and its stellar disk with
$\mathcal{O}(10^3)$ particles, which is enough to sample star
formation histories with good accuracy and obtain a realistic galaxy
population \citep{Schaye2014, Furlong2014, Crain2014}.
An interesting aspect of our simulations is that no halos (of any
mass) develop density cores in their central regions within the
resolved radial range. By contrast, simulations of dwarf and even
larger galaxies by a number of authors produce such cores \citep[see
references in Sec.~\ref{ssec:density_profiles} and] [for a
review]{PontzenGovernato2014}. As shown by \cite{Navarro1996b}, a
physical mechanism that can produce a flattening of the inner dark
matter density profile is the sudden removal, in a starburst, of gas
that had previously contracted enough to become self-gravitating,
dominate the central gravitational potential and slowly drag dark
matter in. The subsequent loss of binding energy from the central
regions by the removal of that gas on a timescale shorter than the
local dynamical time causes the dark matter to flow outwards resulting
in a flattening of the profile to a degree that depends on the size
and mass of the self-gravitating gas component. A variant of this
process is apparently responsible for the formation of cores in
simulations of dwarf galaxies \citep[e.g.][]{Governato2010} and in
simulations of galaxy clusters (where the source of energy is an AGN;
\citealt{Martizzi2013}).
An important aspect of the simulations by \cite{Governato2010} is that
the assumed subgrid model adopts a higher density threshold for star
formation ($10-100~m_{\rm H}\cdot\rm{cm}^{-3}$) than we have assumed
in \eagle (a metallicity-dependent threshold with a value of
$0.031~m_{\rm H}\cdot\rm{cm}^{-3}$ at solar metallicity that traces
the density above which a cold, molecular gas phase is expected to be
present; see \cite{Schaye2004,Schaye2014})\footnote{A significant
number of stars in \eagle, however, form from gas at much higher
densities than the threshold; see \cite{Crain2014}.}. Although even
the high value assumed by \cite{Governato2010} is many orders of
magnitude below the gas density in the star-forming cores of molecular
clouds, it probably allows a substantial amount of gas to dominate the
potential in the central regions prior to a starburst, as required for
the \cite{Navarro1996b} mechanism to operate\footnote{It is unclear
whether cold, dense star-forming clouds in a multiphase interstellar
medium \citep{MckeeOstriker1977} would contain enough mass to
dominate the central potential of the halo.}.
It is not obvious {\em a priori} which, if any, of the subgrid models
for star formation used to date is more appropriate, but an important
virtue of the \eagle subgrid model is that it leads to a population of
galaxies with properties that agree well with a large set of
observations, from the regime of dwarf galaxies to the regime of
galaxy clusters \citep{Schaye2014, Furlong2014, Crain2014, Sawala2014,
Schaller2014b}. None of the simulations that produce cores in the
dark matter has yet been able to demonstrate such success. Indeed,
other large cosmological simulations with different subgrid models to
\eagle such as ``Illustris'' do not appear to have produced density
cores either \citep{Vogelsberger2014}. In any event, the evidence for
the existence of cores in real galaxies is still a matter of lively
debate, with some authors reporting cores \citep[e.g.][]{Salucci2000,
Swaters2003, Simon2005, Gentile2007, deBlok2008, Kuzio2008,
Oh2011a}, others reporting cusps \citep[even for some of the same
galaxies, e.g.][]{Adams2014}, and others arguing that current
kinematical data cannot distinguish cores from cusps (at least in the
case of satellites of the Milky Way for which kinematical studies on
resolved stellar populations are possible; \citealt{Strigari2010,
Strigari2014}).
Finally, we stress that the conclusions in this paper refer only to
radii larger than $r>r_{\rm c} \approx 1.8~\rm{kpc}$. Higher
resolution simulations would be required to test whether our subgrid
model can generate density cores on smaller scales than those resolved
in the present study.
\section{Conclusions}
\label{sec:conclusion}
The aim of this study was to characterize the mass density profiles of
dark matter halos in a cosmological \lcdm simulation, which includes
dark matter and baryons, and in which the resulting galaxy population
has realistic stellar masses and sizes; we also quantified the
differences with halos in a dark matter-only simulation. We used the
state-of-the-art \eagle simulation from which we selected halos above
$10^{9}\msun$ to study changes in the mass, and above $10^{11}\msun$
to study changes in the internal structure. Our results can be
summarized as follows:
\begin{enumerate}
\item The mass, $M_{200}$, of halos is reduced by the inclusion of
baryons and associated energy feedback effects. At the low mass
end, feedback from star formation expels gas and this reduces the total mass,
radius and growth factor of the halo; the reduction in mass can be as
large as $30\%$ for halos with $M_{200}\lesssim10^{11}\msun$. This
reduction is progressively smaller for larger halos as the source of
feedback shifts from star formation to AGN. In the \eagle simulation
there is virtually no effect for masses $M_{200}
\gtrsim10^{14}\msun$, but the exact value of the mass at which this
happens could be larger if, as suggested by \cite{Schaye2014}, more
effective AGN feedback is necessary than is present in \eagle. The reduction in
mass can be described by the double-sigmoid function of
Eqn.~\ref{eq:meFit}, and the scatter around the mean by the formula
of Eqn.~\ref{eq:Myfitscatter}.
\item The circular velocity curves of the \eagle halos are in
excellent agreement with observational data for galaxies with
stellar mass ranging from $10^9\msun$ to $5\times10^{11}\msun$
(corresponding to halo masses in the range $10^{11}\lesssim
M_{200}/\msun\lesssim 10^{13}$).
\item The radial density profiles of \eagle halos over the radial range
$[0.05R_{\rm{vir}},R_{\rm{vir}}]$ are very similar to the profiles of
halos in dark matter-only simulations and are well described by the
NFW formula. Halo concentrations estimated by fitting NFW profiles
over this range are also very similar to the dark matter-only case.
\item The central regions of halos more massive than $M_{200}
\gtrsim10^{12}\msun$, on the other hand, are dominated by the
stellar component. The presence of these baryons causes a
contraction of the halo, enhancing the density of dark matter in
this region. The variation in profile shape is greatest for halos in
the mass range $M_{200}=10^{12}\msun - 10^{13}\msun$ where the
stellar mass fraction peaks (as does the total baryonic mass
fraction within $0.05R_{\rm{vir}}$).
\item The radial density profiles of the \eagle halos can therefore be
well fit (over the radial range resolved in the simulation) by the
sum of an NFW profile, which describes the outer, dark
matter-dominated regions, and an NFW-like profile with a sharper
bend, which describes the combined effect of the presence of stars
and the contraction of the dark matter halo
(Eqn.~\ref{eq:densityProfile}). Of the two additional parameters
required in this fit, one, $r_{\rm{i}}$, is approximately constant
with halo mass, while the other one, the characteristic inner mass
scale, $M_{\rm{i}}$, scales with halo mass in a similar way to the
stellar mass of the central galaxy.
\end{enumerate}
The way in which galaxy formation affects the host halos is a problem
that can only be reliably addressed with simulations of the kind we
have described here. However, it is clear that the nature of these
effects is sensitive to the way in which the baryon physics are
implemented, particularly to the subgrid models for feedback from star
formation and AGN. The \eagle simulations have the great advantage
that the subgrid models have been calibrated so that the simulation
reproduces the local galactic stellar mass function as well as the
distribution of galaxy sizes, and they also reproduce a wide variety
of other observations. This lends a certain degree of credibility to
our results and it would be interesting to compare them with other
simulations that assume different subgrid models but achieve similarly
good matches to observables over a large range of halo masses. A
limited comparison of this kind is carried out in Appendix~A1.
The simulations investigated here do not have enough resolution to
study dwarf galaxies for which there is much discussion regarding the
formation of central cores in the dark matter density distribution
\citep[for a review see][]{PontzenGovernato2014}. However, the related
high resolution simulations of the Local Group by \cite{Sawala2014},
which use essentially the same subgrid models as \eagle, do resolve
dwarfs. The behaviour of these smaller halos simply continues to
smaller masses the trends seen here: the halos become increasingly
dark matter-dominated and remain well described by the NFW profile.
\section*{Acknowledgements}
We are grateful to Lydia Heck and Peter Draper without whose technical expertise and
support this work would have not been possible. RAC is a Royal Society University Research Fellow.
This work was supported in part by an STFC Consolidated grant to Durham University
and by the European Research Council through ERC Grants Cosmiway (GA
267291), GasAroundGalaxies (GA 278594) and Cosmocomp (GA 238356) and
also the Inter-university Attraction Poles Programme initiated by the
Belgian Science Policy Office ([AP P7/08 CHARM]). This work was also
sponsored by the Dutch National Computing Facilities Foundation (NCF),
with financial support from the Netherlands Organization for
Scientific Research (NWO). The \eagle simulations used the DiRAC Data
Centric system at Durham University, operated by the Institute for
Computational Cosmology on behalf of the STFC DiRAC HPC Facility
(www.dirac.ac.uk). This equipment was funded by BIS National
E-infrastructure capital grant ST/K00042X/1, STFC capital grant
ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham
University. DiRAC is part of the National E-Infrastructure. We
acknowledge PRACE for resources on the Curie supercomputer in France.
\bibliographystyle{mn2e}
|
1,941,325,220,407 | arxiv | \section{Introduction}
Galaxy merger research has shown how fundamental merging is to galaxy evolution, with historical merger rates generally increasing with galaxy mass \citep{bundy2009greater, schawinski2010role, l2012mass, pillepich2018first}. Distant galaxies (z$\approx$2) are often quoted as being a factor of 2-5 times smaller than those found locally \citep{daddi2005passively,van2008confirmation, saracco2009population}. As such, it is widely assumed that a large amount of mass assembly after z$\approx$2 is a result of hierarchical growth through galaxy mergers and accretion, an assumption widely corroborated by galaxy evolution models. Not only does merger history impact on almost all other aspects of galaxy evolution, but many galaxies have experienced large mergers throughout their history, with around 50\% of galaxies experiencing a major merger \citep{maller2006galaxy}, and essentially all surviving galaxies experiencing minor mergers, with frequency increasing with merger mass-ratio \citep{lotz2011major}. The exceptions to this are some rare pristine galaxy types ($\lesssim$ 0.1\% of galaxies according to \citealt{quilis2013expected}) which have likely experienced no outside interaction or accretion events \citep{trujillo2013ngc}.
Modelling is an excellent way to delve into the mechanics and subsequent effects of galaxy mergers. Using simulations, the ex-situ mass fraction of accreted galaxies has been explored in depth \citep{pillepich2015building, qu2017chronicle, davison2020eagle}. This is useful for defining expected merger rates which can then be compared with observations. A challenging aspect of observational astronomy is demonstrating the merger history of observed nearby galaxies to verify these models, particularly if potential mergers occurred several Gyr ago.
Integral Field Spectroscopy has proven particularly useful in exploring galaxy kinematics and populations. Integral Field Units (IFUs) have provided spatially resolved maps of galaxies which can be used to diagnose population differences and kinematic effects as a result of mergers. This has been shown to be effective in numerous observational cases \citep[see e.g.][]{guerou2016, faifer2017, Ge2019}.
The impact of mergers and merger history on galaxy evolution is an important aspect to understand. For one thing, mergers are known to drive gas towards the galaxy centre \citep{mihos1995gasdynamics}, causing AGN activity and black hole growth, which in turn can shut down or suppress star formation in the galaxy at large \citep{cales2015post, choi2015impact}. On the other hand, mergers can cause sudden and significant bursts of star formation due to the disruption of previously unperturbed gas kinematics \citep{di2008frequency, ellison2013galaxy, moreno2015mapping, capelo2015growth}. Disruption in the gas kinematics of galaxies can leave key fingerprints in identification of merger events. One of the most readily identifiable features of a recent or ongoing merger is counter rotating components, with up to 40\% of S0 galaxies displaying signatures of counter-rotation \citep{rubin1994multi, davis2011atlas3d, coccato2015properties, bassett2017formation}. Galaxy-galaxy mergers of the right combination can change the very morphological type of a galaxy. As such, mergers hold the power to define entire galaxy futures.
The S01-pec galaxy NGC\,7135 (AM 2146–350, IC 5136) in the constellation of Piscis Austrinus is a merger remnant galaxy \citep{Keel1985} that is likely en route to forming an S0 galaxy. It currently displays several immediately striking visual features including an extended tail, shell features, and curved structure (Figure \ref{phot}) based on photometry from the Carnegie-Irvine Galaxy Survey \citep{ho2011carnegie}.
NGC\,7135 was first described as having `a curious jet and shell' in \cite{malin1983catalog}, with the `jet' later shown to be a tail in \cite{2003MNRAS.343..819R}. The shell structures of the galaxy were found to be particularly clear in UV \citep{rampazzo2007, marino2011nearby}, with FUV gas structure further linked to an accretion event that also likely formed the shells. \cite{ueda2014cold} found CO emitting gas that was unassociated with the nucleus, along with 3 mm continuum associated with the nucleus. Despite speculation, NGC\,7135 was determined to have no active nucleus as shown in \cite{zaw2009galaxies} through optical spectral analysis. Analysis in \cite{1985keel} identifies NGC\,7135 as a merger galaxy, and in \cite{2003MNRAS.343..819R} NGC\,7135 is shown to possess an elongated, asymmetric gas structure relative to the stellar material.
The local environment of NGC\,7135 is described by \cite{samir2016fundamental} as being `low density', with the `low density' classification \citep{annibali2010nearby} following from the richness parameter $\rho_{xyz}$=0.32 gal Mpc$^{-3}$ \citep{tully1988nearby}. Early type galaxies in low density environments are known to possess on average younger populations ($\sim$\,2Gyr younger) than similar galaxies in higher density environments \citep{thomas2003stellar}, a likely result of more recent mergers and star formation.
In this work we present new observations of the galaxy NGC\,7135, recently obtained with MUSE. We aim to show that NGC\,7135 is currently undergoing a major merger, with a history of older mergers underlying in the galaxy populations. The paper is presented as follows: In Section 2 we describe the motivation behind the observations, as well as the data reduction and limitations. In Section 3 we describe our methodology, including the use of regularisation during spectral fitting. In Section 4 we present the resultant maps of stellar populations and kinematics, as well as gas properties similarly derived, including rotation differences between the two components. In Section 5 we discuss the implications of the results and finally in Section 6 we provide a summary and concluding remarks.
\section{Observations and data reduction}
We observed NGC\,7135 with the Multi Unit Spectroscopic Explorer \citep[MUSE,][]{bacon2010MUSE,bacon2014MUSE} at the Very Large Telescope (VLT) as part of the Snapshot Optical Spectroscopic Imaging of Mergers and Pairs for Legacy Exploration (SOSIMPLE) survey (Program ID: 0103.A-0637(A), PI: B.~Husemann). The aim of the SOSIMPLE survey is to provide complementary IFU observations for an ongoing Hubble filler gap snapshot imaging program (Program ID: 15446, PI: J.~Dalcanton). HST imaging of NGC\,7135 has not yet been taken due to the filler nature of the HST program; these MUSE observations thus act as a first look at the data, to which HST data can be compared at a later date. Combining IFU spectroscopy with a large set of high-quality ancillary data will hopefully provide observational and theoretical insights into the evolution of merging systems.
The MUSE observations were conducted on 6 July 2019 during dark sky conditions and split into 3$\times$560\,s dithered pointings along with a 300\,s dedicated blank sky field exposure for background subtraction of this extended galaxy. Rotations of 90\degr\ were applied between exposures, covering approximately 3.4 arcmin$^2$ in total, as shown in Figure \ref{phot}. The seeing during the observations remained at $\sim$1\arcsec, and the sky was covered with thin clouds during strong wind conditions from the North-West.
The data were reduced with the standard ESO pipeline \citep{weilbacher2020pipeline} which performs detector calibrations, flat-fielding, wavelength calibration, flux calibration as well as sky subtraction, exposure alignment, and cube reconstruction of the combined exposures. We performed an additional correction for residual sky lines using a simple PCA algorithm. The MUSE pixel scale is 0.2 arcsec pixel$^{-1}$, with a mean spectral resolution of $\sim$2.5\AA, though this can vary across the wavelength range (see figure 5 of \citealt{husser2016muse}). The resulting mean Signal-to-Noise (SN) ratio of the spaxels in the MUSE image within a wavelength range of 4759--6849\,\AA\ (restricted from the full 4759--9300\,\AA\ range) is 9.5, with a maximum spaxel SN of 131.
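The idea behind the residual sky-line correction is outlined below as a generic sketch using scikit-learn; it is not the actual implementation used, and the array names and the number of components are illustrative.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def subtract_sky_residuals(spectra, sky_mask, n_components=8):
    # spectra: (n_spaxels, n_wave); sky_mask selects (nearly) empty
    # spaxels dominated by sky-line residuals
    pca = PCA(n_components=n_components)
    pca.fit(spectra[sky_mask])       # learn residual eigenspectra
    comps = pca.components_          # orthonormal, (n_comp, n_wave)
    # project every spectrum onto the eigenspectra and remove them;
    # in practice the galaxy signal should be masked first so that
    # only the sky residuals are projected out
    amplitudes = spectra @ comps.T
    return spectra - amplitudes @ comps
\end{verbatim}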
\section{Methodology}\label{method}
Spaxels were Voronoi binned to a minimum SN of 50 per \AA, thereby making poor-signal regions available for analysis whilst higher SN spaxels remained unbinned. This optimally allowed for spatial investigation of spectral properties, without losing valuable high-resolution data at high SN locations.
The wavelength range was restricted to 4759--6849\,\AA\ for all spaxels to ensure the strongest Balmer lines were included, and to exclude noisier sky-dominated regions at redder wavelengths. All spectra of spaxels within a bin were summed into a single spectrum representing the area covered by the bin. An area containing a foreground star was masked from analysis in the West of the image (see Figure \ref{phot}).
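For reference, a minimal sketch of this binning and summing step is given below, based on the public vorbin implementation of the Voronoi binning method of Cappellari \& Copin (2003); the input arrays are illustrative placeholders rather than the exact variables of our pipeline.
\begin{verbatim}
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# x, y: spaxel coordinates; signal, noise: per-spaxel mean flux and
# error over the fitted wavelength range (all inputs illustrative)
bin_num, _, _, x_bar, y_bar, sn, n_pix, _ = voronoi_2d_binning(
    x, y, signal, noise, 50, plot=False, quiet=True)

# sum the spectra of all spaxels belonging to the same bin
binned = np.zeros((bin_num.max() + 1, cube.shape[1]))
np.add.at(binned, bin_num, cube)  # cube: (n_spaxels, n_wave)
\end{verbatim}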
To analyse the spectra from the binned NGC\,7135 data we utilised the Penalized PiXel-Fitting (pPXF) method, described in \cite{cappellari2004intro} and upgraded in \cite{cappellari2017upgrade}. With this method, single-age single-metallicity stellar population (SSP) models are fit to spectra to build a map of stellar populations across age and metallicity space. By identifying the combination of SSP models that approximate a given spectrum, the estimated constituent populations are extracted, as well as velocity and dispersion. Stellar models are weighted as per the estimated fraction of the population present in the galaxy. As a result, output weights of stellar models indicate the fractions of specific stellar populations present in the spectrum. The output model of combined spectra is made more physical by the use of template regularisation (see e.g. section 3.5 of \citealt{cappellari2017upgrade}), the methodology of which is explained in detail below. Standard pPXF cleaning algorithms were included to mask emission lines where necessary.
A total of 552 MILES SSP models \citep{vazdekis2010evolutionary} were used to fit the galaxy spectra. These models assume a Kroupa revised initial mass function (log slope of 1.3, M$_{max}$=100M$_{\odot}$) and BaSTI isochrones, with a metallicity range of -2.27 to +0.4 [M/H] in 12 non-linear steps, and an age range of 0.1 to 14.0\,Gyr in 46 non-linear steps \citep{kroupa2001variation, cassisi2005basti,pietrinferni2006large,falcon2011updated,vazdekis2012miuscat}.
Application of regularisation allows smoothing over stellar model weights to reproduce a population map consistent with physical results. The weighted templates that have been combined to produce a target spectrum will often be unphysically localised to only the strongest of possible solutions, with many other valid solutions being overlooked, despite their physicality. To produce more representative distributions, regularisation seeks to smooth the solutions to a physical state. The challenge is to smooth the template weights to a solution that most accurately represents observed conditions, whilst not overlooking genuine fluctuations and details present in the model-fit. The regularisation parameter controls the strength of the smoothing and is deduced through a robust iterative approach for each spectrum individually. The regularisation parameter is derived such that it corresponds to the maximum value consistent with observations. Thus the derived star formation history will be the smoothest that is consistent with the observations. This has been shown in literature to be an accurate and useful method of galaxy population extraction \cite[see e.g.][]{comeron2015, norris2015extended, guerou2016, faifer2017, Ge2019, boecker2020recovering}.
In this work an iterative routine is applied to extract the optimal regularisation parameter. For the best possible fit, the $\chi^2$ of the solution is expected to be approximately equal to the number of available voxels in the spectrum, $N$ (i.e. the number of voxels available after any masking). To obtain this optimal solution, the $\chi^2$ must be increased from the unregularised $\chi^2$ (referred to as $\chi^2_0$) by $\sqrt{2N}$.
After rescaling noise from the unregularised solution such that $\frac{\chi^2}{N}$ = 1, we make a number of primary guesses at the regularisation parameter. We find the $\Delta \chi^2$ of these initial guesses and fit a function to the input regularisation guesses and output $\Delta \chi^2$ values. By doing so we can precisely find the optimal regularisation parameter such that $\chi^2 = \chi^2_0+\sqrt{2N}$. This action is performed for every bin, resulting in optimal solutions across the entire image map.
\begin{figure}
\includegraphics[width=\linewidth]{images/NGC7135_color_MUSE3.pdf}
\caption{A colour image of NGC\,7135 showing the MUSE cube footprint. Photometry of NGC\,7135 is from the Carnegie-Irvine Galaxy Survey \citep{ho2011carnegie}. The blue border shows the boundaries of the reduced MUSE IFU data used in this study.
A green circle traces an area containing a bright foreground star that was entirely excluded from the analysis.}
\label{phot}
\end{figure}
\section{Results}
We separate the analysis of NGC\,7135 into three components; the stellar component analysis, encompassing the stellar kinematics; the gaseous component analysis, encompassing gas kinematics, emission lines and star formation aspects; and the population analysis, examining the various stellar populations and the resulting implications for the assembly history of NGC\,7135.
To examine the stellar component we utilise Voronoi binning as described in Section \ref{method}. From this we are able to examine the stellar rotation and bulk velocities, as well as mean age and metallicities spatially across the galaxy (Fig \ref{stellar_properties}). To investigate details related to the gaseous component we use regular binning to view the gas velocities and rotation, as well as the line strengths of H$\alpha$ and H$\beta$ (Fig \ref{gas_properties}). Though we see reasonable amounts of H$\alpha$ emission, there is scant evidence for significant ongoing star formation. This is explained in detail in Section \ref{gas_props}. Finally, in Section \ref{stell_pops_text} we further analyse age and metallicity distributions for sampled regions across the galaxy to diagnose assembly history and current merger status, then go on to examine underlying metal poor populations in Section \ref{acc_pop}.
\subsection{Stellar Properties}
Application of the pPXF method to the NGC\,7135 data cube provides the mean kinematic properties of each bin. Maps of the velocity and velocity dispersion of the galaxy are shown in the top panels of Figure \ref{stellar_properties}. Application of regularisation and mass-to-light ratios produces maps of the constituent stellar populations within each bin of the galaxy. From these bins we can derive mean mass-weighted stellar age and metallicity values, as demonstrated in the lower panels of Figure \ref{stellar_properties}.
\begin{figure*}
\includegraphics[width=\linewidth]{images/stellar_final2.pdf}
\caption{Voronoi map of NGC\,7135 showing 4 different stellar kinematic or mass-weighted population properties. The top left panel shows the mean velocity in km/s for each bin. The top right panel shows mean velocity dispersion within bins in km/s. The lower left panel shows the mean age of populations within the bin in Gyr. Finally the lower right panel shows mean metallicity within each bin. North is to the top of the image, and East is to the left. The stars show clear rotation in the centre. Velocity dispersion, age and metallicity all increase towards the galaxy centre. Distinct kinematics and metallicity south of the centre highlight a distinct component.}
\label{stellar_properties}
\end{figure*}
The stellar kinematic, age, and metallicity maps of NGC\,7135 reveal much about the galaxy. Stellar rotation is immediately visible. This is of key interest when comparing to gas which rotates counter to the direction of stellar rotation. This is explored in detail in Section \ref{gas_props}. One prominent kinematic feature, perhaps most clearly seen in the velocity map (top left panel) of Figure \ref{stellar_properties}, is an arc of incongruous material at higher than average velocity, stretching from the South West of the Figure to the West. The Southern end of this arc is matched in the metallicity map (lower right panel, Figure \ref{stellar_properties}) by a higher metallicity region, which is also distinct in velocity and velocity dispersion. Upon inspection, this is revealed to be an infalling galaxy currently merging onto NGC\,7135. This can be clearly seen in photometry shown in Figure \ref{var_regions}, and even more compelling evidence comes from population analysis below.
\subsection{Gas Properties}\label{gas_props}
\begin{figure*}
\includegraphics[width=\linewidth]{images/gas_final2.pdf}
\caption{Regularly binned map of NGC\,7135 showing 4 different gas kinematic and strength properties. The top left panel shows the mean velocity of gas in km/s for each bin. The top right panel shows mean velocity dispersion of gas within bins in km/s. The lower left panel shows the H$\alpha$ flux throughout NGC\,7135. The colour scale has been capped below the true maximum to better display regions of intermediate strength: the core reaches a peak strength of 36.2$\times$10$^{-16}$erg/s/cm$^2$, but the scale is limited to 2.5$\times$10$^{-16}$erg/s/cm$^2$. The lower right panel shows the H$\beta$ flux throughout NGC\,7135, with the scale similarly capped: the core reaches a peak strength of 5$\times$10$^{-16}$erg/s/cm$^2$, but the scale is limited to 2.1$\times$10$^{-16}$erg/s/cm$^2$. The gas velocity shows counter rotation compared to the stellar component, and on a slightly different axis, suggesting a merger origin.}
\label{gas_properties}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{images/gas_zoom_arrow.pdf}
\caption{Regularly binned and zoomed in map of NGC\,7135 showing 4 different gas kinematic and strength properties. The top left panel shows the mean velocity of gas in km/s for each bin. The top right panel shows mean velocity dispersion of gas within bins in km/s. The lower left panel shows the H$\alpha$ flux throughout NGC\,7135. The colour scale has been capped below the true maximum to better display regions of intermediate strength: the strongest emission near the core reaches 36.2$\times$10$^{-16}$erg/s/cm$^2$, but the scale is limited to 2.5$\times$10$^{-16}$erg/s/cm$^2$. The lower right panel shows the H$\beta$ flux throughout NGC\,7135, with the scale similarly capped: the strongest emission reaches 5$\times$10$^{-16}$erg/s/cm$^2$, but the scale is limited to 2.1$\times$10$^{-16}$erg/s/cm$^2$. In the upper left panel, arrows show the average positive rotation direction. The solid arrow indicates the average stellar component positive rotation whilst the dotted arrow shows the average gas positive rotation direction. Shaded regions show the standard deviation of vectors for both components for bins of 0.1 effective radii. In the lower left panel, contours show integrated CO(J=1–0) emission detected in ALMA observations \citep{ueda2014cold}. Contours show the 0.8, 1.0 and 1.2 Jy km s$^{-1}$ levels. There is pervasive H$\alpha$ emission with a high luminosity and high velocity dispersion component in the centre, though there is little evidence of star formation.}
\label{gas_properties_zoom}
\end{figure*}
To explore gas kinematics and distribution in NGC\,7135, regular binning was employed to avoid biases caused by the stellar light controlling Voronoi binning. Large square bins containing 64 pixels were selected across the face of the data cube, and spectra within a given bin were summed and analysed with pPXF as described in Section \ref{method}. Following this, those bins with signal-to-noise that exceeded the minimum detection threshold were re-binned to a higher resolution. This adaptive `zoom' binning gave high resolution in areas of strong H$\alpha$ emission. The zoom resolution was limited to central regions of the galaxy, where the finest detail was required.
NGC\,7135 displays localised areas of strong Balmer emission, shown in Figure \ref{gas_properties} with a cropped version showing the galaxy centre in Figure \ref{gas_properties_zoom}. As seen from all panels, the gas is asymmetric in distribution as well as in kinematics. The rotation of the gas highlights the decoupled nature of the stellar material in the core.
Gas is counter-rotating to the stellar component, strongly indicating a disrupted system. A slight deviation to the coherent gas movement is seen in the galaxy centre, giving an `S' shaped gas rotation profile. Counter rotation has long been associated with galaxy mergers \citep[see e.g.][]{bertola1988counter}. Total decoupling of gas rotation from stellar components as a result of prograde-prograde merger shocks has been shown in simulation in \cite{capelo2016shocks}, and a similar event appears to be in play here, wherein a major merger has resulted in a counter rotation of the gas component. Plausibly the counter rotation is the legacy of a previous prograde-prograde merger; this is expanded upon further in Section~\ref{stell_pops_text}. Alternatively, counter rotation could have arisen as a result of a first pass of the currently infalling galaxy.
Velocity vectorisation of the gas and stars allows us to measure the gas and stellar rotation misalignment. The gas rotation is fairly regular, with the gas rotating coherently around the centre. In the stellar component, however, matters are complicated by the velocity of the in-falling galaxy, which shifts the positive rotation vector compared to the core. If we consider only the core, the misalignment of gas and stars is 176$^{\circ}$, whereas when the entire cube is considered, the misalignment is 139$^{\circ}$. This is entirely within the realm of expected values for an interacting galaxy \citep[see e.g.][]{barrera2015tracing, bryant2019sami}. This is shown in Figure \ref{gas_properties_zoom} as solid and dashed arrows for the directions of mean positive stellar and gas rotation respectively, with associated errors shown as shaded regions.
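The position angles entering these misalignment values can be measured with standard kinematic position-angle fitting routines (e.g. the fit\_kinematic\_pa implementation of the Krajnovi\'c et al. 2006 method); the simpler dipole-moment estimator sketched below, with illustrative inputs, is sufficient to resolve the $180\degr$ ambiguity that distinguishes counter-rotation.
\begin{verbatim}
import numpy as np

def receding_side_angle(x, y, v):
    # direction (deg, 0-360) of the receding side of a velocity
    # field: for v ~ cos(phi - PA) on a symmetric spaxel grid, the
    # dipole moment (sum v*x, sum v*y) points along the kinematic PA
    return np.degrees(np.arctan2(np.sum(v * y),
                                 np.sum(v * x))) % 360.0

# x, y: bin positions about the centre; v_star, v_gas: mean
# velocities with the systemic velocity removed (illustrative)
psi = abs(receding_side_angle(x, y, v_star)
          - receding_side_angle(x, y, v_gas))
psi = min(psi, 360.0 - psi)  # fold into [0, 180] degrees
\end{verbatim}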
Regions of H$\alpha$ emission can be seen in the southern areas of the lower left panel of Figure \ref{gas_properties}. This forms a large arc with patches exhibiting particularly strong emission. These are seemingly matched by arcs in the north in an asymmetrical manner.
Considering the gas asymmetry and the increase in both gas velocity and velocity dispersion, a large amount of gas can be attributed to material stripped from the outskirts of the infalling galaxy, currently in the process of accreting onto the host galaxy. This is seen in the largest area of gas velocity dispersion outside the core, located in a compact region south of the galaxy centre. This region indicates a quantity of gas that is not associated with the cohort gas of NGC\,7135, marking where infalling gas is interacting with the galaxy interstellar medium. This area of higher than expected dispersion lies in the plane of the galaxy gas rotation, again evidence that gas is infalling, creating high velocity dispersion at the position where in-situ gas meets ex-situ gas.
A strong presence of H$\alpha$ in concentrated regions is consistent with the picture of NGC\,7135 as a galaxy that has perhaps recently undergone star formation as suggested in \cite{rampazzo2007}, though at low levels. Despite this, there is little to no evidence of strong ongoing star formation. This can be seen in the emission line diagnostic diagram in Figure \ref{bpt}. Almost all the sources of emission are associated with low-ionization nuclear emission-line regions (LINERs). Though a handful of active galactic nuclei (AGN) sources can be seen, they largely lie in the outer noisier regions of the data-cube, which makes the presence of true AGN sources doubtful, as shown in \cite{zaw2009galaxies}. This strong bias towards LINER emission is typical of merging systems with shock driven LINER emission \citep{monreal2010vlt, rich2011galaxy}.
ALMA data \citep{ueda2014cold} showing the $^{12}$CO(J=1–0) emission is overlaid in the lower left panel of Figure \ref{gas_properties_zoom}. The ALMA observations reveal a significant peak in CO emission offset from the galaxy core with an integrated molecular gas mass of $M_{\mathrm{H2}}=(5.4\pm1.4)\times10^7M_{\sun}$ adopting an $\alpha_\mathrm{CO}=4.8M_{\sun}\,\mathrm{pc}^{-2}(\mathrm{K\,km\,s}^{-1})^{-1}$ \citep{solomon1991co}. This cold gas mass would correspond to an expected SFR of only $\sim0.025M_{\sun}\,\mathrm{yr}^{-1}$ if a normal depletion time of 2\,Gyr for galaxies is assumed \citep{bigiel2011constant, leroy2013molecular}. Although there is no similarly distinct ionised gas structure observed with MUSE, there is plenty of ionised gas which may partially originate from star formation despite the LINER-like classification. The extinction-corrected H$\alpha$ flux within the central r=1$\arcsec$ is $(4\pm0.4)\times10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}$ which would correspond to $\mathrm{SFR}=0.5\pm0.05M_{\sun}\,\mathrm{yr}^{-1}$ following \citet{kennicutt1998global}. So only 5\% of the central H$\alpha$ would need to be hidden among LINER-like classified ionised gas to be in agreement with ongoing star formation. Such a low fraction of star formation would not alter the line diagnostics significantly and would remain hidden. Hence, we cannot rule out ongoing star formation based on the central cold gas mass observed by \cite{ueda2014cold}. Given the highly disturbed kinematics, the possibility that dynamical suppression of star formation is preventing cold gas collapse cannot be tested by our observations.
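The two star formation rate estimates above amount to simple order-of-magnitude arithmetic, reproduced below for transparency; the adopted distance to NGC\,7135 is an assumption of this sketch, chosen only to reproduce the quoted numbers.
\begin{verbatim}
import numpy as np

D = 35.0 * 3.086e24        # assumed distance [cm] (~35 Mpc)

# SFR expected from the ALMA cold gas mass, for a 2 Gyr depletion time
print(5.4e7 / 2e9)         # ~0.027 Msun/yr

# SFR from the extinction-corrected central H-alpha flux, following
# Kennicutt (1998): SFR = 7.9e-42 * L(H-alpha) [erg/s]
l_ha = 4.0 * np.pi * D**2 * 4e-13
print(7.9e-42 * l_ha)      # ~0.5 Msun/yr
\end{verbatim}
The ratio of the two, $\sim$5 per cent, is the hidden star-forming fraction quoted above.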
\begin{figure}
\includegraphics[width=\linewidth]{images/bpt_onecol.pdf}
\caption{An emission line diagnostic diagram \citep{baldwin1981classification} divided into various sources. Each bin is shown as a point according to its emission ratios of [NII]/H$\alpha$ and [OIII]/H$\beta$ allowing for the identification of regions of star formation, AGN emission or Low-ionization nuclear emission-line region (LINER) emission. Detailed description of the line equations can be found in \protect\cite{park2013relationship}. NGC\,7135 shows no bins where current star formation is clear in the emission. Slight overlaps outside the LINER emission bin are unlikely to be genuine, but rather likely arise because of noise and intrinsic variations. The galaxy emission is overwhelmingly LINER type.}
\label{bpt}
\end{figure}
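For reference, the classification logic of Figure~\ref{bpt} can be sketched as below, using the widely adopted demarcation curves of Kauffmann et al. (2003), Kewley et al. (2001) and the Seyfert/LINER division of Schawinski et al. (2007); these generic curves may differ in detail from the \cite{park2013relationship} equations used to draw the figure.
\begin{verbatim}
import numpy as np

def bpt_class(x, y):
    # x = log10([NII]/Halpha), y = log10([OIII]/Hbeta)
    kauffmann = 0.61 / (x - 0.05) + 1.30 if x < 0.05 else -np.inf
    kewley    = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -np.inf
    if y < kauffmann:
        return "star-forming"
    if y < kewley:
        return "composite"
    # Seyfert/LINER division, e.g. Schawinski et al. (2007)
    return "AGN" if y > 1.05 * x + 0.45 else "LINER"
\end{verbatim}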
\subsection{Stellar Population Mapping}\label{stell_pops_text}
Populations of a galaxy evolve in metallicity over time, gradually enriching with age. The exact quantities and rates of this enrichment are well known \citep{carraro1994galactic,Layden_2000,pont2004}, with the rate of enrichment directly tied to galaxy mass, resulting in the mass-metallicity relation. Thus, we can quickly establish whether a galaxy has followed the standard enrichment of its population as would be expected of an isolated galaxy.
In reality, galaxies are more often than not experiencing regular disturbances in the form of mergers, fly-bys and intracluster medium interaction such as ram-pressure stripping \citep{lotz2011major, sinha2012first, ebeling2014jellyfish, ventou2017muse}. One effect of this is the variation of the age-metallicity relation of a galaxy from the modelled form. This is most strikingly clear when a galaxy accretes material from a lower mass galaxy \citep{spolaor2009mass, leaman2013bifurcated}. Due to the lower metal enrichment rate of lower mass galaxies compared to that of larger mass galaxies, one finds that in general a smaller mass galaxy will exhibit far lower values of metallicity at late ages. Because of the ability for full spectral fitting methods to identify populations based on age and metallicity models, one would see these two populations as distinct and separate areas on an age-metallicity diagram. This depends on the mass difference of the merging galaxies, however: if two galaxies of similar mass were to merge, the separation of populations on the age-metallicity diagram would be too little to distinguish at the current resolutions of full-spectral fitting methods. Using these principles we can estimate which of the populations present have accreted onto the host galaxy, and are therefore ex-situ in origin.
We apply these principles to the population maps of NGC\,7135 in order to derive the history of formation and evolution. In Figure \ref{var_regions}, nine regions are marked with sequential letters corresponding to population maps, which are similarly sequentially lettered, with maps taken from the Voronoi bin below the labelled cross. Each position marks an area of interest or standard uniformity across the maps of Figure \ref{stellar_properties} with which we can build a picture of the assembly and current status of NGC\,7135. Region `A' marks the core of NGC\,7135. Regions `B' and `C' sample the tidal tail clearly seen in the unsharp mask image (lower right panel of Figure \ref{var_regions}), with increasing galactocentric radius. Regions `D', `E', and `F' also sample with increasing galactocentric radius, however they do so outside of any prominent tidal features. These are assumed to be a `control' sample chosen to represent the underlying galaxy, though they show signs of probing accreted material. Regions `G' and `H' sample the tidal regions opposite the tail, with `H' particularly covering unstripped remnants of the infalling galaxy. Finally region `K' covers the core of the infalling galaxy.
Starting with region `A', we see a very high metallicity, very old population associated with the galaxy core. This is to be expected and is commonly seen in galaxy cores \citep[see e.g.][]{guerou2016}. As expected, there is little obvious evidence for accreted populations, as shown by the old, high metallicity population and the lack of any clear population bimodality.
Moving along the main tidal tail in region `B' we see a much younger population at high metallicity. When comparing to regions not associated with tidal features but at similar radius such as `E' and `F', we see that the population of `B' is not comparable to `E' or `F'. This is largely due to a lack of older material that would be expected to be associated with the host galaxy. Plausibly this is the result of the vast majority of the stellar material originating in the infalling galaxy and comprising the tidal tail, and thus the populations visible are instead associated with this infalling object, rather than original populations of NGC\,7135. A small amount of material is also visible as a young and metal poor population. This can be attributed to ex-situ material that merged onto either NGC\,7135 or the infalling galaxy in the past prior to the current merger, and thus shows a separate population signature.
As we move further out along the tidal tail to region `C', many of the features become more prominent. For one thing, the high metallicity population associated with the stripped material from the infalling galaxy remains. Furthermore, low metallicity ex-situ populations increase in the fraction of contributed mass (as seen as a distinctly separate low metallicity population). Care must be taken in comparison due to colour normalisation differences on the plot, however the maximum low metallicity ex-situ fraction increases from $\sim$0.5\% in `B' to $\sim$1.0\% in `C', with a higher sum total of ex-situ material. This increase is to be expected, as ex-situ material commonly increases in fraction with galactocentric radius \citep{LaBarbera12, Martin18, davison2020eagle}. It is unclear whether this ex-situ population is associated with NGC\,7135 or the infalling galaxy, however it could plausibly be from both, as models of hierarchical growth suggest both galaxies would have undergone historical minor mergers in all but the rarest cases \citep{fakhouri2010merger}. A burst of star formation is also seen in the last Gyr of the star formation history. This is suggestive of a rapid star formation event, most likely triggered as a result of the galaxy interactions. Following this, no star formation is seen in any bin. A shutdown of star formation after a major merger is discussed widely in literature \citep[see e.g.][]{bekki2001galaxy,barr2007formation,cortesi2013planetary,querejeta2015formation, Puglisi2021}.
Region `D' samples an inner region of NGC\,7135. It shows similar populations to `A', though extending to slightly lower ages, as expected from galaxy population age gradients. Little to no ex-situ material is evident. Moving further out in radius, we come to region `E'. This also shows the populations previously seen in `A' and `D'. This time, however, there is a more significant low metallicity ex-situ population which, as mentioned previously, is expected in regions further from the galaxy centre according to galaxy simulations. Also prominent in region `E' is a population of intermediate age, high metallicity stars. As shown below for region `H', this is almost certainly associated with the infalling galaxy.
Region `F' samples at a slightly greater radius than `E', again with more prominent features, though in similar positions to `E'. We see an increase in the low metallicity ex-situ population radially along the tidal tail (`A', `B' and `C') as well as radially in areas not associated with tidal features (`D', `E' and `F').
The final regions sample the galaxy shell and the associated infalling object. Region `G' examines an area of tidal shell seemingly also originating from the infalling galaxy. The region almost identically matches `H', which is placed to examine the outskirts of the infalling object in regions that have yet to be stripped. That these two populations are so similar suggests they share a common origin, and that the tidal shells and tails are the result of accreted material scattered from the infalling galaxy.
Finally, region `K' examines the core of the infalling galaxy, at approximately 0.5 effective radii from the centre of NGC\,7135. It shows a highly metal-rich, old population with the characteristics typical of a galaxy nucleus. It shows largely the same properties as the nucleus of NGC\,7135, though with marginally lower metallicity and a greater spread in age, suggesting a lower mass.
The velocity dispersion of region `K' (seen in Fig \ref{stellar_properties}) is much lower on average than that of the host galaxy, again suggesting a lower mass for the merging galaxy compared to NGC\,7135. This is curious considering its high metallicity. One explanation would be that the infalling galaxy is the remnant of a galaxy core stripped of its halo, which would explain both its relatively high brightness and its high metallicity. This is also supported by the large amounts of seemingly ex-situ gas seen in Figure \ref{gas_properties}; this gas would have formed the outer regions of the infalling galaxy, as explained further in Section \ref{gas_props}.
The velocity dispersion (Fig \ref{stellar_properties}) increases significantly midway between the accreting galaxy core and the host galaxy core. This further lends weight to the idea that material is accreting onto the host galaxy: the high dispersion marks the region where accreted material begins encountering large amounts of in-situ material and, prior to mixing, the difference in velocities inflates the measured dispersion.
In summary, the population maps are indicative of three distinct galaxy populations, reflecting two significant merger events. The first is ongoing, with the intact core of a second galaxy currently in close proximity to NGC\,7135, its material being stripped off, accreted onto NGC\,7135, and creating large tidal features; this makes up the high metallicity populations at intermediate ages. A third population is consistently present as a low metallicity, intermediate-to-old aged population. As discussed previously, chemical enrichment and the mass-metallicity relation mean this population is not associated with either galaxy. We therefore attribute these stars to older historical mergers, now loosely mixed with the main populations. It is unclear onto which of the two galaxies these stars were accreted; as mentioned previously, the ex-situ population is likely present in both galaxies independently, captured by each prior to the ongoing merger.
\begin{figure*}
\includegraphics[width=\linewidth]{images/9_ext_map.png}
\caption{NGC\,7135 population sampling diagram. The upper nine panels display the mass-weighted age-metallicity space of NGC\,7135 for various regions. Corresponding regions are marked in the lower left panel, with crosses marking the position extracted and the corresponding letter. The lower right panel shows the same region as an unsharp masked image to highlight tidal features. Data for the unsharp masked image are taken from the VST ATLAS survey \protect{\citep{shanks2015vlt}}. The diagrams build a narrative in which a recent and ongoing merger creates large tidal features in NGC\,7135. There are also populations of far lower metallicity which are well mixed into the galaxy. These populations indicate historical mergers of high merger mass-ratio.}
\label{var_regions}
\end{figure*}
\subsection{Accreted Populations}\label{acc_pop}
As seen in Figure \ref{var_regions}, many bins display a bimodality in their population distribution (see e.g. panels `B', `C', `E', `F', `G', and `H'). Such a strong separation in populations suggests stellar material obtained from more than a single source. Galaxies not associated with the main galaxy will evolve with a different metallicity due to the mass-metallicity relation. As such, when the galaxies merge, there will be a distinct separation in the age-metallicity relation of each galaxy. The most obvious explanation for the bimodal populations seen in Figure \ref{var_regions} is the merger of a less massive, lower metallicity galaxy onto the host galaxy or onto the infalling galaxy, beginning $\sim$10\,Gyr ago. Furthermore, the fact that the bimodality of populations is seen at almost all positions across the galaxy outside of the cores (panels `B', `C', `E', `F', `G', and `H') suggests that this material has been well mixed and is distributed throughout the galaxy, with the exception of the two galaxy cores (see panels `A', `D', and `K').
To explore the population bimodality, the fraction of stars not associated with the main host population was determined for each bin. To identify the two discontinuous populations, a dividing line was sought across the population map, following the lowest saddle points. This `path of least resistance' then divided the populations into two distinct sources: one being material from NGC\,7135 together with the in-situ material of the infalling galaxy; the other being the low metallicity populations accreted onto both galaxies at earlier times. This can be imagined as the valley between two hills, with the dividing line taking the natural path of a river at the lowest potential. This is visualised in Figure \ref{3dpath}, with a red line showing the calculated separation path for one example bin, separating the populations into two sources.
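As an illustration, such a dividing path can be found with an off-the-shelf minimum-cost path search over the weight map, as in the following minimal sketch (in Python). This is a hedged reconstruction rather than our exact implementation: the input file name, the grid orientation, and the use of \texttt{scikit-image} are assumptions made for the example.

\begin{verbatim}
import numpy as np
from skimage.graph import route_through_array

# weights: regularised pPXF weights for one bin, shape (n_ages, n_metals),
# e.g. 46 age bins x 12 [M/H] bins for the MILES grid used here.
# The file name is purely illustrative.
weights = np.load("bin_weights.npy")
n_ages, n_metals = weights.shape

# Use the weight map itself as the traversal cost, so the cheapest route
# follows the valley (the lowest saddle points) between the populations.
cost = weights + 1e-6   # keep the cost strictly positive

# Try every start/end metallicity index on the youngest and oldest age
# edges, keeping the globally cheapest path across the full age range.
best_path, best_cost = None, np.inf
for j0 in range(n_metals):
    for j1 in range(n_metals):
        path, c = route_through_array(
            cost, (0, j0), (n_ages - 1, j1),
            fully_connected=True, geometric=True)
        if c < best_cost:
            best_path, best_cost = path, c

# Assuming metallicity increases with column index, everything below the
# path is flagged as the accreted (low metallicity) population.
divide = np.zeros(n_ages, dtype=int)
for i, j in best_path:
    divide[i] = j
exsitu = sum(weights[i, :divide[i]].sum() for i in range(n_ages))
exsitu_fraction = exsitu / weights.sum()
\end{verbatim}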
\begin{figure}
\includegraphics[width=\linewidth]{images/3d_lglabel.pdf}
\caption{Population map of one bin projected on 3D axes. A dividing line is sought for each map, following low saddle points, to separate the lower metallicity population from the higher metallicity one. For this example, the path is marked by a red line.}
\label{3dpath}
\end{figure}
Application of this to all bins provides a map such as that in Figure \ref{bimodal}, where we examine the fraction of stellar material associated with the lower metallicity source. Figure \ref{bimodal} shows a polar view of NGC\,7135 to better examine radial features. By examining this fraction across the galaxy we can infer regions of higher or lower concentration of accreted material.
At the centre of NGC\,7135 we see no accreted material, suggesting the core is dominated by in-situ stars. The density of accreted material rises with radius, which is indicative of galaxy mergers depositing material in the outer regions of the galaxy. The material appears unevenly mixed azimuthally, with proportionally higher quantities of ex-situ material deposited between 0 and 1 radians from North. This is likely a projection effect, as the area to the south of the galaxy (the left and right extents of Figure \ref{bimodal}) aligns with the previously mentioned high metallicity galaxy, whose stream of stellar material obscures the host galaxy structure and dominates the spectral light.
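The radial trend in the right-hand panel of Figure \ref{bimodal} amounts to a moving average over circular annuli. A minimal sketch of this computation is given below; the map file name and centre coordinates are illustrative assumptions, while the 3 pixel annulus radius follows the figure caption.

\begin{verbatim}
import numpy as np

# frac: 2D map of the ex-situ (low metallicity) mass fraction per pixel,
# NaN outside the MUSE footprint; (x0, y0) is the galaxy centre.
frac = np.load("exsitu_fraction_map.npy")   # illustrative input
x0, y0 = 160.0, 155.0                       # illustrative centre (pixels)

y, x = np.indices(frac.shape)
r = np.hypot(x - x0, y - y0)

# Restrict to radii at which a complete circle fits inside the image.
r_max = min(x0, y0, frac.shape[1] - x0, frac.shape[0] - y0)

# Moving average over annuli of 3 pixel radius.
radii = np.arange(3.0, r_max, 1.0)
profile = np.array([
    np.nanmean(frac[(r >= ri - 3.0) & (r < ri + 3.0)]) for ri in radii
])
\end{verbatim}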
\begin{figure*}
\includegraphics[width=\linewidth]{images/acc_pop_re.png}
\caption{The left panel shows a polar oriented map of NGC\,7135. Blue colour shows the mass fraction of derived material not associated with the host galaxy population, with contouring shown in red-orange-yellow. The angle is shown with 0 radians as the North of the image and positive angles increasing clockwise around the galaxy. Gaussian smoothing has been applied to show larger structures of ex-situ material more clearly. The radius from centre has been limited to include only radii for which a complete circle can be arranged within the image. The adjoining right-hand panel shows the same radial positions as the left side, but gives the mean mass fraction of the discontinuous (accreted) population over a complete circle at each radius. The mean fraction was calculated using circular annuli of radius 3 pixels with a moving average. The effective radius is taken from table 1 of \protect\cite{marino2011nearby}. The fraction of accreted material increases with radius, reaching roughly 7\% by 0.6 effective radii.}
\label{bimodal}
\end{figure*}
We can see further evidence of the division of the various populations by examining stellar mass estimates per population, determined by dividing the age-metallicity plane and applying mass-to-light ratios. We show this in Figure \ref{4pan_pops}, with three regions of different populations separated roughly. Using mass-to-light ratios from \cite{thomas2003stellar}, we estimate the stellar mass per population division, per pixel. The region labelled `1' corresponds to intermediate age stars with high metallicity, associated with the infalling galaxy. This is confirmed in the first map of the Figure (panel 2), in which a noticeably higher stellar mass is associated with the infalling object for this population alone. This panel also encompasses much of the stellar material of NGC\,7135 near to, though at a slight distance from, the centre, as expected from standard galaxy age gradients. Though effects from the pointing overlaps are visible, it is notable that we see a small amount of material tracing the tidal tail and other tidally derived features. This suggests that the intermediate age material and the tidal tail are associated exclusively with the infalling galaxy, though further analysis with a higher resolution stellar model grid would be required for verification.
In the second labelled map (panel 3) we see that the most metal-rich and oldest material is heavily associated with the host galaxy, with a strong gradient from the galaxy centre. This in-situ population is generally undisturbed and centrally concentrated, in comparison to the largely ex-situ population represented in the first map. Finally, in the third labelled map (panel 4), we again see a gradient of stellar mass associated with the host galaxy. This third map shows only stars at far lower metallicities than the majority of the stellar material. This material is assumed to come from low mass objects which were historically accreted onto NGC\,7135 and are now well mixed into the galaxy. It should be noted that these are rigid divisions, and that the true population distributions from each object undoubtedly bleed over into the other divided regions (especially in regions `1' and `2').
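A minimal sketch of this bookkeeping is given below: the regularised weights of each bin are split by rigid boxes in the age-metallicity plane and converted to stellar mass using per-SSP mass-to-light ratios. The file names and box boundaries are purely illustrative assumptions; only the overall procedure follows the text.

\begin{verbatim}
import numpy as np

# weights: (n_bins, 46, 12) regularised pPXF light weights per bin;
# ml_grid: (46, 12) stellar M/L per SSP model (Thomas et al. 2003);
# lum: (n_bins,) luminosity of each bin. All inputs are illustrative.
weights = np.load("all_bin_weights.npy")
ml_grid = np.load("ssp_mass_to_light.npy")
lum = np.load("bin_luminosity.npy")

# Rigid boxes in the (age, [M/H]) plane, mirroring the three divisions
# of the figure; the index ranges here are placeholders.
boxes = {
    1: (slice(10, 30), slice(8, 12)),   # intermediate age, metal rich
    2: (slice(30, 46), slice(8, 12)),   # old, metal rich
    3: (slice(0, 46), slice(0, 8)),     # metal poor
}

mass = {k: np.zeros(len(weights)) for k in boxes}
for b, w in enumerate(weights):
    lw = w / w.sum()                    # normalised light fractions
    for k, (ia, iz) in boxes.items():
        mass[k][b] = lum[b] * np.sum(lw[ia, iz] * ml_grid[ia, iz])
\end{verbatim}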
\begin{figure*}
\includegraphics[width=\linewidth]{images/4_panel2.pdf}
\caption{The first panel shows a general galaxy age-metallicity map. This is divided by the red boxes into three groups of populations, to examine the mass associated with each area. Panel labels correspond to the numbers on the age-metallicity map. The maps show the divided nature of the populations: the intermediate age, high metallicity population is more strongly associated with the infalling object and tidal features, whilst the older metal-rich population is associated with the host galaxy.}
\label{4pan_pops}
\end{figure*}
\section{Discussion}
Analysis of the kinematics and gas of NGC\,7135 yielded evidence for both historical galaxy mergers and an ongoing disruptive major merger. Although the kinematic signatures of past mergers are hidden (at the available data resolution) due to mixing over time, ex-situ populations were extracted from the galaxy using full spectral fitting. This allowed the identification of a well mixed, low metallicity stellar population alongside the larger fraction of higher metallicity stars. Considering expected enrichment patterns, this can only have occurred if gas or stars (or both) originating in an ex-situ galaxy rapidly accreted or fully merged onto NGC\,7135. The lower metal content of this population makes it distinct from the original population.
Potentially, all the stellar material in this population could have been created in-situ from gas accreted from another galaxy. This is highly unlikely, however, considering the specific ages and metallicities of the two distinct populations. Were these stars the product of newly infalling gas, we would expect to see mixing of the gas, and the metallicity of stars born after the merger event to be at a more intermediate metallicity. Instead, we see the two populations continuing to form stars without a sharp change in metallicity; thus the lower metallicity stars are considered to have been born ex-situ.
The bimodality of these stars allowed for clean separation of the ex-situ and in-situ populations, and thus the relative fraction of ex-situ material could be ascertained. This allowed for the exploration of the ex-situ fraction with galactocentric radius, as shown in Figure \ref{bimodal}. The Figure shows a clear preference for ex-situ material to be located at the outer edges of the galaxy, with no detectable ex-situ material in the centre. This is akin to simulated results showing the same increase of ex-situ fraction with galactocentric radius \citep{schaye1014,Crain2015,rodrgom2016,davison2020eagle}, as well as observational studies showing the same principles \citep{forbes2011,Pillepich_2015,Oyarz_n_2019}. The mean ex-situ fraction measured for NGC\,7135 at approximately 0.6\,effective radii (the greatest extent captured by the MUSE image) is 7\%. This is only representative of the low metallicity populations from low-mass systems; higher metallicity populations from mergers of smaller mass ratio would be disguised amongst the in-situ populations.
Limitations of this technique largely arise from the ability to separate populations. At the current resolution of full spectral fitting techniques, populations must be wholly distinct in metallicity to be noticeably separable from the host population. Accreted material with age and metallicity similar to those of the host galaxy would be largely indistinguishable from the main population. A further limitation is the inability to directly distinguish between stars born ex-situ and those born in-situ from ex-situ material. As discussed above, these limitations are unlikely to be dominant in this scenario.
One interesting area to consider is the eventual fate of NGC\,7135. Will it retain some semblance of a spiral structure, or evolve into an S0 or elliptical galaxy? Conversion into an S0 seems a distinct possibility, as S0 galaxies with coherent disk kinematics form through merger mechanisms, though the exact merger specifics continue to be debated within the community. Some evidence suggests that S0 galaxies are rarely formed through major mergers (<4:1 merger ratio) \citep{bournaud2005galaxy, lofthouse2016major}, with the conclusion that major mergers are a plausible but non-dominant mechanism for early type formation. Conversely, other arguments suggest that S0 galaxies can indeed be formed from major mergers \citep{querejeta2015formation, querejeta2015formation2}, and major mergers can be shown to give rise to much of the inner structure often found in early types \citep{eliche2018formation}. Perhaps the most consistent requirement for the formation of an S0 via mergers is a misalignment of angular momentum between the in-situ and accreted ex-situ baryonic components \citep[see e.g.][]{sales2012origin}. Considering the existing baryonic misalignment present in NGC\,7135 in the form of a counter-rotating disk, and the seemingly misaligned orbit of the ongoing merger, it is perhaps likely that the ongoing disruption will lead NGC\,7135 to tend towards S0 morphology. Plausibly the kinematics would increasingly resemble those of general spheroidal galaxies, as newly formed stars with angular momentum opposing the mean, together with recently accreted stars, begin to reduce the kinematic coherence. Though this is a distinct possibility, the true future of NGC\,7135 will remain unknown until more decisive techniques and modelling are developed; given the complex recent history of NGC\,7135, any prediction of its future evolution is speculative.
\section{Conclusions}
We have used a Voronoi binned map of NGC\,7135 to explore stellar kinematic features such as velocity and velocity dispersion, as well as the distributions of stellar properties such as age and metallicity. Gas properties were explored in regular bins, with both gas kinematics and gas distribution investigated. The gas was shown to be counter-rotating relative to the stellar material, with significant evidence of disturbance in the galaxy core. This, along with population evidence, shows a galaxy currently merging onto NGC\,7135. Despite gas being present, little to no current star formation was identified. ALMA data of the galaxy core point to a star formation rate of only $0.025M_{\sun}\,\mathrm{yr}^{-1}$ assuming normal depletion times. Strong LINER emission likely obscures emission associated with star formation, and as such a higher SFR cannot be ruled out.
During population analysis of NGC\,7135, using data provided by the SOSIMPLE project, we have identified both historic and ongoing merger activity. This was achieved using a full spectral fitting method to disentangle strong bimodalities in the stellar populations. We show that in a snapshot of a `single' galaxy, we are in reality witnessing the product of three distinct galaxy populations.
An ongoing merger or large accretion event is clear from the stellar kinematic maps, which show a distinct area of stellar material not associated with the host galaxy but interacting with the galaxy structure. Likewise, in the gas maps we see large velocity dispersions where ex-situ infalling gas interacts with in-situ gas.
At least one historical large merger event took place 6--10\,Gyr ago according to the star formation history derived by full spectral fitting. This potentially provided gas of lower enrichment from which NGC\,7135 birthed stars of lower metallicity; however, the timeline of stellar ages, matched with the likely merger date, makes it highly likely that most, if not all, of the stars belonging to this population are ex-situ stars originating in another galaxy. Considering there is no discernible change in the metallicity of host-population stars born after the merger, we assume that all lower metallicity population stars are ex-situ in origin. The timeline of the star formation history suggests that this merger caused a general shutdown of star formation in NGC\,7135 not long after the merger event.
We calculate the fraction of ex-situ material as a function of galactocentric radius, finding a steep increase in ex-situ material as we probe further towards the outskirts of the galaxy. The centre of the galaxy exhibits no signs of ex-situ material, whilst by 0.6 effective radii this fraction reaches 7\%. This is consistent with expectations of `two phase' galaxy assembly seen both observationally and in simulations, where ex-situ material is preferentially deposited on the outskirts of a galaxy.
Many more SOSIMPLE galaxies are available from the survey, with much left to explore.
\section{Acknowledgements}
Many thanks to an anonymous referee for useful comments. This work was completed with the support of the ESO studentship program and the Moses Holden Scholarship. BH acknowledges support by the DFG grant GE625/17-1 and DLR grant 50OR1911. Based on observations collected at the European Southern Observatory under ESO programme 0103.A-0637(A).
\section{Data Availability}
The data described in this article are accessible via the ESO archive of MUSE data.
\bibliographystyle{mnras}
\section{Introduction}
Galaxy merger research has shown how fundamental merging is to galaxy evolution, with historical merger rates generally increasing with galaxy mass \citep{bundy2009greater, schawinski2010role, l2012mass, pillepich2018first}. Distant galaxies (z$\approx$2) are often quoted as being a factor of 2--5 times smaller than those found locally \citep{daddi2005passively,van2008confirmation, saracco2009population}. As such, it is widely assumed that a large amount of mass assembly after z$\approx$2 is the result of hierarchical growth through galaxy mergers and accretion, a picture widely corroborated by galaxy evolution models. Not only does merger history impact almost all other aspects of galaxy evolution, but most galaxies have experienced large mergers throughout their history, with around 50\% of galaxies experiencing a major merger \citep{maller2006galaxy} and essentially all surviving galaxies experiencing minor mergers, whose frequency increases with merger mass-ratio \citep{lotz2011major}. The exceptions are some rare pristine galaxy types ($\lesssim$ 0.1\% of galaxies according to \citealt{quilis2013expected}) which have likely experienced no outside interaction or accretion events \citep{trujillo2013ngc}.
Modelling is an excellent way to delve into the mechanics and subsequent effects of galaxy mergers. Using simulations, the ex-situ mass fraction of accreted galaxies has been explored in depth \citep{pillepich2015building, qu2017chronicle, davison2020eagle}. This is useful for defining expected merger rates that can then be compared with observations. A challenging aspect of observational astronomy is demonstrating the merger history of observed nearby galaxies in order to verify these models, particularly if potential mergers occurred several Gyr ago.
Integral Field Spectroscopy has proven particularly useful in exploring galaxy kinematics and populations. Integral Field Units (IFUs) have provided spatially resolved maps of galaxies which can be used to diagnose population differences and kinematic effects resulting from mergers. This has been shown to be effective in numerous observational cases \citep[see e.g.][]{guerou2016, faifer2017, Ge2019}.
The impact of mergers and merger history on galaxy evolution is an important aspect to understand. For one thing, mergers are known to drive gas towards the galaxy centre \citep{mihos1995gasdynamics}, causing AGN activity and black hole growth, which in turn can shut down or suppress star formation in the galaxy at large \citep{cales2015post, choi2015impact}. On the other hand, mergers can cause sudden and significant bursts of star formation due to the disruption of previously unperturbed gas kinematics \citep{di2008frequency, ellison2013galaxy, moreno2015mapping, capelo2015growth}. Disruption in the gas kinematics of galaxies can leave key fingerprints for the identification of merger events. One of the most readily identifiable features of a recent or ongoing merger is counter-rotating components, with up to 40\% of S0 galaxies displaying signatures of counter-rotation \citep{rubin1994multi, davis2011atlas3d, coccato2015properties, bassett2017formation}. Galaxy-galaxy mergers of the right combination can change the very morphological type of a galaxy. As such, mergers hold the power to define entire galaxy futures.
The S01-pec galaxy NGC\,7135 (AM 2146–350, IC 5136) in the constellation of Piscis Austrinus is a merger remnant galaxy \citep{Keel1985} that is likely en route to forming an S0 galaxy. It currently displays several immediately striking visual features including an extended tail, shell features, and curved structure (Figure \ref{phot}) based on photometry from the Carnegie-Irvine Galaxy Survey \citep{ho2011carnegie}.
NGC\,7135 was first described as having `a curious jet and shell' in \cite{malin1983catalog}, with the `jet' later shown to be a tail in \cite{2003MNRAS.343..819R}. The shell structures of the galaxy were found to be particularly clear in the UV \citep{rampazzo2007, marino2011nearby}, with the FUV gas structure further linked to an accretion event that also likely formed the shells. \cite{ueda2014cold} found CO emitting gas that was unassociated with the nucleus, along with 3\,mm continuum associated with the nucleus. Despite speculation, NGC\,7135 was determined to have no active nucleus, as shown in \cite{zaw2009galaxies} through optical spectral analysis. Analysis in \cite{1985keel} identifies NGC\,7135 as a merger galaxy, and in \cite{2003MNRAS.343..819R} NGC\,7135 is shown to possess an elongated, asymmetric gas structure relative to the stellar material.
The local environment of NGC\,7135 is described by \cite{samir2016fundamental} as being `low density', with the classification of `low density' \citep{annibali2010nearby} a result of the richness parameter $\rho_{xyz}$=0.32 gal Mpc$^{-3}$ \citep{tully1988nearby}. Early type galaxies in low density environments are known to possess on average younger populations ($\sim$2\,Gyr younger) than similar galaxies in higher density environments \citep{thomas2003stellar}, a likely result of more recent mergers and star formation.
In this work we present new observations of the galaxy NGC\,7135, recently obtained with MUSE. We aim to show that NGC\,7135 is currently undergoing a major merger, with a history of older mergers underlying in the galaxy populations. The paper is presented as follows: In Section 2 we describe the motivation behind the observations, as well as the data reduction and limitations. In Section 3 we describe our methodology, including the use of regularisation during spectral fitting. In Section 4 we present the resultant maps of stellar populations and kinematics, as well as gas properties similarly derived, including rotation differences between the two components. In Section 5 we discuss the implications of the results and finally in Section 6 we provide a summary and concluding remarks.
\section{Observations and data reduction}
We observed NGC\,7135 with the Multi Unit Spectroscopic Explorer \citep[MUSE,][]{bacon2010MUSE,bacon2014MUSE} at the Very Large Telescope (VLT) as part of the Snapshot Optical Spectroscopic Imaging of Mergers and Pairs for Legacy Exploration (SOSIMPLE) survey (Program ID: 0103.A-0637(A), PI: B.~Husemann). The aim of the SOSIMPLE survey is to provide complementary IFU observations for an ongoing Hubble filler gap snapshot imaging program (Program ID: 15446, PI: J.~Dalcanton). HST imaging of NGC\,7135 has not yet been taken due to the filler nature of the HST program; these MUSE observations therefore act as a first look at the data, to which HST data can be compared at a later date. Combining IFU spectroscopy with a large set of high-quality ancillary data will hopefully provide observational and theoretical insights into the evolution of merging systems.
The MUSE observations were conducted on 6 July 2019 during dark sky conditions and split into 3$\times$560\,s dithered pointings, along with a 300\,s dedicated blank sky field exposure for background subtraction of this extended galaxy. Rotations of 90\degr\ were applied between exposures, covering approximately 3.4 arcmin$^2$ as shown in Fig \ref{phot}. The seeing during the observations remained at $\sim$1\arcsec, and the sky was covered with thin clouds during strong winds from the north-west.
The data were reduced with the standard ESO pipeline \citep{weilbacher2020pipeline}, which performs detector calibrations, flat-fielding, wavelength calibration, flux calibration, sky subtraction, exposure alignment, and cube reconstruction of the combined exposures. We performed an additional correction for residual sky lines using a simple PCA algorithm. The MUSE pixel scale is 0.2 arcsec pixel$^{-1}$, with a mean spectral resolution of $\sim$2.5\,\AA, though this can vary across the wavelength range (see figure 5 of \citealt{husser2016muse}). The resulting mean signal-to-noise (SN) ratio of the spaxels in the MUSE image within a wavelength range of 4759--6849\,\AA\ (limited from 4759--9300\,\AA) is 9.5, with a maximum spaxel SN of 131.
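A minimal sketch of such a PCA-based residual sky correction is given below. It assumes a set of sky-dominated spaxels has already been identified; the array names, file names, the use of \texttt{scikit-learn}, and the choice of eight components are illustrative choices rather than the exact pipeline settings.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

# spectra: (n_spaxels, n_wave) spaxels after the standard pipeline;
# sky_mask: boolean selection of near-empty, sky-dominated spaxels.
spectra = np.load("cube_spaxels.npy")      # illustrative inputs
sky_mask = np.load("sky_spaxel_mask.npy")

# Learn the dominant residual sky patterns from the empty spaxels.
pca = PCA(n_components=8)
pca.fit(spectra[sky_mask])

# Subtract each spaxel's projection onto those components. In practice
# strong object emission lines would be masked before the projection so
# that genuine signal is not removed along with the sky residuals.
coeff = (spectra - pca.mean_) @ pca.components_.T
cleaned = spectra - coeff @ pca.components_
\end{verbatim}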
\section{Methodology}\label{method}
Spaxels were Voronoi binned to a minimum SN of 50 per \AA, so that poor signal regions were made available for analysis whilst higher SN spaxels remained unbinned. This optimally allowed for spatial investigation of spectral properties without losing valuable high resolution data at high SN locations.
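This binning can be reproduced with the \texttt{vorbin} implementation of the Voronoi binning algorithm of Cappellari \& Copin, as in the hedged sketch below; the input file of spaxel coordinates and per-spaxel signal and noise is an assumption of the example.

\begin{verbatim}
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# x, y: spaxel coordinates in pixels; signal, noise: per-spaxel mean
# flux and error over the fitted wavelength range (illustrative input).
x, y, signal, noise = np.loadtxt("spaxel_sn.txt", unpack=True)

# Adaptively bin to the target S/N of 50; spaxels already above the
# target remain unbinned, as described in the text.
bin_num, x_node, y_node, x_bar, y_bar, sn, n_pix, scale = \
    voronoi_2d_binning(x, y, signal, noise, 50, plot=False, quiet=True)

# bin_num assigns every spaxel to a bin; the co-added bin spectra are
# then built by summing the cube spectra that share the same bin_num.
\end{verbatim}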
The wavelength range was restricted to 4759--6849\,\AA\ for all spaxels, to ensure the strongest Balmer lines were included and to exclude noisier sky-dominated regions at redder wavelengths. All spectra of spaxels within a bin were summed into a single spectrum representing the area covered by the bin. An area containing a foreground star was masked from the analysis in the west of the image (see Figure \ref{phot}).
To analyse the spectra from the binned NGC\,7135 data we utilised the Penalized PiXel-Fitting (pPXF) method, described in \cite{cappellari2004intro} and upgraded in \cite{cappellari2017upgrade}. With this method, single-age single-metallicity stellar population (SSP) models are fit to spectra to build a map of stellar populations across age and metallicity space. By identifying the combination of SSP models that approximates a given spectrum, the estimated constituent populations are extracted, along with the velocity and velocity dispersion. Stellar models are weighted according to the estimated fraction of the corresponding population present in the galaxy; the output weights thus indicate the fractions of specific stellar populations present in the spectrum. The output model of combined spectra is made more physical by the use of template regularisation (see e.g. section 3.5 of \citealt{cappellari2017upgrade}), the methodology of which is explained in detail below. Standard pPXF cleaning algorithms were included to mask emission lines where necessary.
A total of 552 MILES SSP models \citep{vazdekis2010evolutionary} were fit to the galaxy spectra. These models use the Kroupa revised initial mass function (log slope of 1.3, M$_{max}$=100M$_{\odot}$) with BaSTI isochrones, spanning a metallicity range of $-$2.27 to $+$0.4 [M/H] in 12 non-linear steps, and an age range of 0.1 to 14.0\,Gyr in 46 non-linear steps \citep{kroupa2001variation, cassisi2005basti,pietrinferni2006large,falcon2011updated,vazdekis2012miuscat}.
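For illustration, a minimal pPXF call for a single binned spectrum is sketched below. The variable names, noise placeholder, and starting guesses are assumptions made for the example; \texttt{templates} stands for the 552-model MILES grid, already matched to the MUSE resolution and log-rebinned to the same velocity scale.

\begin{verbatim}
import numpy as np
from ppxf.ppxf import ppxf
from ppxf import ppxf_util as util

# Illustrative inputs: one co-added Voronoi bin spectrum on a linear
# wavelength grid over 4759-6849 A, plus the prepared MILES grid with
# shape (n_wave_temp, 552).
spectrum = np.load("bin_spectrum.npy")
templates = np.load("miles_templates.npy")

lam_range = np.array([4759.0, 6849.0])
galaxy, ln_lam, velscale = util.log_rebin(lam_range, spectrum)
galaxy = galaxy / np.median(galaxy)     # normalise for stability
noise = np.full_like(galaxy, 0.02)      # placeholder constant error

start = [2000.0, 150.0]   # illustrative (V, sigma) guess in km/s

pp = ppxf(templates, galaxy, noise, velscale, start,
          moments=2, degree=-1, mdegree=10,
          regul=0,                      # unregularised first pass
          lam=np.exp(ln_lam), clean=True)

# pp.sol holds the fitted (V, sigma); pp.weights holds the 552 SSP
# weights which, reshaped onto the (46 age x 12 [M/H]) grid (assuming
# age-major template ordering), give the population map for this bin.
weights_map = pp.weights.reshape(46, 12)
\end{verbatim}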
Application of regularisation smooths the stellar model weights to produce population maps consistent with physical results. The weighted templates combined to reproduce a target spectrum will often be unphysically localised to only the strongest of the possible solutions, with many other valid solutions overlooked despite their physicality. To produce more representative distributions, regularisation seeks to smooth the solutions to a physical state. The challenge is to smooth the template weights to a solution that most accurately represents observed conditions, whilst not overlooking genuine fluctuations and details present in the model fit. The regularisation parameter controls the strength of the smoothing and is deduced through a robust iterative approach for each spectrum individually. It is derived such that it corresponds to the maximum value consistent with the observations; the derived star formation history is thus the smoothest that is consistent with the observations. This has been shown in the literature to be an accurate and useful method of galaxy population extraction \citep[see e.g.][]{comeron2015, norris2015extended, guerou2016, faifer2017, Ge2019, boecker2020recovering}.
In this work an iterative routine is applied to extract the optimal regularisation parameter. For the best possible fit, the $\chi^2$ of the solution is expected to be approximately equal to the number of available voxels in the spectrum, $N$ (i.e. the number of voxels available after any masking). To obtain the optimal regularised solution, the $\chi^2$ must be increased from the unregularised value (referred to as $\chi^2_0$) by $\sqrt{2N}$.
After rescaling the noise from the unregularised solution such that $\frac{\chi^2}{N} = 1$, we make a number of initial guesses for the regularisation parameter. We find the $\Delta \chi^2$ of these guesses and fit a function to the input regularisation guesses and output $\Delta \chi^2$ values. By doing so we can precisely find the optimal regularisation parameter such that $\chi^2 = \chi^2_0+\sqrt{2N}$. This is performed for every bin, resulting in optimal solutions across the entire image map.
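Continuing the illustrative sketch above, the criterion $\chi^2 = \chi^2_0 + \sqrt{2N}$ can be solved as follows. For brevity this example replaces the function-fitting step described in the text with a simple bisection in the logarithm of the regularisation parameter; that substitution, and the bracketing interval, are assumptions of the sketch.

\begin{verbatim}
import numpy as np
from ppxf.ppxf import ppxf

lam = np.exp(ln_lam)

def delta_chi2(regul):
    """Excess of the regularised chi2 over the target chi2_0+sqrt(2N)."""
    pp = ppxf(templates, galaxy, noise, velscale, start, moments=2,
              degree=-1, mdegree=10, regul=regul, lam=lam, quiet=True)
    return (pp.chi2 - 1.0) * n_pix - np.sqrt(2.0 * n_pix)

# Unregularised fit, then rescale the noise so that chi2/N = 1
# (pp.chi2 in pPXF is already the reduced chi2).
pp0 = ppxf(templates, galaxy, noise, velscale, start, moments=2,
           degree=-1, mdegree=10, regul=0, lam=lam, quiet=True)
noise = noise * np.sqrt(pp0.chi2)
n_pix = galaxy.size      # unmasked spectral pixels, approximately

# Log-space bisection for delta_chi2 = 0, assuming the solution is
# bracketed: little smoothing at the lower bound, too much at the upper.
lo, hi = 1e-3, 1e4
for _ in range(15):
    mid = np.sqrt(lo * hi)
    if delta_chi2(mid) > 0.0:
        hi = mid
    else:
        lo = mid
optimal_regul = np.sqrt(lo * hi)
\end{verbatim}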
\begin{figure}
\includegraphics[width=\linewidth]{images/NGC7135_color_MUSE3.pdf}
\caption{A colour image of NGC\,7135 showing the MUSE cube footprint. Photometry of NGC\,7135 is from the Carnegie-Irvine Galaxy Survey \citep{ho2011carnegie}. The blue border shows the boundaries of the reduced MUSE IFU data used in this study.
A green circle traces an area containing a bright foreground star that was entirely excluded from the analysis.}
\label{phot}
\end{figure}
\section{Results}
We separate the analysis of NGC\,7135 into three components: the stellar component analysis, encompassing the stellar kinematics; the gaseous component analysis, encompassing gas kinematics, emission lines and star formation aspects; and the population analysis, examining the various stellar populations and the resulting implications for the assembly history of NGC\,7135.
To examine the stellar component we utilise Voronoi binning as described in Section \ref{method}. From this we are able to examine the stellar rotation and bulk velocities, as well as mean age and metallicities spatially across the galaxy (Fig \ref{stellar_properties}). To investigate details related to the gaseous component we use regular binning to view the gas velocities and rotation, as well as the line strengths of H$\alpha$ and H$\beta$ (Fig \ref{gas_properties}). Though we see reasonable amounts of H$\alpha$ emission, there is scant evidence for significant ongoing star formation. This is explained in detail in Section \ref{gas_props}. Finally, in Section \ref{stell_pops_text} we further analyse age and metallicity distributions for sampled regions across the galaxy to diagnose assembly history and current merger status, then go on to examine underlying metal poor populations in Section \ref{acc_pop}.
\subsection{Stellar Properties}
Application of the pPXF method to the NGC\,7135 data cube provides mean kinematic properties which are extracted from each bin. Demonstrations of this for velocity and velocity dispersion of the galaxy are found in the top panels of Figure \ref{stellar_properties}. Application of regularisation and mass-to-light ratios produce maps of the constituent stellar populations within each bin of the galaxy. From these bins we can derive mean mass-weighted stellar age and metallicity values, as demonstrated in the lower panels of Figure \ref{stellar_properties}.
\begin{figure*}
\includegraphics[width=\linewidth]{images/stellar_final2.pdf}
\caption{Voronoi map of NGC\,7135 showing four different stellar kinematic or mass-weighted population properties. The top left panel shows the mean velocity in km/s for each bin. The top right panel shows the mean velocity dispersion within bins in km/s. The lower left panel shows the mean age of the populations within each bin in Gyr. Finally, the lower right panel shows the mean metallicity within each bin. North is to the top of the image, and East is to the left. The stars show clear rotation in the centre. Velocity dispersion, age and metallicity all increase towards the galaxy centre. Distinct kinematics and metallicity south of the centre highlight a distinct component.}
\label{stellar_properties}
\end{figure*}
The stellar kinematic, age, and metallicity maps of NGC\,7135 reveal much about the galaxy. Stellar rotation is immediately visible, which is of key interest when compared to the gas, which rotates counter to the direction of stellar rotation; this is explored in detail in Section \ref{gas_props}. One prominent kinematic feature, perhaps most clearly seen in the velocity map (top left panel of Figure \ref{stellar_properties}), is an arc of incongruous material at higher than average velocity, stretching from the south-west of the figure round to the west. The southern end of this arc is matched in the metallicity map (lower right panel of Figure \ref{stellar_properties}) by a higher metallicity region, which is also distinct in velocity and velocity dispersion. Upon inspection, this is revealed to be an infalling galaxy currently merging onto NGC\,7135. This can be clearly seen in the photometry shown in Figure \ref{var_regions}, and even more compelling evidence comes from the population analysis below.
\subsection{Gas Properties}\label{gas_props}
\begin{figure*}
\includegraphics[width=\linewidth]{images/gas_final2.pdf}
\caption{Regularly binned map of NGC\,7135 showing 4 different gas kinematic and strength properties. The top left panel shows the mean velocity of gas in km/s for each bin. The top right panel shows mean velocity dispersion of gas within bins in km/s. The lower left panel shows the H$\alpha$ flux throughout NGC\,7135. The scale has been limited from the true maximum to better display regions of intermediate strength. This limits the core from a true strength of at most 36.2$\times$10$^{-16}$erg/s/cm$^2$ (limited to 2.5$\times$10$^{-16}$erg/s/cm$^2$). The lower right panel shows H$\beta$ flux throughout NGC\,7135. The scale has been limited from the true maximum to better display regions of intermediate strength. This limits the core from a true strength of at most 5$\times$10$^{-16}$erg/s/cm$^2$ (limited to 2.1$\times$10$^{-16}$erg/s/cm$^2$). The gas velocity shows counter rotation compared to the stellar component, and on a slightly different axis, suggesting a merger origin. }
\label{gas_properties}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{images/gas_zoom_arrow.pdf}
\caption{Regularly binned and zoomed in map of NGC\,7135 showing 4 different gas kinematic and strength properties. The top left panel shows the mean velocity of gas in km/s for each bin. The top right panel shows mean velocity dispersion of gas within bins in km/s. The lower left shows the H$\alpha$ flux throughout NGC\,7135. The scale has been limited from the true maximum to better display regions of intermediate strength. This limits the strongest emission near the core from a true strength of at most 36.2$\times$10$^{-16}$erg/s/cm$^2$ (limited to 2.5$\times$10$^{-16}$erg/s/cm$^2$). The lower right panel shows H$\beta$ flux throughout NGC\,7135. The scale here has also been limited. This limits the strongest emission from a true strength of at most 5$\times$10$^{-16}$erg/s/cm$^2$ (limited to 2.1$\times$10$^{-16}$erg/s/cm$^2$). In the upper left panel, arrows show the average positive rotation direction. The solid arrow indicates the average stellar component positive rotation whilst the dotted arrow shows the average gas positive rotation direction. Shaded regions show the standard deviation of vectors for both components for bins of 0.1 effective radii. In the lower left panel, contours show integrated CO(J=1–0) emission detected in ALMA observations \citep{ueda2014cold}. Contours show the 0.8, 1.0 and 1.2 Jy km s$^{-1}$ levels. There is pervasive H$\alpha$ emission with a high luminosity and high velocity dispersion component in the centre, though there is little evidence of star formation.}
\label{gas_properties_zoom}
\end{figure*}
To explore the gas kinematics and distribution in NGC\,7135, regular binning was employed to avoid biases caused by the stellar light controlling the Voronoi binning. Large square bins containing 64 pixels were selected across the face of the data cube, and spectra within a given bin were summed and analysed with pPXF as described in Section \ref{method}. Following this, bins with signal-to-noise exceeding the minimum detection threshold were re-binned to a higher resolution. This adaptive `zoom' binning gave high resolution in areas of strong H$\alpha$ emission. The zoom resolution was limited to the central regions of the galaxy, where the finest detail was required.
NGC\,7135 displays localised areas of strong Balmer emission, shown in Figure \ref{gas_properties}, with a cropped version showing the galaxy centre in Figure \ref{gas_properties_zoom}. As seen in all panels, the gas is asymmetric in distribution as well as in kinematics. The rotation of the gas highlights its decoupling from the stellar material in the core.
The gas is counter-rotating relative to the stellar component, strongly indicating a disrupted system. A slight deviation from coherent gas movement is seen in the galaxy centre, giving an `S' shaped gas rotation profile. Counter-rotation has long been associated with galaxy mergers \citep[see e.g.][]{bertola1988counter}. Total decoupling of gas rotation from the stellar component as a result of prograde-prograde merger shocks has been shown in simulations by \cite{capelo2016shocks}, and a similar event appears to be at play here, wherein a major merger has resulted in counter-rotation of the gas component. Plausibly this is the result of a previous prograde-prograde merger providing the counter-rotation; this is expanded upon in Section \ref{stell_pops_text}. Alternatively, the counter-rotation could have arisen as a result of a first pass of the currently infalling galaxy.
Velocity vectorisation of the gas and stars allows us to measure the gas-stellar rotation misalignment. The gas rotation is fairly regular, with the gas rotating around the centre. In the stellar component, however, matters are complicated by the velocity of the infalling galaxy, which shifts the positive rotation vector relative to the core. If we consider only the core, the misalignment of gas and stars is 176$^{\circ}$, whereas when the entire cube is considered the misalignment is 139$^{\circ}$. This is entirely within the realm of expected values for an interacting galaxy \citep[see e.g.][]{barrera2015tracing, bryant2019sami}. This is shown in Figure \ref{gas_properties_zoom} as solid and dashed arrows for the directions of mean positive stellar and gas rotation respectively, with associated errors shown as shaded regions.
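A standard way to quantify such a misalignment is to fit a global kinematic position angle to each velocity field, for instance with the \texttt{fit\_kinematic\_pa} routine of Krajnovi\'c et al. (distributed as the \texttt{pafit} package). The sketch below is an illustration of that measurement with assumed input arrays, not our exact vectorisation procedure.

\begin{verbatim}
import numpy as np
from pafit.fit_kinematic_pa import fit_kinematic_pa

# x, y: bin coordinates relative to the galaxy centre; v_star, v_gas:
# stellar and gas bin velocities minus the systemic velocity.
# The input file is illustrative.
x, y, v_star, v_gas = np.loadtxt("bin_velocities.txt", unpack=True)

pa_star, dpa_star, vsys_star = fit_kinematic_pa(
    x, y, v_star, plot=False, quiet=True)
pa_gas, dpa_gas, vsys_gas = fit_kinematic_pa(
    x, y, v_gas, plot=False, quiet=True)

# Misalignment folded into the range 0-180 degrees.
mis = abs(pa_gas - pa_star) % 360.0
mis = min(mis, 360.0 - mis)
\end{verbatim}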
Regions of H$\alpha$ emission can be seen in the southern areas of the lower left panel of Figure \ref{gas_properties}. This forms a large arc with patches exhibiting particularly strong emission. These are seemingly matched by arcs in the north in an asymmetrical manner.
Considering the gas asymmetry and the increase in both gas velocity and velocity dispersion, a large amount of the gas can be attributed to material stripped from the outskirts of the infalling galaxy, currently in the process of accreting onto the host galaxy. The largest area of high gas velocity dispersion occurs outside the core, in a tight region south of the galaxy centre. This region indicates a quantity of gas that is not associated with the cohort gas of NGC\,7135: it marks where infalling gas is interacting with the galaxy's interstellar medium. This area of higher than expected dispersion lies in the plane of the galaxy's gas rotation, further evidence that gas is infalling, creating high velocity dispersion where in-situ gas meets ex-situ gas.
A strong presence of H$\alpha$ in concentrated regions is consistent with the picture of NGC\,7135 as a galaxy that has perhaps recently undergone star formation, as suggested by \cite{rampazzo2007}, though at low levels. Despite this, there is little to no evidence of strong ongoing star formation. This can be seen in the emission line diagnostic diagram in Figure \ref{bpt}. Almost all the sources of emission are associated with low-ionization nuclear emission-line regions (LINERs). Though a handful of active galactic nuclei (AGN) sources can be seen, they largely lie in the outer, noisier regions of the data cube, which makes the presence of true AGN sources doubtful, as shown in \cite{zaw2009galaxies}. This strong bias towards LINER emission is typical of merging systems with shock driven LINER emission \citep{monreal2010vlt, rich2011galaxy}.
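For reference, per-bin classification on the [NII] diagnostic diagram can be implemented with the standard demarcation curves, as in the sketch below. We quote the common Kauffmann et al. (2003), Kewley et al. (2001) and Schawinski et al. (2007) forms purely for illustration; the exact line equations adopted in Figure \ref{bpt} follow \cite{park2013relationship}.

\begin{verbatim}
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify one bin on the [NII]/Ha vs [OIII]/Hb (BPT) diagram."""
    # Kauffmann et al. (2003) pure star formation boundary.
    kauffmann = 0.61 / (log_nii_ha - 0.05) + 1.30
    # Kewley et al. (2001) maximum starburst line.
    kewley = 0.61 / (log_nii_ha - 0.47) + 1.19

    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann:
        return "star forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley:
        return "composite"
    # Schawinski et al. (2007) Seyfert/LINER division.
    if log_oiii_hb > 1.05 * log_nii_ha + 0.45:
        return "AGN (Seyfert)"
    return "LINER"

# Example: a bin with log([NII]/Ha) = 0.1, log([OIII]/Hb) = 0.2.
print(bpt_class(0.1, 0.2))   # -> "LINER"
\end{verbatim}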
ALMA data \citep{ueda2014cold} showing the $^{12}$CO(J=1--0) emission are overlaid in the lower left panel of Figure \ref{gas_properties_zoom}. The ALMA observations reveal a significant peak in CO emission offset from the galaxy core, with an integrated molecular gas mass of $M_{\mathrm{H2}}=(5.4\pm1.4)\times10^7M_{\sun}$ adopting $\alpha_\mathrm{CO}=4.8M_{\sun}\,\mathrm{pc}^{-2}(\mathrm{K\,km\,s}^{-1})^{-1}$ \citep{solomon1991co}. This cold gas mass would correspond to an expected SFR of only $\sim0.025M_{\sun}\,\mathrm{yr}^{-1}$ if a normal depletion time of 2\,Gyr for galaxies is assumed \citep{bigiel2011constant, leroy2013molecular}. Although there is no similarly distinct ionised gas structure observed with MUSE, there is plenty of ionised gas which may partially originate from star formation despite the LINER-like classification. The extinction-corrected H$\alpha$ flux within the central r=1$\arcsec$ is $(4\pm0.4)\times10^{-13}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{cm}^{-2}$, which would correspond to $\mathrm{SFR}=0.5\pm0.05M_{\sun}\,\mathrm{yr}^{-1}$ following \citet{kennicutt1998global}. So only 5\% of the central H$\alpha$ emission would need to be hidden among the LINER-like classified ionised gas to be in agreement with ongoing star formation. Such a low fraction would not alter the line diagnostics significantly and would remain hidden. Hence, we cannot rule out ongoing star formation based on the central cold gas mass observed by \cite{ueda2014cold}. Given the highly disturbed kinematics, the possibility that dynamical suppression of star formation is preventing cold gas collapse cannot be tested by our observations.
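The quoted numbers can be verified with a few lines of arithmetic, as sketched below. The luminosity distance adopted here is an illustrative assumption made only to reproduce the order of magnitude; the $\sim$5\% hidden fraction follows directly from the ratio of the two SFR estimates.

\begin{verbatim}
import numpy as np

# Depletion-time SFR estimate from the ALMA molecular gas mass.
m_h2 = 5.4e7            # Msun (Ueda et al. 2014)
t_dep = 2e9             # yr, typical depletion time
sfr_gas = m_h2 / t_dep  # ~0.027 Msun/yr, quoted as ~0.025

# Kennicutt (1998): SFR [Msun/yr] = 7.9e-42 L(Halpha) [erg/s].
f_ha = 4e-13            # erg/s/cm2, extinction corrected, r < 1 arcsec
d_mpc = 35.0            # illustrative luminosity distance
d_cm = d_mpc * 3.086e24
l_ha = 4.0 * np.pi * d_cm**2 * f_ha
sfr_ha = 7.9e-42 * l_ha  # ~0.5 Msun/yr for this assumed distance

# Fraction of the central Halpha needed to match the gas-based SFR.
hidden_fraction = sfr_gas / sfr_ha   # ~0.05, i.e. ~5 per cent
\end{verbatim}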
\begin{figure}
\includegraphics[width=\linewidth]{images/bpt_onecol.pdf}
\caption{An emission line diagnostic diagram \citep{baldwin1981classification} divided into the various source classes. Each bin is shown as a point according to its emission ratios of [NII]/H$\alpha$ and [OIII]/H$\beta$, allowing for the identification of regions of star formation, AGN emission or low-ionization nuclear emission-line region (LINER) emission. A detailed description of the dividing line equations can be found in \protect\cite{park2013relationship}. NGC\,7135 shows no bins where current star formation is clear in the emission. Slight overlaps outside the LINER region are unlikely to be genuine, but rather likely arise from noise and intrinsic variations. The galaxy emission is overwhelmingly LINER type.}
\label{bpt}
\end{figure}
\subsection{Stellar Population Mapping}\label{stell_pops_text}
The populations of a galaxy evolve in metallicity over time, gradually enriching with age. The exact quantities and rates of this enrichment are well known \citep{carraro1994galactic,Layden_2000,pont2004}, with the rate of enrichment directly tied to galaxy mass, resulting in the mass-metallicity relation. Thus, we can quickly establish whether a galaxy has followed the standard enrichment expected of an isolated galaxy.
In reality, galaxies are more often than not experiencing regular disturbances in the form of mergers, fly-bys and intracluster medium interactions such as ram-pressure stripping \citep{lotz2011major, sinha2012first, ebeling2014jellyfish, ventou2017muse}. One effect of this is to change the age-metallicity relation of a galaxy from its modelled form. This is most strikingly clear when a galaxy accretes material from a lower mass galaxy \citep{spolaor2009mass, leaman2013bifurcated}. Due to the lower metal enrichment rate of lower mass galaxies compared with larger galaxies, a smaller galaxy will in general exhibit far lower metallicities at late ages. Because full spectral fitting methods identify populations through age and metallicity models, the two populations appear as distinct, separate areas on an age-metallicity diagram. This depends on the mass difference of the merger, however: if two galaxies of similar mass were to merge, the separation of their populations on the age-metallicity diagram would be too small to distinguish at the current resolution of full spectral fitting methods. Using these principles we can estimate which of the populations present have accreted onto the host galaxy, and are therefore ex-situ in origin.
We apply these principles to the population maps of NGC\,7135 in order to derive the history of formation and evolution. In Figure \ref{var_regions}, nine regions are marked with sequential letters corresponding to population maps, which are similarly sequentially lettered, with maps taken from the Voronoi bin below the labelled cross. Each position marks an area of interest or standard uniformity across the maps of Figure \ref{stellar_properties} with which we can build a picture of the assembly and current status of NGC\,7135. Region `A' marks the core of NGC\,7135. Regions `B' and `C' sample the tidal tail clearly seen in the unsharp mask image (lower right panel of Figure \ref{var_regions}), with increasing galactocentric radius. Regions `D', `E', and `F' also sample with increasing galactocentric radius, however they do so outside of any prominent tidal features. These are assumed to be a `control' sample which are chosen to represent the underlying galaxy, though show signs of probing accreted material. Regions `G' and `H' sample the tidal regions opposite the tail, with `H' particularly covering unstripped remnants of the infalling galaxy. Finally region `K' covers the core of the infalling galaxy.
Starting with region `A', we see a very high metallicity, very old population associated with the galaxy core. This is to be expected and is commonly seen in galaxy cores \citep[see e.g.][]{guerou2016}. There is little obvious evidence for accreted populations as expected, as shown by the old and high metallicity population, and lack of any clear population bimodality.
Moving along the main tidal tail in region `B' we see a much younger population at high metallicity. When comparing to regions not associated with tidal features but at similar radius such as `E' and `F', we see that the population of `B' is not comparable to `E' or `F'. This is largely due to a lack of older material that would be expected to be associated with the host galaxy. Plausibly this is the result of the vast majority of the stellar material originating in the infalling galaxy and comprising the tidal tail, and thus the populations visible are instead associated with this infalling object, rather than original populations of NGC\,7135. A small amount of material is also visible as a young and metal poor population. This can be attributed to ex-situ material that merged onto either NGC\,7135 or the infalling galaxy in the past prior to the current merger, and thus shows a separate population signature.
As we move further out along the tidal tail to region `C', many of the features become more prominent. For one thing, the high metallicity population associated with the stripped material from the infalling galaxy remains. Furthermore, low metallicity ex-situ populations increase in the fraction of contributed mass (as seen as a distinctly separate low metallicity population). Care must be taken in comparison due to colour normalisation differences on the plot, however the maximum low metallicity ex-situ fraction increases from $\sim$0.5\% in `B' to $\sim$1.0\% in `C', with a higher sum total of ex-situ material. This increase is to be expected, as ex-situ material commonly increases in fraction with galactocentric radius \citep{LaBarbera12, Martin18, davison2020eagle}. It is unclear whether this ex-situ population is associated with NGC\,7135 or the infalling galaxy, however it could plausibly be from both, as models of hierarchical growth suggest both galaxies would have undergone historical minor mergers in all but the rarest cases \citep{fakhouri2010merger}. A burst of star formation is also seen in the final Gyr history. This is suggestive of a rapid star formation event, most likely triggered as a result of the galaxy interactions. Following this, no star formation is noticed in any bin. A shutdown of star formation after a major merger is discussed widely in literature \cite[see e.g.][]{bekki2001galaxy,barr2007formation,cortesi2013planetary,querejeta2015formation, Puglisi2021}.
Region `D' samples an inner region of NGC\,7135. It shows similar populations as in `A', however extends slightly to lower ages as expected following galaxy population age gradients. Little to no ex-situ material is clear. Moving further out in radius, we come to region `E'. This also shows the expected populations previously seen in `A' and `D'. This time however there is a more significant low metallicity ex-situ population, which as mentioned previously is expected as one reaches regions further from the galaxy centre according to galaxy simulations. Also prominent in region `E' is a population of intermediate age and high metallicity stars. As shown below in region `H', this is almost certainly associated with the infalling galaxy.
Region `F' samples at a slightly greater radius than `E', again with more prominent features, though in similar positions to `E'. We see an increase in the low metallicity ex-situ population radially along the tidal tail (`A', `B' and `C') and well as radially in areas not associated with tidal features (`D', `E' and `F').
The final regions sample the galaxy shell and associated infalling object. Region `G' examines an area of tidal shell seemingly also originating from the infalling galaxy. The region almost identically matches `H' which is placed to examine the outskirts of the infalling object, in regions that have yet to be stripped. The fact that these two populations are quite so similar suggests they are of the same origin, and that the tidal shells and tails are the result of scattered accreted material from the infalling galaxy.
Finally region `K' examines the core of the infalling galaxy at approximately 0.5 effective radii from the centre of NGC\,7135. It shows a highly metal rich and old population with the exact tendencies of a galaxy nucleus. It shows largely the same properties as the nucleus of NGC\,7135, though with marginally lower metallicity and a greater extent in age, suggesting a lower mass.
The velocity dispersion of region `K' (seen in Fig \ref{stellar_properties}) is at a much lower average velocity dispersion than the host galaxy, again suggesting a lower mass of the merging galaxy compared to NGC\,7135. This is curious considering its high metallicity. One explanation would be that the in-falling galaxy is the remnant of a galaxy core stripped of its halo, which would explain both its relatively high brightness and high metallicity. This is also supported by the large amounts of seemingly ex-situ gas that are seen in Figure \ref{gas_properties}, where this gas would have formed the outer regions of the infalling galaxy as explained further in section \ref{gas_props}.
The velocity dispersion (Fig \ref{stellar_properties}) increases significantly midway between the accreting galaxy core and the host galaxy core. This further lends weight to the idea that material is accreting onto the host galaxy, as the high velocity dispersion area indicates a region where accreted material begins encountering large amounts of in-situ material, and the difference in velocities becomes more evident, inflating the velocity dispersion, prior to mixing.
In summary, the population maps are indicative of three distinct galaxy populations, in which two significant merger events are present. The first is ongoing, with an intact core of a second galaxy currently in close proximity to NGC\,7135, with material being stripped off, accreted onto NGC\,7135, and creating large tidal features. These make up the high metallicity populations at intermediate ages. Yet another population is consistently present, as a low metallicity, intermediate to old aged population. As discussed previously, chemical enrichment and mass-metallicity relations mean this population is not associated with either galaxy. Therefore we attribute these stars to older historical mergers, now mixed loosely with the main populations. It is unclear which of these two present galaxies these populations accreted to, however as mentioned previously, the ex-situ population is likely present in both galaxies independently, and was captured by each prior to this ongoing merger.
\begin{figure*}
\includegraphics[width=\linewidth]{images/9_ext_map.png}
\caption{NGC\,7135 population sampling diagram. The upper nine panels display the mass-weighted age--metallicity space of NGC\,7135 for various regions. The corresponding regions are marked in the lower left panel, with crosses marking the extracted positions and the corresponding letters. The lower right panel shows the same region as an unsharp-masked image to highlight tidal features. Data for the unsharp-masked image are taken from the VST ATLAS survey \protect{\citep{shanks2015vlt}}. The diagrams build a narrative in which a recent and ongoing merger creates large tidal features in NGC\,7135. There are also populations of far lower metallicity which are well mixed within the galaxy. These populations indicate historical mergers of high merger-mass ratio.}
\label{var_regions}
\end{figure*}
\subsection{Accreted Populations}\label{acc_pop}
As seen in Figure \ref{var_regions}, many bins display a bimodality in their population distribution (see e.g. panels `B', `C', `E', `F', `G', and `H'). Such a strong separation in populations suggests stellar material obtained from more than a single source. Galaxies not associated with the main galaxy will evolve with a different metallicity owing to the mass-metallicity relation. As such, when the galaxies merge, there will be a distinct separation in the age-metallicity relation of each galaxy. The most obvious explanation for the bimodal populations seen in Figure \ref{var_regions} is the merger of a less massive, lower-metallicity galaxy onto the host galaxy or onto the infalling galaxy, beginning $\sim$10\,Gyr ago. Furthermore, the fact that the bimodality of populations is seen at almost all positions across the galaxy outside of the cores (panels `B', `C', `E', `F', `G', and `H') suggests that this material has been well mixed and is distributed throughout the galaxy, with the exception of the two galaxy cores (see panels `A', `D', and `K').
To explore the population bimodality, the fraction of stars not associated with the main host population was determined for each bin. To identify the two discontinuous populations, a dividing line was sought across each population map, following the lowest saddle points. This `path of least resistance' then divided the populations into two distinct sources: one comprising material from NGC\,7135 together with the in-situ material of the infalling galaxy, and the other comprising low-metallicity populations accreted onto both galaxies at earlier times. This can be imagined as the valley between two hills, with the dividing line taking the natural path of a river at the lowest potential. This is visualised in Figure \ref{3dpath}, where a red line shows the calculated separation path for one randomly chosen bin, separating the populations into two sources.
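
As an illustration of this procedure, the following minimal Python sketch finds such a lowest-saddle dividing path across a two-dimensional population map by dynamic programming. It is a schematic reconstruction rather than the code used for the analysis; the array orientation (metallicity increasing with row index) and the permitted step directions are our assumptions.
\begin{verbatim}
import numpy as np

def dividing_path(pop_map):
    # Minimal-cost left-to-right path across a population map
    # (rows = metallicity bins, columns = age bins); the path
    # follows the lowest "valley" between the two peaks.
    nrow, ncol = pop_map.shape
    cost = np.full((nrow, ncol), np.inf)
    back = np.zeros((nrow, ncol), dtype=int)
    cost[:, 0] = pop_map[:, 0]
    for j in range(1, ncol):
        for i in range(nrow):
            for k in (i - 1, i, i + 1):   # allow diagonal steps
                if 0 <= k < nrow and \
                   cost[k, j-1] + pop_map[i, j] < cost[i, j]:
                    cost[i, j] = cost[k, j-1] + pop_map[i, j]
                    back[i, j] = k
    path = [int(np.argmin(cost[:, -1]))]  # backtrack from cheapest end
    for j in range(ncol - 1, 0, -1):
        path.append(back[path[-1], j])
    return path[::-1]                     # divide row index per column

def accreted_fraction(pop_map, path):
    # Mass fraction below the divide; assumes metallicity grows
    # with row index, so rows < path form the low-metallicity side.
    below = sum(pop_map[:i, j].sum() for j, i in enumerate(path))
    return below / pop_map.sum()
\end{verbatim}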
\begin{figure}
\includegraphics[width=\linewidth]{images/3d_lglabel.pdf}
\caption{Population map of one bin projected on 3D axes. For each map, a dividing line is sought that separates the lower-metallicity population from the other by following low saddle points. For this example, the path is marked by a red line.}
\label{3dpath}
\end{figure}
Applying this to all bins provides a map such as that in Figure \ref{bimodal}, showing the fraction of stellar material associated with the lower-metallicity source. Figure \ref{bimodal} uses a polar view of NGC\,7135 to better examine radial features. By examining this fraction across the galaxy, we can infer regions of higher or lower concentration of the accreted material.
At the centre of NGC\,7135 we see no accreted material, suggesting the core is dominated by in-situ stars. The density of accreted material rises with radius, which is indicative of galaxy mergers depositing material in the outer regions of the galaxy. The material appears to be unevenly mixed in azimuth, with proportionally higher quantities of ex-situ material deposited between 0 and 1 radians from North. This is likely a projection effect, as the area to the south of the galaxy (the left and right extents of Figure \ref{bimodal}) aligns with the previously mentioned high-metallicity galaxy, with the stream of stellar material obscuring the host galaxy structure and dominating the spectral light.
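
The azimuthally averaged profile in the right-hand panel of Figure \ref{bimodal} can be sketched in Python as follows; the annulus width of 3 pixels is taken from the figure caption, while the centre coordinates and the treatment of unbinned (NaN) pixels are our assumptions.
\begin{verbatim}
import numpy as np

def radial_profile(frac_map, x0, y0, width=3):
    # Mean ex-situ fraction in moving circular annuli of `width'
    # pixels centred on (x0, y0); NaN pixels are ignored.
    ny, nx = frac_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - x0, y - y0)
    # keep only radii for which a complete circle fits in the image
    r_max = min(x0, y0, nx - 1 - x0, ny - 1 - y0)
    radii = np.arange(width, r_max, 1.0)
    prof = [np.nanmean(frac_map[(r >= ri - width) & (r < ri)])
            for ri in radii]
    return radii, np.array(prof)
\end{verbatim}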
\begin{figure*}
\includegraphics[width=\linewidth]{images/acc_pop_re.png}
\caption{The left panel shows a polar-oriented map of NGC\,7135. Blue colour shows the mass fraction of material not associated with the host galaxy population, with contouring shown in red-orange-yellow. The angle is measured from North (0 radians) and increases clockwise around the galaxy. Gaussian smoothing has been applied to show larger structures of ex-situ material more clearly. The radius from the centre is limited to radii for which a complete circle can be arranged within the image. The adjoining right-hand panel shows, for the same radial positions as the left panel, the mean ex-situ mass fraction over a complete circle at each radius. The mean fraction was calculated using circular annuli of radius 3 pixels with a moving average. The effective radius is taken from table 1 of \protect\cite{marino2011nearby}. The fraction of accreted material increases with radius, rising by roughly 7\% within 0.6 effective radii.}
\label{bimodal}
\end{figure*}
We can see further evidence for the division of the various populations by examining stellar mass estimates per population, determined by dividing the age-metallicity plane and applying mass-to-light ratios. We show this in Figure \ref{4pan_pops}, with three regions of different populations separated roughly. Using mass-to-light ratios from \cite{thomas2003stellar}, we estimate the stellar mass per population division, per pixel. The division labelled `1' corresponds to intermediate-age stars with high metallicity, which were associated with the infalling galaxy. This is confirmed in the first map of the Figure (panel 2), in which a noticeably higher stellar mass is associated with the infalling object for this population only. This panel also encompasses much of the stellar material of NGC\,7135 near to, though at a slight distance from, the centre, as expected from standard galaxy age gradients. Though effects from the pointing overlaps are visible, it is notable that we see a small amount of material tracing the tidal tail and other tidally derived features. This suggests that the intermediate-age material and the tidal tail are associated exclusively with the infalling galaxy, though further analysis with a higher-resolution stellar model grid would be required to verify this.
In the second labelled map (panel 3) we see that the most metal-rich and oldest material is associated predominantly with the host galaxy, with a strong gradient from the galaxy centre. This in-situ population is generally undisturbed and centrally concentrated, in comparison to the largely ex-situ population represented in the first map. Finally, in the third labelled map (panel 4), we again see a gradient of stellar mass associated with the host galaxy. This third map shows only stars at far lower metallicities than the majority of the stellar material. This material is assumed to come from low-mass objects which have historically accreted onto NGC\,7135 and are now well mixed into the galaxy. It should be noted that these are rigid divisions, and that the true population distributions from each object undoubtedly bleed over into the other divided regions (especially regions `1' and `2').
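
Schematically, the per-population mass maps can be produced as in the following Python sketch, assuming a cube of fitted SSP weights and tabulated mass-to-light ratios; the array shapes, box boundaries and normalisations used for Figure \ref{4pan_pops} are illustrative assumptions, not the published values.
\begin{verbatim}
import numpy as np

def mass_per_population(weights, m2l, lum, boxes):
    # weights : (n_age, n_Z, ny, nx) light-weighted SSP weights
    # m2l     : (n_age, n_Z) mass-to-light ratios (e.g. Thomas
    #           et al. 2003); lum : (ny, nx) luminosity per spaxel
    # boxes   : list of (age_slice, z_slice) dividing the plane
    mass = weights * m2l[:, :, None, None] * lum[None, None, :, :]
    return [mass[a, z].sum(axis=(0, 1)) for a, z in boxes]

# e.g. three rough (hypothetical) divisions as in panel 1:
# boxes = [(slice(4, 8), slice(6, None)),    # intermediate age, high Z
#          (slice(8, None), slice(6, None)), # old, high Z
#          (slice(None), slice(0, 3))]       # low Z at all ages
\end{verbatim}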
\begin{figure*}
\includegraphics[width=\linewidth]{images/4_panel2.pdf}
\caption{The first panel shows a general galaxy age-metallicity map. This is divided by the red boxes into 3 groups of populations to examine the mass associated with each area. Panel labels correspond to the numbers on the age-metallicity map. The maps show the divided nature of the populations, in which the intermediate-age, high-metallicity population is more strongly associated with the infalling object and tidal features, whilst the older metal-rich population is associated with the host galaxy.}
\label{4pan_pops}
\end{figure*}
\section{Discussion}
Analysis of the kinematics and gas of NGC\,7135 yielded evidence for both historical galaxy mergers and an ongoing disruptive major merger. Although the kinematic signatures of past mergers are hidden (at the available data resolution) owing to mixing over time, ex-situ populations were extracted from the galaxy using full spectral fitting. This allowed the identification of a well-mixed, low-metallicity stellar population alongside the larger fraction of higher-metallicity stars. Considering expected enrichment patterns, this can only have occurred if gas or stars (or both) originating in an ex-situ galaxy rapidly accreted or fully merged onto NGC\,7135. The lower metal content of this population made it distinct from the original population.
Potentially, all the stellar material in this population could have been created in-situ from gas accreted from another galaxy. This is highly unlikely, however, considering the specificity of the age and metallicity of the two distinct populations. Were these stars the product of newly infalling gas, we would expect to see a mixing of the gas, and the metallicity of stars born after the merger event to lie at a more intermediate value. Instead, we see the two populations continuing to form stars without a sharp change in metallicity; thus the lower-metallicity population stars are considered to have been born ex-situ.
The bimodality of these stars allowed for a clean separation of the ex-situ and in-situ populations, so that the relative fraction of ex-situ material could be ascertained. This allowed the exploration of the ex-situ fraction with galactocentric radius, as shown in Figure \ref{bimodal}. The Figure shows a clear preference for ex-situ material to be located at the outer edges of the galaxy, with no detectable ex-situ material in the centre. This matches simulation results showing the same increase in ex-situ fraction with galactocentric radius \citep{schaye1014,Crain2015,rodrgom2016,davison2020eagle}, as well as observational studies showing the same behaviour \citep{forbes2011,Pillepich_2015,Oyarz_n_2019}. The mean ex-situ fraction measured for NGC\,7135 at approximately 0.6\,effective radii (the greatest extent captured by the MUSE image) is 7\%. This is only representative of the low-metallicity populations from low-mass systems. Higher-metallicity populations originating in mergers of smaller mass ratio would be disguised amongst in-situ populations.
Limitations of this technique largely arise from the ability to separate populations. At the current resolution of full spectral fitting techniques, populations must be wholly distinct in metallicity to be separable from the host population. Accreted material with age and metallicity similar to those of the host galaxy would be largely indistinguishable from the main population. A further limitation is the inability to directly distinguish between stars born ex-situ and stars born in-situ out of ex-situ material. As discussed above, these limitations are unlikely to be dominant in this scenario.
One interesting area to consider is the eventual fate of NGC\,7135. Will it retain some semblance of spiral structure, or evolve into an S0 or elliptical galaxy? Conversion into an S0 galaxy seems a distinct possibility, as S0 galaxies with coherent disk kinematics form through merger mechanisms, though the exact merger specifics continue to be debated within the community. Some evidence suggests that S0 galaxies are rarely expected to form through major mergers ($<$4:1 merger ratio) \citep{bournaud2005galaxy, lofthouse2016major}, with the conclusion that major mergers are a plausible but non-dominant mechanism for early-type formation. Conversely, other arguments suggest that S0 galaxies can indeed be formed from major mergers \citep{querejeta2015formation, querejeta2015formation2}. Furthermore, major mergers can be shown to give rise to much of the inner structure often found in early types \citep{eliche2018formation}. Perhaps the most consistent requirement for the formation of an S0 via mergers is a misalignment of angular momentum between the in-situ and accreted ex-situ baryonic components \citep[see e.g.][]{sales2012origin}. Considering the existing baryonic misalignment present in NGC\,7135 in the form of a counter-rotating disk, and the seemingly misaligned orbit of the ongoing merger, it is perhaps likely that the ongoing disruption will drive NGC\,7135 towards an S0 morphology. Plausibly, the kinematics would increasingly resemble those of general spheroidal galaxies, as newly formed stars with angular momentum opposing the mean, together with recently accreted stars, would begin to reduce the kinematic coherence. Though this is a distinct possibility, the true future of NGC\,7135 will remain unknown until more decisive techniques and modelling are developed. Given the complex recent history of NGC\,7135, any prediction of its future evolution remains speculative.
\section{Conclusions}
We have used a Voronoi-binned map of NGC\,7135 to explore stellar kinematic features such as velocity and velocity dispersion, as well as the distributions of stellar properties such as age and metallicity. Gas properties were also explored in regular bins, with both kinematic gas properties and the gas distribution investigated. The gas was shown to be counter-rotating with respect to the stellar material, with significant evidence of disturbance in the galaxy core. This, along with the population evidence, reveals a galaxy currently merging onto NGC\,7135. Despite gas being present, little to no current star formation was identified. ALMA data of the galaxy core point to a star formation rate of only $0.025M_{\sun}\,\mathrm{yr}^{-1}$, assuming normal depletion times. Strong LINER emission likely obscures emission associated with star formation, and as such a higher SFR cannot be ruled out.
During population analysis of NGC\,7135, based on data provided by the SOSIMPLE project, we have identified both historic and ongoing merger activity. This was achieved using a `full spectral fitting' method to disentangle strong bimodalities in the stellar populations. We show that in a snapshot of a `single' galaxy, we are in reality witnessing the product of three distinct galaxy populations.
An ongoing merger or large accretion event is clear from the stellar kinematic maps, which show a distinct area of stellar material not associated with the host galaxy but interacting with the galaxy structure. Likewise, in the gas maps we see large velocity dispersion in areas where ex-situ infalling gas interacts with in-situ gas.
At least one large historical merger event took place 6-10\,Gyr ago according to the star-formation history derived by full spectral fitting. This potentially provided less enriched gas from which NGC\,7135 birthed stars of lower metallicity; however, the timeline of stellar ages, matched with the likely merger date, makes it highly likely that most, if not all, of the stars belonging to this population are ex-situ stars originating in another galaxy. Considering there is no discernible change in the metallicity of host-population stars born after the merger, we assume that all lower-metallicity population stars are ex-situ in origin. The timeline of the star formation history suggests that this merger caused a general shut-down of star formation in NGC\,7135 not long after the merger event.
We calculated the fraction of ex-situ material as a function of galactocentric radius, finding a steep increase in ex-situ material towards the outskirts of the galaxy. The centre of the galaxy exhibits no signs of ex-situ material, whilst by 0.6 effective radii this fraction reaches 7\%. This is in line with literature expectations of `two-phase' galaxy assembly, seen both observationally and in simulations, where ex-situ material is preferentially deposited in the outskirts of a galaxy.
Many more SOSIMPLE galaxies are available from the survey, with much left to explore.
\section{Acknowledgements}
Many thanks to an anonymous referee for useful comments. This work was completed with the support of the ESO studentship program and the Moses Holden Scholarship. BH acknowledges support by the DFG grant GE625/17-1 and DLR grant 50OR1911. Based on observations collected at the European Southern Observatory under ESO programme 0103.A-0637(A).
\section{Data Availability}
The data described in this article are accessible via the ESO archive of MUSE data.
\bibliographystyle{mnras}
\section{Introduction}
The flux transport dynamo model (FTD) attempts to explain the large-scale
evolution of the Sun's magnetic field. The central ideas behind the model are
that poloidal flux is wound up by differential rotation until it becomes
sufficiently strong that magnetic buoyant flux tubes emerge through the solar
surface. The erupted field is in the form of a bipolar active region, and the
two opposite polarities are observed to be systematically tilted with respect
to the equator (Joy's law). This tilt is such that the leading polarity is
slightly closer to the equator than the following polarity. This latitudinal
offset means that poloidal field has been created from the toroidal flux and,
in the language of dynamo theory, the emergence process is a non-local alpha
effect \citep{Kitchatinov11b}. The poloidal flux is then stretched and diffused
by surface motions, reversing the polar fields and completing one half of a
solar cycle. For a review of the basic ideas, see \citet{Charbonneau10}. This
picture has recently gained observational support from the analysis by
\citet{Dasi-Espuig10} and later by \citet{Kitchatinov11a}, which show that the
observed sunspot group tilt angles, which go into the construction of the
poloidal source term, vary systematically from cycle to cycle in a way which
possibly can explain the observed changes in cycle amplitudes during the
twentieth century.
The winding up of the field by differential rotation and the rise of the tubes
to the surface are hidden below the photosphere. The evolution of the field
after it has broken through the surface can be and has been observed. The
surface flux transport model (SFT) has been found to provide a good description
of the large-scale evolution after emergence. For a detailed historical
account, see \citet{Sheeley05}. This model assumes that the magnetic field is
purely radial at the surface and evolves passively, driven by surface flows
including differential rotation, meridional circulation and small-scale
convective motions (granulation and supergranulation). The small-scale motions
essentially cause the magnetic field to undergo a random walk and hence can be
treated as a diffusive term. The ingredients which go into the SFT model are
all observable, as is the output of the model -- it is thus tightly constrained
and supported by observations. For example, \citet{Cameron10} showed that the
SFT model, with the observed cycle-to-cycle variations of the tilt angle, can
reproduce the inferred open magnetic flux of the Sun. Since the open flux
during the maxima and minima of activity reflect the equatorial and axial
dipole moments respectively, the model's ability to reproduce the open flux
over an extended period is a strong test of the model.
In this paper we investigate what constraints can be inferred for the FTD
model, given that it should also reproduce the same surface dynamics as is
described by the SFT model. We have used the FTD code developed at the
Max-Planck-Institut f\"ur Sonnensystemforschung. For
the SFT model we have used a 1-D surface flux transport model developed at the
MPS. The 1-D SFT model includes exactly the component which can be compared
between the two models. The details of the two approaches will be discussed in
Sect.~2. In Sect.~3 we present the results of the simulations and compare the
two models. The effect of varying some of the most important unconstrained
parameters and the boundary condition will be discussed in Sect.~4. We will
conclude in Sect.~5 with the finding that the appropriate boundary condition
for FTD models is that the field is vertical at the surface, and that a certain
amount of turbulent pumping must be included for the FTD simulations to mimic
the surface behavior of the SFT model and thus match the observations.
Downward pumping has in particular been discussed for the base of the solar
convection zone. Direct numerical simulations show a downward transport of
large-scale magnetic field near the base of convectively unstable layers
\citep[e.g.,][]{Jennings92,Tobias98,Tobias01,Ossendrijver02}, though it is not clear
whether this should be interpreted in terms of turbulent pumping
\citep{Zeldovich57,Raedler68} or of topological pumping \citep{Drobyshevski74}.
In mean-field dynamo models of the solar cycle turbulent pumping is often not
included. In those cases where it is included, most of the attention is focussed
on its role in transporting flux into the top of the stable layer immediately
below the convection zone
\citep[see for example][]{Brandenburg92, Kaepylae06, DoCao11,Kitchatinov11b,Kitchatinov12}.
The effect of pumping throughout the convection zone was considered by
\citet{Guerrero08}, who found that the pumping affects
whether the preferred mode of the solution is dipolar or quadrupolar, and identified
the possible importance of radial transport by the pumping in the dynamo process.
In this paper we pay particular attention to the pumping in the near-surface
boundary layer \citep{Miesch11}, which is necessary for the FTD model to match
SFT simulations of the magnetic flux at the solar surface.
\section{Physical models and numerical codes}
\subsection{The Flux Transport Dynamo (FTD) model}
The flux transport dynamo equations describe the induction, advection and
diffusion of a large-scale magnetic field. Their axisymmetric form is:
\begin{eqnarray}
\frac{\partial A}{\partial t} &=& \eta(r)\left(\nabla^2-\frac{1}{(r\sin \theta)^2}\right)A \nonumber\\
& &-\frac{{\vec{u}}_{\mathrm{m}}(r,\theta) + {\vec{u}}_{\mathrm{p}}(r,\theta)}{r\sin\theta}\cdot\nabla \left(A r \sin \theta \right)
+\alpha (B)
\label{eqn:A} \\
\frac{\partial B}{\partial t} &=& \eta(r)\left(\nabla^2-\frac{1}{(r\sin \theta)^2}\right)B
+ \frac{1}{r}\frac{\partial \eta}{\partial r} \frac{\partial rB}{\partial r} \nonumber \\
& & {}- r \sin\theta \left({\vec{u}}_{\mathrm{m}}(r,\theta) +{\vec{u}}_{\mathrm{p}}(r,\theta) \right)\cdot\nabla
\left(\frac{B}{r \sin \theta}\right) \nonumber\\
& &- B\,\nabla \cdot \left(\vec{u}_{\mathrm{m}}(r,\theta) + \vec{u}_{\mathrm{p}}(r,\theta)\right) \nonumber \\
& & {}+ r \sin \theta \left(\nabla \times \left(A {\vec{\hat e}}_\phi\right)\right) \cdot \nabla \Omega(r,\theta)
\label{eqn:B}
\end{eqnarray}
where $A(r,\theta)$ is the $\phi$-component of the vector potential associated
with the poloidal components of ${\vec{B}}$, $B(r,\theta)$ is the toroidal
component of the field, ${\vec{u}}_{\mathrm{m}}(r,\theta)$ is the velocity in the meridional plane,
$\Omega(r,\theta)$ is the angular velocity,
$\vec{u}_{\mathrm{p}}(r,\theta)$ is a velocity field corresponding
to the pumping of the magnetic field
and $\alpha$ is a source term in the equation
for $A$ corresponding to the generation of poloidal flux from toroidal flux.
Since the purpose of the current study is to compare the response of the
SFT and FTD models to equivalent sources of poloidal flux, we restrict ourselves
to the case $\alpha=0$ -- for other choices of $\alpha$ we would need to modify the
source term in the SFT model accordingly.
In relation to the term $\vec{u}_{\mathrm{p}}$ it is important to note that,
as in \cite{Guerrero08}, it does not correspond
to a true motion of the fluid and need not satisfy $\nabla \cdot \rho \vec{u}_{\mathrm{p}}=0$.
Rather it is a parametrization of the effect of the turbulent motions on the field:
for diamagnetic pumping it has the form $\vec{u}_{\mathrm{p}}(r,\theta)=-\frac{1}{2}\nabla \eta$.
Other effects, such as topological pumping, are also expected to transport the field downwards,
and for this study we assume that the combined effects of the turbulent convection,
including diamagnetic pumping, can be written in the form
$\vec{u}_{\mathrm{p}}(r,\theta)=-\frac{k}{2}\nabla \eta$, with $k \ge 1$. This choice allows us to
vary the magnitude of the pumping in the near surface layers.
We solve the dynamo equations (\ref{eqn:A}) and (\ref{eqn:B}) forward in time
in a spherical shell $r_0\le r\le R_{\sun}$ with inner boundary
$r_0=0.65R_{\sun}$ matching to a perfect conductor and outer boundary matching
to either a radial field or vacuum conditions outside. This leads to the boundary conditions
\begin{eqnarray}
A=0 \quad \mathrm{and} \quad \frac{\partial}{\partial r}(rB)=0 \quad \mathrm{at} \quad r=r_0
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial}{\partial r}(rA)=0 \quad \mathrm{and} \quad B=0 \quad \mathrm{at} \quad r=R_{\sun}
\end{eqnarray}
for the field to be vertical at the Sun's surface, or alternatively
\begin{eqnarray}
A&=&\sum_k a_kP_k^1(\cos\theta),\\
\frac{\partial A}{\partial r}&=&-\sum_k (k+1)a_kP_k^1(\cos\theta)
\quad \mathrm{and} \\
B&=&0 \quad \mathrm{at} \quad r=R_{\sun}
\end{eqnarray}
for matching to a potential field outside. At the poles we require regularity
resulting in
\begin{eqnarray}
A=B=0 \quad \mathrm{at} \quad \theta=0,\pi \;.
\end{eqnarray}
The equations are discretized using second order accurate centered
finite differences on an equidistant grid and forwarded in time
with an Alternating Direction Implicit scheme for the diffusion terms and an
explicit scheme for the induction and advection terms. The code is tested
against the dynamo benchmark of \citet{Jouvre08}.
As noted above, we consider $\alpha=0$ so that there is no source of
poloidal field during the simulation. From any initial condition the field must
then eventually decay towards zero, however at any finite time the magnetic
field will depend on the initial field and can be compared with the result of
the SFT model.
For the initial condition we take
\begin{eqnarray}
A &=& \frac{1}{8}R_{\sun}\left(1+{\mathrm{erf}}\left(\frac{r-r_1}{\Delta r}\right)\right) \times
\left(1+{\mathrm{erf}}\left(\frac{\theta-\theta_1}{\Delta \theta}\right)\right) \times \nonumber \\
& & \hspace{1cm} \left(1-{\mathrm{erf}}\left(\frac{\theta-\theta_2}{\Delta \theta}\right)\right), \\
B&=&0,
\end{eqnarray}
where $r_1=0.80 R_{\sun}$, $\theta_1=80^{\circ}$, $\theta_2=86^{\circ}$,
$\Delta \theta=2.9^{\circ}$ and $\Delta r=0.01 R_{\sun}$. This corresponds to
an isolated bipole emerging on the solar surface slightly north of the equator.
Since both the SFT and FTD models studied here are linear, the evolution of such a
bipole is independent of the emergence and evolution of other emerging groups.
To check that the models are consistent, it is thus sufficient to follow the
evolution of a single feature starting near the equator to the poles.
The velocity in the meridional plane is taken from \citet{Dikpati04}. The
velocity components can be written in terms of a stream function as
\begin{eqnarray}
{\vec{u}}_{\mathrm{m}}(r,\theta)&=& \frac{\varv_0}{\rho} \frac{1}{r\sin\theta} \frac{\partial\Psi\sin\theta}{\partial\theta}\,{\vec{\hat e}}_r
-\frac{\varv_0}{r \rho} \frac{\partial r \Psi}{\partial r}\,{\vec{\hat e}}_\theta \;,
\label{eqn:ur_start}
\end{eqnarray}
where
\begin{eqnarray}
\xi &=& \frac{R_{\sun}}{r}-0.985 \;, \\
\rho &=&\xi^m
\end{eqnarray}
and
\begin{eqnarray}
\Psi(r,&\theta&) = \frac{R_{\sun}}{r} \times \nonumber\\
& & \left( \frac{-1}{m+1}\xi^{{m+1}}
+\frac{c_1}{2m+1}\xi^{2m+1}-\frac{c_2}{2m+p+1}\xi^{2m+p+1} \right) \nonumber\\
& &\times \sin^{q+1}\theta\cos\theta
\end{eqnarray}
with
\begin{eqnarray}
c_1 = \frac{(2m+1)(m+p)}{(m+1)p} \xi_0^{-m} \;,
\end{eqnarray}
\begin{eqnarray}
c_2 = \frac{(2m+p+1)m}{(m+1)p}\xi_0^{-(m+p)} \;.
\label{eqn:ur_end}
\end{eqnarray}
For the reference case we take $q=1.5$, $m=1.5$, $p=3$, $r_0=0.7\,R_{\sun}$ (here the penetration depth of the flow) and
$\xi_0=\xi(r_0)$. In all cases $\varv_0$ is chosen so that the maximum meridional
velocity at $r=R_{\sun}$ is 15 m/s. The resulting velocity approximates the
meridional circulation in the solar convection zone derived by numerical
modeling of \citet{Rempel05} and \citet{Kitchatinov05} and is consistent with
the velocity in the subsurface layers as derived from helioseismology
\citep[as measured, e.g. by][]{Giles97}.
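
For reference, the flow of Eqs.~(\ref{eqn:ur_start})--(\ref{eqn:ur_end}) can be evaluated with the short Python sketch below. The stream-function derivatives are taken numerically by centred differences (our simplification), and the normalisation of $\varv_0$ to a 15~m/s surface maximum is left to the user.
\begin{verbatim}
import numpy as np

R_sun = 6.96e5          # km
v0 = 1.0                # rescale so max |u_theta(R_sun)| = 15 m/s

def xi(r):
    return R_sun / r - 0.985

def psi(r, th, m=1.5, p=3.0, q=1.5):
    # stream function Psi(r, theta), with r0 = 0.7 R_sun
    xi0 = xi(0.7 * R_sun)
    c1 = (2*m + 1) * (m + p) / ((m + 1) * p) * xi0**(-m)
    c2 = (2*m + p + 1) * m / ((m + 1) * p) * xi0**(-(m + p))
    x = xi(r)
    rad = (-x**(m + 1) / (m + 1) + c1 * x**(2*m + 1) / (2*m + 1)
           - c2 * x**(2*m + p + 1) / (2*m + p + 1))
    return (R_sun / r) * rad * np.sin(th)**(q + 1) * np.cos(th)

def u_m(r, th, m=1.5, h=1e-4):
    # (u_r, u_theta) from the stream function; rho = xi(r)^m
    rho = xi(r)**m
    dth, dr = h, h * R_sun
    dpsin = (psi(r, th + dth) * np.sin(th + dth)
             - psi(r, th - dth) * np.sin(th - dth)) / (2 * dth)
    drpsi = ((r + dr) * psi(r + dr, th)
             - (r - dr) * psi(r - dr, th)) / (2 * dr)
    return (v0 / rho * dpsin / (r * np.sin(th)),
            -v0 / (r * rho) * drpsi)
\end{verbatim}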
The differential rotation is taken from \citet{Belvedere00}, and is also used
e.g. by \citet{Kitchatinov11a},
\begin{eqnarray}
\Omega(r,\theta) = \sum_{j=0}^2 \cos\left(2j\left(\frac{\pi}{2}-\theta\right)\right)\,\sum_{i=0}^4 c_{ij} r^i
\end{eqnarray}
where the coefficients $c_{ij}$ are given in Table 1 of \citet{Belvedere00}.
This approximates the internal rotation of the Sun as derived from
helioseismological inversions \citep[as reported, e.g., by][]{Schou98}.
For the diffusivity we assumed
\begin{eqnarray}
\eta(r) &=&\eta_0 +\frac{\eta_1-\eta_0}{2}\left(1+{\mathrm{erf}}\left(\frac{r-0.7R_{\sun}}{0.02 R_{\sun}}\right)\right) \nonumber\\
& &+\frac{\eta_2-\eta_1}{2}\left(1+{\mathrm{erf}}\left(\frac{r-0.95R_{\sun}}{0.02 R_{\sun}}\right)\right)
\label{eqn:eta}
\end{eqnarray}
with $\eta_0=0.1$~km$^2$s$^{-1}$, $\eta_1=10$~km$^2$s$^{-1}$ and
$\eta_2=250$~km$^2$s$^{-1}$, see e.g. \citet{Munoz-Jaramillo11}. Here $\eta_2$
represents the turbulent diffusivity in the near-surface layers, $\eta_1$ in
the bulk of the convection zone, and $\eta_0$ in the overshoot region at the
base of the convection zone. Other choices will be considered in Sect.~4.
Recently \citet{Kitchatinov11b} have highlighted the importance of downward
pumping of magnetic fields due to gradients in the turbulent diffusivity, and
have argued that this is particularly important near the base of the convection
zone. We here consider downward pumping in the near-surface layers. We have
found it necessary to increase the strength of the downward pumping from its
usual value of $(1/2) (\partial\eta/\partial r)$ in order to obtain a match
between the FTD and SFT models. We have therefore introduced in
Eqs.~(\ref{eqn:A}) and (\ref{eqn:B}) a scaling factor $k$ which we have varied
between 0 and 20. The diffusivity profile and the corresponding diamagnetic
pumping velocity with $k=1$ is shown in Fig.~\ref{fig:eta}.
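
A direct transcription of Eq.~(\ref{eqn:eta}) and of the scaled pumping velocity is given below as a Python sketch; the value of the solar radius in km and the finite-difference step are our choices. For $k=1$ it reproduces the peak pumping speed of order 5~m/s shown in Fig.~\ref{fig:eta}.
\begin{verbatim}
import numpy as np
from scipy.special import erf

R_sun = 6.96e5                         # solar radius in km
eta0, eta1, eta2 = 0.1, 10.0, 250.0    # km^2/s

def eta(r):
    # two-step erf profile: overshoot region, bulk, surface layer
    return (eta0
        + 0.5*(eta1 - eta0)*(1 + erf((r - 0.70*R_sun)/(0.02*R_sun)))
        + 0.5*(eta2 - eta1)*(1 + erf((r - 0.95*R_sun)/(0.02*R_sun))))

def u_pump(r, k=1.0, dr=10.0):
    # u_p = -(k/2) d(eta)/dr in km/s, centred difference (dr in km)
    return -0.5 * k * (eta(r + dr) - eta(r - dr)) / (2 * dr)

r = np.linspace(0.65*R_sun, R_sun, 2000)
print(np.abs(u_pump(r)).max() * 1e3)   # peak |u_p| for k=1, ~5 m/s
\end{verbatim}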
While we solve both Eqs.~(\ref{eqn:A}) and (\ref{eqn:B}),
we note that the comparison with the SFT model only depends on
Eq.~(\ref{eqn:A}) as $\alpha=0$.
The physical ingredients which affect $A$ are the meridional flow, the
radial and latitudinal diffusion, and the downward pumping.
\begin{figure}[h]
\includegraphics[scale=0.5]{eta_profile.eps}
\caption{The assumed profile of the turbulent diffusivity is shown in black, the effective radial velocity
due to the radial derivative of the turbulent diffusivity, for the case $k=1$, is shown in red.}
\label{fig:eta}
\end{figure}
\subsection{The Surface Flux Transport (SFT) model}
The SFT model, which describes the evolution of the magnetic field on the solar
surface, assumes that the field is vertical and evolves passively under the
action of the surface flows. The surface differential rotation and surface
meridional flow towards the pole are modeled as systematic flows, while
granular and supergranular flows are assumed to only cause the fields to
diffuse across the solar surface. In this sense correlations between the radial
component of the magnetic field $B_r$ and the supergranular velocity field
$U_{\mathrm{SG}}$ are ignored, i.e. it is assumed that $\langle U_{\mathrm{SG}} B_r
\rangle=\langle U_{\mathrm{SG}}\rangle \langle B_r \rangle$, and since
differential rotation and the meridional flow have been removed, $\langle
U_{\mathrm{SG}}\rangle=0$. This assumption is not justified since the magnetic
field and supergranular velocity fields are correlated, as the magnetic field
is located at the edge of the supergranules. This presumably accounts for the
observation by \citet{Meunier05} that magnetic fields rotate faster than the
local plasma, with the extent of the prograde motion depending on the technique
used to measure the velocity. In the current context this is a small effect
which can be ignored.
The SFT model additionally assumes that there is no transport of flux, either
advective or diffusive, across the solar surface. The relevant equation is
\begin{eqnarray}
\frac{\partial B_r}{\partial t} &=&
- \omega(\theta)\frac{\partial B_r}{\partial\phi}
- \frac{1}{R_{\sun}\sin\theta}\frac{\partial}{\partial\theta}\left[\varv(\theta)B_r \sin\theta \right] \nonumber \\
& & {}+ \frac{\eta}{R_{\sun}^2}\left[\frac{1}{\sin \theta}\frac{\partial}{\partial\theta}
\left(\sin\theta\frac{\partial B_r}{\partial\theta}\right)
+ \frac{1}{\sin^2\theta}\frac{\partial^2 B_r}{\partial\phi^2}\right]
\end{eqnarray}
where $B_r$ is the radial component of the magnetic field, $\theta$ is the
heliographic colatitude, and $\phi$ is the heliographic longitude.
$\omega(\theta)$ is the surface differential rotation and $\varv(\theta)$ is the
surface meridional flow. For the purposes of comparison with the FTD
simulation, we take $\varv(\theta)={\vec{u}}_{\mathrm{m}}(R_{\sun},\theta)\cdot{\vec{\hat
e}}_\theta$, $\omega(\theta)=\Omega(R_{\sun},\theta)$, and
$\eta=\eta(R_{\sun})=250$~km$^2$/s.
For comparison with the FTD simulation, we can only use the azimuthally
averaged (signed) field strength. This averaged field is independent of the
initial structure of the field in the azimuthal direction and hence we can take
\begin{eqnarray*}
B_r(\theta)=\frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}(A\sin\theta)
\end{eqnarray*}
as our initial condition, consistent with the initial condition of the FTD
simulation. The solution to this one dimensional problem, $B_r(\theta,t)$, can
be directly compared to $R_{\sun}(\partial/\partial\theta)(A\sin\theta)$ from
the FTD simulation. We have used the code described in \citet{Cameron07} to
solve this 1-D surface flux transport problem.
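
For orientation, a minimal explicit update of this azimuthally averaged equation is sketched below in Python (the differential-rotation term drops out of the azimuthal average). The grid layout, the one-sided treatment near the poles and the time step are our simplifications, not the scheme of the code of \citet{Cameron07}.
\begin{verbatim}
import numpy as np

def sft_step(B, th, v, eta, R, dt):
    # one explicit Euler step; th excludes the poles and dt must
    # satisfy the diffusive limit dt < (R*dth)**2 / (2*eta)
    s = np.sin(th)
    dth = th[1] - th[0]
    def ddth(f):              # centred d/dtheta, one-sided at ends
        g = np.empty_like(f)
        g[1:-1] = (f[2:] - f[:-2]) / (2*dth)
        g[0] = (f[1] - f[0]) / dth
        g[-1] = (f[-1] - f[-2]) / dth
        return g
    advection = -ddth(v * B * s) / (R * s)
    diffusion = eta / R**2 * ddth(s * ddth(B)) / s
    return B + dt * (advection + diffusion)
\end{verbatim}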
\section{Reference case}
In Fig.~\ref{fig:ft} the evolution of the surface flux according to the SFT
model is displayed. Figure~\ref{fig:vert} compares the latitudinal dependence
of the surface field in the different FTD models with the vertical boundary
condition against that from the SFT model. We note that for
$k\gtrsim5$, both FTD and SFT models match very well. For $k=0$ the match is
much worse, e.g. there is too little flux in the southern hemisphere ($\theta > 90^{\circ}$)
at $t=72$ months. In the northern hemisphere at $t=18$ months, the amplitude of
the field in the FTD model is greater for both polarities. By $t=72$ months the
amplitude of the field in the southern hemisphere has also fallen as the
opposite polarities are merging. This implies that downward pumping
corresponding to at least $k=5$ is required for the FTD model to be consistent
with the SFT model and therefore with observations.
\begin{figure}[h]
\includegraphics[scale=0.7]{ft.eps}
\caption{Evolution of the azimuthally averaged signed field strength from the
SFT simulation, with black and white representing opposite polarities saturated
at 36\% of the initial azimuthally averaged field strength. The solid and dashed
red contours indicate where the field strength reaches $\pm 5$\%, $\pm 10$\%, etc.\
of its maximum value, with the dotted curve representing the zero level.}
\label{fig:ft}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.5]{yr2_vert_month18.eps}
\includegraphics[scale=0.5]{yr2_vert_month72.eps}
\caption{Azimuthally averaged signed field from the SFT model (black) vs
surface field from FTD simulations at $t=18$ months (top)
and $t=72$ months (bottom) with a vertical
field outer boundary and pumping with factors $k=5$ (red),
2 (blue), 1 (green), and 0 (yellow).}
\label{fig:vert}
\end{figure}
The reason why downward pumping is important can be seen in
Fig.~\ref{fig:fl_vert} where, in the case without pumping, the diffusive
emergence of flux through the upper boundary is obvious. This emergence of flux
is strongly inhibited by the downward pumping. The requirement that the
downward pumping should inhibit the diffusion of flux across the surface is
captured by the corresponding magnetic Reynolds number $R_m$ being larger than
1:
\begin{eqnarray*}
R_m&=&\frac{|{\vec{u}}_\mathrm{p}|L}{\eta} \nonumber\\
&=&\frac{(k/2)(\partial\eta/\partial r)L}{\eta} \nonumber\\
&\approx&\frac{(k/2)[(\eta_2-\eta_1)/L]L}{(\eta_2+\eta_1)/2}\nonumber\\
&\approx& k \quad \mathrm{for} \quad \eta_1\ll\eta_2 \;.
\end{eqnarray*}
Here ${\vec{u}}_{\mathrm{p}}$ is the pumping velocity and $L$ is the boundary layer
thickness corresponding to the region over which $\eta$ changes from its value
throughout the bulk of the convection zone $\eta_1$ to its surface values
$\eta_{2}$. Basing $R_m$ on the mean over this transition yields $R_m\approx k$
when $\eta_1\ll\eta_2$. To prevent diffusive transport, we require $R_m\gg1$,
which for our purposes appears to be achieved by $R_m\approx k\gtrsim5$. This
argument also shows that, for the chosen diffusivity profile, the downward
pumping velocity needs to be of the order of 25~m/s. In reality, this pumping
can be due to a mixture of turbulent and topological effects,
and the choice of the form for ${\vec{u}}_{\mathrm{p}}$ is not critical.
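
The order-of-magnitude estimate can be made explicit in a few lines of Python; for the reference profile, the peak gradient of the erf transition gives $u_{\mathrm{p}}\approx24$~m/s at $k=5$, consistent with the value quoted above.
\begin{verbatim}
import math

eta1, eta2 = 10.0, 250.0        # km^2/s
Delta = 0.02 * 6.96e5           # erf width of the transition, km

# peak gradient of the erf profile and resulting pumping speed
deta_dr = (eta2 - eta1) / (math.sqrt(math.pi) * Delta)   # km/s
for k in (1, 2, 5):
    u_p = 0.5 * k * deta_dr                              # km/s
    Rm = k * (eta2 - eta1) / (eta1 + eta2)               # ~ k
    print(k, round(1e3 * u_p), round(Rm, 1))             # m/s, R_m
\end{verbatim}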
\begin{figure}[h]
\includegraphics[scale=0.9,angle=90]{field.010.ps}
\includegraphics[scale=0.9,angle=90]{field.013.ps}
\caption{Magnetic field structure from the FTD simulations at $t=72$ months for the case with
a vertical boundary condition and $k=0$ (top) and $k=5$ (bottom). In each subpanel
the left half shows contours of the toroidal field ($T$), the right half shows selected field lines of the poloidal field
(formally, it shows contours of $P=r \sin \theta A(r,\theta)$). The dashed contours of the toroidal field indicate
negative fields; the solid contours represent either zero or positive toroidal field. In particular,
the solid contours which touch the boundaries correspond to zero toroidal flux.}
\label{fig:fl_vert}
\end{figure}
\section{Effects of varying the diffusivity and meridional velocity}
In this section we briefly discuss four variations to the above reference case.
Explicitly, we consider one simulation with a potential field boundary
condition, one with a different diffusivity profile, one with anisotropic
diffusivity, and one with a different meridional flow profile.
\subsection{Potential field boundary condition}
Figure~\ref{fig:pot} shows the evolution of the field from the FTD simulations
with a potential field boundary condition. The SFT result is again shown for
reference. With this boundary condition we see that the match is always poor.
This is because there is now a strong diffusive flux across the solar surface,
corresponding to the retraction of field lines, as can be seen in
Fig.~\ref{fig:fl_pot}. Hence for the FTD to be consistent with the SFT model,
we need strong downward pumping ($k \gtrsim 5$) and a vertical boundary
condition. These two requirements correspond directly to the assumptions of the
SFT model, that the only sources are those which are explicitly put in (i.e. no
diffusive sources) and that the field at the surface is vertical. We note that
extensions to the SFT model have slightly relaxed the assumption that there are
no diffusive fluxes \citep[see for example][]{Baumann06}, but the values of the
radial diffusivities suggested there correspond to long decay times of the SFT
fields at the poles which are still not comparable to our FTD simulations with
$k=0$, 1 or 2.
\begin{figure}[h]
\includegraphics[scale=0.5]{yr2_pot_month18.eps}
\includegraphics[scale=0.5]{yr2_pot_month72.eps}
\caption{Similar to Fig.~\ref{fig:vert} except a potential field
upper boundary condition was used for the FTD simulations. The black line shows the surface field
from the SFT model, the colored lines show the FTD results for different values of $k$.}
\label{fig:pot}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=0.9,angle=90]{field.014.ps}
\includegraphics[scale=0.9,angle=90]{field.017.ps}
\caption{The magnetic field structure from the FTD model in the same format as in Fig.~\ref{fig:fl_vert}
when the potential field boundary condition is used.}
\label{fig:fl_pot}
\end{figure}
\subsection{High diffusivity in the bulk of the convection zone}
For the simulation with a different $\eta$, we considered
$\eta_0=0.1$~km$^2$s$^{-1}$, $\eta_1=100$~km$^2$s$^{-1}$ and
$\eta_2=250$~km$^2$s$^{-1}$ in Eq.~(\ref{eqn:eta}). This is similar to the diffusivity
profile of the reference case discussed in Sect.~3 except that the diffusivity
in the bulk of the convection zone has been raised to $100$~km$^2$s$^{-1}$. The
average magnetic diffusivity of the transition between low and high velocities
is then higher, the velocity by contrast has fallen. The magnetic Reynolds
number is then $R_m=(1.5 k)/3.5$. To have $R_m \gtrsim5$ we then need
$k\gtrsim5\times(3.5/1.5)\approx12$; and indeed we found that with $k=10$ the
FTD and SFT models were close to, though not quite, matching.
\subsection{Anisotropic diffusivity}
In our third experiment, we studied the effect of an anisotropy in the
diffusivity near the surface. We used the same formula for the different
components of $\eta$ (i.e., Eq.~\ref{eqn:eta}), but with different values of
the surface diffusivity, $\eta_2$, for the horizontal and vertical directions.
Motivated by the work of \citet{Miesch11}, we chose the longitudinal and
latitudinal diffusivities to be the same, $\eta_2=250$~km$^2$s$^{-1}$, and the
radial diffusivity to be an order of magnitude smaller,
$\eta_2=25$~km$^2$s$^{-1}$. We based the downward pumping, ${\vec{u}}_{\mathrm{p}}$, on the gradient of
the vertical component of the diffusivity. The comparison of the FTD and SFT
models, for several values of $k$, is shown in Fig.~\ref{fig:anis} for two
times. Importantly, a strong downward pumping with $k>10$ is needed for the FTD
to match the SFT surface evolution.
\begin{figure}[h]
\includegraphics[scale=0.5]{yr2_anis_month18.eps}
\includegraphics[scale=0.5]{yr2_anis_month72.eps}
\caption{The surface field from the reference SFT simulations at $t=18$ months (top)
and $t=72$ months (bottom) is shown in black. The colored curves show the results of FTD simulations with a vertical
field outer boundary and an anisotropic near-surface diffusivity (250~km$^2$s$^{-1}$ in the
horizontal directions and 25~km$^2$s$^{-1}$ in the vertical direction). The vertical pumping
is based on the vertical diffusivity gradient with $k=20$ (red), 10 (blue), 5 (green), and 0 (yellow).}
\label{fig:anis}
\end{figure}
\subsection{Variation of the meridional circulation}
For the simulation with a different meridional velocity profile, we used the
same form as described in Eqs.~(\ref{eqn:ur_start}) to (\ref{eqn:ur_end}) but
with $p=0.25$, $q=0$ and $m=0.5$ \citep{Dikpati99}. For this choice of the
meridional flow, the FTD and SFT models always evolve differently, even though
the surface velocity is used for the SFT calculation (Fig.~\ref{fig:mv2},
top). The reason is that the meridional velocity in this case is not constant
above the transition from low to high diffusivities, which occurs at about
0.95~$R_{\sun}$. The magnetic field in the FTD calculation sees a range of
velocities above the `boundary layer' associated with the transition and the
strong pumping. Because the diffusivity is reasonably large above the
transition, the magnetic flux should essentially be advected according to the
average meridional flow in this layer. Therefore the surface field is
effectively advected with the average meridional flow speed in the boundary
layer, and not with its surface value. This indeed happens as can be seen in
Fig.~\ref{fig:mv2} (bottom). It is noteworthy that this mainly affects the time
it takes for the flux to reach the poles, not the amount that eventually gets
there. The meridional flow is difficult to measure at depths below about 10~Mm;
in the top 10~Mm the indications from helioseismology are that the meridional
flow first increases and then decreases \citep{Basu10}.
\begin{figure}[h]
\includegraphics[scale=0.5]{jj_yr2_vert_month18.eps}
\includegraphics[scale=0.5]{jj_yr2_vert_month72.eps}
\caption{The surface field from FTD simulations at $t=18$ months (top)
and $t=72$ months (bottom) with a vertical
field outer boundary, using $k=5$ and the meridional velocity profile
with $p=0.25$, $q=0$ and $m=0.5$ (black). In this case there is a strong near-surface shear.
The results from the SFT model using the surface meridional velocity (blue) and using the average of
the meridional velocity above 0.95 $R_{\sun}$ (red) are shown for comparison.}
\label{fig:mv2}
\end{figure}
There is also an observed near-surface shear in the differential rotation
\citep{Thomson95}, which has been used to explain the observed difference
between the rotation of magnetic features \citep{Snodgrass83} and the rate
deduced from surface Doppler observations of the flow. For a review of the
observational results, see \citet{Beck00}. The conventional explanation is made
in terms of the `anchoring depth' of the features \citep{Nesme-Ribes93}. Our
suggestion is that the observed rotation rate of magnetic features is
partly due to the average value in a high-diffusivity layer, which is partially isolated
from the deeper dynamics by a boundary layer associated with magnetic pumping.
\section{Conclusion}
With a vertical outer boundary condition and enough pumping the FTD model is
consistent with the SFT model. The pumping needs to be strong enough to result
in a magnetic Reynolds number of approximately 5. With a potential boundary
condition or weaker pumping, the models do not match. This strong pumping
requires a velocity which is greater than the standard value given by
mean-field theory for diamagnetic pumping. Since the SFT model matches
observations, it follows that the vertical field boundary condition and
sufficient downward pumping are required for the FTD model to match the
observed surface evolution of the field.
\begin{acknowledgements}
The authors gratefully acknowledge Manfred Sch\"ussler for enlightening
discussions on various aspects of this paper. JJ acknowledges financial support
from the National Natural Science Foundations of China through grants 11173033,
11178005, 11125314 and a Young Researcher Grant of the National Astronomical
Observatories, Chinese Academy of Sciences.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction} \label{sec:intro}
Since the seminal work by Darwin \cite{Darwin:1859}, the evolution of biological
species has been recognized as a complex dynamics involving broad distributions
of temporal and spatial scales as well as stochastic effects, giving rise to
so-called frozen accidents. There is vast exchange and overlap of concepts and
methods between the theory of evolution and the foundations of complex systems
such as fitness landscapes \cite{Wright:1932,Gavrilets:2004,Klemm:2012} and
neutral networks \cite{Kimura:1983}, the evolution of cooperation
\cite{Axelrod:1984} and self-organized criticality \cite{Bak:1996} to name but a
few.
A striking feature of biological macroevolution is its burstiness. The temporal
distribution of speciation and extinction events is highly inhomogeneous in time
\cite{Sepkoski:1993}. As described by the theory of punctuated equilibrium
\cite{Gould:1993}, a connection between punctuated equilibrium in evolution and
the theory of self-organized criticality \cite{Bak:1996} is established through
the model by Bak and Sneppen \cite{Bak:1993,Sneppen:1995}. Ecology, i.e.\ the
system of trophic interactions and other dependencies between species'
fitnesses, is driven to a critical state. Then minimal perturbations cause
relaxation cascades of broadly distributed sizes.
Rather than through ecological interaction across possibly all species, bursty
diversification may also be due to {\em adaptive radiation} as a
rapid multiplication of species in one lineage after a triggering event. About
200 million years ago, a novel chewing system with dedicated molar teeth
evolved in the lineage of mammals, allowing it to rapidly diversify into species
using vastly distinct types of nutrition \cite{Ungar:2010}. There are many more
examples where a single {\em innovation} triggers adaptive
radiation such as the tetrapod limb morphology caused by a binary shift in bone arrangement
\cite{Thomson1992Macroevolution} and homeothermy as a key innovation in the
lineage of mammals \cite{HeardHauser1995keyInnovations,Leim1990innovationHomeothermy}.
Environmental conditions a species has not encountered previously,
e.g.\ when entering a geographical area with unoccupied ecological niches, may
also be the source of adaptive radiation. The diversity of finch species on
Galapagos islands is the famous example first studied by Darwin. Spontaneous
phenotypic or genetic innovations and those caused by the pressure to adapt to a
change in environment are treated on the same footing for the modeling purposes
in this contribution. Though being a central concept in the theory of
evolution, the term innovation has not been ascribed a unique definition so far
\cite{Pigliucci:2008}.
Here we study a branching process to mimic the evolution of species driven
by innovations. The process involves a separation of time scales.
Rare innovation events trigger rapid cascades of diversification where a
feature combines with previously existing features. We call this newly defined
branching process the {\em innovation model}.
How can the validity of models of this kind be assessed? The evolutionary
history of species is captured by phylogenetic trees. These are binary
trees where leaves represent extant species, alive today, and inner nodes stand
for ancestral species from which the extant species have descended.
By comparing the shapes of these trees
\cite{Sackin1972phenogram,Herrada2008universalScaling,Campos:2004,Stich:2009},
in particular their degree of imbalance
\cite{Colless1982phylogenetics,Mckenzie2000DistributionCherries},
with trees generated by different evolutionary mechanisms
\cite{Aldous2001FromYuleToToday,Blum2006whichRandomProcess,HernandezGarcia2010scaling}, a selection of
realistic models is possible.
\section{Stochastic models of macroevolution}
We consider models of macroevolution within the following formal framework.
At each point in time $t$, there is a set of species $S(t)$. Evolution proceeds
as follows. A species $s \in S(t)$ is chosen according to a probability
distribution $\pi(s,t)$ on $S(t)$. Speciation of $s$ means replacing $s$ by two
new species $s^\prime$ and $s^{\prime\prime}$ such that
\begin{equation}
S(t+1) = S(t) \setminus \{s\} \cup \{s^\prime, s^{\prime\prime} \}
\end{equation}
is the set of species at time $t+1$. The initial condition (at $t=1$) is
a single species. Therefore discrete time $t$ and number of species $n$ are
identical, $n=|S(t)|=t$.
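
This framework translates directly into code. The following Python sketch (our notation) grows a phylogeny by iterated leaf splitting, with the model entering only through the function that picks the next species to split; the tree representation as nested tuples is made precise in the following subsection.
\begin{verbatim}
import random

def grow_tree(n_leaves, pick):
    # pick(leaves, t) returns the index of the leaf record to split
    # at time t; returns the phylogeny as nested (left, right)
    # tuples over integer leaf ids.
    children = {}                              # id -> (id, id)
    leaves = [{"id": 0, "born": 1}]
    next_id, t = 1, 1
    while len(leaves) < n_leaves:
        i = pick(leaves, t)
        t += 1
        kids = [{"id": next_id, "born": t},
                {"id": next_id + 1, "born": t}]
        children[leaves[i]["id"]] = (next_id, next_id + 1)
        next_id += 2
        leaves[i:i+1] = kids                   # replace parent by kids
    def build(v):
        return (tuple(build(c) for c in children[v])
                if v in children else v)
    return build(0)
\end{verbatim}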
\subsection{Trees}
\begin{figure}
\centerline{\includegraphics{balanced_imbalanced_tree.pdf}}
\caption{Comparison of tree shapes. Each tree of size eight consists of a root (white diamond), a set of inner nodes (black squares) and a set of leaves (gray circles). The left tree is totally imbalanced, also called comb tree,
with depth $d=35/8=4.375$ and Colless index $c=21/21=1$.
The right tree is a complete binary tree with depth $d=24/8=3$ and Colless index $c=0/21=0$.
\label{fig:balance}
\end{figure}
The evolutionary history of organisms is represented by a phylogenetic tree. For
the purpose of this contribution, a phyologenetic tree is a rooted strict binary
tree $T$: a tree with exactly one node (the root) with degree two or zero, all
other nodes having degree three (inner node) or one (leaf node), cf. illustrations in Figure \ref{fig:balance}. For such a tree
$T$ with root $w$, a subtree $T^\prime$ is obtained as the component not
containing $w$ after cutting an edge $\{i,j\}$ of $T$. $T^\prime$ is again a
rooted strict binary tree. Since this contribution focuses on tree shape,
all edges have unit length. The distance between nodes
$i$ and $j$ on a tree $T$ is the number of edges contained in the unique
path between $i$ and $j$.
From the evolutionary dynamics, an evolving phylogenetic tree $T(t)$ is obtained
as follows. At each time step $t$, the leaves of $T(t)$ are the species $S(t)$.
When $s$ undergoes specation, two new leaves $s^\prime$ and $s^{\prime\prime}$
attach to a leaf $s$. After this event, $s$ is an inner node and no longer a
leaf of the tree. In this way, each model of speciation dynamics also defines a
model for the growth of a binary tree by iterative splitting of leaves.
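
Shape statistics for such nested-tuple trees can be computed recursively; the short Python sketch below (our notation) verifies the depth and Colless values quoted in Figure~\ref{fig:balance}.
\begin{verbatim}
def leaves(tree):
    return ([tree] if not isinstance(tree, tuple)
            else leaves(tree[0]) + leaves(tree[1]))

def depth_sum(tree, d=0):
    # sum of root-to-leaf distances (n times the mean depth)
    if not isinstance(tree, tuple):
        return d
    return depth_sum(tree[0], d + 1) + depth_sum(tree[1], d + 1)

def colless(tree):
    # returns (sum over inner nodes of |n_left - n_right|, n_leaves);
    # normalise by (n-1)(n-2)/2 for the Colless index
    if not isinstance(tree, tuple):
        return 0, 1
    cl, nl = colless(tree[0])
    cr, nr = colless(tree[1])
    return cl + cr + abs(nl - nr), nl + nr

comb = (((((((0, 1), 2), 3), 4), 5), 6), 7)
full = (((0, 1), (2, 3)), ((4, 5), (6, 7)))
for t in (comb, full):
    c, n = colless(t)
    print(depth_sum(t) / n, c / ((n - 1) * (n - 2) / 2))
# -> 4.375 1.0 (comb) and 3.0 0.0 (complete), as in Fig. 1
\end{verbatim}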
\subsection{Yule model}
In the simplest case, the probability of choosing a species is uniform
at each time step, $\pi(s,t) = 1/t$. This is the Yule model, also known as the ERM (equal-rates Markov) model.
It serves as a null model of evolution.
The model corresponds to a particularly simple probability distribution on the
set of generated trees. For a tree with $n \ge 2$ leaves generated by the Yule
model and $i \in \{1,2\dots,n-1\}$, let $p_\text{ERM}(i|n)$ be the probability
that exactly $i$ leaves are in the left subtree of the root. Then
$p_\text{ERM}(i|n) = 1/(n-1)$. This is shown inductively
as follows. Obtaining exactly $i$ leaves
at step $n$, either they were already present at the previous step and the
speciation took place in the right subtree, or the number increased from $i-1$
to $i$ by speciation in the left subtree. Addition of these products of
probabilities for the two cases yields
\begin{equation}
p_\text{ERM}(i|n) =
\frac{n-1-i}{n-1} p_\text{ERM}(i|n-1) +
\frac{i-1}{n-1} p_\text{ERM}(i-1|n-1) ~.
\end{equation}
With the induction hypothesis $p_\text{ERM}(j|n-1)=1/(n-2)$ for all $j$,
we obtain
\begin{equation}
\label{eq:perm}
p_\text{ERM}(i|n) = \frac{(n-1-i)+(i-1)}{(n-1)(n-2)} = \frac{1}{n-1}~.
\end{equation}
The induction starts with $p_\text{ERM}(1|2)=1$ which holds because a tree
with two leaves has one leaf each in the left and in the right subtree.
Thus the uniform selection of species turns into a uniform distribution
on the number of leaves in the left or right subtree. Note that the same
distribution applies to each subtree of an ERM tree. Therefore $p_\text{ERM}$
fully describes the statistical ensemble of ERM trees. The probability
of obtaining a particular tree is the product of $p_\text{ERM}$ terms taken
over all subtrees. This becomes particularly relevant for modifications of the
model taking $p$ non-uniform; see the following subsection.
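
The uniformity of Eq.~(\ref{eq:perm}) is easy to confirm by Monte Carlo simulation, reusing \texttt{grow\_tree} and \texttt{leaves} from the sketches above:
\begin{verbatim}
import random
from collections import Counter

yule = lambda lvs, t: random.randrange(len(lvs))  # pi(s,t) = 1/t

counts = Counter(len(leaves(grow_tree(10, yule)[0]))
                 for _ in range(100000))
print(sorted(counts.items()))  # each i = 1..9 occurs ~1/9 of runs
\end{verbatim}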
\subsection{Aldous' branching (AB) model}
The class of beta-splitting models defines a distribution of trees by the
probability
\begin{equation} \label{eq:beta}
p_{\beta}(i|n)=\frac{1}{\alpha_{\beta}(n)}\frac{\Gamma(\beta+i+1)\Gamma(\beta+n-i+1)}{\Gamma(i+1)\Gamma(n-i+1)}
\end{equation}
with appropriate normalization factor $\alpha_{\beta}(n)$. Analogous to $p_\text{ERM}$
of the previous subsection, $p_\beta(i|n)$ is the probability that a
tree has $i$ out of its $n$ leaves in the left subtree. In order to build
a tree with $n$ leaves, one first decides according to $p_\beta(i|n)$
to have $i$ leaves in the left and $n-i$ leaves in the right subtree.
Then the same rule is applied to both subtrees with the determined number
of leaves. The recursion into deeper subtrees naturally stops when a subtree
is decided to have one leaf.
The parameter $\beta\in [-2,+\infty)$
in Equation~(\ref{eq:beta}) tunes the expected imbalance.
By increasing $\beta$, equitable splits with $i \approx n/2$ become
more probable. The probability distribution of trees from the Yule model
is recovered by taking $\beta=0$. The case $\beta=-1.5$ is called
Proportional to Distinguishable Arrangements (PDA). It produces a uniform
distribution of all ordered (left-right labeled) trees of a given size $n$
\cite{Rosen1978biogeography,Pinelis2003evolutinarymodels,SteelMcKenzie2001propertiesOfPhyloTrees,CottonPage2006ShapeGeneFamilyPhylogenies}.
Another interesting case is Aldous' branching (AB) model
\cite{ALDOUSBetaSplitting1996,Aldous2001FromYuleToToday}
obtained for $\beta=-1$, where Equation~(\ref{eq:beta})
reads
\begin{equation}
p_{-1}(i|n) \propto \frac{1}{i(n-i)}~.
\end{equation}
Blum and Fran{\c c}ois have found that $\beta=-1.0$ is the maximum-likelihood
choice of $\beta$ over a large set of phylogenetic trees
\cite{Blum2006whichRandomProcess}. Therefore we use it as a standard of
comparison. The AB model does not have an interpretation in
terms of macroevolution, as noted by Blum and Fran{\c c}ois
\cite{Blum2006whichRandomProcess}. In particular, it is unknown if
its probability distribution of trees can be obtained by stochastic processes
of iterated speciation as introduced at the beginning of this section.
\subsection{Activity model}
In the activity model \cite{HernandezGarcia2010scaling}, the set
of species $S(t)$ is partitioned into a set of active species $S_A(t)$ and
a set of inactive species $S_I(t)$. At each time step, a species $s \in S_A(t)$
is drawn uniformly if $S_A(t)$ is non-empty; otherwise $s \in S_I(t)$ is
drawn uniformly. The drawn species speciates, and the two new species
$s^\prime$ and $s^{\prime\prime}$ independently enter the active set $S_A(t+1)$
with probability $p$. The activation probability $p$ is a parameter of the model.
For $p=0.5$ a critical branching process is obtained. Otherwise the model is
similar to the Yule model. A variation of the activity model has been
introduced by Herrada et al.\ \cite{Herrada:2011} in the context of
protein family trees.
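A minimal simulation of this dynamics could look as follows (a Python sketch under our reading of the model, in which the drawn species becomes an inner node and is replaced by its two descendants).
\begin{verbatim}
import random

def activity_model(N, p):
    # Species are integer ids; only the active/inactive split matters.
    active, inactive = [0], []
    next_id = 1
    while len(active) + len(inactive) < N:
        pool = active if active else inactive
        pool.pop(random.randrange(len(pool)))  # the parent speciates
        for _ in range(2):                     # two new species
            target = active if random.random() < p else inactive
            target.append(next_id)
            next_id += 1
    return active, inactive
\end{verbatim}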
\subsection{Age-dependent speciation}
In the \emph{age model} \cite{KellerSchmidtTugrul2010AgeModel},
the probability of speciation is inversely proportional to the
age of a species. At each time, a species $s \in S(t)$ is
drawn with probability
\begin{equation}
\pi_s(t) \propto \tau_s(t)^{-1}
\end{equation}
normalized properly. The age $\tau_s$ is the number of time steps
passed since creation of species $s$.
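A corresponding sampler is equally short (a Python sketch, ours; we count a newborn species as having age 1 after its first time step, which keeps the weights $\tau_s(t)^{-1}$ finite, and we record only the ages of extant species).
\begin{verbatim}
import random

def age_model_ages(N):
    ages = [1]
    while len(ages) < N:
        # speciation probability of each extant species ~ 1/age
        i = random.choices(range(len(ages)),
                           weights=[1.0 / a for a in ages])[0]
        ages[i] = 0                    # parent replaced by two daughters
        ages.append(0)
        ages = [a + 1 for a in ages]   # one time step passes
    return ages
\end{verbatim}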
\subsection{Innovation model} \label{sec:innov_def}
\begin{algorithm2e}
\BlankLine
set $t=1$, $F(0)=\emptyset$, $S(0) = \{\emptyset\}$;\\
\While(\tcp*[f]{$N$ as final size of simulated tree}){$|S(t)| < N$}{
\eIf{$\{ s\setminus \{\phi\} : s \in S(t), \phi \in F(t)\} \setminus S(t) \neq \emptyset$}{
\tcp{loss event}
draw $\phi \in F(t)$ uniformly;\\
draw $s \in S(t)$ uniformly;\\
\If{$s \setminus \{\phi\} \notin S(t)$}{
$S(t+1) = S(t) \cup \{s \setminus \{\phi\}\}$;\\
$F(t+1) = F(t)$;\\
increment $t$;\\
}
}{
\tcp{innovation event}
draw $s \in S(t)$ uniformly;\\
set $\phi = 1 + \max (F(t) \cup \{0\})$; \\
set $S(t+1) = S(t) \cup \{ s \cup \{\phi\}\};$ \\
set $F(t+1) = F(t) \cup \{\phi\}$;\\
increment $t$;\\
}
\caption{Pseudocode for the innovation model}
\label{algo:innovmodel}
\end{algorithm2e}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{innov_example.pdf}}
\caption{\label{fig:innov_example}
A tree of five leaves generated by the innovation model. The
root node labeled with the empty feature set $\emptyset$ speciates by
an innovation event adding the feature 1 to the feature set. This results
in the species $\emptyset$ and $\{1\}$. Innovation events are
performed, generating features until a loss event is possible. The
loss event generates the species $\{3\}$ by removing
the feature $1$ from $\{1,3\}$.}
\end{figure}
In the {\em innovation model}, each species $s$ is defined as a finite set of
features $s \subseteq \mathbb{N}$. Features are taken as integer numbers in
order to have an infinite supply of symbols. We denote by $F(t)$ the set of
all features existing at time $t$, that is $F(t)=\bigcup_{s \in S(t)} s$.
Each speciation occurs as one of two possible events.
An {\em innovation} is the addition of a new feature
$\phi \in \mathbb{N} \setminus F(t)$ not yet contained in any
species at the given time $t$. One of the resulting species
carries the new feature, $s^\prime = s \cup \{\phi\}$. The other
species has the same features as the ancestral one, $s^{\prime\prime}=s$.
A {\em loss} event generates a new species by the
disappearance of a feature. A feature $\phi$ is drawn from $F(t)$ uniformly, and a species $s$ is drawn from $S(t)$ uniformly.
The loss event is performed only if $s \setminus \{\phi\} \notin S(t)$
such that elimination of $\phi$ from $s$ actually generates a new species.
In this case, the resulting species are the one having suffered the loss,
$s^\prime = s \setminus \{\phi\}$ and the species $s^{\prime\prime}=s$
remaining unaltered. Otherwise, $\phi$ is not present in $s$ or its loss
would lead to another already existing species, so nothing happens.
We assume that the creation of novel features is much rarer than speciation
by losses. This separation of time scales is implemented by the rule that
an innovation event is only possible when no more losses can be performed.
In order to facilitate further studies with the model, we provide a pseudocode
description in Algorithm~\ref{algo:innovmodel}. Figure~\ref{fig:innov_example}
shows an example of the dynamics.
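For concreteness, a direct Python transcription of Algorithm~\ref{algo:innovmodel} could look as follows (our sketch; it returns the final species set, and the tree topology can be recorded by additionally storing the parent of each new species).
\begin{verbatim}
import random

def innovation_model(N):
    species = {frozenset()}   # each species is a set of features
    features = []             # all features introduced so far
    while len(species) < N:
        loss_possible = any(s - {f} not in species
                            for s in species for f in features)
        if loss_possible:
            # loss event: rejection sampling as in the pseudocode
            while True:
                f = random.choice(features)
                s = random.choice(list(species))
                if s - {f} not in species:
                    species.add(s - {f})
                    break
        else:
            # innovation event: a brand-new feature enters one species
            f = len(features) + 1
            s = random.choice(list(species))
            species.add(s | {f})
            features.append(f)
    return species
\end{verbatim}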
\section{Comparison of simulated and empirical data sets}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{treeplots.pdf}
\caption{\label{fig:treeplots}
Empirical and simulated trees. The depicted phylogenetic tree in (a) is from the database TreeBASE (Matrix ID M2957, relationships in rosids based on mitochondrial
matR sequences), (b) is a tree created as a realization of the innovation model and
(c) a tree from the ERM (Yule) model. Each of the trees has 161 leaves.}
\end{figure}
Now let us compare the tree shapes obtained by the models with those of
evolutionary trees in databases. The TreeBASE \cite{Sanderson1994treebase}
database contains phylogenetic information about the evolution of species
whereas the database PANDIT \cite{Whelan2006Pandit} contains phylogenetic trees
representing protein domains. By analysing the tree shapes in both data sets
and comparing them with ensembles of trees generated by the different models,
one can assess how well a growth model reproduces ``real'' trees.
Comparison by simple inspection of trees from real data and models may already
reveal substantial shape differences. Figure~\ref{fig:treeplots} shows
an example. The trees in panels (a) and (b) are less compact than that of
panel (c) of Figure~\ref{fig:treeplots}.
For an objective and quantitative comparison of trees, we use the following two
measures of tree shape. A tree is compact if the distances $d_i$ of its
leaves $i$ from the root are small. The {\em depth} (or Sackin index)
\cite{Sackin1972phenogram} is the average distance of leaves from the root,
\begin{equation}
d =\frac{\sum_{i=1}^{n} d_i}{n}~.
\end{equation}
The {\em Colless index} measures the average
imbalance of a tree \cite{Colless1982phylogenetics}.
The imbalance at an {\em inner node} $j$ of the tree
is the absolute difference $c_j = |l_j-r_j|$ of the numbers of leaves in the left and right
subtrees rooted at $j$. Then the average of imbalances
\begin{equation}
c= \frac{2}{(n-1)(n-2)} \sum_{j=1}^{n-1} c_j
\end{equation}
with appropriate normalization is the Colless index $c$ of the tree.
The index $j$ runs over all $n-1$ inner nodes including the root itself.
We find $c=0$ for a totally balanced tree and $c=1$ for a comb tree,
see also Figure~\ref{fig:balance}.
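Both indices are computed by a single traversal; a Python sketch (ours), with trees represented as nested pairs and a leaf as any non-tuple:
\begin{verbatim}
def sackin_depth(t):
    def rec(t, depth):
        if not isinstance(t, tuple):           # leaf
            return 1, depth
        nl, dl = rec(t[0], depth + 1)
        nr, dr = rec(t[1], depth + 1)
        return nl + nr, dl + dr
    n, dsum = rec(t, 0)
    return dsum / n

def colless_index(t):
    def rec(t):                  # returns (#leaves, sum of |l_j - r_j|)
        if not isinstance(t, tuple):
            return 1, 0
        nl, cl = rec(t[0])
        nr, cr = rec(t[1])
        return nl + nr, cl + cr + abs(nl - nr)
    n, csum = rec(t)
    return 2 * csum / ((n - 1) * (n - 2))      # requires n >= 3
\end{verbatim}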
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plots_depth_col.pdf}
\caption{\label{fig:comparison}
Comparison of size-dependent summary statistics for models and real trees.
Symbols distinguish the ERM model ($\circ$), the AB model ($\Box$) and the
innovation model ($\diamond$) and the data sets TreeBASE ($\ast$) and PANDIT
($+$). The data sets were preprocessed by randomly resolving polytomies
and removing the outgroups, as proposed in
\cite{Blum2006whichRandomProcess}. The mean values of depth
and Colless index, panels (a) and (b), are binned logarithmically as a function
of tree size $n$. The same procedure is applied to the standard deviations,
panels (c) and (d).
The analysed TreeBASE data set has been downloaded from http://www.treebase.org
in June 2007 and contains 5,087 trees of size 5 to 535 after preprocessing. The
PANDIT data set has been downloaded from http://www.ebi.ac.uk/goldman-srv/pandit in May 2008 and includes 36,136 preprocessed trees of size 5 to 2,562. The simulated data set comprises for each model (AB model, ERM model and innovation model) 1,000
trees for each tree size from 5 to 535 and 10 trees for each tree size from 536
to 2,562.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plots_resc.pdf}
\caption{\label{fig:rescaled}
The same values of depth and Colless index as in Figure~\ref{fig:comparison}
(a,b) with an $n$-dependent rescaling. (a)
Average depth divided by $\ln n$. (b) Average Colless index divided by
$n^{-1}\ln n$. These factors are chosen such that the rescaled values for
the ERM model asymptotically approach a constant. See reference
\cite{BlumFrancoisJanson2007MeanVarianceTwoStatistics} for the scaling
of the indices of the ERM model.}
\end{figure}
Ensemble mean values and standard deviations of these indices are
shown in Figure \ref{fig:comparison}. Comparing the results of three models
(ERM, AB and innovation) to those of trees from two databases, the
least discrepancy is obtained between the innovation model and the
trees from TreeBASE, representing macroevolution. In Figure~\ref{fig:rescaled},
the averages of the two indices are shown after rescaling to facilitate
the comparison. Of all models,
the values of the innovation model are also best matching those
of PANDIT.
\section{Depth scaling in the innovation model}
\subsection{Subtree generated by an innovation}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{nf_0.pdf}}
\caption{ \label{fig:depth_scaling_loss}
Average depth as a function of the number of leaves $n$ in trees generated
with stochastic loss events (dots with error bars). Each data point is an average over 100 realizations with error bars indicating standard deviations. For comparison, the expected depth for the ERM model ($\Box$) and for complete binary trees ($\triangle$) are shown.
}
\end{figure}
Suppose the $i$-th innovation, generating feature $i$, affects a species $s$
with $f$ features. Then $s$ is removed from the set $S$ of extant species,
turning into an inner node in the tree. Two new species $s^\prime$ and
$s^{\prime\prime}$ are attached, having feature sets $s^\prime=s$ and
$s^{\prime\prime} = \{ i \} \cup s$. In subsequent loss events, a subtree $T_i$
is built up with $2^f$ leaves, each of which is a species $\sigma \subseteq s
\cup \{i\}$. Denote by $D(T_i)$ the sum of the distances of all the leaves in $T_i$ from
the root of $T_i$.
Let us now estimate the expectation value $\langle D(T_i) \rangle$, which
depends only on the number $f$ of features. Trivially, $D(T_i)$ is lower
bounded by $f 2^f$ since the most compact tree is the fully balanced one with
all leaves at distance $f$ from the root. In particular, we conjecture
\begin{equation} \label{eq:subtree_bounds}
f 2^f < \langle D(T_i) \rangle < D_\text{ERM}(2^f)~.
\end{equation}
The second inequality is corroborated by the plots in
Figure~\ref{fig:depth_scaling_loss}. We make it plausible as follows. Similar to
the ERM model, a leaf is chosen in each time step when executing loss events.
Here, however, the loss event is performed only if the chosen leaf carries the
chosen feature and the reduced feature set is not yet present in the tree.
Thus the probability of accepting a proposed loss event at a leaf $s$ is
anticorrelated with the number of features $|s|$ at $s$. The expected number of
features carried by a leaf decreases with its distance from root. Therefore we
argue that the present model adds new nodes preferentially to leaves closer to
root than average, resulting in trees with an expected depth increasing more
slowly than in the ERM model.
\subsection{Approximation of depth scaling}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{innov_steps_mod.pdf}}
\caption{\label{fig:steps_illu}
The deterministic growth of a tree considered as an approximation of
the innovation model. Each subtree generated by an innovation is
indicated as a shaded area.}
\end{figure}
We study a tree growth that is derived from the
innovation model by two simplifying assumptions. (i) Each innovation is
introduced at the leaf with the largest number of features in the tree. (ii)
Introducing an innovation at a leaf with $f$ features triggers the growth of
a subtree that is a perfect (complete) binary tree with $2^f$ leaves at
distance $f$ from the root of this subtree.
This leads us to consider the following {\em deterministic growth} starting with
a single node and $i=0$. Choose a leaf $s$ at maximum distance from root; split
$s$ obtaining new leaves $s^\prime$ and $s^{\prime\prime}$; take
$s^{\prime\prime}$ as the root of a newly added subtree that is a perfect tree
with $2^i$ leaves; increase $i$ by one and iterate. Figure~\ref{fig:steps_illu}
illustrates the first few steps of the growth.
After $i$ steps, the number of leaves added to the tree most recently
is $2^{i-1}$. Therefore, the total number of leaves after step $i$ is
\begin{equation}
n(i) = 1 + \sum_{j=1}^i 2^{j-1} = 2^i
\end{equation}
because the procedure starts with a single leaf at $i=0$.
The leaves of the subtree added by the $j$-th innovation have distance
\begin{equation}
\sum_{k=1}^j k = \frac{j(j+1)}{2}
\end{equation}
from root because these leaves are $j$ levels deeper than those generated
by the previous innovation. Therefore the sum of all leaves' distances from
root is
\begin{equation} \label{eq:D_of_i}
D(i) = i+ \sum_{j=1}^i 2^{j-1} [j(j+1)/2]
\end{equation}
after the $i$-th innovation has been performed. The first term $i$ arises
because each innovation increases the distance of one previously existing
leaf from the root by one, cf.\ the leaves outside the shaded areas in
Figure~\ref{fig:steps_illu}. In performing the sum of
Equation~(\ref{eq:D_of_i}) we use the equality
\begin{equation}
\sum_{j=1}^{i} 2^{j-1} [j(j+1)] = 2^i [i^2-i+2] -2
\end{equation}
to arrive at
\begin{equation}
D(i) = i + 2^{i-1} [i^2-i+2] -1~.
\end{equation}
We substitute $n(i)=2^i$, i.e.\ $i=\log_2 n$, and divide $D$ by $n$ to arrive
at the depth
\begin{equation}\label{eq:d_of_n}
d(n) = \frac{1}{2} [ (\log_2 n)^2 - (\log_2 n) +2 ] + \frac{(\log_2 n) -1}{n}
\end{equation}
of the tree with $n$ leaves generated by deterministic growth. For
large $n$, the depth scaling is
\begin{equation}
d(n) \sim (\log n)^2~.
\end{equation}
By the comparison in Fig.~\ref{fig:scaling_global}, we find
the $(\log n)^2$ scaling also for the depth of
trees obtained from the
innovation model as defined in Section~\ref{sec:innov_def}.
Thus we hypothesize that the deterministic growth captures the
essential mechanism leading to the depth scaling of the innovation model.
The prefactor of $(\log n)^2$ is smaller in the innovation model than
in the deterministic growth. In the actual model, most innovations hit a
leaf with a non-maximal number of features and therefore trigger the
growth of a lower subtree than assumed by deterministic growth.
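The algebra above is easy to check numerically. The following Python sketch (ours) compares the step-by-step accumulation of $D(i)$ with the closed form of Equation~(\ref{eq:d_of_n}).
\begin{verbatim}
import math

def depth_direct(i):
    # D(i) = i + sum_j 2^(j-1) j(j+1)/2, divided by n(i) = 2^i
    D, n = 0, 1
    for j in range(1, i + 1):
        D += 1 + 2 ** (j - 1) * j * (j + 1) // 2
        n += 2 ** (j - 1)
    return D / n

def depth_closed(n):
    L = math.log2(n)
    return 0.5 * (L * L - L + 2) + (L - 1) / n

assert all(abs(depth_direct(i) - depth_closed(2 ** i)) < 1e-9
           for i in range(1, 20))
\end{verbatim}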
Table~\ref{tab:depth} provides an overview of the scaling of average depth with
the number of leaves for various tree models.
\begin{table}
\renewcommand{\arraystretch}{1.5}
\caption{\label{tab:depth} Depth scaling of models.}
\begin{tabular}{@{}ll@{}}
\hline
\bf{innovation model} & $(\log n)^2$ \\
$\beta$-\bf{splitting} \cite{ALDOUSBetaSplitting1996}& $\begin{cases}
\log n & \text{if } \beta > -1 \text{, includes \bf{ ERM} }(\beta=0)\\
(\log n)^2 & \text{if } \beta = -1 \text{, \bf{AB} model}\\
n^{-\beta-1} & \text{if } \beta < -1 \text{, includes \bf{ PDA} }(\beta=-1.5)
\end{cases}$\\
\bf{age model} \cite{KellerSchmidtTugrul2010AgeModel} & $(\log n)^2$ \\
\bf{activity model} \cite{HernandezGarcia2010scaling}
& $\begin{cases} n^{0.5} & \text{if } p=0.5,\\ \log n & \text{otherwise.} \end{cases}$ \\
\bf{complete tree} & $\log n$\\
\bf{comb tree} & $n$\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centerline{\includegraphics[width=\textwidth]{lim_1.pdf}}
\caption{\label{fig:scaling_global}
Depth as a function of tree size $n$ for the innovation model
($\circ$) and for the deterministic growth
(solid curve) according to Equation~(\ref{eq:d_of_n}). Note
that square root of depth is plotted such that a straight line
in the plot indicates a depth scaling $d(n) \sim (\log n)^2$.
For each size $n$, the plotted point ($\circ$) is the average over
$\sqrt{d(n)}$ for 100 independently generated trees. Error bars
give the standard deviation.
}
\end{figure}
\section{Discussion}
The innovation model establishes a connection between the burstiness of
macroevolution and the observed imbalance of phylogenetic trees. Bursts of
diversification are triggered by generation of new features and combination
with the repertoire of existing traits. In order to keep the model simple,
the diversification after an innovation is implemented as a sequence of
random losses of features. More realistic versions of the model could be
studied where combinations of traits are enriched by re-activation of
previously silenced traits or horizontal transfer between species.
Furthermore, the model as presented here neglects the extinction of species
and their influence on the shapes of phylogenetic trees.
Regarding the robustness of the model, the depth scaling would have to be
tested under modifications. In particular, the infinite time
scale separation between rare innovations and frequent loss events
could be given up by allowing innovations to occur at a finite rate
set as a parameter.
In summary, we have defined a biologically motivated model that matches
empirical tree shapes well and is sufficiently simple to allow for further enhancement
regarding biological concepts such as sequence evolution and
genotype-phenotype relations.
\section*{Acknowledgments}
The authors thank Kathrin Lembcke, Maribel Hern\'{a}ndez Rosales and Nicolas
Wieseke for a critical reading of the draft. This work was supported by
Volkswagen Stiftung through the initiative for Complex Networks as a Phenomenon
across Disciplines.
\section{Introduction}
Polar codes \cite{arikan2009channel} are the first explicit class of codes proven to achieve the capacity of binary-input memoryless symmetric channels with low-complexity encoding and decoding.
They are based on the $N \times N$ Arikan polarizing transformation $A^{(N)}=F^{\otimes M}$,
$N=2^M$, where $F=\begin{pmatrix}1&0\\1&1\end{pmatrix}$ is called the Arikan kernel.
Many other matrices were proposed to replace kernel $F$, together with efficient corresponding kernel processing algorithms
\cite{yao2019explicit,trofimiuk2019reduced}.
Performance of polar codes with given $n\times n$ kernel $K$ depends on properties of matrix $K$,
such as polarization rate and scaling exponent \cite{mondelli2016unified, fazeli2018binary}.
Convolutional polar codes (CvPC, also b-MERA codes) are introduced in \cite{ferris2013branching}.
They are based on the convolutional polarizing transformation (CvPT), an $n\times n$ matrix, $n=2^m$, which is \textit{not} of the form $K^{\otimes M}$.
They outperform Arikan polar codes under successive cancellation (SC) decoding \cite{prinz2018successive, saber2018convolutional,morozov2018efficient} due to better
polarization properties.
More precisely, consider kernel $K$ and codeword $c_0^{n-1}=u_0^{n-1}K$. On each phase $\varphi$, the SC decoder, trying to estimate $u_\varphi$, considers probabilities of two cosets: $(\hat u_0^{\varphi-1},0,u_{\varphi+1}^{n-1})K$ and $(\hat u_0^{\varphi-1},1,u_{\varphi+1}^{n-1})K$,
where $u_{\varphi+1}^{n-1}$ runs over all possible binary vectors of length $n-\varphi-1$, and $\hat u_0^{\varphi-1}$ are already estimated
symbols. Note that the difference between (XOR of) any two vectors from the cosets is a vector from the set $C_\varphi=\set{(0_0^{\varphi-1},1,u_{\varphi+1}^{n-1})K}$.
Consider a ``dominating set'' of $C_\varphi$, i.e., set $\overline C_\varphi=\set{\overline a_0^{n-1}|\exists a_0^{n-1}\in C_\varphi: \forall i: \overline a_i\geq a_i}$.
Note that in the case of BEC, set $\overline C_\varphi$ describes all erasure patterns, after which one cannot recover $u_\varphi$.
Polarization properties of $K$ depend on the weight distributions of $\overline C_\varphi$ for each $\varphi$.
In some sense, matrix $Q^{(n)}$ has better weight distributions of $\overline C_\varphi$ than the Arikan polarizing transformation
$F^{\otimes m}$ of the same size $n=2^m$.
The weight distributions of $\overline C_\varphi$ allow one to obtain scaling exponent and polarization rate of a kernel.
In this paper we derive them for kernel $Q^{(n)}$, based on the recursive expansion $Q^{(n)}=(X^{(n)}Q^{(n/2)},Z^{(n)}Q^{(n/2)})$, where $(A,B)$ means concatenation of matrices $A$ and $B$.
Matrices $X^{(n)}$ and $Z^{(n)}$ are of size $n\times n/2$, and their rank is $n/2$.
They have diagonal-like structure, i.e. all positions of $1$'s are not far from diagonal $\set{(2j,j), 0\leq j<n/2}$, which results in simple recursive relations between weight distributions of $\overline C_\varphi$ for $Q^{(n/2)}$ and $Q^{(n)}$.
In this paper we prove these relations, which lead to an algorithm of computing scaling exponent for $Q^{(n)}$
for any $n$ with polynomial complexity in $n$.
\section{Background}
\label{s:bg}
\subsection{Notations}
The following notations are used in the paper.
$\mathbb{F}$ denotes the Galois field of two elements.
For integer $n$ we denote the set $[n]=\{0,1,\ldots n-1\}$.
Symbol $a_b^c$ denotes vector $(a_b,a_{b+1},\ldots, a_c)$.
For $m \times n$ matrix $A$ and sets $\mathcal{X} \subseteq [m]$, $\mathcal{Y} \subseteq [n],$ by $A_{\mathcal{X},\mathcal{Y}}$ we denote the submatrix of $A$ with rows from set $\mathcal{X}$ and columns from set $\mathcal{Y}$, where indexing of rows and columns starts from zero.
Notation $c_{\mathcal{X}}$ is defined similarly for vector $c$.
If $\mathcal{X}=*$ or $\mathcal{Y}=*$, this means that all rows or all columns of the original matrix are in the submatrix.
Symbol $A_{\overline \mathcal{X},\overline \mathcal{Y}}$ denotes a submatrix of $A$ consisting of rows and columns with indices that are not in $\mathcal{X}$ and $\mathcal{Y}$, respectively.
The vector of $i$ zeroes is denoted by $\mathbf 0^i$, or just by $\mathbf 0$, if $i$ is clear from the context.
We also use symbol $(a,b)$ for concatenation of vectors/matrices/elements $a$ and $b$.
Also we use strings of $0$'s and $1$'s for an explicit binary vector, e.g. $110=(1,1,0)$.
\subsection{Polar Codes }
\label{ss:sc}
In this paper we consider polar codes, defined as a set of vectors
\begin{align}
c_{[N]}=u_{[N]}K^{\otimes M}, u_{\mathcal{F}}=\mathbf 0^{N-k}, u_{\mathcal{I}}\in\mathbb{F}^k,
\label{eq:pcdef}
\end{align}
where $K$ is an $n\times n$ invertible matrix over $\mathbb{F}$, which is not upper-triangular under any column permutation, $\mathcal{F}\subset [N]$, $|\mathcal{F}|=N-k$, $\mathcal{I}=[N]\setminus\mathcal{F}$, and symbol $K^{\otimes M}$ denotes the $M$-times Kronecker product of $K$ with itself.
The length of the code is $N=n^M$, the dimension is $k$.
Matrix $K$ is called the kernel.
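As an aside, encoding according to \eqref{eq:pcdef} is a one-liner in practice; a Python/NumPy sketch (our illustration, not from the cited works):
\begin{verbatim}
import numpy as np

def kron_power(K, M):
    # M-times Kronecker product of K with itself over F_2
    A = np.array([[1]], dtype=np.uint8)
    for _ in range(M):
        A = np.kron(A, K) % 2
    return A

def encode(u, K, M):
    # c = u K^{(x)M}; frozen positions of u are assumed already zero
    return u @ kron_power(K, M) % 2
\end{verbatim}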
Consider transmission of codeword $c_{[N]}=\overline u_{[N]}K^{\otimes M}$ through a binary-input memoryless channel $\mathcal{W}:\mathbb{F}\to\mathcal{Y}$.
The SC\ decoding algorithm makes successive estimations $\hat u_\varphi$ of symbols $\overline u_\varphi$, $\varphi\in[N]$.
On phase $\varphi$, for $u_\varphi\in\mathbb{F}$ the SC decoding algorithm calculates the value of $W^{(\varphi)}_N(y_0^{N-1}, \hat u_{[\varphi]}|u_\varphi)$, defined as
\begin{align}
W^{(\varphi)}_N(y_{[N]}, u_{[\varphi]}|u_{\varphi}) = 2^{-N}\cdot\!\!\!\!\!\!\!\!\!\!\sum_{u_{\varphi+1}^{N-1} \in \mathbb{F}^{N-\varphi-1}}\!\!\!\!\!\!\!\!\! \mathcal{W}^N(y_{[N]}|u_{[N]}K^{\otimes M}),
\label{eq:wdef}
\end{align}
where $\mathcal{W}^N(y_{[N]}|c_{[N]})=\prod_{i=0}^{N-1}\mathcal{W}(y_i|c_i)$.
Then, the estimation of $\overline u_\varphi$ is made by
\begin{align}
\hat u_\varphi=\begin{cases}
0, &\varphi\in\mathcal{F} \\
\arg \displaystyle\max_{u_{\varphi}\in\mathbb{F}}W^{(\varphi)}_N(y_{[N]},\hat u_{[\varphi]}|u_\varphi) , &\varphi \in \mathcal{I}.
\end{cases}
\label{eq:hd}
\end{align}
Computing \eqref{eq:wdef} can be done recursively by
\ifonecol
\begin{align}
W^{(ni+j)}_N(u_0^{ni+j}|y_0^{N-1})=
\sum_{u_{ni+j+1}^{ni+n-1}}\prod_{s=0}^{n-1}\!W_{N/n}^{(j)}\left((u_{nt}^{nt+n-1}K)_s,t\in[j\!+\!1]\big|y_{sN/n}^{sN/n+N/n-1}\right)\!.
\label{eq:wrec}
\end{align}
\else
\begin{align}
&W^{(ni+j)}_N(u_0^{ni+j}|y_0^{N-1})=\nonumber\\
&\sum_{u_{ni+j+1}^{ni+n-1}}\prod_{s=0}^{n-1}\!W_{N/n}^{(j)}\left((u_{nt}^{nt+n-1}K)_s,t\in[j\!+\!1]\big|y_{N/ns}^{N/ns+N/n-1}\right)\!.
\label{eq:wrec}
\end{align}
\fi
If transmitted $\overline u_i\in\mathbb{F}$ are uniformly distributed, then \eqref{eq:wrec} is equal to \eqref{eq:wdef} multiplied by a constant which does not affect maximization \eqref{eq:hd}.
Computing \eqref{eq:wrec} on one layer of recursion for all $j\in[n]$ is called \textit{kernel processing}.
\subsection{Scaling Exponent and Polarization Rate}
In this paper we consider two polarization properties of a kernel, namely, scaling exponent and polarization rate, which can be used to estimate performance of polar codes with a given kernel.
Polar codes are based on the polarization phenomenon: as $N\to\infty$, some of the channels $W^{(\varphi)}_N$ tend to a noiseless channel, while the others tend to a completely noisy one.
The Bhattacharyya parameter of a binary-input channel $W$ with output alphabet $\mathcal{Y}$ is used as an upper bound on error probability of channel $W$.
It is defined as
\begin{align}
Z(W)=\sum_{y\in\mathcal{Y}}\sqrt{W(y|0)W(y|1)}.
\label{eq:bhadef}
\end{align}
\textit{Scaling exponent} \cite{fazeli2014scaling,hassani2014finitelength} is defined for channel $W$ and kernel $K$ as number $\mu(W,K)$, such that there exists a finite non-zero value of
\begin{align}
\lim_{N\to\infty}\frac{\#\set{i|\epsilon < Z(W^{(i)}_N)<1-\epsilon'}}{N}\cdot N^{1/\mu(W,K)}
\end{align}
for any $0<\epsilon<1-\epsilon'<1$, where $N=n^M$.
Such a number is not yet proven to exist.
We assume it exists (this assumption is also known as the scaling assumption \cite{yao2019explicit}).
\textit{Polarization rate} is defined for a kernel (independent of the underlying channel) as number $E(K)$, such that:
\begin{align*}
\forall \beta < E(K):& \liminf_{N\to\infty}\frac{\#\set{i|Z(W^{(i)}_N)\leq 2^{-N^{\beta}}}}{N}=I(W),\\
\forall \beta > E(K):& \liminf_{N\to\infty}\frac{\#\set{i|Z(W^{(i)}_N)\geq 2^{-N^{\beta}}}}{N}=1,
\end{align*}
where $I(W)$ denotes the capacity of channel $W$.
\subsection{Convolutional Polarizing Transformation}
Convolutional polar codes \cite{ferris2017convolutional} (CvPCs) are a family of linear block codes of length $n=2^m$.
The generator matrix of a CvPC consists of rows of $n \times n$ non-singular matrix $Q^{(n)}$, called convolutional polarizing transformation
(CvPT), defined as
\begin{align}
Q^{(n)}=\left(X^{(n)}Q^{(n/2)},Z^{(n)}Q^{(n/2)}\right),
\label{eq:qdef}
\end{align}
where $Q^{(1)}=(1)$, $X^{(l)}$ and $Z^{(l)}$ are $l\times l/2$ matrices, defined for even $l$ as
\begin{align}
&X^{(l)}_{i,j} =\begin{cases} 1, & \text{if } 2j\leq i \leq 2j+2\\
0, & \text{otherwise}
\end{cases}
\label{eq:xdef}
\\
&Z^{(l)}_{i,j} =\begin{cases} 1, & \text{if } 2j< i \leq 2j+2\\
0, & \text{otherwise}
\end{cases}
\label{eq:zdef}
\end{align}
For example,
$$X^{(8)}=\begin{pmatrix}11100000\\00111000\\00001110\\00000011\end{pmatrix}^T,
Z^{(8)}=\begin{pmatrix}01100000\\00011000\\00000110\\00000001\end{pmatrix}^T.$$
Expansion \eqref{eq:qdef} corresponds to one layer of the CvPT, which is depicted in Fig.~\ref{fig:cvpt}.
The $m$-th layer of the CvPT is a mapping of vector $u_0^{n-1}$ to vectors $x_0^{n/2-1}=u_0^{n-1}X^{(n)}$ and $z_0^{n/2-1}=u_0^{n-1}Z^{(n)}$,
where
\ifonecol
\begin{align}
x_i=u_{2i}+u_{2i+1}+u_{2i+2}, z_i=u_{2i+1}+u_{2i+2},i\leq\frac{n}{2}-2; \;
x_{n/2-1}=u_{n-2}+u_{n-1}, z_{n/2-1}=u_{n-1}.
\label{eq:xz}
\end{align}
\else
\begin{align}
x_i=u_{2i}+u_{2i+1}+u_{2i+2},\; &z_i=u_{2i+1}+u_{2i+2},i\leq\frac{n}{2}-2; \nonumber\\
x_{n/2-1}=u_{n-2}+u_{n-1}, \;&z_{n/2-1}=u_{n-1}.
\label{eq:xz}
\end{align}
\fi
\begin{figure}
\centering
\input{tikz-cpt.tex}
\caption{Convolutional polarizing transformation $Q^{(n)}$}
\label{fig:cvpt}
\end{figure}
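A compact way to see \eqref{eq:qdef}--\eqref{eq:xz} in action is to implement the transformation recursively. The Python sketch below (ours) reproduces, e.g., $c_{[4]}=(u_0+u_1+u_3,u_2+u_3,u_1+u_2+u_3,u_3)$ for $n=4$.
\begin{verbatim}
def cvpt(u):
    # c = u Q^(n) over F_2, with n = len(u) a power of two
    n = len(u)
    if n == 1:
        return list(u)
    x = [u[2*i] ^ u[2*i+1] ^ u[2*i+2] for i in range(n//2 - 1)]
    z = [u[2*i+1] ^ u[2*i+2] for i in range(n//2 - 1)]
    x.append(u[n-2] ^ u[n-1])   # boundary pair, Eq. (xz)
    z.append(u[n-1])
    return cvpt(x) + cvpt(z)    # Eq. (qdef)
\end{verbatim}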
\subsection{Polarization Behavior (PB)}
For a given kernel $K$, scaling exponent for BEC\ and polarization rate
can be obtained from so-called polarization behaviour, which is defined as follows.
Consider transmission of codeword $c_0^{n-1}=u_0^{n-1}K$ through BEC $\mathcal{W}$.
Denote by $\mathcal{E}\subseteq [n]$ the erasure configuration, i.e., the set of erased positions of $c_0^{n-1}$.
Consider phase $\varphi$ of SC decoding. Assume that all symbols $u_0^{\varphi-1}$ were estimated correctly.
Assume for simplicity $u_0^{\varphi-1}=\mathbf 0^\varphi$ (otherwise we can set $\tilde c_0^{n-1}=c_0^{n-1}+u_0^{\varphi-1}K_{[\varphi],*}$).
Each non-erased symbol $c_j, j\in\overline\mathcal{E}=[n]\setminus\mathcal{E}$ can be expressed as
$
c_j=\sum_{i=\varphi}^{n-1}u_iK_{i,j}=u_\varphi^{n-1}K_{\overline{[\varphi]},\set{j}}, \;j\in\overline\mathcal{E},
$
where symbol $\bigcdot$ denotes dot product of two vectors with the same dimension over $\mathbb{F}$. Given $c_{\overline{\mathcal{E}}}$, the receiver can compute any linear combination $\sum_{j\in\overline{\mathcal{E}}} b_jc_j$, which is also a linear combination of input symbols $u_{\varphi}^{n-1}$. The receiver can recover any linear combination of the form
\ifonecol
\begin{align}
\sum_{j\in\overline{\mathcal{E}}}b_jc_j=\sum_{j\in\overline{\mathcal{E}}}b_j\sum_{i=\varphi}^{n-1}u_iK_{i,j}=\sum_{i=\varphi}^{n-1}u_i\sum_{j\in\overline{\mathcal{E}}}b_jK_{i,j}
=u_{\varphi}^{n-1}\ \bigcdot p_0^{n-\varphi-1},\; p_0^{n-\varphi-1}\in\cs\hat K,
\label{eq:rec}
\end{align}
\else
\begin{align}
&\sum_{j\in\overline{\mathcal{E}}}b_jc_j=\sum_{j\in\overline{\mathcal{E}}}b_j\sum_{i=\varphi}^{n-1}u_iK_{i,j}=\sum_{i=\varphi}^{n-1}u_i\sum_{j\in\overline{\mathcal{E}}}b_jK_{i,j}
\nonumber\\&
=u_{\varphi}^{n-1}\ \bigcdot p_0^{n-\varphi-1},\; p_0^{n-\varphi-1}\in\cs\hat K,
\label{eq:rec}
\end{align}
\fi
where $\hat K=K_{\overline{[\varphi]},\overline{\mathcal{E}}}$ and $\cs \hat K$ denotes the column space of matrix $\hat K$.
Symbol $u_{\varphi}$ corresponds to linear combination $u_{\varphi}^{n-1}\bigcdot~(1,\mathbf 0^{n-\varphi-1})$.
Thus, $u_{\varphi}$ is erased iff $(1,0,...,0)\notin \cs \hat K$.
\begin{definition}
\label{d:pb}
\textit{Polarization behavior (PB)} of $n\times n$ kernel $K$ is a collection of $n$ polynomials $P^{(0)}(x),...,P^{(n-1)}(x)$, where each polynomial $P^{(\varphi)}(x)=\sum_{w=0}^{n}A_wx^w$ is the weight enumerator of erasure configurations that erase $u_\varphi$:
$$
A_w=\left|\set{\mathcal{E}\subseteq[n]\ \big|\ (1,\mathbf 0^{n-\varphi-1})\notin\cs K_{\overline{[\varphi]},\overline{\mathcal{E}}} \text{ and }|\mathcal{E}|=w}\right|.
$$
\end{definition}
Knowing PB, one can compute scaling exponent for BEC by the algorithm presented in \cite{hassani2014finitelength}.
In the following section, we present an algorithm for computing PB of $K=Q^{(n)}$.
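Definition~\ref{d:pb} also suggests a brute-force baseline: for small kernels one can enumerate all $2^n$ erasure configurations and test membership of $(1,\mathbf 0^{n-\varphi-1})$ in the column space by Gaussian elimination over $\mathbb{F}$. A Python sketch (ours; exponential in $n$ and intended only for cross-checking small cases):
\begin{verbatim}
def in_span(vectors, target):
    # F_2 Gaussian elimination; vectors/target are int bitmasks
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    for b in basis:
        target = min(target, target ^ b)
    return target == 0

def pb_polynomial(K, phi):
    # K: list of n rows, each a list of n bits; returns [A_0,...,A_n]
    n = len(K)
    cols = [sum(K[phi + t][j] << t for t in range(n - phi))
            for j in range(n)]
    poly = [0] * (n + 1)
    for mask in range(1 << n):         # mask encodes the erasures
        known = [cols[j] for j in range(n) if not (mask >> j) & 1]
        if not in_span(known, 1):      # 1 = the vector (1,0,...,0)
            poly[bin(mask).count("1")] += 1
    return poly
\end{verbatim}
For $Q^{(4)}$ and $\varphi=0$ this yields $P^{(0)}(x)=x^4+4x^3+6x^2+4x$, consistent with summing the $P^{(0,\mathcal{T}_i)}$ of Table~\ref{t:gpb4} over all subspaces not containing $(1,0,0)$.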
\section{Computing Scaling Exponent for Convolutional Polar Kernel \label{s:se}}
\subsection{General Description of the Algorithm}
Our algorithm for computing scaling exponent for CvPK consists of three steps:
\begin{enumerate}
\item Compute generalized polarization behaviour (GPB) of CvPK by the recursion, described in Section~\ref{s:gpbcvpk}.
\item Convert GPB to PB, as given in Section~\ref{s:gpb2pb}.
\item Given PB for CvPK, compute scaling exponent for BEC by the algorithm, presented in \cite{hassani2014finitelength} (we do not describe it in this paper).
\end{enumerate}
The proposed algorithm is similar to the algorithm in \cite{morozov2019distance} for computing partial distances of CvPT.
After publishing \cite{morozov2019distance} we found that partial distances of CvPT can be computed with a much simpler algorithm \cite{morozov2019simplified}.
However, computing PB of CvPK requires one to fully employ the idea of \cite{morozov2019distance}.
Furthermore, we believe that our approach can be extended to compute PB for an arbitrary kernel.
We provide a list of variables, used in this section, in Table~\ref{t:not} for the reader's convenience.
\begin{table}
\centering
\caption{The summary of notations.}
\label{t:not}
\begin{tabular}{|m{0.21\linewidth}|m{0.64\linewidth}|}
\hline
$\mathbb{F}$ & The binary field
\rule{0pt}{8pt}
\\\hline
kernel $K$ & Any non-singular binary $n\times n$ matrix which is not upper-triangular under any column permutation
\\\hline
$\cs A$ & The column space of matrix $A$
\rule{0pt}{9pt}
\\\hline
$[n]$ & Set $\set{0,1,...,n-1}$
\rule{0pt}{9pt}
\\\hline
$\overline \mathcal{S}$ & For a set $\mathcal{S}\subseteq[n]$, the complement to $[n]$
\rule{0pt}{9pt}
\\\hline
$a_{\mathcal{A}}$ & \rule{0pt}{8pt}A subvector of vector $a_0^{t-1}=a_{[t]}$ with ascending indices from set $\mathcal{A}\subseteq[t]$
\\\hline
$u_{[n]}$ & Input vector, which is multiplied by kernel $K$
\\\hline
$c_{[n]}$ & Output vector $c_{[n]}=u_{[n]}K$
\rule{0pt}{9pt}
\\\hline
$\varphi$ & The phase of SC decoding; the number of first elements of $u$ that we have already estimated correctly.
Due to linearity we assume $u_{[\varphi]}=\mathbf 0$
\\ \hline
erasure configuration $\mathcal{E}$ & \rule{0pt}{7pt}
The set of erased positions $\mathcal{E}\subseteq [n]$ of $c_{[n]}$.
After erasures, the receiver knows $c_{\overline{\mathcal{E}}}$
\rule{0pt}{7pt}
\\\hline
$\mathcal{E}'$, $\mathcal{E}''$ &
\rule{0pt}{7pt}
Given the erasure configuration $\mathcal{E}$ of $c_{[n]}$, $\mathcal{E}'$ is the e.c. of $c_{[n/2]}$ and $\mathcal{E}''$ is the e.c. of $c_{n/2}^{n-1}$
\rule{0pt}{5pt}
\\\hline
$P^{(\varphi)}(x)$ & For an $n\times n$ kernel $K$, the weight enumerator polynomial of erasure configurations of $c_{[n]}=u_{[n]}K$
that erase input symbol $u_\varphi$.
Monomial $ax^b$ means that there are $a$ such erasure configurations of cardinality $b$
\\\hline
PB, polarization behaviour (Def.~\ref{d:pb}) & The collection of $P^{(\varphi)}(x)$ for each $\varphi$
\\\hline
$\mathbb{S}_J$ & The set of all linear subspaces of $\mathbb{F}^J$ ($\mathbb{S}_J\subseteq 2^{\mathbb{F}^J}$)
\rule{0pt}{10pt}
\\\hline
$a\bigcdot b$ & Dot product $\sum_i a_ib_i$ of vectors $a$ and $b$
\rule{0pt}{8pt}
\\\hline
$(\mathcal{E},\varphi)$-recoverable vector (Def.~\ref{d:chi})& \rule{0pt}{8pt}
Any vector $p\in\mathbb{F}^3$, s. t. the value of $p\bigcdot u_{\varphi}^{\varphi+2}$ can
be computed from subvector $c_{\overline{\mathcal{E}}}$ of codeword $c_{[n]}=u_{[n]}K$. This condition is equivalent to $(p,\mathbf 0)\in\cs K_{\overline{[\varphi]}, \overline{\mathcal{E}}}$
\\\hline
$\chi_{\varphi}(\mathcal{E})$ (Def.~\ref{d:chi}) &
\rule{0pt}{8pt}
The set of all $(\mathcal{E},\varphi)$-recoverable vectors (the kernel is assumed to be clear from the context)
\\\hline
$P^{(\varphi,\mathcal{S})}(x)$ & For an $n\times n$ kernel $K$, the weight enumerator polynomial of erasure configurations $\mathcal{E}$ for
which $\chi_\varphi(\mathcal{E})=\mathcal{S}$
\\\hline
GPB, generalized PB (Def.~\ref{d:gpb}) &
The collection of polynomials $P^{(\varphi,\mathcal{S})}(x)$ for each $\varphi\in [n-2]$ and $\mathcal{S}\in\mathbb{S}_3$.
\\\hline
$\hull{001,101}$ & The set of all linear combinations of vectors listed inside $\hull{}$. By default, $\hull{}=\set{\mathbf 0}$
\\\hline
\end{tabular}
\end{table}
\subsection{Generalized Polarization Behaviour (GPB)}
Polarization behaviour (PB) characterizes the weight spectrum of erasure configurations that erase $u_\varphi$.
We found no simple recursion for the convolutional polar kernel $K=Q^{(n)}$ that, given the PB of $Q^{(n/2)}$, allows one to obtain the PB of $Q^{(n)}$.
However, we can obtain recursive formulae for enumerators which count erasure configurations that erase some linear combinations of symbols $u_{\varphi}^{\varphi+2}$.
Thus, after we generalize the definition of PB to GPB, the GPB of $Q^{(n)}$ can be computed recursively and then converted
to PB.
Assume that the receiver knows $u_0^{\varphi-1}$.
Consider linear combination $p_0^2\bigcdot u_{\varphi}^{\varphi+2}$ of three adjacent input symbols $u_{\varphi}^{\varphi+2}$ for some given $p_0^2\in\mathbb{F}^3$.
Recalling \eqref{eq:rec}, one can see that this linear combination can be recovered after erasure configuration $\mathcal{E}$ iff $(p_0^2,\mathbf 0^{n-\varphi-3})\in \cs\hat K$, where $\hat K=K_{\overline{[\varphi]},\overline{\mathcal{E}}}$.
\begin{definition}
\label{d:chi}
Vector $p_0^{2}$ is $(\mathcal{E},\varphi)$\textit{-recoverable vector} for kernel $K$, iff $(p_0^2,\mathbf 0^{n-\varphi-3})\in\cs K_{\overline{[\varphi]}, \overline{\mathcal{E}}}$.
The set of $(\mathcal{E},\varphi)$-recoverable vectors is denoted by $\chi_{\varphi}(\mathcal{E})$ (following Greek word $\chi\acute{\omega}\rho o\varsigma$
meaning ``space'').
\end{definition}
It is easy to see that the set $\chi_\varphi(\mathcal{E})$ is indeed a linear subspace of $\mathbb{F}^3$, which we write as $\chi_\varphi(\mathcal{E})\in\mathbb{S}_3$,
denoting by $\mathbb{S}_3$ the set of all linear subspaces of $\mathbb{F}^3$. Throughout the paper, a subspace of $\mathbb{F}^3$ is specified by its basis vectors, which are comma-separated strings of $0$ and $1$ listed inside triangular brackets, e.g. $\hull{001, 110}=\hull{001, 111}=\set{\mathbf 0^3, 001, 110, 111}$.
For the sake of convenience, attach index $i\in[16]$ to each subspace $\mathcal{T}_i\in\mathbb{S}_3$ of $\mathbb{F}^3$ (see Table~\ref{t:gpb4}).
In the case of $Q^{(4)}$, $c_{[4]}=(u_0+u_1+u_3,u_2+u_3,u_1+u_2+u_3,u_3)$.
After each erasure configuration $\mathcal{E}\subseteq[4]$ the receiver knows $c_j$ for all $j\notin \mathcal{E}$, and it can compute all
linear combinations (LCs) of symbols $c_j$.
These LCs correspond to some linear combinations of $u_{[4]}$.
On phase $\varphi=0$, we are interested only in LCs of $u_0^2$, i.e., expressions $p_{[4]}\bigcdot u_{[4]}$ which do not include $u_3$, or, equivalently, when $p_3=0$.
All such $p$'s constitute some set $\mathcal{T}_i=\chi_0(\mathcal{E})$.
For the case of $\varphi=1$, we assume that we know exactly the value of $u_0$ and we can subtract it from $c_0$.
Thus, we can assume that $u_0=0$ and $\tilde{c}=(u_1+u_3,u_2+u_3,u_1+u_2+u_3,u_3)$.
After erasure configuration $\mathcal{E}$, the receiver knows $\tilde{c}_j$, $j\notin\mathcal{E}$, and all their linear combinations, which lead
to $p_{[3]}\bigcdot u_1^3$ for some $p$'s.
All such $p$'s form the set $\mathcal{T}_i=\chi_1(\mathcal{E})$.
\begin{example}
Let us compute $\chi_0(\set{0,3})$ for $Q^{(4)}$.
In this case, $\varphi=0$, $\mathcal{E}=\set{0,3}$, $c_{\overline{\mathcal{E}}}=c_{\set{1,2}}$ and
\ifonecol
\begin{align*}
K=Q^{(4)}=\begin{pmatrix} 1000\\1010\\0110\\1111\end{pmatrix},\hat K=Q^{(4)}_{*, \set{1,2}}=\begin{pmatrix}00\\01\\11\\11\end{pmatrix},
c_{\set{1,2}}=(u_2+u_3, u_1+u_2+u_3).
\end{align*}
\else
\begin{align*}
&K=Q^{(4)}=\begin{pmatrix} 1000\\1010\\0110\\1111\end{pmatrix},\hat K=Q^{(4)}_{*, \set{1,2}}=\begin{pmatrix}00\\01\\11\\11\end{pmatrix},\\
&c_{\set{1,2}}=(u_2+u_3, u_1+u_2+u_3).
\end{align*}
\fi
After erasures, the receiver knows $u_2+u_3$ and $u_1+u_2+u_3$, which are not linear combinations of symbols $u_0^2$ as they
include $u_3$. However, the sum $c_1+c_2=u_1$ is a linear combination $p_0^2 \bigcdot
u_0^2$ with $p_0^2=(010$).
Thus, $\chi_0(\set{0,3})=\hull{010}$.
Another way of thinking is to observe that $\cs \hat K=\set{\mathbf 0^4, 0011, 0111, 0100}$.
Vectors, corresponding to linear combinations of $u_0^2$, have the last zero element.
These vectors are $\set{\mathbf 0^4, 0100}$.
Removing the last element, which corresponds to the zero coefficient before $u_3$, we obtain $\chi_0(\set{0,3})=\set{\mathbf 0^3, 010}=\hull{010}$.
\end{example}
Consider also the mapping $\chi^{-1}:\mathbb{S}_3\to 2^{2^{[n]}}$, the inverse image of $\chi$.
In words, $\chi^{-1}_\varphi(\mathcal{S})$ is the set of all erasure configurations, after which the receiver can recover linear combination $p_0^2 \bigcdot u_\varphi^{\varphi+2}$ if \textit{and only if} $p_0^2\in\mathcal{S}$.
We can imagine this mapping as dividing all $\mathcal{E}\subseteq[n]$ into $16$ ``boxes'', the $i$-th box contains those $\mathcal{E}$ for which $\chi_\varphi(\mathcal{E})=\mathcal{T}_i$.
Thus, the $i$-th box contains exactly $\chi^{-1}_\varphi(\mathcal{T}_i)$.
\begin{example}
Let us compute $\chi_0^{-1}(\hull{110})$ for $Q^{(4)}$.
In this case, $\varphi=0$, $\mathcal{S}=\set{\mathbf 0^3,110}$.
The set $\chi^{-1}_0(\hull{110})$ is the set of erasure configurations, after which the receiver can recover $u_0+u_1$ (and no other non-zero linear combination of $u_0^2$).
Consider erasure configuration $\mathcal{E}_0=\set{2}$.
The receiver knows $(c_0,c_1,c_3)=(u_0+u_1+u_3,u_2+u_3,u_3)$.
It can recover $u_0+u_1=c_0+c_3$.
But it can also recover $u_2=c_1+c_3$ and $u_0+u_1+u_2=c_0+c_1$ and others, so the space corresponding to $\mathcal{E}_0$ is not $\mathcal{S}$, though it contains it
as a proper subset.
If we erase positions $\mathcal{E}_1=\set{1,2}$, the receiver knows $(c_0,c_3)=(u_0+u_1+u_3,u_3)$, and it can compute only $c_0+c_3=u_0+u_1$.
It can be seen that there is no other erasure configuration, which leads to knowing $u_0+u_1$ and erasing all other linear combinations of symbols $u_0^2$.
So, $\chi^{-1}_0(\hull{110})=\set{\set{1,2}}$.
\end{example}
\begin{definition}
\label{d:gpb}
A \textit{generalized polarization behaviour} (GPB) for kernel $K$ is a collection of polynomials $P^{(\varphi, \mathcal{S})}(x)=\sum_{w=0}^n P^{(\varphi,\mathcal{S})}_wx^w$ for each
$\varphi\in[n-2]$ and each $\mathcal{S}\in\mathbb{S}_3$, such that
\begin{align}
P^{(\varphi,\mathcal{S})}_w=\left|\set{\mathcal{E}\subseteq[n]\;\big| \; \chi^{(n)}_\varphi(\mathcal{E})=\mathcal{S}\text{ and } |\mathcal{E}|=w}\right|.
\label{eq:gpbdef}
\end{align}
\end{definition}
In other words, $P^{(\varphi,\mathcal{S})}(x)$ is the weight enumerator polynomial of erasure configurations in $\chi_\varphi^{-1}(\mathcal{S})$.
\begin{table}
\caption{The GPB of $Q^{(4)}$}
\label{t:gpb4}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
$i$&$\mathcal{T}_i$&$\!\!P^{(0,\mathcal{T}_i)}\!\!$&$\!\!\!P^{(1,\mathcal{T}_i)}\!\!\!$&$i$&$\mathcal{T}_i$&$\!\!\!P^{(0,\mathcal{T}_i)}\!\!\!$&$\!\!\!P^{(1,\mathcal{T}_i)}\!\!\!$
\\\hline
0&$\set{\mathbf 0}$&$\!\!\!x^4+4x^3\!\!\!$&$x^4$ & 8&$\hull{100,010}$&$0$&$0$
\\\hline
1&$\hull{100}$&$0$&$0$ & 9&$\hull{100,001}$&$0$&$x^2$
\\\hline
2&$\hull{010}$&$x^2$&$0$ & 10&$\hull{010,001}$&$x$&$x^2$
\\\hline
3&$\hull{001}$&$x^2$&$x^3$ & 11&$\hull{110,001}$&$x$&$x^2$
\\\hline
4&$\hull{110}$&$x^2$&$0$ & 12&$\hull{100,011}$&$0$&$x^2$
\\\hline
5&$\hull{101}$&$x^2$&$x^3$ & 13&$\hull{101,010}$&$x$&$x^2$
\\\hline
6&$\hull{011}$&$x^2$&$x^3$ & 14&$\hull{110,101}$&$x$&$x^2$
\\\hline
7&$\hull{111}$&$x^2$&$x^3$ & 15&$\mathbb{F}^3$&$1$&$\!\!\!4x+1\!\!\!$
\\\hline
\end{tabular}
\end{table}
\begin{example}
\label{e:gpb}
The GPB of $Q^{(4)}$ is given in Table~\ref{t:gpb4}.
The GPB consists of polynomials $P^{(\varphi,\mathcal{T}_i)}(x)$ for $\varphi\in[2]$ and $i\in[16]$.
\end{example}
\subsection{Recursive Computation of GPB \label{s:gpbcvpk}}
Assume that we know GPB for kernel $Q^{(n/2)}$.
Recall that $c_0^{n-1}=u_0^{n-1}Q^{(n)}=(x_0^{n/2-1}Q^{(n/2)},z_0^{n/2-1}Q^{(n/2)})$.
Consider linear combination $p_0^2 \bigcdot u_{\varphi}^{\varphi+2}$ for some $p_0^2\in\mathbb{F}^3$.
Denote the erasure configurations of left and right half of $c_0^{n-1}$ by $\mathcal{E}'=\mathcal{E}\cap [n/2]$ and $\mathcal{E}''=\set{j-\frac{n}{2}|j\in\mathcal{E}, j\geq \frac{n}{2}}$.
Then, all recoverable $p \bigcdot u_{\varphi}^{\varphi+2}$ follow from recoverability of $p'\bigcdot x_{\psi}^{\psi+2}$ and $p''\bigcdot z_{\psi}^{\psi+2}$ for erasure configurations $\mathcal{E}'$ and $\mathcal{E}''$, respectively, for some particular $\psi\in[\frac{n}{2}]$, $p',p''\in\mathbb{F}^3$.
This connection is given by the following theorem.
\begin{theorem}
\label{t1}
Consider kernel $Q^{(n)}$, defined in \eqref{eq:qdef}--\eqref{eq:zdef}, $n\geq 8$.
For given $\mathcal{E}\subseteq[n]$ and $0\leq \varphi \leq n-3$, vector $p_0^2$ is $(\mathcal{E},\varphi)$-recoverable iff
\begin{align}
\exists p',p''\in\mathbb{F}^3: (p_0^2,\mathbf 0^{J_\varphi})=p'A_{\varphi}+p''B_\varphi,
\label{eq:t1}
\end{align}
where $p'$ and $p''$ are $(\mathcal{E}',\psi)$-recoverable and $(\mathcal{E}'',\psi)$-recoverable for kernel $Q^{(n/2)}$ and $\psi=\max\set{0,\floor{\frac{\varphi-1}{2}}}$.
The values of $J_\varphi$, $A_\varphi$, $B_\varphi$ depend on $\varphi$ as follows. For $\psi\leq\frac{n}{2}-3$:
\allowdisplaybreaks
\begin{align}
\label{eq:t1vp0}
&J_0\!=\!3, A_0=\mathbf{A}=\begin{pmatrix}111000\\001110\\000011\end{pmatrix},B_0=\mathbf{B}=\begin{pmatrix}011000\\000110\\000001\end{pmatrix}\\
\label{eq:t1vp1}
&J_{2\psi+1}=2, A_{2\psi+1}=\mathbf{A}_{*,\overline{[1]}}, B_{2\psi+1}=\mathbf{B}_{*,\overline{[1]}}\\
\label{eq:t1vp2}
&J_{2\psi+2}\!=\!1, A_{2\psi+2}=\mathbf{A}_{*,\overline{[2]}}, B_{2\psi+2}=\mathbf{B}_{*,\overline{[2]}}\\
\label{eq:t1vp3}
&J_{n-3}\!=\!0, A_{n-3}=\mathbf{A}_{*,\overline{[3]}},\normalsize B_{n-3}=\mathbf{B}_{*,\overline{[3]}}
\end{align}
\end{theorem}
\begin{proof}
The proof is in the Appendix~\ref{a:t1}.
\end{proof}
Theorem~\ref{t1} defines the relation between subspaces of known linear combinations of symbols $x_{\psi}^{\psi+2}$ and $z_{\psi}^{\psi+2}$ and subspace of known linear
combinations of $u_{\varphi}^{\varphi+2}$ for some given erasure configuration $\mathcal{E}$.
Applying this relation to each $\mathcal{E}\subseteq[n]$, one can compute weight enumerators of erasure configurations for each possible subspace of
linear combinations of symbols $u_{\varphi}^{\varphi+2}$ by the following theorem.
\begin{theorem}
\label{t2}
For given $n\geq8$, $\varphi\in[n-2]$, consider the transformation $\mathbf{T}_\varphi:\mathbb{S}_3\times\mathbb{S}_3\to\mathbb{S}_3$, which maps spaces of
$p'$ and $p''$ to space of all possible $p$'s defined by \eqref{eq:t1}:
\ifonecol
\begin{align}
\mathbf{T}_\varphi(\mathcal{S}',\mathcal{S}'')=\set{p_0^2\big|\exists p'\in\mathcal{S}', p''\in\mathcal{S}'': (p_0^2,\mathbf 0^{J_\varphi})=p'A_\varphi+p''B_\varphi},
\label{eq:tdef}
\end{align}
\else
\begin{align}
&\mathbf{T}_\varphi(\mathcal{S}',\mathcal{S}'')=\nonumber\\
&\set{p_0^2\big|\exists p'\in\mathcal{S}', p''\in\mathcal{S}'': (p_0^2,\mathbf 0^{J_\varphi})=p'A_\varphi+p''B_\varphi},
\label{eq:tdef}
\end{align}
\fi
where $J_\varphi$, $A_\varphi$, $B_\varphi$ are given in \eqref{eq:t1vp0}--\eqref{eq:t1vp3}.
Denote by $P^{(\varphi, \mathcal{S})}(x)$ the GPB of kernel $Q^{(n)}$ for phase $\varphi$, and by $R^{(\psi, \mathcal{S})}(x)$ the GPB of kernel $Q^{(n/2)}$ for phase $\psi=\max\set{0,\floor{\frac{\varphi-1}{2}}}$.
Then,
\begin{align}
P^{(\varphi, \mathcal{S})}(x)=\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}_\varphi^{-1}(\mathcal{S})}R^{(\psi,\mathcal{S}')}(x)\cdot R^{(\psi,\mathcal{S}'')}(x),
\label{eq:t2}
\end{align}
where $\mathbf{T}_\varphi^{-1}:\mathbb{S}_3\to 2^{\mathbb{S}_3\times \mathbb{S}_3}$ is the inverse image of $\mathbf{T}_\varphi$.
\end{theorem}
\begin{proof}
The proof is in the Appendix~\ref{a:t2}.
\end{proof}
\begin{example}
On the one hand, one can straightforwardly compute $\mathbf{T}_{\varphi}(\hull{010},\hull{110,001})$ for the case of odd $\varphi=2\psi+1$.
Values of $\mathcal{S}'=\hull{010}$ and $\mathcal{S}''=\hull{110,001}$ mean that, given values of $(x_0^{n/2-1}X^{(n)})_{\overline{\mathcal{E}'}}$ and $(z_0^{n/2-1}Z^{(n)})_{\overline{\mathcal{E}''}}$, the receiver knows
\begin{align*}
&f_0=(010)\bigcdot x_{\psi}^{\psi+2}=x_{\psi+1}=u_{\varphi+1}+u_{\varphi+2}+u_{\varphi+3}, \\
&f_1=(110)\bigcdot z_{\psi}^{\psi+2}=z_{\psi}\!+\!z_{\psi+1}=u_{\varphi}\!+\!u_{\varphi+1}\!+\!u_{\varphi+2}\!+\!u_{\varphi+3},\\
&f_2=z_{\psi+2}=u_{\varphi+4}+u_{\varphi+5}.
\end{align*}
Now we must find linear combinations of symbols $f_0^2$, which involve only symbols $u_{\varphi}^{\varphi+2}$.
There is only one such non-zero linear combination: $f_0+f_1=u_\varphi=(100)\bigcdot u_\varphi^{\varphi+2}$.
This means that $\mathbf{T}_{2\psi+1}(\hull{010},\hull{110,001})=\hull{100}$.
On the other hand, we can compute the same value via Theorem~\ref{t2}:
\ifonecol
\begin{align*}
\set{p'A_{2\psi+1}+p''B_{2\psi+1}}_{p'\in\mathcal{S}',p''\in\mathcal{S}''}=
\set{\mathbf 0^5, \underline{01110},\underline{11110},10000,\underline{00001}, 01111,11111,10001}.
\end{align*}
\else
\begin{align*}
&\set{p'A_{2\psi+1}+p''B_{2\psi+1}}_{p'\in\mathcal{S}',p''\in\mathcal{S}''}=\\
&\set{\mathbf 0^5, \underline{01110},\underline{11110},10000,\underline{00001}, 01111,11111,10001}.
\end{align*}
\fi
The underlined vectors correspond to $f_0,f_1,f_2$, others are their linear combinations.
From the above set, we choose vectors with last $J_\varphi=2$ zero elements.
They are $\set{\mathbf 0^5,10000}$.
Throwing away the last $2$ zeroes, we obtain $\hull{100}$.
\end{example}
\begin{corollary}
The GPB of CvPK can be computed as shown in Algorithm~\ref{alg:gpb}.
\end{corollary}
\ifonecol
\linespread{1.5}
\begin{algorithm}
\caption{GPB($m$)}
\label{alg:gpb}
\input{alg2e-gpbcvpk.tex}
\end{algorithm}
\linespread{2.0}
\else
\IncMargin{1em}
\begin{algorithm}
\caption{\texttt{GPB}($m$)}
\label{alg:gpb}
\input{alg2e-gpbcvpk.tex}
\end{algorithm}
\DecMargin{1em}
\fi
\begin{proof}
Let $\mathcal{T}_0,\mathcal{T}_1,...,\mathcal{T}_{15}$ be the subspaces of $\mathbb{F}^3$, indexed by operator $I:[16]\to\mathbb{S}_3$, which returns $\mathcal{T}_i$ by input index $i$ (for example, as given in Table~\ref{t:gpb4}).
The first loop (lines \ref{l:tinit0}--\ref{l:tinit1}) uses \eqref{eq:tdef} to compute tables $T_0,T_1,T_2,T_3:[16]\times[16]\to[16]$, which correspond to $\mathbf{T}_0$, $\mathbf{T}_{2\psi+1}$, $\mathbf{T}_{2\psi+2}$, $\mathbf{T}_{n-3}$, respectively, but work with indices $i$ instead of spaces $\mathcal{T}_i$ themselves.
For example, $T_1[i][j]=l$ in the Algorithm means $\mathbf{T}_{2\psi+1}(\mathcal{T}_i,\mathcal{T}_j)=\mathcal{T}_l$ in Theorem~\ref{t2}.
In the first loop, we run over all pairs of subspaces from $\mathbb{S}_3$.
For each pair of subspaces $(\mathcal{T}_i,\mathcal{T}_j)$, in the internal loop (lines \ref{l:pq0}--\ref{l:pq1}) we run over all possible pairs of vectors $p_0^2$ and $q_0^2$ from these subspaces, and compute $r_0^5=pA_0+qB_0$.
In line \ref{l:a0b0} we use matrices $A_0$ and $B_0$, since $A_{2\psi+1}$, $A_{2\psi+2}$, $A_{n-3}$ are submatrices of $A_0$, the same holds for matrices $B_\varphi$ (see \eqref{eq:t1vp0}--\eqref{eq:t1vp3}).
We check if the last $J_\varphi$ positions of $r_0^5$ are zero.
If so, we choose the appropriate subvector of $r_0^5$, and place it in the corresponding list $S_k$.
The list $S_k$ at the end of the internal loop is equal to $\mathcal{T}_l=\mathbf{T}_\varphi(\mathcal{T}_i,\mathcal{T}_j)$.
Then, in line~\ref{l:tinit1} we perform the inverse indexing $I^{-1}$ of spaces in $\mathbb{S}_3$ and obtain $l=T[i][j]$, defined above.
In line~\ref{l:gpb4} $P$ is initialized with the GPB of kernel $Q^{(4)}$, i.e., the array $P[0..1][0..15]$ of polynomials in $x$.
Each output value $P[\varphi][i]$ is given in Table~\ref{t:gpb4} as $P^{(\varphi,\mathcal{T}_i)}$.
\ifonecol
\linespread{1.5}
\IncMargin{1em}
\begin{algorithm}
\caption{Combine($R, T$)}
\label{alg:combine}
\input{alg2e-gpbcombine.tex}
\end{algorithm}
\DecMargin{1em}
\linespread{2.0}
\else
\IncMargin{1em
\begin{algorithm}
\caption{\texttt{Combine}($R, T$)}
\label{alg:combine}
\input{alg2e-gpbcombine.tex}
\end{algorithm}
\DecMargin{1em
\fi
In the main loop (lines~\ref{l:main0}--\ref{l:main1}) the GPB is recursively computed by Theorem~\ref{t2}.
At the beginning of iteration $\lambda$, array $P$ contains the GPB for kernel $Q^{({\Lambda/2})}$, where $\Lambda=2^\lambda$.
In line \ref{l:swap}, we swap $P$ and $R$ (as pointers), so after this line $R$ contains the GPB for $Q^{(\Lambda/2)}$.
Then, we compute GPB of kernel $Q^{(\Lambda)}$ and place it in array $P$.
In lines~\ref{l:p0}, \ref{l:p1}--\ref{l:p3} we use function \texttt{Combine}, defined in Alg.~\ref{alg:combine}, which applies \eqref{eq:t2} with input table $T[0..15][0..15]$ to the input GPB.
\end{proof}
Since the first loop of computing $\mathbf{T}_\varphi$ in lines~\ref{l:tinit0}--\ref{l:tinit1} has constant complexity, the asymptotic complexity $C_\text{total}$ of Algorithm~\ref{alg:gpb} is $C_\text{total}=\sum_{\lambda=3}^m C_{\text{main}}(\lambda)$, where $C_\text{main}(\lambda)$ is the complexity of the $\lambda$-th iteration
of the main loop.
The complexity $C_\text{main}(\lambda)$ is $\Lambda=2^\lambda$ times the complexity of function \texttt{Combine}.
The complexity of function \texttt{Combine} depends on current $\lambda$, because the degrees of input polynomials grow approximately as $\Theta(\Lambda)=\Theta(2^\lambda)$, and the polynomial coefficients grow as $\Theta(2^\Lambda)$.
Function \texttt{Combine} consists of $256$ multiplications of such polynomials.
Assume that we multiply these polynomials and their integer coefficients straightforwardly.
Then, polynomial multiplication includes $\Theta(\Lambda^2)$ multiplications of integers. Each integer has length $\Theta(\Lambda)$
and their straightforward multiplication has complexity $\Theta(\Lambda^2)$.
Thus, the complexity of \texttt{Combine} function is asymptotically
$
C_\text{combine}(\lambda)\approx\Lambda^4=16^{\lambda}.
$
The total complexity is
$$
C_\text{total}=\sum_{\lambda=3}^{m}C_\text{main}(\lambda)=\sum_{\lambda=3}^{m}\Theta(2^\lambda\cdot16^\lambda)=\Theta(32^m)=\Theta(n^5).
$$
One can reduce this complexity to $\Theta(n^3\log^2n)$ by using fast algorithms for multiplication of big integers and polynomials.
\subsection{Converting GPB to PB \label{s:gpb2pb}}
Polarization behaviour $P^{(\varphi)}(x)$ (see Definition~\ref{d:pb}) is the weight spectrum of all erasure configurations $\mathcal{G}$ that erase $u_\varphi$.
This means that linear combination $(1,0,0)\bigcdot u_\varphi^{\varphi+2}$ must not be recoverable, so $(1,0,0)\notin \chi_\varphi(\mathcal{G})$.
More formally, let $\Xi$ be the set of all erasure configurations $\mathcal{G}$ such that
$(1,\mathbf 0^{n-\varphi-1})\notin\cs K_{\overline{[\varphi]},\overline{\mathcal{G}}}$.
Then,
$$
P^{(\varphi)}(x)=\sum_{\mathcal{G}\in\Xi}x^{|\mathcal{G}|}.
$$
Observe that $\mathcal{G}\in\Xi\iff(1,\mathbf 0^{n-\varphi-1})\notin\cs K_{\overline{[\varphi]},\overline{\mathcal{G}}}\implies (1,\mathbf 0^2)\notin\chi_\varphi(\mathcal{G})$.
The reverse implication also holds and \mbox{$\mathcal{G}\in\Xi\iff(1,\mathbf 0^2)\notin\chi_\varphi(\mathcal{G})$}, which leads
to
\begin{align*}
\Xi=\bigcup_{\mathcal{S}\in\mathbb{S}_3:(1,0,0)\notin\mathcal{S}}\chi^{-1}_\varphi(\mathcal{S}).
\end{align*}
The last two equations imply
\begin{align}
P^{(\varphi)}(x)=\sum_{\mathcal{S}\in\mathbb{S}_3:(1,0,0)\notin\mathcal{S}}P^{(\varphi, \mathcal{S})}(x),
\label{eq:pb2gpb}
\end{align}
where $P^{(\varphi,\mathcal{S})}(x)$ is the GPB of $K$.
Formula \eqref{eq:pb2gpb} is defined for $\varphi\leq n-3$.
Polynomials $P^{(n-2)}(x)$ and $P^{(n-1)}(x)$ can be obtained by
\begin{align}
P^{(n-2)}(x)&=\sum_{\mathcal{S}\in\mathbb{S}_3:\forall a\in\mathbb{F}: (a,1,0)\notin\mathcal{S}}P^{(n-3, \mathcal{S})}(x) \label{eq:pb2gpb1}\\
P^{(n-1)}(x)&=\sum_{\mathcal{S}\in\mathbb{S}_3:\forall a\in\mathbb{F}^2: (a,1)\notin\mathcal{S}}P^{(n-3, \mathcal{S})}(x) \label{eq:pb2gpb2}
\end{align}
Computing each of \eqref{eq:pb2gpb}--\eqref{eq:pb2gpb2} consists of adding respectively $11$, $8$ and $5$ polynomials of degree $n$ with
integer coefficients of length $O(n)$, so the total complexity of converting GPB to PB is $n\cdot O(n^2)=O(n^3)$, which does
not affect the total asymptotic complexity.
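In code, the conversion is a filtered sum over the $16$ subspaces; a Python sketch (ours), where a subspace is represented as a frozenset of 3-bit masks with bit $k$ holding $p_k$:
\begin{verbatim}
def gpb_to_pb(gpb, n):
    # gpb[phi][S]: coefficient list of P^(phi,S) for phi in [n-2];
    # returns the PB as a list of coefficient lists
    def total(phi, keep):
        acc = [0] * (n + 1)
        for S, poly in gpb[phi].items():
            if keep(S):
                acc = [a + b for a, b in zip(acc, poly)]
        return acc
    pb = [total(phi, lambda S: 0b001 not in S)      # Eq. (pb2gpb)
          for phi in range(n - 2)]
    pb.append(total(n - 3,                          # Eq. (pb2gpb1)
              lambda S: all(m not in S for m in (0b010, 0b011))))
    pb.append(total(n - 3,                          # Eq. (pb2gpb2)
              lambda S: all(m not in S for m in (4, 5, 6, 7))))
    return pb
\end{verbatim}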
\subsection{Polarization Rate of CvPK \label{s:secvpk}}
Polarization rate of an $n\times n$ polarizing kernel $K$ can be obtained as \cite{korada2010polar}
\begin{align}
E(K)=\frac{1}{n}\sum_{i=0}^{n-1}\log_nd_i,
\label{eq:rho}
\end{align}
where $d_i$ is called the $i$-th partial distance
and is defined by
\begin{align}
d_i=\min_{u_{i+1}^{n-1}\in\mathbb{F}^{n-i-1}}\mathbf{w}\left((1,u_{i+1}^{n-1})K_{\overline{[i]},*}\right).
\label{eq:di}
\end{align}
Observe that $d_i$ is the minimum degree of a non-zero monomial in $P^{(i)}(x)$ (see e.g. \cite{morozov2019distance} for the proof).
So, the values of $d_i$ for $Q^{(n)}$ can be easily obtained from PB of $Q^{(n)}$.
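Given the PB as coefficient lists, both steps are one-liners (a Python sketch, ours):
\begin{verbatim}
import math

def partial_distances(pb):
    # d_i = minimum degree with a nonzero coefficient in P^(i)(x)
    return [next(w for w, a in enumerate(p) if a) for p in pb]

def polarization_rate(pb):
    # Eq. (rho): average of log_n d_i
    n = len(pb)
    return sum(math.log(d, n) for d in partial_distances(pb)) / n
\end{verbatim}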
\subsection{Row-permuted CvPKs \label{s:perm}}
We observed that one can permute rows of $Q^{(n)}$ and obtain a better scaling exponent.
Moreover, we found a permutation that hardly affects either the kernel processing algorithm or the Alg.~\ref{alg:gpb} for computing the GPB of a CvPK.
We start with a proposition, which shows how to construct kernel $\widetilde{K}$ from a given $K$ with improved polarization rate in general.
\begin{proposition}
\label{p:swapd}
Consider an $n\times n$ kernel $K$ and $i\in[n-1]$, for which $d_i\geq d_{i+1}$.
Swap rows $i$ and $i+1$ and denote the resulting kernel by $\widetilde{K}$.
Then, $E(\widetilde{K})\geq E(K)$.
\end{proposition}
\begin{proof}
Denote disjoint sets
\begin{align*}
\mathcal{A}=\set{c_0^{n-1}\!=\!(1,0,u_{i+2}^{n-1})K_{\overline{[i]},*}\;\big|\;u_{i+2}^{n-1}\in\mathbb{F}^{n-i-2}}\\
\mathcal{B}=\set{c_0^{n-1}\!=\!(0,1,u_{i+2}^{n-1})K_{\overline{[i]},*}\;\big|\;u_{i+2}^{n-1}\in\mathbb{F}^{n-i-2}}\\
\mathcal{C}=\set{c_0^{n-1}\!=\!(1,1,u_{i+2}^{n-1})K_{\overline{[i]},*}\;\big|\;u_{i+2}^{n-1}\in\mathbb{F}^{n-i-2}}
\end{align*}
For set of vectors $S$, denote by $\underline{\mathbf{w}}(S)$ the minimum weight of vector from $S$.
Observe that
\begin{align*}
d_i&=\underline{\mathbf{w}}(\mathcal{A}\cup\mathcal{C})=\min\set{\underline{\mathbf{w}}(\mathcal{A}), \underline{\mathbf{w}}(\mathcal{C})}\\
d_{i+1}&=\underline{\mathbf{w}}(\mathcal{B})\leq d_i \implies \underline{\mathbf{w}}(\mathcal{B})\leq\underline{\mathbf{w}}(\mathcal{C})\\
\widetilde{d}_i&=\underline{\mathbf{w}}(\mathcal{B}\cup\mathcal{C})=\underline{\mathbf{w}}(\mathcal{B})=d_{i+1}\\
\widetilde{d}_{i+1}&=\underline{\mathbf{w}}(\mathcal{A})\geq\underline{\mathbf{w}}(\mathcal{A}\cup\mathcal{C})=d_i,
\end{align*}
where $\widetilde{d}_0^{n-1}$ are the partial distances of $\widetilde{K}$.
Thus, $\widetilde{d}_i=d_{i+1}$, $\widetilde{d}_{i+1}\geq d_{i}$.
Obviously, $\widetilde{d}_j=d_j$ for $j\notin\set{i,i+1}$.
Recalling \eqref{eq:rho}, one obtains $E(\widetilde{K})\geq E(K)$.
\end{proof}
We can apply the proposition multiple times, obtaining a bubble sort of the rows by their partial distances.
\begin{corollary}
\label{c:ok}
Denote by $\overline K$ the kernel whose rows are the rows of $K$ sorted by $d_i$ in ascending order. Then, $E(\overline K)\geq E(K)$.
\end{corollary}
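A minimal Python sketch of this construction, assuming $K$ is given as a list of rows and $d$ as the list of partial distances, reads as follows; a stable sort keeps rows with equal $d_i$ in their original relative order, so the result is reachable by the adjacent swaps of Proposition~\ref{p:swapd}.
\begin{verbatim}
def sort_rows_by_partial_distance(K, d):
    # ascending stable sort of the rows of K by their partial distances
    order = sorted(range(len(d)), key=lambda i: d[i])
    return [K[i] for i in order]
\end{verbatim}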
\begin{corollary}
\label{c:swapp}
Let $d_i=d_{i+1}=w$ and $P^{(i)}_w<P^{(i+1)}_w$, where $d_*$ and $P^{(*)}$ are the partial distances and the PB of kernel $K$, respectively.
Swap rows $i$ and $i+1$ and denote the resulting kernel by $\widetilde{K}$.
Then, $\widetilde P^{(i+1)}_w\leq P^{(i)}_w<P_w^{(i+1)}\leq \widetilde P^{(i)}_w$, where $\widetilde P^{(*)}$ is the PB of $\widetilde{K}$.
\end{corollary}
\begin{proof}
For a set of vectors $S$, denote by $S_w$ the set of all vectors from $S$ with weight $w$.
Then, $P^{(i)}_w=|\mathcal{A}_w|+|\mathcal{C}_w|$, $P^{(i+1)}_w=|\mathcal{B}_w|$ and
\begin{align*}
\widetilde P^{(i)}_w&=|\mathcal{B}_w|+|\mathcal{C}_w|\geq P_w^{(i+1)}\\
\widetilde P^{(i+1)}_w&=|\mathcal{A}_w|\leq P_w^{(i)}.
\end{align*}
Thus, $\widetilde P^{(i+1)}_w\leq P^{(i)}_w<P_w^{(i+1)}\leq \widetilde P^{(i)}_w$.
\end{proof}
\begin{remark}
\label{r:betterse}
Intuitively, in the pair of subchannels $W^{(i)}$ and $W^{(i+1)}$ induced by the kernel from Corollary~\ref{c:swapp}, swapping the rows makes the ``bad'' subchannel ``worse'' and the ``good'' one ``better''.
This suggests that $\mu(\widetilde{K})\leq\mu(K)$.
Also, by Proposition~\ref{p:swapd}, $E(\widetilde{K})\geq E(K)$.
\end{remark}
\begin{remark}
\label{r:tq}
We observed that for a CvPK, $d_{2i}\geq d_{2i+1}$ for $i=2,\ldots,n/2-3$.
Denote by $\widetilde Q^{(n)}$ the kernel $Q^{(n)}$ with the $2i$-th and $(2i+1)$-th rows swapped for $i=2,\ldots,n/2-3$.
One can easily obtain the PB $\widetilde P^{(\varphi)}(x)$ of kernel $\widetilde Q^{(n)}$ from the GPB $P^{(\varphi,\mathcal{S})}(x)$ of kernel $Q^{(n)}$ by formulae similar to \eqref{eq:pb2gpb}--\eqref{eq:pb2gpb2}:
\begin{align}
\widetilde P^{(\varphi)}(x)&=P^{(\varphi)}(x), \text{ for }\varphi\leq 3 \text{ or } \varphi\geq n-4, \label{eq:tp0}\\
\widetilde P^{(2i)}(x)&=\sum_{\mathcal{S}\in\mathbb{S}_3: (0,1,0)\notin\mathcal{S}} P^{(2i, \mathcal{S})}(x), \label{eq:tp1}\\
\widetilde P^{(2i+1)}(x)&=\sum_{\mathcal{S}\in\mathbb{S}_3: \forall a\in\mathbb{F}: (1,a,0)\notin\mathcal{S}}P^{(2i, \mathcal{S})}(x). \label{eq:tp2}
\end{align}
\end{remark}
Also, SC decoding for $\widetilde Q^{(n)}$ is very similar to SC decoding for $Q^{(n)}$, as described in Appendix~\ref{s:tq}.
\section{Numerical Results \label{s:nr}}
\subsection{Scaling Exponent and Polarization Rate}
\ifonecol
\begin{table}
\parbox{.6\textwidth}{
\centering
\caption{Polarization rate $E$ and scaling exponent $\mu$ of CvPK of size $n$. Best $\mu$ corresponds to a known kernel with the lowest scaling exponent from \cite{yao2019explicit}.}
\label{t:se}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $\!E(Q^{(n)})\!$ & $\!\underline E(B^{(n)})\!$ & $\!\mu(Q^{(n)})\!$ & $\!\mu(\widetilde Q^{(n)})\!$ & $\!\mu(\overline Q^{(n)})\!$ & best $\!\mu\!$ \\ \hline
4 & 0.5 & 0.5 & 3.627 & 3.627 & 3.627 & 3.627 \\ \hline
8 & 0.5 & 0.5 & 3.577 & 3.577 & 3.577 & 3.577 \\ \hline
16 & 0.50914 & 0.51828 & 3.470 & 3.409 & 3.400 & 3.346 \\ \hline
32 & 0.52194 & 0.53656 & 3.382 & 3.316 & 3.153 & 3.122 \\ \hline
64 & 0.52923 & 0.56427 & 3.333 & 3.283 & & 2.87 \\ \hline
128 & 0.53482 & 0.58775 & 3.310 & \textbf{3.277} & & \\ \hline
256 & 0.53865 & 0.61333 & \textbf{3.303} & \textbf{3.283} & & \\ \hline
512 & 0.54106 & 0.63559 & \textbf{3.308} & \textbf{3.296} & & \\ \hline
1024 & 0.54260 & 0.65688 & \textbf{3.317} & \textbf{3.311} & & \\ \hline
\end{tabular}
}
\parbox{.3\textwidth}{
\centering
\caption{Polarization rate $E$ of large CvPK of size $n$.}
\label{t:pr}
\begin{tabular}{|c|c|c|}
\hline
$n$ & $E(Q^{(n)})$ & $\!\underline E(B^{(n)})\!$ \\ \hline
2048 & 0.54351 & 0.67558 \\ \hline
4096 & 0.54398 & 0.69274 \\ \hline
8192 & \textbf{0.54414} & 0.70802 \\ \hline
16384 & \textbf{0.54408} & 0.72187 \\ \hline
32768 & \textbf{0.54386} & 0.73432 \\ \hline
65536 & \textbf{0.54353} & 0.74564 \\ \hline
\end{tabular}
}
\end{table}
\else
\begin{table}
\centering
\caption{Polarization rate $E$ and scaling exponent $\mu$ of convolutional polar kernels of size $n$. Best $\mu$ corresponds to a known kernel with the lowest scaling exponent
from \cite{yao2019explicit}.}
\label{t:se}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $\!E(Q^{(n)})\!$ & $\!\underline E(B^{(n)})\!$ & $\!\mu(Q^{(n)})\!$ & $\!\mu(\widetilde Q^{(n)})\!$ & $\!\mu(\overline Q^{(n)})\!$ & best $\!\mu\!$ \\ \hline
4 & 0.5 & 0.5 & 3.627 & 3.627 & 3.627 & 3.627 \\ \hline
8 & 0.5 & 0.5 & 3.577 & 3.577 & 3.577 & 3.577 \\ \hline
16 & 0.50914 & 0.51828 & 3.470 & 3.409 & 3.400 & 3.346 \\ \hline
32 & 0.52194 & 0.53656 & 3.382 & 3.316 & 3.153 & 3.122 \\ \hline
64 & 0.52923 & 0.56427 & 3.333 & 3.283 & & 2.87 \\ \hline
128 & 0.53482 & 0.58775 & 3.310 & \textbf{3.277} & & \\ \hline
256 & 0.53865 & 0.61333 & \textbf{3.303} & \textbf{3.283} & & \\ \hline
512 & 0.54106 & 0.63559 & \textbf{3.308} & \textbf{3.296} & & \\ \hline
1024 & 0.54260 & 0.65688 & \textbf{3.317} & \textbf{3.311} & & \\ \hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Polarization rate $E$ of large CvPK of size $n$.}
\label{t:pr}
\begin{tabular}{|c|c|c|}
\hline
$n$ & $E(Q^{(n)})$ & $\!\underline E(B^{(n)})\!$ \\ \hline
2048 & 0.54351 & 0.67558 \\ \hline
4096 & 0.54398 & 0.69274 \\ \hline
8192 & \textbf{0.54414} & 0.70802 \\ \hline
16384 & \textbf{0.54408} & 0.72187 \\ \hline
32768 & \textbf{0.54386} & 0.73432 \\ \hline
65536 & \textbf{0.54353} & 0.74564 \\ \hline
\end{tabular}
\end{table}
\fi
In Table~\ref{t:se} one can see the computed values of the scaling exponent for the BEC and of the polarization rate for kernels $Q^{(n)}$ and $\widetilde Q^{(n)}$.
Since the PB of these kernels can be obtained by a polynomial-complexity algorithm, we obtain their scaling exponents for $n$ up to $1024$.
Remark~\ref{r:betterse} suggests $\mu(\widetilde Q^{(n)})\leq\mu(Q^{(n)})$.
Although we do not prove this inequality, one can see in Table~\ref{t:se} that it indeed holds for all $n\leq 1024$, becoming strict for $n\geq 16$.
We also provide the scaling exponent of kernel $\overline Q^{(n)}$, consisting of the rows of $Q^{(n)}$ sorted by partial distances, as described in Corollary~\ref{c:ok}; additionally, some adjacent rows were sorted by $P^{(i)}_w$, as described in Corollary~\ref{c:swapp}.
The specific row permutations $\pi_{16}$ and $\pi_{32}$, corresponding to $\overline Q^{(16)}_{i,*}=Q^{(16)}_{\pi_{16}(i),*}$ and $\overline Q^{(32)}_{i,*}=Q^{(32)}_{\pi_{32}(i),*}$, are
\small
\ifonecol
\begin{align*}
&\pi_{16}=(0, 1, 2, 3, 5, 4, 7, 6, 10,8, 11,9, 12,13,14,15),\\
&\pi_{32}=(0,1,2,3,6,4,9,7,13,5,20,8,14,11,18,15,16,10,23,19,24,12,26,17,25,21,27,22,28,29,30,31).
\end{align*}
\else
\begin{align*}
\pi_{16}=(&0, 1, 2, 3, 5, 4, 7, 6, 10,8, 11,9, 12,13,14,15),\\
\pi_{32}=(&0,1,2,3,6,4,9,7,13,5,20,8,14,11,18,15,\\&16,10,23,19,24,12,26,17,25,21,27,22,28,29,30,31).
\end{align*}
\fi
\normalsize
One can see that, unlike the case of kernel $\widetilde Q^{(n)}$, the row order in $\overline Q^{(n)}$ is very different from the original order in $Q^{(n)}$.
We found no formulae, similar to \eqref{eq:tp0}--\eqref{eq:tp2}, for obtaining the PB of $\overline Q^{(n)}$ from the GPB of $Q^{(n)}$, so we obtained the PB of $\overline Q^{(n)}$ for $n\leq 32$ by brute force.
One can see that the proposed row permutation leads to a smaller scaling exponent, comparable to the best known one \cite{yao2019explicit}.
For all studied cases, $E(Q^{(n)})=E(\widetilde Q^{(n)})=E(\overline Q^{(n)})$.
In Table~\ref{t:pr} one can see the polarization rate of large CvPKs, obtained by a simplified procedure \cite{morozov2019simplified}.
We also provide a lower bound $\underline E(B^{(n)})$ on the polarization rate of BCH kernels $B^{(n)}$, where the partial distances are lower-bounded by the constructive distances of extended BCH codes generated by the bottom rows of $B^{(n)}$.
What is counter-intuitive is that $\mu(Q^{(512)})>\mu(Q^{(256)})$, $\mu(Q^{(1024)})>\mu(Q^{(512)})$ and $E(Q^{(16384)})<E(Q^{(8192)})$.
Intuitively, for kernels with the same structure, the larger the kernel, the better its polarization properties.
Although results for scaling exponent may be imprecise due to numerical errors, computing polarization rate is simple and numerically stable.
On the other hand, if the scaling exponent of $Q^{(n)}$ tended to $2$ with $n\to\infty$, that would mean existence of codes of lengths $N=n^M$, which achieve optimal scaling exponent with decoding complexity $O(N\log N)$.
This sounds too good to be true.
The polarization rate in \cite{ferris2013branching} was heuristically estimated to be around $0.62$, although no rigorous proof of channel polarization was provided.
In our scenario, channel polarization follows from the general proof for the case of large kernels, obtained in \cite{korada2010polar}, and we obtain a precise estimate of the polarization rate.
\subsection{Performance of Polar Codes with CvPK}
\ifonecol
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{fer-snr-cvpk-sc-1024-2.75db-sim.eps}
\caption{Performance of $(1024,512)$ polar codes with various CvPKs (solid) and other kernels (dashed) under SC decoding.}
\label{fig:cvpk1k}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{fer-snr-cvpk-sc-4096-2.25db-sim.eps}
\caption{Performance of $(4096,2048)$ polar codes with various CvPKs (solid) and other kernels (dashed) under SC decoding.}
\label{fig:cvpk4k}
\end{subfigure}
\caption{Performance of polar codes with CvPKs under SC decoding}
\label{fig:scres}
\end{figure}
\else
\begin{figure}
\includegraphics[width=0.5\textwidth]{fer-snr-cvpk-sc-1024-2.75db-sim.eps}
\caption{Performance of $(1024,512)$ polar codes with various CvPKs (solid) and other kernels (dashed) under SC decoding.}
\label{fig:cvpk1k}
\end{figure}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fer-snr-cvpk-sc-4096-2.25db-sim.eps}
\caption{Performance of $(4096,2048)$ polar codes with various CvPKs (solid) and other kernels (dashed) under SC decoding.}
\label{fig:cvpk4k}
\end{figure}
\fi
Fig.~\ref{fig:cvpk1k} presents the SC decoding performance of $(1024,512)$ codes corresponding to the polarizing transformations $F^{\otimes 10}$, $Q^{(32)\otimes 2}$, $Q^{(64)} \otimes \overline Q^{(16)}$, $Q^{(1024)}$, $K_{3}^{\otimes 2}$, $\overline Q^{(32)\otimes 2}$, listed in the same order as in the legend.
Kernel $K_{3}$ is from \cite{trofimiuk2018efficient}, $\mu(K_{3})=3.207$ and $E(K_{3})=0.52925$.
The design SNR is $E_b/N_0=2.75$ dB.
One can see that the polar code with the sorted $32\times 32$ CvPK $\overline Q^{(32)}$ outperforms the polar codes with the other kernels and the CvPC due to its lower scaling exponent, even though it does not have the highest polarization rate.
The polarizing transformation $Q^{(64)} \otimes \overline Q^{(16)}$ corresponds to a polar code with mixed kernels.
The definition of polar codes with mixed kernels can be obtained by replacing $K^{\otimes M}$ in \eqref{eq:pcdef} with $K_1\otimes ...\otimes K_M$.
Fig.~\ref{fig:cvpk4k} presents the SC decoding performance of $(4096,2048)$ codes with polarizing transformations $F^{\otimes 12}$, $K_{2}^{\otimes 3}$, $Q^{(4096)}$ (dashed),
and $Q^{(16)\otimes 3}$, $\overline Q^{(16)\otimes 3}$, $Q^{(64)\otimes 2}$, $Q^{(128)}\otimes \overline Q^{(32)}$ (solid).
Kernel $K_{2}$ is from \cite{trofimiuk2019reduced}, $\mu(K_{2})=3.346$ and $E(K_{2})=0.51828$.
The design SNR is $E_b/N_0=2.25$~dB.
One can see that the polar code with $Q^{(128)}\otimes \overline Q^{(32)}$ has the best performance.
Polar codes with the Arikan kernel were constructed using Gaussian approximation \cite{trifonov2012efficient}; the other codes were constructed using Monte-Carlo simulations.
For kernels $K_2$ and $K_3$, kernel processing is defined in \cite{trofimiuk2018efficient, trofimiuk2019reduced}.
Efficient processing of $\overline Q^{(n)}$ is done by the general trellis-based algorithm \cite{trifonov2019trellis}.
For $Q^{(n)}$ the kernel processor is the SC decoder from \cite{morozov2018efficient}.
Note that for CvPK $Q^{(n)}$ the complexity of kernel processing is $O(n\log n)$, in contrast with an arbitrary kernel of size $n$, where, in general, the complexity is $O(2^n)$.
Observe also that processing of kernel $\widetilde Q^{(n)}$ can also be done by the SC decoder for the CvPC, with adjacent phases swapped on layer $m$.
The complexity of SC decoding for $(1024,512)$ codes from Fig.~\ref{fig:cvpk1k} is presented in Table~\ref{t:compl}, together with the SC decoding frame error probability
(FER) at $E_b/N_0=3$ dB.
Note that the decoding complexity increases monotonically as the error probability decreases.
This confirms that CvPKs are competitive compared to other polarization kernels.
Regarding distance properties of the obtained $(1024,512)$ polar codes, all codes have the same minimum distance of $16$,
so we also present the error coefficient, i.e., the number of codewords of weight $16$.
One can see a non-monotonic dependence of the FER on the error coefficient, since SC decoding is not near-ML decoding.
\begin{table}
\footnotesize
\caption{SC decoding complexity of $(1024,512)$ polar codes, and the approximate number of minimum-weight codewords, found by the method of \cite{canteaut98new}. In all cases $d=16$.}
\label{t:compl}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Polar. transform & Compl. & FER at 3 dB & Err. coeff. &Decoder
\\ \hline
${\begin{pmatrix}10\\11\end{pmatrix}}^{\otimes 10}$ & $1.4\cdot 10^4$ & $1.6\cdot 10^{-3}$ &49344& \cite{arikan2009channel}
\\ \hline
$Q^{(32)}\otimes Q^{(32)}$ & $6.6\cdot 10^4$& $1.5\cdot 10^{-4}$ & 19648 & \cite{morozov2020efficient}
\\ \hline
$Q^{(64)}\otimes \overline Q^{(16)}$ & $8.4\cdot 10^4$ &$1.4\cdot 10^{-4}$ & 18624 & \cite{morozov2020efficient,trifonov2019trellis}
\\ \hline
$Q^{(1024)}$ & $2.4\cdot 10^5$ & $5.3\cdot 10^{-5}$ & 2240 & \cite{morozov2020efficient}
\\ \hline
$K_3\otimes K_3$ & $4.4\cdot 10^5$ & $3.3\cdot 10^{-5}$ & 1984 & \cite{trofimiuk2018efficient}
\\ \hline
$\overline Q^{(32)}\otimes \overline Q^{(32)}$ &$1.1\cdot 10^6$ & $9.0\cdot10^{-6}$ & 4288 & \cite{trifonov2019trellis}
\\\hline
\end{tabular}
\end{table}
\section{Conclusions}
In this paper, a family of convolutional polar kernels (CvPKs) of size $n=2^m$ was proposed together with the polynomial-complexity algorithm for computing polarization behaviour, scaling exponent and polarization rate. The kernels are based on convolutional polar codes. The proposed algorithm enables one to study
polarization properties of CvPKs of size up to $1024\times 1024$.
The polarization properties of convolutional polar kernels degrade starting from a sufficiently large size.
A row permutation operation was suggested that can improve the scaling exponent of a CvPK.
The proposed family of kernels allows kernel processing with complexity $O(n \log n)$ as the kernel size $n$ tends to infinity.
\appendices
\section{Proof of Theorem 1 \label{a:t1}}
Let us prove the theorem for the case of $\varphi=2\psi+1$, corresponding to \eqref{eq:t1vp1}.
If the receiver knows $u_0^{2\psi}$, then it knows $x_0^{\psi-1}$ and $z_0^{\psi-1}$ by \eqref{eq:xz}.
Denote the stripped kernels, without the rows corresponding to known (already estimated) symbols and without the columns corresponding to erased symbols, by
$\hat Q=Q^{(n)}_{\overline{[\varphi]},\overline{\mathcal{E}}}, \;\hat Q'=Q^{(n/2)}_{\overline{[\psi]},\overline{\mathcal{E}'}}, \;\hat Q''=Q^{(n/2)}_{\overline{[\psi]},\overline{\mathcal{E}''}}$.
Denote $k=n-\varphi$, $k'=\frac{n}{2}-\psi$,
and $\overline w=n-|\mathcal{E}|$, $\overline w'=n/2-|\mathcal{E}'|$, $\overline w''=n/2-|\mathcal{E}''|$.
Then, the size of $\hat Q$ is $k\times\overline w$, and the sizes of $\hat Q'$ and $\hat Q''$ are $k'\times \overline w'$ and $k'\times \overline w''$, respectively.
Denote the transition matrices $X^{(n)}$ and $Z^{(n)}$ without the rows and columns corresponding to known symbols by
$
\hat X=X^{(n)}_{\overline{[\varphi]},\overline{[\psi]}},\; \hat Z=Z^{(n)}_{\overline{[\varphi]},\overline{[\psi]}}.
$
The sizes of $\hat X$ and $\hat Z$ are $k\times k'$.
Using above notations, one obtains
$
\hat Q=(\hat X\hat Q',\hat Z \hat Q'').
$
The theorem for the case of \eqref{eq:t1vp1} now can be reformulated as
$(p_0^2,\mathbf 0^{k-3})\in\cs\hat Q$ if and only if there exist $(p',\mathbf 0^{k'-3})\in\cs\hat Q'$ and $(p'',\mathbf 0^{k'-3})\in\cs\hat Q''$ such that $(p_0^2,\mathbf 0^{2})=p'A+p''B$.
Observe that $(p,\mathbf 0^{k-3})\in\cs\hat Q$ iff there exists $q$, s.t.
\begin{align}
(p,\mathbf 0^{k-3})=\hat Qq^T\!=\!(\hat X \hat Q', \hat Z \hat Q'')q^T=\hat X \hat Q'q'^T+ \hat Z \hat Q''q''^T,
\label{eq:vq}
\end{align}
where $q=(q',q'')$.
Denote $a=\hat Q'q'^T$, $b=\hat Q''q''^T$.
Note that $a\in\cs\hat Q'$ and $b\in\cs\hat Q''$.
Thus, such $q$ in \eqref{eq:vq} exists iff
\begin{align}
\exists a\in\cs\hat Q', b\in\cs\hat Q'':
(p,\mathbf 0^{k-3})^T=\hat X a^T+ \hat Z b^T.
\label{eq:t1ab}
\end{align}
The r.h.s. of \eqref{eq:t1ab} is $a_i+b_i$ in the $2i$-th equation and $a_i+a_{i+1}+b_i$ in the $(2i+1)$-th equation.
The first five equations of \eqref{eq:t1ab} are
{\allowdisplaybreaks
\ifonecol
\begin{align}
a_0+b_0=p_0,\;
a_0+a_1+b_0=p_1,\;
a_1+b_1=p_2, \;
a_1+a_2+b_1=0, \;
a_2+b_2=0 \label{eq:t1p0}
\end{align}
\else
\begin{align}
a_0+b_0=p_0,\;
a_0+a_1+b_0&=p_1,\;
a_1+b_1=p_2,\nonumber\\
a_1+a_2+b_1&=0, \;
a_2+b_2=0. \label{eq:t1p0}
\end{align}
\fi
Then, there are $k-5$ equations of the form
\begin{align*}
a_2+a_3+b_2=0 & \iff a_3=0 \text{ (since } a_2+b_2=0\text{)}\\
a_3+b_3=0 & \iff b_3=0 \text{ (since } a_3=0\text{)}\\
a_3+a_4+b_3=0 & \iff a_4=0 \text{ (since } a_3+b_3=0\text{)}
\end{align*}
}and so on. Thus,
$
a_3^{k'-1}=b_3^{k'-1}=\mathbf 0.
$
Since $a\in\cs\hat Q', b\in\cs\hat Q''$, by Def.~\ref{d:chi} the last $k-5$ equations are equivalent to $a_0^2\in\chi_\psi(\mathcal{E}')$,
$b_0^2\in\chi_\psi(\mathcal{E}'')$ for kernel $Q^{(n/2)}$.
Combining this with \eqref{eq:t1p0}, one can prove the theorem, since \eqref{eq:t1} with \eqref{eq:t1vp1} are precisely \eqref{eq:t1p0}, written in matrix form for $p'=a$ and $p''=b$.
The other cases of $\varphi$ can be proved similarly.
\section{Proof of Theorem 2}
\label{a:t2}
First, fix some $\mathcal{E}\subseteq[n]$.
Let $\chi_\psi(\mathcal{E}')=\mathcal{S}'$ and $\chi_\psi(\mathcal{E}'')=\mathcal{S}''$.
By Theorem~\ref{t1}, $p_0^2\in\chi_\varphi(\mathcal{E})\iff \exists p'\in\chi_\psi(\mathcal{E}'), p''\in\chi_\psi(\mathcal{E}'')$, such that $(p_0^2,\mathbf 0^{J_\varphi})\!=\!p'A_\varphi\!+\!p''B_\varphi$.
Substituting $\mathcal{S}'=\chi_\psi(\mathcal{E}')$ and $\mathcal{S}''=\chi_\psi(\mathcal{E}'')$ one obtains precisely the conditional part of \eqref{eq:tdef}.
Thus,
$
\chi_\varphi(\mathcal{E})=\mathbf{T}_\varphi(\chi_\psi(\mathcal{E}'),\chi_\psi(\mathcal{E}'')).
$
Using Definition~\ref{d:chi}, rewrite \eqref{eq:gpbdef} as
\begin{align}
P^{(\varphi, \mathcal{S})}(x)=\sum_{\mathcal{E}\in\chi^{-1}_\varphi(\mathcal{S})}x^{|\mathcal{E}|}.
\label{eq:gpbsum}
\end{align}
Observe that $\mathcal{E}\in\chi^{-1}_\varphi(\mathcal{S})$ iff $\exists (\mathcal{S}', \mathcal{S}'')\in\mathbf{T}_\varphi^{-1}(\mathcal{S})$, such that $\mathcal{E}'\in\chi^{-1}_\psi(\mathcal{S}')$ and $\mathcal{E}''\in\chi^{-1}_\psi(\mathcal{S}'')$.
The erasure configuration $\mathcal{E}$ is bijectively defined by its ``halves'' $\mathcal{E}'$ and $\mathcal{E}''$, so we can replace the summation over $\chi^{-1}_\varphi(\mathcal{S})$ in \eqref{eq:gpbsum} by two independent summations over $\chi^{-1}_\psi(\mathcal{S}')$ and $\chi^{-1}_\psi(\mathcal{S}'')$.
Obviously, $|\mathcal{E}|\!=\!|\mathcal{E}'|\!+\!|\mathcal{E}''|$.
Thus,
\ifonecol
\begin{align*}
P^{(\varphi, \mathcal{S})}(x)&=
\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})} \sum_{\mathcal{E}'\in\chi^{-1}_\psi(\mathcal{S}')}\sum_{\mathcal{E}''\in\chi^{-1}_\psi(\mathcal{S}'')}x^{|\mathcal{E}'|+|\mathcal{E}''|}=\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})}\!\!\left(\sum_{\mathcal{E}'\in\chi^{-1}_\psi(\mathcal{S}')}x^{|\mathcal{E}'|}\right)\!\cdot\!\left(\sum_{\mathcal{E}''\in\chi^{-1}_\psi(\mathcal{S}'')}x^{|\mathcal{E}''|}\right)\\
&=\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})}R^{(\psi,\mathcal{S}')}(x)\cdot R^{(\psi,\mathcal{S}'')}(x).
\end{align*}
\else
\begin{align*}
&P^{(\varphi, \mathcal{S})}(x)=
\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})} \sum_{\mathcal{E}'\in\chi^{-1}_\psi(\mathcal{S}')}\sum_{\mathcal{E}''\in\chi^{-1}_\psi(\mathcal{S}'')}x^{|\mathcal{E}'|+|\mathcal{E}''|}\\
&=\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})}\!\!\left(\sum_{\mathcal{E}'\in\chi^{-1}_\psi(\mathcal{S}')}x^{|\mathcal{E}'|}\right)\!\cdot\!\left(\sum_{\mathcal{E}''\in\chi^{-1}_\psi(\mathcal{S}'')}x^{|\mathcal{E}''|}\right)\\
&=\!\!\sum_{(\mathcal{S}',\mathcal{S}'')\in\mathbf{T}^{-1}_\varphi(\mathcal{S})}R^{(\psi,\mathcal{S}')}(x)\cdot R^{(\psi,\mathcal{S}'')}(x).
\end{align*}
\fi
\section{On Decoding of CvPC with Matrix $\widetilde Q^{(n)}$}
\label{s:tq}
The decoder for convolutional polar codes with matrix $Q^{(n)}$ (e.g. \cite{morozov2018efficient}) computes at each phase
$\varphi$ the vector log-likelihood
\begin{align}
L_\varphi[a,b,c]=\ln\!\!\!\!\max_{u_{\varphi+3}^{n-1}\in\mathbb{F}^{n-\varphi-3}}\!W^n\left((\hat u_0^{\varphi-1},a,b,c,u_{\varphi+3}^{n-1})Q^{(n)}|y\right).
\label{eq:lvp}
\end{align}
The output LLR for symbol $u_\varphi$, needed for hard decision, is defined as
\begin{align}
S_\varphi=\ln\frac{\max_{u_{\varphi+1}^{n-1}} W^n\left((\hat u_0^{\varphi-1},0,u_{\varphi+1}^{n-1})Q^{(n)}|y\right)}
{\max_{u_{\varphi+1}^{n-1}} W^n\left((\hat u_0^{\varphi-1},1,u_{\varphi+1}^{n-1})Q^{(n)}|y\right)},
\label{eq:svp}
\end{align}
and can be computed by marginalization
\begin{align*}
S_\varphi=\max_{b,c}L_\varphi[0,b,c]-\max_{b,c}L_\varphi[1,b,c].
\label{eq:marg}
\end{align*}
Matrix $\widetilde Q^{(n)}$ is obtained from matrix $Q^{(n)}$ by swapping some pairs of adjacent rows $(2i,2i+1)$.
Formally,
\begin{align*}
\widetilde Q_{2i,j}=\begin{cases}
Q_{2i,j} &i\notin\mathcal{J}\\
Q_{2i+1,j} &i\in\mathcal{J}
\end{cases},
\;\;\;\;
\widetilde Q_{2i+1,j}=\begin{cases}
Q_{2i+1,j} &i\notin\mathcal{J}\\
Q_{2i,j} &i\in\mathcal{J}
\end{cases}
\end{align*}
where we denote by $\mathcal{J}\subset[n/2]$ the set of all $i$, for which rows $2i$ and $2i+1$ are swapped in $\widetilde Q^{(n)}$.
In \eqref{eq:svp}, replace $Q^{(n)}$ with $\widetilde Q^{(n)}$ and denote corresponding LLR by $\widetilde{S}_\varphi$.
Then, $S_{2i}=\widetilde{S}_{2i}$ and $S_{2i+1}=\widetilde{S}_{2i+1}$ for $i\notin\mathcal{J}$.
For $i\in\mathcal{J}$, the values of $\widetilde{S}_{2i}$ and $\widetilde{S}_{2i+1}$ can also be obtained from the vector log-likelihoods \eqref{eq:lvp}, with
the only change in marginalization:
\begin{align*}
\widetilde{S}_{2i}&=\max_{a,c}L_{2i}[a,0,c]-\max_{a,c}L_{2i}[a,1,c]\\
\widetilde{S}_{2i+1}&=\max_{c}L_{2i}[0,\hat u_{2i},c]-\max_{c}L_{2i}[1,\hat u_{2i},c]
\end{align*}
So, the only difference between decoding with $Q^{(n)}$ and decoding with $\widetilde Q^{(n)}$ is in final marginalization when converting
vector log-likelihood to the output LLR.
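For illustration, the following Python sketch computes the output LLR from the $2\times2\times2$ array of vector log-likelihoods; the flag \texttt{swapped} marks a phase belonging to a swapped pair ($i\in\mathcal{J}$), in which case both phases $2i$ and $2i+1$ marginalize the same array $L_{2i}$. The interface is our own assumption.
\begin{verbatim}
def output_llr(L, phase, u_hat, swapped):
    # L: 2x2x2 NumPy array of vector log-likelihoods L_{2i}[a, b, c]
    if not swapped:                        # standard marginalization
        return L[0].max() - L[1].max()
    if phase % 2 == 0:                     # swapped pair, phase 2i
        return L[:, 0, :].max() - L[:, 1, :].max()
    # swapped pair, phase 2i+1: b is fixed to the hard decision u_hat
    return L[0, u_hat, :].max() - L[1, u_hat, :].max()
\end{verbatim}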
\bibliographystyle{IEEEtran}
\section{Introduction}
A Hadamard matrix of order $m$ is a square matrix $[h(i, j)]$ with
entries $h(i, j) = \pm 1$, $1 \leq i, j \leq m$, whose row
vectors are pairwise orthogonal. A Hadamard matrix must have order
$1$, $2$ or a multiple of $4$, but no other restrictions on the
order of a Hadamard matrix are known, and the century-old Hadamard
Conjecture proposes that a Hadamard matrix exists for every $m
\equiv 0 \pmod 4$.
About 20 years ago, the use of cocycles and cocyclic matrices was
introduced by Horadam and de Launey \cite{HdL95} as a structural
approach to resolving the Hadamard Conjecture. Its advantages led to the
cocyclic Hadamard conjecture: that a cocyclic Hadamard matrix exists for every $m
\equiv 0 \pmod 4$. The study and use of cocyclic matrices has expanded substantially since then, to include generalised Hadamard matrices \cite{Hor07,Hor10} and pairwise combinatorial designs \cite{deLF11}.
If $G$ is a group
and $C$ is an abelian group, a ($2$-dimensional, normalized)
cocycle $\psi$ is a mapping $\psi : G \times G \to C$ satisfying
$\psi (1, 1) = \psi (g, 1) = \psi (1, g) = 1, ~ g \in G$ and the
cocycle equation:
\begin{equation}
\psi (g, h) ~\psi (gh, k) ~=~ \psi (g, hk)~ \psi(h, k), ~~g, h, k \in G.
\end{equation}
The set of cocycles from $G$ to $C$ forms an abelian group $Z^2(G,
C)$ under pointwise multiplication. The simplest cocycles are the coboundaries
$\partial f$, defined for any function $f : G \rightarrow C$ by
$\partial f(g, h) = f(g)^{-1} f(h)^{-1} f(gh)$.
A cocycle may be represented by its matrix of values
\begin{equation}\label{coc}
M_\psi = [ \psi (g,h) ]_{g,h \in G}
\end{equation}
once an indexing of the elements of $G$ has been chosen.
We set $C = \{ \pm 1 \} \cong {\mathbb Z}_2$ when searching for cocyclic
Hadamard matrices. A cocycle $\psi$ for which the cocyclic matrix $M_\psi$ is Hadamard is termed {\it orthogonal}. It is computationally easy to check whether $M_\psi$ is
a Hadamard matrix, as we only need to check whether the dot
product of the first row with each other row is $0$. This
computational cutdown is one motivation for using cocyclic matrices.
Most of the known constructions of Hadamard matrix families are cocyclic \cite[Ch. 6]{Hor07}.
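For example, the reduced check just described can be carried out as in the following Python sketch, where the cocyclic matrix is assumed to be stored as a NumPy array of $\pm 1$ entries; the helper is our own illustration.
\begin{verbatim}
import numpy as np

def is_cocyclic_hadamard(M):
    # for a cocyclic matrix, orthogonality of the first row with each
    # other row already suffices for M to be Hadamard
    first = M[0]
    return all(int(first @ row) == 0 for row in M[1:])
\end{verbatim}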
Computationally, the most prolific indexing groups $G$ for
producing cocyclic Hadamard matrices appear to be the abelian
groups ${\mathbb Z} _t \times {\mathbb Z}_2^2$ and the dihedral groups $D_{4t}$,
where we may assume $t$ is odd. The $D_{4t}$ family, related to the
Ito type Hadamard matrices, has been investigated
by many researchers including the authors (see \cite{Hor07}).
The ${\mathbb Z} _t \times {\mathbb Z}_2^2$ family, related to the Williamson type Hadamard matrices,
has also been investigated by the authors \cite{AGG11a,BH95}, and while exhaustive search often finds fewer Hadamard matrices in each order than for $D_{4t}$, abelian-ness makes the family computationally more tractable.
In parallel with the search for examples of Hadamard matrices in new orders, whether cocyclic or not, has been the attempt to classify them into equivalence classes. Hadamard equivalence of a $\{\pm 1\}$ matrix involves only permutation of rows or columns, and multiplication of a row or column by $-1$. While the transpose of a Hadamard matrix is a Hadamard matrix, transposition is not a Hadamard equivalence. The total number of Hadamard equivalence classes in small orders grows so rapidly that Orrick \cite{Orr08} uses a coarser $Q$-equivalence relation on Hadamard matrices which allows extra ``switching" operations and leads to a dramatic reduction in the number of classes.
The total number of equivalence classes of cocyclic Hadamard matrices over all index groups $G$ is studied by \'{O} Cath\'{a}in and R\"{o}der \cite{OCR11} and calculated up to $m = 36$. An allied but distinct approach has been to identify equivalences of cocycles that preserve orthogonality. For the ${\mathbb Z} _t \times {\mathbb Z}_2^2$ family, two different types of equivalence of cocycles, both of which preserve orthogonality, have been discovered independently.
The first of these is defined (see \cite{Hor07}) for any $G$ and $C$ by all compositions of a ``shift" action and two ``automorphism" actions. (If $C = \{ \pm 1 \}$ one of the automorphism actions is
trivial.) The resulting equivalence classes, called {\em bundles}, are already known by other names in different contexts. For example, if $f$ is a cryptographic function and $\psi = \partial f$ is a coboundary, the bundle corresponds to the Extended Affine (EA) equivalence class of $f$. Shift action is also studied separately, for applications to the search for self-dual codes \cite{Rao05} and, via shift representations, to classification of pairwise combinatorial designs \cite{FlEg2014}.
The second of these equivalences, independently introduced in \cite{AGG11a}, is specific to cocycles $\psi$ in $Z^2 := Z^2({\mathbb Z} _t \times {\mathbb Z}_2^2, \{ \pm 1 \})$ and arises from detailed investigation of a generating set of cocycles for $Z^2$. Corresponding to the decomposition of $\psi$ as a product of generators there is a Hadamard product decomposition of $M_\psi$ into generator matrices. Geometric actions on these generator matrices lead to a concise diagrammatic representation of cocycles and geometric equivalences which is very useful for effective computation.
This paper relates and reconciles the two types of equivalence.
The paper is organized as follows. Section \ref{sec:background} describes the two types of equivalence. The group acting on cocycles is determined for each type; the two groups are not isomorphic. Section \ref{sec:results} gives our main results,
Theorems \ref{thm:translate} and \ref{thm:main}, translating shift
action and the remaining automorphism action into diagram actions, relating the two groups of actions, and showing that the diagram action termed ``complement" has no algebraic analogue. In Section \ref{sec:complement} this diagram action is shown to be the transposing operation on $M_\psi$. We summarise and suggest further work.
\section{Background} \label{sec:background}
From now on we assume $C = \{ \pm 1 \}$, $G \cong {\mathbb Z} _t \times {\mathbb Z}_2^2$ with $t>1$
odd, and $\psi \in Z^2$. Denote the group of units of the ring ${\mathbb Z}_t$ by ${\mathbb Z}_t^*$. Let $G$ have presentation $$G = \langle x,u,v:\;
x^t=u^2=v^2=1,xu=ux,xv=vx,uv=vu\rangle ,$$ and ordering
$$(x^i,1)<(x^i,u)<(x^i,v)<(x^i,uv), \, 0 \leq i < t, \;
(x^i,uv) < (x^{i+1},1), \, 0 \leq i <t-1\,.$$
We describe an orthogonality-preserving algebraic action on $\psi$ in the first subsection and an orthogonality-preserving geometric action on $\psi$ in the second.
\subsection{Bundle action on cocycles}
For any $a \in G$, the {\it shift} $\psi \cdot a$ of $\psi$ is the cocycle $(\psi \cdot a)(g, h) = \psi (ag, h) \psi(a, h)^{-1}.$ It is orthogonal if $\psi$ is orthogonal. For any automorphism $\theta \in {\rm Aut}(G)$, the cocycle $\psi \circ (\theta \times \theta)$ is orthogonal if $\psi$ is. When the two actions are combined, the result is an action by the semidirect product $H = G \rtimes {\rm Aut}(G)$ called {\em bundle action} under which the orbit of $\psi$ is its {\em bundle}
\begin{equation} \label{eq:bundle} {\cal B}(\psi) = \{(\psi \cdot a)
\circ (\theta \times \theta): \; a \in G, \; \theta \in
\mbox{Aut}(G)\}.
\end{equation}
The group $H$ acting on $Z^2$ is $H = G \rtimes {\rm Aut}(G)$, where the
semidirect product is defined for $a,b \in G, ~\theta_1,\theta_2 \in {\rm
Aut}(G)$ by $a \theta_1 \circ b \theta _2 = a\theta_1^{-1}(b)\theta_1\theta_2$. See \cite[Ch. 8]{Hor07} for details. \\
The Hadamard equivalence operations on $M_\psi$
corresponding to shift and automorphism action can be easily described. $M_{\psi \cdot a}$
is Hadamard equivalent to $M_\psi $ by first permuting the rows of
$M_\psi$ with respect to the row index permutation $g \mapsto g' =
ag, ~g \in G$, obtaining $M' = [\psi (ag, h)]_{g,h\in G}$. The
first row of $M'$ is the $a^{th}$ row of $M_\psi$. Then obtain
$M_{\psi \cdot a}$ from $M'$ by multiplying every row of $M'$
point-wise by its first row, or, equivalently, by multiplying
every column of $M'$ by its first entry. $M_{\psi \circ (\theta
\times \theta)}$ is Hadamard equivalent to $M_\psi$ by permuting
rows and columns under $\theta$.
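As a sketch, the following Python function carries out these two steps for the shift action, assuming elements of $G$ are encoded as triples $(i,\epsilon_u,\epsilon_v)$ under the ordering fixed above; the encoding and names are our own.
\begin{verbatim}
import numpy as np

def idx(i, eu, ev):
    # position of x^i u^eu v^ev under (x^i,1)<(x^i,u)<(x^i,v)<(x^i,uv)
    return 4 * i + eu + 2 * ev

def shift_action(M, a, t):
    ai, au, av = a
    perm = np.empty(4 * t, dtype=int)
    for i in range(t):
        for eu in (0, 1):
            for ev in (0, 1):
                # row g of the permuted matrix is row a*g of M
                perm[idx(i, eu, ev)] = idx((ai + i) % t,
                                           (au + eu) % 2, (av + ev) % 2)
    Mp = M[perm]
    return Mp * Mp[0]    # multiply every column by its first entry
\end{verbatim}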
We complete this subsection by identifying the group $H = G \rtimes
{\rm Aut}(G)$ which partitions cocycles into bundles
(\ref{eq:bundle}).
\begin{theorem} \label{thm:bundle group}
The group $H$ defined by bundle action on $Z^2$ is $H \cong [{\mathbb Z}_t \rtimes {\mathbb Z}_t^*] \times [{\mathbb Z}_2^2 \rtimes S_3]$. Therefore the order of $H$ is $\,24\, t\, \phi(t)$,\,
where $\phi$ is the Euler function.
A generating set for $H$ is $\{x, u, v, h_r, r \in {\mathbb Z} _t^*, h_{23}, h_{243} \}$, where $x, u$ and $v$ are shift actions and $h_{23}: x \mapsto x, u \mapsto v, v \mapsto
u$; $h_{243}: x \mapsto x, u \mapsto uv, v \mapsto u$ and $h_{r}:
x \mapsto x^r, u \mapsto u, v \mapsto v$ are automorphism actions.
\end{theorem}
{\it Proof.} Since $t$ is odd, ${\rm Aut}({\mathbb Z} _t \times {\mathbb Z}_2^2) \cong {\rm Aut}({\mathbb Z}_t)
\times {\rm Aut}({\mathbb Z}_2^2) \cong {\mathbb Z} _t^* \times S_3$.
Under the identification $1 \leftrightarrow 1, u \leftrightarrow
2, v \leftrightarrow 3, uv \leftrightarrow 4$, ${\rm Aut}({\mathbb Z}_2^2)$
is the subgroup of $S_4$ which fixes $1$. Then
$\{{\rm Id}\} \times {\rm Aut}({\mathbb Z}_2^2)$ is generated by $h_{23}$
and $h_{243}$. Thus $H = [{\mathbb Z} _t \times {\mathbb Z}_2^2] \rtimes [{\mathbb Z} _t^*
\times S_3]$, with the listed generating set. Since $h_{23}(x) =
h_{243}(x) = x$, ${\mathbb Z}_t$ commutes with $S_3$ and since $h_r(u) =
u,\, h_r(v) = v$, ${\mathbb Z}_2^2$ commutes with ${\mathbb Z}_t^*$. Hence $H \cong
[{\mathbb Z}_t \rtimes {\mathbb Z}_t^*] \times [{\mathbb Z}_2^2 \rtimes S_3]$. \hfill
$\square$
\begin{remark}\label{nota1} In terms of the Coxeter presentation of $S_n$, if $\sigma _i$ denotes the transposition $(i\; i+1)$, $S_n = \langle \sigma _i: \; \sigma _i^2=(\sigma _i \sigma _{i+1})^3=1, 1 \leq i \leq n-1 \rangle$ and $S_4=\langle \sigma _1, \sigma_2, \sigma_3: \; \sigma _i^2=(\sigma _i \sigma _{i+1})^3=1, 1 \leq i \leq 3 \rangle > \langle \sigma_2,\sigma_3 \rangle \cong S_3$, so that in Theorem \ref{thm:bundle group}, $h_{23}=\sigma_2$ and $h_{243}=\sigma_3\sigma_2$. \end{remark}
\subsection{Geometric action on cocycle diagrams}
The group of cocycles $Z^2$ has a generating set ${\cal Z} = \{\partial _1, \ldots,
\partial _{4t}, \beta_1, \beta_2, \kappa\}$ consisting of $4t$
coboundaries $\partial _i := \partial \delta_i$, where $\delta_i$ is the Kronecker delta function of the $i^{th}$-element in $G$ in the given ordering, and three representative cocycles
$\beta_1, \beta_2, \kappa$, all of which are explicitly described in \cite{AAFR08,AGG11a}. Every 2-cocycle over $G$ admits a (non-unique) representation as a product of the generators in
${\cal Z}$. The identity of $Z^2$ is the trivial cocycle ${\bf 1}$ for which $M_{\bf 1} = J_{4t}$ is the all-ones matrix. All orthogonal cocycles known so far (cf. \cite{BH95,AGG11a}) contain the factor $\rho ={\beta_1}{\beta_2}{\kappa}$, where
\begin{equation} \label{eq:M_rho}
M_\rho = J_t \otimes
\left(\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1 \\1&-1&-1&1\\1&1&-1&-1
\end{array}\right)\,,
\end{equation}
and $J_t$ denotes the $t \times t$ matrix all of 1s. It is conjectured this must always be true \cite[Research Problem 37]{Hor07}. For the remainder of the paper, we assume that we work with cocycles of this type. That is, $\psi =
{\partial_1}^{\epsilon_1} \ldots
{\partial_{4t}}^{\epsilon_{4t}}\, \rho$, $\epsilon_i \in \{0, 1\}$.
In \cite{AGG11a}, a more concise notation to describe $\psi = {\partial_{d_1}} \ldots {\partial_{d_k}}\, \rho$ is introduced, which allows one to determine if $\psi$ is orthogonal much
more easily. Partition the set $\{d_1,
\ldots, d_k\}$ according to the equivalence classes modulo 4, in the
class order $2, 3, 0, 1$ and in descending order within each
class. We will denote this ordered set of coboundaries
\begin{equation}
\{ {\bf {c_2,c_3,c_4,c_1}} \} = \{ \{d_{2+4 j_2}\},\{d_{3+4 j_3}\},
\{d_{4 j_4}\}, \{d_{1+4 j_1}\} \}.
\end{equation}
For example, for $t = 7$, the cocycle $\; \psi = {\partial_{4}} {\partial_{6}}
{\partial_{9}}{\partial_{10}}{\partial_{11}}{\partial_{12}}
{\partial_{14}}{\partial_{20}}{\partial_{21}}{\partial_{25}}\,\rho\;$
is orthogonal, and is represented as
\begin{equation} \label{ex:t=7}
\{ {\bf {c_2,c_3,c_4,c_1}} \} = \{ \{14, 10, 6\}, \{11\}, \{20, 12, 4 \}, \{25, 21, 9 \} \}.
\end{equation}
Alternatively, we can write all the integers $1, \dots , 4t$, by
equivalence classes modulo 4, in descending order, as the rows of a $4 \times
t$ matrix (treated as a cylinder, i.e. left and right edges are
identified) and mark out only the entries occurring in $\{d_1, \ldots,
d_k\}$.
\begin{definition} \cite{AGG11a}
The {\em diagram} of $ \psi = {\partial_{d_1}} \ldots
{\partial_{d_k}} \, \rho$ is a $4 \times t$ matrix A, such that
$a_{ij} = \times$ if $ 4t - 4(j-1)-3 + i \;mod\; 4t \in \{ d_1,
\ldots , d_k \}$ and $a_{ij} = -$ elsewhere.
\end{definition}
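The alternative descending-order layout can be computed directly, as in the following Python sketch (an illustration of ours); applied to the data of (\ref{ex:t=7}) it reproduces the diagram displayed below.
\begin{verbatim}
def diagram(D, t):
    # rows hold the classes 2, 3, 0, 1 modulo 4, each written in
    # descending order, as in the cylinder layout described above
    starts = (4 * t - 2, 4 * t - 1, 4 * t, 4 * t - 3)
    return [['x' if s - 4 * j in D else '-' for j in range(t)]
            for s in starts]

# example: diagram({4, 6, 9, 10, 11, 12, 14, 20, 21, 25}, 7)
\end{verbatim}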
The diagram for the example in (\ref{ex:t=7}) above is
\begin{equation} \label{eq:diagram ex}
A = \left|
\begin{array}{ccccccc}
- & - & - & \times & \times & \times & - \\
- & - & - & - & \times & - & - \\
- & - & \times & - & \times & - & \times \\
\times & \times & - & - & \times & - & -\\
\end{array}
\right|
\end{equation}
We now list the four types of orthogonality-preserving operations on $\psi$ described in
\cite{AGG11a}. We adopt the notation $[m]_n$ for $m \;mod\; n$ for brevity.
\begin{definition}
Let $\{ {\bf {c_2,c_3,c_4,c_1}} \}$ be a set of coboundaries. Denote the columns of its diagram $A$ by $ ({\cal{C}}_{t-1},\cdots ,{\cal{C}}_0)$. Let $ {\bf {c_j + k }}$ denote the set of coboundaries obtained by adding $k$ to each element of ${\bf {c_j}}$ modulo $4t$.
\begin{enumerate}
\item The {\em complement} $ {\textsc{C}_2} (\{ {\bf {c_2},\bf{c_3,c_4,c_1}} \} )$ of this set is the set $\{ {\bf {\overline{c_2},c_3,c_4,c_1}} \} $
where ${\bf {\overline{c_2}}}$ is the complement of ${\bf {c_2}}$ in the equivalence class 2 modulo 4.
\item Six elementary {\em swapping} operations are possible on this set: $s_{12}, s_{13}, s_{14}$ (see \cite{AGG11a}) and
\begin{itemize}
\item $s_{23} ( \{ {\bf {c_2,c_3,c_4,c_1}} \} ) = \{ {\bf {c_3 -1,c_2+1,c_4,c_1}} \}. $
\item $s_{24} ( \{ {\bf {c_2,c_3,c_4,c_1}} \} ) = \{ {\bf {c_4-2,c_3,c_2+2,c_1}} \}. $
\item $s_{34} ( \{ {\bf {c_2,c_3,c_4,c_1}} \} ) = \{ {\bf {c_2,c_4-1,c_3+1,c_1}} \}. $
\end{itemize}
\item The
{\em $i$-rotation} $\textsc{T}_i ( \{
{\bf {c_2,c_3,c_4,c_1}} \})$, $0 \leq i \leq t-1$, of this set is the set
$$\{ {\bf c_2 -4i, c_3 -4i, c_4 -4i, c_1 -4i} \}. $$
\item The {\em $r$-th dilatation} $\textsc{V}_r (\{ {\bf {c_2,c_3,c_4,c_1}} \})$, for $r \in {\mathbb Z}_t^*$, is the set with diagram $\textsc{V}_r(A)$, where
$\textsc{V}_r({\cal C}_{j}) = {\cal C}_{[jr]_t}, ~ 0 \leq j \leq t-1$.
\end{enumerate}
\end{definition}
Clearly the order of $\textsc{C}_2$ is 2 and $\langle \textsc{C}_2 \rangle \cong {\mathbb Z}_2$. The swappings each have order 2 and generate a group $\cong S_4$ which, in terms of a Coxeter presentation (Remark \ref{nota1}), is generated by $\sigma_1 = s_{23}, ~\sigma_2 = s_{34}$ and $\sigma_3 = s_{14}$. The rotations are generated by $\textsc{T}_1$ so $\langle \textsc{T}_1 \rangle \cong {\mathbb Z}_t$; and $\langle \textsc{V}_r, r \in {\mathbb Z}_t^* \rangle \cong {\mathbb Z}_t^*$.
In terms of diagrams, $\textsc{C}_2$ complements the first row of $A$; $s_{ij}$ swaps rows corresponding to ${\bf c_i}$ and ${\bf c_j}$; $\textsc{T}_i$ cyclically shifts
columns $i$ places to the right; and $\textsc{V}_r$ permutes columns according to multiplication of column index by the invertible element $r$ (so ${\cal C}_0$ is always fixed).
For instance, if $A$ is the diagram in (\ref{eq:diagram ex}),
{\small
$$
\begin{array}{ll}
{\textsc{C}_2}(A) =
\left|
\begin{array}{ccccccc}
\times & \times & \times & - & - & - & \times \\
-& - & - & - & \times & - & - \\
-& - & \times & - & \times & - & \times \\
\times & \times & - & - & \times & - & - \\
\end{array}
\right|,
&
\textsc{T}_2(A) = \left|
\begin{array}{ccccccc}
\times & - & - & - & - & \times & \times \\
-& - & - & - & - & - & \times\\
- & \times & - & - & \times & - & \times \\
- & - & \times & \times & - & - & \times \\
\end{array}
\right| , \\
& \\
s_{23}(A) = \left|
\begin{array}{ccccccc}
-& - & - & - & \times & - & - \\
-& - & - & \times & \times & \times & - \\
-& - & \times & - & \times & - & \times \\
\times & \times & - & - & \times & - & - \\
\end{array} \right| ,
&
\textsc{V}_2(A) = \left|
\begin{array}{ccccccc}
\times & - & \times & - & \times & - & -\\
-& - & \times & - & - & - & - \\
- & - & \times & - & - & \times & \times \\
- & \times & \times & \times & - & - & - \\
\end{array}
\right| .
\end{array}
$$
}
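For computational experiments it is convenient to encode a diagram as a $4\times t$ NumPy array of zeros and ones (a $\times$ becomes $1$). Under this encoding, which is our own convention, the four operations read as in the Python sketch below; applied to the diagram $A$ above, they reproduce the four matrices just displayed.
\begin{verbatim}
import numpy as np

def complement(A):
    B = A.copy(); B[0] = 1 - B[0]        # complement the first row (c_2)
    return B

def rotate(A, i):
    return np.roll(A, i, axis=1)         # shift columns i places right (T_i)

def swap(A, r, s):
    B = A.copy(); B[[r, s]] = B[[s, r]]  # rows 0,1,2,3 hold c2,c3,c4,c1
    return B

def dilatation(A, r, t):
    # array column a holds C_{t-1-a}; V_r sends C_j to C_{[jr]_t}
    B = np.empty_like(A)
    for j in range(t):
        B[:, t - 1 - (j * r) % t] = A[:, t - 1 - j]
    return B
\end{verbatim}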
It is possible to identify the action of $\textsc{C}_2$ on coboundaries directly.
\begin{lemma} \label{lem:C-action}
$ \textsc{C}_2( {\partial_{d_1}} \ldots {\partial_{d_k}}) = {\partial_{d_1}} \ldots {\partial_{d_k}}~\prod _{i=0}^{t-1}\,{\partial_{2+4i}}.$
\end{lemma}
{\it Proof.}
If $\psi = {\bf 1}$ is the trivial cocycle in $Z^2$, with $M_{\bf 1} = J_{4t}$ the all-ones matrix, then it has $\{ {\bf {c_2,c_3,c_4,c_1}} \} = \{ \emptyset, \emptyset, \emptyset, \emptyset\}$ so
$\textsc{C}_2({\bf 1})=\prod _{i=0}^{t-1}{\partial_{2+4i}}$. By simple inspection, it may be checked that
\begin{equation}\label{eq:C-action}
\textsc{C}_2(J_{4t})=\prod _{i=0}^{t-1}M_{\partial_{2+4i}}=J_t \otimes \left(\begin{array}{rrrr}1&1&1&1\\1&1&-1&-1 \\1&-1&1&-1\\1&-1&-1&1
\end{array}\right).
\end{equation}
The result follows immediately. \hfill $\square$
\
We complete this subsection by identifying the group $H'$ generated by the diagrammatic
operations above.
\begin{theorem} \label{thm:diagram group}
The group $H'$ defined by diagrammatic action on $Z^2$ is $H' \cong [{\mathbb Z}_t \rtimes {\mathbb Z}_t^*] \times S_4 \times {\mathbb Z}_2$. Therefore the order of $H'$ is $\,48 \, t\, \phi(t)$. A generating set for $H'$ is $\{ \textsc{T}_1,\, \textsc{V}_r, r \in {\mathbb Z}_t^*,~ s_{14}, s_{23}, s_{34},\, \textsc{C}_2 \}$.
\end{theorem}
{\it Proof.} It is shown in \cite{AGG11a} that the
complement and swapping operations commute with each other and
with rotations and dilatations, but that rotation and dilatation do not commute.
The composition $\textsc{V}_r^{-1} \textsc{T}_1 \textsc{V}_r$ acts on column $[j]_t$ of $A$ to give column $[(jr-1)r^{-1}]_t=[j-r^{-1}]_t$, so $\textsc{V}_r^{-1} \textsc{T}_1 \textsc{V}_r=\textsc{T}_{r^{-1}}$. Define a homomorphism $\mu : {\mathbb Z}_t^* \rightarrow \mbox{Aut}({\mathbb Z}_t)$ by $\mu (\textsc{V}_r)(\textsc{T}_1)=\textsc{T}_{r^{-1}}$. Consequently, $\langle \textsc{T}_1, \textsc{V}_r, r \in {\mathbb Z}_t^* \rangle \cong {\mathbb Z}_t \rtimes _\mu {\mathbb Z}_t^*$.
Swapping permutes rows while rotations and
dilatations permute columns, so swapping is not in the subgroup of
$H'$ generated by rotations and dilatations. All combinations of swapping,
rotation and dilatation preserve the total number of coboundaries
but complementation does not, so complementation is not in the
subgroup of $H'$ generated by rotations, swappings and
dilatations. \hfill $\square$
\section{Bundle actions as Diagram actions} \label{sec:results}
In this section we express the bundle actions on $Z^2$ in terms of the diagrammatic
operations and identify the role of the diagrammatic action $\textsc{C}_2$. Subsection \ref{ssec:proof} is devoted to proving the following
theorem.
\begin{theorem} \label{thm:translate}
\begin{enumerate}
\item The shift actions by $x, u$ and $v$, respectively, on $\psi$, are the diagrammatic actions
$\textsc{T}_1,\,s_{12} s_{34}$ and $s_{13} s_{24}$, respectively.
\item The automorphism actions by $h_r, h_{23}$ and $h_{243}$,
respectively, on $\psi$, are the diagrammatic actions $\textsc{V}_{r^{-1}}, \, \textsc{C}_2
s_{23}$ and $s_{234} := s_{23}s_{24}$, respectively.
\end{enumerate}
\end{theorem}
From Theorem \ref{thm:translate} we obtain our main result.
\begin{theorem} \label{thm:main}
Bundle action by $H$ on ${\mathbb Z}_t \times {\mathbb Z}_2^2$-cocyclic matrices
corresponds to diagrammatic action by the subgroup $$H^* = \langle
\textsc{T}_1,\,\textsc{V}_{r^{-1}},\,s_{12} s_{34},\,s_{13} s_{24},\,\textsc{C}_2
s_{23},\,s_{23}s_{24}\rangle \cong ({\mathbb Z}_t \rtimes {\mathbb Z}_t^*) \times
S_4$$ of index 2 in $H'$. The operation $\textsc{C}_2$ is not in $H^*$.
\end{theorem}
{\it Proof.} Define a homomorphism $\alpha : H \rightarrowtail H'$
by $x \mapsto \textsc{T}_1, h_r \mapsto \textsc{V}_{r^{-1}}, u \mapsto s_{12}
s_{34}, v \mapsto s_{13} s_{24}, h_{23} \mapsto \textsc{C}_2 s_{23}$ and $h_{243}
\mapsto s_{23}s_{24}$.
By Theorem
\ref{thm:diagram group} and Theorem \ref{thm:translate}, $\alpha(\langle x, h_r, r \in {\mathbb Z}_t^*\rangle) = \langle \textsc{T}_1, \textsc{V}_{r^{-1}}, r \in {\mathbb Z}_t^*\rangle \cong {\mathbb Z}_t \rtimes {\mathbb Z}_t^* $ is an isomorphism.
Let $CS_4$ be the subgroup of $H'$ isomorphic to $S_4$ which is
generated by the 6 order-2 elements $\textsc{C}_2 s_{ij}$ (i.e. compose
every transposition $s_{ij}$ with the complement $\textsc{C}_2$; they
commute so order doesn't matter). Products corresponding to even
permutations in $S_4$ will appear unchanged, while those
corresponding to odd permutations in $S_4$ will be multiplied by
$\textsc{C}_2$. Then, from Theorem \ref{thm:bundle group} and Theorem
\ref{thm:translate}, $\alpha({\mathbb Z}_2^2 \rtimes S_3)$ is generated by
$\textsc{C}_2 s_{12} \textsc{C}_2 s_{34} = s_{12} s_{34}$ and $\textsc{C}_2 s_{13} \textsc{C}_2 s_{24}
=s_{13} s_{24}$ (shift action, isomorphic to ${\mathbb Z}_2^2$), and $\textsc{C}_2
s_{23}$ and $\textsc{C}_2 s_{23} \textsc{C}_2 s_{24} = s_{23}s_{24}$ (automorphism
action, isomorphic to $S_3$). Direct calculation shows that
$\alpha$ maps ${\mathbb Z}_2^2 \rtimes S_3$ onto $CS_4$, so $\alpha$ is an isomorphism.
Thus $H^* \cong ({\mathbb Z}_t \rtimes {\mathbb Z}_t^*)
\times S_4$, and $\alpha(H)$ does not contain $\textsc{C}_2$. \hfill
$\square$
\subsection{Proof of Theorem \ref{thm:translate}} \label{ssec:proof}
Every cocyclic matrix $M_\psi$ admits a
decomposition as the Hadamard (pointwise) product of the cocyclic
matrices corresponding to the generators. That is, $M_\psi =
M_{\partial_1}^{\epsilon_1} \ldots
M_{\partial_{4t}}^{\epsilon_{4t}}\, M_\rho$, $\epsilon_i \in \{0, 1\}$.
Each matrix $M_{\partial_i}$ is symmetric. Without loss of generality we negate the $i^{th}$ row and $i^{th}$
column of $M_{\partial_i}$. These Hadamard equivalent
matrices, denoted $M_{i}$, have a very particular form (see
\cite{AAFR08} for details). Each $M_{i}$ is a $4 \times
4$-block back diagonal square matrix of order $4t$. The first
block row has a $4 \times 4$ matrix $A_{[i]_4}$ as the $\lceil
\frac{i}{4} \rceil ^{th}$ block and $4 \times 4$ all-1s blocks in
the other $t-1$ positions. The remaining block rows are obtained
by successively back-cycling the first.
The $4\times 4$-blocks $A_{[i]_4}$ depend on the equivalence class of $i$
modulo $4$, as follows. Let
$R = \left(
\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right)$, $D = \left(
\begin{array}{rr} -1 & 1\\ 1 &-1\end{array}\right)$ so $DR = \left(
\begin{array}{rr} 1 & -1\\ -1 &1\end{array}\right)$.
Then, adopting the notation blank for 1 and $-$ for $-1$, for brevity,
$A_0=\left( \begin{array}{rr} &DR\\DR& \end{array}\right),
A_1=\left( \begin{array}{rr} D&\\ &D
\end{array}\right), A_2=\left(
\begin{array}{rr}DR& \\ &DR \end{array}\right),
A_3=\left( \begin{array}{rr} &D\\ D&\end{array}\right).$\\
It may be checked without difficulty that bundle action by each of $x, u, v, h_r$ and $h_{243}$ leaves $M_\rho$ invariant. Only action by $h_{23}$ alters $M_\rho$. In terms of identifying diagram actions, it does not matter whether we work with $M_{\partial_i}$ or $M_i$ so we use the latter. We determine each bundle action on $M_i$ in the subsections below, concluding with the action of $h_{23}$ on $M_\rho$.
\subsubsection{Shift action of $x$}
First, we change the order of the elements in the group to $g'=xg$, obtaining
$$(x,1) < (x,u) < \dots < (x^{t-1},uv)<(1,1)< \dots < (1,uv) $$
that is, we put the first block of $4$ elements at the end of the list.
For an individual coboundary $\partial_i$, the reordering takes
the first four rows to the last four, moving the other rows
upwards. Now the blocks $A_{[i]_4}$ start from the $\lceil
\frac{i-4}{4} \rceil^{th}$ block column, the negated row is the
$(i-4)^{th}$ row, and the negated column is still the $i^{th}$ column.
Next we perform the pointwise product of the first row and the
others. This first row (the former $5^{th}$) has two negative
entries, at positions $i$ and $i-4$, so we have to negate these
columns, getting the coboundary $\partial_{i-4}$.
So, the action of $x$ on the cocyclic matrix is the
$1$-rotation $\textsc{T}_1$ on it.
\subsubsection{Shift action of $u$}
First, we change the order of the elements in the group to
$g'=ug$, obtaining
$$(x^i,u) < (x^i,1) < (x^i, uv) < (x^i, v),\,
0 \leq i < t, ~(x^i,v) < (x^{i+1},u),\, 0 \leq i <t-1,$$ that is,
we reorder every block of $4$ elements by means of the permutation
$\sigma = (12) (34)$.
For an individual coboundary $\partial_i$, the reordering permutes
rows in the same way. This permutation transforms the blocks
$A_{[i]_4}$ in the same way, under $(A_1 A_2)(A_3 A_0)$, the
negated row is the $\sigma(i)^{th}$ and the negated column is the
$i^{th}$. The first row (the former $2^{nd}$) has two negative
entries, at positions $i$ and $\sigma(i)$. After negating these
columns, we get the coboundary $\partial_{\sigma(i)}$.
So, the action of $u$ on the cocyclic matrix is the
composition of swappings $s_{12} s_{34}$.
\subsubsection{Shift action of $v$}
First, we change the order of the elements in the group to $g'=vg$, obtaining
$$(x^i,v) < (x^i,uv) < (x^i, 1) < (x^i, u),\,
0 \leq i < t, ~(x^i,u) < (x^{i+1},v),\, 0 \leq i <t-1,$$ that is,
we reorder every block of $4$ elements by means of the permutation
$\sigma' = (13) (24)$.
For an individual coboundary $\partial_i$, the reordering permutes
rows in the same way. This permutation transforms the blocks
$A_{[i]_4}$ in the same way, under $(A_1 A_3)(A_2 A_0)$, the
negated row is the $\sigma'(i)^{th}$ and the negated column is the
$i^{th}$. The first row (the former $3^{rd}$) has two negative
entries, at positions $i$ and $\sigma'(i)$. After negating these
columns, we get the coboundary $\partial_{\sigma'(i)}$.
So, the action of $v$ on the cocyclic matrix is the
composition of swappings $s_{13} s_{24}$.
\subsubsection{Automorphism action of $h_r$}
A straightforward algebraic calculation shows that $h_r(\partial
_k) = \textsc{V}_{r^{-1}}(\partial _k)$, for each
$k=x^{k_x}u^{k_u}v^{k_v}$. Set $\delta _{ij}=-1$ if $i=j$, and
$\delta _{ij}=1$ otherwise.
On one hand, $h_r(\partial
_k)(x^{i_x}u^{i_u}v^{i_v},~x^{j_x}u^{j_u}v^{j_v})$
\begin{eqnarray}
&=&
\partial _k( x^{r\cdot i_x \bmod t} \,u^{i_u} \,v^{i_v},~x^{r\cdot
j_x\bmod t}\,u^{j_u}\,v^{j_v}) \\\nonumber &=& \delta
_{x^{k_x}u^{k_u}v^{k_v},~x^{[r\cdot i_x]_ t}u^{i_u}v^{i_v}} ~
\delta _{x^{k_x}u^{k_u}v^{k_v},x^{[r\cdot
j_x]_t}u^{j_u}v^{j_v}}\\\nonumber & & \delta
_{x^{k_x}u^{k_u}v^{k_v},~x^{[r\cdot (i_x+j_x)]_
t}u^{[i_u+j_u]_2}v^{[i_v+j_v]_2}}.\label{delta1}
\end{eqnarray}
On the other hand, $\textsc{V}_{r^{-1}}(\partial
_k)(x^{i_x}u^{i_u}v^{i_v},~x^{j_x}u^{j_u}v^{j_v})$
\begin{eqnarray}
&=& \partial _{x^{[k_x\cdot r^{-1}]_t}\,u^{k_u}\,v^{k_v}}
(x^{i_x}\,u^{i_u}\,v^{i_v},~x^{j_x}\,u^{j_u}\,v^{j_v})
\\\nonumber &=&
\delta _{x^{[k_x\cdot
r^{-1}]_t}\,u^{k_u}\,v^{k_v},~x^{i_x}\,u^{i_u}\,v^{i_v}} ~ \delta
_{x^{[k_x\cdot
r^{-1}]_t}\,u^{k_u}\,v^{k_v},~x^{j_x}\,u^{j_u}\,v^{j_v}}
\\\nonumber & &
\delta _{x^{[k_x\cdot r^{-1}]_t}\,u^{k_u}\,v^{k_v},~x^{[i_x+j_x]_
t}\,u^{[i_u+j_u]_2}\,v^{[i_v+j_v]_2}}.\label{delta2}
\end{eqnarray}
A careful check, using the invertibility of $r$ in ${\mathbb Z}_t$, shows
that these two expressions are equal term by term. Consequently, $h_r =
\textsc{V}_{r^{-1}}$, for all $r \in {\mathbb Z}_t^*$.
\subsubsection{Automorphism action of $h_{243}$}
The automorphism $h_{243}$ shifts cyclically to the right the
second, third and fourth positions of the elements in $G$, in each
block of $4$, leaving the first element unchanged. So the action
on the cocycles will be the same permutation of every second,
third and fourth rows and columns in every block of four.
For an individual coboundary $\partial_i$, this reordering
transforms the blocks $A_{[i]_4}$ in the same way, giving the
permutation $(A_2 A_3 A_0)$, and the negated row/column remains
unchanged if $[i]_4$ is $1$ and is interchanged cyclically between
cosets $2$, $3$ and $0$, so we get the coboundary $s_{234}
(\partial_i)$.
Hence, the action of $h_{243}$ on any cocyclic matrix
gives us the operation $s_{234}$.
\subsubsection{Automorphism action of $h_{23}$}
The action of the automorphism $h_{23}$ on the cocyclic matrix
will be the permutation of second and third rows and columns in
every block of four.
For an individual coboundary $\partial_i$, this reordering
transforms the blocks $A_{[i]_4}$ in the same way, giving the
permutation $(A_2 A_3)$, and the negated row/column remains
unchanged if $[i]_4$ is $0$ or $1$, and is interchanged between cosets
$2$ and $3$ otherwise, so we get the coboundary $s_{23} (\partial_i)$.
The action of this reordering on matrix $M_\rho$ applies the same
permutation to its rows and columns, so
the $4 \times 4$ blocks in (\ref{eq:M_rho}) become $$\left( \begin{array}{rrrr} 1&1&1&1 \\ 1&-1&-1&1\\
1&1&-1&-1 \\1&-1&1&-1 \end{array}\right).$$ This expression
coincides with the pointwise product of the $4 \times 4$ block in (\ref{eq:M_rho}) and the block $A_{2}$
with the second row and column negated, so the action of the
automorphism $h_{23}$ on $M_\rho$ gives us $M_\rho \cdot M_{\partial _2} \cdot
M_{\partial_ 6} \dots M_{\partial_{4t-2}}$, the product with all
coboundaries whose index is congruent to $2$ modulo $4$.
Hence, by Lemma \ref{lem:C-action},
$h_{23}({\partial_{d_1}} \ldots {\partial_{d_k}}\, \rho) = s_{23}({\partial_{d_1}} \ldots {\partial_{d_k}})~(\prod _{i=0}^{t-1}\,{\partial_{2+4i}})\, \rho = \textsc{C}_2(s_{23}({\partial_{d_1}} \ldots {\partial_{d_k}}))\, \rho.$
Hence, the action of $h_{23}$ on any cocyclic matrix gives us the operation
$\textsc{C}_2 s_{23}$.
\section{Complement} \label{sec:complement}
Next we demonstrate that complementation corresponds to matrix transposition and gives the matrix of the transpose cocycle.
\begin{theorem} \label{thm:transposition}
The operation $\textsc{C}_2$ on $M_\psi$ coincides with transposition: $\textsc{C}_2(M_\psi)=(M_\psi) ^{\top} = M_{\psi^\top}$.
\end{theorem}
{\it Proof.} Consider $M_\psi = M_{\partial_{d_1}} \ldots M_{\partial_{d_k}}\, M_\rho$.
Since transposition commutes with pointwise products,
$M_\psi ^\top=M_{\partial_{d_1}} \ldots
M_{\partial_{d_k}}\, M_\rho^\top$. By (\ref{eq:M_rho}) $$M_\rho^\top= J_t \otimes
\left(\begin{array}{rrrr}1&1&1&1\\1&-1&-1&1 \\1&1&-1&-1\\1&-1&1&-1
\end{array}\right)= \left( J_t \otimes \left(\begin{array}{rrrr}1&1&1&1\\1&1&-1&-1 \\1&-1&1&-1\\1&-1&-1&1
\end{array}\right)\right)\, \cdot \, M_\rho \,.$$
By (\ref{eq:C-action}), $\displaystyle M_\psi ^\top = M_{\partial_{d_1}} \ldots
M_{\partial_{d_k}} \left(\prod _{i=0}^{t-1}M_{\partial_{2+4i}} \right) \, M_\rho = \textsc{C}_2(M_\psi)$, as claimed. Since $G = {\mathbb Z} _t \times {\mathbb Z}_2^2$ is abelian, the transpose $\psi^\top$ of $\psi$, with $\psi^\top(g, h) = \psi(h, g)$, is a cocycle \cite[(6.10)]{Hor07}, and $(M_\psi) ^{\top} = M_{\psi^\top}$. \hfill $\square$
\
In summary, we have shown that the diagrammatic operations which can be implemented for effective calculation of cocyclic Hadamard matrices over $G = {\mathbb Z} _t \times {\mathbb Z}_2^2$ can all be interpreted as compositions of known algebraic equivalences, with the exception of complementation, which corresponds to matrix transposition. \'{O} Cath\'{a}in \cite{OCathainPhD} has used the algebraic equivalences together with transposition to determine classes of cocyclic matrices of order $4t$ over various $G$. He then checks any transposes lying in such a class to partition them into Hadamard inequivalence classes. He coins the term {\it strong inequivalence} for Hadamard matrices $H$ and $H'$ for which $H'$ is not Hadamard equivalent to $H$ or to $H^\top$. So, this approach using diagrammatic operations may be computationally effective.
It will also be interesting to investigate if diagrams and diagram operations can be found for cocycles over $G = D_{4t}$, and whether there are diagrammatic operations which correspond to Orrick's switching operations.
\section{Introduction}
A celebrated theorem of Hadamard characterises the complex matrices with entries of norm at most one which have maximal determinant: they are precisely the solutions to the matrix equation $HH^{\ast} = nI_{n}$ satisfying $|h_{ij}| = 1$ for all $1 \leq i,j\leq n$. Equivalently, all entries of $H$ have unit norm, and all rows are mutually orthogonal under the Hermitian inner product, \cite{Hadamard1893}. Real Hadamard matrices, having entries in $\{\pm1\}$, have been extensively studied for a century, though the existence problem is far from settled. We refer the reader to the recent monographs of Horadam and of de Launey and Flannery for extensive discussion of Hadamard matrices, \cite{HoradamHadamard, deLauneyFlannery}.
In this paper we will study the problem of constructing real Hadamard matrices from complex Hadamard matrices (CHM). Suppose that $X$ is a set of complex numbers of modulus $1$.
We define $\mathcal{H}(n, X)$ to be the set of $n \times n$ Hadamard matrices with entries drawn from $X$. In the special case that $X$ is the set of $k^{\textrm{th}}$ roots of unity, a CHM is called a \textit{Butson Hadamard matrix}; the set of such matrices is denoted $\mathcal{BH}(n,k)$. Examples of Butson Hadamard matrices are furnished by the character tables of abelian groups of order $n$ and exponent $k$. Cohn and Turyn proved independently that the existence of $H \in \mathcal{BH}(n,4)$ implies the existence of a real Hadamard matrix of order $2n$, \cite{Cohn65, Turyn}. More recently, Compton, Craigen and de Launey proved that an $n \times n$ matrix with entries in the \textit{unreal} sixth roots of unity $\{\omega_{6}, \omega_{6}^{2}, \omega_{6}^{4}, \omega_{6}^{5}\}$ can be used to construct a real Hadamard matrix of order $4n$, \cite{CCdeL}.
A general construction for mappings between sets of Butson Hadamard matrices is described by Egan and one of the present authors, \cite{mypaper-morphisms}. A key ingredient in the construction is a matrix $H \in \mathcal{BH}(n, k)$ with minimal polynomial $\Phi_{t}(x)$ for some integer $t$. The construction of such matrices was considered further in collaboration with
Eric Swartz, \cite{mypaper-explicitmorphisms}. In all the examples considered previously, matrix entries are roots of unity, and all fields considered are cyclotomic. In this paper, we consider a family of complex Hadamard matrices with entries in the biquadratic extension $\mathbb{Q}[\sqrt{-q}, \sqrt{q+1}]$. When the matrix entries are all in the set $X_{q} = \{ \frac{\pm 1\pm \sqrt{-q}}{\sqrt{q+1}} \}$, such a matrix is called a \textit{Quaternary Unit Hadamard matrix}, abbreviated QUH. Such matrices were first considered by Fender, Kharaghani and Suda, \cite{FKS}.
We will construct a morphism from QUH matrices onto real Hadamard matrices, using skew-Hadamard matrices. This provides a new construction for a family of Hadamard matrices of order $q^{n} + q^{n-1}$ for each prime power $q \equiv 3 \mod 4$ and each $n\geq 1$, previously constructed by Mukhopadhyay and studied further by Seberry, \cite{Mukh, SeberrySkew}. We conclude the paper by studying the decomposition of prime ideals in the field $\mathbb{Q}[\sqrt{-q}, \sqrt{q+1}]$ to obtain non-existence results for QUH matrices in the style of Winterhof \cite{WinterhofExistence}.
\section{Morphisms of QUH matrices}
In this section we construct an isomorphism of fields, which we lift to an isomorphism of matrix algebras. We prove that this isomorphism carries a QUH matrix in the set $\mathcal{H}(n, X_{m})$ to a real Hadamard matrix of order $n(m+1)$; that is, the isomorphism is a \textit{morphism} of complex Hadamard matrices. We will require some standard results in algebra, as discussed in, e.g., Chapters 17--19 of Isaacs' \textit{Graduate Algebra}, \cite{Isaacs}. An \textit{extension field} $k$ of $\mathbb{Q}$ is a field containing $\mathbb{Q}$ as a subfield; in this case $k$ is a $\mathbb{Q}$-vector space and its \textit{degree} is its dimension as a vector space. The degree of $k$ over $\mathbb{Q}$ is denoted by $[k:\mathbb{Q}]$. In the ring $\mathbb{Q}[x]$, every nonzero ideal contains a unique monic polynomial of minimal degree; this polynomial is irreducible if and only if the ideal is maximal. For a polynomial $p(x)$ the quotient $\mathbb{Q}[x]/\left(p(x) \right)$ is a field if and only if the polynomial $p(x)$ is irreducible. An extension field $k$ is the \textit{splitting field} of a polynomial $p(x) \in \mathbb{Q}[x]$ if $k$ is a field of minimal degree over $\mathbb{Q}$ which contains all the roots of $p(x)$. We apply these results to the polynomial $\mathfrak{m}(x) = x^{4} + \frac{2(m-1)}{m+1}x^2+1$. By a slight abuse of terminology, a Hadamard matrix $H$ is \textit{skew} if $H-I$ is a skew-symmetric matrix.
\begin{proposition}\label{fieldprop}
Let $H$ be a skew-Hadamard matrix of order $m+1$, where $m+1$ is not a perfect square.
The minimal polynomial of $\alpha_{m} = \frac{1+\sqrt{-m}}{\sqrt{m+1}}$ and the minimal
polynomial of $\frac{1}{\sqrt{m+1}}H$ are both equal to
\[ \mathfrak{m}(x) = x^4+\frac{2(m-1)}{m+1}x^2+1 \,.\]
\end{proposition}
\begin{proof}
It is easily checked that $\alpha_m$ is a root of $\mathfrak{m}(x)$. Since $\mathfrak{m}(x)$ is even, $-\alpha_{m}$ is also a root. The coefficients of $\mathfrak{m}(x)$ are real, thus $\alpha_{m}^{\ast}$ and $-\alpha_{m}^{\ast}$ are roots. From the fact that $\mathfrak{m}(x)$ has degree $4$, we conclude that these are all the possible roots. Therefore we obtain the factorisation \[\mathfrak{m}(x)=(x-\alpha_m)(x-\alpha_m^{\ast})(x+\alpha_m)(x+\alpha_m^{\ast})\,.\]
Clearly $\mathfrak{m}(x)$ has no linear factors in $\mathbb{Q}[x]$. The only possible quadratic factors with real coefficients are $(x-\alpha_m)(x-\alpha_m^{\ast})=x^2-2x/\sqrt{m+1}+1$ and $(x+\alpha_m)(x+\alpha_m^{\ast})=x^2+2x/\sqrt{m+1}+1$. By hypothesis, $m+1$ is not a perfect square so these factors are not in $\mathbb{Q}[x]$. We have shown that $\mathfrak{m}(x)$ is irreducible.
The field extension $\mathbb{Q}[\alpha_{m}]$ contains $\alpha_{m}^{-1} = \alpha_{m}^{\ast}$ and $-\alpha_{m}$, so it is the splitting field of $\mathfrak{m}(x)$.
Since $H$ is skew-Hadamard we have both $HH^{\top} = (m+1)I_{m+1}$ and $H^{\top} = 2I - H$. It follows that $H(2I-H) = (m+1)I$, or $H^{2} = 2H - (m+1)I$.
Hence,
\[ \left(\frac{1}{\sqrt{m+1}} H\right)^{2} = \frac{2}{m+1} H - I \,.\]
We also compute that
\begin{eqnarray*}
\left(\frac{1}{\sqrt{m+1}} H\right)^{4} & = & \frac{4}{(m+1)} \left(\frac{1}{\sqrt{m+1}} H\right)^{2} - \frac{4}{m+1}H + I \\
& = & \frac{4}{(m+1)} \left(\frac{1}{\sqrt{m+1}} H\right)^{2} - 2 \left( \frac{2}{m+1}H - I \right) - I \\
& = & \frac{4}{(m+1)} \left(\frac{1}{\sqrt{m+1}} H\right)^{2} - 2 \left(\frac{1}{\sqrt{m+1}} H\right)^{2} - I \\
& = & \frac{2-2m}{m+1} \left(\frac{1}{\sqrt{m+1}} H\right)^{2} - I
\end{eqnarray*}
We conclude that the unitary matrix $\frac{1}{\sqrt{m+1}}H$ is a root of polynomial $\mathfrak{m}(x)$, which must be the minimal polynomial of $\frac{1}{\sqrt{m+1}}H$ by irreducibility.
\end{proof}
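The matrix identities used in this proof are easy to verify numerically. The following Python sketch (a verification aid only, not part of the proof) checks them for the Paley-type skew-Hadamard matrix of order $4$, i.e.\ $m=3$; here $m+1=4$ is a perfect square, so $\mathfrak{m}(x)$ is reducible, but $\frac{1}{\sqrt{m+1}}H$ still satisfies it. Clearing denominators, $\mathfrak{m}\big(\frac{1}{\sqrt{m+1}}H\big)=0$ reads $H^4+2(m-1)H^2+(m+1)^2I=0$.
\begin{verbatim}
import numpy as np

m = 3
I = np.eye(4, dtype=int)
# A skew-Hadamard matrix of order m + 1 = 4: H - I is skew-symmetric
# and H H^T = (m + 1) I.
H = np.array([[ 1,  1,  1,  1],
              [-1,  1,  1, -1],
              [-1, -1,  1,  1],
              [-1,  1, -1,  1]])
assert ((H - I).T == -(H - I)).all()           # skew
assert (H @ H.T == (m + 1) * I).all()          # Hadamard
assert (H @ H == 2 * H - (m + 1) * I).all()    # H^2 = 2H - (m+1)I
# Cleared-denominator form of m(x) applied to H:
P = np.linalg.matrix_power(H, 4) + 2 * (m - 1) * (H @ H) + (m + 1) ** 2 * I
assert (P == 0).all()
\end{verbatim}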
When $m+1$ is a perfect square, the polynomial $\mathfrak{m}(x)$ factors into two irreducible quadratic factors in $\mathbb{Q}[x]$, which correspond to the distinct minimal polynomials of $\alpha_m$ and $-\alpha_m$. In this case, the minimal polynomials of $\alpha_m$ and $\frac{1}{\sqrt{m+1}}H$ coincide, and also the minimal polynomials of $-\alpha_m$ and $\frac{1}{\sqrt{m+1}}(H-2I)$ coincide. The case that $m+1$ is a perfect square will be discussed after the proof of Theorem \ref{skewmorphism}. From Proposition \ref{fieldprop}, the next result is immediate.
\begin{proposition}\label{isomorphism} If $H$ is a skew-Hadamard matrix of order $m+1$, then all of the following $\mathbb{Q}$-algebras are isomorphic:
\begin{equation}\label{isomorphismEq}
\mathbb{Q}[x]/(\mathfrak{m}(x))\simeq \mathbb{Q}\left[\frac{1}{\sqrt{m+1}}H\right] \simeq \mathbb{Q}[\alpha_{m}].
\end{equation}
\end{proposition}
\begin{definition}
A \textit{Quaternary Unit Hadamard} (QUH) matrix is an element of $\mathcal{H}(n,X_m)$, where
\[ X_{m} = \left\{ \frac{\pm 1 \pm \sqrt{-m} } {\sqrt{m+1}} \right\}\,. \]
\end{definition}
Now we give the main result of this section.
\begin{thm}\label{skewmorphism}
If there exists a skew-Hadamard matrix $H$ of order $m+1$, where $m+1$ is not a perfect square, then there exists a morphism $\mathcal{H}(n,X_m)\rightarrow \mathcal{BH}(nm+n,2)$.
\end{thm}
\begin{proof}
Assume that there exists $M\in \mathcal{H}(n,X_m)$, since otherwise the claim is vacuous.
By Proposition \ref{isomorphism}, there exists an isomorphism $\mathbb{Q}(\alpha_{m}) \rightarrow \mathbb{Q}(\frac{1}{\sqrt{m+1}}H)$. We make this explicit:
\begin{equation*}
\varphi: \alpha_m\mapsto \frac{1}{\sqrt{m+1}}H
\end{equation*}
and since $\alpha_m$ is a generator of $\mathbb{Q}(\alpha_{m})$ the function $\varphi$ extends uniquely to the whole field. Recalling that $H$ is skew, we obtain
\[ \varphi(-\alpha_m)=\frac{-1}{\sqrt{m+1}}H = \frac{1}{\sqrt{m+1}}(H-2I)^{\top},\,\,\, \varphi(\alpha_{m}^{\ast}) = \frac{1}{\sqrt{m+1}}H^{\top}\,. \]
Define $M^\varphi$ to be the block matrix obtained from $M$ by applying $\varphi$ entrywise. Then $M^{\varphi}$ is a real matrix of size $n(m+1)\times n(m+1)$ with entries in the set $\{\pm 1/\sqrt{m+1}\}$. Since $M\in \mathcal{H}(n,X_m)$ the (Hermitian) inner product of columns $c_{i}$ and $c_{j}$ of $M$ is $\langle c_i, c_j\rangle = n\delta_{i}^{j}$, where $\delta_{i}^{j}$ is the Kronecker $\delta$ function. Since $\varphi$ is an isomorphism of $\mathbb{Q}$-algebras, $\varphi(0)=\mathbf{0}_{m+1}$ and $\varphi(1)= I_{m+1}$. It follows that
\begin{align*}
\sum_{k} \varphi(c_{i,k})\varphi(c_{j,k})^{\top} &= \sum_{k} \varphi(c_{i,k})\varphi(c_{j,k}^{\ast})\\
&= \varphi\left(\sum_k c_{i,k}c_{j,k}^{\ast}\right)\\
&= \varphi(\langle c_i, c_j\rangle)\\
&= n\delta_{i}^{j}I_{m+1}\,.
\end{align*}
This shows that $M^{\varphi}\left(M^{\varphi}\right)^{\top} = nI_{n(m+1)}$. The entries of $M^{\varphi}$ are in the set $\{\pm 1/\sqrt{m+1}\}$, so the entries of $\sqrt{m+1}M^{\varphi}$ are in the set $\{\pm 1\}$. We have shown that
\begin{equation*}
\sqrt{m+1}M^{\varphi}\left(\sqrt{m+1}M^{\varphi}\right)^\top = n(m+1)I_{n(m+1)} \,,
\end{equation*}
which establishes the theorem.
\end{proof}
A less technical method to prove the above theorem without assumptions on $m+1$ is as follows: Let $H\in\mathcal{H}(n, X_m)$, and let
\[H = \frac{1}{\sqrt{m+1}}A + \frac{\sqrt{-m}}{\sqrt{m+1}}B,\]
where $A$ and $B$ are $\pm 1$ matrices of order $n$. Then
\[AB^{\top} = BA^{\top}\hbox{ and } AA^{\top}+mBB^{\top} = n(m+1)I_n.\]
Let $M$ be a skew-Hadamard matrix of order $m+1$. Substituting $A$ for the diagonal entries of $M$ and $\pm B$ for the off-diagonal entries $\pm 1$ of $M$, it can be verified that the resulting matrix will be a Hadamard matrix of order $n(m+1)$. Although this proof is simpler than that of Theorem~\ref{skewmorphism}, our method gives additional insights into existence and non-existence of QUH matrices, as demonstrated in Section 3.\\
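For the reader who wishes to experiment, here is a Python sketch of this plug-in construction; the helper \texttt{plug\_in} and the toy data are ours, and only the degenerate case $n=1$, $m=3$ is checked here (a nontrivial instance appears after the next theorem).
\begin{verbatim}
import numpy as np

# A, B are n x n (+/-1)-matrices with A B^T = B A^T and
# A A^T + m B B^T = n(m+1) I_n; M is a skew-Hadamard matrix of
# order m+1.  A replaces the diagonal (+1) entries of M, and the
# off-diagonal entries +/-1 are replaced by +/-B.
def plug_in(A, B, M):
    I = np.eye(M.shape[0], dtype=int)
    return np.kron(I, A) + np.kron(M - I, B)

# Degenerate check with n = 1, m = 3 (Paley skew-Hadamard of order 4).
M = np.array([[ 1,  1,  1,  1],
              [-1,  1,  1, -1],
              [-1, -1,  1,  1],
              [-1,  1, -1,  1]])
R = plug_in(np.array([[1]]), np.array([[1]]), M)
N = R.shape[0]
assert (R @ R.T == N * np.eye(N, dtype=int)).all()
\end{verbatim}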
Let $q$ be an odd prime power and $\mathbb{F}_q$ be a finite field with $q$ elements. The element $a \in \mathbb{F}_{q}$ is a \textit{quadratic residue} if there exists $y \in \mathbb{F}_{q}$ such that $y^{2} = a$. Otherwise, $a$ is a non-residue. The \textit{quadratic character} is defined to be $\chi_q(a)=1$ if $a \in \mathbb{F}_q^*=\mathbb{F}_q-\{0\}$ is a quadratic residue in $\mathbb{F}_q$, $\chi_q(a)=-1$ if $a\in \mathbb{F}_q^*$ is a quadratic non-residue in $\mathbb{F}_q$ and $\chi_q(0)=0$. In the case where $q=p$ is a prime number, the quadratic character $\chi_p(a)$ on $\mathbb{F}_p\simeq \mathbb{Z}/p\mathbb{Z}$ can be identified with the \textit{Legendre symbol} and is denoted $(a/p)$. Later we will use the fact that for a fixed prime $p$ and for every $a,b\in\mathbb{Z}$, $\left(ab/{p}\right)=\left({a}/{p}\right)\left({b}/{p}\right)$, \cite[Proposition 5.1.2]{IrelandRosen}. Let $\{g_0=0,g_1,\dots,g_{q-1}\}$ be an enumeration of $\mathbb{F}_q$ then $Q = \left[ \chi_q(g_i-g_j)\right]_{0\leq i,j\leq q-1}$ is the \textit{Jacobsthal matrix} of order $q$.
\begin{thm}[Section 3, \cite{FKS}]
Let $q$ be an odd prime power with $q\equiv 3\pmod 4$. Define $1\times 1$ matrices $A_{0} = B_{0} = 1$, let $Q$ be the $q \times q$ Jacobsthal matrix and $J_q$ the $q\times q$ all-ones matrix. For each $t\geq 1$, define
\[ A_{t} = J_{q} \otimes B_{t-1}, \,\,\, B_{t} = I_{q} \otimes A_{t-1} + Q \otimes B_{t-1} \,.\]
Then for each $t$ the matrix $\frac{1}{\sqrt{q+1}}A_{t} + i \frac{\sqrt{q}}{\sqrt{q+1}} B_{t}$ is a matrix in $\mathcal{H}(q^t,X_{q})$.
\end{thm}
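For example, for $q=3$ and $t=1$ the recursion and the subsequent plug-in construction can be checked numerically. The following Python sketch (ours, with the Jacobsthal matrix built directly from the quadratic character of $\mathbb{F}_3$) verifies that the resulting matrix lies in $\mathcal{H}(3,X_3)$, and that plugging $A_1$, $B_1$ into the Paley skew-Hadamard matrix of order $q+1=4$ yields a real Hadamard matrix of order $q^t(q+1)=12$.
\begin{verbatim}
import numpy as np

q = 3
chi = {0: 0, 1: 1, 2: -1}          # quadratic character of F_3
Q = np.array([[chi[(i - j) % q] for j in range(q)]
              for i in range(q)])  # Jacobsthal matrix
A1 = np.ones((q, q), dtype=int)    # A_1 = J_q (x) B_0
B1 = np.eye(q, dtype=int) + Q      # B_1 = I_q (x) A_0 + Q (x) B_0
M = (A1 + 1j * np.sqrt(q) * B1) / np.sqrt(q + 1)
assert np.allclose(np.abs(M), 1)                   # unimodular entries
assert np.allclose(M @ M.conj().T, q * np.eye(q))  # M is in H(3, X_3)

# Plug A_1, B_1 into the Paley skew-Hadamard matrix of order 4.
S = np.array([[ 1,  1,  1,  1],
              [-1,  1,  1, -1],
              [-1, -1,  1,  1],
              [-1,  1, -1,  1]])
I4 = np.eye(4, dtype=int)
H12 = np.kron(I4, A1) + np.kron(S - I4, B1)
assert (H12 @ H12.T == 12 * np.eye(12, dtype=int)).all()
\end{verbatim}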
Hence there exist $\mathcal{H}(q^{t}, X_q)$ matrices for all prime powers $q \equiv 3 \mod 4$. Since the Paley matrix of order $q + 1$ is skew, we can apply Theorem \ref{skewmorphism} to obtain the following result.
\begin{corollary}
Let $q \equiv 3 \mod 4$ be a prime power. For any integer $n \geq 1$ there exists a (real) Hadamard matrix of order $q^{n} + q^{n-1}$.
\end{corollary}
This result was first discovered by Mukhopadhyay, and later clarified and elaborated by Seberry, \cite{Mukh, SeberrySkew}. Of course, it would be interesting to develop constructions of Hadamard matrices at previously unknown orders. As a first contribution in this direction, we investigate the non-existence of QUH matrices in the next section.
\section{Nonexistence of quaternary unit Hadamard matrices}
The \textit{Galois group} of an irreducible polynomial $p(x)$ is the group of field automorphisms of a splitting field of $p(x)$. Over $\mathbb{Q}$, the order of the Galois group and the degree of the splitting field coincide. The Galois correspondence gives an inclusion-reversing bijection between the lattice of subfields of $\mathbb{Q}[x]/\left(p(x)\right)$ and the subgroups of the Galois group.
An element $x\in \mathbb{C}$ is an \textit{algebraic integer} if it is a root of a monic polynomial in $\mathbb{Z}[x]$. The ring of integers of a number field $k\subseteq\mathbb{C}$ is the largest subring of the algebraic integers contained in $k$, usually denoted $\mathcal{O}_k$. In the ring of integers of a number field, ideals factorise uniquely as a product of \textit{prime ideals}, \cite[Theorem 14]{Marcus}. Studying prime factorisations related to the determinant of a putative complex Hadamard matrix can sometimes yield nonexistence results. This argument is similar to one given by Winterhof for certain Butson Hadamard matrices, \cite{WinterhofExistence}.
First, we introduce terminology for the factorisation of a prime ideal of $\mathbb{Z}$ in $\mathcal{O}_{k}$ for a number field $k$. As is customary we will denote prime ideals in $k$ by the gothic letters $\mathfrak{p}$ and $\mathfrak{q}$ and rational primes by $p$ and $q$.
\begin{definition} Let $k$ be the splitting field of an irreducible polynomial, and $q$ be a rational prime.
\begin{itemize}
\item $q$ is \textit{inert} in $\mathcal{O}_k$ if $(q)$ is a prime ideal in $\mathcal{O}_k$.
\item If $q$ is not inert then it \textit{splits} in $\mathcal{O}_k$. Let $(q)=\prod_i \mathfrak{q}_i^{e_i}$ be the prime ideal decomposition of $(q)$. If some $e_i\geq 2$ then $q$ is \textit{ramified}, otherwise it \textit{splits completely}.
\end{itemize}
\end{definition}
The \textit{discriminant} of a number field is an integer valued invariant that controls the factorisation of rational primes in that field. The following result is a special case of a more general result on the splitting of rational primes on number fields, see Theorems 21, 23 and 24 of Marcus' \textit{Number Fields} for details, \cite{Marcus}.
\begin{thm}\label{splittingthm}
Let $k$ be a number field. If a rational prime $q$ is ramified in $\mathcal{O}_k$, then $q\mid \mathrm{disc}(k)$. Let $k$ be the splitting field of some irreducible polynomial, where the degree of $k$ over $\mathbb{Q}$ is $n=[k:\mathbb{Q}]$. If $q$ is a rational prime such that $q\nmid \mathrm{disc}(k)$, then
\[(q)=\mathfrak{q}_1\dots \mathfrak{q}_r,\]
where $r|n$. Furthermore the action of the Galois group on $\{\mathfrak{q}_1,\dots,\mathfrak{q}_r\}$ is transitive.
\end{thm}
In a quadratic extension of $\mathbb{Q}$, the Legendre symbol controls the splitting of prime ideals.
\begin{thm}[p.24, Theorem 25, \cite{Marcus}]\label{quadsplit}
Let $k=\mathbb{Q}[\sqrt{d}]$ where $d$ is a squarefree integer. Then $\mathrm{disc}(k)= d$ if $d \equiv 1 \mod 4$ and $\mathrm{disc}(k) = 4d$ if $d \equiv 2, 3 \mod 4$. Suppose that $q$ is an odd rational prime and $q\nmid \mathrm{disc}(k)$. Then
\begin{itemize}
\item $q$ is inert in $\mathcal{O}_k$ if $\left({d}/{q}\right)=-1$.
\item $q$ splits into distinct prime ideals in $\mathcal{O}_k$ if $\left({d}/{q}\right)=1$.
\end{itemize}
\end{thm}
We will study these concepts for the field $K=\mathbb{Q}[\alpha_m]$, which by Proposition \ref{fieldprop} is the splitting field of $\mathfrak{m}(x)$. Since $2/(\alpha_m+\alpha^{\ast}_m)=\sqrt{m+1}$ and $(\sqrt{m+1})\alpha_m -1= \sqrt{-m}$ we have an isomorphism $\mathbb{Q}[\alpha_m]\simeq \mathbb{Q}[\sqrt{-m},\sqrt{m+1}]$. There are three intermediate subfields of $K$, as illustrated.
\begin{center}
\begin{tikzpicture}[node distance =2cm]
\node(K) {$K=\mathbb{Q}\left[\sqrt{m+1},\sqrt{-m}\right]$};
\node(K2)[below=.75cm of K]{$K_2=\mathbb{Q}\left[\sqrt{m+1}\right]$};
\node(K1)[left=1.25cm of K2] {$K_1=\mathbb{Q}\left[\sqrt{-m}\right]$};
\node(K3)[right=1.25cm of K2]{$K_3=\mathbb{Q}\left[\sqrt{-m(m+1)}\right]$};
\node(Q)[below =2.25cm of K]{$\mathbb{Q}$};
\foreach \x in {1,2,3}{
\draw(K\x)--(K);
\draw(Q)--(K\x);
}
\end{tikzpicture}
\end{center}
\begin{center}
\textit{The lattice of subfields of $K$.}
\end{center}
The discriminant of a biquadratic extension is given as an exercise by Marcus.
\begin{proposition}[p.36-37, \cite{Marcus}]\label{discprop}
The discriminant of a biquadratic extension $k=\mathbb{Q}[\sqrt{a},\sqrt{b}]$ where $\gcd(a,b)=1$ is
\[\mathrm{disc}(k)=\mathrm{disc}(k_1)\mathrm{disc}(k_2)\mathrm{disc}(k_3),\]
where $k_1=\mathbb{Q}[\sqrt{a}]$, $k_2=\mathbb{Q}[\sqrt{b}]$ and $k_3=\mathbb{Q}[\sqrt{ab}]$.
\end{proposition}
Let $G=\mathrm{Gal}(K/\mathbb{Q})$ be the Galois group of the splitting field of $\mathfrak{m}(x)$. By the Galois correspondence $G$ has order 4, and has three distinct subgroups of order 2. So $G$ is elementary abelian, generated by $\sigma:\sqrt{m+1}\mapsto -\sqrt{m+1}$ and $\tau:\sqrt{-m}\mapsto -\sqrt{-m}$. We identify $\tau$ with complex conjugation. Note that $K_1=\mathrm{Fix}(\sigma)$ is the fixed field of $\sigma$, that $K_2=\mathrm{Fix}(\tau)$ is the fixed field of $\tau$ and $K_3=\mathrm{Fix}(\sigma\tau)$ is the fixed field of $\sigma\tau$.
From now on, let $m = p$ be a prime congruent to $3$ modulo $4$, and write $s$ for the squarefree part of $p+1$. Then $K\simeq\mathbb{Q}[\sqrt{-p},\sqrt{s}]$, and applying Proposition \ref{discprop} we have
\[\mathrm{disc}(K)=
\begin{cases}
s^2p^2 &\hbox{ if } s\equiv 1\mod 4\\
16s^2p^2 & \hbox{ if } s\equiv 2,3 \mod 4
\end{cases}.
\]
Let $q$ be a prime number. By Theorem \ref{splittingthm}, the prime $q$ ramifies in $\mathcal{O}_K$ only if $q=2$, $q=p$ or $q\mid s$. Next we describe which non-ramified primes split in $\mathcal{O}_{K}$.
\begin{proposition}\label{biquadsplit}
Let $q$ be a rational prime not dividing $\mathrm{disc}(K)$. Then one of the following holds:
\begin{itemize}
\item $(q)=\mathfrak{q}_1\mathfrak{q}_2\mathfrak{q}_3\mathfrak{q}_4$ in $\mathcal{O}_K$ and $q$ splits in every subfield of $K$.
\item $(q)=\mathfrak{q}_1\mathfrak{q}_2$ in $\mathcal{O}_K$ and $q$ splits in one proper subfield of $K$, being inert in the other two.
\end{itemize}
\end{proposition}
\begin{proof}
By Theorem \ref{quadsplit}, the prime $q$ splits in $K_{1}$ if and only if $\left({-p}/{q}\right) = 1$, and $q$ splits in $K_{2}$ if and only if $\left({s}/{q}\right) = 1$.
Suppose that $\left({-p}/{q}\right)=\left({s}/{q}\right)=-1$. Then $\left({-ps}/{q}\right)=\left({-p}/{q}\right)\left({s}/{q}\right)=1$, so $q$ splits in $K_3$; in every other case $q$ already splits in $K_1$ or in $K_2$.
We conclude that $q$ is not inert in $K$.
Since by assumption $q$ does not ramify, Theorem \ref{splittingthm} tells us that $q$ splits in $\mathcal{O}_K$ into two or four prime ideals. Suppose that $(q)=\mathfrak{q}_1\mathfrak{q}_2\mathfrak{q}_3\mathfrak{q}_4$. Then up to a relabeling of the primes $\mathfrak{q}_i$ we can assume that
\begin{center}
\begin{tabular}{l l}
$\mathfrak{q}_1^{\sigma}=\mathfrak{q}_2,$ & $\mathfrak{q}_3^{\sigma}=\mathfrak{q}_4$\\
$\mathfrak{q}_1^{\tau}=\mathfrak{q}_3,$ & $\mathfrak{q}_2^{\tau}=\mathfrak{q}_4$\\
$\mathfrak{q}_1^{\sigma\tau}=\mathfrak{q}_4,$ & $\mathfrak{q}_2^{\sigma\tau}=\mathfrak{q}_3$\\
\end{tabular}
\end{center}
This implies that $(\mathfrak{q}_1\mathfrak{q}_2)^{\sigma}=\mathfrak{q}_1\mathfrak{q}_2$ and $(\mathfrak{q}_3\mathfrak{q}_4)^{\sigma}=\mathfrak{q}_3\mathfrak{q}_4$, therefore $\mathfrak{q}_1\mathfrak{q}_2$ and $\mathfrak{q}_3\mathfrak{q}_4$ are ideals in the fixed field $K_1$ of $\sigma$ and thus $q$ splits as $(q)=(\mathfrak{q}_1\mathfrak{q}_2)(\mathfrak{q}_3\mathfrak{q}_4)$ in $K_1$. We can show analogously that $q$ splits in $K_2$ and $K_3$. Suppose next that $q$ splits in $K$ as $\mathfrak{q}_1\mathfrak{q}_2$. Then the Galois group acts as in one of the following possibilities.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\mathfrak{q}_1^{\sigma}$ & $\mathfrak{q}_1^{\tau}$ & $\mathfrak{q}_1^{\sigma\tau}$ & Subfield containing $\mathfrak{q}_1$ and $\mathfrak{q}_2$\\
\hline
$\mathfrak{q}_1$ & $\mathfrak{q}_2$ & $\mathfrak{q}_2$ & $K_1=\mathrm{Fix}(\sigma)$\\
$\mathfrak{q}_2$ & $\mathfrak{q}_1$ & $\mathfrak{q}_2$ & $K_2=\mathrm{Fix}(\tau)$\\
$\mathfrak{q}_2$ & $\mathfrak{q}_2$ & $\mathfrak{q}_1$ & $K_3=\mathrm{Fix}(\sigma\tau)$\\
\hline
\end{tabular}
\end{center}
In each case, there is exactly one non-identity element $g\in G$ fixing both $\mathfrak{q}_1$ and $\mathfrak{q}_2$. So $q$ splits in the fixed field of $g$, and is inert in the other two intermediate subfields.
\end{proof}
In our application to QUH matrices, we will require the following special case of Proposition \ref{biquadsplit}.
\begin{corollary}\label{splitcor}
Let $q$ be an odd rational prime, coprime to both $p$ and $s$. In $\mathcal{O}_K$, we have $(q)=\mathfrak{q}_1\mathfrak{q}_2$ with $\mathfrak{q}_1^\tau=\mathfrak{q}_1$ and $\mathfrak{q}_2^\tau=\mathfrak{q}_2$ if and only if $(-p/q)=-1$ and $(s/q)=1$.
\end{corollary}
\begin{proof}
Since $\mathfrak{q}_1^\tau=\mathfrak{q}_1$ it must be the case that $\mathfrak{q}_{1}^{\sigma} = \mathfrak{q}_{2}$ and, by Proposition \ref{biquadsplit}, $q$ splits in $K_{2}$ as $\mathfrak{q}_{1}\mathfrak{q}_{2}$. So by Theorem \ref{quadsplit}, we must have $(s/q) = 1$. Furthermore, $(q)$ must be inert in $K_{1}$, from which we obtain $(-p/q)= -1$ as required. The converse follows from Theorem \ref{quadsplit} and Proposition \ref{biquadsplit}.
\end{proof}
Recall that the action of $\tau$ on $K$ corresponds to the action of complex conjugation on $K$. Therefore the case above is equivalent to $(q)=\mathfrak{q}_1\mathfrak{q}_2$ with $\mathfrak{q}_1^*=\mathfrak{q}_1$ and $\mathfrak{q}_2^*=\mathfrak{q}_2$. We can now formulate our main nonexistence theorem.
\begin{thm}\label{nonexistence}
Let $n$ be an odd integer, with squarefree part $t$. Let $p\equiv 3\mod 4$ be a prime number such that the squarefree part of $p+1$ is $s>1$. If there exists an odd prime $q$ such that
\begin{itemize}
\item $q$ divides $t$,
\item $\left({s}/{q}\right)=1$, and
\item $\left({-p}/{q}\right)=-1$,
\end{itemize}
then $\mathcal{H}(n,X_p)$ is empty.
\end{thm}
\begin{proof}
Suppose, for contradiction, that $M\in \mathcal{H}(n,X_p)$, and set $D=(p+1)^n\det M$. Then $ D \in \mathcal{O}_K$, since $(p+1)\alpha\in\mathcal{O}_K$ for every $\alpha\in X_p$. The matrix $M$ is complex Hadamard, therefore $DD^*=(p+1)^{2n}n^n=a^2t^n$, for some $a\in \mathbb{Z}$. By Corollary \ref{splitcor}, $(q)=\mathfrak{q}_1\mathfrak{q}_2$ in $\mathcal{O}_K$ with $\mathfrak{q}_1=\mathfrak{q}_1^*$. We have that $q|t$, so since $n$ is odd the prime ideal $\mathfrak{q}_1$ appears with odd multiplicity in the decomposition of $(p+1)^{2n}n^n$ in $\mathcal{O}_K$.
Since $\mathfrak{q}_{1}$ is prime and divides the product $(D)(D^{\ast})$, it divides one of the factors; without loss of generality, suppose that $\mathfrak{q}_1$ divides $(D)$. So $(D)$ factors into prime ideals uniquely as
\[(D)=\mathfrak{q}_1^{\ell}\prod_j \mathfrak{p}_j^{\ell_j},\]
where each $\mathfrak{p}_j$ is a prime ideal distinct from $\mathfrak{q}_1$. Then $(D^*)=(D)^*=\mathfrak{q}_1^{\ell}\prod_j(\mathfrak{p}_j^*)^{\ell_j}$; since $\mathfrak{q}_1^*=\mathfrak{q}_1$, no $\mathfrak{p}_j^*$ equals $\mathfrak{q}_1$ either. But this implies that $\mathfrak{q}_1$ appears with even multiplicity $2\ell$ in $(D)(D^{\ast})$, contradicting its odd multiplicity in $(p+1)^{2n}n^{n}$.
\end{proof}
The only prime of the form $n^2-1$ is 3. In this case the matrices $\mathcal{H}(n,X_3)$ coincide with the unreal $BH(n,6)$ matrices of Compton, Craigen and de Launey. The set $\mathcal{H}(n, X_{3})$ is empty if there exists a prime $q \equiv 5 \pmod {6}$ which divides the square-free part of $n$ (see Theorem 2 of \cite{CCdeL} or Theorem 5 of \cite{WinterhofExistence} for a proof).\\
We conclude this paper by discussing some consequences of Theorem \ref{nonexistence}. Suppose first that $p=7$. Then a prime $q$ satisfying both $(q/7) = -1$ and $(2/q) = 1$ cannot divide the square-free part of $n$. By quadratic reciprocity, these are the primes which satisfy both $q \equiv 3, 5, 6 \mod 7$ and $q \equiv 1, 7 \mod 8$. By Dirichlet's Theorem on primes in arithmetic progressions, there are infinitely many such primes. Similar results hold for each prime $p$, as illustrated in the table below.\\
\begin{center}
\begin{tabular}{|l |l |}
\hline
$p$ & $n$ \\
\hline
$7$ & $17,31,41,47,51,73,85,89,93,97,103,119,123,141,\dots$\\
$11$ & $13,39,61,65,73,83,91,107,109,117,131,143,167,\dots$\\
$19$ & $29,31,41,59,71,79,87,89,93,109,123,145,151,\dots$\\
$23$ & $5,15,19,35,43,45,53,55,57,65,67,85,95,97,105,\dots$\\
$31$ & $17,23,51,69,73,79,85,89,115,119,127,137,151,\dots$\\
$43$ & $5,7,15,19,21,35,37,45,55,57,63,65,77,85,89,91,\dots$\\
\hline
\end{tabular}\\\vspace{0.5cm}
\textit{Pairs $(n,p)$ such that $\mathcal{H}(n,X_p)$ is empty.}
\end{center}
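The table can be reproduced mechanically. The following Python sketch (ours) tests the hypotheses of the nonexistence theorem directly, computing Legendre symbols by Euler's criterion; running it with $p=7$ prints exactly the first row of the table above.
\begin{verbatim}
def legendre(a, q):           # Legendre symbol (a/q), q an odd prime
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def squarefree_part(n):
    s, d = 1, 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n, e = n // d, e + 1
        if e % 2:
            s *= d
        d += 1
    return s * n

def odd_prime_factors(n):
    fs, d = [], 2
    while d * d <= n:
        if n % d == 0:
            fs.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return [f for f in fs if f % 2 == 1]

def ruled_out(n, p):          # the conditions of the theorem above
    s, t = squarefree_part(p + 1), squarefree_part(n)
    return any(legendre(s, q) == 1 and legendre(-p, q) == -1
               for q in odd_prime_factors(t))

print([n for n in range(3, 145, 2) if ruled_out(n, 7)])
# -> [17, 31, 41, 47, 51, 73, 85, 89, 93, 97, 103, 119, 123, 141]
\end{verbatim}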
In fact, it is a consequence of the Chebotarev Density Theorem that the proportion of primes $q\leq N$ to which the conditions of Theorem \ref{nonexistence} apply tends to $1/4$ as $N$ tends to infinity. In particular, there are infinitely many primes which obstruct the existence of matrices in $\mathcal{H}(n, X_{p})$ for any fixed $p$.\\
To illustrate Theorem \ref{nonexistence} in a case where not all ideals are principal, we consider $p=43$ and $q=5$, so that $s=11$. We have $(-43/5)=-1$, thus the prime $5$ is inert in $\mathcal{O}_{K_1}$.
By Proposition \ref{biquadsplit}, $(5)$ splits in $\mathcal{O}_K$ as the product of two prime ideals coming from $\mathcal{O}_{K_2}$; indeed $(5)=(5,1+\sqrt{11})(5,1-\sqrt{11})$ in $\mathcal{O}_K$. If there exists $H\in\mathcal{H}(5,X_{43})$, then setting $D=11^5\det H$ gives $DD^*=11^{10}\cdot 5^5$. Thus in $\mathcal{O}_K$ this means
\[(D)(D)^* = (11^{5})^2 (5,1+\sqrt{11})^{5}(5,1-\sqrt{11})^{5}.\]
The ideal $(5,1+\sqrt{11})=(5,1+\sqrt{11})^*$ appears with even multiplicity on the left hand side and odd multiplicity on the right hand side. Hence $\mathcal{H}(5,X_{43})$ is empty.
\section*{Acknowledgements}
This research was completed while JP and PH were undergraduates and GNP was a doctoral student at Worcester Polytechnic Institute.
JP was supported by a Student Undergraduate Research Fellowship sponsored by the office of the Dean of Arts and Sciences. PH and GNP
were supported by P\'{O}C's startup funds.
The authors acknowledge the anonymous referees for many helpful suggestions which improved the exposition of the paper.
\bibliographystyle{abbrv}
\section{\bf Introduction}\label{S:intro}
The following theorem is a standard exercise in functional analysis.
\begin{thm}\label{T:norm}
Let $X$ and $Y$ be Banach spaces, let $(T_n)_{n\ge1}$ and $T$
be continuous linear maps from $X$ to $Y$, and let $D$ be a dense subset of~$X$.
Then the following statements are equivalent:
\begin{enumerate}
\item[(i)] $T_nx\to Tx$ for all $x\in X$;
\item[(ii)] $T_nx\to Tx$ for all $x\in D$ and $\sup_n\|T_n\|<\infty$.
\end{enumerate}
\end{thm}
The implication (ii)$\Rightarrow$(i) is an easy $\epsilon/3$-argument.
The implication (i)$\Rightarrow$(ii) is an application of the
Banach--Steinhaus theorem; in fact it dates back
to the original paper of Banach and Steinhaus \cite{BS27}.
The completeness of $Y$ is not really needed here,
since we can always embed $Y$ in its completion.
The completeness of $X$, however, is needed for
the Banach--Steinhaus theorem.
Without it the implication (i)$\Rightarrow$(ii) may actually fail.
\begin{example}\label{X:incomplete}
Let $X=c_{00}$, the space of finitely supported
sequences of real numbers, with the sup-norm,
and let $Y=\mathbb{R}$. Let $\pi_n:c_{00}\to \mathbb{R}$ be the
$n$-th coordinate functional, let $T_n:=n\pi_n$ and let $T:=0$.
For each $x\in c_{00}$, we have $T_nx=0$ for all sufficiently large $n$, so (i) holds.
However, $\|T_n\|=n$ for all $n$, so (ii) fails.
\end{example}
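The failure mode in this example is easy to see numerically; the following Python sketch (an illustration, not part of the argument) evaluates $T_nx$ for a fixed finitely supported $x$ and recalls why $\|T_n\|=n$.
\begin{verbatim}
# T_n x = n * x_n on c_00 (finitely supported sequences), sup-norm.
x = [2.0, -1.0, 3.0]                  # x is supported on coordinates 1..3
def T(n, x):
    return n * x[n - 1] if n <= len(x) else 0.0

print([T(n, x) for n in range(1, 8)])
# -> [2.0, -2.0, 9.0, 0.0, 0.0, 0.0, 0.0]: eventually 0 for fixed x.
# Yet ||T_n|| = n: the unit vector e_n satisfies |T_n e_n| = n.
\end{verbatim}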
Theorem~\ref{T:norm} has a corollary for weak topologies.
\begin{cor}\label{C:weak}
Let $X$ and $Y$ be Banach spaces, let $(T_n)_{n\ge1}$ and $T$
be continuous linear maps from $X$ to $Y$, and let $D$ be a weakly dense subset of $X$.
Then the following statements are equivalent:
\begin{enumerate}
\item[(i)] $T_nx\to Tx$ weakly for all $x\in X$;
\item[(ii)] $T_nx\to Tx$ weakly for all $x\in D$ and $\sup_n\|T_n\|<\infty$.
\end{enumerate}
\end{cor}
\begin{proof}
The implication (i)$\Rightarrow$(ii) holds because, by the Banach--Steinhaus theorem,
weakly bounded implies norm bounded.
For the implication (ii)$\Rightarrow$(i), let $\phi\in Y^*$, the dual of $Y$.
If (ii) holds, then $(\phi\circ T_n)x\to (\phi\circ T)x$ for all $x\in D$,
hence also for all $x$ in the linear span of $D$,
and if the latter is weakly dense then it is also norm dense
(see e.g.\ \cite[Corollary to Theorem~3.12]{Ru91}).
Thus we may apply Theorem~\ref{T:norm}
to deduce that $(\phi\circ T_n)x\to (\phi\circ T)x$ for all $x\in X$.
As this holds for each $\phi\in Y^*$, we conclude that (i) holds.
\end{proof}
If $X$ and $Y$ happen to be dual spaces, then we may also ask
whether the analogue of Corollary~\ref{C:weak} holds for weak* topologies.
This problem arose recently in \cite{GMR22} in the context of summability operators.
It turns out that the answer is negative.
This time, interestingly, it is the implication (ii)$\Rightarrow$(i) that breaks down.
\begin{example}\label{X:weak*}
Let $X=\ell^\infty$, the space of bounded sequences, normed by the
sup-norm, and let $Y=\mathbb{R}$.
Let $\pi_n:\ell^\infty\to\mathbb{R}$ be the $n$-th coordinate functional, let $T_n:=\pi_n$ and let $T:=0$.
Let $D=c_0$, the subspace of $\ell^\infty$ consisting of sequences that tend to zero.
Since the bidual of $c_0$ is $\ell^\infty$, it follows that $c_0$ is weak* dense in $\ell^\infty$
(see e.g.\ \cite[Chapter~4, Exercise~1]{Ru91}).
Also $T_nx\to0$
for all $x\in c_0$ and $\sup_n\|T_n\|=1<\infty$, so (ii) holds.
However, if $x$ is the constant sequence $(1,1,\dots)$, then
$x\in\ell^\infty$ and $T_nx\not\to0$, so (i) fails.
\end{example}
One might reasonably argue that, to obtain a true weak* version of
Corollary~\ref{C:weak}, one should also replace the condition
$\sup_n\|T_n\|<\infty$ by one that is more closely tied to the weak* topologies on $X$ and $Y$.
A natural candidate is that the sequence $(T_n)$ be weak* equicontinuous, i.e., that for each weak* $0$-neighbourhood $V$ in $Y$, there exists a weak* $0$-neighbourhood $U$ in $X$
such that $T_n(U)\subset V$ for all $n$. With this change, it is true that (ii) implies (i) (for the weak* topologies). However, as the following example shows, we then lose the implication (i)$\Rightarrow$(ii).
\begin{example}\label{X:equicts}
Let $X=\ell^2$, with the usual $\ell^2$-norm, and let $Y=\mathbb{R}$.
Let $\pi_n:\ell^2\to\mathbb{R}$ be the $n$-th coordinate functional, let
$T_n:=\pi_n$ and let $T:=0$.
For each $x\in \ell^2$, we have $T_nx\to0$ in $\mathbb{R}$, so (i) holds.
However, if $U$ is any weak* $0$-neighbourhood in $\ell^2$,
then $U$ contains a non-zero subspace of $\ell^2$
(see e.g.\ \cite[p.66]{Ru91}),
and it follows easily that $T_n(U)=\mathbb{R}$ for all $n$.
We conclude that the sequence $(T_n)$ is not weak* equicontinuous, and so (ii) fails
in this setting.
\end{example}
In the article \cite{GMR22}, these difficulties were circumvented by exploiting
the structure of the particular operators involved.
But for general operators, the problem remains.
Our purpose in this article is to propose a solution,
by replacing the condition
$\sup_n\|T_n\|<\infty$ in Theorem~\ref{T:norm}
with an appropriate condition
so that the equivalence (i)$\iff$(ii) holds for weak* topologies,
and indeed for arbitrary topological vector spaces.
\section{Asymptotic equicontinuity}
Given a set $X$ and a sequence $(F_n)$ of subsets of $X$,
we write $\liminf_n F_n$ for the set of $x\in X$ that
belong to $F_n$ for all but finitely many $n$.
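For a finite truncation of such a sequence, this is easy to compute; the following Python sketch (ours, purely illustrative) implements the definition for a finite list of sets, implicitly treating the sequence as ending with its last listed set.
\begin{verbatim}
# liminf of a (truncated) sequence of sets: x belongs to every F_n
# from some index onwards.
def liminf(sets):
    universe = set().union(*sets)
    return {x for x in universe
            if any(all(x in F for F in sets[k:]) for k in range(len(sets)))}

print(liminf([{1, 2}, {2, 3}, {2, 3}, {2, 3}]))   # -> {2, 3}
\end{verbatim}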
\begin{defn}
Let $X,Y$ be topological vector spaces,
and let $(T_n)$ be a sequence of continuous linear maps from $X$ to $Y$.
We say that $(T_n)$ is \emph{asymptotically equicontinuous} if,
for each $0$-neighbourhood $V$ in $Y$, the set
$\liminf_nT_n^{-1}(V)$ is a $0$-neighbourhood in $X$.
\end{defn}
Let us spell this out explicitly: $(T_n)$ is asymptotically equicontinuous if,
for each $0$-neighbourhood $V$ in $Y$,
there exists a $0$-neighbourhood $U$ in $X$ such that, whenever $x\in U$, then $T_nx\in V$ for all large enough $n$.
Clearly, if $(T_n)$ is equicontinuous with respect to $X$ and $Y$,
then it is asymptotically equicontinuous.
The converse is true if $X$ and $Y$ are Banach spaces.
Indeed, in this case,
$(T_n)$ asymptotically equicontinuous implies that
$\sup_{n}\|T_nx\|<\infty$ for each $x\in X$,
which in turn implies that $\sup_{n}\|T_n\|<\infty$
by the Banach--Steinhaus theorem,
whence $(T_n)$ is equicontinuous.
However, in general, asymptotically equicontinuous does not imply equicontinuous.
For example, the sequence $(T_n)$ in Example~\ref{X:incomplete},
being unbounded in norm, is not equicontinuous.
However it is asymptotically equicontinuous: this follows from Theorem~\ref{T:main} below,
but it is also easy to verify directly.
We can now state our main result.
\begin{thm}\label{T:main}
Let $X$ and $Y$ be topological vector spaces,
let $(T_n)_{n\ge1}$ and $T$ be continuous linear maps from $X$ to $Y$,
and let $D$ be a dense subset of~$X$.
Then the following statements are equivalent:
\begin{enumerate}
\item[(i)] $T_n x\to Tx$ for all $x\in X$;
\item[(ii)] $T_n x\to Tx$ for all $x\in D$, and the sequence $(T_n)$
is asymptotically equicontinuous.
\end{enumerate}
\end{thm}
For the proof of Theorem~\ref{T:main}, we require a lemma.
\begin{lem}\label{L:main}
If $(S_n)$ and $(T_n)$ are asymptotically equicontinuous sequences
of linear maps from $X$ to $Y$, then so is $(S_n+T_n)$.
\end{lem}
\begin{proof}
Let $V$ be a $0$-neighbourhood in $Y$.
Let $W$ be a $0$-neighbourhood in $Y$ such that $W+W\subset V$.
As $(S_n)$ and $(T_n)$ are asymptotically equicontinuous sequences,
the sets
$U_1:=\liminf S_n^{-1}(W)$ and $U_2:=\liminf T_n^{-1}(W)$ are $0$-neighbourhoods in $X$.
Set $U:=U_1\cap U_2$.
Then $U$ is a $0$-neighbourhood in $X$ and, if $x\in U$,
then, for all large enough $n$, we have $S_nx\in W$ and $T_nx\in W$,
whence $(S_n+T_n)x\in W+W\subset V$.
This shows that $U\subset \liminf(S_n+T_n)^{-1}(V)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T:main}]
(i)$\Rightarrow$(ii): Suppose that $T_nx\to Tx$ for all $x\in X$.
Obviously this holds, in particular, for all $x\in D$.
Also, for each $0$-neighbourhood $V$ in $Y$,
we have $\liminf (T_n-T)^{-1}(V)=X$, simply by the definition of convergence of $T_nx$ to $Tx$.
Therefore the sequence $(T_n-T)$ is asymptotically equicontinuous.
Obviously the constant sequence $(T)$ is asymptotically equicontinuous,
so, by the lemma, $(T_n)$ is asymptotically equicontinuous.
\medskip
(ii)$\Rightarrow$(i): Suppose that the hypotheses in (ii) hold.
Set $R_n:=T_n-T$. Then $R_nx\to0$ for all $x\in D$, and by the lemma,
the sequence $(R_n)$ is asymptotically equicontinuous.
We need to show that $R_nx\to0$ for all $x\in X$.
Let $x\in X$ and $V$ be a $0$-neighbourhood in $Y$.
We shall prove that $R_n x\in V$ for all large enough $n$.
We may choose another $0$-neighbourhood $W$ in $Y$ such that $W-W\subset V$.
Since $(R_n)$ is asymptotically equicontinuous, $U:=\liminf R_n^{-1}(W)$
is a $0$-neighbourhood in~$X$.
Since $D$ is dense in $X$, there exists $x'\in D$ such that $x'\in x+U$.
Since $x'-x\in U$, there exists $N$ such that
\[
n\ge N\quad\Rightarrow\quad R_n(x'-x)\in W.
\]
Also, since $x'\in D$, we have $R_nx'\to0$, so there exists $N'$ such that
\[
n\ge N'\quad\Rightarrow\quad R_n x'\in W.
\]
Hence, finally,
\[
n\ge \max(N,N')\quad\Rightarrow\quad R_n x=R_n(x')-R_n(x'-x)\in W-W\subset V.
\]
This completes the proof.
\end{proof}
\section{Concluding remarks}
We have formulated the notion of asymptotic equicontinuity
for sequences of operators.
However, given that our main result, Theorem~\ref{T:main},
treats topological vector spaces that are not necessarily metrizable,
it would perhaps be more logical to define asymptotic equicontinuity for nets rather than sequences.
In this section, we discuss the (relatively minor) changes to the preceding section needed to achieve this.
Let $X$ be a set, and let $(F_\alpha)_{\alpha\in A}$ be a net
of subsets of $X$, i.e., a collection of subsets indexed by a directed set $A$. We write $\liminf_\alpha F_\alpha$ for the set of $x\in X$ with the following property: there exists $\alpha_0\in A$ (depending on $x$) such that
$x\in F_\alpha$ for all $\alpha\ge \alpha_0$.
\begin{defn}
Let $X,Y$ be topological vector spaces, and let $(T_\alpha)$
be a net of continuous linear maps from $X$ to $Y$. We say that
$(T_\alpha)$ is \emph{asymptotically equicontinuous} if,
for each $0$-neighbourhood $V$ in $Y$, the set
$\liminf_\alpha T_\alpha^{-1}(V)$ is a $0$-neighbourhood in $X$.
\end{defn}
The following results are the extensions of Lemma~\ref{L:main} and Theorem~\ref{T:main} to nets. The proofs are obtained by making the obvious modifications to the arguments for sequences. We omit the details.
\begin{lem}
If $(S_\alpha)$ and $(T_\alpha)$ are asymptotically equicontinuous nets
of linear maps from $X$ to $Y$, indexed by the same directed set, then $(S_\alpha+T_\alpha)$ is also an asymptotically equicontinuous net.
\end{lem}
\begin{thm}
Let $X$ and $Y$ be topological vector spaces,
let $(T_\alpha)$ be a net of continuous linear maps from $X$ to $Y$,
let $T:X\to Y$ be another continuous linear map,
and let $D$ be a dense subset of~$X$.
Then the following statements are equivalent:
\begin{enumerate}
\item[(i)] $T_\alpha x\to Tx$ for all $x\in X$;
\item[(ii)] $T_\alpha x\to Tx$ for all $x\in D$, and the net $(T_\alpha)$
is asymptotically equicontinuous.
\end{enumerate}
\end{thm}
\section{Introduction}
Pretraining ever-larger language models (LMs) on massive corpora has led to large
improvements in NLP
\cite[\emph{i.a.}]{radford2018improving,devlin2018bert,liu2019roberta,raffel2019exploring}. A standard approach is to replace the pretrained model's output layer with a task-specific head and finetune the entire model on a set of labeled training data. However, language modeling is not only a powerful pretraining objective, but many tasks can be reformulated as cloze questions (e.g., by appending phrases such as ``the correct answer is \mask{}''), allowing pretrained LMs to solve them without any or with only very few labeled examples \cite{radford2018language,schick2020exploiting}.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
cycle list name=color list,
xlabel={\sffamily\small Parameters (Millions)},
ylabel={\sffamily\small SuperGLUE Performance},
axis line style={decentgrey!95!black},
grid=major,
major grid style={line width=.2pt,draw=decentgrey},
ymin = 45,
ymax = 80,
xmin = 100,
xmax = 1000000,
xmode = log,
minor tick style={decentgrey!0},
major tick style={decentgrey},
log basis x={10},
xtick pos=left,
ytick pos=left,
ylabel near ticks,
xlabel near ticks,
xticklabels={$10^2$, $10^3$, $10^4$, $10^5$, $10^6$},
tick align=outside,
tick label style={font=\footnotesize},
major tick length=0.075cm,
width = \linewidth,
height = 0.23\textheight,
log ticks with fixed point,
x tick label style={/pgf/number format/1000 sep=\,},
]
\addplot[mark=*, c0, thick, mark options={solid}] coordinates {
(125,50.1)
(350,56.2)
(760,56.8)
(1300,60.0)
(2700,64.3)
(6700,63.6)
(13000,66.9)
(175000,73.2)
} node[right,pos=1,xshift=0.025cm]{\small\sffamily GPT-3};
\addplot[mark=*, c1, thick, mark options={solid}] coordinates {
(223,74.1)
} node[right,pos=1,xshift=0.025cm]{\small\sffamily PET};
\addplot[mark=*, c2, thick, mark options={solid}] coordinates {
(223,76.8)
} node[right,pos=1,xshift=0.025cm]{\small\sffamily iPET};
\end{axis}
\end{tikzpicture}
\caption{Performance on SuperGLUE with 32 training examples.
\textbf{ALBERT with \textsc{Pet}/i\textsc{Pet}{}
outperforms GPT\nobreakdash-3{} although it is much ``greener'' in that it
has three orders of magnitude
fewer parameters.}}
\label{figure:intro}
\end{figure}
Recently, \citet{brown2020language} introduced GPT\nobreakdash-3{}, a
pretrained LM with an enormous 175 billion parameters, and
showed that it has amazing few-shot abilities: By
reformulating tasks as LM problems, GPT\nobreakdash-3{} achieves near
state-of-the-art results for some
SuperGLUE \citep{wang2019superglue} tasks given just 32 labeled examples. This is achieved through \emph{priming}: GPT\nobreakdash-3{} is given a few demonstrations of inputs and corresponding outputs as context for its predictions, but no gradient updates are performed. While being straightforward to use, this method has two major drawbacks:
\begin{itemize}
\setlength\itemsep{0.1em}
\item It requires a gigantic LM to work well, making it \textbf{unusable in many real-world scenarios} and \textbf{resulting in a large carbon footprint} \citep{strubell-etal-2019-energy}.
\item It \textbf{does not scale to more than a few examples}
as the context window of most LMs
is limited to
a few hundred tokens.\footnote{While GPT\nobreakdash-3{} can process up to 2,048 tokens, this is still not enough to fit $\geq$32 examples for some SuperGLUE tasks.}
\end{itemize}
An alternative to priming is \emph{pattern-exploiting
training} (\textsc{Pet}{}) \citep{schick2020exploiting}, which
combines the idea of reformulating tasks as cloze questions
with regular gradient-based finetuning. While \textsc{Pet}{}
additionally requires unlabeled data, unlabeled data is much easier to obtain than labeled examples for many real-world applications. Crucially, \textsc{Pet}{} only works when the answers to be predicted by the LM correspond to a single token in its vocabulary; this is a severe limitation as many tasks cannot easily be worded that way.
In this work, we adapt \textsc{Pet}{} for tasks that require predicting multiple tokens. We then show that in combination with ALBERT \citep{lan2019albert}, \textsc{Pet}{} and its iterative variant (i\textsc{Pet}{}) both outperform GPT\nobreakdash-3{} on SuperGLUE with 32 training examples, while requiring only 0.1\% of its parameters (Figure~\ref{figure:intro}). Moreover, training with \textsc{Pet}{} can be performed in several hours on a single GPU without requiring expensive hyperparameter optimization. Finally, we show that similar performance can also be achieved without unlabeled data and provide a detailed analysis of the factors contributing to \textsc{Pet}{}'s strong performance: its ability to combine multiple task formulations, its resilience to wordings that are hard to understand, its usage of labeled data, and characteristics of the underlying LM.
Given \textsc{Pet}{}'s ``green'' properties,
we see our work as an important contribution to an
environmentally sound NLP.
\section{Related Work}
Enabling LMs to perform zero-shot learning by providing task descriptions was proposed by \citet{radford2018language} and has been applied to text classification \citep{puri2019zeroshot}, commonsense knowledge mining \citep{davison-etal-2019-commonsense} and argumentative relation classification \citep{opitz2019argumentative}. It is also commonly used for probing the knowledge contained within LMs \cite[\emph{i.a.}]{trinh2018simple,Petroni_2019,talmor2019olmpics,schick2019ota,ettinger2020bert}.
As finding ways to reformulate tasks as cloze questions that are understood well by LMs is difficult \cite{jiang2019know}, \citet{schick2020exploiting} propose \textsc{Pet}{}, a method that uses knowledge distillation \citep{hinton2015distilling} and self-training \citep[e.g.,][]{scudder1965probability,yarowsky-1995-unsupervised,brin1999extracting,mcclosky-etal-2006-effective} to easily combine several reformulations. Our modified version of \textsc{Pet}{} uses masked language models \citep{devlin2018bert} to assign probabilities to sequences of text; this is similar to using them in a generative fashion \cite{wang2019bert} and has previously been investigated by \citet{salazar2019masked} and \citet{ghazvininejad2019maskpredict}. In contrast to \textsc{Pet}{}, which uses gradient-based optimization, \citet{radford2018language} and \citet{brown2020language} investigate priming, where examples are given as context but no parameter updates are performed.
Finally, our focus on reducing the amount of compute required for few-shot learning is closely related to other efforts in Green AI \citep{schwartz2020green} that aim to improve model efficiency, including techniques for knowledge distillation \citep[e.g.,][]{hinton2015distilling,sanh2020distilbert,jiao-etal-2020-tinybert,mao-etal-2020-ladabert,anderson-gomez-rodriguez-2020-distilling}, pruning \citep{NIPS2015_ae0eb3ee,han2015deep_compression,NEURIPS2020_eae15aab} and quantization \citep{gong2014compressing,Zafrir2019Q8BERTQ8,stock2021training} as well as early exit strategies for inference \citep{liu-etal-2020-fastbert,schwartz-etal-2020-right,xin-etal-2020-early}.
\section{Pattern-Exploiting Training}
\label{section:pet}
Let $M$ be a masked language model (MLM),
$T$ its vocabulary and $\mask{} \in T$ the mask token; we denote
the set of all token sequences as $T^*$. For some $\mathbf{z} \in T^*$ containing at least $k$ masks and $t \in T$, we denote with $q_M^k(t \mid \textbf{z})$ the probability that $M$ assigns to $t$ at the $k$th masked position in $\textbf{z}$; the model's logits before applying softmax are denoted with $s_M^k(t \mid \textbf{z})$.
We consider the task of mapping inputs $x \in X$ to outputs $y \in Y$,
for which \textsc{Pet}{} requires a
set of \emph{pattern-verbalizer pairs} (PVPs). Each PVP $\mathbf{p} = (P, v)$ consists of
\begin{itemize}
\item a \emph{pattern} $P: X \rightarrow T^*$ that maps inputs to cloze questions containing a single mask;
\item a \emph{verbalizer} $v: Y \rightarrow T$ that
maps each output to a single token representing
its task-specific meaning in the pattern.
\end{itemize}
\begin{figure}
\tikzset{
every node/.style={
outer sep=0, text height=1.5ex, text depth=0.25ex
},
input/.style={
draw=c0, rounded corners, line width=2pt
},
pattern/.style={
draw=c1, rounded corners, line width=2pt
},
label/.style={
font=\sffamily\small, rounded corners, inner ysep=0.12cm, inner xsep=0.2cm, outer xsep=0.1cm, text=darkgrey, line width=1pt
},
arrow/.style={
draw=darkgrey,->,>=latex
},
}
\centering
\begin{tikzpicture}
\path[input] node[partialbox, font=\sffamily\small, fill=c0!10, outer sep=0, inner sep=0.15cm, thick, align=center](input-x1) {Oil prices rise};
\node[font=\sffamily\small, right=0.05cm of input-x1, inner sep=0, outer sep=0](pattern-text-1){?\vphantom{pt}};
\node[font=\sffamily\small, right=0.05cm of pattern-text-1, inner sep=0, outer ysep=0.1cm](pattern-text-2){\mask{}\vphantom{pt}};
\node[font=\sffamily\small, right=0.05cm of pattern-text-2, inner sep=0, outer ysep=0.1cm](pattern-text-3){,\ \vphantom{pt}};
\path[input] node[partialbox, font=\sffamily\small, fill=c0!10, outer sep=0, inner sep=0.15cm, thick, align=center, right=0.05cm of pattern-text-3](input-x2) {\textsf{Oil prices fall back}};
\node[font=\sffamily\small, right=0.05cm of input-x2, inner sep=0, outer ysep=0.1cm](pattern-text-3){.\vphantom{pt}};
\node[below=0.025cm of input-x1.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c0](input-label){ ${x}_2$};
\node[below=0.025cm of input-x2.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c0](input-label){ ${x}_1$};
\begin{pgfonlayer}{bg}
\path[pattern] node[partialbox=13pt, fit=(input-x1)(pattern-text-1)(pattern-text-2)(pattern-text-3)(input-x2), fill=c1!10, inner ysep=0.25cm, inner xsep=0.25cm](pattern){};
\node[below=0.025cm of pattern.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c1](pattern-label){ $P({x})$};
\end{pgfonlayer}
\node[label, below=0.1cm of pattern.south east, anchor=north east, minimum width=0.9cm, xshift=0.1cm](verbalizer-e){Yes};
\node[label, below=0cm of verbalizer-e.south west, anchor=north west, text=black, minimum width=0.9cm, fill=c1!10, draw=c1](verbalizer-c){No};
\node[label, left=0.2cm of verbalizer-e](label-e){entailment};
\node[label, left=0.2cm of verbalizer-c, text=black, fill=c0!10, draw=c0](label-c){not\_entailment};
\path[] (label-e) edge[arrow] (verbalizer-e);
\path[] (label-c) edge[arrow, draw=black] (verbalizer-c);
\node[below=0.1cm of label-c, text=c0, inner sep=0](y-label){$y\vphantom{v()}$};
\node[below=0.1cm of verbalizer-c, text=c1, inner sep=0](y-label){$v(y)$};
\draw [black!75, dotted, thick, rounded corners, ->, >=latex] (verbalizer-c.east)--([xshift=0.2cm]verbalizer-c.east)--([xshift=0.2cm, yshift=2.55cm]verbalizer-c.east) -- ([yshift=2.55cm]verbalizer-c.east -| pattern-text-2.center) node [midway, fill=white] {$q_{\mathbf{p}}(y \mid {x})$} -- (pattern-text-2.north);
\end{tikzpicture}
\caption{Application of a PVP ${\mathbf{p} = (P,v)}$ for recognizing textual entailment: An input ${{x} = ({x}_1, {x}_2)}$ is converted into a cloze question $P({x})$; $q_\mathbf{p}(y \mid {x})$ for each $y$ is derived from the probability of $v(y)$ being a plausible choice for the masked position.}
\label{figure:pet}
\end{figure}
As illustrated in Figure~\ref{figure:pet}, the core idea of \textsc{Pet}{} is to derive the probability of $y$ being the correct output for ${x}$ from the probability of $v(y)$ being the ``correct'' token at the masked position in $P({x})$.
Based on this intuition, a conditional probability distribution $q_\mathbf{p}$ of $y$ given
$x$
is defined as
\begin{equation}
q_\mathbf{p}(y \mid {x}) = \frac{\exp s_\mathbf{p}(y \mid x)}{ \sum_{y' \in Y} \exp s_\mathbf{p}(y' \mid x)} \label{eq:q_p}
\end{equation}
where $s_\mathbf{p}(y \mid x) = s_M^1(v(y) \mid P(x))$ is the raw score of $v(y)$ at the masked position in $P(x)$.
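Concretely, Eq.~\ref{eq:q_p} is just a softmax over the verbalizers' raw scores. A minimal Python sketch with invented numbers (the logits below are placeholders for the MLM's actual scores at the masked position):
\begin{verbatim}
import numpy as np

# s_p(y | x): raw MLM score of v(y) at the mask in P(x) -- made up here.
raw = {"entailment": 2.1, "not_entailment": 0.3}
z = np.array(list(raw.values()))
q_p = np.exp(z) / np.exp(z).sum()       # softmax over the label set Y
print(dict(zip(raw, q_p.round(3))))
# -> {'entailment': 0.858, 'not_entailment': 0.142}
\end{verbatim}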
For a given task, identifying PVPs that perform well is challenging in the absence of a large development set. Therefore, \textsc{Pet}{} enables a combination of multiple PVPs $\mathbf{P} = \{ \mathbf{p}_1, \ldots, \mathbf{p}_n \}$ as follows:
\begin{enumerate}
\item For each PVP $\mathbf{p}$, a MLM is finetuned on training examples $(x, y)$ by minimizing the cross entropy between $y$ and $q_\mathbf{p}(y \mid x)$. In practice, \citet{schick2020exploiting} train three MLMs per pattern as performance can vary substantially between runs.
\item The ensemble of finetuned MLMs is used to annotate a set of unlabeled examples; each unlabeled example $x \in X$ is annotated with soft labels based on the probability distribution
\begin{equation}
q_{\mathbf{P}}(y \mid x) \propto \exp \sum_{\mathbf{p} \in \mathbf{P}} w_\mathbf{p} \cdot s_\mathbf{p}(y \mid x) \label{eq:q_P}
\end{equation}
similar to Eq.~\ref{eq:q_p} where $w_\mathbf{p}$ is a weighting term that is proportional to the accuracy achieved with $\mathbf{p}$ on the training set \emph{before} training.
\item The resulting soft-labeled dataset is used to train a regular sequence classifier by minimizing cross entropy between its output and $q_\mathbf{P}$.
\end{enumerate}
As steps (2) and (3) above closely resemble knowledge distillation \citep{hinton2015distilling}, we also refer to them simply as \emph{distillation}.
Importantly, this process does not require holding the entire ensemble of MLMs in memory at the same time as each model's predictions can be computed sequentially; therefore, it is not more memory expensive than using a single model.
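The combination step in Eq.~\ref{eq:q_P} is likewise a weighted softmax. A small Python sketch with made-up scores and weights (three PVPs, two labels; $w_\mathbf{p}$ is taken proportional to each PVP's training-set accuracy before training):
\begin{verbatim}
import numpy as np

w = np.array([0.6, 0.5, 0.9])          # accuracies of the three PVPs
s = np.array([[1.2, -0.3],             # s_p(y | x), invented numbers
              [0.4,  0.8],
              [2.0,  0.1]])
logits = (w[:, None] * s).sum(axis=0)  # weighted sum over the PVPs
q_P = np.exp(logits) / np.exp(logits).sum()
print(q_P.round(3))                    # soft label for distillation
# -> [0.918 0.082]
\end{verbatim}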
To give MLMs trained on different patterns further opportunity to learn from one another, \citet{schick2020exploiting} also propose i\textsc{Pet}{}, an iterative variant of \textsc{Pet}{} in which several generations of models are trained on datasets of increasing size that are labeled by previous generations.
This is achieved as follows: First, an ensemble of MLMs is trained as in regular \textsc{Pet}{}. For each model $M_i$, a random subset of other models is used to generate a new training set $T_i$ by assigning labels to those unlabeled examples for which the selected subset of models is most confident in its prediction. Each $M_i$ is then retrained on $T_i$; this process is repeated several times, each time increasing the number of examples in $T_i$ by a constant factor. For further details, we refer to \citet{schick2020exploiting}.
\subsection{\textsc{Pet}{} with Multiple Masks}
\label{section:pet-mm}
An important limitation of \textsc{Pet}{} is that the verbalizer $v$ must map each output to a \emph{single} token, which is impossible for many tasks. We thus generalize verbalizers to functions $v: Y \rightarrow T^*$; this requires some modifications to inference and training.\footnote{While \textsc{Pet}{} can easily be adapted to generative MLMs \citep[e.g.,][]{lewis2019bart,raffel2019exploring}, we stick with regular MLMs as they are more lightweight and performed better on simple cloze tasks in preliminary experiments.}
We further generalize \textsc{Pet}{} in that we do not assume the output space to be identical for each input: for each $x \in X$, we denote with $Y_x \subseteq Y$ the set of possible outputs given $x$ as input. Given a PVP $\mathbf{p} = (P, v)$, we define $l(x) = \max_{y \in Y_x}|v(y)|$ to be the maximum number of tokens required to express any output in $Y_x$ and $P^k(x)$ to be $P(x)$ with the mask token replaced by $k$ masks.
As a running example, we consider the task of binary sentiment classification for restaurant reviews with labels $Y = \{+1, -1\}$. We use the pattern
$
P(x) = \inlinepattern{$x$\textsf{\small{. It was \mask{} .}}}
$
and a verbalizer $v$ that maps $+1$ to the single token \textsf{\small great} and $-1$ to the sequence \textsf{\small terri} \textsf{\small $\sbullet$ble}, i.e., we assume that the MLM's tokenizer splits the word ``terrible'' into the two tokens \textsf{\small terri} and \textsf{\small $\sbullet$ble}. For this example, $l(x) = 2$ for all $x$; $P^2(x)$ is illustrated in Figure~\ref{figure:pet-mm} (a).
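Inference is then performed over $P^{l(x)}(x)$; the sketch below (our Python illustration with invented probabilities, following the most-confident-first scheme suggested by Figure~\ref{figure:pet-mm}) commits one token at a time, re-encodes, and multiplies the corresponding probabilities. It is only one plausible decoding scheme consistent with the figure, not a specification of the exact procedure.
\begin{verbatim}
def score(tokens, target, mlm_probs, mask="<mask>"):
    # Fill the most confident masked position first, then re-encode.
    tokens, total = list(tokens), 1.0
    while mask in tokens:
        probs = mlm_probs(tokens)      # {masked pos: P(target token)}
        pos = max(probs, key=probs.get)
        total *= probs[pos]
        tokens[pos] = target[pos]
    return total

def fake_mlm(tokens):                  # stand-in for a real MLM
    if tokens[4] == "<mask>" and tokens[5] == "<mask>":
        return {4: 0.2, 5: 0.7}        # "ble" is the more confident slot
    return {4: 0.8}                    # P("terri") once "ble" is fixed

toks = ["Awful", "pizza!", "It", "was", "<mask>", "<mask>", "."]
print(score(toks, {4: "terri", 5: "ble"}, fake_mlm))  # 0.7 * 0.8 = 0.56
\end{verbatim}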
\begin{figure}
\tikzset{
every node/.style={
outer sep=0, text height=1.5ex, text depth=0.25ex
},
input/.style={
draw=c0, rounded corners, line width=2pt
},
pattern/.style={
draw=c1, rounded corners, line width=2pt
},
label/.style={
font=\sffamily\small, rounded corners, inner ysep=0.12cm, inner xsep=0.2cm, outer xsep=0.1cm, text=darkgrey, line width=1pt
},
arrow/.style={
draw=black!75,->,>=latex, dotted, thick
},
}
\centering
\begin{tikzpicture}
\path[input] node[partialbox, font=\sffamily\small, fill=c0!10, outer sep=0, inner sep=0.15cm, thick, align=center](input-x1) {Awful pizza!};
\node[font=\sffamily\small, right=0.15cm of input-x1, inner sep=0, outer sep=0](pattern-text-1){It was \vphantom{pt}};
\node[font=\sffamily\small, right=0.15cm of pattern-text-1, inner sep=0, outer ysep=0.1cm](pattern-text-2){\mask{}\vphantom{pt}};
\node[font=\sffamily\small, right=0.15cm of pattern-text-2, inner sep=0, outer ysep=0.1cm](pattern-text-3){\mask{}\vphantom{pt}\negphantom{\mask{}}\phantom{$\sbullet$ble}};
\node[font=\sffamily\small, right=0.05cm of pattern-text-3, inner sep=0, outer ysep=0.1cm](pattern-text-4){.\vphantom{pt}};
\node[below=0.025cm of input-x1.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c0](input-label){${x}$};
\node[below=0.7cm of pattern-text-2, xshift=-1cm, inner xsep=0, outer xsep=0](terri){$q_M^1(\text{\sffamily\small terri}\,{\mid}\,\mathbf{z})$};
\node[right=0.1cm of terri, inner xsep=0, outer xsep=0](less){\textcolor{c1}{$\pmb{<}$}};
\node[right=0.1cm of less, inner xsep=0, outer xsep=0](ble){$q_M^2(\sbullet\text{\sffamily\small ble}\,{\mid}\, \mathbf{z})$};
\path[] (terri.north) edge[bend right=20, dotted, thick, black!75, ->, >=latex] node [left] {} (pattern-text-2.south);
\path[] (ble.north) edge[bend left=10, dotted, thick, black!75, ->, >=latex] node [left] {} (pattern-text-3.south);
\begin{pgfonlayer}{bg}
\path[pattern] node[partialbox=17pt, fit=(input-x1)(pattern-text-1)(pattern-text-2)(pattern-text-3)(pattern-text-4), fill=c1!10, inner ysep=0.25cm, inner xsep=0.25cm](pattern){};
\node[below=0.025cm of pattern.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c1](pattern-label){ $P^2({x})$};
\end{pgfonlayer}
\node[left=0cm of pattern](){(a)$\quad\mathbf{z}\phantom{'}\,{=}$};
\path[input] node[below=1.8cm of input-x1, partialbox, font=\sffamily\small, fill=c0!10, outer sep=0, inner sep=0.15cm, thick, align=center](input-x1b) {Awful pizza!};
\node[font=\sffamily\small, right=0.15cm of input-x1b, inner sep=0, outer sep=0](pattern-text-1b){It was \vphantom{pt}};
\node[font=\sffamily\small, right=0.15cm of pattern-text-1b, inner sep=0, outer ysep=0.1cm](pattern-text-2b){\mask{}\vphantom{pt}};
\node[font=\sffamily\small, right=0.15cm of pattern-text-2b, inner sep=0, outer ysep=0.1cm](pattern-text-3b){$\sbullet$ble\vphantom{pt}};
\node[font=\sffamily\small, right=0.05cm of pattern-text-3b, inner sep=0, outer ysep=0.1cm](pattern-text-4b){.\vphantom{pt}};
\node[below=0.025cm of input-x1b.south, anchor=center, outer sep=0cm, inner sep=0cm, text=c0](input-labelb){${x}$};
\node[below=0.7cm of pattern-text-2b](terrib){$q_M^1(\text{\sffamily\small terri}\,{\mid}\,\mathbf{z}')$};
\draw [black!75, dotted, thick, rounded corners, ->, >=latex] (terrib)--(pattern-text-2b.south);
\begin{pgfonlayer}{bg}
\path[pattern] node[partialbox=0pt, fit=(input-x1b)(pattern-text-1b)(pattern-text-2b)(pattern-text-3b)(pattern-text-4b), fill=c1!10, inner ysep=0.25cm, inner xsep=0.25cm](patternb){};
\end{pgfonlayer}
\node[left=0cm of patternb](){(b)$\quad\mathbf{z}'\,{=}$};
\end{tikzpicture}
\caption{Inference for a verbalization consisting of the two tokens \textsf{\small{terri}} and $\sbullet$\textsf{\small{ble}}. (a) We first compute the probability of each token at its position in the cloze question $P^2(x)$ and identify the token with the highest probability. (b) We insert this token into the cloze question and compute the probability of the remaining token.}
\label{figure:pet-mm}
\end{figure}
\paragraph{Inference}
For $x \in X$, $y \in Y_x$ and $|v(y)|=k$, we redefine $q_\mathbf{p}(y \mid x)$ in an autoregressive fashion: Starting from $P^k(x)$, we perform $k$ consecutive predictions, where we always select the next token to predict based on the MLM's confidence. That is, we set $q_\mathbf{p}(y \mid x) = q(v(y) \mid P^k(x))$ where
\begin{equation}
q(t_1 \mathinner{{\ldotp}{\ldotp}{\ldotp}} t_k {\mid} \mathbf{z}) =
\begin{cases}
1&\hskip-5pt\text{if } k\,{=}\,0 \\
q_M^j(t_j {\mid} \mathbf{z})\,{\cdot}\,q(t' {\mid} \mathbf{z}')&\hskip-5pt \text{if } k\,{\geq}\,1
\end{cases}
\label{eq:q-multimask}
\end{equation}
where $j = \argmax_{i=1}^k q_M^i(t_i \mid \mathbf{z})$, $\mathbf{z}'$ is $\mathbf{z}$ except that $\mathbf{z}'_j = t_j$, and $t' = t_1\mathinner{{\ldotp}{\ldotp}{\ldotp}} t_{j-1} t_{j+1} \mathinner{{\ldotp}{\ldotp}{\ldotp}} t_k$. Note that unlike in original \textsc{Pet}{} (Eq.~\ref{eq:q_p}), $q_\mathbf{p}$ is not a probability distribution as its values do not sum to one.
For our sentiment classification example, Figure~\ref{figure:pet-mm} illustrates how $q_\mathbf{p}(-1 \mid x)$ is computed: As $|v(y)| = |\{ \textsf{\small{terri}}, \sbullet\textsf{\small{ble}} \}| = 2$, we first use $\mathbf{z} = P^2(x)$ to compute the probability of each token in $v(y)$ (Figure~\ref{figure:pet-mm}a). We then choose the token with the highest probability, put it in place of the corresponding mask token, and use the resulting cloze question $\mathbf{z'}$ to compute the probability of the remaining token (Figure~\ref{figure:pet-mm}b). The overall score for $y = -1$ is then computed as \[
q_\mathbf{p}(-1 \mid x ) = q_M^2(\sbullet\textsf{\small{ble}} \mid \mathbf{z}) \cdot q_M^1(\textsf{\small{terri}} \mid \mathbf{z}')
\]
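A minimal recursive implementation of this max-first decoding rule might look as follows (Python; \texttt{token\_prob} is a hypothetical callable wrapping the MLM and \texttt{MASK} a placeholder token id, both introduced for illustration):
\begin{verbatim}
MASK = -1  # placeholder id marking an unfilled mask position

def replace_jth_mask(z, j, t):
    """Return a copy of z with its j-th mask token replaced by token t."""
    out, seen = list(z), 0
    for pos, tok in enumerate(out):
        if tok == MASK:
            if seen == j:
                out[pos] = t
                return out
            seen += 1
    raise ValueError("z contains fewer than j + 1 masks")

def q(tokens, z, token_prob):
    """Max-first score of the remaining target tokens t_1 ... t_k.

    token_prob(z)[i][t] is q_M^i(t | z), the probability of token t
    at the i-th mask position of the cloze question z.
    """
    if not tokens:                    # k = 0
        return 1.0
    probs = token_prob(z)
    j = max(range(len(tokens)), key=lambda i: probs[i][tokens[i]])
    z_next = replace_jth_mask(z, j, tokens[j])
    rest = tokens[:j] + tokens[j + 1:]
    return probs[j][tokens[j]] * q(rest, z_next, token_prob)
\end{verbatim}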
\paragraph{Training}
Computing $q_\mathbf{p}(y \mid x)$ as in Eq.~\ref{eq:q-multimask} for each training example $(x, y)$ would be prohibitively expensive. To enable computation of all required probabilities in a single forward pass, we approximate $q_\mathbf{p}(y \mid x)$ by (i) always inserting the maximum number of mask tokens required to express any output and (ii) for each $y' \in Y_x$, predicting all tokens in $v(y') = t_1 \ldots t_k$ in parallel, where we simply ignore the model's predictions for all $l(x)-k$ superfluous mask tokens:
\begin{equation}
\tilde{q}_\mathbf{p}(y' \mid x) = \prod_{i=1}^{k} q_M^i(t_i \mid P^{l(x)}(x)) \label{eq:q-tilde}
\end{equation}
For our running example, this means we approximate the scores $q_\mathbf{p}(y \mid x)$ by computing
\begin{align*}
\tilde{q}_\mathbf{p}(+1 \mid x) & = q_M^1 (\textsf{\small{great}} \mid \mathbf{z}) \\
\tilde{q}_\mathbf{p}(-1 \mid x) & = q_M^1 (\textsf{\small{terri}} \mid \mathbf{z}) \cdot q_M^2(\sbullet\textsf{\small{ble}} \mid \mathbf{z})
\end{align*}
which can be done in a single forward pass as it only requires processing the cloze question $\mathbf{z} = P^2(x)$ shown in Figure~\ref{figure:pet-mm} (a) once.
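In code, this approximation reduces to a single gather over the mask positions. A minimal PyTorch sketch, assuming the logits at the $l(x)$ mask positions have already been extracted from one forward pass:
\begin{verbatim}
import torch

def q_tilde(mask_logits, verbalization_ids):
    """mask_logits:       shape (l, vocab_size), MLM predictions at the
                          l = l(x) mask positions of P^l(x)
    verbalization_ids:    the k <= l token ids t_1 ... t_k of v(y')"""
    log_probs = torch.log_softmax(mask_logits, dim=-1)
    ids = torch.tensor(verbalization_ids).unsqueeze(1)    # shape (k, 1)
    picked = log_probs[:ids.size(0)].gather(1, ids)       # q_M^i(t_i | .)
    # predictions at the l - k superfluous masks are simply ignored
    return picked.sum().exp()
\end{verbatim}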
As $\tilde{q}_\mathbf{p}$ is not a probability distribution over $Y_x$, cross entropy is not an ideal training objective as it can also be minimized by reducing the probability assigned to sequences $\mathbf{z} \notin v(Y_x)$ that are not part of the output space, despite this having no effect on the model's prediction.
We instead opt for multi-class hinge loss \citep{weston1999support,dogan2016unified} and minimize:
\begin{equation}
\sum_{y' \in Y_x} \text{max}\left(0;1 {-} \log \tilde{q}_\mathbf{p}(y {\mid} x){+}\log \tilde{q}_\mathbf{p}(y' {\mid} x)\right)
\end{equation}
That is, we require the difference between the log probability of $y$ and the log probability of any output $y' \in Y_x \setminus \{ y \}$ to be at least $1$.
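A minimal PyTorch sketch of this objective (the $y'{=}y$ term of the sum is dropped, as it is constant and carries no gradient):
\begin{verbatim}
import torch

def multiclass_hinge(log_q, gold):
    """log_q: shape (|Y_x|,), log_q[y'] = log q~_p(y'|x); gold: index of y."""
    margins = torch.clamp(1.0 - log_q[gold] + log_q, min=0.0)
    mask = torch.ones_like(margins)
    mask[gold] = 0.0          # drop the constant y' = y term
    return (margins * mask).sum()
\end{verbatim}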
\section{Experiments}
\label{section:experiments}
We compare \textsc{Pet}{} and GPT\nobreakdash-3{}
on SuperGLUE \cite{wang2019superglue}, a natural language understanding benchmark consisting of eight challenging tasks.
We cannot evaluate \textsc{Pet}{} using the exact same training data as GPT\nobreakdash-3{} because for most tasks, GPT\nobreakdash-3{} uses a different set of training examples for each test example and for the other tasks, training sets were not available upon request; however, the exact choice of examples has little impact on GPT\nobreakdash-3{}'s performance.\footnote{Based on personal correspondence with the authors.} We thus create new training sets by randomly selecting 32 examples for each task using a fixed random seed.
We additionally create sets of up to 20,000 unlabeled examples for each task; this is done by removing all labels from the original training sets. We refer to the resulting sets of training examples and unlabeled examples as \emph{FewGLUE}.\footnote{FewGLUE is publicly available at \url{https://github.com/timoschick/fewglue}.}
\subsection{Tasks}
\label{subsection:tasks}
Below, we describe each of the SuperGLUE tasks and our corresponding PVPs. We use a vertical bar ($|$) to mark
boundaries between text segments. Of the eight tasks considered, only COPA, WSC and ReCoRD require the use of \textsc{Pet}{} with multiple masks as introduced in Section~\ref{section:pet-mm}. \\[-0.5em]
\noindent \textbf{BoolQ} \citep{clark2019boolq} is a QA task where each example consists of a passage $p$ and a yes/no question $q$. We use the following patterns:
\begin{itemize}[topsep=0.5em]
\setlength\itemsep{-0.1em}
\item \pattern{$p$\textsf{\small. Question: }$q$\textsf{\small? Answer: \mask{}.}}
\item \pattern{$p$\textsf{\small. Based on the previous passage, }$q$\textsf{\small? \mask{}.}}
\item \pattern{\textsf{\small Based on the following passage, }$q$\textsf{\small? \mask{}. }$p$}
\end{itemize}
We define two verbalizers mapping questions containing a true statement to \textsf{\small yes}/\textsf{\small true} and others to \textsf{\small no}/\textsf{\small false}, respectively, for a total of 6 PVPs. \\[-0.5em]
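As an illustration of how PVPs can be represented programmatically, the first of these patterns with the yes/no verbalizer might be encoded as follows (a simplified sketch, not our actual codebase):
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, List

MASK = "[MASK]"

@dataclass
class PVP:
    """pattern builds the cloze question; verbalizer maps each label
    to a sequence of tokens."""
    pattern: Callable[[str, str], str]
    verbalizer: Callable[[bool], List[str]]

boolq_pvp = PVP(
    pattern=lambda p, q: f"{p}. Question: {q}? Answer: {MASK}.",
    verbalizer=lambda label: ["yes"] if label else ["no"],
)
\end{verbatim}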
{
\setlength{\abovedisplayskip}{0.5em}
\setlength{\belowdisplayskip}{0.5em}
\noindent \textbf{CB} \citep{demarneffe:cb} and \textbf{RTE} \citep{dagan2006pascal} are textual entailment tasks
like MNLI, so we use PVPs similar to
\citet{schick2020exploiting}. For a premise $p$ and
hypothesis $h$, we use
\[
\pattern{$h$\textsf{\small?$\,|\,$\mask{}, }$p$},
\pattern{\textsf{\small``}$h$\textsf{\small''?$\,|\,$\mask{}, ``}$p$\textsf{\small''}},
\pattern{$h$\textsf{\small?$\,|\,$\mask{}. }$p$},
\pattern{\textsf{\small``}$h$\textsf{\small''?$\,|\,$\mask{}. ``}$p$\textsf{\small''}}
\]
and a verbalizer that maps entailment to \textsf{\small yes}, disagreement to \textsf{\small no} and neutral to \textsf{\small maybe}.\\[-0.5em]
\noindent Given a premise $p$, the task in \textbf{COPA} \citep{roemmele2011choice} is to determine the \emph{cause} or \emph{effect} of the premise given two options $c_1$ and $c_2$. For determining the \emph{effect}, we use the following patterns:
\[
\pattern{\textsf{\small``}$c_1$\textsf{\small'' or ``}$c_2$\textsf{\small''? }$p$\textsf{\small, so \mask{}.}},
\pattern{$c_1$\textsf{\small\ or }$c_2$\textsf{\small? }$p$\textsf{\small, so \mask{}.}}
\]
For determining the \emph{cause}, we use the same patterns but replace \textsf{\small so} with \textsf{\small because}. The verbalizer for $c_1$ and $c_2$ is the identity function.}\\[-0.5em]
\begin{table*}
\small
\centering
\setlength\tabcolsep{0.6em}
\begin{tabularx}{\linewidth}{lXrccccccccc}
\toprul
& & \fontseries{b}\selectfont Params & \fontseries{b}\selectfont BoolQ & \fontseries{b}\selectfont CB & \fontseries{b}\selectfont COPA & \fontseries{b}\selectfont RTE & \fontseries{b}\selectfont WiC & \fontseries{b}\selectfont WSC & \fontseries{b}\selectfont MultiRC & \fontseries{b}\selectfont ReCoRD & \fontseries{b}\selectfont Avg \\
& \fontseries{b}\selectfont Model & (M) & Acc. & Acc. / F1 & Acc. & Acc. & Acc. & Acc. & EM / F1a & Acc. / F1 & -- \\
\midrul
\multirow{10}{*}{\rotatebox[origin=c]{90}{dev}}
& GPT\nobreakdash-3{} Small & 125 & 43.1 & 42.9 / 26.1 & 67.0 & 52.3 & 49.8 & 58.7 & \pzero6.1 / 45.0 & 69.8 / 70.7 & 50.1 \\
& GPT\nobreakdash-3{} Med & 350 & 60.6 & 58.9 / 40.4 & 64.0 & 48.4 & 55.0 & 60.6 & 11.8 / 55.9 & 77.2 / 77.9 & 56.2 \\
& GPT\nobreakdash-3{} Large & 760 & 62.0 & 53.6 / 32.6 & 72.0 & 46.9 & 53.0 & 54.8 & 16.8 / 64.2 & 81.3 / 82.1 & 56.8 \\
& GPT\nobreakdash-3{} XL & 1,300 & 64.1 & 69.6 / 48.3 & 77.0 & 50.9 & 53.0 & 49.0 & 20.8 / 65.4 & 83.1 / 84.0 & 60.0 \\
& GPT\nobreakdash-3{} 2.7B & 2,700 & 70.3 & 67.9 / 45.7 & 83.0 & 56.3 & 51.6 & 62.5 & 24.7 / 69.5 & 86.6 / 87.5 & 64.3 \\
& GPT\nobreakdash-3{} 6.7B & 6,700 & 70.0 & 60.7 / 44.6 & 83.0 & 49.5 & 53.1 & 67.3 & 23.8 / 66.4 & 87.9 / 88.8 & 63.6 \\
& GPT\nobreakdash-3{} 13B & 13,000 & 70.2 & 66.1 / 46.0 & 86.0 & 60.6 & 51.1 & 75.0 & 25.0 / 69.3 & 88.9 / 89.8 & 66.9 \\
& GPT\nobreakdash-3{} & 175,000 & 77.5 & 82.1 / 57.2 & 92.0 & 72.9 & \fontseries{b}\selectfont 55.3 & 75.0 & 32.5 / 74.8 & {\fontseries{b}\selectfont 89.0} / \fontseries{b}\selectfont 90.1 & 73.2 \\
& \textsc{Pet}{} & 223 & 79.4 & 85.1 / 59.4 & \fontseries{b}\selectfont 95.0 & 69.8 & 52.4 & \fontseries{b}\selectfont 80.1 & {\fontseries{b}\selectfont 37.9} / \fontseries{b}\selectfont 77.3 & 86.0 / 86.5 & 74.1 \\
& i\textsc{Pet}{} & 223 & \fontseries{b}\selectfont 80.6 & {\fontseries{b}\selectfont 92.9} / \fontseries{b}\selectfont 92.4 & \fontseries{b}\selectfont 95.0 & \fontseries{b}\selectfont 74.0 & 52.2 & \fontseries{b}\selectfont 80.1 & 33.0 / 74.0 & 86.0 / 86.5 & \fontseries{b}\selectfont 76.8 \\
\midrul
\multirow{4}{*}{\rotatebox[origin=c]{90}{test}}
& GPT\nobreakdash-3{} & 175,000 & 76.4 & 75.6 / 52.0 & \fontseries{b}\selectfont 92.0 & 69.0 & 49.4 & 80.1 & 30.5 / 75.4 & {\fontseries{b}\selectfont 90.2} / \fontseries{b}\selectfont 91.1 & 71.8 \\
& \textsc{Pet}{} & 223 & 79.1 & 87.2 / 60.2 & 90.8 & 67.2 & \fontseries{b}\selectfont 50.7 & \fontseries{b}\selectfont 88.4 & {\fontseries{b}\selectfont 36.4} / \fontseries{b}\selectfont 76.6 & 85.4 / 85.9 & 74.0 \\
& i\textsc{Pet}{} & 223 & \fontseries{b}\selectfont 81.2 & {\fontseries{b}\selectfont 88.8} / \fontseries{b}\selectfont 79.9 & 90.8 & \fontseries{b}\selectfont 70.8 & 49.3 & \fontseries{b}\selectfont 88.4 & 31.7 / 74.1 & 85.4 / 85.9 & \fontseries{b}\selectfont 75.4 \\
& SotA & 11,000 & \fontshape{it}\selectfont 91.2 & {\fontshape{it}\selectfont 93.9} / \fontshape{it}\selectfont 96.8 & \fontshape{it}\selectfont 94.8 & \fontshape{it}\selectfont 92.5 & \fontshape{it}\selectfont 76.9 & \fontshape{it}\selectfont 93.8 & {\fontshape{it}\selectfont 88.1} / \fontshape{it}\selectfont 63.3 & {\fontshape{it}\selectfont 94.1} / \fontshape{it}\selectfont 93.4 & \fontshape{it}\selectfont 89.3 \\
\bottomrul
\end{tabularx}
\caption{Results on SuperGLUE for GPT\nobreakdash-3{} primed with
32 randomly selected examples and for \textsc{Pet}{} /
i\textsc{Pet}{} with ALBERT-xxlarge-v2 after training on
FewGLUE. State-of-the-art results when using the
regular, full size training sets for all tasks \citep{raffel2019exploring} are shown in italics.
}
\label{table:main_results}
\end{table*}
\noindent For \textbf{WiC} \citep{pilehvar2018wic}, given a word $w$ and two sentences $s_1$ and $s_2$ in which it occurs, the task is to decide if $w$ is used with the same sense in both sentences. We use:
\begin{itemize}[topsep=0.5em]
\setlength\itemsep{-0.1em}
\item \pattern{\textsf{\small``}$s_1$\textsf{\small'' / ``}$s_2$\textsf{\small''. Similar sense of ``}$w$\textsf{\small''? \mask{}.}}
\item
\begin{multipattern}
$s_1\ s_2$\textsf{\small\ Does }$w$\textsf{\small\ have the same meaning in both sentences? \mask{}}
\end{multipattern}
\item \pattern{$w$\textsf{\small. Sense (1) (a) ``}$s_1$\textsf{\small'' (\mask{}) ``}$s_2$\textsf{\small''}}
\end{itemize}
For the first two patterns, we use \textsf{\small yes} as verbalization for words used in the same sense and \textsf{\small no} for other words; for the third pattern, we use \textsf{\small b} and \textsf{\small 2}.\\[-0.5em]
\noindent For \textbf{WSC} \citep{levesque2011winograd}, each example consists of a sentence $s$ with a marked pronoun $p$
and noun $n$, and the task is to determine whether $p$ refers to $n$. We follow \citet{raffel2019exploring} and \citet{brown2020language} and treat WSC as a generative task. We highlight $p$ in $s$ by putting it in asterisks and use the following patterns:
\begin{itemize}[topsep=0.5em]
\setlength\itemsep{-0.1em}
\item \pattern{$s$\textsf{\small\ The pronoun `}$*p*$\textsf{\small' refers to \mask{}.}}
\item
\begin{multipattern}
$s$\textsf{\small\ In the previous sentence, the pronoun `}$*p*$\textsf{\small' refers to \mask{}.}
\end{multipattern}
\item
\begin{multipattern}
$s$\textsf{\small\ In the passage above, what does the pronoun `}$*p*$\textsf{\small' refer to? Answer: \mask{}.}
\end{multipattern}
\end{itemize}
We use the identity function as verbalizer for $n$. Note that WSC is different from other tasks in that it requires free-form completion. This in turn requires some modifications during training and inference that are discussed in Appendix~\ref{appendix:training_details}.\\[-0.5em]
\noindent \textbf{MultiRC} \citep{khashabi2018looking} is a QA task. Given a passage $p$, a question $q$ and an answer candidate $a$, the task is to decide whether $a$ is a correct answer for $q$. We use the same verbalizer as for BoolQ and similar patterns:
\begin{itemize}[topsep=0.5em]
\setlength\itemsep{-0.1em}
\item \pattern{$p$\textsf{\small. Question: }$q$\textsf{\small? Is it }$a$\textsf{\small? \mask{}.}}
\item \pattern{$p$\textsf{\small. Question: }$q$\textsf{\small? Is the correct answer ``}$a$\textsf{\small''? \mask{}.}}
\item
\begin{multipattern}
$p$\textsf{\small. Based on the previous passage, }$q$\textsf{\small? Is ``}$a$\textsf{\small'' a correct answer? \mask{}.}
\end{multipattern}
\end{itemize} \vspace{0.5em}
\noindent For \textbf{ReCoRD} \citep{zhang2018record}, given a passage $p$ and a cloze question $q$, the task is to decide which of a given set of answer candidates is the correct replacement for the placeholder in the cloze question. As this task is already presented in the form of a cloze question, there is little room for designing PVPs, so we only use a trivial one: the concatenation of $p$ and $q$ as pattern and the identity function as verbalizer. With only one PVP, there is no need to perform knowledge distillation so we directly use the resulting model as our final classifier.
\subsection{Setup}
As underlying LM for \textsc{Pet}{} we choose
ALBERT-xxlarge-v2 \citep{lan2019albert}, the best-performing
MLM on SuperGLUE when training is performed on the regular,
full size training sets. We use the same model, supplemented by a sequence classification head, as our final classifier. We run
\textsc{Pet}{} on the FewGLUE training sets for all SuperGLUE tasks. We do not use any development set to optimize hyperparameters; instead we use the exact same setup and hyperparameters as
\citet{schick2020exploiting}. For COPA, WSC and ReCoRD, we
use our proposed modification of \textsc{Pet}{} to support
verbalizers mapping labels to multiple tokens; for all other
tasks, we use regular \textsc{Pet}. We train i\textsc{Pet}{} on all
tasks except COPA and WSC, as their unlabeled sets contain
well below 1,000 examples, as well as ReCoRD, for which i\textsc{Pet}{} makes no sense as we only use a single PVP. For these three tasks, we simply reuse the results of regular \textsc{Pet}{}.
\subsection{Results}
Our main results are shown in
Table~\ref{table:main_results}. As can be seen, ALBERT with
\textsc{Pet}{} performs similarly to the largest GPT\nobreakdash-3{} model, which
is larger by a factor of 785. On average, \textsc{Pet}{} performs 18
points better than GPT\nobreakdash-3{} Med, a model of similar size. i\textsc{Pet}{} brings further improvements on 3 of the 5 tasks for which it is used, most notably for CB, but results in a slight performance drop for MultiRC.
Despite \textsc{Pet}{}'s strong performance, it still clearly performs worse than a state-of-the-art model trained on the regular, full size SuperGLUE training set.
\section{Analysis}
We investigate the importance of several factors for
few-shot performance: the choice of patterns and
verbalizers, the usage of both unlabeled and labeled data,
and properties of the underlying language model. We also
look into our proposed modification for \textsc{Pet}{} to work with
multiple masks and compare it to various baselines. Finally,
we measure how choosing different sets of training examples
affects performance. Our analysis focuses on \textsc{Pet}{}
as GPT\nobreakdash-3{} is not publicly available.\footnote{We
could not obtain access to OpenAI's GPT\nobreakdash-3{} API.}
\subsection{Patterns}
The way in which tasks are reformulated as cloze questions can have a huge impact on performance \citep{jiang2019know,schick2020exploiting}. These reformulations can be arbitrarily complex; for example, the pattern used by GPT\nobreakdash-3{} for WSC contains an introductory section of almost 30 words; it is unclear if and how this formulation has been optimized.\footnote{While the authors use a different terminology, GPT\nobreakdash-3{} also makes use of PVPs \citep[pp.~50--61]{brown2020language}.} To investigate the importance of patterns and verbalizers, we compare three sets of PVPs: our initial set as defined in Section~\ref{subsection:tasks} (denoted $\mathbf{p}_\text{ours}$), the single PVP used by GPT\nobreakdash-3{} ($\mathbf{p}_\text{GPT\nobreakdash-3{}}$), and the combination of both ($\mathbf{p}_\text{comb}$).
We train ALBERT using \textsc{Pet}{} with all three sets of patterns; results for selected SuperGLUE tasks are shown in Table~\ref{table:pattern_results} (top). As can be seen, the PVP used by GPT\nobreakdash-3{} outperforms our PVPs on RTE whereas our initial set of patterns performs much better on MultiRC.
These large differences in performance highlight the importance of finding good ways to express tasks as cloze questions. As it is difficult to ascertain which patterns perform well without trying them on a large set of examples, a key challenge for few-shot approaches is to compensate for PVPs that the
LM fails to understand well. As seen in the performance
of the model trained with $\mathbf{p}_\text{comb}$, \textsc{Pet}{}
is able to do so: not only does combining all PVPs
compensate for the worse performance of
$\mathbf{p}_\text{ours}$ on RTE and of
$\mathbf{p}_\text{GPT\nobreakdash-3{}}$ on MultiRC, it even further
improves average performance across the three tasks
compared to the best-performing set of patterns. This clearly demonstrates the potential of carefully engineering a set of suitable patterns as opposed to just choosing a single formulation without means of evaluating its effectiveness.
\begin{table}
\small
\centering
\setlength\tabcolsep{0.5em}
\begin{tabularx}{\linewidth}{Xcccc}
\toprul
& \fontseries{b}\selectfont CB & \fontseries{b}\selectfont RTE & \fontseries{b}\selectfont MultiRC & \fontseries{b}\selectfont Avg \\
\fontseries{b}\selectfont Model & Acc. / F1 & Acc. & EM / F1a & -- \\
\midrul
\textsc{Pet}{} ($\mathbf{p}_\text{ours}$) & {\fontseries{b}\selectfont 85.1} / 59.4 & 69.8 & 37.9 / 77.3 & 66.6 \\
\textsc{Pet}{} ($\mathbf{p}_\text{GPT\nobreakdash-3{}}$) & 83.3 / 58.1 & 71.8 & 25.4 / 68.3 & 63.1 \\
\textsc{Pet}{} ($\mathbf{p}_\text{comb}$) & 84.5 / 59.0 & \fontseries{b}\selectfont 74.7 & 39.1 / \fontseries{b}\selectfont 77.7 & 68.3 \\\midrule
\textsc{Pet}{} ($\mathbf{p}_\text{ours}$) $\neg$dist & 83.9 / \fontseries{b}\selectfont 76.2 & 66.4 & 38.9 / 76.2 & 68.0 \\
\textsc{Pet}{} ($\mathbf{p}_\text{comb}$) $\neg$dist & 83.9 / \fontseries{b}\selectfont 76.2 & 72.9 & {\fontseries{b}\selectfont 39.6} / 76.6 & \fontseries{b}\selectfont 70.4 \\
\bottomrul
\end{tabularx}
\caption{Results on selected tasks for various sets of PVPs for regular \textsc{Pet}{} and for an ensemble of \textsc{Pet}{} models with no knowledge distillation (``$\neg$dist'')}
\label{table:pattern_results}
\end{table}
\subsection{Unlabeled Data Usage}
Unlike GPT\nobreakdash-3{}, \textsc{Pet}{} requires unlabeled data to distill the knowledge of all models based on individual PVPs into a single classifier; for i\textsc{Pet}{}, unlabeled data is additionally used to generate training sets for future generations. The underlying assumption is that unlabeled data can easily be obtained, which may not always be the case in real-world settings. We thus investigate the importance of unlabeled data for regular \textsc{Pet}{}. To this end, we compare the performance of the final classifier in \textsc{Pet}{} to that of directly using the ensemble of models corresponding to individual PVPs. While using this ensemble entirely removes the need for unlabeled data, the ensemble for $k$ PVPs is larger than the distilled model by a factor of $3\cdot k$ as we follow the default setting of \textsc{Pet}{} and train three models per PVP. However, even for a large number of PVPs the ensemble is smaller than GPT\nobreakdash-3{} by two orders of magnitude.
\begin{figure}
\begin{tikzpicture}
\begin{axis}[
cycle list name=color list,
xlabel={\sffamily\small iPET Generation},
ylabel={\sffamily\small Task Performance},
axis line style={decentgrey!95!black},
major grid style={line width=.2pt,draw=decentgrey},
enlarge x limits={0.05},
enlarge y limits={0.075},
ymin = 58,
ymax = 92,
xmin = 1,
xmax = 4,
minor tick style={decentgrey!0},
major tick style={decentgrey},
xtick pos=left,
ytick pos=left,
ylabel near ticks,
xlabel near ticks,
xtick={1,2,3,4},
xticklabels={$1$, $2$, $3$, dist.},
tick align=outside,
tick label style={font=\footnotesize},
major tick length=0.075cm,
width = \linewidth,
height = 0.23\textheight,
log ticks with fixed point,
x tick label style={/pgf/number format/1000 sep=\,},
legend style={draw=none, fill=white!75, at={(0.99,0.01)},anchor=south east, font=\sffamily\scriptsize},
legend cell align=left,
legend columns=2,
]
\addplot[mark=*, c0, thick, mark options={solid}] coordinates {
(1,73.1)
(2,77.0)
(3,78.8)
};
\addlegendentry{BoolQ}
\addplot[mark=*, forget plot, c0, thick, dotted, mark options={solid}] coordinates {
(3,78.8)
(4,80.6)
};
\addplot[name path=boolq-top, opacity=0, forget plot] coordinates {
(1,75.8)
(2,78.5)
(3,80.0)
(4,80.8)
};
\addplot[name path=boolq-down, opacity=0, forget plot] coordinates {
(1,70.4)
(2,75.5)
(3,77.6)
(4,80.4)
};
\addplot[c0,fill opacity=0.2, forget plot] fill between[of=boolq-top and boolq-down];
\addplot[mark=triangle*, c1, thick, mark options={solid}] coordinates {
(1,81.4)
(2,83.9)
(3,84.1)
};
\addlegendentry{CB (Acc)}
\addplot[mark=triangle*, forget plot, c1, thick, mark options={solid}, dotted] coordinates {
(3,84.1)
(4,92.9)
};
\addplot[name path=cb-top, opacity=0, forget plot] coordinates {
(1,86.1)
(2,89.0)
(3,87.5)
(4,92.9)
};
\addplot[name path=cb-down, opacity=0, forget plot] coordinates {
(1,76.7)
(2,78.8)
(3,80.7)
(4,92.9)
};
\addplot[c1,fill opacity=0.2, forget plot] fill between[of=cb-top and cb-down];
\addplot[mark=square*, c2, thick, mark options={solid}] coordinates {
(1,61.6)
(2,69.6)
(3,71.0)
};
\addlegendentry{RTE}
\addplot[mark=square*, c2, thick, dotted, forget plot, mark options={solid}] coordinates {
(3,71.0)
(4,74.0)
};
\addplot[name path=rte-top, opacity=0, forget plot] coordinates {
(1,65.5)
(2,71.5)
(3,73.3)
(4,74.2)
};
\addplot[name path=rte-down, opacity=0, forget plot] coordinates {
(1,57.7)
(2,67.7)
(3,68.7)
(4,73.8)
};
\addplot[c2,fill opacity=0.2, forget plot] fill between[of=rte-top and rte-down];
\addplot[mark=pentagon*, c4, thick, mark options={solid}] coordinates {
(1,74.5)
(2,75.3)
(3,75.9)
};
\addlegendentry{MultiRC (F1a)}
\addplot[mark=pentagon*, c4, thick, dotted, forget plot, mark options={solid}] coordinates {
(3,75.9)
(4,74.0)
};
\addplot[name path=multirc-top, opacity=0, forget plot] coordinates {
(1,75.33)
(2,75.93)
(3,76.64)
(4,74.2)
};
\addplot[name path=multirc-down, opacity=0, forget plot] coordinates {
(1,73.6)
(2,74.7)
(3,75.2)
(4,73.8)
};
\addplot[c4,fill opacity=0.2, forget plot] fill between[of=multirc-top and multirc-down];
\end{axis}
\end{tikzpicture}
\caption{Average performance ($\pm$ standard deviation) of all MLMs trained on individual patterns for three generations and of the distilled classifier (``dist.'') across three individual training runs}
\label{figure:ipet}
\end{figure}
Results without distillation can be seen in
Table~\ref{table:pattern_results} (bottom). Averaged across
the three tasks, the ensemble performs even better
than the distilled classifier. This shows that
if the goal is only to achieve good performance,
then unlabeled data is not necessary; however, it is required to obtain a single, lightweight model as final classifier.
Figure~\ref{figure:ipet} illustrates the benefit of training multiple generations with i\textsc{Pet}{}. For all tasks except MultiRC, there are substantial improvements from the first to the second generation, whereas the third generation achieves only slight additional improvements. On average, standard deviation is reduced in later generations, illustrating that the models learn from each other and their predictions converge. The final distillation step brings further improvements for all tasks except MultiRC and reduces standard deviation across three training runs to almost zero, illustrating that \textsc{Pet}{} and i\textsc{Pet}{} are effective means of reducing finetuning instability \citep{dodge2020finetuning}.
Of course, there are further ways to leverage unlabeled data such as keeping an auxiliary language modeling objective during finetuning \cite{chronopoulou-etal-2019-embarrassingly}. While we leave investigating the impact of additionally using such methods to future work, we note that they can easily be applied to \textsc{Pet}{} while there is no straightforward way to combine them with priming.
\subsection{Labeled Data Usage}
\label{subsection:labeled_data_usage}
\begin{table}
\small
\centering
\setlength\tabcolsep{0.5em}
\begin{tabularx}{\linewidth}{Xcccc}
\toprul
& \fontseries{b}\selectfont CB & \fontseries{b}\selectfont RTE & \fontseries{b}\selectfont MultiRC & \fontseries{b}\selectfont Avg \\
\fontseries{b}\selectfont Model & Acc. / F1 & Acc. & EM / F1a & -- \\
\midrul
\textsc{Pet}{} & {\fontseries{b}\selectfont 85.1} / \fontseries{b}\selectfont 59.4 & \fontseries{b}\selectfont 69.8 & {\fontseries{b}\selectfont 37.9} / \fontseries{b}\selectfont 77.3 & \fontseries{b}\selectfont 66.6 \\
unsupervised & 33.5 / 23.1 & 55.0 & \pzero3.9 / 60.3 & 38.5 \\
supervised & 60.7 / 42.5 & 50.2 & \pzero4.3 / 49.8 & 43.0 \\
\midrul
\textsc{Pet}{} (XLNet) & {\fontseries{b}\selectfont 88.7} / \fontseries{b}\selectfont 83.0 & \fontseries{b}\selectfont 60.4 & {\fontseries{b}\selectfont 21.4} / \fontseries{b}\selectfont 66.6 & \fontseries{b}\selectfont 63.4 \\
Priming (XLNet) & 56.3 / 37.7 & 49.5 & \phantom{0}--\phantom{0} / \phantom{0}--\phantom{0} & -- \\
\bottomrul
\end{tabularx}
\caption{Results on selected tasks for various ways of using the labeled examples available in FewGLUE}
\label{table:labeled_results}
\end{table}
We next investigate the effect of how labeled data is used, which is one of the key differences between priming and \textsc{Pet}{}. We first compare \textsc{Pet}{} with regular supervised training (i.e., without using any patterns), and with a fully unsupervised model (i.e., an ensemble using all PVPs but no labeled training examples). Given 32 examples, \textsc{Pet}{} clearly outperforms both baselines (Table~\ref{table:labeled_results}).
We next compare \textsc{Pet}{} directly to priming. However, we
cannot do so using ALBERT as it
is only able to process sequences of up to 512 tokens, which
is not enough for a set of 32 examples;
we instead use XLNet
\citep{NIPS2019_8812} for this comparison. As shown in
Table~\ref{table:labeled_results}, XLNet in general performs
worse than ALBERT. More importantly, XLNet with \textsc{Pet}{}
performs much better than priming.
We were not able to
obtain results with priming on MultiRC because the 32
examples in FewGLUE would require more than 10,000 tokens, so
processing them with a standard Transformer
\citep{Vaswani2017} is infeasible due to the quadratic
complexity of self-attention. This highlights another
important issue with priming: It does not scale well to more
than a few examples; even GPT\nobreakdash-3{} is only able to process
sequences of up to 2,048 tokens. While there are some
Transformer variants that can deal with much longer
contexts
\citep[e.g.,][]{kitaev2020reformer,beltagy2020longformer}, it
has yet to be investigated to what extent such models make good use of priming examples over
long context spans.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[
width=0.92\linewidth,
height=0.23\textheight,
view={0}{90},
colorbar,
colormap = {blackwhite}{color(0cm) = (c1);color(0.5cm) = (white);color(1cm) = (c0)},
colorbar style={
yticklabel style={
/pgf/number format/.cd,
fixed,
precision=1,
fixed zerofill,
},
width=0.2cm,
ytick={-30, -20, -10, 0, 10, 20, 30},
yticklabels={-30, -20, -10,$\pm$0, +10, +20, +30}
},
enlargelimits=false,
axis on top,
point meta min=-33,
point meta max=33,
ymin=-0.5,
ymax=8.5,
xmin=-0.5,
xmax=10.5,
xtick={0,1,2,3,4,5,6,7,8,9,10},
xticklabels={BoolQ, CB Acc, CB F1, COPA, RTE, WiC, WSC, MultiRC EM, MultiRC F1a, ReCoRD Acc, ReCoRD F1},
xticklabel style={rotate=35, anchor=east},
yticklabels={\textsc{Pet}{}, 175B, 13B, 6.7B, 2.7B, XL, Large, Med, Small},
ytick={0,1,2,3,4,5,6,7,8,9,10},
xtick pos=left,
ytick pos=left,
ylabel near ticks,
xlabel near ticks,
tick align=outside,
major tick length=0.075cm,
tick label style={font=\sffamily\scriptsize}
]
\addplot[matrix plot*,point meta=explicit] table [x=task, y=model, meta=val, col sep=comma] {heatmap.csv};
\addplot[black, thick, mark options={solid}] coordinates {
(-1,0.5)
(15,0.5)
};
\end{axis}
\end{tikzpicture}
\caption{Accuracy differences between priming with 32 examples and one-shot priming for all GPT\nobreakdash-3{} models as well as between ALBERT with \textsc{Pet}{} (without distillation) and unsupervised ALBERT (bottom row)}
\label{figure:heatmap}
\end{figure}
We further investigate the effectiveness of priming by
looking at results obtained with GPT\nobreakdash-3{} more closely.
To this end, Figure~\ref{figure:heatmap} shows the performance difference
between priming GPT\nobreakdash-3{} with 32 examples and priming it with just a single
example for each task and model size.\footnote{
We do not compare priming to zero-shot performance as for unknown reasons,
zero-shot GPT\nobreakdash-3{} performs well below
random guessing for some tasks (e.g., 0.0\% accuracy for WiC).
To not overestimate the benefit of priming%
, we therefore
show gains from providing 32 examples compared
to just one.} As can be seen, priming with 32
examples only slightly improves performance for most tasks and model sizes. For some tasks, adding more examples even
leads to worse performance, especially for smaller models. For ReCoRD, even the largest model's performance slightly drops when adding more examples.
The bottom row of Figure~\ref{figure:heatmap} shows the performance difference between ALBERT trained with \textsc{Pet}{} (without distillation) and a fully unsupervised ALBERT model on all tasks. While results are not directly comparable due to different underlying models and PVPs, \textsc{Pet}{} results in much stronger performance improvements compared to priming and does not worsen results for any task.
\subsection{Model Type}
We next look into the impact of the underlying LM on
\textsc{Pet}{} by comparing ALBERT with RoBERTa large \citep{liu2019roberta} and GPT-2 medium \citep{radford2018language}. As GPT-2 is a unidirectional model similar to GPT\nobreakdash-3{}, it can only process patterns where the mask token is the very last token. We therefore use $\mathbf{p}_\text{GPT\nobreakdash-3{}}$ for CB and RTE; for MultiRC, we stick with our original set of patterns as they already fulfill this requirement. We also do not perform distillation and instead report the ensemble's performance as there is no established way of equipping GPT-2 with a sequence classification head.
\begin{table}
\small
\centering
\setlength\tabcolsep{0.4em}
\begin{tabularx}{\linewidth}{Xccccc}
\toprul
& & \fontseries{b}\selectfont CB & \fontseries{b}\selectfont RTE & \fontseries{b}\selectfont MultiRC & \fontseries{b}\selectfont Avg \\
\fontseries{b}\selectfont Model & \fontseries{b}\selectfont Params & Acc. / F1 & Acc. & EM / F1a & -- \\
\midrul
ALBERT & 223M & {\fontseries{b}\selectfont 87.5} / \fontseries{b}\selectfont 78.7 & \fontseries{b}\selectfont 74.7 & {\fontseries{b}\selectfont 38.9} / \fontseries{b}\selectfont 76.2 & \fontseries{b}\selectfont 71.8 \\
RoBERTa & 355M & 85.7 / 77.5 & 62.8 & 23.3 / 70.0 & 63.7 \\
GPT-2 & 345M & 73.2 / 73.7 & 47.7 & 12.4 / 57.4 & 52.0 \\
\bottomrul
\end{tabularx}
\caption{Results on selected tasks for \textsc{Pet}{} without knowledge distillation combined with various LMs using $\mathbf{p}_\text{GPT\nobreakdash-3{}}$ for CB/RTE and $\mathbf{p}_\text{ours}$ for MultiRC}
\label{table:model_type_results}
\end{table}
Results for training all three LMs with \textsc{Pet}{} in
Table~\ref{table:model_type_results} show that using ALBERT
as underlying LM is crucial for \textsc{Pet}{}'s strong performance;
exchanging ALBERT with RoBERTa results in an average
performance drop of 8 points. However, RoBERTa still clearly
outperforms GPT\nobreakdash-3{} 13B, which is larger by two orders of
magnitude. Importantly, \textsc{Pet}{} with GPT-2 performs much
worse than with the two other models.
As anticipated by
\citet{brown2020language}, a reason for this drop in
performance may be that like GPT\nobreakdash-3{}, GPT-2 is
unidirectional,
making tasks that require
comparing two sequences a
challenge. However, it is important to note that there are
also other substantial differences between GPT-2 and the
other two models, most notably the pretraining
dataset. Regardless of whether
unidirectionality is
the reason for GPT-2's bad performance, bidirectionality of
the underlying LM is important for \textsc{Pet}{} as it removes the
need for the mask token to be at the very end and thus allows for more flexibility in the creation of patterns.
\subsection{\textsc{Pet}{} with Multiple Masks}
We modified \textsc{Pet}{} to work for outputs that require more
than a single token.
To investigate the impact of this modification, we look at
the three tasks for which this is required: COPA, WSC and
ReCoRD. We compare our decoding strategy of predicting tokens in order of the probability assigned to them, to which we refer as \emph{max-first}, with two
alternatives: decoding left-to-right (ltr) as is common for
many autoregressive language models, and decoding all tokens
simultaneously (parallel) as is done during
training. Additionally, we compare \textsc{Pet}{} with
untrained ALBERT
to measure the effectiveness of our proposed training loss.
Results are shown in
Table~\ref{table:multimask_results}. \textsc{Pet}{} clearly
outperforms untrained ALBERT for the three tasks. Not performing distillation hurts performance for COPA, but leads to slight improvements on WSC; for ReCoRD, we did not perform distillation in the first place as we only use a single PVP.
Our decoding strategy is clearly superior to parallel decoding except for WSC, for which most predictions consist only of one or two tokens, and performs slightly better than left-to-right decoding.
\begin{table}
\small
\centering
\setlength\tabcolsep{0.45em}
\begin{tabularx}{\linewidth}{Xcccc}
\toprul
& \fontseries{b}\selectfont COPA & \fontseries{b}\selectfont WSC & \fontseries{b}\selectfont ReCoRD & \fontseries{b}\selectfont Avg \\
\fontseries{b}\selectfont Model & Acc. & Acc. & Acc. / F1 & -- \\
\midrul
\textsc{Pet}{} & \fontseries{b}\selectfont 95.0 & 80.1 & {\fontseries{b}\selectfont 86.0} / \fontseries{b}\selectfont 86.5 & \fontseries{b}\selectfont 87.1 \\
\textsc{Pet}{} $\neg$dist (max-first) & 90.0 & \fontseries{b}\selectfont 80.8 & {\fontseries{b}\selectfont 86.0} / \fontseries{b}\selectfont 86.5 & 85.7 \\
\textsc{Pet}{} $\neg$dist (ltr) & 89.0 & 79.8 & 84.7 / 85.3 & 84.6 \\
\textsc{Pet}{} $\neg$dist (parallel) & 77.0 & \fontseries{b}\selectfont 80.8 & 82.5 / 83.1 & 80.2 \\
untrained & 72.5 & 59.9 & 84.7 / 85.4 & 72.5 \\
\bottomrul
\end{tabularx}
\caption{Results on selected tasks for
our proposed
variant of \textsc{Pet}{} as well as other
decoding strategies and for
untrained ALBERT}
\label{table:multimask_results}
\end{table}
\subsection{Training Examples}
Recall that we conduct our experiments with training examples from
FewGLUE, a randomly selected subset of the original SuperGLUE training examples. We used
a fixed random seed $s_0$ to generate FewGLUE. Let $\Sigma_i$ be
the randomly selected subset of SuperGLUE for random seed
$s_i$, so $\Sigma_0 =$ FewGLUE. In this subsection, we create two additional subsets
of SuperGLUE, $\Sigma_1$ and $\Sigma_2$, based on different seeds.
This allows us to investigate how
different sets of training
examples affect performance.
To this end,
we run \textsc{Pet}{}
for CB, RTE and MultiRC
using the three
$\Sigma_i$. To
measure only the effect of varying the training set while
ignoring unlabeled examples, we do not use distillation.
Table~\ref{table:seed_results} shows that for all tasks, changing the set of training examples can result in large performance differences for \textsc{Pet}{}. This highlights the importance of using the same set of examples when comparing different few-shot approaches, which is why we make the particular set of examples in FewGLUE publicly available. However, we note that the average performance of \textsc{Pet}{} is similar to that of GPT\nobreakdash-3{} for all seeds.
While our results may seem contrary to the insight that for GPT\nobreakdash-3{}, the exact choice of examples does not play a major role, we suspect this to be due to the fact that priming benefits much less from training examples than \textsc{Pet}{} (cf. Section~\ref{subsection:labeled_data_usage}); accordingly, the influence of the exact set of training examples on the model's performance is smaller.
\begin{table}
\small
\centering
\setlength\tabcolsep{0.55em}
\begin{tabularx}{\linewidth}{Xcccc}
\toprul
& \fontseries{b}\selectfont CB & \fontseries{b}\selectfont RTE & \fontseries{b}\selectfont MultiRC & \fontseries{b}\selectfont Avg \\
\fontseries{b}\selectfont Model & Acc. / F1 & Acc. & EM / F1a & -- \\
\midrul
GPT\nobreakdash-3{} & 82.1 / 57.2 & \fontseries{b}\selectfont 72.9 & 32.5 / 74.8 & 65.4 \\
\textsc{Pet}{} $\neg$dist ($\Sigma_0$) & 83.9 / 76.2 & 66.4 & 38.9 / 76.2 & \fontseries{b}\selectfont 68.0 \\
\textsc{Pet}{} $\neg$dist ($\Sigma_1$) & 82.1 / 57.4 & 61.4 & {\fontseries{b}\selectfont 39.2} / \fontseries{b}\selectfont 77.9 & 63.2 \\
\textsc{Pet}{} $\neg$dist ($\Sigma_2$) & {\fontseries{b}\selectfont 87.5} / \fontseries{b}\selectfont 84.0 & 61.4 & 34.7 / 76.3 & 67.6 \\
\bottomrul
\end{tabularx}
\caption{Results on selected tasks for GPT\nobreakdash-3{} and
for \textsc{Pet}{} using training sets $\Sigma_0$,
$\Sigma_1$,
$\Sigma_2$}
\label{table:seed_results}
\end{table}
\section{Conclusion}
We have proposed a simple yet effective modification of \textsc{Pet}{}, enabling us to use it for tasks that require predicting multiple tokens. In extensive experiments, we have identified several factors responsible for the strong performance of \textsc{Pet}{} combined with ALBERT: the possibility to concurrently use multiple patterns for transforming examples into cloze questions, the ability to compensate for patterns that are difficult to understand, the usage of labeled data to perform parameter updates%
, and the underlying LM itself.
We have shown that using \textsc{Pet}{}, it is possible to achieve few-shot text classification
performance similar to GPT\nobreakdash-3{} on SuperGLUE with LMs that
have three orders of magnitude fewer
parameters. This not only lowers financial cost, but above all reduces environmental impact immensely and leads to a much smaller carbon footprint. We see this as an important contribution to
achieving the goal of
an environmentally more friendly NLP.
To enable comparisons with our work, we make our code, models and datasets publicly available.
For future work, it would be interesting to see whether \textsc{Pet}{} also works for generative tasks when combined with generative LMs and whether further improvements are possible in multi-task settings.
\paragraph*{Acknowledgments}
This work was funded by the European Research Council (ERC \#740516).
We thank the anonymous reviewers
for their helpful comments.
\section{Overview}
The production of prompt photons at hadron colliders provides a means of testing perturbative QCD predictions with a colorless probe of the hard scattering process. The dominant production mechanism of
single photons in $pp$ collisions at the Large Hadron Collider (LHC)
energies is $qg\to{}q\gamma$, while the production of di-photon final
states mainly occurs through quark-antiquark annihilation,
$q\bar{q}\to\gamma\gamma$, and gluon-gluon interaction
$gg\to\gamma\gamma$ mediated by a quark box diagram. In both single
and di-photon final states, parton fragmentation into photons also contributes.
Because of the main production mechanism, the measurement of the
inclusive photon cross section at the LHC can constrain the gluon
density in protons. The study of the distribution of the azimuthal
separation between the two photons in di-photon events can provide
insight into the fragmentation model, while for balanced back-to-back
di-photons the di-photon cross section is sensitive to soft gluon
emission, which is not accurately described by fixed-order
perturbation theory. Di-photon production is also an irreducible
background for some new physics processes, such as the Higgs decay
into photon pairs.
We present here two measurements of the inclusive isolated prompt
photon production cross section as a function of the photon transverse
energy $\ETg$, using $pp$ collision data collected in 2010 with the
ATLAS detector \cite{ATLAS} at the LHC at a center-of-mass energy of 7
TeV. The former is based on an integrated luminosity $\int {\mathscr
L} dt$~=~(0.88 $\pm$ 0.1) pb$^{-1}$
\cite{InclusivePhotons2010_880nb}, and provides a measurement of the
cross section for 15 $\leq\ETg<$ 100 GeV in the photon pseudorapidity
$\eta$ intervals [0,0.6), [0.6,1.37) and [1.52,1.81). The latter uses
the full 2010 data sample $\int {\mathscr L} dt$~=~(36.4 $\pm$ 1.2)
pb$^{-1}$ \cite{InclusivePhotons2010_35pb}, covering the 40
$\leq\ETg<$ 400 GeV $\ETg$ range and extending to the [1.81,2.37)
pseudorapidity region.
We also present the measurement of the inclusive di-photon cross
section as a function of the di-photon invariant mass $\mgg$, the
di-photon system momentum $\ptgg$ and the azimuthal separation between
the two photons $\dphigg$, using an integrated luminosity $\int
{\mathscr L} dt$~=~(36.0 $\pm$ 0.1) pb$^{-1}$ \cite{DiPhotons2010}.
\section{Photon selection, reconstruction and identification in ATLAS}
Single photon events are triggered in ATLAS using a high-level trigger
with a nominal transverse energy threshold of 10 GeV
\cite{InclusivePhotons2010_880nb} or 40 GeV
\cite{InclusivePhotons2010_35pb}; di-photon events are triggered by
two photon candidates having $\ETg>$ 15 GeV
\cite{DiPhotons2010}. Measured with respect to unbiased or lower-threshold reference triggers, these triggers are found to be fully efficient for photons and di-photons passing the selection criteria of the analyses.
Photon candidates depositing their energy in the ATLAS Liquid Argon
(LAr) electromagnetic calorimeter (EMC) in the regions $|\eta|<$1.37
and 1.52$\leq|\eta|<$2.37 are reconstructed. The photons converting in
$e^+e^-$ pairs before reaching the EMC (about 30\% in the samples
under study) are separated from electrons by associating the
reconstructed tracks and conversion vertices to the EMC energy
deposit. The overall photon reconstruction efficiency is about 85\%
(75\%) for $|\eta|<$1.37 (1.52$\leq|\eta|<$2.37), the main losses
being due to nonoperational LAr EMC readout modules during the 2010
data taking. In the inclusive photon analyses photon candidates are
required to have reconstructed $\ETg$ larger than 15 GeV
\cite{InclusivePhotons2010_880nb} and 45 GeV
\cite{InclusivePhotons2010_35pb}; in the di-photon measurement both
photon candidates in an event must have $\ETg>$ 16 GeV.
Background from non-prompt photons originating from decays of leading
neutral mesons inside jets (e.g. $\pi^0$) is suppressed by means of
selections on the electromagnetic shower shapes, and of a requirement
on the photon isolation in the EMC. Photon candidates must pass tight
identification criteria based on nine discriminating variables
computed from the lateral and longitudinal profiles of the energy
deposited in the EMC, and in the hadronic calorimeter behind it. The
first LAr EMC layer is finely segmented so as to allow the resolution
of two maxima in the energy deposit, typical of the superposition of
two photons from a neutral meson decay. The efficiency of these
selections ranges from $\sim$ 60\% to $\sim$ 90\% for increasing
$\ETg$.
The photon transverse isolation energy $\ETiso$ is computed from the
sum of the energies in the LAr EMC cells in a cone of radius 0.4 in
the $\eta$--$\phi$ plane around the photon candidate axis. The
contribution to $\ETiso$ from the photon itself is subtracted, as well
as the energy from the soft-jet activity from the underlying event and
from event pileup \cite{UEcorrection}. All measurements presented here
require $\ETiso<$ 3 GeV.
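For illustration, the raw cone sum underlying this quantity can be sketched as follows (Python; attribute names are ours, and the photon-core, underlying-event and pileup subtractions are applied on top of this):
\begin{verbatim}
import math

def isolation_et(photon, cells, cone=0.4):
    """Scalar sum of cell transverse energies within a cone of radius
    `cone` in the eta-phi plane around the photon axis; `photon` and
    each cell carry eta, phi and et attributes."""
    total = 0.0
    for cell in cells:
        dphi = math.remainder(cell.phi - photon.phi, 2.0 * math.pi)
        deta = cell.eta - photon.eta
        if math.hypot(deta, dphi) < cone:
            total += cell.et
    return total
\end{verbatim}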
\section{Background subtraction}
After the selection criteria described above, a residual contribution of background candidates remains in the photon and di-photon samples.
In the inclusive photon analysis, this background contamination is
estimated using a data-driven counting technique based on the observed
number of events in the control regions (sidebands) of a
two-dimensional plane formed by the photon transverse isolation energy
and a photon identification variable.
Corrections for signal leakage in the background control regions and
for correlation between the two variables in background events are
taken into account.
The same sideband method is applied in di-photon events to the leading and sub-leading photon candidates, allowing the di-photon signal to be separated from the photon--jet and jet--jet background components. Alternatively, a matrix approach classifying the events in categories according to whether each of the tight photon candidates passes the isolation criteria or not, or a template approach fitting the distributions of the isolation energy profiles, is used; all methods give compatible yield results.
In both inclusive and di-photon analyses the residual contamination
from electrons misidentified as photons is evaluated and subtracted.
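For illustration, the counting logic of such a two-dimensional sideband estimate can be sketched as follows (a simplified version of the procedure; the leakage fractions and the background correlation factor are inputs, and the published analyses solve the corresponding system with a dedicated treatment of uncertainties):
\begin{verbatim}
def sideband_signal_yield(n_A, n_B, n_C, n_D,
                          c_B=0.0, c_C=0.0, c_D=0.0, R=1.0):
    """A: signal region (isolated, tight); B, C, D: control regions
    obtained by reversing one or both requirements.  c_X: signal
    leakage into region X relative to A; R: residual correlation
    of the two variables for background (R = 1 if uncorrelated)."""
    n_sig = n_A               # initial guess: region A is pure signal
    for _ in range(100):      # fixed-point iteration
        update = n_A - R * ((n_B - c_B * n_sig) *
                            (n_C - c_C * n_sig) / (n_D - c_D * n_sig))
        if abs(update - n_sig) < 1e-9:
            return update
        n_sig = update
    return n_sig
\end{verbatim}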
\section{Cross section measurements}
\begin{figure}[t]
\centering
\begin{minipage}[c]{1.2\linewidth}
\hspace{-0.07\textwidth}
\includegraphics[width=0.24\textwidth]{fig1_1}
\hspace{-0.02\textwidth}
\includegraphics[width=0.24\textwidth]{fig1_2}
\hspace{-0.02\textwidth}
\includegraphics[width=0.24\textwidth]{fig1_3}
\hspace{-0.02\textwidth}
\includegraphics[width=0.24\textwidth]{fig1_4}
\end{minipage}
\vspace{-3mm}
\caption{Measured cross section of isolated prompt-photon production
as a function of $\ETg$ in different pseudorapidity ranges,
compared with theoretical predictions. The red triangles represent
the results from \cite{InclusivePhotons2010_880nb}, the black dots
those from \cite{InclusivePhotons2010_35pb}.}
\vspace{-3mm}
\label{fig:inclusive_cross_section}
\end{figure}
Inclusive prompt photon and di-photon cross sections are respectively
measured in different bins of $\ETg$ and $\mgg, \ptgg, \dphigg$ from
the extracted signal yields, the corresponding integrated
luminosities, and the trigger, reconstruction and selection
efficiencies. The measured cross sections are affected by several systematic uncertainties, primarily associated with the uncertainty on the photon reconstruction efficiency (3--4\% due to the isolation efficiency cut, and 1--2.5\% associated with the limited knowledge of the material upstream of the EMC); with the uncertainty on the photon identification efficiency (ranging from 8\% to 1.5\%, the higher values being applicable at lower $\ETg$); and with the uncertainty on the signal yields due to the background subtraction technique (at most 10\%, mostly associated with the definition of the background control regions and the photon energy scale).
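Schematically, each per-bin measurement amounts to the following (unfolding of resolution effects, performed in the published analyses, is omitted in this sketch):
\begin{verbatim}
def dsigma_dx(n_sig, efficiency, luminosity, bin_width):
    """Per-bin differential cross section, e.g. in pb/GeV.

    n_sig:      background-subtracted signal yield in the bin
    efficiency: trigger x reconstruction x selection efficiency
    luminosity: integrated luminosity in pb^-1
    bin_width:  width of the bin in the measured variable"""
    return n_sig / (efficiency * luminosity * bin_width)
\end{verbatim}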
Figure~\ref{fig:inclusive_cross_section} shows the inclusive photon
cross sections as a function of $\ETg$ in four different $\eta$
regions. The theoretical pQCD cross sections, computed with a
fixed-order NLO parton-level generator (JETPHOX \cite{JETPHOX}) for
photons having parton transverse energy in a cone of radius 0.4 around
the photon smaller than 4 GeV, are overlaid (yellow and blue bands,
accounting for the scale and PDF uncertainties). The measured cross
sections are in good agreement with the theoretical predictions for
$\ETg>$ 35 GeV, while at lower $\ETg$, where the contribution from
parton-to-photon fragmentation is larger, the theory tends to
overestimate the data, possibly hinting at the need for more accurate
predictions.
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{fig2_1}
\includegraphics[width=0.32\textwidth]{fig2_2}
\includegraphics[width=0.32\textwidth]{fig2_3}
\vspace{-3mm}
\caption{Measured cross section of isolated di-photon production as
a function of $\mgg, \ptgg$ and $\dphigg$, compared with
theoretical predictions.}
\vspace{-3mm}
\label{fig:diphoton_cross_section}
\end{figure}
Figure~\ref{fig:diphoton_cross_section} shows the di-photon cross
section as a function of $\mgg$, $\ptgg$ and $\dphigg$. Two
theoretical predictions are overlaid for photons having parton
transverse energy in a cone of radius 0.4 around the photon smaller
than 4 GeV, one corresponding to a fixed-order NLO parton-level
generator calculation (DIPHOX \cite{DIPHOX}), the other featuring
transverse momentum resummation (RESBOS \cite{RESBOS}). The agreement
is generally good, but some deviations are observed for low $\dphigg$
values, where both theoretical predictions underestimate the
measurements. In this region the LO matrix elements do not contribute to the cross section, and NLO is the first order giving non-zero contributions: more accurate NNLO predictions would help clarify these residual discrepancies.
\section*{References}
\bibliographystyle{unsrt}
\section{Introduction}
Particle physics is currently in a situation similar to that of physics about 120 years ago. Its standard model (SM) can successfully explain most low- and high-energy phenomena and provides predictions that agree with measurements at high precision.
Nevertheless, there are also a handful of outstanding observations that cannot be
predicted by the standard model and point towards beyond the standard model (BSM) physics.
These unexplained facts are
(i) the non-vanishing neutrino masses and mixing matrix elements \cite{Fukuda:1998mi,Ahmad:2001an},
(ii) the metastable vacuum of the standard model \cite{Bezrukov:2012sa,Degrassi:2012ry},
(iii) the need for lepto- and/or baryogenesis to explain baryon asymmetry, i.e.~our
obvious existence, (iv) the existence of dark matter in the Universe
\cite{Hinshaw:2012aka,Aghanim:2018eyx,Eisenstein:2005su,Sofue:2000jx,Bartelmann:1999yn},
and also (v) the existence of dark energy in the Universe \cite{Hinshaw:2012aka}.
In addition, there is general consensus about the occurrence of cosmic inflation
in the early Universe, which also calls for an explanation.
There are other observations in particle physics that have almost reached the status
of discoveries. Most prominently, the prediction of the standard model for the
anomalous magnetic moment $a_\mu$ of the muon \cite{Aoyama:2020ynm} is smaller
than the result of the measurement \cite{Muong-2:2006rrc,Muong-2:2021ojo} by
4.2 standard deviations. In this case, however, the status of the theory
is controversial because the evaluation of the hadronic contribution to $a_\mu$
requires a non-perturbative approach, and the result depends on the method
\cite{Aoyama:2020ynm,Borsanyi:2020mff}. The resolution
of this discrepancy calls for an independent evaluation of this hadronic
vacuum polarization contribution before discovery can be claimed.
Some of the observations (i--v) should find their explanation in particle physics models, while others may have cosmological origins. Nevertheless, the intimate relation between particle physics and the early Universe, originating from the universal expansion of space-time, gives strong support for seeking answers within particle physics by
extending the SM. Such extensions can be put into three categories:
(a) ultraviolet complete models from theoretical motivations, such as
supersymmetric models;
(b) effective field theories like the standard model effective field theory (SMEFT);
(c) simplified models that focus on a subset of open questions.
This third category includes the dark photon models (gauge extension, see e.g.~Refs.~\cite{Holdom:1985ag,Pospelov:2007mp}),
the singlet scalar extensions (see e.g.~Refs.~\cite{Schabinger:2005ei,Patt:2006fwx,Falkowski:2015iwa}) and
the introduction of neutrino mass matrices with some variant of the see-saw mechanism,
such as in Ref.~\cite{Lindner:2013awa}.
The UV complete supersymmetric extensions of the SM are very attractive for solving
theoretical issues, but they are becoming less favored by the results of the LHC
experiments \cite{ATLAS:SUSY,CMS:SUSY}. Effective field theories proved to be
very useful in the past. However, the SMEFT contains 2499 dimension six operators
\cite{Grzadkowski:2010es}, which makes it rather
difficult to study experimentally. The simplified models on the other end
contain only few new parameters, hence are very attractive from the experimental point
of view. However, being simplified models, those cannot give answers to all observations
pointing towards BSM physics simultaneously.
In this paper we study a simple UV complete BSM extension along the principles of the SM
itself: a renormalizable gauge theory that adds one layer of interactions below the
hierarchic layers of the strong, electromagnetic and weak forces, called the
superweak (SW) force \cite{Trocsanyi:2018bkm}, mediated by a new U(1) gauge boson $Z'$,
see \fig{fig:forces}. In order to explain the origin of neutrino masses, the field
content is enhanced by three generations of right-handed neutrinos. The new gauge
symmetry is broken spontaneously by the vacuum expectation value of a new complex
scalar singlet. According to exploratory studies, the superweak extension of the
standard model (SWSM) has the potential to explain the origin of
(i) neutrino masses and mixing matrix elements \cite{Iwamoto:2021wko},
(ii) dark matter \cite{Iwamoto:2021fup},
(iii) cosmic inflation \cite{Peli:2019vtp},
(iv) stabilization of the electroweak vacuum \cite{Peli:2019vtp} and possibly
(v) leptogenesis (under investigation).
\begin{figure}[t!]
\includegraphics[width=0.85\linewidth]{forces.pdf}
\caption{\label{fig:forces}The standard model particle sheet with the superweak extension. The forces act on all particles within the respective box.
}
\end{figure}
While these findings are promising, more refined analyses are needed in order to
explore the viability of the model. The main motivation of our work is not to
prove that the SWSM is the correct description of the fundamental interactions,
but rather to check if questions (i--v) listed above can be answered within a
single model with as few new parameters as possible. In this paper we revisit
the study of the parameter space of the scalar sector of the SWSM as allowed
by the requirement of the stability of the vacuum. We improve significantly
on our previous analysis \cite{Peli:2019vtp} in two respects.
Firstly, we use renormalization group equations (RGEs) containing the beta
functions at two-loop order. More importantly, we take into account both
the radiative corrections up to two-loop accuracy and the measured physical
values and uncertainties of the parameters of the scalar sector as constraints.
A similar study has been performed earlier in the simplified model of single
real scalar extension of the SM in Ref.~\cite{Falkowski:2015iwa}.
The important difference between the present work and that analysis is that
we include the effect of the right-handed neutrinos in the running of
the couplings, which constrains the parameter space further. In addition,
the two-loop effects are included here for the first time.
\section{Superweak model}
The SWSM is a gauged U(1) extension of the standard model with an additional complex
scalar field $\chi$ and three families of sterile neutrinos $\nu_{\rR,i}$.
The model was defined in Ref.~\cite{Trocsanyi:2018bkm} and further details on the
new sectors were presented in Refs.~\cite{Peli:2019vtp,Iwamoto:2021wko}.
Here we recall some details relevant to the present analysis.
The anomaly free charge assignment is shown in Table~\ref{tab:Field-rep}.
In particular, the $\chi$ field does not couple directly to any fields of the SM.
\begin{table}[th]
\centering
\caption{\label{tab:Field-rep}Group representations
and charges of the fermions and scalars in the SWSM}
\begin{tabular}{|c|cccc|}\hline\hline
\textbf{field}& SU(3)$_\rc$ & SU(2)$_\rL$ & U(1)$_Y$ & U(1)$_z$ \\ \hline \hline
$Q_\rL$ & \textbf{3} & \textbf{2} & $\frac16$& $\frac16$ \bigstrut\\\hline
$u_\rR$ & \textbf{3} & \textbf{1} & $\frac23$& $\frac76$ \bigstrut\\\hline
$d_\rR$ & \textbf{3} & \textbf{1} & $-\frac{1}{3}$& $-\frac{5}{6}$ \bigstrut\\\hline
$L_\rL$ & \textbf{1} & \textbf{2} & $-\frac12$& $-\frac12$ \bigstrut\\\hline
$\ell_\rR$ & \textbf{1} & \textbf{1} & $-1$& $-\frac32$ \bigstrut\\\hline
$N_\rR$ & \textbf{1} & \textbf{1} & $0$& $\frac12$ \bigstrut\\\hline
$\phi$ & \textbf{1} & \textbf{2} & $\frac12$& 1\bigstrut\\\hline
$\chi$ & \textbf{1} & \textbf{1} & $0$& $-1$\bigstrut\\\hline
\hline
\end{tabular}
\end{table}
After spontaneous symmetry breaking (SSB), we parametrize the SM scalar doublet $\phi$
and the new scalar field as
\begin{equation}
\phi =
\frac{1}{\sqrt{2}}\begin{pmatrix}
-\ri\sqrt{2}\sigma^+\\
v+H+\ri\sigma_\phi
\end{pmatrix},
\quad\text{and}\quad
\chi = \frac{1}{\sqrt{2}}\bigl(w+S+\ri\sigma_\chi\bigr)
\label{eq:parametrization}
\end{equation}
where $v$ and $w$ are the two vacuum expectation values (VEVs), $H$ and $S$
are two real, scalar fields and $\sigma^+$, $\sigma_{\phi/\chi}$ are charged
and neutral Goldstone bosons. In terms of these fields the scalar potential
in the SWSM is given by
\begin{equation}
\label{eq:V_phichi}
V(\phi,\chi) = V_0 - \mu_\phi^2 |\phi|^2 - \mu_\chi^2 |\chi|^2
+\lambda_\phi |\phi|^4 + \lambda_\chi |\chi|^4
+\lambda |\phi|^2|\chi|^2
\,.
\end{equation}
The constant $V_0$ is irrelevant in our considerations, so we set it to zero in the rest of the paper.
Substituting the parametrization \eqref{eq:parametrization} into \eqref{eq:V_phichi}, we obtain
the tree-level (effective) potential
\begin{equation}\label{eq:V(h,s)}
V(H,S) =
- \frac12\Big(\mu_\phi^2 H^2 + \mu_\chi^2 S^2\Big)
+ \frac14\Big(\lambda_\phi H^4 + \lambda_\chi S^4 + \lambda H^2 S^2\Big)
\end{equation}
of the real scalar fields. The VEVs are determined by the tadpole equations:
\begin{equation}
\label{eq:tadpole}
\bsp
\frac{\partial V}{\partial H}\biggl|_{H=v,S=w} = 0 &=
v\biggl( -\mu_\phi^2 + \frac{1}{2}\lambda w^2 + \lambda_\phi v^2\biggr),
\\
\frac{\partial V}{\partial S}\biggl|_{H=v,S=w} = 0 &=
w\biggl( -\mu_\chi^2 + \frac{1}{2}\lambda v^2 + \lambda_\chi w^2\biggr)
\,.
\esp
\end{equation}
The mass matrix of the scalar fields is given by the Hessian:
\begin{equation}
\label{eq:hessian}
\textbf{M}_\rs^2 =
\begin{pmatrix}
\frac{\partial^2 V}{\partial H^2} & \frac{\partial^2 V}{\partial H\, \partial S} \\
\frac{\partial^2 V}{\partial S\, \partial H} & \frac{\partial^2 V}{\partial S^2}
\end{pmatrix}_{H=v,S=w}
=\begin{pmatrix}
2\lambda_\phi v^2 & \lambda v w \\
\lambda v w & 2\lambda_\chi w^2
\end{pmatrix}\,,
\end{equation}
which can be diagonalized by a rotation
matrix
\begin{equation}
\ZS = \begin{pmatrix}
\cS & \sS \\
-\sS & \cS
\end{pmatrix},
\end{equation}
so that $\ZS^T \textbf{M}_\rs^2 \ZS = \text{diag}(\mhp^2,\msp^2)$. The
parameters $\mhp$ and $\msp$ are the masses of the propagating states $h$ and $s$%
\footnote{We shall denote the pole mass of a particle $p$ as $M_p$.}.
The positivity condition for the masses implies the condition
\begin{equation}
(4 \lambda_\chi \lambda_\phi - \lambda^2) v^2 w^2 > 0
\label{eq:positivity}
\end{equation}
among the scalar couplings and VEVs. Explicitly, the angle of rotation and the scalar masses
$\mhp$ and $\msp$ can be expressed through the VEVs and couplings at tree level as
\begin{eqnarray}
\label{eq:thetas_tree}
\tan(2\tS) &=& \frac{\lambda v w}{\lambda_\chi w^2 - \lambda_\phi v^2},\\
\label{eq:mh_tree}
\mhp^2 &=&
\lambda_\phi v^2 + \lambda_\chi w^2 - \frac{\lambda_\chi w^2 - \lambda_\phi v^2}{\cos(2\tS)}\,,
\\
\label{eq:ms_tree}
\msp^2 &=&
\lambda_\phi v^2 + \lambda_\chi w^2 + \frac{\lambda_\chi w^2 - \lambda_\phi v^2}{\cos(2\tS)}\,.
\end{eqnarray}
In the absence of mixing ($\lambda=0$, $\tS=0$) we have
$\mhp = \sqrt{2\lambda_\phi v^2}$, $\msp = \sqrt{2\lambda_\chi w^2}$.
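These tree-level relations are easy to verify numerically. The following minimal
sketch (with purely illustrative input values, not a benchmark point of the model)
builds the mass matrix \eqref{eq:hessian} and checks its eigenvalues against
Eqs.~\eqref{eq:mh_tree} and \eqref{eq:ms_tree}:
\begin{verbatim}
import numpy as np

# purely illustrative inputs (VEVs in GeV)
lam_phi, lam_chi, lam = 0.15, 0.20, 0.10
v, w = 246.22, 1000.0

# tree-level mass matrix, Eq. (eq:hessian)
M2 = np.array([[2*lam_phi*v**2, lam*v*w],
               [lam*v*w,        2*lam_chi*w**2]])
eig = np.sort(np.linalg.eigvalsh(M2))

# closed-form masses, Eqs. (eq:thetas_tree)-(eq:ms_tree)
two_theta = np.arctan2(lam*v*w, lam_chi*w**2 - lam_phi*v**2)
delta = (lam_chi*w**2 - lam_phi*v**2) / np.cos(two_theta)
mh2 = lam_phi*v**2 + lam_chi*w**2 - delta
ms2 = lam_phi*v**2 + lam_chi*w**2 + delta

assert np.allclose([mh2, ms2], eig)   # agreement with the Hessian eigenvalues
print(np.sqrt(mh2), np.sqrt(ms2))     # tree-level masses in GeV
\end{verbatim}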
As the scalar fields are coupled to the $W^\pm$ bosons with the
interaction vertices
\begin{equation}
\Gamma_{hWW}^{\mu\nu} = \frac{\ri}{2}\biggl(g_\rL^2 v \cS\biggr)g^{\mu\nu}\,,
\quad\text{and}\quad
\Gamma_{sWW}^{\mu\nu} = \frac{\ri}{2}\biggl(g_\rL^2 v \sS\biggr)g^{\mu\nu}\,,
\end{equation}
only the Brout--Englert--Higgs (BEH) field is coupled to the $W$ bosons and to the other SM fields
in the limit of vanishing mixing between the scalars. Hence, we naturally
identify the VEV $v$ as that related to the Fermi coupling and also
the parameter $\mhp$ with the mass of the Higgs boson measured
at the LHC \cite{ParticleDataGroup:2020ssz} by introducing the notation
\begin{equation} \label{eq:mhcond}
m_h = 125.10\,\GeV,\quad
\Delta m_h = 0.14\,\GeV \quad\text{and}\quad
v = \Big(\sqrt{2}G_\rF\Big)^{-1/2} = 246.22\,\GeV\,,
\end{equation}
and requiring $\mhp \in [m_h-\Delta m_h,m_h+\Delta m_h]$. In accordance
with this assumption, we restrict $\tS$ to fall in the range
$(-\pi/4,\pi/4)$.
The VEV $w$ can be expressed through these known parameters and the scalar couplings using
Eqs.~\eqref{eq:thetas_tree} and \eqref{eq:mh_tree},
\begin{equation}\label{eq:wev_tree}
w = \mhp \sqrt{\frac{\mhp^2 - 2 \lambda_\phi v^2}
{2\lambda_\chi\bigl(\mhp^2 - 2 \lambda_\phi v^2 \bigr)+ \lambda^2 v^2}}
\,.
\end{equation}
Thus, the formal conditions for a non-vanishing $w$, required at the electroweak
scale, are either
\begin{equation}
\mhp^2 > 2 \lambda_\phi v^2
\,,\quad\text{with}\quad
4 \lambda_\chi \lambda_\phi > \lambda^2
\end{equation}
(the second condition deriving from the positivity constraint in \eqref{eq:positivity} for positive $v^2 w^2$), or
\begin{equation}\label{eq:wvev_cond}
4 \lambda_\chi \biggl(\lambda_\phi - \frac{1}{2}\frac{\mhp^2}{v^2} \biggr) > \lambda^2\,,\quad\text{if}\quad 2 \lambda_\phi v^2 > \mhp^2> 0\,.
\end{equation}
As we have fixed $v$ and $\mhp$ experimentally, the input value of
$\lambda_\phi$ decides which of these two cases is to be considered.
Eqs.~\eqref{eq:V(h,s)}--\eqref{eq:hessian} are valid at tree level. The effect of the quantum
corrections can be summarized by substituting the potential $V$ with the effective potential
$\veff$, whose formal loop expansion is
\begin{equation}
\veff = \sum_{i=0}^\infty \veff^{(i)}
\label{eq:Veff-expansion}
\end{equation}
where $\veff^{(0)} = V$ and $\veff^{(i)}$ represents the $i$-loop correction.
\section{Vacuum stability in the SWSM at one-loop accuracy}
The potential (\ref{eq:V(h,s)}) is stable if it is bounded from below.
Due to its continuity in the field variables, it is sufficient to study
the positivity of (\ref{eq:V(h,s)}) for large values of $H$ and $S$,
which translates to the following conditions on the quartic scalar
couplings:
\begin{equation}
\bsp
\label{eq:stab_conditions}
\lambda_\phi,\lambda_\chi &> 0\,,\\
4\lambda_\phi \lambda_\chi - \lambda^2 &> 0 \quad\mbox{for}\quad \lambda<0\,.
\esp
\end{equation}
Taking into account the radiative corrections leads to
(i) dependence on the renormalization scale $\mu$ for all renormalized couplings and
(ii) the corrections $\veff^{(i)}$. While it is straightforward to require that the
conditions (\ref{eq:stab_conditions}) be satisfied for the running couplings at
any sensible value of $\mu$, we cannot write the stability conditions for the one-loop
effective potential in a closed form such as in Eq.~(\ref{eq:stab_conditions}) valid
at tree level. Instead, we take an alternative path by requiring the existence
of a non-vanishing $w(\mtp)$ indirectly, extracting it from the known pole mass
of the Higgs boson, rather than computing it explicitly from the effective potential
\eqref{eq:Veff-expansion} with radiative corrections taken into account.
Our procedure can be described in terms of analytic expressions at the one-loop
accuracy as follows.
We investigate the vacuum stability in the range $\mu\in (\mtp, \Mpl)$, i.e.~from the
pole mass $\mtp$ of the t quark up to the Planck mass $\Mpl$ where quantum
gravitational effects become important. The scale dependence of a given coupling $g$
is described by an autonomous system of coupled differential equations of the form
\begin{equation}
\frac{\partial g}{\partial t} = \beta_g\,,
\end{equation}
called RGEs, where $\partial/\partial t = \mu \,\partial/\partial \mu$.
We assume that the model remains perturbatively valid for the complete
range by requiring
\begin{equation}\label{eq:pt_conditions}
|g(\mu)| < 4 \pi\,,\quad \mu\in (\mtp, \Mpl)
\end{equation}
for any coupling $g$ in the theory, which we check in the stability analysis.
Consequently, we can employ perturbation theory to compute the $\beta_g$
functions. We integrate the complete set of RGEs of the SWSM, while requiring the
stability and perturbativity conditions \eqref{eq:stab_conditions} and
\eqref{eq:pt_conditions}. We also assume the existence of $w$ at the scale $\mu=\mtp$, which implies the existence of a second
massive neutral gauge boson and a second massive scalar particle as predictions of
the model. To check this condition, we compute the loop corrected scalar mixing
angle and scalar pole masses:
\begin{eqnarray}
\label{eq:thetas_1loop}
&&\tan\bigl(2\tS(p^2)\bigr) = \frac{\lambda(\mu) v(\mu) w(\mu) + \Pi_{HS}(p^2)}{\lambda_\chi(\mu) w(\mu)^2 - \lambda_\phi(\mu) v(\mu)^2+ \Pi_{-}(p^2)},
\\
\label{eq:mh_1loop}
&&\mhp^2 =
\lambda_\phi(\mu) v(\mu)^2 + \lambda_\chi(\mu) w(\mu)^2 +\Pi_{+}(\mhp^2)
- \frac{\lambda_\chi(\mu) w(\mu)^2 - \lambda_\phi(\mu) v(\mu)^2 +\Pi_{-}(\mhp^2)}{\cos\bigl(2\tS(\mhp^2)\bigr)}\,,
\\
\label{eq:ms_1loop}
&&\msp^2 = \lambda_\phi(\mu) v(\mu)^2 + \lambda_\chi(\mu) w(\mu)^2 +\Pi_{+}(\msp^2)
+ \frac{\lambda_\chi(\mu) w(\mu)^2 - \lambda_\phi(\mu) v(\mu)^2 +\Pi_{-}(\msp ^2)}{\cos\bigl(2\tS( \msp^2)\bigr)}\,,
\end{eqnarray}
using the shorthand notation
\begin{equation}
\Pi_{\pm}(p^2) = \frac{1}{2}\biggl(\tilde{\Pi}_{SS}(p^2) \pm\tilde{\Pi}_{HH}(p^2)\biggr)\,,
\end{equation}
where $\tilde{\Pi}_{\varphi\varphi}(p^2) = \Pi_{\varphi\varphi}(p^2)-T_\varphi/\langle\varphi\rangle$, with
$\Pi_{\varphi_I \varphi_J}(p^2)$ being the sum of all one particle irreducible
(1PI) two-point
functions with external legs $\varphi_I$ and $\varphi_J$, while $T_\varphi$ is the
sum of all 1PI one-point functions with external leg $\varphi$ ($\varphi$,
$\varphi_I = H$ or $S$). In other words,
Eqs.~\eqref{eq:thetas_1loop}--\eqref{eq:ms_1loop} are valid at any order
in perturbation theory. We collect these one- and two-point functions
computed at one-loop accuracy in App.~\ref{app:oneloop}. As shown explicitly,
each coupling and VEV in Eqs.~\eqref{eq:thetas_1loop}--\eqref{eq:ms_1loop}
depends on the renormalization scale $\mu$, but the pole masses $\mhp^2$
and $\msp^2$ do not up to the effect of neglected higher order corrections.
An important check of our calculations is the independence of the scalar pole
masses $\mhp$ and $\msp$ of the renormalization scale $\mu$
\begin{equation}\label{eq:pole_scaling}
\mu \frac{\partial \mhp}{\partial \mu } = \mu \frac{\partial \msp}{\partial \mu } = 0.
\end{equation}
As mentioned, we identify the pole mass $\mhp$, computed in perturbation theory in
\eqref{eq:mh_1loop} as the observed Higgs boson mass $m_h\pm \Delta m_h$, which
constrains the possible values of $w(\mtp)$ severely for a given set of input
couplings at $\mu = \mtp$. The lower panels in Fig.~\ref{fig:wdependence}
show the dependence of $|\Delta M_h|$, with $\Delta M_h = M_h-m_h$,
on $w(\mtp)$. We see that it falls below the experimental uncertainty
$\Delta m_h$, represented by the dashed lines, in a fairly narrow range of
$w(\mtp)$. To find the range of allowed values of $w^{(i)}(\mtp)$,
with the superscript referring to the order of perturbative accuracy,
we solve the two equations
\begin{equation}
\mhp(w^{(1)})\biggr|_{\mu=\mtp} = m_h\pm\Delta m_h
\end{equation}
for $w^{(1)}(\mtp)$ numerically. We consider the two solutions physical
if they are positive; they are marked by the vertical lines. Then we use the accepted values
$w^{(1)}(\mtp)$, falling into the range between the vertical lines,
to compute the possible values of $\msp$ using Eq.~\eqref{eq:ms_1loop}. This
procedure is shown by the plots on the top of Fig.~\ref{fig:wdependence}
for a specific set of input couplings.
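Schematically, finding the accepted range of $w^{(1)}(\mtp)$ amounts to two
one-dimensional bracketed root searches. The sketch below illustrates the numerical
step with the tree-level relation \eqref{eq:mh_tree} standing in for the one-loop
pole mass of Eq.~\eqref{eq:mh_1loop} (which, in the actual analysis, requires the
self-energies collected in App.~\ref{app:oneloop}); the input couplings are again
purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

MH, DMH, V = 125.10, 0.14, 246.22     # GeV, cf. Eq. (eq:mhcond)
LPHI, LCHI, LAM = 0.15, 0.20, 0.10    # illustrative inputs at mu = Mt

def mh_of_w(w):
    # lighter scalar mass as a function of the new VEV w; tree-level
    # stand-in for the one-loop pole mass used in the analysis
    D = LCHI*w**2 - LPHI*V**2
    return np.sqrt(LPHI*V**2 + LCHI*w**2 - np.hypot(D, LAM*V*w))

# accepted range of w(Mt): solve Mh(w) = mh -/+ Dmh by bracketed root search
w_lo, w_hi = (brentq(lambda w, t=t: mh_of_w(w) - t, 1.0, 1e5)
              for t in (MH - DMH, MH + DMH))
print(w_lo, w_hi)  # any w in [w_lo, w_hi] reproduces Mh within its uncertainty
\end{verbatim}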
\begin{figure}[t!]
\includegraphics[width=0.475\linewidth]{SW1L_wdep_000.pdf}
\includegraphics[width=0.475\linewidth]{SW1L_wdep_040.pdf}
\caption{\label{fig:wdependence}
Dependence of the absolute value of the difference between $\mhp$ and the observed
Higgs boson mass on $w(\mtp)$ (bottom), and the dependence of $\msp$ on $w(\mtp)$ (top),
with input values $\lambda_\phi(\mtp)=0.15$, $\lambda_\chi(\mtp)=0.2$ and
$\lambda(\mtp)=0.1$.
Left: $y_x(\mtp) = 0$, right: $y_x(\mtp) = 0.8$. The dashed horizontal line corresponds
to the uncertainty $\Delta m_h$. The black dash-dotted curves are computed at tree level
(Eqs.~\eqref{eq:mh_tree} and \eqref{eq:ms_tree}), while the solid colored ones at one loop
(Eqs.~\eqref{eq:mh_1loop} and \eqref{eq:ms_1loop}).
}
\end{figure}
The complete set of running couplings can be grouped into three sets:
(i) the SM couplings $g_Y,~g_\rL,~g_\rs,~y_\rt$,
(ii) the SW gauge coupling $g_z$ and
(iii) the scalar quartic couplings $\lambda_\phi,\lambda_\chi,\lambda$
together with the sterile neutrino Yukawa coupling $y_x$. We assume one light
sterile neutrino -- a candidate for dark matter \cite{Iwamoto:2021fup} -- and two
heavy ones with equal masses for simplicity, $y_x = y_{x,5} = y_{x,6}$.
We neglect the effect of the SW gauge coupling in our analysis because
its maximally allowed value is very small, $g_z \lesssim 10^{-4}$, if the model
is to explain the origin of dark matter \cite{Iwamoto:2021fup} and also to
obey the direct observational limit of the NA64 experiment \cite{NA64:2019imj}.
Explicitly, in group (iii) we have the following autonomous system of RGEs at one loop:
\begin{equation}\label{eq:1loop_RGE}
\bsp
\frac{\partial \lambda_\phi}{\partial t} &= \beta_{\lambda_\phi,\text{SM}}^{(1)}+ \frac{\lambda^2}{(4\pi)^2}\,,\quad
\frac{\partial \lambda_\chi}{\partial t} =
\frac{1}{(4\pi)^2}\biggl(20\lambda_\chi^2+2\lambda^2 - 2 y_x^4 +4\lambda_\chi y_x^2 \biggr),
\\
\frac{\partial \lambda}{\partial t} &=
\frac{\lambda}{(4\pi)^2}\biggl(-\frac{3}{2}g_Y^2 -
\frac{9}{2} g_\rL^2 +12\lambda_\phi + 8 \lambda_\chi +4\lambda +6 y_t^2 + 2y_x^2\biggr),
\esp
\end{equation}
for the scalar couplings, with $\beta_{\lambda_\phi,\text{SM}}^{(1)}$ being the
one-loop beta function of the SM quartic scalar coupling, and
\begin{equation}\label{eq:1loop_RGEy}
\frac{\partial y_x}{\partial t} =
\frac{2 y_x^3}{(4\pi)^2}\,,
\qquad
\frac{\partial w}{\partial t} = -\frac{ w}{(4\pi)^2} \frac{y_x^2}2
\end{equation}
for the Yukawa coupling and the new VEV. The one-loop beta functions show that
a sufficiently large Higgs portal coupling $\lambda$ is able to drive $\lambda_\phi$
and $\lambda_\chi$ to positive values, while the sterile neutrino Yukawa couplings
drive $\lambda_\chi$ towards negative values. The last equation, the RGE for $w$, does
not affect the vacuum stability analysis. We present it because it is used in checking
the conditions in Eq.~\eqref{eq:pole_scaling}.
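For concreteness, the way the conditions are imposed along the running can be
sketched in a few lines. The snippet below integrates the one-loop system
\eqref{eq:1loop_RGE}--\eqref{eq:1loop_RGEy} with the SM gauge and top Yukawa
couplings frozen at their input values (a simplification made only for this
illustration; in the analysis the full coupled set of SM and SW RGEs is solved),
and monitors the conditions \eqref{eq:stab_conditions} and \eqref{eq:pt_conditions}
at the solver output points:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0/(4*np.pi)**2
GY, GL, YT = 0.3586, 0.6477, 0.940   # frozen here for illustration only
MT, MPL = 173.0, 1.22e19             # GeV (illustrative values)

def betas(t, y):                     # t = log(mu/Mt)
    lp, lc, l, yx = y
    b_sm = K*(24*lp**2 - 6*YT**4 + 0.375*(2*GL**4 + (GL**2 + GY**2)**2)
              + lp*(12*YT**2 - 9*GL**2 - 3*GY**2))  # standard SM one-loop beta
    return [b_sm + K*l**2,
            K*(20*lc**2 + 2*l**2 - 2*yx**4 + 4*lc*yx**2),
            K*l*(-1.5*GY**2 - 4.5*GL**2 + 12*lp + 8*lc + 4*l
                 + 6*YT**2 + 2*yx**2),
            2*K*yx**3]

def stable_and_perturbative(y0):
    sol = solve_ivp(betas, [0.0, np.log(MPL/MT)], y0, max_step=0.5)
    lp, lc, l, yx = sol.y
    ok_pt = np.all(np.abs(sol.y) < 4*np.pi)           # Eq. (pt_conditions)
    ok_st = np.all(lp > 0) and np.all(lc > 0) and \
            np.all((l >= 0) | (4*lp*lc - l**2 > 0))   # Eq. (stab_conditions)
    return ok_pt and ok_st

print(stable_and_perturbative([0.15, 0.20, 0.10, 0.40]))
\end{verbatim}
A full scan then simply repeats this test over a grid of input couplings at fixed
$y_x(\mtp)$.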
There are three SM parameters measured very precisely, $G_\rF,~\mzp$ and
$\alpha_{\text{em}}^{\overline{\text{MS}}}(\mzp)$, which can be turned into input
values for the couplings in group (i) together with the less precisely known $\mtp$
and $\alpha_\rs^{\overline{\text{MS}}}(\mzp)$.
The self energies $\Pi_{WW}(p^2)$ and $\Pi_{ZZ}(p^2)$ also receive
contributions $\Pi^{\text{SW}}_{VV}(p^2)$ due to the SW extension,
given in Eq.~\eqref{eq:wz_selfenergy}, which shift the input values
of the VEV $v$ and the electroweak gauge couplings.
Hence, we use the following inputs in group (i)
\begin{equation}
\bsp
g_Y(\mtp) &= 0.3586 + \delta g_Y(\mtp),
\\
g_\rL(\mtp) &= 0.6477 + \delta g_\rL(\mtp),
\\
v(\mtp) &= 247.55~\GeV + \delta v(\mtp),
\esp
\end{equation}
with $g_\rs(\mtp)=1.167$ and $y_\rt(\mtp)=0.940$. The SW corrections
$\delta g_Y,\delta g_\rL$ and $\delta v$ are defined in
App.~\ref{app:electroweak-correction}. The SM value of the gauge and scale
dependent VEV $v$ in the Feynman gauge is $v_{\text{SM}}(\mtp) = 247.55~\GeV$.
The SW corrections to the electroweak input parameters $\delta g_Y$,
$\delta g_\rL$ and $\delta v$ are small and do not noticeably modify
our final results even at two loops. We take the value of $y_\rt(\mtp)$
from the fit formula (25) of Ref.~\cite{Degrassi:2012ry} as the
largest possible value. This choice is the most conservative one concerning the
vacuum stability because the main culprit causing the metastable SM vacuum is the
large value of the t quark Yukawa coupling $y_t(\mtp)$. The input couplings in the
last set (iii) are unconstrained, and we scan their values at $\mu=\mtp$ in order
to obtain the parameter space in $\{\lambda_\phi,\lambda_\chi,\lambda,y_x\}_{\mu=\mtp}$
where the stability \eqref{eq:stab_conditions} and perturbativity \eqref{eq:pt_conditions}
conditions in the range $\mu\in (\mtp, \Mpl)$, together with the existence of
the $w$ vacuum at $\mu= \mtp$, are fulfilled.
We have scanned the volume $V_\lambda(y_x)= \{ \lambda_\phi,\lambda_\chi,\lambda \}_{\mu=\mtp}$
spanned by the input couplings at fixed values of $y_x(\mtp)$ to find the parameter
space allowed by our conditions. There are two qualitatively different regions.
In the first one (a) $\msp < \mhp$, i.e.~the new scalar is lighter than the
Higgs boson, whereas in the second one (b) $\msp > \mhp$. We shall present the
results of such scans in the next section, where the computations will be performed
at two-loop accuracy. Having found the allowed region of the input parameters,
we can compute the scalar mixing angle and mass of the new scalar using
Eqs.~\eqref{eq:thetas_1loop} and \eqref{eq:ms_1loop}, to obtain the allowed parameter
space in the $\msp-|\sin(\tS(\mtp))|$ plane, shown in Fig.~\ref{fig:1loop_masses}
at selected values of the neutrino Yukawa coupling.
In case (a), the parameter space is empty for $y_x(\mtp) = 0$, but it grows
non-linearly with increasing $y_x(\mtp)$. For instance, the parameter space
for $y_x(\mtp) = 0.4$ is not empty, but still invisible at the resolution of
Fig.~\ref{fig:1loop_masses}, as in that case one has $\msp < 300\,\MeV$ and
$|\sin(\tS)| < 0.04$. For $y_x(\mtp) \gtrsim 1$ the stability condition
$\lambda_\chi >0$ is not satisfied at any scale below $\Mpl$. It turns out
that in case (b) the value of the VEV $w^{(1)}(\mtp)$ and hence that of
$\msp$ can be larger than shown in the plot when the scalar mixing angle
tends to zero. As that also means a vanishing mixing coupling $\lambda$,
it represents the phenomenologically rather irrelevant case of a very weakly
coupled dark sector.
\begin{figure}[t!]
{
\includegraphics[width=0.47\linewidth]{SW1L_thetams_B.pdf}
\includegraphics[width=0.48\linewidth]{SW1L_thetams_A.pdf}
}
\caption{\label{fig:1loop_masses}
Allowed parameter space $V_\lambda(y_x)$ in the $\msp-|\sin(\Theta_S)|$ plane
at representative values of $y_x$ at one loop accuracy.
The different colored areas correspond to different values of $y_x(\mtp)$
as shown in the legends. Left: $\msp < \mhp$, right: $\msp > \mhp$.
}
\end{figure}
\section{Vacuum stability in the SWSM at two-loop accuracy}\label{sec:2loops}
In order to check the robustness of the perturbative analysis of the parameter
space where the vacuum is stable, we have repeated the procedure described in the
previous section at two-loop accuracy. Given a set of input couplings
$\{\lambda_\phi(\mtp),\lambda_\chi(\mtp),\lambda(\mtp),y_x(\mtp)\}$, we first
computed $w^{(1)}(\mtp)$ at $\mu=\mtp$, using our analytic formulae as
described in the previous section. We integrated the two-loop RGEs to check the
conditions of stability and perturbativity only if we found $w^{(1)}(\mtp) > 0$.
If all the stability and perturbativity conditions were fulfilled for the input
values $\{\lambda_\phi(\mtp),\lambda_\chi(\mtp),\lambda(\mtp),y_x(\mtp)\}$, we used
\texttt{SPheno} \cite{Porod:2003um,Porod:2011nf} to compute the scalar pole masses
at two-loops, $\mhp^{(2)}$ and $\msp^{(2)}$, using $w$ as a free input
parameter. Starting from the initial value $w = w^{(1)}(\mtp)$,
we searched for the $w$ at which
\begin{equation}
\mhp^{(2)}(w) = m_h\,,
\end{equation}
which we call $w=w^{(2)}$. This procedure of using only those points
in the parameter space where the condition $w^{(1)}(\mtp) > 0$ is satisfied
saves significant CPU time, as the numerical solution of the two-loop
RGEs and especially the computations in \texttt{SPheno} are very time consuming%
\footnote{There is a price to pay for this speed-up, namely we discard
small portions of the parameter space where $w^{(1)}(\mtp)$ is not positive
but $w^{(2)}(\mtp)$ is.}.
\begin{figure}[t!]
\includegraphics[width=0.5\linewidth]{SW2L_3d.pdf}
\caption{\label{fig:3d param} Three-dimensional view of the allowed parameter space at $y_x(\mtp) = 0.4$.
}
\end{figure}
\begin{figure}[t!]
{
\includegraphics[width=0.47\linewidth]{SW2L1_thetams.pdf}
\includegraphics[width=0.485\linewidth]{SW2L1_l1l2.pdf}
}
\\
{
\includegraphics[width=0.48\linewidth]{SW2L1_LL1.pdf}
\includegraphics[width=0.48\linewidth]{SW2L1_LL2.pdf}
}
\caption{\label{fig:lambdaspace_point2}
Planar projections of the allowed parameter space, where $\msp > \mhp$
and the conditions \eqref{eq:stab_conditions}, \eqref{eq:pt_conditions},
$w^{(2)}(\mtp) > 0$ are fulfilled at two-loop accuracy. Top left: allowed regions in the
$\msp-|\sin\tS|$ plane, other plots show the two-dimensional projections of the
three-dimensional allowed regions in $V_\lambda(y_x)$. The different colored
regions correspond to different values of $y_x$ as shown in the legends.
}
\end{figure}
The parameter space is shown by a perspective view at $y_x(\mtp) = 0.4$ in
Fig.~\ref{fig:3d param}, and its projections to the two-dimensional sub-spaces
at selected values of $y_x(\mtp)$ in Figs.~\ref{fig:lambdaspace_point2} and
\ref{fig:lambdaspace_point1}. We have also computed the regions $V_\lambda(y_x)$
using the tree-level relation \eqref{eq:wev_tree} instead of the one-loop one in
Eq.~\eqref{eq:mh_1loop} as done typically in the case of scalar singlet extensions
(see e.g.~\cite{SM-eft-stability}). We found that the allowed region on the
$\msp-|\sin(\tS)|$ plane for case (a) $\msp < \mhp$ is sensitive to such a change
in an interesting way: for vanishing Yukawa coupling the allowed region found using
Eq.~\eqref{eq:wev_tree} disappears at one loop (see the discussion of Fig.~\ref{fig:1loop_masses} in the previous section),
but reappears at two loops, which had not been found in previous analyses.
If $y_x(\mtp)$ is increased from zero, we find
non-empty parameter space at any of the first three orders in perturbation theory.
The minimum value of $\msp$ in region (a) depends on $y_x(\mtp)$, but it is always
larger than about 1\,GeV in the two-loop analysis.
\begin{figure}[t!]
\includegraphics[width=0.48\linewidth]{SW2L2_thetams.pdf}
\includegraphics[width=0.475\linewidth]{SW2L2_l1l2.pdf}
\\
\includegraphics[width=0.48\linewidth]{SW2L2_LL1.pdf}
\includegraphics[width=0.49\linewidth]{SW2L2_LL2.pdf}
\caption{\label{fig:lambdaspace_point1}
Same as Fig.~\ref{fig:lambdaspace_point2} for $\msp > \mhp$.
}
\end{figure}
One can make two important remarks about the parameter space in case
(b) $\msp > \mhp$ presented in Fig.~\ref{fig:lambdaspace_point1}.
On the one hand, the parameter space shrinks as $y_x(\mtp)$ increases
and it disappears completely for $y_x(\mtp) \gtrsim 1$.
On the other hand, we have $|\lambda(\mtp)|>0$,
because for $\lambda(\mtp)=0$ the scalar mixing vanishes, and then
$\lambda_\phi$ coincides with $\lambda_{\rm SM}$, which cannot satisfy
the vacuum stability conditions while preserving the pole mass of the Higgs boson.
Also, the volume $V_\lambda(y_x)$ increases slightly with increasing order
in perturbation theory.
We have checked Eq.~(\ref{eq:pole_scaling}) numerically for both cases
(a) and (b) at randomly selected input values
$\{\lambda_\phi,\lambda_\chi,\lambda,y_x\}_{\mu=\mtp}$ in the range
$\mu\in\bigl(0.5\mtp,2\mtp \bigr)$ and compared the scale
dependences of the tree level masses \eqref{eq:mh_tree} and
\eqref{eq:ms_tree} to the scale dependences of the one-loop
accurate pole masses (\ref{eq:mh_1loop}) and (\ref{eq:ms_1loop}).
As shown in Fig.~\ref{fig:scale_check}, we have found that the scale
dependences of the tree level masses are reduced significantly at one-loop
and even more at two-loop accuracy. The sizeable difference between the scalar
pole masses $\msp$ at the first two orders of perturbation theory (and to a much lesser
extent between the next two orders) is not caused by radiative corrections.
Rather than loop corrections to the masses, the jumps originate from the shifts in
$w(\mtp)$ required to reproduce the Higgs boson pole mass at different orders of
perturbation theory, as can be seen in Fig.~\ref{fig:wdependence}.
\begin{figure}[t!]
\includegraphics[width=0.475\linewidth]{SW2L_mh_rgstab.pdf}
\includegraphics[width=0.475\linewidth]{SW2L_ms_rgstab.pdf}
\caption{\label{fig:scale_check} Dependences of the scalar boson masses $\mhp^{(i)}$ and
$\msp$ on the renormalization scale $\mu$ in the range $(0.5\mtp,2\mtp)$ at different
orders of perturbation theory, for $y_x(\mtp)=0.4$,
$\lambda_\phi(\mtp)=0.241$, $\lambda_\chi(\mtp) = 0.096$, $\lambda(\mtp)=0.217$.
}
\end{figure}
The theoretical prediction for the $W$-boson mass uses precision electroweak
observables (except $\mwp$ itself) and it is sensitive to new physics
\cite{Robens:2015gla}. Hence, it is often used as a benchmark to be compared
with the experimentally observed value \cite{ParticleDataGroup:2020ssz}
\begin{equation}
M_W^{\text{exp.}} = 80.379 \pm 0.012\,\GeV\,.
\end{equation}
The current, most precise theoretical estimates are
$M_W^{\text{theo.}} = 80.359 \pm 0.011\,\GeV$ \cite{Baak:2012kk},
$80.362 \pm 0.007\,\GeV$ \cite{Ciuchini:2013pca} and
$80.357 \pm 0.009 \pm 0.003\,\GeV$ \cite{degrassi2014}.
We take 80.360\,GeV as the SM prediction and the combined uncertainty of
the experimental and theoretical values to be
$\Delta\mwp \simeq 17\,\MeV$. We set twice this value as an upper limit
to find the allowed range for the new physics contribution to
$M_W^{\text{theo.}}$, which means that the SW contribution to the
mass of the $W$ boson is excluded outside the range $(19\pm 34)$\,MeV,
i.e.~outside $[-15,53]$\,MeV.
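Explicitly, the centre of the window is the gap between the measured and predicted
central values, and its half-width is $2\Delta\mwp$:
\begin{equation*}
M_W^{\text{exp.}} - M_W^{\text{theo.}} \simeq 80.379\,\GeV - 80.360\,\GeV = 19\,\MeV\,,
\qquad
\delta\mwp^{\rm SW} \in [19-34,\,19+34]\,\MeV = [-15,53]\,\MeV\,.
\end{equation*}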
\begin{figure}[t!]
\includegraphics[width=0.47\linewidth]{SW2_thetams_wmass_B_v2.pdf}
\includegraphics[width=0.48\linewidth]{SW2_thetams_wmass_A_v2.pdf}
\caption{\label{fig:final_result}
Allowed parameter space in the $\msp-|\sin(\Theta_S)|$
plane at representative values of $y_x$ at two-loop accuracy. The region
with a gray grid represents the portion of the parameter space where
$\delta\mwp$, i.e.~the radiative correction in the SWSM to the
$W$-boson mass, exceeds the value given in the plot legend.
}
\end{figure}
We computed the SW contributions $\delta\mwp^{\rm SW}$ to $M_W^{\text{theo.}}$
at one-loop accuracy. We found that the contribution of the new gauge sector
is heavily suppressed due to the required smallness of the new gauge coupling
$g_z \lesssim 10^{-4}$. The sterile neutrinos may change
the measured value of the Fermi coupling $G_\rF$ affecting the mass of the $W$
boson already at tree level \cite{Fernandez-Martinez:2016lgt}.
As a matter of fact, right-handed neutrinos can provide a significant contribution
to the $W$ boson mass \cite{Blennow:2022yfm}, although at the price of introducing
some tension with universality bounds. Hence, a proper account of the
effect of sterile neutrinos is certainly warranted, but it is beyond the
scope of the present paper and we leave it for a planned global scan of
the parameter space. The contribution of the new scalar sector to $\mwp$ however,
can be comparable to $\Delta\mwp$ \cite{Lopez-Val:2014jva,Robens:2015gla}.
We present the SW correction $\delta\mwp$ in \eqref{app:1loopwmass} of
App.~\ref{app:electroweak-correction}, expressed with two free parameters,
$\msp$ and $\sin^2\tS$. For the case of a light new scalar, $\msp<\mhp$, the SW
correction is positive, while for a heavy one it is negative.
The excluded regions obtained by (a) $\delta\mwp > 53$\,MeV and
(b) $\delta\mwp < -15$\,MeV are presented in Fig.~\ref{fig:final_result}
overlaid on the region where the new scalar particle is allowed by vacuum
stability, perturbativity and precision measurement of the Higgs boson mass.
We see that the $W$-mass measurement at the present uncertainties does not provide
a significant reduction of the parameter space. However, if the improved
measurement published recently by the CDF collaboration \cite{CDF:2022hxs}
is confirmed, the stability of the vacuum in the high-mass region becomes
incompatible with the CDF-II $W$ mass, as in that case the SW correction to the
SM value is negative. The low-mass
region also becomes significantly constrained.
In Fig.~\ref{fig:final_result-CDF} we present the allowed parameter space together
with contour lines representing the border of the excluded parameter space (below
the line) assuming an increase $\delta\mwp$ of the $W$ mass by selected
benchmark values due to the new scalar in the self-energy loop. Clearly,
the large positive shift required to explain the CDF-II result is not
compatible with the conditions of stability and perturbativity of the scalar
sector of the model.
\begin{figure}[t]
\includegraphics[width=0.48\linewidth]{SW2_thetams_CDF_lines_v2.pdf}
\caption{\label{fig:final_result-CDF}
Allowed parameter space in the $\msp-|\sin(\Theta_S)|$
plane at representative values of $y_x$ at two-loop accuracy for $\msp > \mhp$.
The contours at selected values of $\delta\mwp$ represent the
borderline in the parameter space below which the new scalar cannot
be solely responsible for the increase of the $W$ boson mass by $\delta\mwp$
with respect to the SM value.}
\end{figure}
\section{Conclusions and outlook}
In this paper we have scanned the parameter space of the superweak extension of the
standard model in order to find the region of the scalar sector where
the following assumptions are fulfilled: (i) the vacuum is stable,
(ii) the model parameters remain perturbative up to the Planck scale, and
(iii) the pole mass of the Higgs boson falls into its experimentally measured
range. The first two of these constraints were taken into account in our preliminary
work \cite{Peli:2019vtp}. In this paper we supersede that former study by taking
into account the two-loop corrections both in the renormalization group equations
of the running couplings and in relating the couplings to the measured mass of the Higgs
boson. We have taken into account the largest neutrino Yukawa coupling
$y_x$. In the limit of vanishing $y_x$ and neglecting the superweak gauge
coupling, the model essentially reduces to the singlet scalar extension of
the standard model.
In the two-loop analysis we found a non-empty region in the $\msp-\sin\tS$ parameter
space for $\msp<\mhp$, increasing with $y_x$ up to $y_x(\mtp) \simeq 1$ where
the condition of stability is not fulfilled any longer. Such a region has been
missed in the case of $y_x=0$ in earlier analyses of the singlet scalar extension
performed only at one-loop accuracy (see e.g.~\cite{Falkowski:2015iwa}).
Of course, there are many experimental results that also constrain the parameter
space. The new physics contributions to electroweak precision observables as well
as direct searches for the decay of a scalar particle into standard model ones
provide strong constraints. Of those, we have studied only the effect of the
experimental result on the mass of the $W$ boson in this paper. We saw that
while $\mwp$ can indeed limit the parameter space, the current world average
without the new CDF-II result cannot provide further constraint on the parameter
space. If we also include the CDF-II result in the average -- in spite of it being
incompatible with the previous average -- then the parameter space allowed
by our assumptions becomes incompatible with the $W$-mass constraint.
Clearly, it is of utmost importance to take into account all the available
experimental constraints, not only from collider experiments but also from
neutrino experiments. Such a complete study of the parameter space is beyond
the scope of the present paper and we leave it to an upcoming study where we
plan to use the analytic expressions of the present work.
\section*{Acknowledgments}
We are grateful to members of the ELTE PPPhenogroup (pppheno.elte.hu/people),
especially to Josu Hernandez-Garcia for useful discussions.
This work was supported by grant K 125105 of the National Research,
Development and Innovation Fund in Hungary.
\section{Introduction}
\begin{wrapfigure}[18]{r}{0.50\textwidth}
\centering
\vspace{-2cm}
\includegraphics[width = \linewidth]{figures/gd-ls.pdf} \vspace{-0.75cm}
\caption{As the model grows ($x$-axis), the standard deviation (shaded region) in the halting time of gradient descent on random least squares vanishes and the halting time becomes {\bfseries predictable}. Note also a \textbf{universality} phenomenon, that is, the halting time limit is the same for problems generated from different distributions. (See Sec.~\ref{sec:numerical_simulations} for a description of simulations.)}
\label{fig:gd-ls}
\end{wrapfigure}
Traditional worst-case analysis of optimization algorithms provides complexity bounds for any input, no matter how unlikely \citep{nemirovski1995information, nesterov2004introductory}. It gives convergence guarantees, but the bounds are not always representative of the typical runtime of an algorithm.
In contrast, average-case analysis gives sharper runtime estimates when some or all of its inputs are random. This is often paired with concentration bounds that quantify the spread of those estimates.
In this way, it is more representative of the typical behavior.
Yet, average-case analysis is rarely used in optimization because the complexity of algorithms is assumed to depend on the specific probability distribution of the inputs. Surprisingly, simulations reveal this is not the case for large-scale problems (see Figure \ref{fig:gd-ls}).
We show that almost all instances of high-dimensional data are indistinguishable to first-order algorithms. Particularly, the \emph{halting time}, i.e.\ the number of iterations to reach a given accuracy, for any first-order method converges to a deterministic value which is independent of the input distribution (see Figure~\ref{fig:gd-ls}). Since the halting time is deterministic, the empirical complexity coincides almost surely with the average-case rates.
\renewcommand{\arraystretch}{2}
\ctable[notespar,
caption = {\textbf{Comparison of convergence guarantees for non-strongly convex objectives} in terms of asymptotic behavior of $\|\nabla f({\boldsymbol x}_k)\|^2$ as problem size and iteration are large in the isotropic features model. The average-case guarantees are strictly faster than the traditional worst-case and adversarial rates. Furthermore, the traditional worst-case complexity bounds depend on the distance to the optimum which under our constant signal-to-noise model, grows as the problem size, or dimension, $d$, increases. The `without noise' setting refers to the case when the targets ${\boldsymbol b}$ equal ${\boldsymbol A} \widetilde{{\boldsymbol x}}$ with $\widetilde{{\boldsymbol x}}$ the signal and the `noisy' setting when the targets ${\boldsymbol b}$ follow a generative model but are corrupted by noise, that is, ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$, where ${\boldsymbol \eta}$ is a noise vector. The rates are stated in terms of an absolute constant $C$, the amount of signal $R$ and noise $\widetilde{R}$, the ratio of number of features to samples $d/n \to r \in (0,\infty)$, and the maximum $\lambda^+$ and minimum $\lambda^-$ eigenvalues. Denote $\|J_1^2(x)\|_{\infty}$ the maximum value of the squared Bessel function of the first kind $(J_1(x))$ over $[0, \infty)$. See Section~\ref{sec: average_case} and \ref{sec: avg_derivations} for derivations and definitions of terms such as non-strongly convex.},
captionskip=2ex,
label={tab:comparison_worst_avg_cvx},
pos =ht!
]{clll}{\tnote[1]{In the noisy setting, we lower bounded $\|{\boldsymbol x}_0-{\boldsymbol x}^*\|^2$ by $d$ (see Lemma~\ref{lem:growth_dist_optimum}) to the worst-case complexity bound provided in \citet[Section 4.1.3]{taylor2017smooth}. } \tnote[2]{\cite{nesterov2004introductory,Beck2009Fast}}
\tnote[3]{\cite{nesterov2012how}} \tnote[4]{Adversarial model maximizes the norm of the gradient subject to a fixed condition number (see Section~\ref{sec: average_case}).} \tnote[5]{When noise is added, the convergence rates are dominated by the term with $\widetilde{R}$ in \eqref{eq: something_1_main}.}}{
\toprule
\textbf{Method} & & \begin{minipage}{0.3\textwidth} \begin{center} \textbf{Non-strongly cvx\\
w/o noise} \end{center} \end{minipage} & \begin{minipage}{0.3\textwidth} \begin{center} \textbf{Non-strongly cvx\\ w/ noise}\tmark[5] \end{center} \end{minipage}\\
\midrule
\multirow{3}{*}{\begin{minipage}{0.1\textwidth} \begin{center} Gradient descent (GD) \end{center} \end{minipage}} & Worst\tmark[1] & $\textcolor{teal}{\mfrac{1}{(k+1)^2}} \cdot R^2 (\lambda^+)^2$ & $\textcolor{teal}{\mfrac{\textcolor{purple}{d}}{(k+1)^2}} \cdot \widetilde{R}^2 (\lambda^+)^2 C$ \\
\cmidrule(r){2-4}
& Adversarial\tmark[4] & $\textcolor{teal}{\mfrac{1}{(k+1)^2}} \cdot \mfrac{R^2 (\lambda^+)^2}{e^{2}} $ &
$\textcolor{teal}{\mfrac{1}{k}} \cdot \mfrac{\widetilde{R}^2 \lambda^+}{2}$\\
\cmidrule(r){2-4}
& Average & $\textcolor{teal}{\mfrac{1}{k^{5/2}}} \cdot \mfrac{R^2 (\lambda^+)^2 \Gamma(5/2)}{2^{3/2} \pi}$ &
$\textcolor{teal}{\mfrac{1}{k^{3/2}}} \cdot \mfrac{\widetilde{R}^2 \lambda^+ \Gamma (3/2 )}{2^{1/2} \pi}$\\
\midrule
\multirow{3}{*}{\begin{minipage}{0.16\textwidth} \begin{center} Nesterov\\ accelerated method \tmark[2] \end{center} \end{minipage}}
& Worst \tmark[3] & $\textcolor{teal}{\mfrac{1}{k(k+2)^2}} \cdot 8 R^2 (\lambda^+)^2$ & $\textcolor{teal}{\mfrac{\textcolor{purple}{d}}{k(k+2)^2}} \cdot 8\widetilde{R}^2 (\lambda^+)^2 C$ \\
\cmidrule(r){2-4}
& Adversarial & $\textcolor{teal}{\mfrac{1}{k^{7/2}}} \cdot \mfrac{8e^{-1/2}}{\sqrt{2} \pi} R^2 (\lambda^+)^2$ &
$\textcolor{teal}{\mfrac{1}{k^{2}}} \cdot \|J_1^2(x)\|_{\infty} \widetilde{R}^2 \lambda^+ $ \\
\cmidrule(r){2-4}
& Average & $\textcolor{teal}{\mfrac{1}{k^4}} \cdot \mfrac{8 R^2 (\lambda^+)^2}{\pi^2}$ &
$\textcolor{teal}{\mfrac{\log(k)}{k^3}} \cdot \mfrac{4\widetilde{R}^2 \lambda^+}{\pi^2 }$\\
\bottomrule}
\renewcommand{\arraystretch}{2.5}
\ctable[notespar, caption = {\textbf{Comparison of convergence guarantees for strongly convex objectives} in terms of asymptotic behavior of $\|\nabla f({\boldsymbol x}_k)\|^2$ as problem size and iteration are large in the isotropic features model. Average-case matches the worst-case asymptotic guarantees multiplied by an additional \textit{polynomial correction term} (\textcolor{teal}{green}). This polynomial term has little effect on the complexity compared to the linear rate. However as the matrix ${\boldsymbol H} = \tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ becomes ill-conditioned $(r\to1)$, the polynomial correction starts to dominate the average-case complexity. Indeed this shows that the support of the spectrum does not fully determine the rate. \textit{Many} eigenvalues contribute meaningfully to the average-rate.
See Section~\ref{sec: avg_derivations} for derivations and Table~\ref{tab:comparison_worst_avg_cvx} for definition of terms in the rates.},
label= {tab:comparison_worst_avg_str_cvx},
captionskip=2ex,
pos = ht!
]{cll}{\tnote[1]{\citet[Section 4.1.3]{taylor2017smooth}}}{
\toprule
\textbf{Method} & & \textbf{Strongly cvx w/ noise} \\
\midrule
\multirow{2}{*}{\begin{minipage}{0.15\textwidth} \begin{center} Gradient descent (GD) \end{center} \end{minipage}} & Worst\tmark[1] & $\textcolor{purple}{\big (1- \frac{\lambda^-}{\lambda^+} \big )^{2k}} (\lambda^+)^2 $\\
\cmidrule(r){2-3}
& Average & $\textcolor{purple}{\big (1- \frac{\lambda^-}{\lambda^+} \big )^{2k}} \textcolor{teal}{\mfrac{1}{k^{3/2}}} \big [ R^2 \lambda^- + \widetilde{R}^2 r \big ] \cdot C $\\
\midrule
\multirow{2}{*}{\begin{minipage}{0.15\textwidth} \begin{center} Polyak\\ \citep{Polyak1962Some} \end{center} \end{minipage}}
& Worst & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{2k}} \cdot C$ \\
\cmidrule(r){2-3}
& Average & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{2k}} \big [\frac{(\lambda^+-\lambda^-)}{2}R^2 + \widetilde{R}^2r \big ] \cdot C$ \\
\midrule
\multirow{2}{*}{\begin{minipage}{0.2\textwidth} \begin{center} Nesterov accelerated method\\ \citep{nesterov2004introductory} \end{center} \end{minipage}} & Worst & $\textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{k} \big (1- \frac{\lambda^-}{\lambda^+} \big )^k} \cdot C$ \\
\cmidrule(r){2-3}
& Average & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{k} \big (1- \frac{\lambda^-}{\lambda^+} \big )^k} \big [\textcolor{teal}{\mfrac{1}{k^{1/2}}} \cdot R^2 \lambda^- + \textcolor{teal}{\mfrac{1}{k^{1/2}}} \cdot \widetilde{R}^2 r \big ] \cdot C$ \\
\bottomrule}
\paragraph{Notation.} We write vectors in lowercase boldface (${\boldsymbol x}$) and matrices in uppercase boldface (${\boldsymbol H}$). The norm $\|{\boldsymbol x}\|_2^2 = {\boldsymbol x}^T {\boldsymbol x}$ gives the usual Euclidean $2$-norm and $\|{\boldsymbol H}\|_{\text{op}} = \text{maximum singular value of ${\boldsymbol H}$}$ is the usual operator-2 norm. Given a matrix ${\boldsymbol H} \in {\mathbb R}^{d \times d}$, the largest eigenvalue of ${\boldsymbol H}$ is $\lambda_{{\boldsymbol H}}^+$ and its smallest eigenvalue is $\lambda_{{\boldsymbol H}}^-$. A sequence of random variables $\{y_d\}_{d =0}^\infty$ converges in probability to $y$, indicated by $y_d \Prto[d] y$, if for any $\varepsilon > 0$, $\displaystyle \lim_{d \to \infty} \Pr(|y_d-y| > \varepsilon) = 0$. In other words, the probability that $y_d$ is far from $y$ goes to $0$ as $d$ increases. Probability measures are denoted by $\mu$ and their densities by $\mathop{}\!\mathrm{d}\mu$. We say a sequence of random measures $\mu_d$ converges to $\mu$ weakly in probability if for any bounded continuous function $f$, we have $\int f \mathop{}\!\mathrm{d}\mu_d \to \int f \mathop{}\!\mathrm{d}\mu$ in probability.
All stochastic quantities defined hereafter live on a probability space denoted by $(\Pr, \Omega, \mathcal{F})$ with probability measure $\Pr$ and the $\sigma$-algebra $\mathcal{F}$ containing subsets of $\Omega$. A random variable (vector) is a measurable map from $\Omega$ to ${\mathbb R}$ $({\mathbb R}^d)$ respectively. Let $X : (\Omega, \mathcal{F}) \mapsto ({\mathbb R}, \mathcal{B})$ be a random variable mapping into the Borel $\sigma$-algebra $\mathcal{B}$ and the set $B \in \mathcal{B}$. We use the standard shorthand for the event $\{X \in B\} = \{\omega : X(\omega) \in B\}$.\\
\subsection{Main results}
In this paper, we analyze the halting time and develop the first explicit average-case analysis for first-order methods on quadratic objectives.
Quadratic objective functions are rich enough to reproduce the dynamics that arise in more complex models, yet simple enough to be understood in closed form.
Quadratic models are receiving renewed interest in the machine learning community as recent advances have shown that over-parameterized models, including neural networks, have training dynamics similar to those of quadratic problems~\citep{jacot2018neural, novak2018bayesian, arora2019exact, chizat2019lazy}.
The precise form of the quadratic problem we consider is
\begin{equation} \label{eq:LS_main}
\vspace{0.5em}\argmin_{{\boldsymbol x} \in {\mathbb R}^d} \Big \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x}-{\boldsymbol b}\|^2 \Big \}, \quad \text{with } {\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\,,
\end{equation}
where ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is the data matrix, $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$ is the signal vector\footnote{The signal $\widetilde{{\boldsymbol x}}$ is not the same as the vector to which the iterates of the algorithm converge as $k \to \infty$.}, and ${\boldsymbol \eta} \in {\mathbb R}^n$ is a source of noise. All of these inputs will possibly be random, and the target ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$ is produced by a generative model corrupted by noise. We refer to the noiseless (without noise) setting when ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}}$ and to the noisy setting when ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$.
We work in the following setting: Both the number of features $(d)$ and data dimension $(n)$ grow to infinity while $d/n$ tends to a fixed $r \in (0,\infty)$.
We use $\widetilde{R}^2=\frac{1}{d}\mathbb{E}\left[\|{\boldsymbol \eta}\|^2\right]$ to denote the magnitude of the noise. For intuition, we implicitly define $R^2 \approx \frac{1}{d}\|{\boldsymbol b}\|^2 - \widetilde{R}^2$ to measure the strength of the signal{\footnote{The definition of $\widetilde{R}^2$ in Assumption~\ref{assumption: Vector} does not imply that $R^2 \approx \frac{1}{d}\|{\boldsymbol b}\|^2 - \widetilde{R}^2$. However the precise definition of $\widetilde{R}$ and this intuitive one yield similar magnitudes and both are generated from similar quantities. }}; we make the definition of $\widetilde{R}^2$ precise in Assumption \ref{assumption: Vector} of Section~\ref{sec: problem_setting}, one of two assumptions fundamental to this work. Throughout, the signal-to-noise ratio in $[0,\infty]$ is held constant as the problem size grows.
Moreover, we assume that the data matrix ${\boldsymbol A}$ is \emph{independent} of both the signal, $\widetilde{{\boldsymbol x}}$, and noise ${\boldsymbol \eta}.$ Note this, together with the generative model, allows for some amount of dependence between ${\boldsymbol A}$ and the target ${\boldsymbol b}.$ We will also assume that ${\boldsymbol H} \stackrel{\text{def}}{=} \frac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ has a well-defined \textit{limiting spectral density}, denoted by $\mathop{}\!\mathrm{d}\mu$, as $n,d \to \infty$ (see Assumption \ref{assumption: spectral_density} of Section~\ref{sec: problem_setting}).
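This setting is simple to simulate directly. The sketch below (our illustration,
with unit-scale signal and noise) draws problems from the isotropic features model,
runs gradient descent with step size $1/\lambda_{{\boldsymbol H}}^+$, and records the halting
time; repeating it for growing $d$ at fixed $r=d/n$ reproduces the concentration
seen in Figure~\ref{fig:gd-ls}:
\begin{verbatim}
import numpy as np

def halting_time(n, d, eps=1e-6):
    A = np.random.randn(n, d)                  # isotropic features, sigma = 1
    xtilde = np.random.randn(d) / np.sqrt(d)   # signal vector
    eta = np.random.randn(n)                   # noise vector
    b = A @ xtilde + eta
    step = 1.0 / np.linalg.eigvalsh(A.T @ A / n)[-1]   # 1 / lambda_max(H)
    x = np.zeros(d)
    for k in range(1, 100000):
        grad = A.T @ (A @ x - b) / n           # gradient of f in Eq. (LS_main)
        if grad @ grad <= eps:
            return k
        x = x - step * grad
    return None

for d in (50, 200, 800):                       # r = d/n = 1/2 held fixed
    runs = [halting_time(2*d, d) for _ in range(10)]
    print(d, np.mean(runs), np.std(runs))      # std shrinks as d grows
\end{verbatim}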
Our first contribution is a framework to analyze the average-case complexity of gradient-based methods in the described setting. Our framework highlights how the algorithm, signal and noise levels interact with each other to produce different average-case convergence guarantees. The culmination of this framework is the average-case convergence rates for first-order methods (see Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}).
\begin{wrapfigure}[15]{r}{0.47\textwidth}
\centering
\vspace{-0.55cm}
\includegraphics[width = 0.85\linewidth]{figures/mp_pdf.pdf}
\vspace{-0.25cm}
\caption{The spectrum of matrices $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ under the isotropic features model converges as $n,d \to \infty$ to the \emph{Mar\v{c}enko-Pastur} distribution, shown here for different values of $r = d/n$.}
\label{fig:MP}
\end{wrapfigure}
Our framework is broad enough to facilitate multiple perspectives on average-case analysis. Our motivating and central application is the \emph{fully-average-case}, in which we assume that all inputs are random. The quintessential random data model is \emph{isotropic features}. This supposes the entries of ${\boldsymbol A}$ are i.i.d.\ random variables with zero mean, equal variance, and bounded fourth moments, that is, ${\mathbb E}\,[A_{ij}] = 0, {\mathbb E}\,[A_{ij}^2] = \sigma^2, {\mathbb E}\,[A_{ij}^4] < \infty$ for all $i, j$. In a celebrated theorem of \cite{marvcenko1967distribution}, the spectrum of ${\boldsymbol H} = \frac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ converges to a compactly supported measure as the problem size grows \emph{without any further assumptions on the distribution of the entries of ${\boldsymbol A}$}. This limiting spectral distribution is known as the Mar\v{c}enko-Pastur law:
\begin{equation} \label{eq:MP}
\begin{gathered} \mathop{}\!\mathrm{d}\mu_{\mathrm{MP}}(\lambda) \stackrel{\text{def}}{=} \delta_0(\lambda) \max\{1-\tfrac{1}{r}, 0\} + \frac{\sqrt{(\lambda-\lambda^-)(\lambda^+-\lambda)}}{2 \pi \lambda \sigma^2 r} 1_{[\lambda^-, \lambda^+]}\,,\\
\text{where} \qquad \lambda^- \stackrel{\text{def}}{=} \sigma^2(1 - \sqrt{r})^2 \quad \text{and} \quad \lambda^+ \stackrel{\text{def}}{=} \sigma^2(1+ \sqrt{r})^2\,.
\end{gathered}
\end{equation}
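The convergence to \eqref{eq:MP} is easy to observe numerically; the following
sketch (our illustration, for $r<1$) compares a sampled spectrum with the
Mar\v{c}enko-Pastur density:
\begin{verbatim}
import numpy as np

n, d, sigma = 4000, 1000, 1.0
r = d / n
A = sigma * np.random.randn(n, d)   # any i.i.d. entry law works (universality)
evals = np.linalg.eigvalsh(A.T @ A / n)

lam_m = sigma**2 * (1 - np.sqrt(r))**2
lam_p = sigma**2 * (1 + np.sqrt(r))**2
lam = np.linspace(lam_m + 1e-9, lam_p - 1e-9, 400)
mp = np.sqrt((lam - lam_m) * (lam_p - lam)) / (2*np.pi*lam*sigma**2*r)

hist, _ = np.histogram(evals, bins=50, range=(lam_m, lam_p), density=True)
# 'hist' tracks 'mp' (at the bin centres) up to finite-d fluctuations
\end{verbatim}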
However, our framework is built to be vastly more general. To start, the framework covers a fully-average-case analysis with other data models, such as the one-hidden layer network with random weights and the correlated features model (see Section \ref{sec: data_generate}).
More to the point, this framework also allows for a type of semi-average-case analysis, in which only ${\boldsymbol b}$ is taken to be random. When we do this and then choose ${\boldsymbol A}$ in such a way as to maximize the halting time, we call this the \emph{adversarial average-case}. See Section \ref{sec: average_case} for further details and motivations.
We now discuss the contents of this framework in detail,
which is to say we survey how Assumptions \ref{assumption: Vector} and \ref{assumption: spectral_density} combine to show the halting time is concentrated and deterministic.
The first step is to express the conditional expectation of the gradient at the $k$-th iterate as a sum of expected traces of polynomials in the matrix ${\boldsymbol H} = \frac{{\boldsymbol A}^T {\boldsymbol A}}{n}$ (c.f.\ Proposition~\ref{proposition:conditional}):
\begin{equation} \label{eq:conditional_main_result}
{\mathbb E}\,[\|\nabla f({\boldsymbol x}_k)\|^2 \, | \, {\boldsymbol H}] = \tfrac{R^2}{d} \text{tr} \big ( {\boldsymbol H}^2 P_k^2({\boldsymbol H}) \big ) + \tfrac{\widetilde{R}^2}{n} \text{tr} \big ( {\boldsymbol H} P_k^2({\boldsymbol H}) \big ).
\end{equation}
The polynomial $P_k$, known as the \textit{residual polynomial}, is a $k$-th degree polynomial associated with the gradient-based algorithm. This tool of associating each algorithm with polynomials is a classic technique in numerical iterative methods for solving linear systems \citep{Flanders1950Numerical,golub1961chebyshev,fischer1996polynomial,rutishauser1959refined}. Such polynomials are used to prove convergence of some of the most celebrated algorithms like the conjugate gradient method \citep{Hestenes&Stiefel:1952}. Explicit expressions of the residual polynomials for Nesterov's accelerated methods \citep{nesterov2004introductory, Beck2009Fast}, both convex and strongly convex, as well as gradient descent and Polyak's momentum (a.k.a.\ Heavy-ball) \citep{Polyak1962Some}, are derived in Section~\ref{sec: poly}. These polynomials may be of independent interest.
The result in \eqref{eq:conditional_main_result} gives an \textit{exact expression} for the expected gradient depending only on traces of powers of ${\boldsymbol H}$, which in turn can be expressed in terms of its \emph{eigenvalues}. Our second main assumption (Assumption \ref{assumption: spectral_density}) then ensures that these traces converge to integrals against the spectral density $\mathop{}\!\mathrm{d}\mu.$ In summary, the squared gradient norm concentrates to a deterministic quantity, \footnote{
In many situations this deterministic quantity
$\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$
is in fact the limiting expectation of the squared-norm of the gradient. However, under the assumptions that we are using, this does not immediately follow. It is however always the limit of the median of the squared-norm of the gradient.}\footnote{Technically, there is no need to assume the measure $\mu$ has a density -- the theorem holds just as well for any limiting spectral measure $\mu$. In fact, a version of this theorem can be formulated at finite $n$ just as well, thus dispensing entirely with Assumption \ref{assumption: spectral_density} -- c.f.\ Proposition \ref{proposition:conditional}.} $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$:
\begin{theorem}[Concentration of the gradient] \label{thm: concentration_main}
Under Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \label{eq: something_1_main} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] \textcolor{teal}{\overbrace{R^2}^{\text{signal}}} \int { \underbrace{\lambda^2 P_k^2(\lambda)}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda)}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} } \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,.
\end{equation}
\end{theorem}
\noindent
Notably, the deterministic value around which the gradient concentrates depends on ${\boldsymbol H}$ only through its eigenvalues.
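To make Theorem~\ref{thm: concentration_main} concrete, the following minimal sketch (in Python; not part of the formal development) numerically evaluates the right-hand side of \eqref{eq: something_1_main} for gradient descent, whose residual polynomial is $P_k(\lambda) = (1-\lambda/\lambda^+)^k$ (see Section~\ref{sec: poly}), under the Mar\v{c}enko-Pastur law of the isotropic features model. The magnitudes $R^2$, $\widetilde{R}^2$, the ratio $r$, and the variance $\sigma^2$ are illustrative choices.
\begin{verbatim}
# Minimal sketch: evaluate the deterministic limit of ||grad f(x_k)||^2
# for gradient descent against the Marcenko-Pastur density (r <= 1).
# All constants below are illustrative choices.
import numpy as np
from scipy.integrate import quad

R2, Rt2 = 1.0, 0.25                       # assumed signal / noise magnitudes
r, sigma2 = 0.5, 1.0                      # assumed ratio d/n and entry variance
lam_m = sigma2 * (1 - np.sqrt(r)) ** 2    # bottom edge lambda^-
lam_p = sigma2 * (1 + np.sqrt(r)) ** 2    # top edge lambda^+
alpha = 1.0 / lam_p                       # GD step size 1/lambda^+

def mp_density(lam):
    # Marcenko-Pastur density of H = A^T A / n when d/n -> r <= 1
    return np.sqrt((lam_p - lam) * (lam - lam_m)) / (2 * np.pi * sigma2 * r * lam)

def limit_grad_sq(k):
    P2 = lambda lam: (1 - alpha * lam) ** (2 * k)   # P_k^2 for gradient descent
    signal, _ = quad(lambda l: l ** 2 * P2(l) * mp_density(l), lam_m, lam_p)
    noise, _ = quad(lambda l: l * P2(l) * mp_density(l), lam_m, lam_p)
    return R2 * signal + Rt2 * r * noise

for k in (0, 10, 100):
    print(k, limit_grad_sq(k))
\end{verbatim}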
The concentration of the gradient norm above yields a candidate for the limiting value of the halting time, that is, the first time the gradient $\|\nabla f({\boldsymbol x}_k)\|^2$ falls below some predefined $\varepsilon$. We define this candidate $\tau_{\varepsilon}$ from $ \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ and the halting time $T_{\varepsilon}$ by
\begin{align}
\tau_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \le \varepsilon\} \quad \text{and} \quad T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{align}
We note that the deterministic value $\tau_{\varepsilon}$ is, by definition, the average complexity of the first-order algorithm. This leads to our second main result, which states that with probability tending to one the halting time equals this deterministic value.
\begin{theorem}[Halting time universality] \label{thm: Halting_time_main} Fix an $\varepsilon > 0$ and suppose $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \neq \varepsilon$ for all $k$. Under Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density},
\begin{empheq}[box=\mybluebox]{equation}
\vphantom{\sum_i^n}\lim_{d \to \infty} \Pr(T_{\varepsilon} = \tau_{\varepsilon} ) = 1\,.
\end{empheq}
\end{theorem}
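In practice, the deterministic halting time $\tau_{\varepsilon}$ can be computed by direct search over the iteration counter; continuing the sketch above (same illustrative constants):
\begin{verbatim}
# Continuing the previous sketch: tau_eps is the first k at which the
# deterministic limit falls below the tolerance eps.
def tau_eps(eps, k_max=10_000):
    for k in range(1, k_max + 1):
        if limit_grad_sq(k) <= eps:
            return k
    return None  # tolerance not reached within k_max iterations

print(tau_eps(1e-6))
\end{verbatim}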
A result of this form previously appeared in \cite{deift2019conjugate} for the conjugate gradient method.
\subsubsection{Extension beyond least squares, ridge regression} \label{sec:ridge_regression_main}
One extension of Theorems~\ref{thm: concentration_main} and \ref{thm: Halting_time_main} to other objective functions is the ridge regression problem, or $\ell_2$-regularization; that is, we consider a problem of the form
\begin{equation} \label{eq:ridge_regression_main}
\argmin_{{\boldsymbol x} \in \mathbb{R}^d} \left \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} - {\boldsymbol b}\|^2 + \frac{\gamma}{2} \|{\boldsymbol x}\|^2 \right \}, \quad \text{with ${\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$\,.}
\end{equation}
As discussed above, we assume that ${\boldsymbol A} \in \mathbb{R}^{n \times d}$ is a (possibly random) data matrix, $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in \mathbb{R}^n$ is a noise vector. We make the same assumptions on the limiting spectral measure of ${\boldsymbol H} = \frac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$, on the ratio of features to samples, that is, $d/n$ tends to some fixed $r \in (0,\infty)$ as $d \to \infty$, and on the magnitude of the noise $\widetilde{R}^2 = \tfrac{1}{d} \mathbb{E}[\|{\boldsymbol \eta}\|^2]$. In addition to the independence assumption between the data matrix ${\boldsymbol A}$ and the signal $\widetilde{{\boldsymbol x}}$ and ${\boldsymbol x}_0$, we assume that the signal and the initialization are also independent of each other with magnitudes $\mathbb{E}[\|{\boldsymbol x}_0\|^2] = \dot{R}^2$ and $\mathbb{E}[\|\widetilde{{\boldsymbol x}}\|^2] = \widehat{R}^2$ (see Assumption~\ref{assumption:ridge_vector} for the precise statement). The constant $\gamma > 0$ is the ridge regression parameter.
The addition of the $\ell_2$-regularizer to the least squares problem alters the Hessian of the least squares by adding a multiple of the identity. Therefore the matrix ${\boldsymbol M} \stackrel{\text{def}}{=} {\boldsymbol H} + \gamma {\boldsymbol I}$ and its eigenvalues play the role of ${\boldsymbol H}$ and its eigenvalue in Theorem~\ref{thm: concentration_main}. The result is the following theorem.
\begin{theorem}[Concentration of the gradient for ridge regression] \label{thm: concentration_main_main_ridge}
Under Assumptions~\ref{assumption:ridge_vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \begin{aligned} \label{eq: something_1_main_ridge_main} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] &\textcolor{teal}{\overbrace{\dot{R}^2}^{\text{initial.}}} \! \!\! \int { \underbrace{(\lambda + \gamma)^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{teal}{\overbrace{\widehat{R}^2}^{\text{signal}}} \! \!\! \int { \underbrace{\lambda^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } \\
& \quad \quad + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} }. \end{aligned}
\end{equation}
\end{theorem}
Here $\mathop{}\!\mathrm{d} \mu$ is the limiting spectral density of ${\boldsymbol H}$. The limiting gradient \eqref{eq: something_1_main_ridge_main} decomposes into three terms which highlight the effects of initialization, signal, and noise. This is unlike the two terms in \eqref{eq: something_1_main}, which illustrate the noise and signal/initialization effects. The extra $\dot{R}^2$ term in \eqref{eq: something_1_main_ridge_main} only adds to the magnitude of the gradient due to the independence between the signal and initialization. We also note that the matrix ${\boldsymbol M}$ always has eigenvalues bounded away from $0$, even in the limit as $d \to \infty$. As such, we expect linear convergence. By defining the right-hand side of \eqref{eq: something_1_main_ridge_main} to be $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$, it follows that Theorem~\ref{thm: Halting_time_main} holds under Assumption~\ref{assumption:ridge_vector} in place of Assumption~\ref{assumption: Vector}. For additional discussion see Section~\ref{sec:ridge_regression}.
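As an illustration, the three integrals in \eqref{eq: something_1_main_ridge_main} can be evaluated numerically just as in the least squares case. The sketch below does this for gradient descent, under the assumption (consistent with the form of the theorem) that its residual polynomial is evaluated at $\lambda + \gamma$ with step size $1/(\lambda^+ + \gamma)$; it reuses \texttt{mp\_density}, \texttt{lam\_m}, and \texttt{lam\_p} from the earlier sketch, and all magnitudes are illustrative.
\begin{verbatim}
# Sketch of the ridge limit for gradient descent, whose residual polynomial
# is here assumed to be evaluated at lambda + gamma. Reuses mp_density,
# lam_m, lam_p from the earlier sketch; constants are illustrative.
from scipy.integrate import quad

gamma = 0.1
Rdot2, Rhat2, Rt2, r = 1.0, 1.0, 0.25, 0.5
alpha_ridge = 1.0 / (lam_p + gamma)

def ridge_limit(k):
    P2 = lambda lam: (1 - alpha_ridge * (lam + gamma)) ** (2 * k)
    init, _ = quad(lambda l: (l + gamma) ** 2 * P2(l) * mp_density(l), lam_m, lam_p)
    signal, _ = quad(lambda l: l ** 2 * P2(l) * mp_density(l), lam_m, lam_p)
    noise, _ = quad(lambda l: l * P2(l) * mp_density(l), lam_m, lam_p)
    return Rdot2 * init + Rhat2 * signal + Rt2 * r * noise
\end{verbatim}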
\subsection{Comparison between average and worst-case} \label{sec: average_case}
The average-case analysis we develop in this paper is effective in the large problem size limit, whereas worst-case analysis is performed for a fixed matrix size. This implies that there are potentially \emph{dimension-dependent} quantities which must be addressed when making a comparison.
For example, all the first-order methods considered here converge linearly for the finite-dimensional least squares problem: the rate is determined by the gap between the smallest nonzero eigenvalue of the matrix ${\boldsymbol H}$ and $0$. However this could very well be meaningless in the context of a high-dimensional problem, as this gap becomes vanishingly small as the problem size grows.
In the context of the isotropic features model, when the ratio of features to samples $r$ is $1,$ this is precisely what occurs: the smallest eigenvalues tend to $0$ as the matrix size grows. In contrast, when $r$ is bounded away from $1$, the least squares problem in \eqref{eq:LS_main} has a \emph{dimension-independent} lower bound on the Hessian which holds with overwhelming probability (c.f.\ Figure~\ref{fig:MP}). However, for the comparison we do here, there is another dimension-dependent quantity which will have a greater impact on the worst-case bounds.
Before continuing, we remark on some terminology we will use throughout the paper. While for any realization of the least squares problem the Hessian ${\boldsymbol H}$ is almost surely positive definite, as the problem size grows, the matrix ${\boldsymbol H}$ can become ill-conditioned, that is, the smallest eigenvalues tend to $0$ as $n \to \infty$ when $r = 1$. Consequently, the computational complexity of first-order algorithms as $n \to \infty$ exhibits rates similar to non-strongly convex problems. On the other hand, when $r$ is bounded away from $1$, the gap between the smallest nonzero eigenvalue of ${\boldsymbol H}$ and 0 results in first-order methods having complexity rates similar to strongly convex problems. We use this terminology, \textit{non-strongly convex} and \textit{strongly convex}, in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx} to distinguish the different convergence behaviors when $r = 1$ and $r \neq 1$, respectively, and for worst-case complexity comparisons.
\paragraph{Worst-case rates and the distance to optimality.}
Typical worst-case upper bounds for first-order algorithms depend on the distance to optimality, ${\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}$. For example, let us consider gradient descent (GD). Tight worst-case bounds for GD in the strongly convex and convex setting \citep{taylor2017smooth}, respectively, are
\begin{gather*}
\|\nabla f({\boldsymbol x}_k)\|^2 \le (\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \left ( 1- \tfrac{\lambda_{{\boldsymbol H}}^-}{\lambda_{{\boldsymbol H}}^+} \right )^{2k} \stackrel{\text{def}}{=} \mathrm{UB}_{\text{sc}}(\|\nabla f({\boldsymbol x}_k)\|^2)\\
\text{and} \quad \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \stackrel{\text{def}}{=} \mathrm{UB}_{\text{cvx}}(\|\nabla f({\boldsymbol x}_k)\|^2),
\end{gather*}
where ${\boldsymbol x}^{\star}$ is the solution to \eqref{eq:LS_main} found by the algorithm, i.e., the iterates of the algorithm converge ${\boldsymbol x}_k \to {\boldsymbol x}^{\star}$.
To formulate a comparison between the fully-average-case rates, where ${\boldsymbol A}$ follows isotropic features, and the worst-case rates, we must make an estimate of this distance to optimality ${\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}$. In the noiseless setting $(\widetilde{R} = 0)$, the expectation of $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is a constant multiple of $R^2$. In particular, it is independent of the dimension. Similarly, when we have dimension-independent strong convexity $(r \neq 1)$, even with noisy targets ${\boldsymbol b}$ $(\widetilde{R} > 0)$, the distance to the optimum is well-behaved and $\mathbb{E}[\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2]$ is a constant involving $\widetilde{R}^2$ and $R^2$. Hence, a direct comparison between worst and average-case is relatively simple.
For the ill-conditioned case when $r=1$, the situation is more complicated with noisy targets. To maintain a fixed and finite signal-to-noise ratio, the distance to optimality will behave like ${\|{\boldsymbol x}^{\star} - {\boldsymbol x}_0 \|^2} \approx d \widetilde{R}^2$; that is, it is dimension-dependent.\footnote{Precisely, we show that $\tfrac{d \widetilde{R}^2}{\|{\boldsymbol x}^{\star}-{\boldsymbol x}_0\|^2}$ is tight (see Section~\ref{sec: avg_derivations}, Lemma~\ref{lem:growth_dist_optimum}).} So the worst-case rates have a dimension-dependent constant whereas the average-case rates are dimension-independent. This dimension-dependent term can be seen in the last column of Table~\ref{tab:comparison_worst_avg_cvx}. Conversely, if one desires to make ${\mathbb E}\,[\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2]$ constant across dimensions using a generative model with noise, one is forced to scale ${\boldsymbol \eta}$ to go to zero as $d \to \infty$, thus reducing the full generative model to the noiseless regime.
\paragraph{Adversarial model.}
As mentioned above, the comparison with existing worst-case bounds is problematic due to dimension-dependent factors. To overcome this, we consider the following \textit{adversarial model}. First, we assume a noisy generative model for ${\boldsymbol b}$ (Assumption~\ref{assumption: Vector} holds). Next, our adversary chooses the matrix ${\boldsymbol A}$ without knowledge of ${\boldsymbol b}$ to \textit{maximize the norm of the gradient} subject to the constraint that the convex hull of the eigenvalues of ${\boldsymbol H} = \tfrac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ equals $[\lambda^{-},\lambda^+]$. For comparison to the average-case analysis with isotropic features, we would choose $\lambda^{\pm}$ to be the endpoints of the Mar\v{c}enko-Pastur law. In light of Theorem~\ref{thm: concentration_main}, the adversarial model seeks to solve the constrained optimization problem
\begin{equation} \begin{aligned} \label{eq: adversary_worst_case_main}
\lim_{d \to \infty} \max_{{\boldsymbol H}} \, \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ]
&=
\max_{ \lambda \in [\lambda^-, \lambda^+] }
\bigl\{ R^2 \lambda^2 P_k^2(\lambda) + \widetilde{R}^2 r \lambda P_k^2(\lambda)\bigr\}.
\end{aligned}
\end{equation}
We call this expression the \textit{adversarial average-case guarantee}.
The main distinction between worst-case and adversarial average-case is that traditional worst-case maximizes the gradient over \textit{all} inputs -- both targets ${\boldsymbol b}$ and data matrix ${\boldsymbol A}$. This leads to dimension-dependent complexity, as there are usually exceptional target vectors that are heavily dependent on the data matrix ${\boldsymbol A}$ (such as those built from extremal singular vectors of ${\boldsymbol A}$) and cause the algorithm to perform exceptionally slowly.
In contrast, the adversarial average-case keeps the randomness of the target ${\boldsymbol b}$ while maximizing over the data matrix ${\boldsymbol A}$. This is a more meaningful worst-case comparison: for example, in the setting of linear regression, the response and the measurements of the independent variables are typically generated through different means and have different and independent sources of noise (see for example \cite[Example 10.1]{walpole1989probability}). Hence the independence of the noise intervenes to limit how truly bad the data matrix ${\boldsymbol A}$ can be. Furthermore, the complexity of the adversarial average-case is dimension-independent. Table~\ref{tab:comparison_worst_avg_cvx} shows these adversarial complexities for non-strongly convex objectives \eqref{eq:LS_main}. Similar results can also be derived for strongly convex objectives but are omitted for brevity.
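For gradient descent, the maximization in \eqref{eq: adversary_worst_case_main} is one-dimensional and can be approximated on a grid. A minimal sketch, with illustrative constants (the default edges below are the Mar\v{c}enko-Pastur edges for $r = 0.5$, $\sigma^2 = 1$):
\begin{verbatim}
# Sketch: adversarial average-case guarantee for gradient descent, maximizing
# R^2 l^2 P_k^2(l) + Rt^2 r l P_k^2(l) over l in [lam_m, lam_p] on a grid.
import numpy as np

def adversarial_bound(k, R2=1.0, Rt2=0.25, r=0.5, lam_m=0.0858, lam_p=2.9142):
    alpha = 1.0 / lam_p
    lams = np.linspace(lam_m, lam_p, 100_001)
    P2 = (1 - alpha * lams) ** (2 * k)
    return np.max(R2 * lams ** 2 * P2 + Rt2 * r * lams * P2)
\end{verbatim}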
\begin{figure}
\centering
\includegraphics[width = 0.48\linewidth]{figures/rates.pdf}
\includegraphics[width = 0.48\linewidth]{figures/gap_1.pdf}
\caption{{\bfseries Average-case vs worst-case in least squares} with isotropic features ($r =1, d = 4096$). {\bfseries Left}: 8000 runs of GD, standard deviation (shaded region, undetectable), and theoretical rates (dashed lines). Empirical runs precisely match the theoretical average-case rates.
{\bfseries Right}: Ratio of the upper bound in worst-case to empirical gradient after $k=4096$ iterations,
$\mathrm{UB}_{\text{cvx}}(\|\nabla f({\boldsymbol x}_k)\|^2) / \|\nabla f({\boldsymbol x}_k)\|^2$. From the concentration of gradients (left), this implies that \emph{the norms of the gradient for worst-cases are always larger than average.} With noise, the distribution of worst-case gradient ratios has large variance, in contrast with the small variance in the left plot; this makes the expected worst-case unpredictable, unlike the noiseless setting.
\label{fig:avg_rates}
}
\end{figure}
\paragraph{Comparison with adversarial and worst-case complexities.}
By construction, the average-case convergence rates are at least as good as the worst-case and adversarial guarantees. The average-case complexity in the convex, noiseless setting ($r =1, \widetilde{R}=0$) for Nesterov's accelerated method (convex) \citep{nesterov2004introductory, Beck2009Fast} and gradient descent (GD) is an order of magnitude faster in $k$ than the worst-case rates (see Table~\ref{tab:comparison_worst_avg_cvx}, first column). It may appear at first glance (Table~\ref{tab:comparison_worst_avg_cvx}) that there is a discrepancy between the average-case and exact worst-case rate when $r =1$ in the noisy setting (${\boldsymbol \eta} \neq \bm{0}$). As noted in the previous section, the worst-case rates have dimension-dependent constants. Provided the dimension is bigger than the iteration counter ($d \ge k^{1/2}$ for GD and $d \ge \log(k)$ for Nesterov), the average complexity indeed yields a faster rate of convergence. Average-case is always strictly better than adversarial rates (see Table~\ref{tab:comparison_worst_avg_cvx}). This improvement in the average rate highlights that \textit{the support of the spectrum does not fully determine the rate.} Many eigenvalues contribute meaningfully to the average rate. Hence, our results are not and cannot be purely explained by the support of the spectrum.
The average-case complexity in the strongly convex case matches the worst-case guarantees multiplied by an additional \textit{polynomial correction term} (\textcolor{teal}{green} in Table~\ref{tab:comparison_worst_avg_str_cvx}). This polynomial term has little effect on the complexity compared to the linear rate. However, as the matrix ${\boldsymbol H}$ becomes ill-conditioned $(r\to1)$, the polynomial correction starts to dominate the average-case complexity. The sublinear rates in Table~\ref{tab:comparison_worst_avg_cvx} show this effect, and this accounts for the improved average-case rates.
Our average-case rates accurately predict the empirical convergence observed in simulations, in contrast to the worst-case rates (see Figure~\ref{fig:avg_rates}). Although our rates only hold on average, surprisingly, even a single instance of GD exactly matches the theoretical predictions. Moreover, the noisy non-strongly convex worst-case is highly unpredictable due to the instability in ${\boldsymbol x}^{\star}$ across runs. As such, the worst-case analysis is not representative of typical behavior (see Figure~\ref{fig:avg_rates}).
These theoretical results are supported by simulations and empirically extended to other models, such as logistic regression, as well as other algorithms, such as stochastic gradient descent (SGD) (see Section~\ref{sec:numerical_simulations}). This suggests that this universality property holds for a wider class of problems.
\paragraph{Related work.} The average-case analysis has a long history
in computer science and numerical analysis. Often it is used to
justify the superior performance of algorithms as compared with their
worst-case bounds such as Quicksort (sorting)
\citep{Hoare1962Quicksort} and the simplex method in linear
programming, see for example \citep{Spielman2004Smooth, smale1983on,
borgwardt1986probabilistic, todd1991probabilistic} and references
therein. Despite this rich history, it is challenging to transfer
these ideas into continuous optimization due to the ill-defined notion
of a typical continuous optimization problem. Recently
\citet{pedregosa2020average, lacotte2020optimal} derived a framework
for average-case analysis of gradient-based methods and developed
optimal algorithms with respect to the average-case. The class of
problems they consider is a special case of \eqref{eq:LS_main} with
vanishing noise. We use a similar framework -- extending the results to all first-order methods and noisy quadratics while also providing concentration and explicit average-case convergence guarantees.
A natural criticism of a simple average-case analysis is that the complexity is data model dependent and thus it only has predictive power for a small subset of real world phenomena. Because of this, it becomes important to show that any modeling choices made in defining the data ensemble have limited effect. \citet{paquette2020universality} showed that the halting time for conjugate gradient becomes deterministic as the dimension grows and it exhibits a universality property, that is, for a class of sample covariance matrices, the halting times are identical (see also \citet{deift2019conjugate}).
It is conjectured that this property holds in greater generality -- for more distributions and more algorithms~\citep{deift2014universality,deift2018universality}. In \citet{Sagun2017Universal}, empirical evidence confirms this for neural networks and spin glass models. Our paper is in the same spirit as these works, definitively showing that all first-order methods share this universality property for the halting time on quadratic problems.
This work is inspired by research in numerical linear algebra that uses random matrix theory to quantify the ``probability of difficulty" and ``typical behavior" of numerical algorithms \citep{demmel1988probability}. For many numerical linear algebra algorithms, one can place a random matrix as an input and analyze the algorithm's performance. This approach is used to help explain the success of algorithms and heuristics that could not be well understood through traditional worst-case analysis. Numerical algorithms such as QR \citep{pfrang2014how}, Gaussian elimination \citep{sankar2006smoothed,trefethen1990average}, and other matrix factorization algorithms, for example, symmetric tridiagonalization and bidiagonalization \citep{edelman2005random}, have had their performance analyzed under random matrix inputs (typically Gaussian matrices). In \cite{deift2019universality}, an empirical study extended these results beyond Gaussian matrices and showed that the halting times for many numerical algorithms were independent of the random input matrix for a large class of matrix ensembles. This universality result was eventually proven for the conjugate gradient method \citep{paquette2020universality,deift2019conjugate}.
An alternative approach to explaining the success of numerical algorithms, introduced in \citep{Spielman2004Smooth}, is smoothed analysis. Smoothed analysis is a hybrid of worst-case and average-case analysis: one randomly perturbs the worst-case input and computes the maximum expected value of a measure of the performance of an algorithm. It has been used, for example, to successfully analyze linear programming \citep{Spielman2004Smooth}, semi-definite programs \citep{bhojanapalli2018smoothed}, and conjugate gradient \citep{menon2016smoothed}. In this work, we instead focus on the random matrix approach to analyze first-order methods on optimization problems.
Our work draws heavily upon classical polynomial-based iterative methods. Originally designed for the Chebyshev iterative method \citep{Flanders1950Numerical,golub1961chebyshev}, the polynomial approach for analyzing algorithms was instrumental in proving worst-case complexity for the celebrated conjugate gradient method \citep{Hestenes&Stiefel:1952}. For us, the polynomial approach gives an explicit equation relating the eigenvalues of the data matrix to the iterates which, in turn, allows the application of random matrix theory.
The remainder of the article is structured as follows: in Section~\ref{sec: problem_setting} we introduce the full mathematical model under investigation, including some examples of data models. Section~\ref{sec: poly} discusses the relationship between polynomials and optimization. Our main results are then described and proven in Section~\ref{sec: halting_time}. Section~\ref{sec: avg_derivations} details the computations involved in the average-case analysis, the proofs of which are deferred to the appendix. The article concludes with numerical simulations in Section~\ref{sec:numerical_simulations}.
\section{Problem setting} \label{sec: problem_setting} In this paper, we develop an average-case analysis for first-order methods on quadratic problems of the form
\begin{equation} \label{eq:LS}
\vspace{0.5em}\argmin_{{\boldsymbol x} \in {\mathbb R}^d} \Big \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x}-{\boldsymbol b}\|^2 \Big \}, \quad \text{with } {\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\,,
\end{equation}
where ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is a (possibly random) matrix (discussed in the next subsection), $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in {\mathbb R}^n$ is a noise vector.
\subsection{Data matrix, noise, signal, and initialization assumptions}\label{sec: assumptions}
Throughout the paper we make the following assumptions.
\begin{assumption}[Initialization, signal, and noise] \label{assumption: Vector} The initial vector ${\boldsymbol x}_0 \in {\mathbb R}^d$, the signal $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$, and noise vector ${\boldsymbol \eta} \in {\mathbb R}^n$ are independent of ${\boldsymbol A}$ and satisfy the following conditions:
\begin{enumerate}[leftmargin=*]
\item The entries of ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ are i.i.d. random variables and there exist constants $C, R > 0$ such that for $i = 1, \ldots, d$
\begin{equation} \label{eq:R} \begin{gathered} {\mathbb E}\,[{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}] = \bm{0}, \quad {\mathbb E}\,[\|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2] = R^2,
\quad \text{and} \quad {\mathbb E}\,[(\widetilde{{\boldsymbol x}}-{\boldsymbol x}_0)_{i}^4] \le \tfrac{1}{d^2} C.
\end{gathered}
\end{equation}
\item The entries of the noise vector ${\boldsymbol \eta}$ are i.i.d. random variables satisfying the following for $i = 1, \ldots, n$ and for some constants $\widetilde{C}, \widetilde{R} > 0$
\begin{equation}
{\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}, \quad {\mathbb E}\,[\eta_i^2] = \widetilde{R}^2, \quad \text{and} \quad {\mathbb E}\,[\eta_i^4] \le \widetilde{C}.
\end{equation}
\end{enumerate}
\end{assumption}
Assumption~\ref{assumption: Vector} encompasses the setting where the signal is \textit{random} and the algorithm is initialized at ${\boldsymbol x}_0 = \bm{0}$, but it is more general. Starting farther from the signal requires more iterations to converge. Hence, intuitively, \eqref{eq:R} restricts the distance of the algorithm's initialization to the signal so that it remains constant across problem sizes. Put another way, the unbiased initialization about the signal says that the initialization is distributed in a rotationally-invariant manner about the signal $\widetilde{{\boldsymbol x}}$ (see Figure~\ref{fig: Assumption_1}).
Assumption~\ref{assumption: Vector} arises as a result of preserving a constant signal-to-noise ratio in the generative model.
Such generative models with this scaling have been used in numerous works \citep{mei2019generalization,hastie2019surprises}.
\begin{wrapfigure}[16]{r}{0.47\textwidth}
\vspace{-0.5cm}
\centering \begin{tikzpicture}[scale = 0.72]
\filldraw[pattern color = darkgray, pattern= north west lines, draw = gray, dashed] (0,0) circle [x radius=1cm, y radius=5mm, rotate=30];
\draw (0,0) circle [radius=2.5];
\node[mark size=2pt] at (0,0) {\pgfuseplotmark{*}};
\node at (0.25,0) {$\widetilde{{\boldsymbol x}}$};
\node[mark size=2pt,color=red] at (-1.5,2) {\pgfuseplotmark{*}};
\node[red] at (-1.9, 2.05) {${\boldsymbol x}_0$};
\node[mark size=2pt,color=red] at (-0.21,0.5) {\pgfuseplotmark{*}};
\node[color=red] at (-0.1,0.75) {${\boldsymbol x}_k$};
\draw[red] (-1.5,2)--(1.25,1.25)--(-1,1)-- (-0.25,0.5);
\node[mark size=3pt,color=blue] at (-2.45,-0.5) {\pgfuseplotmark{triangle*}};
\node[color=blue] at (-2.8,-0.5) {${\boldsymbol x}_0$};
\draw[blue] (-2.45,-0.5)--(0.75,-1.75)--(-0.3,-1)--(0.25,-0.5);
\node[mark size=3pt,color=blue] at (0.25,-0.5) {\pgfuseplotmark{triangle*}};
\node[color=blue] at (-0.1,-0.5) {${\boldsymbol x}_k$};
\node[mark size=2.75pt,color=orange] at (2.29,-1) {\pgfuseplotmark{square*}};
\node[color=orange] at (2.75,-1) {${\boldsymbol x}_0$};
\draw[orange] (2.29, -1)--(1,-1.25)--(1.25,-0.5)--(0.9,0.4);
\node[mark size=2.75pt,color=orange] at (0.9,0.4) {\pgfuseplotmark{square*}};
\node[color=orange] at (1.25,0.4) {${\boldsymbol x}_k$};
\end{tikzpicture}
\caption{The pictured ${\boldsymbol x}_0$ are equiprobable. Each colored line is a different run of GD with random matrix ${\boldsymbol A}$ and the shaded gray area is the set where $\|\nabla f({\boldsymbol x})\|^2 < \varepsilon$. Intuitively, our result says all runs of GD starting from a random ${\boldsymbol x}_0$ take the same number of iterations to reach the shaded area. } \label{fig: Assumption_1}
\end{wrapfigure}
\paragraph{Tools from random matrix theory.} Random matrix theory studies properties of matrices ${\boldsymbol H}$ (most notably, statistics of matrix eigenvalues) whose entries $H_{ij}$ are random variables. We refer the reader to \citep{bai2010spectral, tao2012topics} for a more thorough introduction. Many important statistics of random matrix theory can be expressed as functionals on the eigenvalues of a matrix ${\boldsymbol H}$ (\textit{e.g.}, determinants and traces). Let $\lambda_1, \ldots, \lambda_d$ be the eigenvalues of ${\boldsymbol H}$ and define the \textit{empirical spectral measure} (ESM), $\mu_{{\boldsymbol H}}$, as
\begin{equation}
\mu_{{\boldsymbol H}}(\lambda) \stackrel{\text{def}}{=} \frac{1}{d} \sum_{i=1}^d \delta_{\lambda_i},
\end{equation}
where $\delta_{\lambda_i}$ is a Dirac delta function, \textit{i.e.}, a function equal to $0$ except at $\lambda_i$ and whose integral over the entire real line is equal to one. The empirical spectral measure puts a uniform weight on each of the eigenvalues of ${\boldsymbol H}$. When ${\boldsymbol H}$ is random, this becomes a random measure. A main interest in random matrix theory is to characterize the behavior of the empirical spectral measure as the dimension of the matrix tends to infinity.
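A short sketch illustrates the ESM and its deterministic limit for Gaussian data, namely the Mar\v{c}enko-Pastur law of Section~\ref{sec: data_generate}; the dimensions below are arbitrary.
\begin{verbatim}
# Sketch: empirical spectral measure of H = A^T A / n for i.i.d. Gaussian
# entries; the histogram approximates the Marcenko-Pastur density.
import numpy as np

n, d = 2000, 1000                          # r = d/n = 0.5 (arbitrary sizes)
A = np.random.randn(n, d)                  # isotropic features, sigma^2 = 1
H = A.T @ A / n
eigs = np.linalg.eigvalsh(H)
hist, edges = np.histogram(eigs, bins=50, density=True)
r = d / n
print(eigs.min(), (1 - np.sqrt(r)) ** 2)   # near the bottom edge lambda^-
print(eigs.max(), (1 + np.sqrt(r)) ** 2)   # near the top edge lambda^+
\end{verbatim}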
Because the ESM is a well-studied object for many random matrix ensembles, we state the following assumption on the ESM for the data matrix, ${\boldsymbol A}$. In Section~\ref{sec: data_generate}, we review practical scenarios in which this is verified.
\begin{assumption}[Data matrix] \label{assumption: spectral_density}
Let ${\boldsymbol A}$ be a (possibly random) $n \times d$ matrix such that the number of features, $d$, tends to infinity proportionally to the size of the data set, $n$, so that $\tfrac{d}{n} \to r \in (0, \infty)$. Let ${\boldsymbol H} \stackrel{\text{def}}{=} \tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ with eigenvalues $\lambda_1 \leq \ldots \leq \lambda_d$ and let $\delta_{\lambda_i}$ denote the Dirac delta with mass at $\lambda_i$. We make the following assumptions on the eigenvalue distribution of this matrix:
\begin{enumerate}[leftmargin=*]
\item The ESM converges weakly in probability to a deterministic measure $\mu$ with compact support,
\begin{equation} \label{eq:ESM_convergence}
\mu_{{\boldsymbol H}} = \mfrac{1}{d}\sum_{i=1}^d \delta_{\lambda_i} \to \mu \quad \text{weakly in probability\,.}
\end{equation}
\item The largest eigenvalue of ${\boldsymbol H}$ converges in probability to the largest element in the support of $\mu$. In particular, if $\lambda^+$ denotes the top edge of the support of $\mu$ then
\begin{equation} \label{eq:max_eigenvalue} \lambda_{{\boldsymbol H}}^+ \Prto[d] \lambda^+. \,\end{equation}
\item (Required provided the algorithm uses the smallest eigenvalue) The smallest eigenvalue of ${\boldsymbol H}$ converges in probability to the smallest, non-zero element in the support of $\mu$. In particular, if $\lambda^-$ denotes the bottom edge of the support of $\mu$ then
\begin{equation} \label{eq:min_eigenvalue} \lambda_{{\boldsymbol H}}^- \Prto[d] \lambda^-. \,\end{equation}
\end{enumerate}
\end{assumption}
\subsection{Examples of data distributions.} \label{sec: data_generate}
In this section we review three examples of data-generating distributions that verify Assumption~\ref{assumption: spectral_density}: a model with isotropic features, a correlated features model, and a one-hidden layer neural network with random weights. Numerous works studying the spectrum of the Hessian on neural networks have found that this spectrum shares many characteristics with the limiting spectral distributions discussed below including compact support, a concentration of eigenvalues near $0$, and a stable top eigenvalue \citep{dauphin2014identifying, papyan2018the, sagun2016eigenvalues, behrooz2019investigation}. In fact, the work of \citet{martin2018implicit} directly compares the Hessians of deep neural networks at various stages in training with the Mar\v{c}enko-Pastur density, that is, the limiting spectral density for the isotropic features model.
\paragraph{Isotropic features.}
We will now elaborate on the well-developed theory surrounding the isotropic features model (see \eqref{eq:MP} and the text just above it).
In particular, parts 2 and 3 of Assumption~\ref{assumption: spectral_density} on the convergence of the largest and smallest eigenvalues are well known:
\begin{lemma}[Isotropic features]({\rm \textbf{\citet[Theorem 5.8]{bai2010spectral}}}) \label{lem:bai_Spectral}
Suppose the matrix ${{\boldsymbol A} \in {\mathbb R}^{n \times d}}$ is generated using the isotropic features model.
The largest and smallest eigenvalues of ${\boldsymbol H}$, $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, resp., converge in probability to $\lambda^+$ and $\lambda^-$ resp., where $\lambda^+ = \sigma^2 (1+ \sqrt{r})^2$ is the top edge of the support of the Mar\v{c}enko-Pastur measure and $\lambda^- = \sigma^2(1-\sqrt{r})^2$ is the bottom edge of the support of the Mar\v{c}enko-Pastur measure.
\end{lemma}
In addition, the isotropic features model is sufficiently random that it is possible to weaken Assumption \ref{assumption: Vector} and still derive the conclusion of Theorem \ref{thm: concentration_main}.
In particular, we may let ${\boldsymbol b}_n$ be defined as
\begin{equation}\label{eq:weakb}
{\boldsymbol b}_n =
\mfrac{1}{\sqrt{n}} R {\boldsymbol A} \boldsymbol{\omega}_{1,d}
+
\widetilde{R} \boldsymbol{\omega}_{2,n}
\end{equation}
for any deterministic sequences of vectors $\{\boldsymbol{\omega}_{1,d}\}$ and $\{\boldsymbol{\omega}_{2,n}\}$ from the $d$-dimensional and the $n$-dimensional spheres, respectively, multiplied by the signal strength $R$ and noise $\widetilde{R}$.
Then, under the \emph{further} moment assumption on ${\boldsymbol A}$ that for any $k \in \mathbb{N}$
\begin{equation}\label{eq:strongmoments}
\sup_{i,j} \biggl\{\mathbb{E} |A_{i,j}|^k \biggr\} < \infty,
\end{equation}
it is a consequence of \cite[Theorems 3.6 and 3.7]{KnowlesYin} that
\begin{equation}
\label{eq:isotropicE}
\|\nabla f({\boldsymbol x}_k)\|^2\Prto[d]
R^2 \int {\lambda^2 P_k^2(\lambda)}\mathop{}\!\mathrm{d}\mu+\widetilde{R}^2 r \int {\lambda P_k^2(\lambda)}\mathop{}\!\mathrm{d}\mu
=
\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,.
\end{equation}
This implies that for the isotropic features model, under the stronger assumption \eqref{eq:strongmoments} on the data matrix but the weaker target assumption \eqref{eq:weakb}, we obtain the same complexity results presented in Tables \ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}. See also \cite[Corollary 5.12]{paquette2020universality}, in which a central limit theorem for the gradient is derived under these same assumptions.
\paragraph{Correlated features.}
In this model, one takes a random matrix ${\boldsymbol W} \in \mathbb{R}^{n \times d}$ generated from the isotropic features model and a symmetric positive definite correlation matrix ${\boldsymbol \Sigma}_d \in \mathbb{R}^{d \times d}$. One then defines the correlated features model by
\[
{\boldsymbol A} \stackrel{\text{def}}{=} {\boldsymbol W} {\boldsymbol \Sigma}_d^{1/2}.
\]
This makes ${\boldsymbol H} = \frac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ the normalized sample covariance matrix of $n$ samples of a $d$-dimensional random vector with covariance structure ${\boldsymbol \Sigma}_d.$
Under the assumption that the empirical spectral measure of ${\boldsymbol \Sigma}_d$ converges to a measure $\nu$ and that the norm of ${\boldsymbol \Sigma}_d$ is uniformly bounded, it is a consequence of \cite{Bai1999a,Bai1999b} (see also the discussions in \cite{bai2004CLT,KnowlesYin, HachemHardyNajim}) that Assumption \ref{assumption: spectral_density} holds. Unlike in the isotropic features model, the limiting spectral measure is not known explicitly, but is instead only characterized (in general) through a fixed-point equation describing its Stieltjes transform.
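A minimal sketch of this construction, with a hypothetical AR(1) covariance ${\boldsymbol \Sigma}_d$ chosen purely for illustration:
\begin{verbatim}
# Sketch of the correlated features model A = W Sigma^{1/2}; the AR(1)
# covariance below is a hypothetical choice for illustration.
import numpy as np

n, d, rho = 2000, 1000, 0.6
W = np.random.randn(n, d)
idx = np.arange(d)
Sigma = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) covariance
w, V = np.linalg.eigh(Sigma)                         # Sigma is symmetric PSD
Sigma_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
A = W @ Sigma_half
H = A.T @ A / n     # sample covariance with population covariance Sigma
\end{verbatim}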
\paragraph{One-hidden layer network with random weights.} In this model, the entries of ${\boldsymbol A}$ are the result of a matrix multiplication composed with a (potentially non-linear) activation function $g \, : \, \mathbb{R} \mapsto \mathbb{R}$:
\begin{align}
A_{ij} \stackrel{\text{def}}{=} g \big (\tfrac{[{\boldsymbol W} {\boldsymbol Y}]_{ij}}{\sqrt{m}} \big ), \quad \text{where ${\boldsymbol W} \in {\mathbb R}^{n \times m}$, ${\boldsymbol Y} \in {\mathbb R}^{m \times d}$ are random matrices\,.}
\end{align}
The entries of ${\boldsymbol W}$ and ${\boldsymbol Y}$ are i.i.d. with zero mean, isotropic variances
${\mathbb E}\,[W_{ij}^2] = \sigma_w^2$ and ${\mathbb E}\,[Y_{ij}^2] = \sigma_y^2$, and light tails, that is, there exists constants $\theta_w, \theta_y > 0$ and $\alpha > 0$ such that for any $t > 0$
\begin{equation} \label{eq: light_tail}
\Pr(|W_{11}| > t) \le \exp(-\theta_w t^{\alpha}) \quad \text{and} \quad \Pr(|Y_{11}| > t) \le \exp(-\theta_y t^{\alpha})\,.
\end{equation}
Although stronger than bounded fourth moments, this assumption holds for any sub-Gaussian random variables (\textit{e.g.}, Gaussian, Bernoulli, etc.). As in the previous cases, to study the large-dimensional limit we assume that the different dimensions grow at comparable rates, given by $\frac{m}{n} \to r_1 \in (0, \infty)$ and $\frac{m}{d} \to r_2 \in (0, \infty)$.
This model encompasses two-layer neural networks with a squared loss, where the first layer has random weights and the second layer's weights are given by the regression coefficients ${\boldsymbol x}$.
In this case, problem \eqref{eq:LS} becomes
\begin{equation} \label{eq: general_LS}
\min_{\boldsymbol x} \, \left\{ f({\boldsymbol x}) = \mfrac{1}{2n} \|{g} \big ( \tfrac{1}{\sqrt{m}} {\boldsymbol W} {\boldsymbol Y} \big ){\boldsymbol x} - {\boldsymbol b}\|^2_2 \right\}.
\end{equation}
The model was introduced by \citet{Rahimi2008Random} as a randomized approach for scaling kernel methods to large datasets, and has seen a surge in interest in recent years as a way to study the generalization properties of neural networks
\citep{hastie2019surprises,mei2019generalization,pennington2017nonlinear,louart2018random,liao2018dynamics}.
The most important difference between this model and the isotropic features model is the existence of a potentially non-linear activation function $g$. We assume $g$ to be entire, to satisfy a growth condition, and to have zero Gaussian mean,
\begin{align} \label{eq: Gaussian_mean}
\hspace{-3em}\text{(Gaussian mean)} \qquad \int {g}(\sigma_w \sigma_y z) \tfrac{e^{-z^2/2}}{\sqrt{2 \pi} } \, \mathop{}\!\mathrm{d} z = 0\,.
\end{align}
The growth condition on the function $g$ is precisely the following: there exist positive constants $C_g, c_g, A_0 > 0$ such that for any $A \ge A_0$ and any $n \in \mathbb{N}$
\begin{align}
\sup_{z \in [-A,A]} |g^{(n)}(z)| \le C_g A^{c_g n}\, .
\end{align}
Here $g^{(n)}$ is the $n$th derivative of $g$. This growth condition is verified for common activation functions such as the sigmoid ${g}(z) = (1+ e^{-z})^{-1}$ and the softplus ${g}(z) = \log(1+e^z)$, a smoothed approximation to the ReLU. The Gaussian mean assumption \eqref{eq: Gaussian_mean} can always be satisfied by incorporating a translation into the activation function.
\cite{benigni2019eigenvalue} recently showed that the empirical spectral measure and largest eigenvalue of ${\boldsymbol H}$ converge to a deterministic measure and largest element in the support, respectively.
This implies that this model, like the isotropic features one, verifies Assumption~\ref{assumption: spectral_density}.
However, contrary to the isotropic features model, the limiting measure does not have an explicit expression, except for some specific instances of $g$ in which it is known to coincide with the Mar\v{c}enko-Pastur distribution.
\begin{lemma}[One-hidden layer network]({\rm \textbf{\citet[Theorems~2.2 and~5.1]{benigni2019eigenvalue}}}) \label{lem:rand_feat_measure} Suppose the matrix ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is generated using the random features model. Then there exists a deterministic compactly supported measure $\mu$ such that $\mu_{{\boldsymbol H}} \underset{d \to \infty}{\longrightarrow} \mu$ weakly in probability. Moreover $\lambda_{{\boldsymbol H}}^+ \Prto[d] \lambda^+$ where $\lambda^+$ is the top edge of the support of $\mu$.
\end{lemma}
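A sketch of this data model, using a softplus activation shifted by a Monte-Carlo estimate of its Gaussian mean so that \eqref{eq: Gaussian_mean} holds; all dimensions are illustrative.
\begin{verbatim}
# Sketch: one-hidden layer (random features) data model with a softplus
# activation shifted to have zero Gaussian mean. Dimensions are illustrative.
import numpy as np

n, m, d = 1500, 1000, 800
sigma_w = sigma_y = 1.0
W = sigma_w * np.random.randn(n, m)
Y = sigma_y * np.random.randn(m, d)

z = np.random.randn(10 ** 6)
shift = np.mean(np.log1p(np.exp(sigma_w * sigma_y * z)))  # Gaussian mean of softplus

def g(x):
    return np.log1p(np.exp(x)) - shift   # zero Gaussian mean by construction

A = g(W @ Y / np.sqrt(m))
H = A.T @ A / n
eigs = np.linalg.eigvalsh(H)             # ESM converges to a deterministic law
\end{verbatim}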
\renewcommand{\arraystretch}{2.5}
\ctable[notespar,
caption = {{\bfseries Residual Polynomials.} Summary of the residual polynomials associated with the methods discussed in this paper. $T_k$ is the $k$-th Chebyshev polynomial of the first kind, $U_k$ is $k$-th Chebyshev polynomial of the second kind, and $L_k$ is the $k$-th Legendre polynomial. Derivations of these polynomials can be found in Appendix~\ref{apx: derivation_polynomial}. In light of Proposition~\ref{prop: polynomials_methods}, an explicit expression for the polynomial $P_k$ is enough to determine the polynomial $Q_k$. },
label = {table:polynomials},
captionskip=2ex,
pos =!t
]{l c l}{\tnote[1]{\citep{nesterov2004introductory,Beck2009Fast}} \tnote[2]{\citep{Polyak1962Some}} \tnote[3]{\citep{nesterov2004introductory}} }{
\toprule
\textbf{Methods} & \textbf{Polynomial $P_k$} & \textbf{Parameters} \\
\midrule
Gradient Descent & $(1-\alpha \lambda)^k$ & $\alpha = 1 / \lambda^+_{{\boldsymbol H}}$\\
\midrule
\begin{minipage}{0.18\textwidth} Nesterov (cvx) \tmark[1]
\end{minipage} & $\frac{2(1-\alpha \lambda)^{(k+1)/2}}{\alpha \lambda k} \big ( \sqrt{1-\alpha \lambda} L_k(\sqrt{1-\alpha \lambda}) - L_{k+1}(\sqrt{1-\alpha \lambda}) \big )$
& $\alpha = {1}/{\lambda_{{\boldsymbol H}}^+}$\\
\midrule
\begin{minipage}{0.15\textwidth} Polyak \tmark[2]
\end{minipage} &$\beta^k \big [ \tfrac{ ( \sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-})^2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot T_k(\sigma(\lambda)) + \tfrac{2 \sqrt{\lambda_{{\boldsymbol H}}^- \lambda_{{\boldsymbol H}}^+}}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot U_k(\sigma(\lambda)) \big ]$ & \begin{minipage}{0.19\textwidth} $\sigma(\lambda) = \frac{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^- - 2\lambda}{\lambda_{{\boldsymbol H}}^+ - \lambda_{{\boldsymbol H}}^-}$ \\
$\beta = \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}}$ \end{minipage}
\\
\midrule
\begin{minipage}{0.18\textwidth} Nesterov\\
(Strongly cvx) \tmark[3]
\end{minipage} & $\tfrac{2\beta (\beta x)^{k/2} }{1+\beta} T_k \left ( \tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{x} \right ) + \left (1 - \frac{2\beta}{1+\beta} \right ) (\beta x)^{k/2} U_k \left (\tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{x} \right )$
& \begin{minipage}{0.18\textwidth} $x = 1-\alpha\lambda$,\\ $\alpha = {1}/{\lambda_{{\boldsymbol H}}^+}$\\
$\beta =
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }$
\end{minipage}\\
\bottomrule
}
\section{From optimization to polynomials} \label{sec: poly}
In this section, we look at the classical connection between optimization algorithms, iterative methods, and polynomials \citep{Flanders1950Numerical,golub1961chebyshev,fischer1996polynomial,rutishauser1959refined}. While the idea of analyzing optimization algorithms from the perspective of polynomials is well-established, many modern algorithms, such as the celebrated Nesterov accelerated gradient \citep{nesterov2004introductory}, use alternative approaches to prove convergence.
This connection between polynomials and optimization methods will be crucial to proving the average-case guarantees in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}.
To exploit this connection, we will construct the residual polynomials associated with the considered methods and prove novel facts about them which may be of independent interest. For example, the polynomials associated with Nesterov's method provide an alternative explanation of the ODE in \citep{su2016differential}.
Throughout the paper, we consider only gradient-based methods, algorithms which can be written as a linear combination of the previous gradients and the initial iterate.
\begin{definition}[Gradient-based method] \rm{An optimization algorithm is called a \textit{gradient-based method} if each update of the algorithm can be written as a linear combination of the initial iterate and the previous gradients. In other words, if every update is of the form
\begin{equation}\label{eq:gradient_based}
{\boldsymbol x}_{k+1} = {\boldsymbol x}_0 + \sum_{i=0}^{k} c_{k i} \nabla f({\boldsymbol x}_i)~,
\end{equation}
for some scalar values $c_{k i}$ that can potentially depend continuously on $\lambda^+_{{\boldsymbol H}}$ and $\lambda^-_{{\boldsymbol H}}$.
}
\end{definition}
Examples of gradient-based methods include momentum methods \citep{Polyak1962Some}, accelerated methods \citep{nesterov2004introductory,Beck2009Fast}, and gradient descent. Now given any gradient-based method, we can associate to the method a \textit{residual polynomial} $P_k$ and an \textit{iteration polynomial} $Q_k$, which are polynomials of degree $k$, precisely as follows.
\begin{proposition}[Polynomials and gradient-based methods] \label{prop: polynomials_methods} Consider a gradient-based method with coefficients $c_{ki}$ that depend continuously on $\lambda^-_{{\boldsymbol H}}$ and $\lambda^+_{{\boldsymbol H}}$.
Define the sequence of polynomials $\{P_k, Q_k\}_{k = 0}^\infty$ recursively by
\begin{equation} \begin{gathered} \label{eq:recursive_noise_poly}
P_0({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = {\boldsymbol I} \quad \text{and} \quad P_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = {\boldsymbol I} - {\boldsymbol H} Q_{k}({\boldsymbol H}; \lambda^{\pm}_{{\boldsymbol H}})\\
Q_0({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = \bm{0} \quad \text{and} \quad Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = \sum_{i=0}^{k-1} c_{k-1,i} \big [ {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - {\boldsymbol I} \big ]\,.
\end{gathered} \end{equation}
These polynomials $P_k$ and $Q_k$ are referred to as the \emph{residual} and \emph{iteration} polynomials respectively.
We express the difference between the iterate at step $k$ and $\widetilde{{\boldsymbol x}}$ in terms of these polynomials:
\begin{equation} \label{eq:recursive_noise_poly_1}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = P_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \cdot \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,.
\end{equation}
\end{proposition}
\begin{proof}
We will prove the result by induction.
For $k=0$, the claimed result holds trivially. We assume it holds up to iteration $k$ and we will prove it holds for $k+1$. To show this, we will use the following equivalent form of the gradient $\nabla f({\boldsymbol x}) = {\boldsymbol H} ({\boldsymbol x} - \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}$, which follows from the definition of ${\boldsymbol b}$. Using this and the definition of gradient-based method, we have:
\begin{align*}
{\boldsymbol x}_{k+1} &- \widetilde{{\boldsymbol x}} = {\boldsymbol x}_0 - \widetilde{{\boldsymbol x}} + \sum_{i=0}^{k} c_{ki} \nabla f({\boldsymbol x}_i) = {\boldsymbol x}_0 - \widetilde{{\boldsymbol x}} + \sum_{i=0}^k c_{ki} \big [{\boldsymbol H} ({\boldsymbol x}_i-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ]\\
&= {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} + \sum_{i=0}^k c_{ki} \big [{\boldsymbol H} \big ( \big ( {\boldsymbol I} - {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \big ) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ]\\
&= {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}+ {\boldsymbol H} \sum_{i=0}^k c_{ki} ( {\boldsymbol I} - {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \sum_{i=0}^k c_{ki} \big ( {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - {\boldsymbol I} \big ) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \\
&= \underbrace{\Big [{\boldsymbol I} - {\boldsymbol H} Q_{k+1}({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \Big ]}_{=P_{k+1}({\boldsymbol H}, \lambda_{{\boldsymbol H}}^{\pm})} ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_{k+1}({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,,
\end{align*}
where in the second identity we have used the induction hypothesis and in the last one the recursive definition of $Q_{k+1}$.
\end{proof}
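As a sanity check, the identity \eqref{eq:recursive_noise_poly_1} can be verified numerically. The sketch below does so for gradient descent, whose coefficients are $c_{ki} = -\alpha$, in which case \eqref{eq:recursive_noise_poly} reduces to $P_{k+1} = ({\boldsymbol I} - \alpha {\boldsymbol H}) P_k$ and $Q_{k+1} = Q_k + \alpha P_k$; the problem sizes are illustrative.
\begin{verbatim}
# Sketch: numerical check of x_k - x_tilde = P_k(H)(x_0 - x_tilde)
#         + Q_k(H) A^T eta / n for gradient descent (illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 40
A = rng.standard_normal((n, d))
H = A.T @ A / n
x_tilde = rng.standard_normal(d) / np.sqrt(d)
eta = 0.5 * rng.standard_normal(n)
alpha = 1.0 / np.linalg.eigvalsh(H).max()
noise = A.T @ eta / n

x = np.zeros(d)                        # x_0 = 0
P, Q = np.eye(d), np.zeros((d, d))     # P_0(H) = I, Q_0(H) = 0
for _ in range(25):
    x = x - alpha * (H @ (x - x_tilde) - noise)   # GD step via the gradient
    Q = Q + alpha * P                  # Q_{k+1} = Q_k + alpha P_k
    P = (np.eye(d) - alpha * H) @ P    # P_{k+1} = (I - alpha H) P_k
    assert np.allclose(x - x_tilde, P @ (-x_tilde) + Q @ noise)
\end{verbatim}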
\subsection{Examples of residual polynomials.}\label{sec:Ex_polynomials}
Motivated by the identity linking the error and the residual polynomial in Proposition~\ref{prop: polynomials_methods},
we derive the residual polynomials for some well-known optimization methods. Some of these residual polynomials are known, but others, such as those for Nesterov's accelerated methods, appear to be new.
\paragraph{Gradient descent.} Due to the simplicity of the recurrence of the iterates for gradient descent, its residual and iteration polynomials $P_k$ and $Q_k$ are explicit. Take for example the typical step size $\alpha = \tfrac{1}{\lambda_{{\boldsymbol H}}^+}$. Then iterates on \eqref{eq:LS} follow the recursion
\begin{equation}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = {\boldsymbol x}_{k-1} - \widetilde{{\boldsymbol x}} - \alpha \nabla f({\boldsymbol x}_{k-1}) = \big ( {\boldsymbol I} - \alpha {\boldsymbol H} \big )({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}) + \alpha \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,.
\end{equation}
Applying Proposition~\ref{prop: polynomials_methods} to this recurrence, we obtain the following polynomials:
\begin{equation} \begin{gathered}
P_k(\lambda; \alpha^{-1}) = (1-\alpha \lambda)^k, \quad
Q_k(\lambda; \alpha^{-1}) = \alpha \sum_{i=0}^{k-1} (1-\alpha \lambda)^i \quad \text{with $Q_0(\lambda) = 0$}.
\end{gathered}
\end{equation}
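Summing the geometric series gives the closed form
\begin{equation*}
Q_k(\lambda; \alpha^{-1}) = \alpha \sum_{i=0}^{k-1} (1-\alpha \lambda)^i = \frac{1 - (1-\alpha\lambda)^k}{\lambda}\,,
\end{equation*}
which is consistent with the relation $P_k = 1 - \lambda Q_k$ from Proposition~\ref{prop: polynomials_methods}.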
\paragraph{Nesterov's accelerated method.} Nesterov's accelerated method \citep{nesterov2004introductory} and its variant FISTA \citep{Beck2009Fast} generate iterates on \eqref{eq:LS} satisfying the recurrence
\begin{equation} \begin{gathered}
{\boldsymbol x}_{k+1}-\widetilde{{\boldsymbol x}} = (1 + \beta_{k-1}) (I- \alpha {\boldsymbol H}) ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}}) - \beta_{k-1} (I - \alpha {\boldsymbol H})({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}) + \alpha \cdot \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n},\\
\text{where} \quad \alpha = \frac{1}{\lambda_{{\boldsymbol H}}^+} \quad \text{and} \quad \beta_k = \begin{cases}
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }, &\text{if $\lambda_{{\boldsymbol H}}^- \neq 0$}\\
\frac{k}{k+3}, & \text{if $\lambda_{{\boldsymbol H}}^- = 0$}\,,
\end{cases}
\end{gathered}
\end{equation}
with initial vector ${\boldsymbol x}_0 \in {\mathbb R}^d$ and ${\boldsymbol x}_1 = {\boldsymbol x}_0-\alpha \nabla f({\boldsymbol x}_0)$. Unrolling the recurrence, we can obtain an explicit formula for the corresponding polynomials
\begin{equation} \begin{gathered} \label{eq:Nesterov_polynomial_main}
P_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1+\beta_{k-1}) (1-\alpha \lambda) P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \beta_{k-1}(1-\alpha \lambda) P_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm})\\
\text{with} \quad P_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1 \quad \text{and} \quad P_1(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1-\alpha \lambda\\
Q_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1+\beta_{k-1})(1-\alpha \lambda) Q_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \beta_{k-1} (1 - \alpha \lambda) Q_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) + \alpha \\
\text{with} \quad Q_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 0 \quad \text{and} \quad Q_1(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \alpha\,.
\end{gathered}
\end{equation}
We derive the polynomials $P_k$ explicitly in Appendix \ref{apx: Nesterov_accelerated_method}. When $\lambda_{{\boldsymbol H}}^- > 0$ (strongly convex), the polynomial $P_k$ is given by
\begin{gather} P_k(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = \tfrac{2\beta}{1+\beta} (\beta (1-\alpha \lambda))^{k/2} T_k \left ( \tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{1-\alpha \lambda} \right ) + \left (1 - \tfrac{2\beta}{1+\beta} \right ) (\beta (1-\alpha \lambda))^{k/2} U_k \left (\tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{1-\alpha \lambda} \right ), \nonumber \\ \text{where $\beta =
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }$, \quad and \quad $\alpha = \frac{1}{\lambda_{{\boldsymbol H}}^+}$}\,, \label{eq:Nesterov_poly_sc}
\end{gather}
where $T_k$ and $U_k$ are the Chebyshev polynomials of the 1st and 2nd-kind respectively. When the smallest eigenvalue of ${\boldsymbol H}$ is equal to $0$ (non-strongly convex setting) the polynomial $P_k$ is given by
\begin{equation} \label{eq: Nesterov_Legendre}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \frac{2(1-\alpha\lambda)^{(k+1)/2}}{k \alpha \lambda} \left ( \sqrt{1-\alpha \lambda} \, L_k(\sqrt{1-\alpha \lambda}) - L_{k+1}(\sqrt{1-\alpha \lambda}) \right )\,,
\end{equation}
where $L_k$ are the Legendre polynomials.
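As a quick numerical check, the closed form \eqref{eq: Nesterov_Legendre} can be compared against the recurrence \eqref{eq:Nesterov_polynomial_main} with $\beta_k = \tfrac{k}{k+3}$; the sketch below normalizes $\lambda_{{\boldsymbol H}}^+ = 1$ (so $\alpha = 1$) purely for convenience.
\begin{verbatim}
# Sketch: Nesterov (convex) residual polynomial, recurrence vs. Legendre
# closed form. We normalize lambda^+ = 1, hence alpha = 1.
import numpy as np
from scipy.special import eval_legendre

alpha = 1.0

def P_rec(lam, K):
    x = 1.0 - alpha * lam
    P_prev, P = 1.0, x                 # P_0 and P_1 = 1 - alpha*lam
    for k in range(1, K):
        beta = (k - 1) / (k + 2)       # beta_{k-1} with beta_j = j/(j+3)
        P_prev, P = P, (1 + beta) * x * P - beta * x * P_prev
    return P

def P_legendre(lam, k):
    s = np.sqrt(1.0 - alpha * lam)
    num = s * eval_legendre(k, s) - eval_legendre(k + 1, s)
    return 2.0 * (1.0 - alpha * lam) ** ((k + 1) / 2) / (k * alpha * lam) * num

for lam in (0.05, 0.3, 0.9):
    print(P_rec(lam, 20), P_legendre(lam, 20))   # the two columns should agree
\end{verbatim}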
\begin{wrapfigure}[16]{r}{0.45\textwidth}
\vspace{-0.5cm}
\centering
\includegraphics[scale = 0.2]{figures/Halting_time_bessel_poly_2}
\caption{\textbf{Bessel approx. of Nesterov's (convex) poly.} For small $\lambda$, the Bessel approx. (blue) in \eqref{eq:Bessel_asymptotic_main} and Nesterov's (convex) poly. (orange) are indistinguishable. Only when $\lambda$ is far from zero does one see any, albeit minor, differences.}
\label{fig:Bessel}
\end{wrapfigure}
Working directly with the polynomial in \eqref{eq: Nesterov_Legendre} will prove difficult. As such, we derive an asymptotic expression for this polynomial: Nesterov's polynomial satisfies, in a sufficiently strong sense,
\begin{equation} \label{eq:Bessel_asymptotic_main}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) \sim \frac{2J_1(k\sqrt{\alpha \lambda})}{ k\sqrt{\alpha \lambda}} e^{-\alpha \lambda k / 2}
\end{equation}
where $J_1$ is the Bessel function of the first kind. A derivation of this can be found in Appendix~\ref{apx: Nesterov_accelerated_cvx}. Let $f(t,z) \stackrel{\text{def}}{=} P_{tn}(z n^{-2}; \lambda^{\pm}_{{\boldsymbol H}})$. Then the recurrence in \eqref{eq:Nesterov_polynomial_main} becomes a discrete approximation to the initial value problem
\[ \partial_{tt} f + \frac{3}{t} \partial_t f + z f = 0, \, \, f(t,0) = 1 \, \, \text{and} \, \, \partial f_t(t,0) = 0,\]
which bears a strong resemblance to the differential equation model for Nesterov's accelerated method in \citep{su2016differential}.
The solution to this initial value problem is $\frac{2 J_1(k \sqrt{\alpha \lambda})}{k \sqrt{\alpha \lambda}}$. Our result in \eqref{eq:Bessel_asymptotic_main}, not derived using this differential equation, yields an even tighter result for Nesterov's accelerated method by including the exponential.
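Numerically, the approximation \eqref{eq:Bessel_asymptotic_main} is already accurate for moderate $k$ when $\alpha \lambda$ is small; continuing the previous sketch:
\begin{verbatim}
# Continuing the previous sketch: compare the Bessel asymptotic with the
# exact recurrence at small alpha*lambda.
from scipy.special import j1

for k in (50, 200):
    for lam in (1e-4, 1e-3, 1e-2):
        z = k * np.sqrt(alpha * lam)
        approx = 2 * j1(z) / z * np.exp(-alpha * lam * k / 2)
        print(k, lam, P_rec(lam, k), approx)
\end{verbatim}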
\paragraph{Polyak momentum algorithm.} We now derive the residual polynomials for the Polyak momentum algorithm (a.k.a.\ the Heavy-ball method) \citep{Polyak1962Some}. The Polyak momentum algorithm takes as arguments the largest and smallest eigenvalues of ${\boldsymbol H}$ and iterates as follows
\begin{equation} \begin{gathered}
{\boldsymbol x}_{k+1}-\widetilde{{\boldsymbol x}} = {\boldsymbol x}_k-\widetilde{{\boldsymbol x}} + m ({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}-({\boldsymbol x}_{k}-\widetilde{{\boldsymbol x}})) + \alpha \nabla f({\boldsymbol x}_{k}),\\
{\boldsymbol x}_0 \in {\mathbb R}^d, \quad {\boldsymbol x}_1-\widetilde{{\boldsymbol x}} = {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}-\tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \nabla f({\boldsymbol x}_0)\\
\text{where $m = - \left ( \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}} \right )^2$ and $\alpha = -\mfrac{4}{(\sqrt{\lambda_{{\boldsymbol H}}^-}+\sqrt{\lambda_{{\boldsymbol H}}^+})^2}$}.
\end{gathered} \end{equation}
Using these initial conditions, the residual polynomials for Polyak momentum satisfy
\begin{equation} \begin{gathered} P_{k+1}(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = (1-m + \alpha\lambda) P_k(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) + m P_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}),\\
\text{with} \qquad P_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1, \qquad P_1(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = 1 - \tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \lambda\\
\text{and} \qquad Q_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1-m + \alpha\lambda) Q_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) + m Q_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \alpha,\\
\text{with} \qquad Q_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 0, \qquad Q_1(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = \tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-}.
\end{gathered}
\end{equation}
By recognizing this three-term recurrence as Chebyshev polynomials, we can derive an explicit representation for $P_k$, namely
\begin{equation} \begin{gathered}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \left ( \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}} \right )^k \big [ \tfrac{ ( \sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-})^2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot T_k(\sigma(\lambda)) + \tfrac{2 \sqrt{\lambda_{{\boldsymbol H}}^- \lambda_{{\boldsymbol H}}^+}}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot U_k(\sigma(\lambda)) \big ] \\
\text{where $T_k(x)$ and $U_k(x)$ are the Chebyshev polynomials of the 1st and 2nd-kind respectively}\\
\text{and \quad $\sigma(\lambda) = \tfrac{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^- -2 \lambda}{\lambda_{{\boldsymbol H}}^+ - \lambda_{{\boldsymbol H}}^-}$.}
\end{gathered}
\end{equation}
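As a sanity check, the following minimal Python sketch verifies numerically that the recurrence above and this Chebyshev closed form agree; the spectrum edges $\lambda^- = 0.1$ and $\lambda^+ = 10$ are hypothetical choices.
\begin{verbatim}
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

lam_minus, lam_plus = 0.1, 10.0   # hypothetical spectrum edges
sp, sm = np.sqrt(lam_plus), np.sqrt(lam_minus)
m = -((sp - sm) / (sp + sm)) ** 2
alpha = -4.0 / (sp + sm) ** 2

def P_recurrence(k, lam):
    # residual polynomial of Polyak momentum via the three-term recurrence
    P_prev, P = 1.0, 1.0 - 2.0 * lam / (lam_plus + lam_minus)
    if k == 0:
        return P_prev
    for _ in range(k - 1):
        P_prev, P = P, (1.0 - m + alpha * lam) * P + m * P_prev
    return P

def P_closed(k, lam):
    # closed form in terms of Chebyshev polynomials T_k and U_k
    beta = (sp - sm) / (sp + sm)
    sigma = (lam_plus + lam_minus - 2.0 * lam) / (lam_plus - lam_minus)
    a = (sp - sm) ** 2 / (lam_plus + lam_minus)
    b = 2.0 * sm * sp / (lam_plus + lam_minus)
    return beta ** k * (a * eval_chebyt(k, sigma) + b * eval_chebyu(k, sigma))

for lam in [0.1, 1.0, 5.0, 10.0]:
    for k in [0, 1, 5, 20]:
        assert abs(P_recurrence(k, lam) - P_closed(k, lam)) < 1e-8
\end{verbatim}
The assertion passing for all tested $(k, \lambda)$ pairs reflects that both expressions satisfy the same three-term recurrence with the same initial conditions.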
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale = 0.2]{figures/Halting_time_nesterov_strongly_cvx_poly_2}
\hspace{-0.5cm}\includegraphics[scale = 0.2]{figures/Halting_time_polyak_poly_2}
\hspace{-0.5cm}\includegraphics[scale = 0.2]{figures/Halting_time_nesterov_cvx_poly_2}
\end{center}
\caption{{\bfseries Residual polynomials.} The oscillations in the polynomials for Nesterov's accelerated method (convex) are pronounced near zero compared with the other methods. In fact, both the Nesterov (strongly convex) and Polyak momentum polynomials decay quite rapidly to the zero polynomial. To see these oscillations in the figures, one needs a badly conditioned matrix (condition number $40{,}000$). The slower decay to zero in the residual polynomials for Nesterov (strongly convex) as compared with Polyak momentum suggests a worse rate of convergence.
} \label{fig:polynomials}
\end{figure*}
\subsection{Properties of residual polynomials}
In the following sections, it will be convenient to know some general properties of residual polynomials. In particular, the polynomials $\lambda^2 P_k^2(\lambda; \lambda^{\pm})$ and $\lambda P_k^2(\lambda; \lambda^{\pm})$ are uniformly bounded in $k$, and they go to zero on a fixed support $[\lambda^-, \lambda^+]$. The importance of these facts is twofold. First, these polynomials appear in the formula for the expected gradient, Theorem~\ref{thm: concentration_main}. Second, we use the boundedness and convergence properties in the proof of halting time universality, Theorem~\ref{thm: Halting_time_main}. If one \textit{a priori} knows an explicit expression for these polynomials, then these properties are easily deduced. However, when such an expression does not exist, we can still conclude that these properties hold provided that the algorithm is \textit{convergent}.
\begin{definition}[Convergent algorithms] \rm{We say a gradient-based method is \textit{convergent} (resp.\ \textit{strongly convergent}) if for every matrix ${\boldsymbol A}$ such that ${\boldsymbol A}^T{\boldsymbol A} \succeq 0$ (resp.\ ${\boldsymbol A}^T {\boldsymbol A} \succ 0$) and any vectors ${\boldsymbol b}$ and ${\boldsymbol x}_0$, we have that the sequence of iterates generated by the algorithm starting at ${\boldsymbol x}_0$ satisfies $\|\nabla f({\boldsymbol x}_k)\|^2 \to 0$ as $k \to \infty$ and there exist constants $C, \widetilde{C}$ depending on $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$ such that
\begin{equation} \begin{gathered} \label{eq: boundedness_grad} \|\nabla f({\boldsymbol x}_k)\|^2 \le C
\big ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \big )\\
f({\boldsymbol x}_k)-f({\boldsymbol x}^{\star}) \le \widetilde{C} (f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2)
\end{gathered}
\end{equation}
where ${\boldsymbol x}^{\star}$ is the optimum of \eqref{eq:LS}.
}
\end{definition}
\begin{remark}[Minimal norm solutions] \label{rmk: minimal_norm} For any gradient-based method, the iterates generated by the algorithm on the least squares problem \eqref{eq:LS} satisfy ${\boldsymbol x}_k \in {\boldsymbol x}_0 + \ospan\{ \nabla f({\boldsymbol x}_0), \hdots, \nabla f({\boldsymbol x}_{k-1})\} \subseteq {\boldsymbol x}_0 + \text{\rm Null}({\boldsymbol A})^{\perp}$. If the algorithm converges to some ${\boldsymbol x}^{\star}$ and we have that ${\boldsymbol A}^T {\boldsymbol A} {\boldsymbol x}^{\star} = {\boldsymbol A}^T {\boldsymbol b}$, then the solution ${\boldsymbol x}^{\star}$ is independent of the algorithm in the following sense
\begin{equation}
{\boldsymbol x}^{\star} = \argmin_{\{ {\boldsymbol x} \, : \, {\boldsymbol A}^T {\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T {\boldsymbol b} \}} \|{\boldsymbol x}_0-{\boldsymbol x}\|^2_2.
\end{equation}
In particular when ${\boldsymbol x}_0 \in \text{\rm Null}({\boldsymbol A})^{\perp}$, the optimum ${\boldsymbol x}^{\star}$ is the minimal norm solution. See \textit{e.g.}, \cite{gunasekar2018characterizing, wilson2017marginal} and references therein.
\end{remark}
\begin{remark} All the algorithms discussed in Section~\ref{sec:Ex_polynomials} are convergent.
\end{remark}
The following lemma shows that convergent algorithms have residual polynomials which go to $0$ as $k \to \infty$ on compact subsets of the positive real line. In essence, if optimality measures go to zero, then so must the residual polynomial.
\begin{lemma}[Convergent algorithms $\Rightarrow$ Residual polynomials $\to 0$] \label{lem: convergent_algorithm} Suppose the algorithm $\mathcal{A}$ is a (strongly) convergent gradient-based method. Fix constants $0 \le \lambda^- < \lambda^+$ for a convergent algorithm and constants $0 < \lambda^- < \lambda^+$ for a strongly convergent algorithm. The residual polynomial, $P_k$, for the algorithm $\mathcal{A}$ satisfies
\[ \lim_{k \to \infty} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) = 0 \quad \text{and} \quad \lim_{k \to \infty} \lambda P_k^2(\lambda; \lambda^{\pm}) = 0 \quad \text{for all $\lambda \in [\lambda^-, \lambda^+]$}. \]
\end{lemma}
\begin{proof}
Suppose we consider the noiseless setting where ${\boldsymbol \eta} = (0,0,0)^T$ in the generative model so that ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b}$. Fix a constant $\lambda \in [\lambda^-, \lambda^+]$ and define the following matrix and vectors
\begin{equation} \label{eq:matrix_AA} {\boldsymbol A} = \begin{pmatrix} \sqrt{3\lambda^+} & 0 & 0\\ 0 & \sqrt{3 \lambda} & 0\\ 0 & 0 & \sqrt{3\lambda^-} \end{pmatrix}, \qquad {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = ( 0, 1, 0 )^T, \quad \text{and} \quad {\boldsymbol \eta} = ( 0, 0, 0 )^T.
\end{equation}
A simple computation shows that ${\boldsymbol H} = \tfrac{1}{3}{\boldsymbol A}^T{\boldsymbol A} = \text{diag}(\lambda^+, \lambda, \lambda^-)$. Because the method is (strongly) convergent, the algorithm converges for these choices of ${\boldsymbol H}$ and ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$. Moreover, we know that $\nabla f({\boldsymbol x}_k) = {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})$ and by Proposition~\ref{prop: polynomials_methods}, the vector ${\boldsymbol x}_k-\widetilde{{\boldsymbol x}} = P_k({\boldsymbol H}; \lambda^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})$. Therefore we have that
\begin{equation} \label{eq:convergent_stuff_1} \lim_{k \to \infty} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) = \lim_{k \to\infty} ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) = \lim_{k \to \infty} \|\nabla f({\boldsymbol x}_k)\|^2 = 0.
\end{equation}
Similarly, we consider the same matrix ${\boldsymbol A}$ as in \eqref{eq:matrix_AA} but instead a pure noise setting,
\begin{equation} {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = ( 0, 0, 0)^T, \quad \text{and} \quad {\boldsymbol \eta} = ( 0, \sqrt{3}, 0)^T.
\end{equation}
As before, the matrix ${\boldsymbol H} = \text{diag}(\lambda^+, \lambda, \lambda^-)$. By Proposition~\ref{prop: polynomials_methods}, the iterates ${\boldsymbol x}_k-\widetilde{{\boldsymbol x}} = Q_k({\boldsymbol H}; \lambda^{\pm}) \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{3}$ as ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = \bm{0}$. With this, the gradient equals
\[ \nabla f({\boldsymbol x}_k) = {\boldsymbol H}({\boldsymbol x}_k- \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = \big [ {\boldsymbol H} Q_k({\boldsymbol H}; \lambda^{\pm}) - {\boldsymbol I} \big ] \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = - P_k({\boldsymbol H}; \lambda^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3}. \]
Here, again, we used Proposition~\ref{prop: polynomials_methods}. A (strongly) convergent method has the following
\begin{equation}
\lim_{k \to \infty} \lambda P_k^2(\lambda; \lambda^{\pm}) = \lim_{k \to \infty} \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{3} P_k^2({\boldsymbol H}; \lambda^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = \lim_{k \to \infty} \|\nabla f({\boldsymbol x}_k)\|^2 = 0.
\end{equation}
This completes the result.
\end{proof}
The following lemma shows that the residual polynomials are uniformly bounded over $k$ on any compact subset of the positive real line.
\begin{lemma}[Convergent algorithms $\Rightarrow$ boundedness of $P_k$] \label{lem: convergent_bounded} Suppose $\mathcal{A}$ is a (strongly) convergent algorithm with residual polynomial $P_k$.
Under the assumptions of Lemma~\ref{lem: convergent_algorithm},
\[ \max_{k, \lambda \in [\lambda^-, \lambda^+]} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) \le B \quad \text{and} \quad \max_{k, \lambda \in [\lambda^-, \lambda^+]} \lambda P_k^2(\lambda; \lambda^{\pm}) \le \widetilde{B},\]
for some constants $B, \widetilde{B} > 0$.
\end{lemma}
\begin{proof} Suppose we consider the noiseless setting ${\boldsymbol \eta} = \bm{0}$ in the generative model \eqref{eq:LS} so that ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b}$. It then follows that $f({\boldsymbol x}^{\star}) = 0$ where ${\boldsymbol x}^{\star}$ is the optimum. A simple computation using Proposition~\ref{prop: polynomials_methods} shows that for all $k \ge 0$
\begin{equation} \begin{aligned} \label{eq: stuff_10}
f({\boldsymbol x}_k) - f({\boldsymbol x}^{\star}) &= \tfrac{1}{2n} \|{\boldsymbol A} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})\|^2 = \tfrac{1}{2} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})\\
&= \tfrac{1}{2}({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}).
\end{aligned}
\end{equation}
Next consider the matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$ as in \eqref{eq:matrix_AA} with the initial iterate ${\boldsymbol x}_0 = (0,0,0)^T$. We consider two cases: first, suppose $\lambda^- = 0$. Fix a constant $\lambda \in [\lambda^-, \lambda^+]$. It follows from our choice of ${\boldsymbol A}$, ${\boldsymbol x}_0$, $\widetilde{{\boldsymbol x}}$, and ${\boldsymbol \eta}$ that the vector ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b} = (0, -\sqrt{3 \lambda}, 0)^T$ and by \eqref{eq: stuff_10} that
\begin{equation} \label{eq: stuff_11} f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) = \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}).
\end{equation}
The solution set $\{{\boldsymbol x} \, : \, {\boldsymbol A}^T{\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T{\boldsymbol b}\} = \{(0,-1, a)^T : a \in {\mathbb R}\}$ if $\lambda > 0$ and otherwise it equals $\{(0, a, b)^T : a,b \in {\mathbb R}\}$ if $\lambda = 0$. From Remark~\ref{rmk: minimal_norm}, we have that $\displaystyle {\boldsymbol x}^{\star} = \argmin_{{\boldsymbol A}^T{\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T {\boldsymbol b}} \|{\boldsymbol x}-{\boldsymbol x}_0\|^2$ and thus we deduce that
\[ {\boldsymbol x}^{\star} = \begin{cases}
(0,-1,0)^T, & \text{if $\lambda > 0$}\\
(0, 0, 0)^T & \text{if $\lambda = 0$}.
\end{cases}
\]
In both cases, we have that $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$. Therefore using the boundedness assumption \eqref{eq: boundedness_grad} and the expression for the gradient in \eqref{eq:convergent_stuff_1}, we have that
\begin{align*} \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \lambda^2 P_k^2(\lambda; \lambda^{\pm}) &= \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \|\nabla f({\boldsymbol x}_k)\|^2 \le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! C ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 )\\
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! C \big ( \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}) + 1 \big ) \le B.
\end{align*}
Here we used that the distance to the optimum $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$ and the polynomial in \eqref{eq: stuff_11} is bounded on a compact set.
Now we suppose that $\lambda^- > 0$. As above, we use the same matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$ as in \eqref{eq:matrix_AA} and, in addition, we set ${\boldsymbol x}_0 = (0,0,0)^T$. In this situation, the matrix ${\boldsymbol A}$ is invertible and ${\boldsymbol x}^{\star} = (0,-1,0)^T$. Hence both \eqref{eq: stuff_11} and $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$ hold. Using the boundedness assumption on function values \eqref{eq: boundedness_grad} and the expression for the function values in \eqref{eq: stuff_10}, we deduce
\begin{align*}
\sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \lambda P_k^2(\lambda; \lambda^{\pm}) = \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \big ( f({\boldsymbol x}_k)-f({\boldsymbol x}^{\star}) \big )
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \widetilde{C} ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 )\\
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \widetilde{C} \big ( \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}) + 1 \big ) \le \widetilde{B}.
\end{align*}
The result immediately follows.
\end{proof}
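Both lemmas are easy to visualize for gradient descent with step size $1/\lambda^+$, whose residual polynomial is $P_k(\lambda) = (1-\lambda/\lambda^+)^k$ (a standard fact assumed here). The following minimal sketch, with a hypothetical edge $\lambda^+ = 10$ and $\lambda^- = 0$, prints the maxima of $\lambda^2 P_k^2$ and $\lambda P_k^2$ over $[\lambda^-, \lambda^+]$, which stay bounded and shrink with $k$.
\begin{verbatim}
import numpy as np

lam_plus = 10.0                    # hypothetical top edge, lam_minus = 0
lam = np.linspace(0.0, lam_plus, 10001)
for k in [1, 10, 100, 1000]:
    Pk2 = (1.0 - lam / lam_plus) ** (2 * k)   # gradient-descent P_k squared
    print(k, np.max(lam ** 2 * Pk2), np.max(lam * Pk2))
\end{verbatim}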
\section{Halting time is almost deterministic} \label{sec: halting_time}
In this section we develop a framework for the average-case analysis and state a main result of this paper: the concentration of the halting time.
We define the halting time $T_{\varepsilon}$ as the first iteration at which the squared norm of the gradient falls below some predefined tolerance $\varepsilon$:
\begin{equation} \label{eq:something_2} T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 \, : \, \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{equation}
Our main result (Theorem~\ref{thm: Halting_time}) states that this halting time is predictable for almost all high-dimensional data, or more precisely,
\begin{equation}
\lim_{d \to \infty} \Pr(T_{\varepsilon} = \text{constant}) = 1\,.
\end{equation}
Furthermore, we provide an implicit expression for this constant, otherwise known as the average complexity, and in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx} an explicit expression under further assumptions. The rest of this section provides a proof of this result.
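Before turning to the proof, a minimal simulation conveys the phenomenon. The sketch below assumes Gaussian data (i.i.d.\ standard normal entries of ${\boldsymbol A}$) and the least-squares objective $f({\boldsymbol x}) = \tfrac{1}{2n}\|{\boldsymbol A} {\boldsymbol x} - {\boldsymbol b}\|^2$, runs gradient descent from several random seeds, and records the empirical halting time $T_{\varepsilon}$; for large $d$ the printed values should be nearly identical across seeds. All parameter choices ($n$, $d$, $R$, $\widetilde{R}$, $\varepsilon$) are hypothetical.
\begin{verbatim}
import numpy as np

def halting_time(n, d, seed, eps=1e-4, R=1.0, Rtil=0.5):
    # empirical T_eps for gradient descent on f(x) = ||A x - b||^2 / (2n)
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, d))
    x_tilde = rng.standard_normal(d) * (R / np.sqrt(d))
    eta = rng.standard_normal(n) * Rtil
    b = A @ x_tilde + eta
    H = A.T @ A / n
    g0 = A.T @ b / n
    step = 1.0 / np.linalg.eigvalsh(H).max()
    x = np.zeros(d)
    grad = H @ x - g0
    for k in range(1, 100000):
        x = x - step * grad
        grad = H @ x - g0
        if grad @ grad <= eps:
            return k

print([halting_time(n=1000, d=500, seed=s) for s in range(5)])
\end{verbatim}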
\subsection{First-order methods as polynomials} \label{apx: GD_poly}
\begin{proposition}[Residual polynomials and gradients] \label{prop:gradient_polynomial} Suppose the iterates $\{{\boldsymbol x}_k\}_{k=0}^\infty$ are generated from a gradient-based method. Let $\{P_k\}_{k=0}^\infty$ be a sequence of polynomials defined in \eqref{eq:recursive_noise_poly}. Then the following identity holds between the iterates and their residual polynomial,
\begin{equation} \begin{gathered} \label{eq:grad_optimality_cond_app}
\| \nabla f({\boldsymbol x}_k) \|^2 = ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{n} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \\
-2({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}. \nonumber
\end{gathered}
\end{equation}
\end{proposition}
\begin{proof} The gradient in \eqref{eq:LS} is given by the expression $\nabla f({\boldsymbol x}_k) = {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}}) - \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}$. The result follows immediately by plugging \eqref{eq:recursive_noise_poly_1} into the formula for the gradient and using the relationship that ${\boldsymbol H}^2 Q_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) -2 {\boldsymbol H} Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) + {\boldsymbol I} = ({\boldsymbol I} - {\boldsymbol H} Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}))^2 = P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})$.
\end{proof}
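Before moving on, this identity is easy to check numerically for gradient descent, whose residual polynomial is $P_k(\lambda) = (1-\alpha \lambda)^k$ for step size $\alpha$ (a standard fact assumed here); all dimensions and scales below are hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, k, R, Rtil = 600, 300, 15, 1.0, 0.5
A = rng.standard_normal((n, d))
x_tilde = rng.standard_normal(d) * (R / np.sqrt(d))
eta = rng.standard_normal(n) * Rtil
H = A.T @ A / n
noise = A.T @ eta / n

# run gradient descent for k steps from x_0 = 0
alpha = 1.0 / np.linalg.eigvalsh(H).max()
x = np.zeros(d)
for _ in range(k):
    x = x - alpha * (H @ (x - x_tilde) - noise)
direct = H @ (x - x_tilde) - noise            # gradient at x_k

# same squared norm via the residual-polynomial identity
evals, V = np.linalg.eigh(H)
Pk = (1.0 - alpha * evals) ** k
u, v = V.T @ (-x_tilde), V.T @ noise          # x_0 - x_tilde = -x_tilde
via_poly = np.sum((evals * Pk * u - Pk * v) ** 2)
print(direct @ direct, via_poly)   # should agree up to rounding error
\end{verbatim}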
This \textit{equality} for the squared norm of the gradient is crucial for deriving average-case rates. In contrast, worst-case analysis typically uses only \textit{bounds} on the norm. A difficulty with the polynomials $P_k$ and $Q_k$ is that their coefficients depend on the largest and smallest eigenvalue of ${\boldsymbol H}$, and hence are random. We can remove this randomness thanks to Assumption~\ref{assumption: spectral_density}, replacing $\lambda_{\boldsymbol H}^+$ and $\lambda_{\boldsymbol H}^-$ with the top (bottom) edge of the support of $\mu$, denoted by $\lambda^+$ and $\lambda^-$, without loss of generality.
\begin{proposition}[Remove randomness in coefficients of polynomial] \label{proposition: norm} Suppose Assumption~\ref{assumption: spectral_density} holds. Fix any $k$-degree polynomial $\widetilde{P}_k$ whose coefficients depend continuously on the largest and smallest eigenvalues of ${\boldsymbol H}$. Then the following hold
\begin{equation} \| \widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|^2_{\text{\rm op}} \Prto[d] 0\,. \end{equation}
\end{proposition}
\begin{proof} Fix any $\varepsilon, \delta > 0$. Let $c_i(\cdot)$ where $i = 0, \hdots, k$ be the coefficients associated with the term of degree $i$ in $\widetilde{P}_k({\boldsymbol H}; \cdot)$. For each $i$, the continuity of $c_i(\cdot)$ implies there exists $\delta_{\varepsilon} > 0$ such that
\begin{equation} \text{whenever} \quad \|(\lambda_{{\boldsymbol H}}^+, \lambda_{{\boldsymbol H}}^-)-(\lambda^+, \lambda^-)\| \le \delta_{\varepsilon} \quad \Rightarrow \quad |c_i(\lambda_{{\boldsymbol H}}^{\pm})-c_i(\lambda^{\pm})| \le \frac{\varepsilon}{4 (4\lambda^+)^i}\,. \end{equation}
For sufficiently large $d$, Assumption~\ref{assumption: spectral_density} implies $\Pr \big (|\lambda_{{\boldsymbol H}}^+-\lambda^+| > \min\{\tfrac{\delta_{\varepsilon}}{2}, \lambda^+\} \big ) \le \tfrac{\delta}{2}$ and $\Pr \big (|\lambda_{{\boldsymbol H}}^--\lambda^-| > \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+\} \big ) \le \tfrac{\delta}{2}$.
With this, we define the event
$\mathcal{S} = \{ | \lambda_{{\boldsymbol H}}^+ - \lambda^+| \le \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+ \} \} \cap \{ | \lambda_{{\boldsymbol H}}^- - \lambda^-| \le \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+ \} \}.$ We have for all sufficiently large $d$
\begin{align}
\Pr \big ( \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-&\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big )
= \Pr \big ( \mathcal{S} \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \nonumber \\
&\qquad \qquad + \Pr \big ( \mathcal{S}^c \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \nonumber\\
&\le \Pr \big ( \mathcal{S} \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) + \delta. \label{eq:rand_feat_blah_1}
\end{align}
Here we used that $\Pr \big ( \mathcal{S}^c \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \le \Pr(\mathcal{S}^c) \le \delta$ for large $d$. Therefore, it suffices to consider the first term in \eqref{eq:rand_feat_blah_1} and show that it is $0$. By construction of the set $\mathcal{S}$, any element in $\mathcal{S}$ satisfies both $\| {\boldsymbol H} \|_{\text{op}} \le 2 \lambda^+$ and $|c_i(\lambda_{{\boldsymbol H}}^{\pm})- c_i(\lambda^{\pm})| \le \tfrac{\varepsilon}{4 (4\lambda^+)^i}$. Hence on the event $\mathcal{S}$, we have the following
\begin{align}
\|\widetilde{P}_k({\boldsymbol H}, \lambda_{{\boldsymbol H}}^{\pm} ) - \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|_{\text{op}} \le \sum_{i=0}^k | c_i(\lambda_{{\boldsymbol H}}^{\pm})-c_i(\lambda^{\pm})| \|{\boldsymbol H}\|_{\text{op}}^i \le \sum_{i=0}^k \frac{ (2\lambda^+)^i \varepsilon}{4 (4 \lambda^+)^i} \le \frac{\varepsilon}{2}\,.
\end{align}
From this, we deduce that $\Pr \big (\mathcal{S} \cap \{ \| \widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|_{\text{op}} > \varepsilon \} \big ) = 0$ and the result immediately follows by \eqref{eq:rand_feat_blah_1}.
\end{proof}
The squared norm of the gradient in \eqref{eq:grad_optimality_cond_app} is a quadratic form. In Proposition~\ref{proposition: norm}, we removed the randomness in the coefficients of the polynomial and now we will relate this back to the squared norm of the gradient, and particularly, the quadratic form. The following lemmas state this precisely.
\begin{lemma} \label{lemma: probability_lemma} Suppose the sequences of non-negative random variables $X_{d}, Y_{d} \ge 0$ satisfy $\mathbb{E}[X_{d}] \le \gamma < \infty$ and $Y_{d} \Prto[d] 0$. Then $X_{d} Y_{d} \Prto[d] 0$.
\end{lemma}
\begin{proof} Fix constants $\varepsilon, \delta > 0$ and suppose we set $\hat{\varepsilon} = \frac{\varepsilon \delta}{2\gamma}$ and $\hat{\delta} = \frac{\delta}{2}$. Because $Y_d$ converges in probability, we have $\Pr(Y_{d} > \hat{\varepsilon}) \le \hat{\delta}$ for sufficiently large $d$. Define the event $\mathcal{S} = \{Y_d \le \hat{\varepsilon} \}$ and decompose the space based on this set $\mathcal{S}$ so that for large $d$
\begin{align*}
\Pr(X_d Y_d > \varepsilon) = \Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) + \Pr(\mathcal{S}^c \cap \{X_d Y_d > \varepsilon \})
\le \Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) + \tfrac{\delta}{2}.
\end{align*}
Here we used that $\Pr(\mathcal{S}^c \cap \{X_d Y_d > \varepsilon\}) \le \Pr(\mathcal{S}^c)$. For the other term, a direct application of Markov's inequality yields
\begin{align*}
\Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) \le \Pr(\mathcal{S} \cap \{\hat{\varepsilon} X_d > \varepsilon\}) \le \tfrac{\hat{\varepsilon}}{\varepsilon} \cdot \mathbb{E}[X_d] \le \tfrac{\delta}{2}.
\end{align*}
The result immediately follows.
\end{proof}
\begin{lemma}[Remove randomness in coefficients of quadratic form]\label{proposition: remove_norm} Suppose Assumption~\ref{assumption: spectral_density} holds and let ${\boldsymbol w} \in \mathbb{R}^d$ and ${\boldsymbol v} \in \mathbb{R}^d$ be random vectors satisfying ${\mathbb E}\,[\|{\boldsymbol w}\|_2^2] = R^2$ and ${\mathbb E}\,[\|{\boldsymbol v}\|_2^2] = \widetilde{R}^2$ for some constants $R, \widetilde{R} > 0$.
For any $k$ degree polynomial $\widetilde{P}_k$ whose coefficients depend continuously on $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, the quadratic form converges in probability
\begin{align*}
{\boldsymbol w}^T & \widetilde{P}_k \left ({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) {\boldsymbol v} - {\boldsymbol w}^T \widetilde{P}_k \left ({\boldsymbol H}; \lambda^{\pm} \right ) {\boldsymbol v} \Prto[d] 0.
\end{align*}
\end{lemma}
\begin{proof} Using the Cauchy-Schwarz inequality, it suffices to show that for every $\varepsilon > 0$ we have
\begin{align*}
\lim_{d \to \infty} \Pr \left (\|{\boldsymbol w}\|_2 \cdot \|{\boldsymbol v}\|_2 \cdot \big \| \widetilde{P}_k \left ( {\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) - \widetilde{P}_k \left ( {\boldsymbol H}; \lambda^{\pm} \right ) \big \|_{\text{op}} > \varepsilon \right ) = 0\,.
\end{align*}
Define $X_{d} = \|{\boldsymbol w}\|_2 \|{\boldsymbol v}\|_2$ and $Y_{d} = \big \|\widetilde{P}_k \left ( {\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) - \widetilde{P}_k \left ( {\boldsymbol H}; \lambda^{\pm} \right ) \big \|_{\text{op}}$. Proposition~\ref{proposition: norm} immediately gives that $Y_{d} \Prto[d] 0$. Next, Cauchy--Schwarz implies
\[{\mathbb E}\,[X_d] = {\mathbb E}\,[ \|{\boldsymbol w}\|_2 \|{\boldsymbol v}\|_2] \le {\mathbb E}\,[\|{\boldsymbol w}\|_2^2]^{1/2} {\mathbb E}\,[\|{\boldsymbol v}\|_2^2]^{1/2} = R \widetilde{R}.\]
The result immediately follows after applying Lemma~\ref{lemma: probability_lemma}.
\end{proof}
From Lemma~\ref{proposition: remove_norm} and the expression for the squared norm of the gradient in \eqref{eq:grad_optimality_cond_app}, we can replace the maximum (minimum) eigenvalue $\lambda_{{\boldsymbol H}}^+$ $(\lambda_{{\boldsymbol H}}^-)$ in \eqref{eq:grad_optimality_cond_app} with the top (bottom) edge of the support of $\mu$, $\lambda^+$ ($\lambda^-$). This follows because the vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ play the roles of ${\boldsymbol w}$ and ${\boldsymbol v}$ in Lemma~\ref{proposition: remove_norm} and the terms surrounding these vectors in \eqref{eq:grad_optimality_cond_app} are polynomials in ${\boldsymbol H}$.
\subsection{Concentration of the gradient}
Having established the key equation linking the gradient to a polynomial in Proposition~\ref{prop:gradient_polynomial}, we now show that for almost any large model the magnitude of the gradient after $k$ iterations converges to a deterministic value which we denote by $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$. We recall the statement of Theorem~\ref{thm: concentration_main}:
\noindent \textbf{Theorem.} \rm{(Concentration of the gradient)} \textit{
Under Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \label{eq: something_1} \vspace{0.25cm}
\hspace{-0.28cm} \! \|\nabla f({\boldsymbol x}_k)\|^2 \! \! \Prto[d] \! \! \! \textcolor{teal}{\overbrace{R^2}^{\text{signal}}} \! \! \! \! \int \! { \underbrace{\lambda^2 P_k^2(\lambda; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \! \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } \! r \! \! \int \! { \underbrace{\lambda P_k^2(\lambda; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} } \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,. \!
\end{equation}
}
Intuitively, the value of $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ is the expected gradient after first taking the model size to infinity. The above expression explicitly illustrates the effects of the model and the algorithm on the norm of the gradient: the \textcolor{teal}{signal ($R^2$)} and \textcolor{purple}{noise ($\widetilde{R}^2$)}, the {optimization algorithm} which enters into the formula through the polynomial $P_k$, and the \textcolor{mypurple}{model used to generate ${\boldsymbol A}$} by means of the measure $\mu$.
The main tool to prove Theorem~\ref{thm: concentration_main} is the moment method which requires computing explicit expressions for the moments of the norm of the gradient. We summarize this in the following proposition.
To ease notation in the next few propositions, we define the following matrices and vectors
\begin{equation} \begin{gathered} \label{eq:blah_10}
\quad {\boldsymbol u} \stackrel{\text{def}}{=} {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}, \quad {\boldsymbol B} \stackrel{\text{def}}{=} {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}), \quad {\boldsymbol C} \stackrel{\text{def}}{=} P_k^2({\boldsymbol H}; \lambda^{\pm}),\\
\text{and} \quad {\boldsymbol D} \stackrel{\text{def}}{=} -2 {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda^{\pm})
\end{gathered}
\end{equation}
and let $y_k$ be the quadratic form given by
\begin{equation}\label{eq: norm_with_noise1}
y_k \stackrel{\text{def}}{=} {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} + \tfrac{1}{n} {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} + \tfrac{1}{n^2} {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}.
\end{equation}
Observe that the value $y_k$ is simply $\|\nabla f({\boldsymbol x}_k)\|^2$ in \eqref{eq:grad_optimality_cond_app} with $\lambda_{{\boldsymbol H}}^{\pm}$ replaced with $\lambda^{\pm}$.
\begin{proposition} \label{proposition:conditional} Suppose the matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ satisfy Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density}. Let $P_k$ be the $k$-degree polynomial defined in \eqref{eq:recursive_noise_poly}. Using the notation in \eqref{eq:blah_10} and \eqref{eq: norm_with_noise1}, the following holds for any $\varepsilon > 0$
\begin{equation} \begin{aligned} \label{eq:conditional}
\Pr \big ( | y_k - \big [ R^2 \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}}{d} \big ) &+ \tilde{R}^2 \text{ \rm tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big )\big ] | > \varepsilon \, \big | \, {\boldsymbol H} \big ) \\
&\le \tfrac{1}{\varepsilon^2} \left ( \tfrac{C-R^4}{d} \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}^2}{d} \big ) + \tfrac{\tilde{C}-\tilde{R}^4}{n} \text{ \rm tr} \big ( \tfrac{({\boldsymbol C} {\boldsymbol H})^2}{n} \big ) + \tfrac{ R^2 \tilde{R}^2}{n} \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ] \right ).
\end{aligned} \end{equation}
Without loss of generality, we assume that the constants $C$ and $\widetilde{C}$ are large enough such that $C > 3 R^4$ and $\widetilde{C} > 3 \widetilde{R}^4$.
\end{proposition}
\begin{proof} We can write any quadratic form as ${\boldsymbol w}^T {\boldsymbol F} {\boldsymbol z} = \sum_{i,j} w_i z_j F_{ij}$. Expanding the quadratic forms, the following holds
\begin{align}
{\mathbb E}\,[y_k \, | \, {\boldsymbol H}] = {\mathbb E}\,[{\boldsymbol u}^T{\boldsymbol B} {\boldsymbol u} \, &| \, {\boldsymbol H}] + \tfrac{1}{n} {\mathbb E}\,[ {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H}] + \tfrac{1}{n^2} {\mathbb E}\,[ {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H}]\\
\text{(ind. of ${\boldsymbol \eta}$ and ${\boldsymbol u}$, ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$)} \quad &= {\mathbb E}\, \big [ \sum_{i,j} u_i u_j B_{ij} \, | \, {\boldsymbol H} \big ] + \tfrac{1}{n^2} {\mathbb E}\, \big [ \sum_{i,j} \eta_i \eta_j ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij} \, | \, {\boldsymbol H} \big ] \\
\text{ (isotropic prop. of ${\boldsymbol \eta}$ and ${\boldsymbol u}$)} \quad &= R^2 \cdot \sum_i \tfrac{B_{ii}}{d} + \widetilde{R}^2 \cdot \sum_i \tfrac{({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}}{n^2}\\
&= R^2 \cdot \tfrac{\text{tr}({\boldsymbol B})}{d} + \widetilde{R}^2 \cdot \tfrac{\text{tr}({\boldsymbol C} {\boldsymbol H})}{n}.
\end{align}
In the last equality, we used that $\text{tr}({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T) = \text{tr}({\boldsymbol C} {\boldsymbol A}^T {\boldsymbol A}) = n \cdot \text{tr}({\boldsymbol C} {\boldsymbol H})$.
To prove \eqref{eq:conditional}, we will use Chebyshev's inequality; hence we need to compute the $\text{Var} \big ( y_k | {\boldsymbol H} \big ) = \mathbb{E} \big [ y^2_k | {\boldsymbol H} \big ] - \big (\mathbb{E} [y_k | {\boldsymbol H}] \big )^2$. First, a simple computation yields that
\begin{equation} \label{eq:variance_noise_11}
\big ( {\mathbb E}\,[ y_k | {\boldsymbol H} ] \big )^2 = \underbrace{\big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2}_{(i)} + \underbrace{ \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]^2}_{(ii)} + \underbrace{2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]}_{(iii)}.
\end{equation}
Next, we compute ${\mathbb E}\,[y^2_k | {\boldsymbol H}]$. By expanding out the terms in \eqref{eq: norm_with_noise1}, we get the following
\begin{equation} \begin{aligned} \label{eq:variance_noise_22}
{\mathbb E}\,[y^2_k | {\boldsymbol H}] &= \underbrace{{\mathbb E}\, [ ({\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u})^2 | {\boldsymbol H} ]}_{(a)} + \underbrace{ {\mathbb E}\, \big [ \left ( \tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \right )^2 | {\boldsymbol H} \big ] }_{(b)} + \underbrace{ {\mathbb E}\, \big [ \tfrac{2 {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \, | {\boldsymbol H} \big ] }_{(c)} \\
& \qquad + \underbrace{ {\mathbb E}\, \big [ \left ( \tfrac{{\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}}{n} \right )^2 | {\boldsymbol H} \big ] }_{(d)} + \underbrace{ {\mathbb E}\, \big [ 2 \left ( {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \right ) \cdot \tfrac{{\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}}{n} \, | {\boldsymbol H} \big ]}_{(e)}.
\end{aligned} \end{equation}
To compute the variance of $y_k$, we take \eqref{eq:variance_noise_22} and subtract \eqref{eq:variance_noise_11}. Since this is quite a long expression, we will match up terms and compute these terms individually. First consider the terms (a) and (i) in equations~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11} respectively. By expanding out the square, we get
\begin{align*}
\text{Var}({\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} | {\boldsymbol H}) = {\mathbb E}\, \big [ \left ( {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \right )^2 | {\boldsymbol H} \big ] - \big [\tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2= \sum_{i,j,k,\ell} {\mathbb E}\,[u_i u_j u_k u_{\ell}] B_{ij} B_{k \ell} - \big [\tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2.
\end{align*}
We need each index to appear exactly twice in the above for its contribution to be non-negligible since ${\mathbb E}\,[u_i^2] = \tfrac{R^2}{d}$ and ${\mathbb E}\,[{\boldsymbol u}] = \bm{0}$. There are four possible ways in which this can happen: $\{i = j = k = \ell\}$, $\{i = j, k = \ell, k \neq i\}$, $\{i = k, j = \ell, i \neq j\}$, or $\{i = \ell, j = k, i \neq j\}$. By the symmetry of the ${\boldsymbol B}$ matrix, the last two cases are identical. Noting that ${\mathbb E}\,[u_i^4] \le \tfrac{C}{d^2}$ and ${\mathbb E}\,[u_i^2] = \tfrac{R^2}{d}$, we, consequently, get the following expression for the variance
\begin{equation} \begin{aligned} \label{eq:blah_23}
\text{Var}({\boldsymbol u}^T &{\boldsymbol B} {\boldsymbol u} \, | \, {\boldsymbol H}) = \sum_{i} {\mathbb E}\,[u_i^4] \cdot B_{ii}^2 + \sum_{i \neq j} {\mathbb E}\,[u_i^2] \cdot {\mathbb E}\,[u_j^2] \cdot \left (B_{ii} B_{jj} + 2 B_{ij}^2 \right )- \tfrac{R^4}{d^2} [\text{tr}({\boldsymbol B})]^2\\
&\le \frac{C-R^4}{d^2} \cdot \sum_i B_{ii}^2 + \frac{2R^4}{d^2} \sum_{i \neq j} B_{ij}^2 + \frac{R^4}{d^2} \big ( \sum_i B_{ii}^2 + \sum_{i \neq j} B_{ii} B_{jj} - [\text{tr}({\boldsymbol B})]^2 \big )\\
&= \frac{C-R^4}{d^2} \cdot \sum_{i} B_{ii}^2 + \frac{2R^4}{d^2} \sum_{i \neq j} B_{ij}^2 \\
&\le \frac{C-R^4}{d^2} \cdot \big (\sum_{i} B_{ii}^2 + \sum_{i \neq j} B_{ij}^2 \big ) = \frac{C-R^4}{d} \cdot \left [ \frac{ \text{tr}({\boldsymbol B}^2)}{d} \right ].
\end{aligned} \end{equation}
In the second equality, we used that $\sum_i B_{ii}^2 + \sum_{i \neq j} B_{ii} B_{jj} = [\text{tr}({\boldsymbol B})]^2$ and in the second inequality we can without loss of generality choose $C$ so that $C > 3R^4$. Finally, we used that $\sum_i B_{ii}^2 + \sum_{i \neq j} B_{ij}^2 = \text{tr}({\boldsymbol B}^2)$.
Next, we consider the terms (b) and (ii) in equations~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11} respectively.
Similar to the previous case, by expanding out the square, we get the following
\begin{align*}
\text{Var} \big (\tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \, | \, {\boldsymbol H}\big ) = {\mathbb E}\, \big [ \frac{1}{n^4} \sum_{i,j,k,\ell} \eta_i \eta_j \eta_k \eta_{\ell} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{k \ell} \, | \, {\boldsymbol H} \big ] - \big [\tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]^2.
\end{align*}
Because of independence, isotropic variance ${\mathbb E}\,[\eta_i^2] =\widetilde{R}^2$, and mean ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$, we need each index to appear exactly twice in the above expression in order for its contribution to be non-negligible. There are four possible ways in which this can happen: $\{i = j= k = \ell\}, \{i = j, k = \ell, k \neq i\}, \{ i = k, j = \ell, i \neq j \}$, or $\{i = \ell, j = k, i \neq j\}$. As before, we have the following expression for the variance
\begin{equation}
\begin{aligned} \label{eq:noisy_GD_blah1}
\text{Var} &\big ( \tfrac{1}{n^2} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H} \big ) \le \tfrac{\widetilde{C}}{n^4} \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \tfrac{\widetilde{R}^4}{n^4} \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj}\\
& \qquad \qquad \qquad \qquad + \tfrac{2 \widetilde{R}^4}{n^4} \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij}^2 - \tfrac{\widetilde{R}^4}{n^2} [ \text{tr}({\boldsymbol C} {\boldsymbol H}) ]^2\\
&= \tfrac{\widetilde{C}-\widetilde{R}^4}{n^4} \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \tfrac{\widetilde{R}^4}{n^4} \big [ \big ( \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj} \big ) \big ] \\
& \qquad \qquad \quad + \tfrac{2 \tilde{R}^4}{n^4 } \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij}^2 - \tfrac{\tilde{R}^4}{n^2} [ \text{tr}({\boldsymbol C} {\boldsymbol H}) ]^2\\
&\le \tfrac{\widetilde{C}-\widetilde{R}^4}{n^4} \big [ \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)^2_{ij} \big ] = \tfrac{\widetilde{C}-\widetilde{R}^4}{n} \cdot \big [ \tfrac{\text{tr}( ({\boldsymbol C} {\boldsymbol H})^2 )}{n} \big ].
\end{aligned}
\end{equation}
Here we can without loss of generality choose $\widetilde{C}$ so that $\widetilde{C} > 3 \widetilde{R}^4$. Next, we compare (c) and (iii) in equation~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11}, respectively. We begin by expanding out (c) in equation~\eqref{eq:variance_noise_22} which yields
\begin{align*}
{\mathbb E}\, \big [ \tfrac{2}{n^2} \cdot {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H} \big ]= {\mathbb E}\, \big [ \tfrac{2}{n^2} \big ( \sum_{i,j} u_i B_{ij} u_j \big ) \big ( \sum_{k, \ell} \eta_k ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{k \ell} \eta_\ell \big ) \, | \, {\boldsymbol H} \big ].
\end{align*}
The only terms which contribute are when $i = j$ and $k = \ell$. Therefore, we deduce the following
\begin{equation} \begin{aligned} \label{eq:blah_20}
\tfrac{2}{n^2} \cdot {\mathbb E}\, \big [{\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, & {\boldsymbol H} \big ] - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]\\
&= \tfrac{2\widetilde{R}^2 R^2}{n^2 d} \sum_{i,j} B_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj} - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]\\
&= \tfrac{2\widetilde{R}^2 R^2}{n d} \big [ \text{tr}({\boldsymbol B}) \text{tr}({\boldsymbol C} {\boldsymbol H}) \big ] - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ] = 0.
\end{aligned} \end{equation}
We have now used up all the terms in \eqref{eq:variance_noise_11}; we will show that the remaining terms, (d) and (e), in \eqref{eq:variance_noise_22} themselves already go to $0$ as $d \to \infty$. Again expanding the term (d), we get
\begin{equation}
{\mathbb E}\, \big [ \tfrac{1}{n^2} \big ( {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \big )^2 \, | \, {\boldsymbol H} \big ] = {\mathbb E}\, \big [ \tfrac{1}{n^2} \big ( \sum_{i,j} u_i ({\boldsymbol D} {\boldsymbol A}^T)_{ij} \eta_j \big )^2 \, | \, {\boldsymbol H} \big ].
\end{equation}
By independence and isotropic variance of ${\boldsymbol u}$ and ${\boldsymbol \eta}$, the only terms which remain after taking expectations are the ones with $u_i^2$ and $\eta_j^2$ terms. Therefore, we deduce \begin{equation}
\begin{aligned} \label{eq:GD_noisy_blah_22}
\tfrac{1}{n^2} {\mathbb E}\, \big [ \big ( {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \big )^2 \, | \, {\boldsymbol H} \big ] &= \tfrac{1}{n^2} \sum_{i,j} {\mathbb E}\, \big [u_i^2 \big ] \cdot {\mathbb E}\,[ \eta_j^2 ] \cdot ({\boldsymbol D} {\boldsymbol A}^T)_{ij}^2 = \tfrac{ R^2 \widetilde{R}^2}{n^2 d} \sum_{i,j} ({\boldsymbol D} {\boldsymbol A}^T)_{ij}^2\\
&= \tfrac{ R^2 \widetilde{R}^2}{n} \cdot \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ].
\end{aligned}
\end{equation}
The only term which remains in \eqref{eq:variance_noise_22} is (e). Since ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$, the term ${\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol u}^T\tfrac{{\boldsymbol D} {\boldsymbol A}^T }{n} {\boldsymbol \eta}$ contributes nothing to the expectation. Similarly since ${\mathbb E}\,[{\boldsymbol u}] = \bm{0}$, the term $\tfrac{1}{n^3} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \cdot {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}$ is also zero in expectation.
Putting all the quantities \eqref{eq:blah_23}, \eqref{eq:noisy_GD_blah1}, \eqref{eq:blah_20}, \eqref{eq:GD_noisy_blah_22} together with \eqref{eq:variance_noise_11} and \eqref{eq:variance_noise_22}, a straightforward application of Chebyshev's inequality yields the result.
\end{proof}
The only difference between $\|\nabla f({\boldsymbol x}_k)\|^2$ and $y_k$ is that the coefficients of the polynomials in $\|\nabla f({\boldsymbol x}_k)\|^2$ depend continuously on $\lambda^{\pm}_{{\boldsymbol H}}$ while the coefficients of $y_k$ depend on $\lambda^{\pm}$. The polynomials $P_k$ and $Q_k$ together with Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} ensure that all the conditions of Lemma~\ref{proposition: remove_norm} hold by setting ${\boldsymbol w}$ and ${\boldsymbol v}$ to combinations of ${\boldsymbol u}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ and the polynomials to ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$. Therefore we have $| \|\nabla f({\boldsymbol x}_k)\|^2 - y_k | \Prto[d] 0$, so we can replace $y_k$ with $\|\nabla f({\boldsymbol x}_k)\|^2$. The proof of Proposition~\ref{proposition:conditional} shows that, conditioned on ${\boldsymbol H}$, the variance $\text{Var}(\|\nabla f({\boldsymbol x}_k)\|^2 \, | \, {\boldsymbol H})$ is $\mathcal{O}( \tfrac{1}{d})$ and
\begin{equation} \label{eq:something_3_1} {\mathbb E}\,[ \|\nabla f({\boldsymbol x}_k)\|^2 | {\boldsymbol H}] = R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \tilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ).
\end{equation}
Consequently, conditioned on ${\boldsymbol H}$, the squared norm of the gradient is roughly \eqref{eq:something_3_1}. In view of this, it suffices to understand the expected traces of polynomials in ${\boldsymbol H}$. Random matrix theory studies convergence properties of the limiting distributions of high-dimensional matrices, particularly the empirical spectral measure (ESM). An important tool, derived from Assumption~\ref{assumption: spectral_density}, linking the ESM and the expected trace to the moments of the measure $\mu$ is given below.
\begin{proposition}[Convergence of ESM] \label{proposition: moments} Let $\widetilde{P}_k$ be any $k$-degree polynomial. Under Assumption~\ref{assumption: spectral_density}, the following is true
\begin{align*}
\tfrac{1}{d}\text{\rm tr}\, \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) = \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} \Prto[d] \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \mathop{}\!\mathrm{d}\mu\,.
\end{align*}
\end{proposition}
\begin{proof}
Fix constants $\varepsilon, \delta, \hat{\varepsilon} > 0$. For sufficiently large $d$, Assumption~\ref{assumption: spectral_density} says $\Pr(\lambda_{{\boldsymbol H}}^+ > \lambda^+ + \hat{\varepsilon}) \le \frac{\delta}{2}$. Define the event $\mathcal{S} = \{ \lambda_{{\boldsymbol H}}^+ \le \lambda^+ + \hat{\varepsilon} \}$. We construct a bounded, continuous function $h$ by
\[h(\lambda) = \begin{cases}
\widetilde{P}_k(0; \lambda^{\pm}), & \text{if $\lambda < 0$}\\
\widetilde{P}_k(\lambda; \lambda^{\pm}), & \text{if $0 \le \lambda \le \lambda^+ + \hat{\varepsilon}$}\\
\widetilde{P}_k(\lambda^+ + \hat{\varepsilon}; \lambda^{\pm}), & \text{otherwise}.
\end{cases}\]
Because the function $h$ is bounded and continuous, Assumption~\ref{assumption: spectral_density} guarantees that
\begin{equation} \label{eq:rand_feature_blah_4}
\Pr \big ( \big |\int h(\lambda) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu \, \big | > \varepsilon \big ) \le \delta.
\end{equation}
Depending on whether $\mathcal{S}$ has occurred, we have for all sufficiently large $d$
\begin{align}
\Pr \big ( \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int &\widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \big ) \nonumber\\
&= \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \nonumber \\
& \qquad + \Pr \big (\mathcal{S}^c \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \nonumber \\
&\le \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) + \tfrac{\delta}{2}. \label{eq: rand_feature_blah_3}
\end{align}
In the last line, the probability $\Pr \big (\mathcal{S}^c \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \le \Pr(\mathcal{S}^c) \le \tfrac{\delta}{2}$ for large $d$. Hence, we consider only the first term in \eqref{eq: rand_feature_blah_3}. By construction, for any element in $\mathcal{S}$ it is clear that $h(\lambda) = \widetilde{P}_k(\lambda)$. For sufficiently large $d$, equation \eqref{eq:rand_feature_blah_4} yields
\[ \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \le \Pr \big ( \big | \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \big ) \le \frac{\delta}{2}.\]
The result follows after combining with \eqref{eq: rand_feature_blah_3}.
\end{proof}
Now that we have described the main components of our argument, we present a preliminary concentration result for the gradient.
\begin{proposition}\label{thm: probability_convergence} Suppose the vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ and the matrix ${\boldsymbol A}$ satisfy Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density}, respectively.
The following holds
\begin{align} \label{eq:grad_convergence_prob}
\big | \|\nabla f({\boldsymbol x}_k)\|^2 - \big ( \underbrace{\textcolor{teal}{R^2} \textcolor{black}{\tfrac{1}{d} \text{\rm tr}({\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}) )}}_{\text{signal}} + \underbrace{\textcolor{purple}{\widetilde{R}^2} \textcolor{black}{\tfrac{1}{n} \text{\rm tr}({\boldsymbol H} P_k^2({\boldsymbol H} ; \lambda^{\pm}))}}_{\text{noise}} \big ) \big | \Prto[d] 0.
\end{align}
\end{proposition}
\begin{proof}
Recall the definitions in \eqref{eq:blah_10} and \eqref{eq: norm_with_noise1} and equation \eqref{eq:grad_optimality_cond_app}. We note that the only difference between $\|\nabla f({\boldsymbol x}_k)\|^2$ and $y_k$ is that the coefficients of the polynomials in $\|\nabla f({\boldsymbol x}_k)\|^2$ continuously depend on $\lambda_{{\boldsymbol H}}^{\pm}$ while the coefficients in $y_k$ depend on $\lambda^{\pm}$. The polynomials $P_k$ and $Q_k$ together with Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} ensure that all the conditions of Lemma~\ref{proposition: remove_norm} hold by setting ${\boldsymbol w}$ and ${\boldsymbol v}$ to combinations of ${\boldsymbol u}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ and the polynomials to ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$. Therefore we have $| \|\nabla f({\boldsymbol x}_k)\|^2 - y_k | \Prto[d] 0$ so it suffices to prove \eqref{eq:grad_convergence_prob} with $\|\nabla f({\boldsymbol x}_k)\|^2$ replaced by $y_k$.
Fix constants $\varepsilon, \delta > 0$. Proposition~\ref{proposition: moments} guarantees convergence in probability of any expected trace to a constant which depends on the polynomial and the deterministic measure $\mu$. This together with the definitions of ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$ yield for sufficiently large $d$
\begin{equation}\label{eq:bound_traces}
\begin{gathered}
\Pr \big ( \big | \tfrac{\text{tr}({\boldsymbol B}^2)}{d} \big | > M_1 \stackrel{\text{def}}{=} \varepsilon + \int \lambda^4 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6},\\
\Pr \big ( \big | \tfrac{\text{tr}(({\boldsymbol C} {\boldsymbol H})^2)}{n} \big | > M_2 \stackrel{\text{def}}{=} \varepsilon + r \int \lambda^2 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6},\\
\text{and} \quad \Pr \big ( \big | \tfrac{\text{tr}({\boldsymbol D}^2 {\boldsymbol H})}{d} \big | > M_3 \stackrel{\text{def}}{=} \varepsilon + 4\int \lambda^3 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6}.
\end{gathered}
\end{equation}
We define the set $\mathcal{S}$ for which the expected traces of the random matrices are bounded, namely,
\[ \mathcal{S} = \big \{ \big | \tfrac{\text{tr}({\boldsymbol B}^2)}{d} \big | \le M_1 \big \} \cap \big \{ \big | \tfrac{\text{tr}(({\boldsymbol C} {\boldsymbol H})^2)}{n} \big | \le M_2 \big \} \cap \big \{ \big | \tfrac{\text{tr}({\boldsymbol D}^2 {\boldsymbol H})}{d} \big | \le M_3 \big \}, \]
and we observe because of \eqref{eq:bound_traces} that the probability $\Pr(\mathcal{S}^c) \le \frac{\delta}{2}$. The total law of probability yields the following
\begin{align}
\Pr \big ( \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) &+ \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big ) = \Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
& \qquad \qquad + \Pr \big ( \mathcal{S}^c \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
&\le \Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) + \tfrac{\delta}{2}. \label{eq:blah_30}
\end{align}
Hence it suffices to bound the first term in \eqref{eq:blah_30}. The idea is to condition on the matrix ${\boldsymbol H}$ and apply Proposition~\ref{proposition:conditional}. The law of total expectation yields
\begin{align}
\Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [\text{tr} &\big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
\text{(conditioned on ${\boldsymbol H}$)} \, \, \, &= {\mathbb E}\, \big [ 1_{\mathcal{S}} \Pr \big ( \big | y_k - \big [ \text{tr} \big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon | {\boldsymbol H} \big ) \big ] \nonumber \\
\text{(Proposition~\ref{proposition:conditional})} \, \, \, &\le \tfrac{1}{\varepsilon^2} {\mathbb E}\, \big [ 1_{\mathcal{S}} \left ( \tfrac{C-R^4}{d} \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}^2}{d} \big ) + \tfrac{\widetilde{C}-\widetilde{R}^4}{n} \text{ \rm tr} \big ( \tfrac{({\boldsymbol C} {\boldsymbol H})^2}{n} \big ) + \tfrac{ R^2 \widetilde{R}^2}{n} \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ] \right ) \big ]. \label{eq:blah_31}
\end{align}
Here $1_{\mathcal{S}}$ denotes the indicator of the event $\mathcal{S}$, that is, $1_{\mathcal{S}}(\omega) = 1$ if $\omega \in \mathcal{S}$ and $0$ otherwise. By construction of the event $\mathcal{S}$, each of the expected traces in \eqref{eq:blah_31} is bounded and therefore we deduce that
\[\Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [\text{tr} \big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) = \tfrac{1}{\varepsilon^2} \cdot \mathcal{O} \big ( \tfrac{1}{d} \big ). \]
By choosing $d$ sufficiently large, we can make the right hand side smaller than $\tfrac{\delta}{2}$. The result immediately follows from \eqref{eq:blah_30}.
\end{proof}
Proposition~\ref{thm: probability_convergence} reveals that for high-dimensional data the squared norm of the gradient $\|\nabla f({\boldsymbol x}_k)\|^2$ is a polynomial in the eigenvalues of the matrix ${\boldsymbol H}$. Every eigenvalue, not just the largest or smallest, appears in this formula \eqref{eq:grad_convergence_prob}. This means that first-order methods indeed see all of the eigenvalues of the matrix ${\boldsymbol H}$, not just the top or bottom one. However, the expected trace is still a random quantity due to its dependency on the random matrix. We remove this randomness and complete the proof of Theorem~\ref{thm: concentration_main} after noting that the moments of the empirical spectral measure converge in probability to a deterministic quantity, denoted by $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$.
\begin{proof}[Proof of Theorem~\ref{thm: concentration_main}]
Propositions~\ref{proposition: moments} and \ref{thm: probability_convergence} yield the result.
\end{proof}
\subsection{Halting time converges to a constant} \label{apx: halting_time_deterministic}
The concentration of the norm of the gradient in \eqref{eq: something_1} gives a candidate for the limiting value of the halting time $T_{\varepsilon}$. More precisely, we define this candidate $\tau_{\varepsilon}$ in terms of $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ and recall the halting time $T_{\varepsilon}$:
\begin{align}
\tau_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \le \varepsilon\} \quad \text{and} \quad T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{align}
We note that the deterministic value $\tau_{\varepsilon}$ is, by definition, the average complexity of GD whereas $T_{\varepsilon}$ is a random variable depending on randomness from the data, noise, signal, and initialization.
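For concreteness, the following minimal Python sketch measures $T_{\varepsilon}$ for gradient descent on random instances of the least squares problem; the sampling choices (${\boldsymbol x}_0 = {\boldsymbol{0}}$, Gaussian data, step size $1/\lambda_{{\boldsymbol H}}^+$) are illustrative assumptions rather than the exact experimental setup of Section~\ref{sec:numerical_simulations}.
\begin{verbatim}
import numpy as np

def halting_time(d, eps=1e-6, r=0.5, R2=1.0, Rt2=0.01,
                 max_iter=10**5, seed=None):
    """T_eps for gradient descent on one random least squares instance."""
    rng = np.random.default_rng(seed)
    n = int(d / r)
    A = rng.standard_normal((n, d))
    x_tilde = rng.standard_normal(d) * np.sqrt(R2 / d)  # signal
    eta = rng.standard_normal(n) * np.sqrt(Rt2)         # noise
    b = A @ x_tilde + eta
    H = A.T @ A / n
    step = 1.0 / np.linalg.eigvalsh(H)[-1]              # 1 / lambda_H^+
    x = np.zeros(d)                                     # x_0 = 0
    for k in range(max_iter):
        grad = H @ x - A.T @ b / n                      # gradient of (1/2n)||Ax-b||^2
        if grad @ grad <= eps:
            return k
        x -= step * grad
    return max_iter

# The spread of T_eps across instances shrinks as d grows,
# while its mean stabilizes around the deterministic tau_eps.
for d in (64, 256, 1024):
    T = [halting_time(d, seed=s) for s in range(20)]
    print(d, np.mean(T), np.std(T))
\end{verbatim}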
This leads to our main result that states the almost sure convergence of the halting time to a constant value. We begin by showing that $\tau_{\varepsilon}$ is well-defined.
\begin{lemma}[$\tau_{\varepsilon}$ is well-defined] \label{lem: tau_finite}Under the assumptions of Proposition~\ref{thm: probability_convergence},
the iterates of a convergent algorithm satisfy $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \underset{k \to \infty}{\to} 0$.
\end{lemma}
\begin{proof}
Both $\lambda^2 P_k^2(\lambda; \lambda^{\pm}) \to 0$ and $\lambda P_k^2(\lambda; \lambda^{\pm}) \to 0$ as $k \to \infty$, and these polynomials are uniformly bounded in $k$ for each $\lambda \in [\lambda^-, \lambda^+]$ (see Lemmas~\ref{lem: convergent_algorithm} and \ref{lem: convergent_bounded}). The result follows by the dominated convergence theorem.
\end{proof}
With our candidate for the limiting halting time $\tau_{\varepsilon}$ well-defined, we show that the number of iterations until $\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon$ equals $\tau_{\varepsilon}$ for high-dimensional data. We state a more general version of Theorem~\ref{thm: Halting_time_main}.
\begin{theorem}[Halting time universality] \label{thm: Halting_time}
Provided that $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \neq \varepsilon$ for all $k$, the probability of reaching $\varepsilon$ in a pre-determined number of steps satisfies
\[\lim_{d \to \infty} \Pr(T_{\varepsilon} = \tau_{\varepsilon} ) = 1.\]
If the constant $\varepsilon = \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ for some $k$, then the following holds
\[ \lim_{d \to \infty} \Pr(T_{\varepsilon} \in [ \tau_{\varepsilon}, \tau_{\varepsilon} + M_{\varepsilon}]) = 1, \quad
\text{where $M_{\varepsilon} \stackrel{\text{def}}{=} \inf \{ k-\tau_{\varepsilon} > 0 \, | \, \xi_k < \varepsilon \}$ and $\xi_k \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$.}\]
\end{theorem}
\begin{proof}
To simplify notation, we define $\xi_k \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$. First, we consider the case where $\varepsilon \neq \xi_k$ for all $k$. We are interested in bounding the following probabilities
\begin{equation} \Pr(T_{\varepsilon} \neq \tau_{\varepsilon}) = \Pr(T_{\varepsilon} < \tau_{\varepsilon} ) + \Pr(T_{\varepsilon} > \tau_{\varepsilon}). \label{eq: Halting_time_1} \end{equation}
We bound each of these probabilities independently; first consider $\Pr(T_{\varepsilon} < \tau_{\varepsilon})$ in \eqref{eq: Halting_time_1}. For $\tau_{\varepsilon} = 0$, we note that $\Pr(T_{\varepsilon} < \tau_{\varepsilon}) = 0$ since $T_{\varepsilon} \ge 0$. So we can assume that $\tau_{\varepsilon} > 0$.
On the event $\{T_{\varepsilon} < \tau_{\varepsilon}\}$ we have $T_{\varepsilon} \le \tau_{\varepsilon} -1$, so we obtain
\begin{equation} \label{eq: Halting_time_3} \Pr(T_{\varepsilon} < \tau_{\varepsilon}) = \Pr \Big ( \bigcup_{k=0}^{\tau_{\varepsilon}-1} \{T_{\varepsilon} = k\} \Big ) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(T_{\varepsilon} = k) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon) .\end{equation}
Now we bound the probabilities $\Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon)$. As $\tau_{\varepsilon}$ is the first time $\xi$ falls below $\varepsilon$, we conclude that $ \xi_{\tau_\varepsilon}< \varepsilon < \xi_{\tau_{\varepsilon}-1}, \xi_{\tau_{\varepsilon}-2}, \hdots, \xi_0$, where we used that $\varepsilon \neq \xi_k$ for any $k$. Next we define the constant $0 < \delta \stackrel{\text{def}}{=} \displaystyle \min_{ 0 \le k \le \tau_{\varepsilon}} \, \{ |\varepsilon-\xi_k| \}$ and we observe that $\delta \le |\varepsilon - \xi_k| = \xi_k- \varepsilon$ for all $k < \tau_{\varepsilon}$. Fix a constant $\hat{\varepsilon} > 0$ and an index $k$.
Theorem~\ref{thm: concentration_main} says that by making $d(k)$ sufficiently large
\begin{align*} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \leq \varepsilon) \le \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \xi_{k} - \delta)
\le \frac{\hat{\varepsilon}}{\tau_{\varepsilon}}.
\end{align*}
Here we used that
$\tau_{\varepsilon}$ is finite for every $\varepsilon >0$ (Lemma~\ref{lem: tau_finite}). Set $D \stackrel{\text{def}}{=}{} \max\{d(0), d(1), d(2), \hdots, d(\tau_{\varepsilon}-1)\}$. Then for all $d > D$, we have from \eqref{eq: Halting_time_3} the following
\[ \Pr(T_{\varepsilon} < \tau_{\varepsilon}) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \frac{\hat{\varepsilon}}{\tau_{\varepsilon}} = \hat{\varepsilon}. \]
Lastly, we bound $\Pr(T_{\varepsilon} > \tau_{\varepsilon})$. The idea is similar to the other direction. Let $\delta$ be defined as above.
Therefore, again by Theorem~\ref{thm: concentration_main}, we conclude for sufficiently large $d$
\begin{align*}
\Pr(T_{\varepsilon} > \tau_{\varepsilon}) &\le \Pr( \|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}}) \|^2 > \varepsilon)\le \Pr(\|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}}) \|^2 - \xi_{\tau_{\varepsilon}} > \delta) \to 0.
\end{align*}
Indeed, we used that $ \xi_{\tau_{\varepsilon}} < \varepsilon$ and $\delta \le |\varepsilon-\xi_{\tau_{\varepsilon}}| = \varepsilon - \xi_{\tau_{\varepsilon}}$. This completes the proof when $\varepsilon \neq \xi_k$ for all $k$.
Next, we consider the second case where $\xi_{k} = \varepsilon$ for some $k$. Note that $M_{\varepsilon} < \infty$ for all $\varepsilon$ because $\displaystyle \lim_{k \to \infty} \xi_k = 0$. In this setting, we are interested in bounding
\[ \Pr( T_{\varepsilon} \not \in [\tau_{\varepsilon}, \tau_{\varepsilon} + M_{\varepsilon}]) = \Pr(T_{\varepsilon} < \tau_{\varepsilon}) + \Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}).\]
The arguments will be similar to the previous setting. Replacing the definition of $\delta$ above with $\displaystyle \delta \stackrel{\text{def}}{=} \min_{0 \le k \le \tau_{\varepsilon}-1} \{|\varepsilon - \xi_k| \}$ yields that $\delta > 0$ since $\varepsilon < \xi_{\tau_{\varepsilon}-1}, \xi_{\tau_{\varepsilon}-2}, \hdots, \xi_0$. With this choice of $\delta$, the previous argument holds and we deduce that $\Pr(T_{\varepsilon} < \tau_{\varepsilon}) \to 0$. Next we show that $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \to 0$. As before, we know that $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \le \Pr( \|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}+M_{\varepsilon}}) \|^2 > \varepsilon)$. By definition of $M_{\varepsilon}$, we have that $\varepsilon > \xi_{\tau_{\varepsilon}+M_{\varepsilon}}$. Now define $\delta \stackrel{\text{def}}{=} \varepsilon - \xi_{\tau_{\varepsilon} + M_\varepsilon} > 0$. The previous argument holds with this choice of $\delta$; therefore, $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \to 0$.
\end{proof}
For large models, the number of iterations to reach a nearly optimal point equals the average complexity; loosely, $T_{\varepsilon} = \tau_{\varepsilon}$. The variability in the halting time goes to zero. Since the dependence of $\tau_{\varepsilon}$ on the distribution of the data is limited to the first two moments, almost all instances of high-dimensional data have the same limit. In Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}, we compute the value of $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ for various models.
\subsection{Extension beyond least squares, ridge regression} \label{sec:ridge_regression}
In this section, we extend the results from Theorems~\ref{thm: concentration_main} and \ref{thm: Halting_time_main} to the ridge regression problem. We leave the proofs to the reader as they follow the same techniques as for the least squares problem \eqref{eq:LS_main}. We consider the ridge regression problem of the form
\begin{equation} \label{eq:ridge_regression}
\argmin_{{\boldsymbol x} \in \mathbb{R}^d} \left \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} - {\boldsymbol b}\|^2 + \frac{\gamma}{2} \|{\boldsymbol x}\|^2 \right \}, \quad \text{with ${\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$\,.}
\end{equation}
As in Section~\ref{sec: problem_setting}, we will assume that ${\boldsymbol A} \in \mathbb{R}^{n \times d}$ is a (possibly random) matrix satisfying Assumption~\ref{assumption: spectral_density}, $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in \mathbb{R}^n$ is a noise vector. The constant $\gamma > 0$ is the ridge regression parameter. Unlike the least squares problem, the gradient of \eqref{eq:ridge_regression} does not decompose into a term involving ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$. As such, we alter Assumption~\ref{assumption: Vector} by placing an independence assumption between the initialization vector ${\boldsymbol x}_0$ and the signal $\widetilde{{\boldsymbol x}}$, that is,
\begin{assumption}[Initialization, signal, and noise.]\label{assumption:ridge_vector} The initial vector ${\boldsymbol x}_0 \in \mathbb{R}^d$, the signal $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$, and noise vector ${\boldsymbol \eta} \in \mathbb{R}^n$ are independent of each other and independent of ${\boldsymbol A}$. The vectors satisfy the following conditions:
\begin{enumerate}[leftmargin=*]
\item The entries of ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ are i.i.d. random variables and there exist constants $C, \dot{R}, \widehat{R} > 0$ such that for $i = 1, \hdots, d$
\begin{equation} \begin{gathered} \label{eq:ridge_initial}
{\mathbb E}\,[{\boldsymbol x}_0] = {\mathbb E}\,[\widetilde{{\boldsymbol x}}] = 0, \quad {\mathbb E}\,[(x_0)_i^2] = \tfrac{1}{d} \dot{R}^2, \quad {\mathbb E}\,[\widetilde{x}_i^2] = \tfrac{1}{d} \widehat{R}^2,\\
{\mathbb E}\,[ (x_0)^4_i ] \le \tfrac{1}{d^2} C, \quad \text{and} \quad {\mathbb E}\,[\widetilde{x}_i^4] \le \tfrac{1}{d^2} C.
\end{gathered} \end{equation}
\item The entries of the noise vector are i.i.d. random variables satisfying the following for $i = 1, \hdots, n$ and for some constants $\widetilde{C}, \widetilde{R} > 0$
\begin{equation}
{\mathbb E}\,[{\boldsymbol \eta}] = 0, \quad {\mathbb E}\,[\eta_i^2] = \widetilde{R}^2, \quad \text{and} \quad {\mathbb E}\,[\eta_i^4] \le \widetilde{C}.
\end{equation}
\end{equation}
\end{enumerate}
\end{assumption}
The difference between Assumption~\ref{assumption: Vector} and Assumption~\ref{assumption:ridge_vector} is that \eqref{eq:ridge_initial} guarantees the initial vector ${\boldsymbol x}_0$ and the signal $\widetilde{{\boldsymbol x}}$ are independent. One relates $R^2$ to $\dot{R}^2$ and $\widehat{R}^2$ by $R^2 = \dot{R}^2 + \widehat{R}^2$. First, the gradient of \eqref{eq:ridge_regression} is
\begin{equation} \label{eq:grad_ridge_regression}
\nabla f({\boldsymbol x}) = {\boldsymbol H} ({\boldsymbol x} - \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma {\boldsymbol x} = ({\boldsymbol H} + \gamma {\boldsymbol I}) ({\boldsymbol x}-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma \widetilde{{\boldsymbol x}} = {\boldsymbol M} ({\boldsymbol x}-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma \widetilde{{\boldsymbol x}},
\end{equation}
where the matrix ${\boldsymbol M} \stackrel{\text{def}}{=} {\boldsymbol H} + \gamma {\boldsymbol I} $ and ${\boldsymbol I}$ is the identity matrix.
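As a quick sanity check, the decomposition \eqref{eq:grad_ridge_regression} can be verified numerically on a random instance; the dimensions and parameter values in the Python sketch below are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 8, 5, 0.3
A = rng.standard_normal((n, d))
x_tilde = rng.standard_normal(d)
eta = rng.standard_normal(n)
b = A @ x_tilde + eta
x = rng.standard_normal(d)

H = A.T @ A / n
M = H + gamma * np.eye(d)

# gradient of (1/2n)||Ax - b||^2 + (gamma/2)||x||^2, computed directly
grad_direct = A.T @ (A @ x - b) / n + gamma * x
# the same gradient via M(x - x_tilde) - A^T eta / n + gamma x_tilde
grad_decomp = M @ (x - x_tilde) - A.T @ eta / n + gamma * x_tilde

assert np.allclose(grad_direct, grad_decomp)
\end{verbatim}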
Under Assumptions~\ref{assumption:ridge_vector} and \ref{assumption: spectral_density}, we derive a recurrence expression for the iterates of gradient-based algorithms analogous to that of Proposition~\ref{prop: polynomials_methods}.
\begin{proposition}[Prop.~\ref{prop: polynomials_methods} for ridge regression] Consider a gradient-based method with coefficients that depend continuously on $\lambda^-_{{\boldsymbol M}}$ and $\lambda^+_{{\boldsymbol M}}$. Define the sequence of polynomials $\{P_k, Q_k\}_{k = 0}^{\infty}$ recursively by
\begin{equation} \begin{gathered} \label{eq:recursive_noise_poly_ridge_regression}
P_0({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = {\boldsymbol I} \quad \text{and} \quad P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = {\boldsymbol I} - {\boldsymbol M} Q_{k}({\boldsymbol M}; \lambda^{\pm}_{{\boldsymbol M}})\\
Q_0({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \bm{0} \quad \text{and} \quad Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \sum_{i=0}^{k-1} c_{k-1,i} \big [ {\boldsymbol M} Q_i({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) - {\boldsymbol I} \big ]\,.
\end{gathered} \end{equation}
These polynomials $P_k$ and $Q_k$ are referred to as the \emph{residual} and \emph{iteration} polynomials respectively.
We express the difference between the iterate at step $k$ and $\widetilde{{\boldsymbol x}}$ in terms of these polynomials:
\begin{equation} \label{eq:recursive_noise_poly_1_ridge_regression}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \cdot \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} - \gamma Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}}\,.
\end{equation}
\end{proposition}
The proof of this proposition follows the same argument as in Proposition~\ref{prop: polynomials_methods}, replacing the gradient of the least squares problem \eqref{eq:LS_main} with the gradient of the ridge regression problem \eqref{eq:grad_ridge_regression}. The polynomials $P_k$ and $Q_k$ are exactly the same as in Proposition~\ref{prop: polynomials_methods} but applied to the matrix ${\boldsymbol M}$ instead of ${\boldsymbol H}$ (see Section~\ref{sec:Ex_polynomials} for examples of the polynomials $P_k$ and $Q_k$ for various first-order algorithms). Given the resemblance to the least squares problem, it follows that one can relate the residual polynomial to the squared norm of the gradient.
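For concreteness, consider plain gradient descent with a fixed step size $\alpha$: unrolling ${\boldsymbol x}_{k+1} = {\boldsymbol x}_k - \alpha \nabla f({\boldsymbol x}_k)$ with the gradient \eqref{eq:grad_ridge_regression} gives
\begin{equation*}
P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = ({\boldsymbol I} - \alpha {\boldsymbol M})^k \quad \text{and} \quad Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \alpha \sum_{i=0}^{k-1} ({\boldsymbol I} - \alpha {\boldsymbol M})^i,
\end{equation*}
and a telescoping sum confirms the defining relation $P_k = {\boldsymbol I} - {\boldsymbol M} Q_k$ of \eqref{eq:recursive_noise_poly_ridge_regression}.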
\begin{proposition}[Prop.~\ref{prop:gradient_polynomial} for ridge regression] \label{prop:gradient_polynomial_ridge_regression} Suppose the iterates $\{{\boldsymbol x}_k\}_{k=0}^\infty$ are generated from a gradient-based method. Let $\{P_k\}_{k=0}^\infty$ be the sequence of polynomials defined in \eqref{eq:recursive_noise_poly_ridge_regression}. Then the following identity holds between the iterates and the residual polynomial:
\begin{equation} \begin{gathered} \label{eq:grad_optimality_cond_app_ridge_regression}
\| \nabla f({\boldsymbol x}_k) \|^2 = ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{n} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma^2 \widetilde{{\boldsymbol x}}^T P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}}\\
-2({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + 2 \gamma ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}} - 2 \gamma \widetilde{{\boldsymbol x}}^T P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}.
\end{gathered}
\end{equation}
\end{proposition}
As in the least squares problem, one can replace the $\lambda_{{\boldsymbol M}}^{\pm}$ in the polynomial with $\lambda^{\pm} + \gamma$ for ${\boldsymbol M} = {\boldsymbol H} + \gamma {\boldsymbol I}$. Under Assumptions~\ref{assumption: spectral_density} and \ref{assumption:ridge_vector}, using the same technique as in Proposition~\ref{proposition:conditional} for the least squares problem, we derive the following.
\begin{proposition}[Prop.~\ref{thm: probability_convergence} for ridge regression] \label{thm: probability_convergence_ridge} Suppose the vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ and the matrix ${\boldsymbol A}$ satisfy Assumptions~\ref{assumption:ridge_vector} and \ref{assumption: spectral_density}, respectively.
The following holds
\begin{align} \label{eq:grad_convergence_prob_ridge}
\big | \|\nabla f({\boldsymbol x}_k)\|^2 - \big ( \underbrace{\textcolor{teal}{\dot{R}^2} \textcolor{black}{\tfrac{1}{d} \text{\rm tr}({\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda^{\pm}) )}}_{\text{initialization}} + \underbrace{\textcolor{teal}{\widehat{R}^2} \tfrac{1}{d} \text{\rm tr}({\boldsymbol H}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})) }_{\text{signal}} + \underbrace{\textcolor{purple}{\widetilde{R}^2} \textcolor{black}{\tfrac{1}{n} \text{\rm tr}({\boldsymbol H} P_k^2({\boldsymbol M} ; \lambda^{\pm}))}}_{\text{noise}} \big ) \big | \Prto[d] 0.
\end{align}
\end{proposition}
We remark that we used the independence between ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ to obtain \eqref{eq:grad_convergence_prob_ridge}. This independence leads to two terms in the gradient corresponding to the initialization and the signal. As ${\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})$, ${\boldsymbol H}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})$, and ${\boldsymbol H} P_k^2({\boldsymbol M}; \lambda^{\pm})$ are polynomials in ${\boldsymbol H}$ (the identity ${\boldsymbol I}$ commutes with ${\boldsymbol H}$), Proposition~\ref{proposition: moments} still holds. Therefore, the equivalent of Theorem~\ref{thm: concentration_main} for ridge regression follows (recall Theorem~\ref{thm: concentration_main_main_ridge} in Section~\ref{sec:ridge_regression_main}).
\textbf{Theorem.} \rm{(Concentration of the gradient for ridge regression)}
\textit{Under Assumptions~\ref{assumption:ridge_vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \begin{aligned} \label{eq: something_1_main_ridge} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] &\textcolor{teal}{\overbrace{\dot{R}^2}^{\text{initial.}}} \! \!\! \int { \underbrace{(\lambda + \gamma)^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{teal}{\overbrace{\widehat{R}^2}^{\text{signal}}} \! \!\! \int { \underbrace{\lambda^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } \\
& \quad \quad + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} }. \end{aligned}
\end{equation}}
The equivalent to Theorem~\ref{thm: Halting_time_main} immediately follows by replacing $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ with the right-hand side of \eqref{eq: something_1_main_ridge}.
\section{Derivation of the worst and average-case complexity} \label{sec: avg_derivations}
In this section, we derive an expression for the average-case complexity in the isotropic features model. Here the empirical spectral measure $\mu_{{\boldsymbol H}}$ converges to the Mar\v{c}enko-Pastur measure $\mu_{\mathrm{MP}}$ \eqref{eq:MP}.
The average-case complexity, $\tau_{\varepsilon}$, is controlled by the value of the expected gradient norm in \eqref{eq: something_1}. Hence to analyze the average-case rate, it suffices to derive an expression for this value, $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$.
In light of \eqref{eq: something_1}, we must integrate the residual polynomials in Table~\ref{table:polynomials} against the Mar\v{c}enko-Pastur measure. By combining Theorem~\ref{thm: concentration_main} with the integrals derived in Appendix~\ref{apx:integral_computations}, we obtain the average-case complexities. Apart from Nesterov's accelerated method (convex), an \textit{exact} formula for the average-case rates is obtained. In the convex setting for Nesterov, the integral is difficult to compute directly, so instead we use the asymptotic polynomial in \eqref{eq:Bessel_asymptotic_main}. Hence for Nesterov's accelerated method (convex), we only get an asymptotic average-case rate for sufficiently large $k$ (see Appendix~\ref{apx:integral_computations}). Tables~\ref{tab:comparison_worst_avg_cvx} and~\ref{tab:comparison_worst_avg_str_cvx} summarize the asymptotic rates where both iteration and problem size are large.
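To illustrate one such computation, the Python sketch below numerically integrates the gradient-descent residual polynomial against the Mar\v{c}enko-Pastur density; the residual polynomial $(1-\lambda/\lambda^+)^k$ (step size $1/\lambda^+$) and the parameter values are assumptions made for this example only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

sigma2, r = 1.0, 0.5                      # illustrative variance and ratio d/n
lam_m = sigma2 * (1 - np.sqrt(r))**2      # lower MP edge
lam_p = sigma2 * (1 + np.sqrt(r))**2      # upper MP edge

def mp_density(lam):
    # Marchenko-Pastur density on [lam_m, lam_p]; no atom at 0 since r < 1
    return np.sqrt((lam_p - lam) * (lam - lam_m)) / (2 * np.pi * sigma2 * r * lam)

def P2(lam, k):
    # squared residual polynomial of gradient descent with step size 1/lam_p
    return (1 - lam / lam_p)**(2 * k)

R2, Rt2 = 1.0, 0.01
for k in (10, 100, 1000):
    init, _  = quad(lambda l: l**2 * P2(l, k) * mp_density(l), lam_m, lam_p)
    noise, _ = quad(lambda l: l * P2(l, k) * mp_density(l), lam_m, lam_p)
    print(k, R2 * init + Rt2 * r * noise)  # limiting value of ||grad f(x_k)||^2
\end{verbatim}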
We now turn to the worst-case guarantees. We discuss below how to make the worst-case complexity comparable.
\subsection{Traditional worst-case complexity}
Recall the prior discussion on the dimension-dependent constants in the typical worst-case complexity bounds. We now make this precise.
\paragraph{Worst-case complexity: strongly convex and noiseless non-strongly convex regimes.} Consider GD and note that the other methods admit a similar analysis. Recall the standard analytical worst-case bound for the strongly convex regime and the exact worst-case bound for the non-strongly convex setting \citep{taylor2017smooth}, respectively,
\begin{equation*} \begin{gathered}
\|\nabla f({\boldsymbol x}_k)\|^2 \le (\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \left ( 1- \tfrac{\lambda_{{\boldsymbol H}}^-}{\lambda_{{\boldsymbol H}}^+} \right )^{2k} \quad \text{(strongly convex)}\\
\text{and} \quad \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \quad \text{(convex)}
\end{gathered}\end{equation*}
where ${\boldsymbol x}^{\star}$ is the minimal norm solution of \eqref{eq:LS}. For sufficiently large $d$, the largest and smallest eigenvalues of ${\boldsymbol H}$, $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, converge in probability to $\sigma^2 (1+\sqrt{r})^2$ and $\sigma^2(1-\sqrt{r})^2$ respectively. These are the top and bottom edges of the Mar\v{c}enko-Pastur distribution. We also note that in the noiseless setting $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 = \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2$. Hence, by Assumption~\ref{assumption: Vector} with $\widetilde{R}^2 = 0$, on average $\|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2 = R^2$. Moreover, when the matrix ${\boldsymbol H}$ is nonsingular, as in the strongly convex setting, the optimum $\|{\boldsymbol x}^{\star}\|^2$ does not grow as the dimension increases despite the noise. As a sequence of random variables in $d$, $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is tight. From these observations we derive the worst-case complexities.
\paragraph{Worst-case complexity: noisy non-strongly convex regime.} While discussing the worst-case complexity in Section~\ref{sec: average_case}, we noted a discrepancy in the noisy, non-strongly convex regime between the average rate and the exact worst complexity. For instance, the exact worst complexity for gradient descent (GD) \citep{taylor2017smooth} is
\begin{equation} \label{eq:worst_case_complexity}
\|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \quad \text{where ${\boldsymbol x}^{\star}$ is the optimum of \eqref{eq:LS}.}
\end{equation}
For sufficiently large $d$, the largest eigenvalue $\lambda^+_{{\boldsymbol H}}$ of ${\boldsymbol H}$ converges a.s. to $\lambda^+ = 4 \sigma^2$, the top edge of the support of $\mu_{\mathrm{MP}}$ (in this regime $d = n$, so $r = 1$). Hence, to derive worst-case complexity bounds, it suffices to understand the behavior of the distance to the optimum.
The vectors ${\boldsymbol x}^\star$ and $\widetilde{{\boldsymbol x}}$ are different when noise is added to the signal. For simplicity, we consider the setting where the matrix ${\boldsymbol A}$ is invertible. Intuitively, the optimum ${\boldsymbol x}^{\star} \approx {\boldsymbol A}^{-1} {\boldsymbol b} = \widetilde{{\boldsymbol x}} + {\boldsymbol A}^{-1} {\boldsymbol \eta}$ where $\widetilde{{\boldsymbol x}}$ is the underlying random signal. Because the signal $\widetilde{{\boldsymbol x}}$ is scaled, Assumption~\ref{assumption: Vector} says ${\mathbb E}\,[ \| {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2] = R^2$. Therefore, the distance to the optimum ${\boldsymbol x}_0-{\boldsymbol x}^{\star}$ is controlled by the noise, which in turn is bounded below using the reciprocal of the minimum eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$, namely \[\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2 \approx \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2 + \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 \ge \frac{|{\boldsymbol u}_{\min}^T {\boldsymbol \eta}|^2}{\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})}, \]
where $(\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}), {\boldsymbol u}_{\min})$ is an eigenvalue-eigenvector pair corresponding to the minimum eigenvalue of ${\boldsymbol A}^T{\boldsymbol A}$. Unfortunately, the smallest eigenvalue is not well-behaved: the distribution of $\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1}$ is heavy-tailed, and there does not exist any scaling under which its expectation is finite. Instead we show that the quantity $\frac{|{\boldsymbol u}_{\min}^T {\boldsymbol \eta}|^2}{\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})}$ grows at least as fast as $\widetilde{R}^2 d$. To do so, we appeal to a theorem in \citet{tao2010random}; that is, we assume that all moments of the entries of the matrix ${\boldsymbol A}$ are bounded, namely,
\begin{equation} \label{eq: bounded_moment}
\max_{i,j} \mathbb{E}[|A_{ij}|^k] < \infty \quad \text{for all $k \le 10^4$.}
\end{equation}
This bounded moment condition is a mild assumption on the entries; for instance, it includes all sub-exponential random variables. It should be noted that under the simple isotropic features model it is clear that $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is \textit{dimension-dependent}, but the exact dependence is more complicated. Under condition \eqref{eq: bounded_moment}, we can prove a bound, which gives the dependence on the problem size, for the growth rate of the distance to the optimum.
\begin{lemma}[Growth of $\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2$] \label{lem:growth_dist_optimum} Suppose Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density} hold such that the noise vector ${\boldsymbol \eta} \in {\mathbb R}^d$ and the entries of the data matrix ${\boldsymbol A} \in \mathbb{R}^{d \times d}$ satisfy bounded moments \eqref{eq: bounded_moment}. Let ${\boldsymbol x}^\star$ be the minimal norm solution to \eqref{eq:LS}. For any $\delta > 0$ there exists a constant $M_{\delta} > 0$ such that
\begin{equation} \liminf_{n \to \infty} \Pr \big ( \|{\boldsymbol x}_0- {\boldsymbol x}^{\star} \|^2 \ge d \cdot \widetilde{R}^2 M_{\delta} \big ) \ge 1-\delta. \label{eq: growth_norm} \end{equation}
\end{lemma}
\begin{proof} We begin by defining the constant $M_{\delta} > 0$. The $n \times n$ matrix ${\boldsymbol A}$ is invertible a.s. so without loss of generality the smallest eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$ is non-zero. Here the dimensions are equal, $d = n$.
From \cite[Corollary 3.1]{edelman1988eigenvalues} and \cite[Theorem 1.3]{tao2010random}, we know that
$n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ converges in distribution, where $\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ denotes the smallest eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$. It is immediately clear that $\log(n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}))$ also converges in distribution. By Theorem~3.2.7 in \cite{durrett2010probability}, the sequence of distribution functions $\{F_n(x) = \Pr( \log(n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})) \le x) \}$ is tight, that is, there exists a $C_{\delta} > 0$ such that
\[ \limsup_{n \to \infty} \Pr \big (n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}) \not \in (e^{-C_{\delta}}, e^{C_{\delta}}] \big ) = \limsup_{n \to \infty} 1 - F_n(C_{\delta}) + F_n(-C_{\delta}) \le \tfrac{\delta}{2}. \]
In particular, we know that \begin{equation} \label{eq: avg_case_1} \limsup_{n \to \infty} \Pr \big ( n^{-1} ( \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}))^{-1} < e^{-C_{\delta}} \big ) \le \tfrac{\delta}{2}.
\end{equation}
Another way to observe \eqref{eq: avg_case_1} is that $n\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ has a density supported on $[0, \infty)$ \citep{edelman1988eigenvalues}. For any chi-squared random variable $X$ with one degree of freedom, there exists a constant $\widehat{C}_{\delta} > 0$ such that \begin{equation} \label{eq: avg_case_3}
\Pr(X \le \widehat{C}_{\delta}) \le \tfrac{\delta}{2}.
\end{equation}
Let $M_{\delta} \stackrel{\text{def}}{=} \tfrac{1}{4} \min \{ e^{-2C_{\delta}}, \widehat{C}_{\delta}^2 \}$. With $M_{\delta}$ defined, we are now ready to prove \eqref{eq: growth_norm}. The matrix ${\boldsymbol A}$ is a.s. invertible so gradient descent converges to ${\boldsymbol x}^\star = {\boldsymbol A}^{-1} {\boldsymbol b}$. Next we observe that \eqref{eq: growth_norm} is equivalent to proving
\begin{equation} \label{eq: avg_case_2} \limsup_{n \to \infty} \Pr \big ( \| {\boldsymbol x}_0 - {\boldsymbol A}^{-1} {\boldsymbol b} \| < \widetilde{R} \sqrt{n M_{\delta}} \big ) \le \delta.
\end{equation}
Plugging in the value of ${\boldsymbol b}$ and using the reverse triangle inequality, we obtain
\[ \Pr \big ( \|{\boldsymbol x}_0- {\boldsymbol A}^{-1} {\boldsymbol b}\| < \widetilde{R}\sqrt{n M_{\delta}} \big ) \le \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < \widetilde{R}\sqrt{n M_{\delta}} + \|{\boldsymbol x}_0-\tilde{{\boldsymbol x}}\| \big ).\]
Using Markov's inequality, we can obtain a bound on $\|{\boldsymbol x}_0 - \widetilde{{\boldsymbol x}}\|$:
\[ \Pr \big ( \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| \ge \widetilde{R} \sqrt{nM_{\delta}} \big ) \le \frac{R^2}{n M_{\delta} \widetilde{R}^2}.\]
Consider now the event given by $\mathcal{S} \stackrel{\text{def}}{=} \{ \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| < \widetilde{R} \sqrt{nM_{\delta}}\, \} $. The total law of probability yields
\begin{equation} \begin{aligned} \label{eq: avg_case_4} \Pr &\big ( \|{\boldsymbol x}_0- {\boldsymbol A}^{-1} {\boldsymbol b}\| < \widetilde{R} \sqrt{n M_{\delta}} \big )\\
&\le \Pr \big (\mathcal{S}^c \big ) + \Pr \big ( \mathcal{S} \cap \{ \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < \widetilde{R} \sqrt{nM_{\delta}} + \|{\boldsymbol x}_0-\tilde{{\boldsymbol x}}\| \} \big )\\
&\le \frac{R^2}{n M_{\delta} \widetilde{R}^2} + \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < 2 \widetilde{R} \sqrt{n M_{\delta}} \big ) = \frac{R^2}{n M_{\delta} \widetilde{R}^2} + \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n\widetilde{R}^2 M_{\delta} \big ).
\end{aligned} \end{equation}
A simple calculation gives that $n^{-1} \widetilde{R}^{-2} \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 \ge n^{-1} \big ( \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}) \big )^{-1} \widetilde{R}^{-2} ( {\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2$ where the unit vector ${\boldsymbol u}_{\min}$ is the eigenvector associated with the smallest eigenvalue $\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})$. From this, we deduce the following inequalities
\begin{align*}
\Pr \big ( &\|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n \widetilde{R}^2 M_{\delta} \big ) \le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} \cdot \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \min \{ e^{-2C_{\delta}}, \widehat{C}^2_{\delta} \} \big )\\
&\le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} < \min \{ e^{-C_{\delta}}, \widehat{C}_{\delta} \} \big ) + \Pr \big ( \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \min \{ e^{-C_{\delta}}, \widehat{C}_{\delta} \}\big )\\
&\le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} < e^{-C_{\delta}} \big ) + \Pr \big ( \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \widehat{C}_{\delta} \big ).
\end{align*}
Since ${\boldsymbol \eta}$ is Gaussian and ${\boldsymbol u}_{\min}$ is a unit vector, we know that $\widetilde{R}^{-2} ({\boldsymbol u}^T_{\min} {\boldsymbol \eta})^2 \sim \chi^2_1$, a chi-squared distribution with one degree of freedom, so \eqref{eq: avg_case_3} holds; we already showed that $n^{-1} (\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A}))^{-1}$ satisfies \eqref{eq: avg_case_1}. By taking $ \displaystyle \limsup$, we have
\[\limsup_{n \to \infty} \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n \widetilde{R}^2 M_{\delta} \big ) \le \delta.\]
The inequality in \eqref{eq: avg_case_2} immediately follows after taking the limsup of \eqref{eq: avg_case_4}.
\end{proof}
Combining this lemma with the equation \eqref{eq:worst_case_complexity}, we get with high probability that
\[ \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \approx \frac{16 \sigma^2 \widetilde{R}^2 d}{(k+1)^2}.\]
By setting the right-hand side equal to $\varepsilon$, we get the worst-case complexity result.
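Explicitly, solving $\tfrac{16 \sigma^2 \widetilde{R}^2 d}{(k+1)^2} = \varepsilon$ gives $k + 1 = 4 \sigma \widetilde{R} \sqrt{d / \varepsilon}$, so the worst-case number of iterations grows with the square root of the problem size $d$, in contrast with the dimension-free average-case rate in Table~\ref{tab:comparison_worst_avg_cvx}.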
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale = 0.4]{figures/gd-lr.pdf}
\includegraphics[scale = 0.4]{figures/nesterov-lr.pdf}
\includegraphics[scale = 0.4]{figures/sgd-ls.pdf}
\includegraphics[scale = 0.4]{figures/sgd-lr.pdf}
\end{center}
\caption{{\bfseries Halting time universality beyond least squares.} We compute the halting time on algorithms and models not covered by our theory and note that the convergence to a deterministic and universal halting time is also empirically observed in these settings.
For different model sizes $d$ ($x$-axis) we sample the vectors $\widetilde{{\boldsymbol x}}$, ${\boldsymbol x}_0$ and the matrix ${\boldsymbol A}$ ($\widetilde{R}^2 = 0.01$ and $r = 0.5$) and report the halting time ($y$-axis) and its standard deviation (shaded area) for GD and Nesterov (convex) on logistic regression and for SGD on both least squares and logistic regression.
} \label{fig:halt_time_concentrates}\vspace{-1em}
\end{figure*}
\subsection{Adversarial Model}
Next we recall the adversarial model. Here we assume a noisy generative model for ${\boldsymbol b}$ (Assumption~\ref{assumption: Vector}). Then our adversary chooses the matrix ${\boldsymbol A}$ without knowledge of ${\boldsymbol b}$ in such a way that \textit{maximizes the norm of the gradient} subject to the constraint that the convex hull of the eigenvalues of ${\boldsymbol H} = \tfrac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ equals $[\lambda^{-},\lambda^+]$. For comparison to the average-case analysis with isotropic features, we would choose $\lambda^{\pm}$ to be the endpoints of the Mar\v{c}enko-Pastur law.
In light of Proposition~\ref{proposition:conditional}, the adversarial model seeks to solve the constrained optimization problem
\begin{equation} \begin{gathered} \label{eq:adversary_H}
\max_{{\boldsymbol H}} \Big \{ \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ] = \tfrac{R^2}{d} \text{tr}({\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})) + \tfrac{\widetilde{R}^2}{n} \text{tr} ( {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda^{\pm}_{{\boldsymbol H}})) \Big \} \\ \text{subject to} \quad \lambda_{{\boldsymbol H}}^+ = \lambda^+ \, \text{and} \, \lambda_{{\boldsymbol H}}^- = \lambda^-,
\end{gathered} \end{equation}
where the largest (smallest) eigenvalue of ${\boldsymbol H}$ is restricted to the upper (lower) edge of Mar\v{c}enko-Pastur measure. The optimal ${\boldsymbol H}$ of \eqref{eq:adversary_H}, ${\boldsymbol H}_{\max}$, has all but two of its eigenvalues at
\begin{equation} \lambda^*_k \stackrel{\text{def}}{=} \argmax_{\lambda \in [\lambda^-, \lambda^+]} \Big \{ R^2 \lambda^2P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 \lambda P_k^2(\lambda; \lambda^{\pm}) \Big \} \, . \end{equation}
The other two eigenvalues must live at $\lambda^+$ and $\lambda^-$ in order to satisfy the constraints. The empirical spectral measure for this ${\boldsymbol H}_{\max}$ is exactly
\[ \mu_{{\boldsymbol H}_{\max}} = \frac{1}{d} \sum_{i=1}^d \delta_{\lambda_i} = \frac{1}{d} \cdot \delta_{\lambda^+} + \frac{1}{d} \cdot \delta_{\lambda^-} + \Big (1-\frac{2}{d} \Big ) \cdot \delta_{\lambda^*_k}. \]
Since this empirical spectral measure converges weakly to $\delta_{\lambda^*_k}$, these ${\boldsymbol H}_{\max}$ and the spectral measure $\mu_{{\boldsymbol H}_{\max}}$ satisfy the conditions of Assumption~\ref{assumption: spectral_density}. Hence, Theorem~\ref{thm: concentration_main} holds and the maximum expected squared norm of the gradient as the model size goes to infinity equals
\begin{equation} \begin{aligned} \label{eq: adversary_worst_case}
\lim_{d \to \infty} \max_{{\boldsymbol H}} \, \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ] &= \int \big [ R^2 \lambda^2 P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 r \lambda P_k^2(\lambda; \lambda^{\pm})\big ] \, \mathop{}\!\mathrm{d}\delta_{\lambda^*_k}\\
&= \max_{ \lambda \in [\lambda^-, \lambda^+] } R^2 \lambda^2 P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 r \lambda P_k^2(\lambda; \lambda^{\pm})\,.
\end{aligned}
\end{equation}
We call the above expression the \textit{adversarial average-case complexity}. Table~\ref{tab:comparison_worst_avg_cvx} shows these convergence guarantees. We defer the derivations to Appendix~\ref{apx: adversarial_model}.
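Since the maximization above is one-dimensional, $\lambda_k^*$ and the adversarial value are easy to evaluate numerically; the Python sketch below uses a grid search and, as before, assumes the gradient-descent residual polynomial $(1-\lambda/\lambda^+)^k$ together with illustrative parameter values.
\begin{verbatim}
import numpy as np

sigma2, r = 1.0, 0.5                      # illustrative variance and ratio d/n
lam_m = sigma2 * (1 - np.sqrt(r))**2
lam_p = sigma2 * (1 + np.sqrt(r))**2
R2, Rt2 = 1.0, 0.01

def objective(lam, k):
    P2 = (1 - lam / lam_p)**(2 * k)       # squared GD residual polynomial
    return R2 * lam**2 * P2 + Rt2 * r * lam * P2

grid = np.linspace(lam_m, lam_p, 100001)
for k in (10, 100, 1000):
    vals = objective(grid, k)
    i = int(np.argmax(vals))
    print(k, grid[i], vals[i])            # lambda_k^* and the adversarial value
\end{verbatim}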
\begin{remark} In the strongly convex setting, we omitted the adversarial average-case guarantees for brevity. For all the algorithms, the value of $\lambda_k^*$ occurs near or at the minimum eigenvalue $\lambda^-$. As such there is (almost) no distinction between the traditional worst-case guarantees and the adversarial guarantees.
\end{remark}
\section{Numerical Simulations} \label{sec:numerical_simulations}
To illustrate our theoretical results we report simulations using gradient descent (GD) and Nesterov's accelerated method (convex) \citep{nesterov2004introductory,Beck2009Fast} on the least squares problem under the isotropic features model. We further investigate the halting time in logistic regression as well as least squares with mini-batch stochastic gradient descent (SGD). See Appendix~\ref{apx:exp_details} for details.
\paragraph{Setup.} The vectors ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ are sampled i.i.d. from the Gaussian $N({\boldsymbol{0}}, \tfrac{1}{d}{\boldsymbol I})$ whereas the entries of ${\boldsymbol A}$ are sampled either from a standardized Gaussian, a Bernoulli distribution, or a Student's \mbox{$t$-dis}tribution with 5 degrees of freedom, normalized so that they all have the same mean and variance. We train the following models:
\begin{itemize}[leftmargin=*]
\item \textbf{Least squares.} The least squares problem minimizes the objective function $f({\boldsymbol x}) = \tfrac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} -{\boldsymbol b} \|^2$. The targets, ${\boldsymbol b} = {\boldsymbol A}\widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$, are generated by adding a noise vector ${\boldsymbol \eta}$ to our signal, ${\boldsymbol A} \widetilde{{\boldsymbol x}}$. The entries of ${\boldsymbol \eta}$ are sampled from a normal, $N(0, \widetilde{R}^2)$, for different values of $\widetilde{R}^2$.
\item \textbf{Logistic regression.} For the logistic regression model we generate targets in the domain $(0, 1)$ using ${\boldsymbol b} = \sigma\left( {\boldsymbol A}\widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\right)$ where $\sigma$ is the logistic function. The output of our model is ${\boldsymbol y} = \sigma\left({\boldsymbol A}{\boldsymbol x}\right)$, and the objective function is the standard cross-entropy loss (a minimal simulation sketch follows this list):
\[f({\boldsymbol x}) = -\frac{1}{n}\sum_{i=1}^n \big [ b_i \log(y_i) + (1 - b_i) \log(1 - y_i) \big ].\]
\end{itemize}
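The following Python sketch mirrors the logistic regression setup above; the helper names, convergence threshold, and step size rule are illustrative choices rather than the exact experiment code.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_halting_time(d, r=0.5, Rt2=0.01, eps=1e-6,
                          max_iter=10**6, seed=None):
    rng = np.random.default_rng(seed)
    n = int(d / r)
    A = rng.standard_normal((n, d))
    x_tilde = rng.standard_normal(d) / np.sqrt(d)
    eta = rng.standard_normal(n) * np.sqrt(Rt2)
    b = sigmoid(A @ x_tilde + eta)             # targets in (0, 1)
    H = A.T @ A / n
    step = 4.0 / np.linalg.eigvalsh(H)[-1]     # 4 / lambda_H^+ as in the text
    x = np.zeros(d)
    for k in range(max_iter):
        y = sigmoid(A @ x)
        grad = A.T @ (y - b) / n               # gradient of the cross-entropy loss
        if grad @ grad <= eps:
            return k
        x -= step * grad
    return max_iter
\end{verbatim}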
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale =0.4]{figures/ratio-gd.pdf}
\includegraphics[scale = 0.4]{figures/ratio-sgd.pdf}
\end{center}
\caption{\textbf{Effect of the ratio $r = d/n$ on the halting time for various levels of noise $\widetilde{R}^2$.} The left plot shows the average halting time of gradient descent as a function of the ratio parameter $r$. As predicted by the theory, the halting time increases as ${\boldsymbol A}$ approaches a square matrix ($r \to 1$), and the difference between the linear rates ($r \neq 1$) and the sublinear rates ($r = 1)$ grows as the noise level increases. A total of 104,960 models were trained, keeping fixed the number of entries in the matrix $dn = 2^{24}$. In the right plot we show the same curve but for SGD instead, with a batch-size of $\frac{n}{8}$. We plot the curves for all values $r \neq 1$ with the value for $r = 1$ as a single point due to its large value.}
\label{fig:r}
\end{figure*}
\paragraph{Parameter settings.} In all simulations, the halting criterion is the number of steps until the squared gradient norm falls below $\varepsilon$, that is, $\|\nabla f({\boldsymbol x})\|^2 < \varepsilon$, where $\varepsilon$ is chosen to be $10^{-6}$ for GD and Nesterov and $10^{-4}$ for SGD. The step size for GD and Nesterov's accelerated method is fixed to be $1/L$ where $L$ is the Lipschitz constant of the gradient. For least squares, $L=\lambda_{{\boldsymbol H}}^+$. We approximate $\lambda_{{\boldsymbol H}}^+$ by performing 64 steps of the power iteration method on the matrix ${\boldsymbol H}$, initialized with a constant vector of norm 1. For logistic regression, we set the step size to be $4/\lambda_{{\boldsymbol H}}^+$.
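A sketch of the power-iteration estimate described above (64 steps, constant unit-norm initialization) might look as follows; this is a standard routine written for illustration, not the exact experiment code.
\begin{verbatim}
import numpy as np

def power_iteration_lambda_max(H, steps=64):
    """Estimate the largest eigenvalue of a symmetric PSD matrix H."""
    v = np.ones(H.shape[0])
    v /= np.linalg.norm(v)        # constant initial vector of norm 1
    for _ in range(steps):
        w = H @ v
        v = w / np.linalg.norm(w)
    return float(v @ (H @ v))     # Rayleigh quotient at the final iterate
\end{verbatim}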
In SGD, we sample rows from the matrix ${\boldsymbol A}$. The mini-batch size is a fixed fraction of the data set size, $\frac{n}{16}$, so that the comparison of halting times across model sizes is consistent. When the models are over-parametrized ($n < d$), a strong growth condition~\citep{schmidt2013fast} holds. This means a scaling of the GD step size can be used to ensure convergence. In the under-parametrized setting, SGD does not converge to the optimum. In this case we chose a step size such that the expected squared gradient norm at the stationary point equals the halting criterion. See Appendix~\ref{sec:step_sizes} for derivations.
\paragraph{Results and conclusions.} Figure~\ref{fig:gd-ls} confirms our theoretical results: variability in the halting time decreases and the halting time converges to a deterministic quantity independent of the distribution of the data. Experimentally, the standard deviation decreased at a rate of $d^{-1/2}$, consistent with results in random matrix theory. For medium sized problems ($d = 2^5$), the heavy-tailed Student's t distribution occasionally produces ill-conditioned matrices resulting in large halting times. These ill-conditioned matrices disappear as the model size grows in large part because the maximum eigenvalue becomes stable.
More interestingly, our results extend to non-quadratic functions, such as logistic regression, as well as SGD (see Figure~\ref{fig:halt_time_concentrates}). Surprisingly, we see different behaviors between logistic and least square models for smaller matrices when using SGD. Moreover, we note that the large halting times seen in the Student's t distribution for GD on medium sized problems disappear when we instead run SGD.
Secondly, Figure~\ref{fig:r} evaluates the halting time's dependency on the ratio $r$. As predicted by the theory, the halting time takes its maximum value (\textit{i.e.}, the algorithm is slowest) precisely when $r = 1$. For SGD, different step sizes are used for the over-parametrized and under-parametrized regimes, resulting in an asymmetric curve and a clear discontinuity at $r = 1$. We leave the study of these phenomena as future work.
\section*{Acknowledgements} The authors would like to thank our colleagues Nicolas Le Roux, Ross Goroshin, Zaid Harchaoui, Damien Scieur, and Dmitriy Drusvyatskiy for their feedback on this manuscript, and Henrik Ueberschaer for providing useful random matrix theory references.
\newpage
\section{Introduction}
\begin{wrapfigure}[18]{r}{0.50\textwidth}
\centering
\vspace{-2cm}
\includegraphics[width = \linewidth]{figures/gd-ls.pdf} \vspace{-0.75cm}
\caption{As the model grows ($x$-axis), the standard deviation (shaded region) in the halting time of gradient descent on random least squares vanishes and the halting time becomes {\bfseries predictable}. Note also a \textbf{universality} phenomenon, that is, the halting time limit is the same for problems generated from different distributions. (See Sec.~\ref{sec:numerical_simulations} for a description of simulations.)}
\label{fig:gd-ls}
\end{wrapfigure}
Traditional worst-case analysis of optimization algorithms provides complexity bounds for any input, no matter how unlikely \citep{nemirovski1995information, nesterov2004introductory}. It gives convergence guarantees, but the bounds are not always representative of the typical runtime of an algorithm.
In contrast, average-case analysis gives sharper runtime estimates when some or all of its inputs are random. This is often paired with concentration bounds that quantify the spread of those estimates.
In this way, it is more representative of the typical behavior.
Yet, average-case analysis is rarely used in optimization because the complexity of algorithms is assumed to depend on the specific probability distribution of the inputs. Surprisingly, simulations reveal this is not the case for large-scale problems (see Figure \ref{fig:gd-ls}).
We show that almost all instances of high-dimensional data are indistinguishable to first-order algorithms. Particularly, the \emph{halting time}, i.e.\ the number of iterations to reach a given accuracy, for any first-order method converges to a deterministic value which is independent of the input distribution (see Figure~\ref{fig:gd-ls}). Since the halting time is deterministic, the empirical complexity coincides almost surely with the average-case rates.
\renewcommand{\arraystretch}{2}
\ctable[notespar,
caption = {\textbf{Comparison of convergence guarantees for non-strongly convex objectives} in terms of asymptotic behavior of $\|\nabla f({\boldsymbol x}_k)\|^2$ as problem size and iteration are large in the isotropic features model. The average-case guarantees are strictly faster than the traditional worst-case and adversarial rates. Furthermore, the traditional worst-case complexity bounds depend on the distance to the optimum which under our constant signal-to-noise model, grows as the problem size, or dimension, $d$, increases. The `without noise' setting refers to the case when the targets ${\boldsymbol b}$ equal ${\boldsymbol A} \widetilde{{\boldsymbol x}}$ with $\widetilde{{\boldsymbol x}}$ the signal and the `noisy' setting when the targets ${\boldsymbol b}$ follow a generative model but are corrupted by noise, that is, ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$, where ${\boldsymbol \eta}$ is a noise vector. The rates are stated in terms of an absolute constant $C$, the amount of signal $R$ and noise $\widetilde{R}$, the ratio of number of features to samples $d/n \to r \in (0,\infty)$, and the maximum $\lambda^+$ and minimum $\lambda^-$ eigenvalues. Denote $\|J_1^2(x)\|_{\infty}$ the maximum value of the squared Bessel function of the first kind $(J_1(x))$ over $[0, \infty)$. See Section~\ref{sec: average_case} and \ref{sec: avg_derivations} for derivations and definitions of terms such as non-strongly convex.},
captionskip=2ex,
label={tab:comparison_worst_avg_cvx},
pos =ht!
]{clll}{\tnote[1]{In the noisy setting, we lower bounded $\|{\boldsymbol x}_0-{\boldsymbol x}^*\|^2$ by $d$ (see Lemma~\ref{lem:growth_dist_optimum}) to the worst-case complexity bound provided in \citet[Section 4.1.3]{taylor2017smooth}. } \tnote[2]{\cite{nesterov2004introductory,Beck2009Fast}}
\tnote[3]{\cite{nesterov2012how}} \tnote[4]{Adversarial model maximizes the norm of the gradient subject to a fixed condition number (see Section~\ref{sec: average_case}).} \tnote[5]{When noise is added, the convergence rates are dominated by the term with $\widetilde{R}$ in \eqref{eq: something_1_main}.}}{
\toprule
\textbf{Method} & & \begin{minipage}{0.3\textwidth} \begin{center} \textbf{Non-strongly cvx\\
w/o noise} \end{center} \end{minipage} & \begin{minipage}{0.3\textwidth} \begin{center} \textbf{Non-strongly cvx\\ w/ noise}\tmark[5] \end{center} \end{minipage}\\
\midrule
\multirow{3}{*}{\begin{minipage}{0.1\textwidth} \begin{center} Gradient descent (GD) \end{center} \end{minipage}} & Worst\tmark[1] & $\textcolor{teal}{\mfrac{1}{(k+1)^2}} \cdot R^2 (\lambda^+)^2$ & $\textcolor{teal}{\mfrac{\textcolor{purple}{d}}{(k+1)^2}} \cdot \widetilde{R}^2 (\lambda^+)^2 C$ \\
\cmidrule(r){2-4}
& Adversarial\tmark[4] & $\textcolor{teal}{\mfrac{1}{(k+1)^2}} \cdot \mfrac{R^2 (\lambda^+)^2}{e^{2}} $ &
$\textcolor{teal}{\mfrac{1}{k}} \cdot \mfrac{\widetilde{R}^2 \lambda^+}{2}$\\
\cmidrule(r){2-4}
& Average & $\textcolor{teal}{\mfrac{1}{k^{5/2}}} \cdot \mfrac{R^2 (\lambda^+)^2 \Gamma(5/2)}{2^{3/2} \pi}$ &
$\textcolor{teal}{\mfrac{1}{k^{3/2}}} \cdot \mfrac{\widetilde{R}^2 \lambda^+ \Gamma (3/2 )}{2^{1/2} \pi}$\\
\midrule
\multirow{3}{*}{\begin{minipage}{0.16\textwidth} \begin{center} Nesterov\\ accelerated method \tmark[2] \end{center} \end{minipage}}
& Worst \tmark[3] & $\textcolor{teal}{\mfrac{1}{k(k+2)^2}} \cdot 8 R^2 (\lambda^+)^2$ & $\textcolor{teal}{\mfrac{\textcolor{purple}{d}}{k(k+2)^2}} \cdot 8\widetilde{R}^2 (\lambda^+)^2 C$ \\
\cmidrule(r){2-4}
& Adversarial & $\textcolor{teal}{\mfrac{1}{k^{7/2}}} \cdot \mfrac{8e^{-1/2}}{\sqrt{2} \pi} R^2 (\lambda^+)^2$ &
$\textcolor{teal}{\mfrac{1}{k^{2}}} \cdot \|J_1^2(x)\|_{\infty} \widetilde{R}^2 \lambda^+ $ \\
\cmidrule(r){2-4}
& Average & $\textcolor{teal}{\mfrac{1}{k^4}} \cdot \mfrac{8 R^2 (\lambda^+)^2}{\pi^2}$ &
$\textcolor{teal}{\mfrac{\log(k)}{k^3}} \cdot \mfrac{4\widetilde{R}^2 \lambda^+}{\pi^2 }$\\
\bottomrule}
\renewcommand{\arraystretch}{2.5}
\ctable[notespar, caption = {\textbf{Comparison of convergence guarantees for strongly convex objectives} in terms of asymptotic behavior of $\|\nabla f({\boldsymbol x}_k)\|^2$ as problem size and iteration are large in the isotropic features model. The average-case rate matches the worst-case asymptotic guarantees multiplied by an additional \textit{polynomial correction term} (\textcolor{teal}{green}). This polynomial term has little effect on the complexity compared to the linear rate. However, as the matrix ${\boldsymbol H} = \tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ becomes ill-conditioned $(r\to1)$, the polynomial correction starts to dominate the average-case complexity. Indeed, this shows that the support of the spectrum does not fully determine the rate. \textit{Many} eigenvalues contribute meaningfully to the average rate.
See Section~\ref{sec: avg_derivations} for derivations and Table~\ref{tab:comparison_worst_avg_cvx} for definition of terms in the rates.},
label= {tab:comparison_worst_avg_str_cvx},
captionskip=2ex,
pos = ht!
]{cll}{\tnote[1]{\citet[Section 4.1.3]{taylor2017smooth}}}{
\toprule
\textbf{Method} & & \textbf{Strongly cvx w/ noise} \\
\midrule
\multirow{2}{*}{\begin{minipage}{0.15\textwidth} \begin{center} Gradient descent (GD) \end{center} \end{minipage}} & Worst\tmark[1] & $\textcolor{purple}{\big (1- \frac{\lambda^-}{\lambda^+} \big )^{2k}} (\lambda^+)^2 $\\
\cmidrule(r){2-3}
& Average & $\textcolor{purple}{\big (1- \frac{\lambda^-}{\lambda^+} \big )^{2k}} \textcolor{teal}{\mfrac{1}{k^{3/2}}} \big [ R^2 \lambda^- + \widetilde{R}^2 r \big ] \cdot C $\\
\midrule
\multirow{2}{*}{\begin{minipage}{0.15\textwidth} \begin{center} Polyak\\ \citep{Polyak1962Some} \end{center} \end{minipage}}
& Worst & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{2k}} \cdot C$ \\
\cmidrule(r){2-3}
& Average & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{2k}} \big [\frac{(\lambda^+-\lambda^-)}{2}R^2 + \widetilde{R}^2r \big ] \cdot C$ \\
\midrule
\multirow{2}{*}{\begin{minipage}{0.2\textwidth} \begin{center} Nesterov accelerated method\\ \citep{nesterov2004introductory} \end{center} \end{minipage}} & Worst & $\textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{k} \big (1- \frac{\lambda^-}{\lambda^+} \big )^k} \cdot C$ \\
\cmidrule(r){2-3}
& Average & $ \textcolor{purple}{ \big ( 1 - \frac{2 \sqrt{\lambda^-}}{\sqrt{\lambda^+} + \sqrt{\lambda^-}}\big )^{k} \big (1- \frac{\lambda^-}{\lambda^+} \big )^k} \big [\textcolor{teal}{\mfrac{1}{k^{1/2}}} \cdot R^2 \lambda^- + \textcolor{teal}{\mfrac{1}{k^{1/2}}} \cdot \widetilde{R}^2 r \big ] \cdot C$ \\
\bottomrule}
\paragraph{Notation.} We write vectors in lowercase boldface (${\boldsymbol x}$) and matrices in uppercase boldface (${\boldsymbol H}$). The norm $\|{\boldsymbol x}\|_2^2 = {\boldsymbol x}^T {\boldsymbol x}$ gives the usual Euclidean $2$-norm and $\|{\boldsymbol H}\|_{\text{op}} = \text{maximum singular value of ${\boldsymbol H}$}$ is the usual operator-2 norm. Given a matrix ${\boldsymbol H} \in {\mathbb R}^{d \times d}$, the largest eigenvalue of ${\boldsymbol H}$ is $\lambda_{{\boldsymbol H}}^+$ and its smallest eigenvalue is $\lambda_{{\boldsymbol H}}^-$. A sequence of random variables $\{y_d\}_{d =0}^\infty$ converges in probability to $y$, indicated by $y_d \Prto[d] y$, if for any $\varepsilon > 0$, $\displaystyle \lim_{d \to \infty} \Pr(|y_d-y| > \varepsilon) = 0$. In other words, the probability that $y_d$ is far from $y$ goes to $0$ as $d$ increases. Probability measures are denoted by $\mu$ and their densities by $\mathop{}\!\mathrm{d}\mu$. We say a sequence of random measures $\mu_d$ converges to $\mu$ weakly in probability if for any bounded continuous function $f$, we have $\int f \mathop{}\!\mathrm{d}\mu_d \to \int f \mathop{}\!\mathrm{d}\mu$ in probability.
All stochastic quantities defined hereafter live on a probability space denoted by $(\Pr, \Omega, \mathcal{F})$ with probability measure $\Pr$ and the $\sigma$-algebra $\mathcal{F}$ containing subsets of $\Omega$. A random variable (vector) is a measurable map from $\Omega$ to ${\mathbb R}$ $({\mathbb R}^d)$ respectively. Let $X : (\Omega, \mathcal{F}) \mapsto ({\mathbb R}, \mathcal{B})$ be a random variable, where $\mathcal{B}$ is the Borel $\sigma$-algebra, and let $B \in \mathcal{B}$. We use the standard shorthand for the event $\{X \in B\} = \{\omega : X(\omega) \in B\}$.\\
\subsection{Main results}
In this paper, we analyze the halting time and develop the first explicit average-case analysis for first-order methods on quadratic objectives.
Quadratic objective functions are rich enough to reproduce the dynamics that arise in more complex models, yet simple enough to be understood in closed form.
Quadratic models are receiving renewed interest in the machine learning community as recent advances have shown that over-parameterized models, including neural networks, have training dynamics similar to those of quadratic problems~\citep{jacot2018neural, novak2018bayesian, arora2019exact, chizat2019lazy}.
The precise form of the quadratic problem we consider is
\begin{equation} \label{eq:LS_main}
\vspace{0.5em}\argmin_{{\boldsymbol x} \in {\mathbb R}^d} \Big \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x}-{\boldsymbol b}\|^2 \Big \}, \quad \text{with } {\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\,,
\end{equation}
where ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is the data matrix, $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$ is the signal vector\footnote{The signal $\widetilde{{\boldsymbol x}}$ is not the same as the vector to which the iterates of the algorithm converge as $k \to \infty$.}, and ${\boldsymbol \eta} \in {\mathbb R}^n$ is a source of noise. All of these inputs may be random, and the target ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$ is produced by a generative model corrupted by noise. We refer to the setting ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}}$ as the noiseless setting and to ${\boldsymbol b} = {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$ as the noisy setting.
We work in the following setting: Both the number of features $(d)$ and data dimension $(n)$ grow to infinity while $d/n$ tends to a fixed $r \in (0,\infty)$.
We use $\widetilde{R}^2=\frac{1}{d}\mathbb{E}\left[\|{\boldsymbol \eta}\|^2\right]$ to denote the magnitude of the noise. For intuition, we implicitly define $R^2 \approx \frac{1}{d}\|{\boldsymbol b}\|^2 - \widetilde{R}^2$ to measure the strength of the signal{\footnote{The definition of $\widetilde{R}^2$ in Assumption~\ref{assumption: Vector} does not imply that $R^2 \approx \frac{1}{d}\|{\boldsymbol b}\|^2 - \widetilde{R}^2$. However the precise definition of $\widetilde{R}$ and this intuitive one yield similar magnitudes and both are generated from similar quantities. }}; we make the definition of $\widetilde{R}^2$ precise in Assumption \ref{assumption: Vector} of Section~\ref{sec: problem_setting}, one of two assumptions fundamental to this work. Throughout, the signal-to-noise ratio in $[0,\infty]$ is held constant as the problem size grows.
Moreover, we assume that the data matrix ${\boldsymbol A}$ is \emph{independent} of both the signal, $\widetilde{{\boldsymbol x}}$, and noise ${\boldsymbol \eta}.$ Note this, together with the generative model, allows for some amount of dependence between ${\boldsymbol A}$ and the target ${\boldsymbol b}.$ We will also assume that ${\boldsymbol H} \stackrel{\text{def}}{=} \frac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ has a well-defined \textit{limiting spectral density}, denoted by $\mathop{}\!\mathrm{d}\mu$, as $n,d \to \infty$ (see Assumption \ref{assumption: spectral_density} of Section~\ref{sec: problem_setting}).
Our first contribution is a framework to analyze the average-case complexity of gradient-based methods in the described setting. Our framework highlights how the algorithm, signal and noise levels interact with each other to produce different average-case convergence guarantees. The culmination of this framework is the average-case convergence rates for first-order methods (see Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}).
\begin{wrapfigure}[15]{r}{0.47\textwidth}
\centering
\vspace{-0.55cm}
\includegraphics[width = 0.85\linewidth]{figures/mp_pdf.pdf}
\vspace{-0.25cm}
\caption{The spectrum of matrices $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ under the isotropic features model converges as $n,d \to \infty$ to the \emph{Mar\v{c}enko-Pastur} distribution, shown here for different values of $r = d/n$.}
\label{fig:MP}
\end{wrapfigure}
Our framework is broad enough to facilitate multiple perspectives on average-case analysis. Our motivating and central application is the \emph{fully-average-case}, in which we assume that all inputs are random. The quintessential random data model is \emph{isotropic features}. This supposes the entries of ${\boldsymbol A}$ are i.i.d.\ random variables with zero mean, equal variance, and bounded fourth moments, that is, ${\mathbb E}\,[A_{ij}] = 0, {\mathbb E}\,[A_{ij}^2] = \sigma^2, {\mathbb E}\,[A_{ij}^4] < \infty$ for all $i, j$. In a celebrated theorem of \cite{marvcenko1967distribution}, the spectrum of ${\boldsymbol H} = \frac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ converges to a compactly supported measure as the problem size grows \emph{without any further assumptions on the distribution of the entries of ${\boldsymbol A}$}. This limiting spectral distribution is known as the Mar\v{c}enko-Pastur law:
\begin{equation} \label{eq:MP}
\begin{gathered} \mathop{}\!\mathrm{d}\mu_{\mathrm{MP}}(\lambda) \stackrel{\text{def}}{=} \delta_0(\lambda) \max\{1-\tfrac{1}{r}, 0\} + \frac{\sqrt{(\lambda-\lambda^-)(\lambda^+-\lambda)}}{2 \pi \lambda \sigma^2 r} 1_{[\lambda^-, \lambda^+]}\,,\\
\text{where} \qquad \lambda^- \stackrel{\text{def}}{=} \sigma^2(1 - \sqrt{r})^2 \quad \text{and} \quad \lambda^+ \stackrel{\text{def}}{=} \sigma^2(1+ \sqrt{r})^2\,.
\end{gathered}
\end{equation}
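For readers who wish to see this convergence numerically, the following minimal Python sketch (assuming only \texttt{numpy}; all parameter values and variable names are illustrative) compares the eigenvalue histogram of $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ for a single draw of isotropic features against the Mar\v{c}enko-Pastur density \eqref{eq:MP}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 2000, 1000, 1.0            # r = d/n = 1/2
r = d / n
A = rng.standard_normal((n, d)) * sigma  # i.i.d. entries: mean 0, variance sigma^2
eigs = np.linalg.eigvalsh(A.T @ A / n)

lam_m = sigma**2 * (1 - np.sqrt(r))**2   # lambda^-
lam_p = sigma**2 * (1 + np.sqrt(r))**2   # lambda^+

def mp_density(lam):
    # absolutely continuous part of the Marchenko-Pastur law
    return np.sqrt(np.maximum((lam - lam_m) * (lam_p - lam), 0.0)) \
           / (2 * np.pi * lam * sigma**2 * r)

hist, edges = np.histogram(eigs, bins=50, density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.mean(np.abs(hist - mp_density(centers))))  # small for large n, d
\end{verbatim}
Since $r < 1$ here, the atom at zero has mass $\max\{1 - \tfrac{1}{r}, 0\} = 0$, and the absolutely continuous part alone describes the spectrum.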
However, our framework is built to be vastly more general. To start, the framework covers a fully-average-case analysis with other data models, such as the one-hidden layer network with random weights and the correlated features model (see Section \ref{sec: data_generate}).
More to the point, this framework also allows for a type of semi-average-case analysis, in which only ${\boldsymbol b}$ is taken to be random. When we do this and then choose ${\boldsymbol A}$ in such a way as to maximize the halting time, we call this the \emph{adversarial average-case}. See Section \ref{sec: average_case} for further details and motivations.
We now discuss the contents of this framework in detail,
which is to say we survey how Assumptions \ref{assumption: Vector} and \ref{assumption: spectral_density} combine to show the halting time is concentrated and deterministic.
The first step is to express the conditional expectation of the gradient at the $k$-th iterate as a sum of expected traces of polynomials in the matrix ${\boldsymbol H} = \frac{{\boldsymbol A}^T {\boldsymbol A}}{n}$ (c.f.\ Proposition~\ref{proposition:conditional}):
\begin{equation} \label{eq:conditional_main_result}
{\mathbb E}\,[\|\nabla f({\boldsymbol x}_k)\|^2 \, | \, {\boldsymbol H}] = \tfrac{R^2}{d} \text{tr} \big ( {\boldsymbol H}^2 P_k^2({\boldsymbol H}) \big ) + \tfrac{\widetilde{R}^2}{n} \text{tr} \big ( {\boldsymbol H} P_k^2({\boldsymbol H}) \big ).
\end{equation}
The polynomial $P_k$, known as the \textit{residual polynomial}, is a $k$-th degree polynomial associated with the gradient-based algorithm. This tool of associating each algorithm with polynomials is a classic technique in numerical iterative methods for solving linear systems \citep{Flanders1950Numerical,golub1961chebyshev,fischer1996polynomial,rutishauser1959refined}. Such polynomials are used to prove convergence of some of the most celebrated algorithms like the conjugate gradient method \citep{Hestenes&Stiefel:1952}. Explicit expressions of the residual polynomials for Nesterov's accelerated methods \citep{nesterov2004introductory, Beck2009Fast}, both convex and strongly convex, as well as gradient descent and Polyak's momentum (a.k.a.\ Heavy-ball) \citep{Polyak1962Some}, are derived in Section~\ref{sec: poly}. These polynomials may be of independent interest.
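Before passing to the limit, \eqref{eq:conditional_main_result} can be checked directly at finite $n, d$. The sketch below (assuming \texttt{numpy}; instance sizes are illustrative) fixes one draw of ${\boldsymbol H}$ under isotropic features, runs gradient descent with step size $1/\lambda^+_{{\boldsymbol H}}$, whose residual polynomial $P_k(\lambda) = (1 - \lambda/\lambda^+_{{\boldsymbol H}})^k$ is derived in Section~\ref{sec: poly}, and compares a Monte Carlo estimate of the conditional expectation with the trace formula:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 600, 300, 10
R2, Rt2 = 1.0, 0.5                       # R^2 and \widetilde{R}^2
A = rng.standard_normal((n, d))          # one draw of isotropic features
H = A.T @ A / n
evals = np.linalg.eigvalsh(H)
alpha = 1.0 / evals[-1]                  # GD step size 1/lambda_H^+

# right-hand side: traces through the spectrum of H, with the GD
# residual polynomial P_k(lam) = (1 - alpha*lam)^k
Pk2 = (1 - alpha * evals) ** (2 * k)
rhs = R2 / d * np.sum(evals**2 * Pk2) + Rt2 / n * np.sum(evals * Pk2)

# left-hand side: Monte Carlo over signal and noise, with H held fixed
trials, lhs = 1000, 0.0
for _ in range(trials):
    xtil = rng.standard_normal(d)
    x = xtil + rng.standard_normal(d) * np.sqrt(R2 / d)  # E||x0-xtil||^2 = R^2
    eta = rng.standard_normal(n) * np.sqrt(Rt2)
    b = A @ xtil + eta
    for _ in range(k):
        x -= alpha * A.T @ (A @ x - b) / n               # gradient step
    g = A.T @ (A @ x - b) / n
    lhs += g @ g / trials
print(lhs, rhs)                          # agree up to Monte Carlo error
\end{verbatim}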
The result in \eqref{eq:conditional_main_result} gives an \textit{exact expression} for the expected gradient depending only on traces of powers of ${\boldsymbol H}$, which in turn can be expressed in terms of its \emph{eigenvalues}. Our second main assumption (Assumption \ref{assumption: spectral_density}) then ensures that these traces converge to integrals against the spectral density $\mathop{}\!\mathrm{d}\mu.$ In summary, the squared gradient norm concentrates around a deterministic quantity,\footnote{
In many situations this deterministic quantity
$\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$
is in fact the limiting expectation of the squared-norm of the gradient. However, under the assumptions that we are using, this does not immediately follow. It is however always the limit of the median of the squared-norm of the gradient.}\footnote{Technically, there is no need to assume the measure $\mu$ has a density -- the theorem holds just as well for any limiting spectral measure $\mu$. In fact, a version of this theorem can be formulated at finite $n$ just as well, thus dispensing entirely with Assumption \ref{assumption: spectral_density} -- c.f.\ Proposition \ref{proposition:conditional}.} $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$:
\begin{theorem}[Concentration of the gradient] \label{thm: concentration_main}
Under Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \label{eq: something_1_main} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] \textcolor{teal}{\overbrace{R^2}^{\text{signal}}} \int { \underbrace{\lambda^2 P_k^2(\lambda)}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda)}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} } \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,.
\end{equation}
\end{theorem}
\noindent
Notably, the deterministic value around which the gradient concentrates depends on ${\boldsymbol H}$ only through its eigenvalues.
The concentration of the gradient norm above yields a candidate for the limiting value of the halting time, the first iteration at which the gradient $\|\nabla f({\boldsymbol x}_k)\|^2$ falls below some predefined $\varepsilon$. We define this candidate $\tau_{\varepsilon}$ from $ \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ and the halting time $T_{\varepsilon}$ by
\begin{align}
\tau_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \le \varepsilon\} \quad \text{and} \quad T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{align}
We note that the deterministic value $\tau_{\varepsilon}$ is, by definition, the average complexity of the first-order algorithm. This leads to our second main result, which states that, with probability approaching one, the halting time equals this deterministic value.
\begin{theorem}[Halting time universality] \label{thm: Halting_time_main} Fix an $\varepsilon > 0$ and suppose $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \neq \varepsilon$ for all $k$. Under Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density},
\begin{empheq}[box=\mybluebox]{equation}
\vphantom{\sum_i^n}\lim_{d \to \infty} \Pr(T_{\varepsilon} = \tau_{\varepsilon} ) = 1\,.
\end{empheq}
\end{theorem}
A result of this form previously appeared in \cite{deift2019conjugate} for the conjugate gradient method.
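For the isotropic features model, $\tau_{\varepsilon}$ is computable by quadrature against the Mar\v{c}enko-Pastur density. The sketch below (assuming \texttt{numpy}; gradient descent with the residual polynomial from Table~\ref{table:polynomials}, all parameter values illustrative) computes $\tau_{\varepsilon}$ this way and compares it with the empirical halting time $T_{\varepsilon}$ of a single run:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 4000, 2000, 1.0             # r = 1/2: strongly convex regime
r, R2, Rt2, eps = d / n, 1.0, 0.25, 1e-6
lam_m = sigma**2 * (1 - np.sqrt(r))**2
lam_p = sigma**2 * (1 + np.sqrt(r))**2

# quadrature grid for Marchenko-Pastur (no atom at 0 since r < 1)
lam = np.linspace(lam_m, lam_p, 20000)[1:-1]
w = np.sqrt((lam - lam_m) * (lam_p - lam)) \
    / (2 * np.pi * lam * sigma**2 * r) * (lam[1] - lam[0])

def limit_grad_sq(k):                     # deterministic limit for GD
    Pk2 = (1 - lam / lam_p) ** (2 * k)    # residual polynomial squared
    return R2 * np.sum(lam**2 * Pk2 * w) + Rt2 * r * np.sum(lam * Pk2 * w)

tau = next(k for k in range(1, 10**6) if limit_grad_sq(k) <= eps)

# empirical halting time on one random instance
A = rng.standard_normal((n, d)) * sigma
xtil = rng.standard_normal(d)
x = xtil + rng.standard_normal(d) * np.sqrt(R2 / d)   # E||x0-xtil||^2 = R^2
b = A @ xtil + rng.standard_normal(n) * np.sqrt(Rt2)
alpha = 1.0 / np.linalg.eigvalsh(A.T @ A / n)[-1]
T, g = 0, A.T @ (A @ x - b) / n
while g @ g > eps:
    x -= alpha * g
    g = A.T @ (A @ x - b) / n
    T += 1
print(tau, T)                             # typically equal for large n, d
\end{verbatim}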
\subsubsection{Extension beyond least squares, ridge regression} \label{sec:ridge_regression_main}
One extension of Theorems~\ref{thm: concentration_main} and \ref{thm: Halting_time_main} to other objective functions is the ridge regression problem or $\ell_2$-regularization, that is, we consider a problem of the form
\begin{equation} \label{eq:ridge_regression_main}
\argmin_{{\boldsymbol x} \in \mathbb{R}^d} \left \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} - {\boldsymbol b}\|^2 + \frac{\gamma}{2} \|{\boldsymbol x}\|^2 \right \}, \quad \text{with ${\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$\,.}
\end{equation}
As discussed above, we assume that ${\boldsymbol A} \in \mathbb{R}^{n \times d}$ is a (possibly random) data matrix, $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in \mathbb{R}^n$ is a noise vector. We make the same assumptions on the limiting spectral measure of ${\boldsymbol H} = \frac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$, the ratio of features to samples, that is, $d/n$ tends to some fixed $r \in (0,\infty)$ as $d \to \infty$, and the magnitude of the noise $\widetilde{R}^2 = \tfrac{1}{d} \mathbb{E}[\|{\boldsymbol \eta}\|^2]$. In addition to the independence assumption between the data matrix ${\boldsymbol A}$ and the signal $\widetilde{{\boldsymbol x}}$ and ${\boldsymbol x}_0$, we add that the signal and the initialization are also independent of each other with magnitudes $\mathbb{E}[\|{\boldsymbol x}_0\|^2] = \dot{R}^2$ and $\mathbb{E}[\|\widetilde{{\boldsymbol x}}\|^2] = \widehat{R}^2$ (see Assumption~\ref{assumption:ridge_vector} for a precise statement). The constant $\gamma > 0$ is the ridge regression parameter.
The addition of the $\ell_2$-regularizer to the least squares problem alters the Hessian of the least squares by adding a multiple of the identity. Therefore the matrix ${\boldsymbol M} \stackrel{\text{def}}{=} {\boldsymbol H} + \gamma {\boldsymbol I}$ and its eigenvalues play the role of ${\boldsymbol H}$ and its eigenvalues in Theorem~\ref{thm: concentration_main}. The result is the following theorem.
\begin{theorem}[Concentration of the gradient for ridge regression] \label{thm: concentration_main_main_ridge}
Under Assumptions~\ref{assumption:ridge_vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \begin{aligned} \label{eq: something_1_main_ridge_main} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] &\textcolor{teal}{\overbrace{\dot{R}^2}^{\text{initial.}}} \! \!\! \int { \underbrace{(\lambda + \gamma)^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{teal}{\overbrace{\widehat{R}^2}^{\text{signal}}} \! \!\! \int { \underbrace{\lambda^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } \\
& \quad \quad + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} }. \end{aligned}
\end{equation}
\end{theorem}
Here $\mathop{}\!\mathrm{d} \mu$ is the limiting spectral density of ${\boldsymbol H}$. The limiting gradient \eqref{eq: something_1_main_ridge_main} decomposes into three terms which highlight the effects of initialization, signal, and noise. This is unlike the two terms in \eqref{eq: something_1_main} which illustrate the noise and signal/initialization effects. The extra $\dot{R}^2$ term in \eqref{eq: something_1_main_ridge_main} only adds to the magnitude of the gradient due to the independence between the signal and initialization. We also note that the matrix ${\boldsymbol M}$ always has eigenvalues bounded away from $0$ even in the limit as $d \to \infty$. As such, we expect linear convergence. By defining the right-hand side of \eqref{eq: something_1_main_ridge_main} to be $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$, it follows that Theorem~\ref{thm: Halting_time_main} holds under Assumption~\ref{assumption:ridge_vector} in place of Assumption~\ref{assumption: Vector}. For additional discussion see Section~\ref{sec:ridge_regression}.
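To illustrate, the sketch below (assuming \texttt{numpy}) evaluates the right-hand side of \eqref{eq: something_1_main_ridge_main} by quadrature against the Mar\v{c}enko-Pastur density for gradient descent on the ridge objective. We take the GD residual polynomial from Table~\ref{table:polynomials} evaluated on the shifted spectrum, $P_k(\lambda + \gamma; \lambda^{\pm}) = \big(1 - (\lambda+\gamma)/(\lambda^+ + \gamma)\big)^k$; this choice of algorithm and all parameter values are illustrative:
\begin{verbatim}
import numpy as np

sigma, r, gamma = 1.0, 1.0, 0.1          # ill-conditioned H (r = 1), ridge gamma
Rdot2, Rhat2, Rt2 = 1.0, 1.0, 0.25       # initialization, signal, noise magnitudes
lam_m = sigma**2 * (1 - np.sqrt(r))**2   # = 0 when r = 1
lam_p = sigma**2 * (1 + np.sqrt(r))**2

# crude uniform quadrature grid; adequate for illustration
lam = np.linspace(lam_m, lam_p, 200000)[1:-1]
w = np.sqrt((lam - lam_m) * (lam_p - lam)) \
    / (2 * np.pi * lam * sigma**2 * r) * (lam[1] - lam[0])

alpha = 1.0 / (lam_p + gamma)            # GD step 1/lambda_M^+, M = H + gamma*I

def limit_grad_sq_ridge(k):
    Pk2 = (1 - alpha * (lam + gamma)) ** (2 * k)   # GD residual poly at lam+gamma
    return (Rdot2 * np.sum((lam + gamma)**2 * Pk2 * w)
            + Rhat2 * np.sum(lam**2 * Pk2 * w)
            + Rt2 * r * np.sum(lam * Pk2 * w))

for k in (0, 10, 100, 1000):
    print(k, limit_grad_sq_ridge(k))     # linear decay despite r = 1
\end{verbatim}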
\subsection{Comparison between average and worst-case} \label{sec: average_case}
The average-case analysis we develop in this paper is effective in the large problem size limit, whereas worst-case analysis is performed for a fixed matrix size. This implies that there are potentially \emph{dimension-dependent} quantities which must be addressed when making a comparison.
For example, all the first-order methods considered here converge linearly for the finite-dimensional least squares problem: the rate is determined by the gap between the smallest nonzero eigenvalue of the matrix ${\boldsymbol H}$ and $0$. However this could very well be meaningless in the context of a high-dimensional problem, as this gap becomes vanishingly small as the problem size grows.
In the context of the isotropic features model, when the ratio of features to samples $r$ is $1,$ this is precisely what occurs: the smallest eigenvalues tend to $0$ as the matrix size grows. In contrast, when $r$ is bounded away from $1$, the least squares problem in \eqref{eq:LS_main} has a \emph{dimension-independent} lower bound on the Hessian which holds with overwhelming probability (c.f.\ Figure~\ref{fig:MP}). However, for the comparison we do here, there is another dimension-dependent quantity which will have a greater impact on the worst-case bounds.
Before continuing, we remark on some terminology we will use throughout the paper. While for any realization of the least squares problem the Hessian ${\boldsymbol H}$ is almost surely positive definite, as problem size grows, the matrix ${\boldsymbol H}$ can become ill-conditioned, that is, the smallest eigenvalues tend to $0$ as $n \to \infty$ when $r = 1$. Consequently, the computational complexity of first-order algorithms as $n \to \infty$ exhibits rates similar to non-strongly convex problems. On the other hand, when $r$ is bounded away from $1$, the gap between the smallest nonzero eigenvalue of ${\boldsymbol H}$ and 0 results in first-order methods having complexity rates similar to strongly convex problems. We use this terminology, \textit{non-strongly convex} and \textit{strongly convex}, in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx} to distinguish the different convergence behaviors when $r = 1$ and $r \neq 1$, respectively, and for worst-case complexity comparisons.
\paragraph{Worst-case rates and the distance to optimality.}
Typical worst-case upper bounds for first-order algorithms depend on the distance to optimality, ${\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}$. For example, let us consider gradient descent (GD). Tight worst-case bounds for GD in the strongly convex and convex setting \citep{taylor2017smooth}, respectively, are
\begin{gather*}
\|\nabla f({\boldsymbol x}_k)\|^2 \le (\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \left ( 1- \tfrac{\lambda_{{\boldsymbol H}}^-}{\lambda_{{\boldsymbol H}}^+} \right )^{2k} \stackrel{\text{def}}{=} \mathrm{UB}_{\text{sc}}(\|\nabla f({\boldsymbol x}_k)\|^2)\\
\text{and} \quad \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \stackrel{\text{def}}{=} \mathrm{UB}_{\text{cvx}}(\|\nabla f({\boldsymbol x}_k)\|^2),
\end{gather*}
where ${\boldsymbol x}^{\star}$ is the solution to \eqref{eq:LS_main} found by the algorithm, i.e., the iterates of the algorithm converge ${\boldsymbol x}_k \to {\boldsymbol x}^{\star}$.
To formulate a comparison between the fully-average-case rates, where ${\boldsymbol A}$ follows isotropic features, and the worst-case rates, we must make an estimate of this distance to optimality ${\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}$. In the noiseless setting $(\widetilde{R} = 0)$, the expectation of $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is a constant multiple of $R^2$. In particular, it is independent of the dimension. Similarly, when we have dimension-independent strong convexity $(r \neq 1)$, even with noisy targets ${\boldsymbol b}$ $(\widetilde{R} > 0)$, the distance to the optimum is well behaved and $\mathbb{E}[\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2]$ is a constant involving $\widetilde{R}^2$ and $R^2$. Hence, a direct comparison between worst- and average-case is relatively simple.
For the ill-conditioned case when $r=1$, the situation is more complicated with noisy targets. To maintain a fixed and finite signal-to-noise ratio, the distance to optimality will behave like ${\|{\boldsymbol x}^{\star} - {\boldsymbol x}_0 \|^2} \approx d \widetilde{R}^2$; that is, it is dimension-dependent.\footnote{Precisely, we show that $\tfrac{d \widetilde{R}^2}{\|{\boldsymbol x}^{\star}-{\boldsymbol x}_0\|^2}$ is tight (see Section~\ref{sec: avg_derivations}, Lemma~\ref{lem:growth_dist_optimum}).} So the worst-case rates have a dimension-dependent constant whereas the average-case rates are dimension-independent. This dimension-dependent term can be seen in the last column of Table~\ref{tab:comparison_worst_avg_cvx}. Conversely, if one desires to make ${\mathbb E}\,[\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2]$ constant across dimensions using a generative model with noise, one is forced to scale ${\boldsymbol \eta}$ to go to zero as $d \to \infty$, thus reducing the full generative model to the noiseless regime.
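This dimension dependence is easy to observe numerically. The sketch below (assuming \texttt{numpy}; a square Gaussian instance with $r = 1$ and ${\boldsymbol x}_0 = \bm{0}$, all values illustrative) computes the minimizer by a linear solve and tracks the ratio $\|{\boldsymbol x}^{\star} - {\boldsymbol x}_0\|^2/(d \widetilde{R}^2)$, which remains of order one while the unnormalized distance grows with $d$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Rt = 0.5                                  # noise magnitude \widetilde{R}
for d in (128, 256, 512, 1024):           # n = d, so r = 1 throughout
    ratios = []
    for _ in range(20):
        A = rng.standard_normal((d, d))
        xtil = rng.standard_normal(d) / np.sqrt(d)   # R^2 = 1 since x0 = 0
        eta = rng.standard_normal(d) * Rt
        xstar = np.linalg.solve(A, A @ xtil + eta)   # least-squares minimizer
        ratios.append(np.sum(xstar**2) / (d * Rt**2))
    print(d, np.median(ratios))           # roughly constant across d
\end{verbatim}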
\paragraph{Adversarial model.}
As mentioned above, the comparison with existing worst-case bounds is problematic due to dimension-dependent factors. To overcome this, we consider the following \textit{adversarial model}. First, we assume a noisy generative model for ${\boldsymbol b}$ (Assumption~\ref{assumption: Vector} holds). Next, our adversary chooses the matrix ${\boldsymbol A}$ without knowledge of ${\boldsymbol b}$ to \textit{maximize the norm of the gradient} subject to the constraint that the convex hull of the eigenvalues of ${\boldsymbol H} = \tfrac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ equals $[\lambda^{-},\lambda^+]$. For comparison to the average-case analysis with isotropic features, we would choose $\lambda^{\pm}$ to be the endpoints of the Mar\v{c}enko-Pastur law. In light of Theorem~\ref{thm: concentration_main}, the adversarial model seeks to solve the constrained optimization problem
\begin{equation} \begin{aligned} \label{eq: adversary_worst_case_main}
\lim_{d \to \infty} \max_{{\boldsymbol H}} \, \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ]
&=
\max_{ \lambda \in [\lambda^-, \lambda^+] }
\bigl\{ R^2 \lambda^2 P_k^2(\lambda) + \widetilde{R}^2 r \lambda P_k^2(\lambda)\bigr\}.
\end{aligned}
\end{equation}
We call this expression the \textit{adversarial average-case guarantee}.
The main distinction between worst-case and adversarial average-case is that traditional worst-case maximizes the gradient over \textit{all} inputs -- both targets ${\boldsymbol b}$ and data matrix ${\boldsymbol A}$. This leads to dimension-dependent complexity, as there are usually exceptional target vectors that are heavily dependent on the data matrix ${\boldsymbol A}$ (such as those built from extremal singular vectors of ${\boldsymbol A}$) and cause the algorithm to perform exceptionally slowly.
In contrast, the adversarial average-case keeps the randomness of the target ${\boldsymbol b}$ while maximizing over the data matrix ${\boldsymbol A}$. This is a more meaningful worst-case comparison: for example, in the setting of linear regression, the response and measurements of the independent variables are typically generated through different means and have different and independent sources of noise (see for example \cite[Example 10.1]{walpole1989probability}). Hence the independence of the noise intervenes to limit how truly bad the data matrix ${\boldsymbol A}$ can be. Furthermore, the complexity of the adversarial average-case is dimension-independent. Table~\ref{tab:comparison_worst_avg_cvx} shows these adversarial complexities for non-strongly convex objectives \eqref{eq:LS_main}. Similar results can also be derived for strongly convex objectives but are omitted for brevity.
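To make the comparison concrete, the sketch below (assuming \texttt{numpy}; gradient descent with the residual polynomial of Table~\ref{table:polynomials}, parameter values illustrative) evaluates the constrained maximum in \eqref{eq: adversary_worst_case_main} on a grid and compares it with the corresponding average over the Mar\v{c}enko-Pastur law:
\begin{verbatim}
import numpy as np

sigma, r, R2, Rt2, k = 1.0, 0.5, 1.0, 0.25, 50
lam_m = sigma**2 * (1 - np.sqrt(r))**2
lam_p = sigma**2 * (1 + np.sqrt(r))**2
lam = np.linspace(lam_m, lam_p, 20000)[1:-1]
Pk2 = (1 - lam / lam_p) ** (2 * k)        # GD residual polynomial, squared

# adversarial average-case: maximize the integrand over the support
adversarial = np.max(R2 * lam**2 * Pk2 + Rt2 * r * lam * Pk2)

# fully-average-case: integrate the same expression against Marchenko-Pastur
w = np.sqrt((lam - lam_m) * (lam_p - lam)) \
    / (2 * np.pi * lam * sigma**2 * r) * (lam[1] - lam[0])
average = np.sum((R2 * lam**2 + Rt2 * r * lam) * Pk2 * w)
print(average, adversarial)               # average-case <= adversarial
\end{verbatim}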
\begin{figure}
\centering
\includegraphics[width = 0.48\linewidth]{figures/rates.pdf}
\includegraphics[width = 0.48\linewidth]{figures/gap_1.pdf}
\caption{{\bfseries Average-case vs worst-case in least squares} with isotropic features ($r =1, d = 4096$). {\bfseries Left}: 8000 runs of GD, standard deviation (shaded region, undetectable), and theoretical rates (dashed lines). Empirical runs precisely match the theoretical average-case rates.
{\bfseries Right}: Ratio of the upper bound in worst-case to empirical gradient after $k=4096$ iterations,
$\mathrm{UB}_{\text{cvx}}(\|\nabla f({\boldsymbol x}_k)\|^2) / \|\nabla f({\boldsymbol x}_k)\|^2$. Combined with the concentration of gradients (left), this implies that \emph{the worst-case bounds on the gradient norm are always larger than the average-case behavior.} In the noisy setting, the distribution of these worst-case ratios has large variance, in contrast with the negligible variance on the left, which makes the worst-case bound unpredictable; the noiseless setting does not exhibit this.
\label{fig:avg_rates}
}
\end{figure}
\paragraph{Comparison with adversarial and worst-case complexities.}
By construction, the average-case convergence rates are at least as good as the worst-case and adversarial guarantees. The average-case complexities in the convex, noiseless setting ($r =1, \widetilde{R}=0$) for Nesterov's accelerated method (convex) \citep{nesterov2004interoductory, Beck2009Fast} and gradient descent (GD) are an order of magnitude faster in $k$ than the worst-case rates (see Table~\ref{tab:comparison_worst_avg_cvx}, first column). It may appear at first glance (Table~\ref{tab:comparison_worst_avg_cvx}) that there is a discrepancy between the average-case and exact worst-case rates when $r =1$ in the noisy setting (${\boldsymbol \eta} \neq \bm{0}$). As noted in the previous section, the worst-case rates have dimension-dependent constants. Provided the dimension is large relative to the iteration counter ($d \ge k^{1/2}$ for GD and $d \ge \log(k)$ for Nesterov), the average complexity indeed yields a faster rate of convergence. The average-case rates are always strictly better than the adversarial rates (see Table~\ref{tab:comparison_worst_avg_cvx}). This improvement in the average rate highlights that \textit{the support of the spectrum does not fully determine the rate}: many eigenvalues contribute meaningfully to the average rate. Hence, our results are not, and cannot be, explained purely by the support of the spectrum.
The average-case complexity in the strongly convex case matches the worst-case guarantees multiplied by an additional \textit{polynomial correction term} (\textcolor{teal}{green} in Table~\ref{tab:comparison_worst_avg_str_cvx}). This polynomial term has little effect on the complexity compared to the linear rate. However, as the matrix ${\boldsymbol H}$ becomes ill-conditioned $(r\to1)$, the polynomial correction starts to dominate the average-case complexity. The sublinear rates in Table~\ref{tab:comparison_worst_avg_cvx} show this effect, and it accounts for the improved average-case rates.
Our average-case rates accurately predict the empirical convergence observed in simulations, in contrast to the worst-case rates (see Figure~\ref{fig:avg_rates}). Although our rates only hold on average, surprisingly, even a single instance of GD exactly matches the theoretical predictions. Moreover, the noisy non-strongly convex worst-case is highly unpredictable due to the instability in ${\boldsymbol x}^{\star}$ across runs. As such, the worst-case analysis is not representative of typical behavior (see Figure~\ref{fig:avg_rates}).
These theoretical results are supported by simulations and empirically extended to other models, such as logistic regression, as well as other algorithms, such as stochastic gradient descent (SGD) (see Section~\ref{sec:numerical_simulations}). This suggests that this universality property holds for a wider class of problems.
\paragraph{Related work.} The average-case analysis has a long history
in computer science and numerical analysis. Often it is used to
justify the superior performance of algorithms as compared with their
worst-case bounds such as Quicksort (sorting)
\citep{Hoare1962Quicksort} and the simplex method in linear
programming, see for example \citep{Spielman2004Smooth, smale1983on,
borgwardt1986probabilistic, todd1991probabilistic} and references
therein. Despite this rich history, it is challenging to transfer
these ideas into continuous optimization due to the ill-defined notion
of a typical continuous optimization problem. Recently
\citet{pedregosa2020average, lacotte2020optimal} derived a framework
for average-case analysis of gradient-based methods and developed
optimal algorithms with respect to the average-case. The class of
problems they consider is a special case of \eqref{eq:LS_main} with
vanishing noise. We use a similar framework -- extending the results to all first-order methods and noisy quadratics while also providing concentration and explicit average-case convergence guarantees.
A natural criticism of a simple average-case analysis is that the complexity is data model dependent and thus it only has predictive power for a small subset of real world phenomena. Because of this, it becomes important to show that any modeling choices made in defining the data ensemble have limited effect. \citet{paquette2020universality} showed that the halting time for conjugate gradient becomes deterministic as the dimension grows and it exhibits a universality property, that is, for a class of sample covariance matrices, the halting times are identical (see also \citet{deift2019conjugate}).
It is conjectured that this property holds in greater generality -- for more distributions and more algorithms~\citep{deift2014universality,deift2018universality}. In \citet{Sagun2017Universal}, empirical evidence confirms this for neural networks and spin glass models. Our paper is in the same spirit as these works -- definitively showing that all first-order methods share this universality property for the halting time on quadratic problems.
This work is inspired by research in numerical linear algebra that uses random matrix theory to quantify the ``probability of difficulty'' and ``typical behavior'' of numerical algorithms \citep{demmel1988probability}. For many numerical linear algebra algorithms, one can place a random matrix as an input and analyze the algorithm's performance. This approach helps explain the success of algorithms and heuristics that could not be well understood through traditional worst-case analysis. Numerical algorithms such as QR \citep{pfrang2014how}, Gaussian elimination \citep{sankar2006smoothed,trefethen1990average}, and other matrix factorization algorithms, for example, symmetric tridiagonalization and bidiagonalization \citep{edelman2005random}, have had their performance analyzed under random matrix inputs (typically Gaussian matrices). In \cite{deift2019universality}, an empirical study extended these results beyond Gaussian matrices and showed that the halting time for many numerical algorithms was independent of the random input matrix for a large class of matrix ensembles. This universality result was eventually proven for the conjugate gradient method \citep{paquette2020universality,deift2019conjugate}.
An alternative approach to explaining successes of numerical algorithms, introduced in \citep{Spielman2004Smooth}, is smoothed analysis. Smoothed analysis is a hybrid of worst-case and average-case analysis. Here one randomly perturbs the worst-case input and computes the maximum expected value of a performance measure of the algorithm. It has been used, for example, to successfully analyze linear programming \citep{Spielman2004Smooth}, semi-definite programs \citep{bhojanapalli2018smoothed}, and conjugate gradient \citep{menon2016smoothed}. In this work, we instead focus on the random matrix approach to analyze first-order methods on optimization problems.
Our work draws heavily upon classical polynomial based iterative methods. Originally designed for the Chebyshev iterative method \citep{Flanders1950Numerical,golub1961chebyshev}, the polynomial approach for analyzing algorithms was instrumental in proving worst-case complexity for the celebrated conjugate gradient method \citep{Hestenes&Stiefel:1952}. For us, the polynomial approach gives an explicit equation relating the eigenvalues of the data matrix to the iterates which, in turn, allows the application of random matrix theory.\\
The remainder of the article is structured as follows: in Section~\ref{sec: problem_setting} we introduce the full mathematical model under investigation, including some examples of data models. Section~\ref{sec: poly} discusses the relationship between polynomials and optimization. Our main results are then described and proven in Section~\ref{sec: halting_time}. Section~\ref{sec: avg_derivations} details the computations involved in the average-case analysis, the proofs of which are deferred to the appendix. The article concludes with numerical simulations in Section~\ref{sec:numerical_simulations}.
\section{Problem setting} \label{sec: problem_setting} In this paper, we develop an average-case analysis for first-order methods on quadratic problems of the form
\begin{equation} \label{eq:LS}
\vspace{0.5em}\argmin_{{\boldsymbol x} \in {\mathbb R}^d} \Big \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x}-{\boldsymbol b}\|^2 \Big \}, \quad \text{with } {\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\,,
\end{equation}
where ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is a (possibly random) matrix (discussed in the next subsection), $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in {\mathbb R}^n$ is a noise vector.
\subsection{Data matrix, noise, signal, and initialization assumptions}\label{sec: assumptions}
Throughout the paper we make the following assumptions.
\begin{assumption}[Initialization, signal, and noise] \label{assumption: Vector} The initial vector ${\boldsymbol x}_0 \in {\mathbb R}^d$, the signal $\widetilde{{\boldsymbol x}} \in {\mathbb R}^d$, and noise vector ${\boldsymbol \eta} \in {\mathbb R}^n$ are independent of ${\boldsymbol A}$ and satisfy the following conditions:
\begin{enumerate}[leftmargin=*]
\item The entries of ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ are i.i.d. random variables and there exist constants $C, R > 0$ such that for $i = 1, \ldots, d$
\begin{equation} \label{eq:R} \begin{gathered} {\mathbb E}\,[{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}] = \bm{0}, \quad {\mathbb E}\,[\|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2] = R^2,
\quad \text{and} \quad {\mathbb E}\,[(\widetilde{{\boldsymbol x}}-{\boldsymbol x}_0)_{i}^4] \le \tfrac{1}{d^2} C.
\end{gathered}
\end{equation}
\item The entries of the noise vector ${\boldsymbol \eta}$ are i.i.d. random variables satisfying the following for $i = 1, \ldots, n$ and for some constants $\widetilde{C}, \widetilde{R} > 0$
\begin{equation}
{\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}, \quad {\mathbb E}\,[\eta_i^2] = \widetilde{R}^2, \quad \text{and} \quad {\mathbb E}\,[\eta_i^4] \le \widetilde{C}.
\end{equation}
\end{enumerate}
\end{assumption}
Assumption~\ref{assumption: Vector} encompasses the setting where the signal is \textit{random} and the algorithm is initialized at ${\boldsymbol x}_0 = \bm{0}$, but it is more general. Starting farther from the signal requires more iterations to converge; intuitively, \eqref{eq:R} therefore constrains the distance from the algorithm's initialization to the signal so that it remains constant across problem sizes. Put another way, the initialization is unbiased about the signal: it is distributed in a rotationally invariant fashion around $\widetilde{{\boldsymbol x}}$ (see Figure~\ref{fig: Assumption_1}).
Assumption~\ref{assumption: Vector} arises as a result of preserving a constant signal-to-noise ratio in the generative model.
Such generative models with this scaling have been used in numerous works \citep{mei2019generalization,hastie2019surprises}.
\begin{wrapfigure}[16]{r}{0.47\textwidth}
\vspace{-0.5cm}
\centering \begin{tikzpicture}[scale = 0.72]
\filldraw[pattern color = darkgray, pattern= north west lines, draw = gray, dashed] (0,0) circle [x radius=1cm, y radius=5mm, rotate=30];
\draw (0,0) circle [radius=2.5];
\node[mark size=2pt] at (0,0) {\pgfuseplotmark{*}};
\node at (0.25,0) {$\widetilde{{\boldsymbol x}}$};
\node[mark size=2pt,color=red] at (-1.5,2) {\pgfuseplotmark{*}};
\node[red] at (-1.9, 2.05) {${\boldsymbol x}_0$};
\node[mark size=2pt,color=red] at (-0.21,0.5) {\pgfuseplotmark{*}};
\node[color=red] at (-0.1,0.75) {${\boldsymbol x}_k$};
\draw[red] (-1.5,2)--(1.25,1.25)--(-1,1)-- (-0.25,0.5);
\node[mark size=3pt,color=blue] at (-2.45,-0.5) {\pgfuseplotmark{triangle*}};
\node[color=blue] at (-2.8,-0.5) {${\boldsymbol x}_0$};
\draw[blue] (-2.45,-0.5)--(0.75,-1.75)--(-0.3,-1)--(0.25,-0.5);
\node[mark size=3pt,color=blue] at (0.25,-0.5) {\pgfuseplotmark{triangle*}};
\node[color=blue] at (-0.1,-0.5) {${\boldsymbol x}_k$};
\node[mark size=2.75pt,color=orange] at (2.29,-1) {\pgfuseplotmark{square*}};
\node[color=orange] at (2.75,-1) {${\boldsymbol x}_0$};
\draw[orange] (2.29, -1)--(1,-1.25)--(1.25,-0.5)--(0.9,0.4);
\node[mark size=2.75pt,color=orange] at (0.9,0.4) {\pgfuseplotmark{square*}};
\node[color=orange] at (1.25,0.4) {${\boldsymbol x}_k$};
\end{tikzpicture}
\caption{The pictured ${\boldsymbol x}_0$ are equiprobable. Each colored line is a different run of GD with random matrix ${\boldsymbol A}$ and the shaded gray area is the set where $\|\nabla f({\boldsymbol x})\|^2 < \varepsilon$. Intuitively, our result says all runs of GD starting from a random ${\boldsymbol x}_0$ take the same number of iterations to reach the shaded area. } \label{fig: Assumption_1}
\end{wrapfigure}
\paragraph{Tools from random matrix theory.} Random matrix theory studies properties of matrices ${\boldsymbol H}$ (most notably, statistics of matrix eigenvalues) whose entries $H_{ij}$ are random variables. We refer the reader to \citep{bai2010spectral, tao2012topics} for a more thorough introduction. Many important statistics of random matrix theory can be expressed as functionals on the eigenvalues of a matrix ${\boldsymbol H}$ (\textit{e.g.}, determinants and traces). Let $\lambda_1, \ldots, \lambda_d$ be the eigenvalues of ${\boldsymbol H}$ and define the \textit{empirical spectral measure} (ESM), $\mu_{{\boldsymbol H}}$, as
\begin{equation}
\mu_{{\boldsymbol H}}(\lambda) \stackrel{\text{def}}{=} \frac{1}{d} \sum_{i=1}^d \delta_{\lambda_i},
\end{equation}
where $\delta_{\lambda_i}$ is a Dirac delta function, \textit{i.e.}, a function equal to $0$ except at $\lambda_i$ and whose integral over the entire real line is equal to one. The empirical spectral measure puts a uniform weight on each of the eigenvalues of ${\boldsymbol H}$. When ${\boldsymbol H}$ is random, this becomes a random measure. A main interest in random matrix theory is to characterize the behavior of the empirical spectral measure as the dimension of the matrix tends to infinity.
Because the ESM is a well-studied object for many random matrix ensembles, we state the following assumption on the ESM for the data matrix, ${\boldsymbol A}$. In Section~\ref{sec: data_generate}, we review practical scenarios in which this is verified.
\begin{assumption}[Data matrix] \label{assumption: spectral_density}
Let ${\boldsymbol A}$ be a (possibly random) $n \times d$ matrix such that the number of features, $d$, tends to infinity proportionally to the size of the data set, $n$, so that $\tfrac{d}{n} \to r \in (0, \infty)$. Let ${\boldsymbol H} \stackrel{\text{def}}{=} \tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol A}$ with eigenvalues $\lambda_1 \leq \ldots \leq \lambda_d$ and let $\delta_{\lambda_i}$ denote the Dirac delta with mass at $\lambda_i$. We make the following assumptions on the eigenvalue distribution of this matrix:
\begin{enumerate}[leftmargin=*]
\item The ESM converges weakly in probability to a deterministic measure $\mu$ with compact support,
\begin{equation} \label{eq:ESM_convergence}
\mu_{{\boldsymbol H}} = \mfrac{1}{d}\sum_{i=1}^d \delta_{\lambda_i} \to \mu \quad \text{weakly in probability\,.}
\end{equation}
\item The largest eigenvalue of ${\boldsymbol H}$ converges in probability to the largest element in the support of $\mu$. In particular, if $\lambda^+$ denotes the top edge of the support of $\mu$ then
\begin{equation} \label{eq:max_eigenvalue} \lambda_{{\boldsymbol H}}^+ \Prto[d] \lambda^+. \,\end{equation}
\item (Required provided the algorithm uses the smallest eigenvalue) The smallest eigenvalue of ${\boldsymbol H}$ converges in probability to the smallest, non-zero element in the support of $\mu$. In particular, if $\lambda^-$ denotes the bottom edge of the support of $\mu$ then
\begin{equation} \label{eq:min_eigenvalue} \lambda_{{\boldsymbol H}}^- \Prto[d] \lambda^-. \,\end{equation}
\end{enumerate}
\end{assumption}
\subsection{Examples of data distributions.} \label{sec: data_generate}
In this section we review three examples of data-generating distributions that verify Assumption~\ref{assumption: spectral_density}: a model with isotropic features, a correlated features model, and a one-hidden layer neural network with random weights. Numerous works studying the spectrum of the Hessian on neural networks have found that this spectrum shares many characteristics with the limiting spectral distributions discussed below including compact support, a concentration of eigenvalues near $0$, and a stable top eigenvalue \citep{dauphin2014identifying, papyan2018the, sagun2016eigenvalues, behrooz2019investigation}. In fact, the work of \citet{martin2018implicit} directly compares the Hessians of deep neural networks at various stages in training with the Mar\v{c}enko-Pastur density, that is, the limiting spectral density for the isotropic features model.
\paragraph{Isotropic features.}
We will now elaborate on the well-developed theory surrounding the isotropic features model (see \eqref{eq:MP} and the text just above it).
In particular, parts 2 and 3 of Assumption~\ref{assumption: spectral_density} on the convergence of the largest and smallest eigenvalues are well known:
\begin{lemma}[Isotropic features]({\rm \textbf{\citet[Theorem 5.8]{bai2010spectral}}}) \label{lem:bai_Spectral}
Suppose the matrix ${{\boldsymbol A} \in {\mathbb R}^{n \times d}}$ is generated using the isotropic features model.
The largest and smallest eigenvalues of ${\boldsymbol H}$, $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, resp., converge in probability to $\lambda^+$ and $\lambda^-$ resp., where $\lambda^+ = \sigma^2 (1+ \sqrt{r})^2$ is the top edge of the support of the Mar\v{c}enko-Pastur measure and $\lambda^- = \sigma^2(1-\sqrt{r})^2$ is the bottom edge of the support of the Mar\v{c}enko-Pastur measure.
\end{lemma}
In addition, the isotropic features model is sufficiently random that it is possible to weaken Assumption \ref{assumption: Vector} and still derive for it the conclusion of Theorem \ref{thm: concentration_main}.
In particular, we may let ${\boldsymbol b}_n$ be defined as
\begin{equation}\label{eq:weakb}
{\boldsymbol b}_n =
\mfrac{1}{\sqrt{n}} R {\boldsymbol A} \boldsymbol{\omega}_{1,d}
+
\widetilde{R} \boldsymbol{\omega}_{2,n}
\end{equation}
for any deterministic sequences of vectors $\{\boldsymbol{\omega}_{1,d}\}$ and $\{\boldsymbol{\omega}_{2,n}\}$ from the $d$-dimensional and the $n$-dimensional spheres, respectively, multiplied by the signal strength $R$ and noise $\widetilde{R}$.
Then, under the \emph{further} moment assumption on ${\boldsymbol A}$ that for any $k \in \mathbb{N}$
\begin{equation}\label{eq:strongmoments}
\sup_{i,j} \biggl\{\mathbb{E} |A_{i,j}|^k \biggr\} < \infty,
\end{equation}
it is a consequence of \cite[Theorems 3.6 and 3.7]{KnowlesYin} that
\begin{equation}
\label{eq:isotropicE}
\|\nabla f({\boldsymbol x}_k)\|^2\Prto[d]
R^2 \int {\lambda^2 P_k^2(\lambda)}\mathop{}\!\mathrm{d}\mu+\widetilde{R}^2 r \int {\lambda P_k^2(\lambda)}\mathop{}\!\mathrm{d}\mu
=
\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,.
\end{equation}
This implies that for the isotropic features model under the stronger assumption for the data matrix \eqref{eq:strongmoments}, but the weaker target assumption \eqref{eq:weakb}, we obtain the same complexity results presented in Tables \ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}. See also \cite[Corollary 5.12]{paquette2020universality} in which a central limit theorem for the gradient is derived under these same assumptions.
\paragraph{Correlated features.}
In this model, one takes a random matrix ${\boldsymbol W} \in \mathbb{R}^{n \times d}$ generated from the isotropic features model and a symmetric positive definite correlation matrix ${\boldsymbol \Sigma}_d \in \mathbb{R}^{d \times d}$. One then defines the correlated features model by
\[
{\boldsymbol A} \stackrel{\text{def}}{=} {\boldsymbol W} {\boldsymbol \Sigma}_d^{1/2}.
\]
This makes ${\boldsymbol H} = \frac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ the normalized sample covariance matrix of $n$ samples of a $d$-dimensional random vector with covariance structure ${\boldsymbol \Sigma}_d.$
Under the assumption that the empirical spectral measure of ${\boldsymbol \Sigma}_d$ converges to a measure $\nu$ and that the norm of ${\boldsymbol \Sigma}_d$ is uniformly bounded, it is a consequence of \cite{Bai1999a,Bai1999b} (see also the discussions in \cite{bai2004CLT,KnowlesYin, HachemHardyNajim}) that Assumption \ref{assumption: spectral_density} holds. Unlike in isotropic features, the limiting spectral measure is not known explicitly, but is instead only characterized (in general) through a fixed-point equation describing its Stieltjes transform.
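A minimal sketch of this model (assuming \texttt{numpy}; the covariance spectrum below is a hypothetical choice for illustration) is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, d = 2000, 1000
W = rng.standard_normal((n, d))            # isotropic features
# hypothetical covariance with linearly spaced spectrum, illustration only
Sigma_sqrt = np.diag(np.sqrt(np.linspace(0.2, 2.0, d)))
A = W @ Sigma_sqrt                         # correlated features model
eigs = np.linalg.eigvalsh(A.T @ A / n)
print(eigs[0], eigs[-1])                   # spectral edges; the limiting measure
                                           # has no closed form in general
\end{verbatim}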
\paragraph{One-hidden layer network with random weights.} In this model, the entries of ${\boldsymbol A}$ are the result of a matrix multiplication composed with a (potentially non-linear) activation function $g \, : \, \mathbb{R} \mapsto \mathbb{R}$:
\begin{align}
A_{ij} \stackrel{\text{def}}{=} g \big (\tfrac{[{\boldsymbol W} {\boldsymbol Y}]_{ij}}{\sqrt{m}} \big ), \quad \text{where ${\boldsymbol W} \in {\mathbb R}^{n \times m}$, ${\boldsymbol Y} \in {\mathbb R}^{m \times d}$ are random matrices\,.}
\end{align}
The entries of ${\boldsymbol W}$ and ${\boldsymbol Y}$ are i.i.d. with zero mean, isotropic variances
${\mathbb E}\,[W_{ij}^2] = \sigma_w^2$ and ${\mathbb E}\,[Y_{ij}^2] = \sigma_y^2$, and light tails, that is, there exists constants $\theta_w, \theta_y > 0$ and $\alpha > 0$ such that for any $t > 0$
\begin{equation} \label{eq: light_tail}
\Pr(|W_{11}| > t) \le \exp(-\theta_w t^{\alpha}) \quad \text{and} \quad \Pr(|Y_{11}| > t) \le \exp(-\theta_y t^{\alpha})\,.
\end{equation}
Although stronger than bounded fourth moments, this assumption holds for any sub-Gaussian random variables (\textit{e.g.}, Gaussian, Bernoulli, etc.). As in the previous models, to study the large-dimensional limit we assume that the different dimensions grow at comparable rates, given by $\frac{m}{n} \to r_1 \in (0, \infty)$ and $\frac{m}{d} \to r_2 \in (0, \infty)$.
This model encompasses two-layer neural networks with a squared loss, where the first layer has random weights and the second layer's weights are given by the regression coefficients ${\boldsymbol x}$.
In this case, problem \eqref{eq:LS} becomes
\begin{equation} \label{eq: general_LS}
\min_{\boldsymbol x} \, \left\{ f({\boldsymbol x}) = \mfrac{1}{2n} \|{g} \big ( \tfrac{1}{\sqrt{m}} {\boldsymbol W} {\boldsymbol Y} \big ){\boldsymbol x} - {\boldsymbol b}\|^2_2 \right\}.
\end{equation}
The model was introduced by \citet{Rahimi2008Random} as a randomized approach for scaling kernel methods to large datasets, and has seen a surge in interest in recent years as a way to study the generalization properties of neural networks
\citep{hastie2019surprises,mei2019generalization,pennington2017nonlinear,louart2018random,liao2018dynamics}.
The most important difference between this model and the isotropic features model is the presence of a potentially non-linear activation function $g$. We assume $g$ is entire, satisfies a growth condition, and has zero Gaussian mean,
\begin{align} \label{eq: Gaussian_mean}
\hspace{-3em}\text{(Gaussian mean)} \qquad \int {g}(\sigma_w \sigma_y z) \tfrac{e^{-z^2/2}}{\sqrt{2 \pi} } \, \mathop{}\!\mathrm{d} z = 0\,.
\end{align}
The growth condition on the function $g$ is precisely the following: there exist positive constants $C_g, c_g, A_0 > 0$ such that for any $A \ge A_0$ and any $n \in \mathbb{N}$
\begin{align}
\sup_{z \in [-A,A]} |g^{(n)}(z)| \le C_g A^{c_g n}\, .
\end{align}
Here $g^{(n)}$ is the $n$th derivative of $g$. This growth condition is verified for common activation functions such as the sigmoid ${g}(z) = (1+ e^{-z})^{-1}$ and the softplus ${g}(z) = \log(1+e^z)$, a smoothed approximation to the ReLU. The Gaussian mean assumption \eqref{eq: Gaussian_mean} can always be satisfied by incorporating a translation into the activation function.
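A minimal sketch generating such a matrix (assuming only \texttt{numpy}; sizes and variances are illustrative) uses a softplus activation translated so that the Gaussian-mean condition \eqref{eq: Gaussian_mean} holds, with the shift computed by Gauss-Hermite quadrature:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n, m, d = 1000, 1500, 800                # m/n -> r_1, m/d -> r_2
sw, sy = 1.0, 1.0                        # sigma_w, sigma_y

softplus = lambda z: np.log1p(np.exp(z))
# translate the activation so that its Gaussian mean is zero
z, wts = np.polynomial.hermite_e.hermegauss(80)  # N(0,1) quadrature nodes
shift = np.sum(softplus(sw * sy * z) * wts) / np.sum(wts)
g = lambda z: softplus(z) - shift

W = rng.standard_normal((n, m)) * sw
Y = rng.standard_normal((m, d)) * sy
A = g(W @ Y / np.sqrt(m))                # one-hidden layer features
print(np.linalg.eigvalsh(A.T @ A / n)[-1])  # -> top edge of supp(mu)
\end{verbatim}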
\cite{benigni2019eigenvalue} recently showed that the empirical spectral measure and largest eigenvalue of ${\boldsymbol H}$ converge to a deterministic measure and largest element in the support, respectively.
This implies that this model, like the isotropic features one, verifies Assumption~\ref{assumption: spectral_density}.
However, contrary to the isotropic features model, the limiting measure does not have an explicit expression, except for some specific instances of $g$ in which it is known to coincide with the Mar\v{c}enko-Pastur distribution.
\begin{lemma}[One-hidden layer network]({\rm \textbf{\citet[Theorems~2.2 and~5.1]{benigni2019eigenvalue}}}) \label{lem:rand_feat_measure} Suppose the matrix ${\boldsymbol A} \in {\mathbb R}^{n \times d}$ is generated using the random features model. Then there exists a deterministic compactly supported measure $\mu$ such that $\mu_{{\boldsymbol H}} \underset{d \to \infty}{\longrightarrow} \mu$ weakly in probability. Moreover $\lambda_{{\boldsymbol H}}^+ \Prto[d] \lambda^+$ where $\lambda^+$ is the top edge of the support of $\mu$.
\end{lemma}
\renewcommand{\arraystretch}{2.5}
\ctable[notespar,
caption = {{\bfseries Residual Polynomials.} Summary of the residual polynomials associated with the methods discussed in this paper. $T_k$ is the $k$-th Chebyshev polynomial of the first kind, $U_k$ is the $k$-th Chebyshev polynomial of the second kind, and $L_k$ is the $k$-th Legendre polynomial. Derivations of these polynomials can be found in Appendix~\ref{apx: derivation_polynomial}. In light of Proposition~\ref{prop: polynomials_methods}, an explicit expression for the polynomial $P_k$ is enough to determine the polynomial $Q_k$. },
label = {table:polynomials},
captionskip=2ex,
pos =!t
]{l c l}{\tnote[1]{\citep{nesterov2004introductory,Beck2009Fast}} \tnote[2]{\citep{Polyak1962Some}} \tnote[3]{\citep{nesterov2004introductory}} }{
\toprule
\textbf{Methods} & \textbf{Polynomial $P_k$} & \textbf{Parameters} \\
\midrule
Gradient Descent & $(1-\alpha \lambda)^k$ & $\alpha = 1 / \lambda^+_{{\boldsymbol H}}$\\
\midrule
\begin{minipage}{0.18\textwidth} Nesterov (cvx) \tmark[1]
\end{minipage} & $\frac{2(1-\alpha \lambda)^{(k+1)/2}}{\alpha \lambda k} \big ( \sqrt{1-\alpha \lambda} L_k(\sqrt{1-\alpha \lambda}) - L_{k+1}(\sqrt{1-\alpha \lambda}) \big )$
& $\alpha = {1}/{\lambda_{{\boldsymbol H}}^+}$\\
\midrule
\begin{minipage}{0.15\textwidth} Polyak \tmark[2]
\end{minipage} &$\beta^k \big [ \tfrac{ ( \sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-})^2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot T_k(\sigma(\lambda)) + \tfrac{2 \sqrt{\lambda_{{\boldsymbol H}}^- \lambda_{{\boldsymbol H}}^+}}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot U_k(\sigma(\lambda)) \big ]$ & \begin{minipage}{0.19\textwidth} $\sigma(\lambda) = \frac{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^- - 2\lambda}{\lambda_{{\boldsymbol H}}^+ - \lambda_{{\boldsymbol H}}^-}$ \\
$\beta = \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}}$ \end{minipage}
\\
\midrule
\begin{minipage}{0.18\textwidth} Nesterov\\
(Strongly cvx) \tmark[3]
\end{minipage} & $\tfrac{2\beta (\beta x)^{k/2} }{1+\beta} T_k \left ( \tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{x} \right ) + \left (1 - \frac{2\beta}{1+\beta} \right ) (\beta x)^{k/2} U_k \left (\tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{x} \right )$
& \begin{minipage}{0.18\textwidth} $x = 1-\alpha\lambda$,\\ $\alpha = {1}/{\lambda_{{\boldsymbol H}}^+}$\\
$\beta =
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }$
\end{minipage}\\
\bottomrule
}
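As a quick numerical sanity check on the closed forms in Table~\ref{table:polynomials}, the sketch below (assuming \texttt{numpy} and \texttt{scipy}; the edge values standing in for $\lambda^{\pm}_{{\boldsymbol H}}$ are arbitrary) verifies the defining residual property $P_k(0) = 1$, which is forced by the relation $P_k = 1 - \lambda Q_k$ of Proposition~\ref{prop: polynomials_methods} below; each print statement should return a value near $1$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre
from scipy.special import eval_chebyt, eval_chebyu

lp, lm, k = 2.0, 0.5, 7       # stand-ins for lambda_H^+ and lambda_H^-
lam = 1e-9                    # evaluate near 0, where P_k must equal 1
alpha = 1 / lp

# Gradient descent
print((1 - alpha * lam) ** k)

# Nesterov (convex), Legendre form; the 0/0 at lam = 0 is resolved numerically
x = np.sqrt(1 - alpha * lam)
Lk  = legendre.legval(x, np.eye(k + 2)[k])
Lk1 = legendre.legval(x, np.eye(k + 2)[k + 1])
print(2 * (1 - alpha * lam) ** ((k + 1) / 2) / (alpha * lam * k)
      * (x * Lk - Lk1))

# Polyak momentum, Chebyshev form
beta = (np.sqrt(lp) - np.sqrt(lm)) / (np.sqrt(lp) + np.sqrt(lm))
sig = (lp + lm - 2 * lam) / (lp - lm)
print(beta**k * ((np.sqrt(lp) - np.sqrt(lm))**2 / (lp + lm)
                 * eval_chebyt(k, sig)
                 + 2 * np.sqrt(lm * lp) / (lp + lm)
                 * eval_chebyu(k, sig)))
\end{verbatim}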
\section{From optimization to polynomials} \label{sec: poly}
In this section, we look at the classical connection between optimization algorithms, iterative methods, and polynomials \citep{Flanders1950Numerical,golub1961chebyshev,fischer1996polynomial,rutishauser1959refined}. While the idea of analyzing optimization algorithms from the perspective of polynomials is well-established, many modern algorithms, such as the celebrated Nesterov accelerated gradient \citep{nesterov2004introductory}, use alternative approaches to prove convergence.
This connection between polynomials and optimization methods will be crucial to proving the average-case guarantees in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}.
To exploit this connection, we construct the residual polynomials associated with the considered methods and prove novel facts about them which may be of independent interest. For example, the polynomials associated with Nesterov's method provide an alternative explanation for the ODE in \citep{su2016differential}.
Throughout the paper, we consider only gradient-based methods, algorithms which can be written as a linear combination of the previous gradients and the initial iterate.
\begin{definition}[Gradient-based method] \rm{An optimization algorithm is called a \textit{gradient-based method} if each update of the algorithm can be written as the initial iterate plus a linear combination of the previous gradients. In other words, if every update is of the form
\begin{equation}\label{eq:gradient_based}
{\boldsymbol x}_{k+1} = {\boldsymbol x}_0 + \sum_{i=0}^{k} c_{k i} \nabla f({\boldsymbol x}_i)~,
\end{equation}
for some scalar values $c_{k i}$ that can potentially depend continuously on $\lambda^+_{{\boldsymbol H}}$ and $\lambda^-_{{\boldsymbol H}}$.
}
\end{definition}
Examples of gradient-based methods include momentum methods \citep{Polyak1962Some}, accelerated methods \citep{nesterov2004introductory,Beck2009Fast}, and gradient descent. Now given any gradient-based method, we can associate to the method \textit{residual polynomials} $P_k$ and \textit{iteration polynomials} $Q_k$, which are polynomials of degree $k$, precisely as follows.
\begin{proposition}[Polynomials and gradient-based methods] \label{prop: polynomials_methods} Consider a gradient-based method with coefficients $c_{ki}$ that depend continuously on $\lambda^-_{{\boldsymbol H}}$ and $\lambda^+_{{\boldsymbol H}}$.
Define the sequence of polynomials $\{P_k, Q_k\}_{k = 0}^\infty$ recursively by
\begin{equation} \begin{gathered} \label{eq:recursive_noise_poly}
P_0({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = {\boldsymbol I} \quad \text{and} \quad P_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = {\boldsymbol I} - {\boldsymbol H} Q_{k}({\boldsymbol H}; \lambda^{\pm}_{{\boldsymbol H}})\\
Q_0({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = \bm{0} \quad \text{and} \quad Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) = \sum_{i=0}^{k-1} c_{k-1,i} \big [ {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - {\boldsymbol I} \big ]\,.
\end{gathered} \end{equation}
These polynomials $P_k$ and $Q_k$ are referred to as the \emph{residual} and \emph{iteration} polynomials respectively.
We express the difference between the iterate at step $k$ and $\widetilde{{\boldsymbol x}}$ in terms of these polynomials:
\begin{equation} \label{eq:recursive_noise_poly_1}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = P_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \cdot \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,.
\end{equation}
\end{proposition}
\begin{proof}
We will prove the result by induction.
For $k=0$, the claimed result holds trivially. We assume it holds up to iteration $k$ and we will prove it holds for $k+1$. To show this, we will use the following equivalent form of the gradient $\nabla f({\boldsymbol x}) = {\boldsymbol H} ({\boldsymbol x} - \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}$, which follows from the definition of ${\boldsymbol b}$. Using this and the definition of gradient-based method, we have:
\begin{align*}
{\boldsymbol x}_{k+1} &- \widetilde{{\boldsymbol x}} = {\boldsymbol x}_0 - \widetilde{{\boldsymbol x}} + \sum_{i=0}^{k} c_{ki} \nabla f({\boldsymbol x}_i) = {\boldsymbol x}_0 - \widetilde{{\boldsymbol x}} + \sum_{i=0}^k c_{ki} \big [{\boldsymbol H} ({\boldsymbol x}_i-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ]\\
&= {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} + \sum_{i=0}^k c_{ki} \big [{\boldsymbol H} \big ( \big ( {\boldsymbol I} - {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \big ) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \big ]\\
&= {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}+ {\boldsymbol H} \sum_{i=0}^k c_{ki} ( {\boldsymbol I} - {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \sum_{i=0}^k c_{ki} \big ( {\boldsymbol H} Q_i({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - {\boldsymbol I} \big ) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \\
&= \underbrace{\Big [{\boldsymbol I} - {\boldsymbol H} Q_{k+1}({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \Big ]}_{=P_{k+1}({\boldsymbol H}, \lambda_{{\boldsymbol H}}^{\pm})} ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_{k+1}({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,,
\end{align*}
where in the second identity we have used the induction hypothesis and in the last one the recursive definition of $Q_{k+1}$.
\end{proof}
\subsection{Examples of residual polynomials.}\label{sec:Ex_polynomials}
Motivated by the identity linking the error and the residual polynomial in Proposition~\ref{prop: polynomials_methods},
we derive the residual polynomials for some well-known optimization methods. Some of these residual polynomials are known, but others, such as those for Nesterov's accelerated method, appear to be new.
\paragraph{Gradient descent.} Due to the simplicity of the recurrence for the gradient descent iterates, its residual and iteration polynomials $P_k$ and $Q_k$ are explicit. Take for example the typical step size $\alpha = \tfrac{1}{\lambda_{{\boldsymbol H}}^+}$. Then the iterates on \eqref{eq:LS} follow the recursion
\begin{equation}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = {\boldsymbol x}_{k-1} - \widetilde{{\boldsymbol x}} - \alpha \nabla f({\boldsymbol x}_{k-1}) = \big ( {\boldsymbol I} - \alpha {\boldsymbol H} \big )({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}) + \alpha \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}\,.
\end{equation}
Applying Proposition~\ref{prop: polynomials_methods} to this recurrence, we obtain the following polynomials:
\begin{equation} \begin{gathered}
P_k(\lambda; \alpha^{-1}) = (1-\alpha \lambda)^k, \quad
Q_k(\lambda; \alpha^{-1}) = \alpha \sum_{i=0}^{k-1} (1-\alpha \lambda)^i \quad \text{with $Q_0(\lambda) = 0$}.
\end{gathered}
\end{equation}
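To make Proposition~\ref{prop: polynomials_methods} concrete, the following numerical sketch (written in Python with NumPy; the script and all names in it are ours and purely illustrative, not part of any released code) checks the identity \eqref{eq:recursive_noise_poly_1} for gradient descent on a randomly generated least-squares instance.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 30, 15
A = rng.standard_normal((n, d))
x_tilde = rng.standard_normal(d)            # signal
eta = rng.standard_normal(n)                # noise
H = A.T @ A / n
noise_vec = A.T @ eta / n                   # A^T eta / n

alpha = 1.0 / np.linalg.eigvalsh(H).max()   # step size 1 / lambda_H^+
grad = lambda x: H @ (x - x_tilde) - noise_vec

x0 = rng.standard_normal(d)
x = x0.copy()
for _ in range(k):                          # k steps of gradient descent
    x = x - alpha * grad(x)

# Residual and iteration polynomials of gradient descent evaluated at H:
# P_k(H) = (I - alpha H)^k,  Q_k(H) = alpha * sum_{i<k} (I - alpha H)^i.
M = np.eye(d) - alpha * H
Pk = np.linalg.matrix_power(M, k)
Qk = alpha * sum(np.linalg.matrix_power(M, i) for i in range(k))

# x_k - x_tilde = P_k(H)(x_0 - x_tilde) + Q_k(H) A^T eta / n,
# which holds here to machine precision.
print(np.max(np.abs((x - x_tilde) - (Pk @ (x0 - x_tilde) + Qk @ noise_vec))))
\end{verbatim}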
\paragraph{Nesterov's accelerated method.} Nesterov's accelerated method \citep{nesterov2004introductory} and its variant FISTA \citep{Beck2009Fast} generate iterates on \eqref{eq:LS} satisfying the recurrence
\begin{equation} \begin{gathered}
{\boldsymbol x}_{k+1}-\widetilde{{\boldsymbol x}} = (1 + \beta_{k-1}) ({\boldsymbol I}- \alpha {\boldsymbol H}) ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}}) - \beta_{k-1} ({\boldsymbol I} - \alpha {\boldsymbol H})({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}) + \alpha \cdot \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n},\\
\text{where} \quad \alpha = \frac{1}{\lambda_{{\boldsymbol H}}^+} \quad \text{and} \quad \beta_k = \begin{cases}
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }, &\text{if $\lambda_{{\boldsymbol H}}^- \neq 0$}\\
\frac{k}{k+3}, & \text{if $\lambda_{{\boldsymbol H}}^- = 0$}\,,
\end{cases}
\end{gathered}
\end{equation}
with initial vector ${\boldsymbol x}_0 \in {\mathbb R}^d$ and ${\boldsymbol x}_1 = {\boldsymbol x}_0-\alpha \nabla f({\boldsymbol x}_0)$. From this recurrence, the corresponding residual and iteration polynomials satisfy
\begin{equation} \begin{gathered} \label{eq:Nesterov_polynomial_main}
P_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1+\beta_{k-1}) (1-\alpha \lambda) P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \beta_{k-1}(1-\alpha \lambda) P_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm})\\
\text{with} \quad P_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1 \quad \text{and} \quad P_1(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1-\alpha \lambda\\
Q_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1+\beta_{k-1})(1-\alpha \lambda) Q_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \beta_{k-1} (1 - \alpha \lambda) Q_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) + \alpha \\
\text{with} \quad Q_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 0 \quad \text{and} \quad Q_1(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \alpha\,.
\end{gathered}
\end{equation}
We derive the polynomials $P_k$ explicitly in Appendix \ref{apx: Nesterov_accelerated_method}. When $\lambda_{{\boldsymbol H}}^- > 0$ (strongly convex), the polynomial $P_k$ is given by
\begin{gather} P_k(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = \tfrac{2\beta}{1+\beta} (\beta (1-\alpha \lambda))^{k/2} T_k \left ( \tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{1-\alpha \lambda} \right ) + \left (1 - \tfrac{2\beta}{1+\beta} \right ) (\beta (1-\alpha \lambda))^{k/2} U_k \left (\tfrac{1+\beta}{2 \sqrt{\beta}} \sqrt{1-\alpha \lambda} \right ), \nonumber \\ \text{where $\beta =
\tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-} }$, \quad and \quad $\alpha = \frac{1}{\lambda_{{\boldsymbol H}}^+}$}\,, \label{eq:Nesterov_poly_sc}
\end{gather}
where $T_k$ and $U_k$ are the Chebyshev polynomials of the first and second kind, respectively. When the smallest eigenvalue of ${\boldsymbol H}$ is equal to $0$ (the non-strongly convex setting), the polynomial $P_k$ is given by
\begin{equation} \label{eq: Nesterov_Legendre}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \frac{2(1-\alpha\lambda)^{(k+1)/2}}{k \alpha \lambda} \left ( \sqrt{1-\alpha \lambda} \, L_k(\sqrt{1-\alpha \lambda}) - L_{k+1}(\sqrt{1-\alpha \lambda}) \right )\,,
\end{equation}
where $L_k$ are the Legendre polynomials.
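The closed form \eqref{eq:Nesterov_poly_sc} can be sanity-checked against the recurrence \eqref{eq:Nesterov_polynomial_main}. The sketch below (Python with NumPy and SciPy; the spectrum edges are arbitrary illustrative values) evaluates both on a grid of eigenvalues in the strongly convex regime, where the momentum parameter $\beta$ is constant; the two agree up to round-off.
\begin{verbatim}
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

lam_m, lam_p = 0.1, 1.0                       # spectrum edges, lam_m > 0
alpha = 1.0 / lam_p
beta = (np.sqrt(lam_p) - np.sqrt(lam_m)) / (np.sqrt(lam_p) + np.sqrt(lam_m))
lams = np.linspace(lam_m, lam_p, 7)
K = 25

# Three-term recurrence with constant momentum beta.
P_prev, P_curr = np.ones_like(lams), 1 - alpha * lams
for _ in range(1, K):
    P_prev, P_curr = P_curr, ((1 + beta) * (1 - alpha * lams) * P_curr
                              - beta * (1 - alpha * lams) * P_prev)

# Closed form in terms of the Chebyshev polynomials T_K and U_K.
x = 1 - alpha * lams
sigma = (1 + beta) / (2 * np.sqrt(beta)) * np.sqrt(x)
closed = (beta * x) ** (K / 2) * (
    2 * beta / (1 + beta) * eval_chebyt(K, sigma)
    + (1 - 2 * beta / (1 + beta)) * eval_chebyu(K, sigma))
print(np.max(np.abs(P_curr - closed)))        # round-off level difference
\end{verbatim}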
\begin{wrapfigure}[16]{r}{0.45\textwidth}
\vspace{-0.5cm}
\centering
\includegraphics[scale = 0.2]{figures/Halting_time_bessel_poly_2}
\caption{\textbf{Bessel approx. of Nesterov's (convex) poly.} For small $\lambda$, the Bessel approx. (blue) in \eqref{eq:Bessel_asymptotic_main} and Nesterov's (convex) poly. (orange) are indistinguishable. Only when $\lambda$ is far from zero does one see any, albeit minor, differences.}
\label{fig:Bessel}
\end{wrapfigure}
Working directly with the polynomial in \eqref{eq: Nesterov_Legendre} is difficult. We therefore derive an asymptotic expression for it: Nesterov's polynomial satisfies, in a sufficiently strong sense,
\begin{equation} \label{eq:Bessel_asymptotic_main}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) \sim \frac{2J_1(k\sqrt{\alpha \lambda})}{ k\sqrt{\alpha \lambda}} e^{-\alpha \lambda k / 2}\,,
\end{equation}
where $J_1$ is the Bessel function of the first kind. A derivation of this can be found in Appendix~\ref{apx: Nesterov_accelerated_cvx}. Let $f(t,z) \stackrel{\text{def}}{=} P_{tn}(z n^{-2}; \lambda^{\pm}_{{\boldsymbol H}})$. Then the recurrence in \eqref{eq:Nesterov_polynomial_main} becomes a discrete approximation to the initial value problem
\[ \partial_{tt} f + \frac{3}{t} \partial_t f + z f = 0, \quad f(0,z) = 1 \quad \text{and} \quad \partial_t f(0,z) = 0,\]
which bears a strong resemblance to the differential equation model for Nesterov's accelerated method in \citep{su2016differential}.
The solution to this initial value problem is $f(t,z) = \frac{2 J_1(t\sqrt{z})}{t \sqrt{z}}$, which matches the leading term $\frac{2 J_1(k \sqrt{\alpha \lambda})}{k \sqrt{\alpha \lambda}}$ in \eqref{eq:Bessel_asymptotic_main}. Our result in \eqref{eq:Bessel_asymptotic_main}, which is not derived from this differential equation, yields an even tighter approximation for Nesterov's accelerated method by including the exponential factor.
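The quality of \eqref{eq:Bessel_asymptotic_main} can also be probed directly. The sketch below (Python with NumPy and SciPy; the choices $\lambda^+ = 1$, $K = 200$, and the grid of small eigenvalues are illustrative assumptions) runs the recurrence \eqref{eq:Nesterov_polynomial_main} with $\beta_k = k/(k+3)$ and compares it with the Bessel approximation; for small $\alpha\lambda$ the two printed columns nearly coincide, in line with Figure~\ref{fig:Bessel}.
\begin{verbatim}
import numpy as np
from scipy.special import j1

alpha = 1.0                              # alpha = 1 / lambda^+, lambda^+ = 1
lams = np.array([1e-4, 1e-3, 1e-2])      # small eigenvalues
K = 200

P_prev, P_curr = np.ones_like(lams), 1 - alpha * lams
for k in range(1, K):
    beta = (k - 1) / (k + 2)             # beta_{k-1} with beta_k = k/(k+3)
    P_prev, P_curr = P_curr, ((1 + beta) * (1 - alpha * lams) * P_curr
                              - beta * (1 - alpha * lams) * P_prev)

z = K * np.sqrt(alpha * lams)
bessel = 2 * j1(z) / z * np.exp(-alpha * lams * K / 2)
print(np.column_stack([P_curr, bessel])) # columns nearly equal for small lambda
\end{verbatim}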
\paragraph{Polyak momentum algorithm.} We now derive the residual polynomials for the Polyak momentum algorithm (also known as the heavy-ball method) \citep{Polyak1962Some}. The Polyak momentum algorithm takes as inputs the largest and smallest eigenvalues of ${\boldsymbol H}$ and iterates as follows
\begin{equation} \begin{gathered}
{\boldsymbol x}_{k+1}-\widetilde{{\boldsymbol x}} = {\boldsymbol x}_k-\widetilde{{\boldsymbol x}} + m ({\boldsymbol x}_{k-1}-\widetilde{{\boldsymbol x}}-({\boldsymbol x}_{k}-\widetilde{{\boldsymbol x}})) + \alpha \nabla f({\boldsymbol x}_{k}),\\
{\boldsymbol x}_0 \in {\mathbb R}^d, \quad {\boldsymbol x}_1-\widetilde{{\boldsymbol x}} = {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}-\tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \nabla f({\boldsymbol x}_0)\\
\text{where $m = - \left ( \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+} - \sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}} \right )^2$ and $\alpha = -\mfrac{4}{(\sqrt{\lambda_{{\boldsymbol H}}^-}+\sqrt{\lambda_{{\boldsymbol H}}^+})^2}$}.
\end{gathered} \end{equation}
Using these initial conditions, the residual polynomials for Polyak momentum satisfy
\begin{equation} \begin{gathered} P_{k+1}(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = (1-m + \alpha\lambda) P_k(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) + m P_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}),\\
\text{with} \qquad P_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 1, \qquad P_1(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = 1 - \tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \lambda\\
\text{and} \qquad Q_{k+1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = (1-m + \alpha\lambda) Q_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) + m Q_{k-1}(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) - \alpha,\\
\text{with} \qquad Q_0(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = 0, \qquad Q_1(\lambda; \lambda^{\pm}_{{\boldsymbol H}}) = \tfrac{2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-}.
\end{gathered}
\end{equation}
By recognizing this three-term recurrence as that of the Chebyshev polynomials, we can derive an explicit representation for $P_k$, namely
\begin{equation} \begin{gathered}
P_k(\lambda; \lambda_{{\boldsymbol H}}^{\pm}) = \left ( \tfrac{\sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-}}{\sqrt{\lambda_{{\boldsymbol H}}^+} + \sqrt{\lambda_{{\boldsymbol H}}^-}} \right )^k \big [ \tfrac{ ( \sqrt{\lambda_{{\boldsymbol H}}^+}-\sqrt{\lambda_{{\boldsymbol H}}^-})^2}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot T_k(\sigma(\lambda)) + \tfrac{2 \sqrt{\lambda_{{\boldsymbol H}}^- \lambda_{{\boldsymbol H}}^+}}{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^-} \cdot U_k(\sigma(\lambda)) \big ] \\
\text{where $T_k(x)$ and $U_k(x)$ are the Chebyshev polynomials of the 1st and 2nd-kind respectively}\\
\text{and \quad $\sigma(\lambda) = \tfrac{\lambda_{{\boldsymbol H}}^+ + \lambda_{{\boldsymbol H}}^- -2 \lambda}{\lambda_{{\boldsymbol H}}^+ - \lambda_{{\boldsymbol H}}^-}$.}
\end{gathered}
\end{equation}
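As with gradient descent, the correspondence between the Polyak iterates and the residual polynomial can be verified numerically. The sketch below (Python with NumPy; dimensions and the seed are arbitrary) runs the heavy-ball recursion on a noiseless random quadratic and checks that ${\boldsymbol x}_K - \widetilde{{\boldsymbol x}} = P_K({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})({\boldsymbol x}_0 - \widetilde{{\boldsymbol x}})$ by evaluating the three-term recurrence at the eigenvalues of ${\boldsymbol H}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, K = 20, 30
A = rng.standard_normal((3 * d, d))
H = A.T @ A / (3 * d)
evals, V = np.linalg.eigh(H)
lam_m, lam_p = evals[0], evals[-1]

m = -((np.sqrt(lam_p) - np.sqrt(lam_m))
      / (np.sqrt(lam_p) + np.sqrt(lam_m))) ** 2
alpha = -4.0 / (np.sqrt(lam_m) + np.sqrt(lam_p)) ** 2

x_tilde = rng.standard_normal(d)
x0 = rng.standard_normal(d)
grad = lambda x: H @ (x - x_tilde)            # noiseless: eta = 0

# Heavy-ball iterates, matching the initialization in the text.
x_prev, x = x0, x0 - 2.0 / (lam_p + lam_m) * grad(x0)
for _ in range(1, K):
    x_prev, x = x, x + m * (x_prev - x) + alpha * grad(x)

# Residual polynomial P_K evaluated at the eigenvalues of H.
P_prev, P_curr = np.ones_like(evals), 1 - 2.0 / (lam_p + lam_m) * evals
for _ in range(1, K):
    P_prev, P_curr = P_curr, (1 - m + alpha * evals) * P_curr + m * P_prev

rhs = V @ (P_curr * (V.T @ (x0 - x_tilde)))   # P_K(H)(x_0 - x_tilde)
print(np.max(np.abs((x - x_tilde) - rhs)))    # ~ machine precision
\end{verbatim}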
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale = 0.2]{figures/Halting_time_nesterov_strongly_cvx_poly_2}
\hspace{-0.5cm}\includegraphics[scale = 0.2]{figures/Halting_time_polyak_poly_2}
\hspace{-0.5cm}\includegraphics[scale = 0.2]{figures/Halting_time_nesterov_cvx_poly_2}
\end{center}
\caption{{\bfseries Residual polynomials.} The oscillations in the polynomials for Nesterov's accelerated method (convex) are pronounced near zero compared with the other methods. In fact, both the Nesterov (strongly convex) and Polyak momentum polynomials decay quite rapidly to the zero polynomial. To see these oscillations in the figures, one needs an ill-conditioned matrix (condition number 40,000). The slower decay to zero of the residual polynomials for Nesterov (strongly convex), as compared with Polyak momentum, suggests a worse rate of convergence.
} \label{fig:polynomials}
\end{figure*}
\subsection{Properties of residual polynomials}
In the following sections, it will be convenient to know some general properties of the residual polynomials. In particular, the polynomials $\lambda^2 P_k^2(\lambda; \lambda^{\pm})$ and $\lambda P_k^2(\lambda; \lambda^{\pm})$ are uniformly bounded in $k$, and they go to zero pointwise on the fixed support $[\lambda^-, \lambda^+]$. The importance of these facts is twofold. First, these polynomials appear in the formula for the expected gradient, Theorem~\ref{thm: concentration_main}. Second, we use the boundedness and convergence properties in the proof of halting time universality, Theorem~\ref{thm: Halting_time_main}. If one knows an explicit expression for these polynomials \textit{a priori}, then these properties are easily deduced. When such an expression does not exist, we can still conclude that these properties hold, provided that the algorithm is \textit{convergent}.
\begin{definition}[Convergent algorithms] \rm{We say a gradient-based method is \textit{(strongly) convergent} if for every matrix ${\boldsymbol A}$ with ${\boldsymbol A}^T{\boldsymbol A} \succeq 0$ (resp.\ ${\boldsymbol A}^T {\boldsymbol A} \succ 0$) and any vectors ${\boldsymbol b}$ and ${\boldsymbol x}_0$, the sequence of iterates generated by the algorithm starting at ${\boldsymbol x}_0$ satisfies $\|\nabla f({\boldsymbol x}_k)\|^2 \to 0$ as $k \to \infty$, and there exist constants $C, \widetilde{C}$ depending on $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$ such that
\begin{equation} \begin{gathered} \label{eq: boundedness_grad} \|\nabla f({\boldsymbol x}_k)\|^2 \le C
\big ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \big )\\
f({\boldsymbol x}_k)-f({\boldsymbol x}^{\star}) \le \widetilde{C} (f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2)
\end{gathered}
\end{equation}
where ${\boldsymbol x}^{\star}$ is the optimum of \eqref{eq:LS}.
}
\end{definition}
\begin{remark}[Minimal norm solutions] \label{rmk: minimal_norm} For any gradient-based method, the iterates generated by the algorithm on the least squares problem \eqref{eq:LS} satisfy ${\boldsymbol x}_k \in {\boldsymbol x}_0 + \ospan\{ \nabla f({\boldsymbol x}_0), \hdots, \nabla f({\boldsymbol x}_{k-1})\} \subseteq {\boldsymbol x}_0 + \text{\rm Null}({\boldsymbol A})^{\perp}$. If the algorithm converges to some ${\boldsymbol x}^{\star}$ and we have that ${\boldsymbol A}^T {\boldsymbol A} {\boldsymbol x}^{\star} = {\boldsymbol A}^T {\boldsymbol b}$, then the solution ${\boldsymbol x}^{\star}$ is independent of the algorithm in the following sense
\begin{equation}
{\boldsymbol x}^{\star} = \argmin_{\{ {\boldsymbol x} \, : \, {\boldsymbol A}^T {\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T {\boldsymbol b} \}} \|{\boldsymbol x}_0-{\boldsymbol x}\|^2_2.
\end{equation}
In particular when ${\boldsymbol x}_0 \in \text{\rm Null}({\boldsymbol A})^{\perp}$, the optimum ${\boldsymbol x}^{\star}$ is the minimal norm solution. See \textit{e.g.}, \cite{gunasekar2018characterizing, wilson2017marginal} and references therein.
\end{remark}
\begin{remark} All the algorithms discussed in Section~\ref{sec:Ex_polynomials} are convergent.
\end{remark}
The following lemma shows that convergent algorithms have residual polynomials which go to $0$ as $k \to \infty$ on compact subsets of the positive real line. In essence, if the optimality measures go to zero, then so must the residual polynomials.
\begin{lemma}[Convergent algorithms $\Rightarrow$ Residual polynomials $\to 0$] \label{lem: convergent_algorithm} Suppose the algorithm $\mathcal{A}$ is a (strongly) convergent gradient-based method. Fix constants $0 \le \lambda^- < \lambda^+$ if the algorithm is convergent and $0 < \lambda^- < \lambda^+$ if it is strongly convergent. The residual polynomial $P_k$ of the algorithm $\mathcal{A}$ satisfies
\[ \lim_{k \to \infty} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) = 0 \quad \text{and} \quad \lim_{k \to \infty} \lambda P_k^2(\lambda; \lambda^{\pm}) = 0 \quad \text{for all $\lambda \in [\lambda^-, \lambda^+]$}. \]
\end{lemma}
\begin{proof}
Suppose we consider the noiseless setting where ${\boldsymbol \eta} = (0,0,0)^T$ in the generative model so that ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b}$. Fix a constant $\lambda \in [\lambda^-, \lambda^+]$ and define the following matrix and vectors
\begin{equation} \label{eq:matrix_AA} {\boldsymbol A} = \begin{bmatrix} \sqrt{3\lambda^+} & 0 & 0\\
0 & \sqrt{3 \lambda} & 0\\
0 & 0 & \sqrt{3\lambda^-}
\end{bmatrix}, \qquad {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = ( 0, 1, 0 )^T, \quad \text{and} \quad {\boldsymbol \eta} = ( 0, 0, 0 )^T.
\end{equation}
A simple computation shows that ${\boldsymbol H} = \tfrac{1}{3}{\boldsymbol A}^T{\boldsymbol A} = \text{diag}(\lambda^+, \lambda, \lambda^-)$. Because the method is (strongly) convergent, the algorithm converges for these choices of ${\boldsymbol H}$ and ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$. Moreover, we know that $\nabla f({\boldsymbol x}_k) = {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})$ and by Proposition~\ref{prop: polynomials_methods}, the vector ${\boldsymbol x}_k-\widetilde{{\boldsymbol x}} = P_k({\boldsymbol H}; \lambda^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})$. Therefore we have that
\begin{equation} \label{eq:convergent_stuff_1} \lim_{k \to \infty} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) = \lim_{k \to\infty} ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) = \lim_{k \to \infty} \|\nabla f({\boldsymbol x}_k)\|^2 = 0.
\end{equation}
Similarly, we consider the same matrix ${\boldsymbol A}$ as in \eqref{eq:matrix_AA} but instead a pure noise setting,
\begin{equation} {\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = ( 0, 0, 0)^T, \quad \text{and} \quad {\boldsymbol \eta} = ( 0, \sqrt{3}, 0)^T.
\end{equation}
As before, the matrix ${\boldsymbol H} = \text{diag}(\lambda^+, \lambda, \lambda^-)$. By Proposition~\ref{prop: polynomials_methods}, the iterates satisfy ${\boldsymbol x}_k-\widetilde{{\boldsymbol x}} = Q_k({\boldsymbol H}; \lambda^{\pm}) \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{3}$ since ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}} = \bm{0}$. With this, the gradient equals
\[ \nabla f({\boldsymbol x}_k) = {\boldsymbol H}({\boldsymbol x}_k- \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = \big [ {\boldsymbol H} Q_k({\boldsymbol H}; \lambda^{\pm}) - {\boldsymbol I} \big ] \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = - P_k({\boldsymbol H}; \lambda^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3}. \]
Here, again, we used Proposition~\ref{prop: polynomials_methods}. A (strongly) convergent method has the following
\begin{equation}
\lim_{k \to \infty} \lambda P_k^2(\lambda; \lambda^{\pm}) = \lim_{k \to \infty} \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{3} P_k^2({\boldsymbol H}; \lambda^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{3} = \lim_{k \to \infty} \|\nabla f({\boldsymbol x}_k)\|^2 = 0.
\end{equation}
This completes the result.
\end{proof}
The following lemma shows that the residual polynomials are uniformly bounded over $k$ on any compact subset of the positive real line.
\begin{lemma}[Convergent algorithms $\Rightarrow$ boundedness of $P_k$] \label{lem: convergent_bounded} Suppose $\mathcal{A}$ is a (strongly) convergent algorithm with residual polynomial $P_k$.
Under the assumptions of Lemma~\ref{lem: convergent_algorithm},
\[ \max_{k, \lambda \in [\lambda^-, \lambda^+]} \lambda^2 P_k^2(\lambda; \lambda^{\pm}) \le B \quad \text{and} \quad \max_{k, \lambda \in [\lambda^-, \lambda^+]} \lambda P_k^2(\lambda; \lambda^{\pm}) \le \widetilde{B},\]
for some constants $B, \widetilde{B} > 0$.
\end{lemma}
\begin{proof} Suppose we consider the noiseless setting ${\boldsymbol \eta} = \bm{0}$ in the generative model \eqref{eq:LS} so that ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b}$. It then follows that $f({\boldsymbol x}^{\star}) = 0$ where ${\boldsymbol x}^{\star}$ is the optimum. A simple computation using Proposition~\ref{prop: polynomials_methods} shows that for all $k \ge 0$
\begin{equation} \begin{aligned} \label{eq: stuff_10}
f({\boldsymbol x}_k) - f({\boldsymbol x}^{\star}) &= \tfrac{1}{2n} \|{\boldsymbol A} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})\|^2 = \tfrac{1}{2} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}})\\
&= \tfrac{1}{2}({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}).
\end{aligned}
\end{equation}
Next consider the matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$ as in \eqref{eq:matrix_AA} with the initial iterate ${\boldsymbol x}_0 = (0,0,0)^T$. We consider two cases. First, suppose $\lambda^- = 0$, and fix a constant $\lambda \in [\lambda^-, \lambda^+]$. It follows from our choice of ${\boldsymbol A}$, ${\boldsymbol x}_0$, $\widetilde{{\boldsymbol x}}$, and ${\boldsymbol \eta}$ that the vector ${\boldsymbol A} \widetilde{{\boldsymbol x}} = {\boldsymbol b} = (0, -\sqrt{3 \lambda}, 0)^T$ and by \eqref{eq: stuff_10} that
\begin{equation} \label{eq: stuff_11} f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) = \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}).
\end{equation}
The solution set $\{{\boldsymbol x} \, : \, {\boldsymbol A}^T{\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T{\boldsymbol b}\} = \{(0,-1, a)^T : a \in {\mathbb R}\}$ if $\lambda > 0$ and otherwise it equals $\{(0, a, b)^T : a,b \in {\mathbb R}\}$ if $\lambda = 0$. From Remark~\ref{rmk: minimal_norm}, we have that $\displaystyle {\boldsymbol x}^{\star} = \argmin_{{\boldsymbol A}^T{\boldsymbol A} {\boldsymbol x} = {\boldsymbol A}^T {\boldsymbol b}} \|{\boldsymbol x}-{\boldsymbol x}_0\|^2$ and thus we deduce that
\[ {\boldsymbol x}^{\star} = \begin{cases}
(0,-1,0)^T, & \text{if $\lambda > 0$}\\
(0, 0, 0)^T & \text{if $\lambda = 0$}.
\end{cases}
\]
In both cases, we have that $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$. Therefore using the boundedness assumption \eqref{eq: boundedness_grad} and the expression for the gradient in \eqref{eq:convergent_stuff_1}, we have that
\begin{align*} \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \lambda^2 P_k^2(\lambda; \lambda^{\pm}) &= \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \|\nabla f({\boldsymbol x}_k)\|^2 \le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! C ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 )\\
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! C \big ( \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}) + 1 \big ) \le B.
\end{align*}
Here we used that the distance to the optimum $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$ and the polynomial in \eqref{eq: stuff_11} is bounded on a compact set.
Now suppose that $\lambda^- > 0$. As above, we use the same matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$ as in \eqref{eq:matrix_AA} and, in addition, we set ${\boldsymbol x}_0 = (0,0,0)^T$. In this situation, the matrix ${\boldsymbol A}$ is invertible and ${\boldsymbol x}^{\star} = (0,-1,0)^T$. Hence both \eqref{eq: stuff_11} and $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \le 1$ hold. Using the boundedness assumption on function values \eqref{eq: boundedness_grad} and the expression for the function values in \eqref{eq: stuff_10}, we deduce
\begin{align*}
\sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! \lambda P_k^2(\lambda; \lambda^{\pm}) = \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \big ( f({\boldsymbol x}_k)-f({\boldsymbol x}^{\star}) \big )
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \widetilde{C} ( f({\boldsymbol x}_0)-f({\boldsymbol x}^{\star}) + \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 )\\
&\le \! \! \! \! \! \! \sup_{k, \, \lambda \in [\lambda^-, \lambda^+] } \! \! \! \! \! \! 2 \widetilde{C} \big ( \tfrac{1}{2} \lambda P_0^2(\lambda; \lambda^{\pm}) + 1 \big ) \le \widetilde{B}.
\end{align*}
The result immediately follows.
\end{proof}
\section{Halting time is almost deterministic} \label{sec: halting_time}
In this section we develop a framework for the average-case analysis and state a main result of this paper: the concentration of the halting time.
We define the halting time $T_{\varepsilon}$ as the first iteration at which the squared norm of the gradient falls below some predefined tolerance $\varepsilon$:
\begin{equation} \label{eq:something_2} T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 \, : \, \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{equation}
Our main result (Theorem~\ref{thm: Halting_time}) states that this halting time is predictable for almost all high-dimensional data, or more precisely,
\begin{equation}
\lim_{d \to \infty} \Pr(T_{\varepsilon} = \text{constant}) = 1\,.
\end{equation}
Furthermore, we provide an implicit expression for this constant, otherwise known as the average complexity, and in Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx} an explicit expression under further assumptions. The rest of this section provides a proof of this result.
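Before turning to the proof, the concentration phenomenon is easy to observe empirically. The following simulation sketch (Python with NumPy; the Gaussian data model, the ratio $r = d/n = 1/2$, the signal and noise levels, and the tolerance are all illustrative assumptions on our part) runs gradient descent on independent random least-squares instances of growing dimension; the spread of the measured halting times shrinks as $d$ grows.
\begin{verbatim}
import numpy as np

def halting_time(d, r=0.5, R2=1.0, Rt2=0.25, eps=1e-3, seed=0):
    """First k with ||grad f(x_k)||^2 <= eps for gradient descent."""
    rng = np.random.default_rng(seed)
    n = int(d / r)
    A = rng.standard_normal((n, d))
    H = A.T @ A / n
    v = rng.standard_normal(d) * np.sqrt(R2 / d)   # x_0 - x~, E||.||^2 = R2
    eta = rng.standard_normal(n) * np.sqrt(Rt2)
    noise_vec = A.T @ eta / n
    alpha = 1.0 / np.linalg.eigvalsh(H).max()
    for k in range(1, 100_000):
        v = v - alpha * (H @ v - noise_vec)        # track x_k - x~
        g = H @ v - noise_vec
        if g @ g <= eps:
            return k
    return None

for d in (100, 400, 1600):
    print(d, [halting_time(d, seed=s) for s in range(8)])
\end{verbatim}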
\subsection{First-order methods as polynomials} \label{apx: GD_poly}
\begin{proposition}[Residual polynomials and gradients] \label{prop:gradient_polynomial} Suppose the iterates $\{{\boldsymbol x}_k\}_{k=0}^\infty$ are generated by a gradient-based method. Let $\{P_k\}_{k=0}^\infty$ be the sequence of residual polynomials defined in \eqref{eq:recursive_noise_poly}. Then the following identity holds between the iterates and the residual polynomial,
\begin{equation} \begin{gathered} \label{eq:grad_optimality_cond_app}
\| \nabla f({\boldsymbol x}_k) \|^2 = ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{n} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} \\
-2({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}. \nonumber
\end{gathered}
\end{equation}
\end{proposition}
\begin{proof} The gradient in \eqref{eq:LS} is given by the expression $\nabla f({\boldsymbol x}_k) = {\boldsymbol H} ({\boldsymbol x}_k-\widetilde{{\boldsymbol x}}) - \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}$. The result follows immediately by plugging \eqref{eq:recursive_noise_poly_1} into the formula for the gradient and using the relationship ${\boldsymbol H}^2 Q_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) -2 {\boldsymbol H} Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) + {\boldsymbol I} = ({\boldsymbol I} - {\boldsymbol H} Q_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}))^2 = P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})$.
\end{proof}
This \textit{equality} for the squared norm of the gradient is crucial for deriving average-case rates. In contrast, worst-case analysis typically uses only \textit{bounds} on the norm. A difficulty with the polynomials $P_k$ and $Q_k$ is that their coefficients depend on the largest and smallest eigenvalue of ${\boldsymbol H}$, and hence are random. We can remove this randomness thanks to Assumption~\ref{assumption: spectral_density}, replacing $\lambda_{\boldsymbol H}^+$ and $\lambda_{\boldsymbol H}^-$ with the top (bottom) edge of the support of $\mu$, denoted by $\lambda^+$ and $\lambda^-$, without loss of generality.
\begin{proposition}[Remove randomness in coefficients of polynomial] \label{proposition: norm} Suppose Assumption~\ref{assumption: spectral_density} holds. Fix any $k$-degree polynomial $\widetilde{P}_k$ whose coefficients depend continuously on the largest and smallest eigenvalues of ${\boldsymbol H}$. Then the following holds
\begin{equation} \| \widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm}) - \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|^2_{\text{\rm op}} \Prto[d] 0\,. \end{equation}
\end{proposition}
\begin{proof} Fix any $\varepsilon, \delta > 0$. Let $c_i(\cdot)$ where $i = 0, \hdots, k$ be the coefficients associated with the term of degree $i$ in $\widetilde{P}_k({\boldsymbol H}; \cdot)$. For each $i$, the continuity of $c_i(\cdot)$ implies there exists $\delta_{\varepsilon} > 0$ such that
\begin{equation} \text{whenever} \quad \|(\lambda_{{\boldsymbol H}}^+, \lambda_{{\boldsymbol H}}^-)-(\lambda^+, \lambda^-)\| \le \delta_{\varepsilon} \quad \Rightarrow \quad |c_i(\lambda_{{\boldsymbol H}}^{\pm})-c_i(\lambda^{\pm})| \le \frac{\varepsilon}{4 (4\lambda^+)^i}\,. \end{equation}
For sufficiently large $d$, Assumption~\ref{assumption: spectral_density} implies $\Pr \big (|\lambda_{{\boldsymbol H}}^+-\lambda^+| > \min\{\tfrac{\delta_{\varepsilon}}{2}, \lambda^+\} \big ) \le \tfrac{\delta}{2}$ and $\Pr \big (|\lambda_{{\boldsymbol H}}^--\lambda^-| > \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+\} \big ) \le \tfrac{\delta}{2}$.
With this, we define the event
$\mathcal{S} = \{ | \lambda_{{\boldsymbol H}}^+ - \lambda^+| \le \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+ \} \} \cap \{ | \lambda_{{\boldsymbol H}}^- - \lambda^-| \le \min\{ \tfrac{\delta_{\varepsilon}}{2}, \lambda^+ \} \}.$ We have for all sufficiently large $d$
\begin{align}
\Pr \big ( \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-&\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big )
= \Pr \big ( \mathcal{S} \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \nonumber \\
&\qquad \qquad + \Pr \big ( \mathcal{S}^c \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \nonumber\\
&\le \Pr \big ( \mathcal{S} \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) + \delta. \label{eq:rand_feat_blah_1}
\end{align}
Here we used that $\Pr \big ( \mathcal{S}^c \cap \big \{ \|\widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) \|_\text{op} > \varepsilon \big \} \big ) \le \Pr(\mathcal{S}^c) \le \delta$ for large $d$. Therefore, it suffices to consider the first term in \eqref{eq:rand_feat_blah_1} and show that it is $0$. By construction of the set $\mathcal{S}$, any element in $\mathcal{S}$ satisfies both $\| {\boldsymbol H} \|_{\text{op}} \le 2 \lambda^+$ and $|c_i(\lambda_{{\boldsymbol H}}^{\pm})- c_i(\lambda^{\pm})| \le \tfrac{\varepsilon}{4 (4\lambda^+)^i}$. Hence on the event $\mathcal{S}$, we have the following
\begin{align}
\|\widetilde{P}_k({\boldsymbol H}, \lambda_{{\boldsymbol H}}^{\pm} ) - \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|_{\text{op}} \le \sum_{i=0}^k | c_i(\lambda_{{\boldsymbol H}}^{\pm})-c_i(\lambda^{\pm})| \|{\boldsymbol H}\|_{\text{op}}^i \le \sum_{i=0}^k \frac{ (2\lambda^+)^i \varepsilon}{4 (4 \lambda^+)^i} \le \frac{\varepsilon}{2}\,.
\end{align}
From this, we deduce that $\Pr \big (\mathcal{S} \cap \{ \| \widetilde{P}_k({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})-\widetilde{P}_k({\boldsymbol H}; \lambda^{\pm})\|_{\text{op}} > \varepsilon \} \big ) = 0$ and the result immediately follows by \eqref{eq:rand_feat_blah_1}.
\end{proof}
The squared norm of the gradient in \eqref{eq:grad_optimality_cond_app} is a quadratic form. In Proposition~\ref{proposition: norm}, we removed the randomness in the coefficients of the polynomial and now we will relate this back to the squared norm of the gradient, and particularly, the quadratic form. The following lemmas state this precisely.
\begin{lemma} \label{lemma: probability_lemma} Suppose the sequences of non-negative random variables $X_{d}, Y_{d} \ge 0$ satisfy $\mathbb{E}[X_{d}] \le \gamma < \infty$ and $Y_{d} \Prto[d] 0$. Then $X_{d} Y_{d} \Prto[d] 0$.
\end{lemma}
\begin{proof} Fix constants $\varepsilon, \delta > 0$ and set $\hat{\varepsilon} = \frac{\varepsilon \delta}{2\gamma}$ and $\hat{\delta} = \frac{\delta}{2}$. Because $Y_d$ converges in probability, we have $\Pr(Y_{d} > \hat{\varepsilon}) \le \hat{\delta}$ for sufficiently large $d$. Define the event $\mathcal{S} = \{Y_d \le \hat{\varepsilon} \}$ and decompose the space based on this set $\mathcal{S}$ so that for large $d$
\begin{align*}
\Pr(X_d Y_d > \varepsilon) = \Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) + \Pr(\mathcal{S}^c \cap \{X_d Y_d > \varepsilon \})
\le \Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) + \tfrac{\delta}{2}.
\end{align*}
Here we used that $\Pr(\mathcal{S}^c \cap \{X_d Y_d > \varepsilon\}) \le \Pr(\mathcal{S}^c)$. For the other term, a direct application of Markov's inequality yields
\begin{align*}
\Pr(\mathcal{S} \cap \{X_d Y_d > \varepsilon\}) \le \Pr(\mathcal{S} \cap \{\hat{\varepsilon} X_d > \varepsilon\}) \le \tfrac{\hat{\varepsilon}}{\varepsilon} \cdot \mathbb{E}[X_d] \le \tfrac{\delta}{2}.
\end{align*}
The result immediately follows.
\end{proof}
\begin{lemma}[Remove randomness in coefficients of quadratic form]\label{proposition: remove_norm} Suppose Assumption~\ref{assumption: spectral_density} holds and let ${\boldsymbol w} \in \mathbb{R}^d$ and ${\boldsymbol v} \in \mathbb{R}^d$ be independent random vectors satisfying ${\mathbb E}\,[\|{\boldsymbol w}\|_2^2] = R^2$ and ${\mathbb E}\,[\|{\boldsymbol v}\|_2^2] = \widetilde{R}^2$ for some constants $R, \widetilde{R} > 0$.
For any $k$-degree polynomial $\widetilde{P}_k$ whose coefficients depend continuously on $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, the quadratic form converges in probability
\begin{align*}
{\boldsymbol w}^T & \widetilde{P}_k \left ({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) {\boldsymbol v} - {\boldsymbol w}^T \widetilde{P}_k \left ({\boldsymbol H}; \lambda^{\pm} \right ) {\boldsymbol v} \Prto[d] 0.
\end{align*}
\end{lemma}
\begin{proof} Using the Cauchy-Schwarz inequality, it suffices to show that for every $\varepsilon > 0$ we have
\begin{align*}
\lim_{d \to \infty} \Pr \left (\|{\boldsymbol w}\|_2 \cdot \|{\boldsymbol v}\|_2 \cdot \big \| \widetilde{P}_k \left ( {\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) - \widetilde{P}_k \left ( {\boldsymbol H}; \lambda^{\pm} \right ) \big \|_{\text{op}} > \varepsilon \right ) = 0\,.
\end{align*}
Define $X_{d} = \|{\boldsymbol w}\|_2 \|{\boldsymbol v}\|_2$ and $Y_{d} = \big \|\widetilde{P}_k \left ( {\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm} \right ) - \widetilde{P}_k \left ( {\boldsymbol H}; \lambda^{\pm} \right ) \big \|_{\text{op}}$. Proposition~\ref{proposition: norm} immediately gives that $Y_{d} \Prto[d] 0$. Next, the Cauchy--Schwarz inequality implies
\[{\mathbb E}\,[X_d] = {\mathbb E}\,[ \|{\boldsymbol w}\|_2 \|{\boldsymbol v}\|_2] \le {\mathbb E}\,[\|{\boldsymbol w}\|_2^2]^{1/2} {\mathbb E}\,[\|{\boldsymbol v}\|_2^2]^{1/2} = R \widetilde{R}.\]
The result immediately follows after applying Lemma~\ref{lemma: probability_lemma}.
\end{proof}
From Lemma~\ref{proposition: remove_norm} and the expression for the squared norm of the gradient in \eqref{eq:grad_optimality_cond_app}, we can replace the maximum (minimum) eigenvalue $\lambda_{{\boldsymbol H}}^+$ $(\lambda_{{\boldsymbol H}}^-)$ in \eqref{eq:grad_optimality_cond_app} with the top (bottom) edge of the support of $\mu$, $\lambda^+$ ($\lambda^-$). This follows because the vectors ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ satisfy the assumptions on ${\boldsymbol w}$ and ${\boldsymbol v}$ in Lemma~\ref{proposition: remove_norm}, and the terms surrounding these vectors in \eqref{eq:grad_optimality_cond_app} are polynomials in ${\boldsymbol H}$.
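The replacement of $\lambda_{{\boldsymbol H}}^{\pm}$ by $\lambda^{\pm}$ is already visible at moderate dimensions. As a quick illustration (Python with NumPy; we assume Gaussian data, for which the edges of the Marchenko--Pastur law are $\lambda^{\pm} = (1 \pm \sqrt{r})^2$ with $r = d/n$), the extreme eigenvalues of ${\boldsymbol H}$ approach the deterministic edges as $d$ grows:
\begin{verbatim}
import numpy as np

r = 0.5
lam_p = (1 + np.sqrt(r)) ** 2            # top edge of the limiting support
lam_m = (1 - np.sqrt(r)) ** 2            # bottom edge
rng = np.random.default_rng(2)
for d in (50, 200, 800, 1600):
    n = int(d / r)
    A = rng.standard_normal((n, d))
    evals = np.linalg.eigvalsh(A.T @ A / n)
    # Both gaps shrink toward zero as d grows.
    print(d, evals[-1] - lam_p, evals[0] - lam_m)
\end{verbatim}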
\subsection{Concentration of the gradient}
Having established the key equation linking the gradient to a polynomial in Proposition~\ref{prop:gradient_polynomial}, we now show that for almost any large model the magnitude of the gradient after $k$ iterations converges to a deterministic value which we denote by $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$. We recall the statement of Theorem~\ref{thm: concentration_main}:
\noindent \textbf{Theorem.} \rm{(Concentration of the gradient)} \textit{
Under Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \label{eq: something_1} \vspace{0.25cm}
\hspace{-0.28cm} \! \|\nabla f({\boldsymbol x}_k)\|^2 \! \! \Prto[d] \! \! \! \textcolor{teal}{\overbrace{R^2}^{\text{signal}}} \! \! \! \! \int \! { \underbrace{\lambda^2 P_k^2(\lambda; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \! \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } \! r \! \! \int \! { \underbrace{\lambda P_k^2(\lambda; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} } \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,. \!
\end{equation}
}
Intuitively, the value of $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ is the expected gradient after first taking the model size to infinity. The above expression explicitly illustrates the effects of the model and the algorithm on the norm of the gradient: the \textcolor{teal}{signal ($R^2$)} and \textcolor{purple}{noise ($\widetilde{R}^2$)}, the {optimization algorithm} which enters into the formula through the polynomial $P_k$, and the \textcolor{mypurple}{model used to generate ${\boldsymbol A}$} by means of the measure $\mu$.
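For concreteness, $\underset{d \rightarrow \infty}{\mathcal{E}}[\|\nabla f({\boldsymbol x}_{k})\|^2]$ can be evaluated numerically whenever $\mu$ is known. The sketch below (Python with NumPy and SciPy; it assumes Gaussian data, so that $\mu$ is the Marchenko--Pastur law, and all parameter values are illustrative) integrates the right-hand side of \eqref{eq: something_1} for gradient descent and compares it with the squared gradient norm of a single simulated run at moderate dimension; the two printed numbers typically agree to within a few percent.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

r, R2, Rt2, k = 0.5, 1.0, 0.25, 10
lam_m, lam_p = (1 - np.sqrt(r)) ** 2, (1 + np.sqrt(r)) ** 2
alpha = 1.0 / lam_p
mp = lambda t: np.sqrt((lam_p - t) * (t - lam_m)) / (2 * np.pi * r * t)
Pk2 = lambda t: (1 - alpha * t) ** (2 * k)    # squared GD residual polynomial

limit = (R2 * quad(lambda t: t ** 2 * Pk2(t) * mp(t), lam_m, lam_p)[0]
         + Rt2 * r * quad(lambda t: t * Pk2(t) * mp(t), lam_m, lam_p)[0])

rng = np.random.default_rng(7)
d = 1000; n = int(d / r)
A = rng.standard_normal((n, d))
H = A.T @ A / n
v = rng.standard_normal(d) * np.sqrt(R2 / d)  # x_0 - x~
eta = rng.standard_normal(n) * np.sqrt(Rt2)
noise_vec = A.T @ eta / n
for _ in range(k):
    v = v - alpha * (H @ v - noise_vec)
g = H @ v - noise_vec
print(g @ g, limit)                           # close for large d
\end{verbatim}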
The main tool to prove Theorem~\ref{thm: concentration_main} is the moment method, which requires computing explicit expressions for the moments of the norm of the gradient. We summarize this in the following proposition.
To ease notation in the next few propositions, we define the following matrices and vectors
\begin{equation} \begin{gathered} \label{eq:blah_10}
\quad {\boldsymbol u} \stackrel{\text{def}}{=} {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}, \quad {\boldsymbol B} \stackrel{\text{def}}{=} {\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}), \quad {\boldsymbol C} \stackrel{\text{def}}{=} P_k^2({\boldsymbol H}; \lambda^{\pm}),\\
\text{and} \quad {\boldsymbol D} \stackrel{\text{def}}{=} -2 {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda^{\pm})
\end{gathered}
\end{equation}
and let $y_k$ be the quadratic form given by
\begin{equation}\label{eq: norm_with_noise1}
y_k \stackrel{\text{def}}{=} {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} + \tfrac{1}{n} {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} + \tfrac{1}{n^2} {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}.
\end{equation}
Observe that the value $y_k$ is simply $\|\nabla f({\boldsymbol x}_k)\|^2$ in \eqref{eq:grad_optimality_cond_app} with $\lambda_{{\boldsymbol H}}^{\pm}$ replaced with $\lambda^{\pm}$.
\begin{proposition} \label{proposition:conditional} Suppose the matrix ${\boldsymbol A}$ and vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ satisfy Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density}. Let $P_k$ be the $k$-degree polynomial defined in \eqref{eq:recursive_noise_poly}. Using the notation in \eqref{eq:blah_10} and \eqref{eq: norm_with_noise1}, the following holds for any $\varepsilon > 0$
\begin{equation} \begin{aligned} \label{eq:conditional}
\Pr \big ( | y_k - \big [ R^2 \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}}{d} \big ) &+ \tilde{R}^2 \text{ \rm tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big )\big ] | > \varepsilon \, \big | \, {\boldsymbol H} \big ) \\
&\le \tfrac{1}{\varepsilon^2} \left ( \tfrac{C-R^4}{d} \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}^2}{d} \big ) + \tfrac{\tilde{C}-\tilde{R}^4}{n} \text{ \rm tr} \big ( \tfrac{({\boldsymbol C} {\boldsymbol H})^2}{n} \big ) + \tfrac{ R^2 \tilde{R}^2}{n} \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ] \right ).
\end{aligned} \end{equation}
Without loss of generality, we assume that the constants $C$ and $\widetilde{C}$ are large enough such that $C > 3 R^4$ and $\widetilde{C} > 3 \widetilde{R}^4$.
\end{proposition}
\begin{proof} We can write any quadratic form as ${\boldsymbol w}^T {\boldsymbol F} {\boldsymbol z} = \sum_{i,j} w_i z_j F_{ij}$. Expanding the quadratic forms, the following holds
\begin{align}
{\mathbb E}\,[y_k \, | \, {\boldsymbol H}] = {\mathbb E}\,[{\boldsymbol u}^T{\boldsymbol B} {\boldsymbol u} \, &| \, {\boldsymbol H}] + \tfrac{1}{n} {\mathbb E}\,[ {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H}] + \tfrac{1}{n^2} {\mathbb E}\,[ {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H}]\\
\text{(ind. of ${\boldsymbol \eta}$ and ${\boldsymbol u}$, ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$)} \quad &= {\mathbb E}\, \big [ \sum_{i,j} u_i u_j B_{ij} \, | \, {\boldsymbol H} \big ] + \tfrac{1}{n^2} {\mathbb E}\, \big [ \sum_{i,j} \eta_i \eta_j ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij} \, | \, {\boldsymbol H} \big ] \\
\text{ (isotropic prop. of ${\boldsymbol \eta}$ and ${\boldsymbol u}$)} \quad &= R^2 \cdot \sum_i \tfrac{B_{ii}}{d} + \widetilde{R}^2 \cdot \sum_i \tfrac{({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}}{n^2}\\
&= R^2 \cdot \tfrac{\text{tr}({\boldsymbol B})}{d} + \widetilde{R}^2 \cdot \tfrac{\text{tr}({\boldsymbol C} {\boldsymbol H})}{n}.
\end{align}
In the last equality, we used that $\text{tr}({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T) = \text{tr}({\boldsymbol C} {\boldsymbol A}^T {\boldsymbol A}) = n \cdot \text{tr}({\boldsymbol C} {\boldsymbol H})$.
To prove \eqref{eq:conditional}, we will use Chebyshev's inequality; hence we need to compute the $\text{Var} \big ( y_k | {\boldsymbol H} \big ) = \mathbb{E} \big [ y^2_k | {\boldsymbol H} \big ] - \big (\mathbb{E} [y_k | {\boldsymbol H}] \big )^2$. First, a simple computation yields that
\begin{equation} \label{eq:variance_noise_11}
\big ( {\mathbb E}\,[ y_k | {\boldsymbol H} ] \big )^2 = \underbrace{\big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2}_{(i)} + \underbrace{ \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]^2}_{(ii)} + \underbrace{2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]}_{(iii)}.
\end{equation}
Next, we compute ${\mathbb E}\,[y^2_k | {\boldsymbol H}]$. By expanding out the terms in \eqref{eq: norm_with_noise1}, we get the following
\begin{equation} \begin{aligned} \label{eq:variance_noise_22}
{\mathbb E}\,[y^2_k | {\boldsymbol H}] &= \underbrace{{\mathbb E}\, [ ({\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u})^2 | {\boldsymbol H} ]}_{(a)} + \underbrace{ {\mathbb E}\, \big [ \left ( \tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \right )^2 | {\boldsymbol H} \big ] }_{(b)} + \underbrace{ {\mathbb E}\, \big [ \tfrac{2 {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \, | {\boldsymbol H} \big ] }_{(c)} \\
& \qquad + \underbrace{ {\mathbb E}\, \big [ \left ( \tfrac{{\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}}{n} \right )^2 | {\boldsymbol H} \big ] }_{(d)} + \underbrace{ {\mathbb E}\, \big [ 2 \left ( {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \right ) \cdot \tfrac{{\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}}{n} \, | {\boldsymbol H} \big ]}_{(e)}.
\end{aligned} \end{equation}
To compute the variance of $y_k$, we take \eqref{eq:variance_noise_22} and subtract \eqref{eq:variance_noise_11}. Since this is quite a long expression, we will match up terms and compute these terms individually. First consider the terms (a) and (i) in equations~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11} respectively. By expanding out the square, we get
\begin{align*}
\text{Var}({\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} | {\boldsymbol H}) = {\mathbb E}\, \big [ \left ( {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \right )^2 | {\boldsymbol H} \big ] - \big [\tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2= \sum_{i,j,k,\ell} {\mathbb E}\,[u_i u_j u_k u_{\ell}] B_{ij} B_{k \ell} - \big [\tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ]^2.
\end{align*}
Each index must appear exactly twice in the above expression for its contribution to be nonzero, since ${\mathbb E}\,[u_i^2] = \tfrac{R^2}{d}$ and ${\mathbb E}\,[{\boldsymbol u}] = \bm{0}$. There are four possible ways in which this can happen: $\{i = j = k = \ell\}$, $\{i = j, k = \ell, k \neq i\}$, $\{i = k, j = \ell, i \neq j\}$, or $\{i = \ell, j = k, i \neq j\}$. By the symmetry of the ${\boldsymbol B}$ matrix, the last two cases are identical. Noting that ${\mathbb E}\,[u_i^4] \le \tfrac{C}{d^2}$ and ${\mathbb E}\,[u_i^2] = \tfrac{R^2}{d}$, we consequently get the following expression for the variance
\begin{equation} \begin{aligned} \label{eq:blah_23}
\text{Var}({\boldsymbol u}^T &{\boldsymbol B} {\boldsymbol u} \, | \, {\boldsymbol H}) = \sum_{i} {\mathbb E}\,[u_i^4] \cdot B_{ii}^2 + \sum_{i \neq j} {\mathbb E}\,[u_i^2] \cdot {\mathbb E}\,[u_j^2] \cdot \left (B_{ii} B_{jj} + 2 B_{ij}^2 \right )- \tfrac{R^4}{d^2} [\text{tr}({\boldsymbol B})]^2\\
&\le \frac{C-R^4}{d^2} \cdot \sum_i B_{ii}^2 + \frac{2R^4}{d^2} \sum_{i \neq j} B_{ij}^2 + \frac{R^4}{d^2} \big ( \sum_i B_{ii}^2 + \sum_{i \neq j} B_{ii} B_{jj} - [\text{tr}({\boldsymbol B})]^2 \big )\\
&= \frac{C-R^4}{d^2} \cdot \sum_{i} B_{ii}^2 + \frac{2R^4}{d^2} \sum_{i \neq j} B_{ij}^2 \\
&\le \frac{C-R^4}{d^2} \cdot \big (\sum_{i} B_{ii}^2 + \sum_{i \neq j} B_{ij}^2 \big ) = \frac{C-R^4}{d} \cdot \left [ \frac{ \text{tr}({\boldsymbol B}^2)}{d} \right ].
\end{aligned} \end{equation}
In the second equality, we used that $\sum_i B_{ii}^2 + \sum_{i \neq j} B_{ii} B_{jj} = [\text{tr}({\boldsymbol B})]^2$ and in the second inequality we can without loss of generality choose $C$ so that $C > 3R^4$. Finally, we used that $\sum_i B_{ii}^2 + \sum_{i \neq j} B_{ij}^2 = \text{tr}({\boldsymbol B}^2)$.
Next, we consider the terms (b) and (ii) in equations~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11} respectively.
Similar to the previous case, by expanding out the square, we get the following
\begin{align*}
\text{Var} \big (\tfrac{{\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta}}{n^2} \, | \, {\boldsymbol H}\big ) = {\mathbb E}\, \big [ \frac{1}{n^4} \sum_{i,j,k,\ell} \eta_i \eta_j \eta_k \eta_{\ell} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{k \ell} \, | \, {\boldsymbol H} \big ] - \big [\tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]^2.
\end{align*}
Because of independence, the isotropic variance ${\mathbb E}\,[\eta_i^2] =\widetilde{R}^2$, and the zero mean ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$, each index must appear exactly twice in the above expression in order for its contribution to be nonzero. There are four possible ways in which this can happen: $\{i = j= k = \ell\}, \{i = j, k = \ell, k \neq i\}, \{ i = k, j = \ell, i \neq j \}$, or $\{i = \ell, j = k, i \neq j\}$. As before, we have the following expression for the variance
\begin{equation}
\begin{aligned} \label{eq:noisy_GD_blah1}
\text{Var} &\big ( \tfrac{1}{n^2} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H} \big ) \le \tfrac{\widetilde{C}}{n^4} \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \tfrac{\widetilde{R}^4}{n^4} \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj}\\
& \qquad \qquad \qquad \qquad + \tfrac{2 \widetilde{R}^4}{n^4} \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij}^2 - \tfrac{\widetilde{R}^4}{n^2} [ \text{tr}({\boldsymbol C} {\boldsymbol H}) ]^2\\
&= \tfrac{\widetilde{C}-\widetilde{R}^4}{n^4} \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \tfrac{\widetilde{R}^4}{n^4} \big [ \big ( \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj} \big ) \big ] \\
& \qquad \qquad \quad + \tfrac{2 \tilde{R}^4}{n^4 } \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ij}^2 - \tfrac{\tilde{R}^4}{n^2} [ \text{tr}({\boldsymbol C} {\boldsymbol H}) ]^2\\
&\le \tfrac{\widetilde{C}-\widetilde{R}^4}{n^4} \big [ \sum_i ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{ii}^2 + \sum_{i \neq j} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)^2_{ij} \big ] = \tfrac{\widetilde{C}-\widetilde{R}^4}{n} \cdot \big [ \tfrac{\text{tr}( ({\boldsymbol C} {\boldsymbol H})^2 )}{n} \big ].
\end{aligned}
\end{equation}
Here we can without loss of generality choose $\widetilde{C}$ so that $\widetilde{C} > 3 \widetilde{R}^4$. Next, we compare (c) and (iii) in equation~\eqref{eq:variance_noise_22} and \eqref{eq:variance_noise_11}, respectively. We begin by expanding out (c) in equation~\eqref{eq:variance_noise_22} which yields
\begin{align*}
{\mathbb E}\, \big [ \tfrac{2}{n^2} \cdot {\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, {\boldsymbol H} \big ]= {\mathbb E}\, \big [ \tfrac{2}{n^2} \big ( \sum_{i,j} u_i B_{ij} u_j \big ) \big ( \sum_{k, \ell} \eta_k ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{k \ell} \eta_\ell \big ) \, | \, {\boldsymbol H} \big ].
\end{align*}
The only terms which contribute are when $i = j$ and $k = \ell$. Therefore, we deduce the following
\begin{equation} \begin{aligned} \label{eq:blah_20}
\tfrac{2}{n^2} \cdot {\mathbb E}\, \big [{\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \, | \, & {\boldsymbol H} \big ] - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]\\
&= \tfrac{2\widetilde{R}^2 R^2}{n^2 d} \sum_{i,j} B_{ii} ({\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T)_{jj} - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\tilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ]\\
&= \tfrac{2\widetilde{R}^2 R^2}{n d} \big [ \text{tr}({\boldsymbol B}) \text{tr}({\boldsymbol C} {\boldsymbol H}) \big ] - 2 \big [ \tfrac{R^2 \text{tr}({\boldsymbol B})}{d} \big ] \cdot \big [ \tfrac{\widetilde{R}^2 \text{tr}({\boldsymbol C} {\boldsymbol H})}{n} \big ] = 0.
\end{aligned} \end{equation}
We have now used all of the terms in \eqref{eq:variance_noise_11}, so it remains to show that the terms (d) and (e) in \eqref{eq:variance_noise_22} themselves vanish as $d \to \infty$. Again expanding the term (d), we get
\begin{equation}
{\mathbb E}\, \big [ \tfrac{1}{n^2} \big ( {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \big )^2 \, | \, {\boldsymbol H} \big ] = {\mathbb E}\, \big [ \tfrac{1}{n^2} \big ( \sum_{i,j} u_i ({\boldsymbol D} {\boldsymbol A}^T)_{ij} \eta_j \big )^2 \, | \, {\boldsymbol H} \big ].
\end{equation}
By independence and isotropic variance of ${\boldsymbol u}$ and ${\boldsymbol \eta}$, the only terms which remain after taking expectations are the ones with $u_i^2$ and $\eta_j^2$ terms. Therefore, we deduce \begin{equation}
\begin{aligned} \label{eq:GD_noisy_blah_22}
\tfrac{1}{n^2} {\mathbb E}\, \big [ \big ( {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta} \big )^2 \, | \, {\boldsymbol H} \big ] &= \tfrac{1}{n^2} \sum_{i,j} {\mathbb E}\, \big [u_i^2 \big ] \cdot {\mathbb E}\,[ \eta_j^2 ] \cdot ({\boldsymbol D} {\boldsymbol A}^T)_{ij}^2 = \tfrac{ R^2 \widetilde{R}^2}{n^2 d} \sum_{i,j} ({\boldsymbol D} {\boldsymbol A}^T)_{ij}^2\\
&= \tfrac{ R^2 \widetilde{R}^2}{n} \cdot \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ].
\end{aligned}
\end{equation}
The only term which remains in \eqref{eq:variance_noise_22} is (e). Since ${\mathbb E}\,[{\boldsymbol \eta}] = \bm{0}$, the term ${\boldsymbol u}^T {\boldsymbol B} {\boldsymbol u} \cdot {\boldsymbol u}^T\tfrac{{\boldsymbol D} {\boldsymbol A}^T }{n} {\boldsymbol \eta}$ contributes nothing to the expectation. Similarly, since ${\mathbb E}\,[{\boldsymbol u}] = \bm{0}$, the term $\tfrac{1}{n^3} \cdot {\boldsymbol \eta}^T {\boldsymbol A} {\boldsymbol C} {\boldsymbol A}^T {\boldsymbol \eta} \cdot {\boldsymbol u}^T {\boldsymbol D} {\boldsymbol A}^T {\boldsymbol \eta}$ is also zero in expectation.
Putting all the quantities \eqref{eq:blah_23}, \eqref{eq:noisy_GD_blah1}, \eqref{eq:blah_20}, and \eqref{eq:GD_noisy_blah_22} together with \eqref{eq:variance_noise_11} and \eqref{eq:variance_noise_22}, a straightforward application of Chebyshev's inequality yields the result.
\end{proof}
The only difference between $\|\nabla f({\boldsymbol x}_k)\|^2$ and $y_k$ is that the coefficients of the polynomials in $\|\nabla f({\boldsymbol x}_k)\|^2$ depend continuously on $\lambda^{\pm}_{{\boldsymbol H}}$, while the coefficients of $y_k$ depend on $\lambda^{\pm}$. The polynomials $P_k$ and $Q_k$ together with Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} ensure that all the conditions of Lemma~\ref{proposition: remove_norm} hold, by setting ${\boldsymbol w}$ and ${\boldsymbol v}$ to combinations of ${\boldsymbol u}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ and the polynomials to ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$. Therefore $| \|\nabla f({\boldsymbol x}_k)\|^2 - y_k | \Prto[d] 0$, so we can replace $y_k$ with $\|\nabla f({\boldsymbol x}_k)\|^2$. The proof of Proposition~\ref{proposition:conditional} shows that, conditioned on ${\boldsymbol H}$, the variance $\text{Var}(\|\nabla f({\boldsymbol x}_k)\|^2 \, | \, {\boldsymbol H})$ is $\mathcal{O}( \tfrac{1}{d})$ and
\begin{equation} \label{eq:something_3_1} {\mathbb E}\,[ \|\nabla f({\boldsymbol x}_k)\|^2 | {\boldsymbol H}] = R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ).
\end{equation}
Consequently, conditioned on ${\boldsymbol H}$, the squared norm of the gradient is roughly \eqref{eq:something_3_1}. In view of this, it suffices to understand the expected traces of polynomials in ${\boldsymbol H}$. Random matrix theory studies the convergence properties of the limiting distributions of high-dimensional random matrices, particularly of the empirical spectral measure (ESM). An important tool, derived from Assumption~\ref{assumption: spectral_density}, which links the ESM and the expected trace to the moments of the measure $\mu$, is given below.
\begin{proposition}[Convergence of ESM] \label{proposition: moments} Let $\widetilde{P}_k$ be any $k$-degree polynomial. Under Assumption~\ref{assumption: spectral_density}, the following is true
\begin{align*}
\tfrac{1}{d}\text{\rm tr}\, \widetilde{P}_k({\boldsymbol H}; \lambda^{\pm}) = \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} \Prto[d] \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \mathop{}\!\mathrm{d}\mu\,.
\end{align*}
\end{proposition}
\begin{proof}
For sufficiently large $d$, Assumption~\ref{assumption: spectral_density} says $\Pr(\lambda_{{\boldsymbol H}}^+ > \lambda^+ + \hat{\varepsilon}) \le \frac{\delta}{2}$. Define the event $\mathcal{S} = \{ \lambda_{{\boldsymbol H}}^+ \le \lambda^+ + \hat{\varepsilon} \}$. We construct a bounded, continuous function $h$ by
\[h(\lambda) = \begin{cases}
\widetilde{P}_k(0; \lambda^{\pm}), & \text{if $\lambda < 0$}\\
\widetilde{P}_k(\lambda; \lambda^{\pm}), & \text{if $0 \le \lambda \le \lambda^+ + \hat{\varepsilon}$}\\
\widetilde{P}_k(\lambda^+ + \hat{\varepsilon}; \lambda^{\pm}), & \text{otherwise}.
\end{cases}\]
Because the function $h$ is bounded and continuous, Assumption~\ref{assumption: spectral_density} guarantees that
\begin{equation} \label{eq:rand_feature_blah_4}
\Pr \big ( \big |\int h(\lambda) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu \, \big | > \varepsilon \big ) \le \delta.
\end{equation}
Depending on whether $\mathcal{S}$ has occurred, we have for all sufficiently large $d$
\begin{align}
\Pr \big ( \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int &\widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \big ) \nonumber\\
&= \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \nonumber \\
& \qquad + \Pr \big (\mathcal{S}^c \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \nonumber \\
&\le \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) + \tfrac{\delta}{2}. \label{eq: rand_feature_blah_3}
\end{align}
In the last line, the probability $\Pr \big (\mathcal{S}^c \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \le \Pr(\mathcal{S}^c) \le \tfrac{\delta}{2}$ for large $d$. Hence, it suffices to consider the first term in \eqref{eq: rand_feature_blah_3}. By construction, on the event $\mathcal{S}$ we have $h(\lambda) = \widetilde{P}_k(\lambda; \lambda^{\pm})$ on the support of $\mu_{{\boldsymbol H}}$. For sufficiently large $d$, equation \eqref{eq:rand_feature_blah_4} yields
\[ \Pr \big ( \mathcal{S} \cap \{ \big | \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int \widetilde{P}_k(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \} \big ) \le \Pr \big ( \big | \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu_{{\boldsymbol H}} - \int h(\lambda) \, \mathop{}\!\mathrm{d}\mu \big | > \varepsilon \big ) \le \frac{\delta}{2}.\]
The result follows after combining with \eqref{eq: rand_feature_blah_3}.
\end{proof}
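As a concrete illustration of Proposition~\ref{proposition: moments} (a computation of ours, not needed in the sequel), take $\widetilde{P}_2(\lambda) = \lambda^2$. When $\mu$ is the Mar\v{c}enko-Pastur measure with ratio $r$ and variance parameter $\sigma^2$, the standard moment computation gives
\begin{equation*}
\tfrac{1}{d}\text{\rm tr}({\boldsymbol H}^2) \Prto[d] \int \lambda^2 \, \mathop{}\!\mathrm{d}\mu_{\mathrm{MP}} = \sigma^4(1+r),
\end{equation*}
so the averaged traces appearing in the gradient formulas reduce to explicit functions of $\sigma^2$ and $r$.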
Now that we have described the main components of our argument, we present a preliminary concentration result for the gradient.
\begin{proposition}\label{thm: probability_convergence} Suppose the vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ and the matrix ${\boldsymbol A}$ satisfy Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density}, respectively.
The following holds
\begin{align} \label{eq:grad_convergence_prob}
\big | \|\nabla f({\boldsymbol x}_k)\|^2 - \big ( \underbrace{\textcolor{teal}{R^2} \textcolor{black}{\tfrac{1}{d} \text{\rm tr}({\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda^{\pm}) )}}_{\text{signal}} + \underbrace{\textcolor{purple}{\widetilde{R}^2} \textcolor{black}{\tfrac{1}{n} \text{\rm tr}({\boldsymbol H} P_k^2({\boldsymbol H} ; \lambda^{\pm}))}}_{\text{noise}} \big ) \big | \Prto[d] 0.
\end{align}
\end{proposition}
\begin{proof}
Recall the definitions in \eqref{eq:blah_10} and \eqref{eq: norm_with_noise1} and equation \eqref{eq:grad_optimality_cond_app}. We note that the only difference between $\|\nabla f({\boldsymbol x}_k)\|^2$ and $y_k$ is that the coefficients of the polynomials in $\|\nabla f({\boldsymbol x}_k)\|^2$ depend continuously on $\lambda_{{\boldsymbol H}}^{\pm}$ while the coefficients in $y_k$ depend on $\lambda^{\pm}$. The polynomials $P_k$ and $Q_k$ together with Assumptions~\ref{assumption: Vector} and~\ref{assumption: spectral_density} ensure that all the conditions of Lemma~\ref{proposition: remove_norm} hold by setting ${\boldsymbol w}$ and ${\boldsymbol v}$ to combinations of ${\boldsymbol u}$ and $\tfrac{1}{n} {\boldsymbol A}^T {\boldsymbol \eta}$ and the polynomials to ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$. Therefore we have $| \|\nabla f({\boldsymbol x}_k)\|^2 - y_k | \Prto[d] 0$, so it suffices to prove \eqref{eq:grad_convergence_prob} with $\|\nabla f({\boldsymbol x}_k)\|^2$ replaced by $y_k$.
Fix constants $\varepsilon, \delta > 0$. Proposition~\ref{proposition: moments} guarantees convergence in probability of any expected trace to a constant which depends on the polynomial and the deterministic measure $\mu$. This together with the definitions of ${\boldsymbol B}$, ${\boldsymbol C}$, and ${\boldsymbol D}$ yield for sufficiently large $d$
\begin{equation}\label{eq:bound_traces}
\begin{gathered}
\Pr \big ( \big | \tfrac{\text{tr}({\boldsymbol B}^2)}{d} \big | > M_1 \stackrel{\text{def}}{=} \varepsilon + \int \lambda^4 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6},\\
\Pr \big ( \big | \tfrac{\text{tr}(({\boldsymbol C} {\boldsymbol H})^2)}{n} \big | > M_2 \stackrel{\text{def}}{=} \varepsilon + r \int \lambda^2 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6},\\
\text{and} \quad \Pr \big ( \big | \tfrac{\text{tr}({\boldsymbol D}^2 {\boldsymbol H})}{d} \big | > M_3 \stackrel{\text{def}}{=} \varepsilon + 4\int \lambda^3 P_k^4(\lambda; \lambda^{\pm}) \, \mathop{}\!\mathrm{d}\mu \big ) \le \tfrac{\delta}{6}.
\end{gathered}
\end{equation}
We define the set $\mathcal{S}$ for which the expected traces of the random matrices are bounded, namely,
\[ \mathcal{S} = \big \{ \big | \tfrac{\text{tr}({\boldsymbol B}^2)}{d} \big | \le M_1 \big \} \cap \big \{ \big | \tfrac{\text{tr}(({\boldsymbol C} {\boldsymbol H})^2)}{n} \big | \le M_2 \big \} \cap \big \{ \big | \tfrac{\text{tr}({\boldsymbol D}^2 {\boldsymbol H})}{d} \big | \le M_3 \big \}, \]
and we observe because of \eqref{eq:bound_traces} that $\Pr(\mathcal{S}^c) \le \frac{\delta}{2}$. The law of total probability yields the following
\begin{align}
\Pr \big ( \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) &+ \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big ) = \Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
& \qquad \qquad + \Pr \big ( \mathcal{S}^c \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
&\le \Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [R^2 \text{tr} \big (\tfrac{{\boldsymbol B}}{d} \big ) + \widetilde{R}^2 \text{tr} \big (\tfrac{{\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) + \tfrac{\delta}{2}. \label{eq:blah_30}
\end{align}
Hence it suffices to bound the first term in \eqref{eq:blah_30}. The idea is to condition on the matrix ${\boldsymbol H}$ and apply Proposition~\ref{proposition:conditional}. The law of total expectation yields
\begin{align}
\Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [\text{tr} &\big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \nonumber \\
\text{(conditioned on ${\boldsymbol H}$)} \, \, \, &= {\mathbb E}\, \big [ 1_{\mathcal{S}} \Pr \big ( \big | y_k - \big [ \text{tr} \big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon | {\boldsymbol H} \big ) \big ] \nonumber \\
\text{(Proposition~\ref{proposition:conditional})} \, \, \, &\le \tfrac{1}{\varepsilon^2} {\mathbb E}\, \big [ 1_{\mathcal{S}} \left ( \tfrac{C-R^4}{d} \text{ \rm tr} \big ( \tfrac{{\boldsymbol B}^2}{d} \big ) + \tfrac{\widetilde{C}-\widetilde{R}^4}{n} \text{ \rm tr} \big ( \tfrac{({\boldsymbol C} {\boldsymbol H})^2}{n} \big ) + \tfrac{ R^2 \widetilde{R}^2}{n} \big [ \tfrac{\text{tr}( {\boldsymbol D}^2 {\boldsymbol H})}{d} \big ] \right ) \big ]. \label{eq:blah_31}
\end{align}
Here $1_{\mathcal{S}}$ denotes the indicator of the event $\mathcal{S}$, that is, $1_{\mathcal{S}}(\omega) = 1$ if $\omega \in \mathcal{S}$ and $0$ otherwise. By construction of the event $\mathcal{S}$, each of the expected traces in \eqref{eq:blah_31} is bounded and therefore, we deduce that
\[\Pr \big ( \mathcal{S} \cap \big \{ \big | y_k - \big [\text{tr} \big (\tfrac{R^2 {\boldsymbol B}}{d} \big ) + \text{tr} \big (\tfrac{\widetilde{R}^2 {\boldsymbol C} {\boldsymbol H}}{n} \big ) \big ] \big | > \varepsilon \big \} \big ) \le \tfrac{1}{\varepsilon^2} \cdot \mathcal{O} \big ( \tfrac{1}{d} \big ). \]
By choosing $d$ sufficiently large, we can make the right hand side smaller than $\tfrac{\delta}{2}$. The result immediately follows from \eqref{eq:blah_30}.
\end{proof}
Proposition~\ref{thm: probability_convergence} reveals that for high-dimensional data the squared norm of the gradient $\|\nabla f({\boldsymbol x}_k)\|^2$ concentrates around an average of polynomials evaluated at the eigenvalues of the matrix ${\boldsymbol H}$. Every eigenvalue, not just the largest or smallest, appears in the formula \eqref{eq:grad_convergence_prob}. This means that first-order methods indeed see all of the eigenvalues of the matrix ${\boldsymbol H}$, not just the top or bottom one. However, the trace terms are still random quantities due to their dependency on the random matrix. We remove this randomness and complete the proof of Theorem~\ref{thm: concentration_main} after noting that the moments of the empirical spectral measure converge in probability to a deterministic quantity, denoted by $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$.
\begin{proof}[Proof of Theorem~\ref{thm: concentration_main}]
Propositions~\ref{proposition: moments} and \ref{thm: probability_convergence} yield the result.
\end{proof}
\subsection{Halting time converges to a constant} \label{apx: halting_time_deterministic}
The concentration of the norm of the gradient in \eqref{eq: something_1} gives a candidate for the limiting value of the halting time $T_{\varepsilon}$. More precisely, we define this candidate $\tau_{\varepsilon}$ from $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ and we recall the halting time $T_{\varepsilon}$:
\begin{align}
\tau_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \le \varepsilon\} \quad \text{and} \quad T_{\varepsilon} \stackrel{\text{def}}{=} \inf \, \{ k > 0 : \|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon\}\,.
\end{align}
We note that the deterministic value $\tau_{\varepsilon}$ is, by definition, the average-case complexity of the algorithm, whereas $T_{\varepsilon}$ is a random variable depending on randomness from the data, noise, signal, and initialization.
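For intuition, consider the stylized case (our illustration, not part of the formal development) where the limiting gradient norm decays geometrically: writing $\xi_k \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$, suppose $\xi_k = \xi_0 \rho^{2k}$ for some $\rho \in (0,1)$, as happens for linearly convergent methods. Then, for any $\varepsilon < \xi_0$,
\begin{equation*}
\tau_{\varepsilon} = \Big \lceil \frac{\log(\xi_0/\varepsilon)}{2 \log(1/\rho)} \Big \rceil,
\end{equation*}
so the average-case complexity scales logarithmically in $1/\varepsilon$.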
This leads to our main result, which states the convergence (in probability) of the halting time to a deterministic value. We begin by showing that $\tau_{\varepsilon}$ is well-defined.
\begin{lemma}[$\tau_{\varepsilon}$ is well-defined] \label{lem: tau_finite}Under the assumptions of Proposition~\ref{thm: probability_convergence},
the iterates of a convergent algorithm satisfy $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \underset{k \to \infty}{\to} 0$.
\end{lemma}
\begin{proof}
Both $\lambda^2 P_k^2(\lambda; \lambda^{\pm}) \to 0$ and $\lambda P_k^2(\lambda; \lambda^{\pm}) \to 0$ as $k \to \infty$, and these polynomials are uniformly bounded in $k$ for each $\lambda \in [\lambda^-, \lambda^+]$ (see Lemmas~\ref{lem: convergent_algorithm} and \ref{lem: convergent_bounded}). The result then follows from the dominated convergence theorem.
\end{proof}
With our candidate for the limiting halting time $\tau_{\varepsilon}$ well-defined, we show that the number of iterations until $\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon$ equals $\tau_{\varepsilon}$ for high-dimensional data. We state a more general version of Theorem~\ref{thm: Halting_time_main}.
\begin{theorem}[Halting time universality] \label{thm: Halting_time}
Provided that $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\, \neq \varepsilon$ for all $k$, the probability of reaching $\varepsilon$ in a pre-determined number of steps satisfies
\[\lim_{d \to \infty} \Pr(T_{\varepsilon} = \tau_{\varepsilon} ) = 1.\]
If the constant $\varepsilon = \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ for some $k$, then the following holds
\[ \lim_{d \to \infty} \Pr(T_{\varepsilon} \in [ \tau_{\varepsilon}, \tau_{\varepsilon} + M_{\varepsilon}]) = 1, \quad
\text{where $M_{\varepsilon} \stackrel{\text{def}}{=} \inf \{ k-\tau_{\varepsilon} > 0 \, | \, \xi_k < \varepsilon \}$.}\]
\end{theorem}
\begin{proof}
To simplify notation, we define $\xi_k \stackrel{\text{def}}{=} \! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$. First, we consider the case where $\varepsilon \neq \xi_k$ for all $k$. We are interested in bounding the following probabilities
\begin{equation} \Pr(T_{\varepsilon} \neq \tau_{\varepsilon}) = \Pr(T_{\varepsilon} < \tau_{\varepsilon} ) + \Pr(T_{\varepsilon} > \tau_{\varepsilon}). \label{eq: Halting_time_1} \end{equation}
We bound each of these probabilities separately; first consider $\Pr(T_{\varepsilon} < \tau_{\varepsilon})$ in \eqref{eq: Halting_time_1}. For $\tau_{\varepsilon} = 0$, we note that $\Pr(T_{\varepsilon} < \tau_{\varepsilon}) = 0$ since $T_{\varepsilon} \ge 0$. So we can assume that $\tau_{\varepsilon} > 0$.
On the event $\{T_{\varepsilon} < \tau_{\varepsilon}\}$ we have $T_{\varepsilon} \le \tau_{\varepsilon} -1$, so we obtain
\begin{equation} \label{eq: Halting_time_3} \Pr(T_{\varepsilon} < \tau_{\varepsilon}) = \Pr \Big ( \bigcup_{k=0}^{\tau_{\varepsilon}-1} \{T_{\varepsilon} = k\} \Big ) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(T_{\varepsilon} = k) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon) .\end{equation}
Now we bound the probabilities $\Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon)$. As $\tau_{\varepsilon}$ is the first time $\xi_k$ falls below $\varepsilon$, we conclude that $ \xi_{\tau_\varepsilon}< \varepsilon < \xi_{\tau_{\varepsilon}-1}, \xi_{\tau_{\varepsilon}-2}, \hdots, \xi_0$, where we used that $\varepsilon \neq \xi_k$ for any $k$. Next we define the constant $0 < \delta \stackrel{\text{def}}{=} \displaystyle \min_{ 0 \le k \le \tau_{\varepsilon}} \, \{ |\varepsilon-\xi_k| \}$ and we observe that $\delta \le |\varepsilon - \xi_k| = \xi_k- \varepsilon$ for all $k < \tau_{\varepsilon}$. Fix a constant $\hat{\varepsilon} > 0$ and an index $k$.
Theorem~\ref{thm: concentration_main} says that by making $d(k)$ sufficiently large
\begin{align*} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \leq \varepsilon) \le \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 < \xi_{k} - \delta)
\le \frac{\hat{\varepsilon}}{\tau_{\varepsilon}}.
\end{align*}
Here we used that
$\tau_{\varepsilon}$ is finite for every $\varepsilon >0$ (Lemma~\ref{lem: tau_finite}). Set $D \stackrel{\text{def}}{=}{} \max\{d(0), d(1), d(2), \hdots, d(\tau_{\varepsilon}-1)\}$. Then for all $d > D$, we have from \eqref{eq: Halting_time_3} the following
\[ \Pr(T_{\varepsilon} < \tau_{\varepsilon}) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \Pr(\|\nabla f({\boldsymbol x}_k)\|^2 \le \varepsilon) \le \sum_{k=0}^{\tau_{\varepsilon}-1} \frac{\hat{\varepsilon}}{\tau_{\varepsilon}} = \hat{\varepsilon}. \]
Lastly, we bound $\Pr(T_{\varepsilon} > \tau_{\varepsilon})$. The idea is similar to the other direction. Let $\delta$ be defined as above.
Therefore, again by Theorem~\ref{thm: concentration_main}, we conclude for sufficiently large $d$
\begin{align*}
\Pr(T_{\varepsilon} > \tau_{\varepsilon}) &\le \Pr( \|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}}) \|^2 > \varepsilon)\le \Pr(\|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}}) \|^2 - \xi_{\tau_{\varepsilon}} > \delta) \to 0.
\end{align*}
Indeed, we used that $ \xi_{\tau_{\varepsilon}} < \varepsilon$ and $\delta < |\varepsilon-\xi_{\tau_{\varepsilon}}| = \varepsilon - \xi_{\tau_{\varepsilon}}$. This completes the proof when $\varepsilon \neq \xi_k$.
Next, we consider the second case where $\xi_{k} = \varepsilon$. Note that $M_{\varepsilon} < \infty$ for all $\varepsilon$ because $\displaystyle \lim_{k \to \infty} \xi_k = 0$. In this setting, we are interested in bounding
\[ \Pr( T_{\varepsilon} \not \in [\tau_{\varepsilon}, \tau_{\varepsilon} + M_{\varepsilon}]) = \Pr(T_{\varepsilon} < \tau_{\varepsilon}) + \Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}).\]
The arguments are similar to the previous setting. Replacing the definition of $\delta$ above with $\displaystyle \delta \stackrel{\text{def}}{=} \min_{0 \le k \le \tau_{\varepsilon}-1} \{|\varepsilon - \xi_k| \}$ yields that $\delta > 0$ since $\varepsilon < \xi_{\tau_{\varepsilon}-1}, \xi_{\tau_{\varepsilon}-2}, \hdots, \xi_0$. With this choice of $\delta$, the previous argument holds and we deduce that $\Pr(T_{\varepsilon} < \tau_{\varepsilon}) \to 0$. Next we show that $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \to 0$. As before, we know that $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \le \Pr( \|\nabla f({\boldsymbol x}_{\tau_{\varepsilon}+M_{\varepsilon}}) \|^2 > \varepsilon)$. By definition of $M_{\varepsilon}$, we have that $\varepsilon > \xi_{\tau_{\varepsilon}+M_{\varepsilon}}$. Now define $\delta \stackrel{\text{def}}{=} \varepsilon - \xi_{\tau_{\varepsilon} + M_\varepsilon} > 0$. The previous argument holds with this choice of $\delta$; therefore, one has that $\Pr(T_{\varepsilon} > \tau_{\varepsilon} + M_{\varepsilon}) \to 0$.
\end{proof}
For large models, the number of iterations to reach a nearly optimal point equals the average-case complexity; loosely speaking, $T_{\varepsilon} = \tau_{\varepsilon}$. The variability in the halting time goes to zero. Since $\tau_{\varepsilon}$ depends on the distribution of the data only through its first two moments, almost all instances of high-dimensional data have the same limit. In Tables~\ref{tab:comparison_worst_avg_cvx} and \ref{tab:comparison_worst_avg_str_cvx}, we compute the value of $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ for various models.
\subsection{Extension beyond least squares, ridge regression} \label{sec:ridge_regression}
In this section, we extend the results from Theorems~\ref{thm: concentration_main} and \ref{thm: Halting_time_main} to the ridge regression problem. We leave the proofs to the reader as they follow techniques similar to those used for the least squares problem \eqref{eq:LS_main}. We consider the ridge regression problem of the form
\begin{equation} \label{eq:ridge_regression}
\argmin_{{\boldsymbol x} \in \mathbb{R}^d} \left \{ f({\boldsymbol x}) \stackrel{\text{def}}{=} \frac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} - {\boldsymbol b}\|^2 + \frac{\gamma}{2} \|{\boldsymbol x}\|^2 \right \}, \quad \text{with ${\boldsymbol b} \stackrel{\text{def}}{=} {\boldsymbol A} \widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$\,.}
\end{equation}
As in Section~\ref{sec: problem_setting}, we will assume that ${\boldsymbol A} \in \mathbb{R}^{n \times d}$ is a (possibly random) matrix satisfying Assumption~\ref{assumption: spectral_density}, $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$ is an unobserved signal vector, and ${\boldsymbol \eta} \in \mathbb{R}^n$ is a noise vector. The constant $\gamma > 0$ is the ridge regression parameter. Unlike the least squares problem, the gradient of \eqref{eq:ridge_regression} does not decompose into a term involving ${\boldsymbol x}_0-\widetilde{{\boldsymbol x}}$ and ${\boldsymbol \eta}$. As such, we alter Assumption~\ref{assumption: Vector} by placing an independence assumption between the initialization vector ${\boldsymbol x}_0$ and the signal $\widetilde{{\boldsymbol x}}$, that is,
\begin{assumption}[Initialization, signal, and noise.]\label{assumption:ridge_vector} The initial vector ${\boldsymbol x}_0 \in \mathbb{R}^d$, the signal $\widetilde{{\boldsymbol x}} \in \mathbb{R}^d$, and noise vector ${\boldsymbol \eta} \in \mathbb{R}^n$ are independent of each other and independent of ${\boldsymbol A}$. The vectors satisfy the following conditions:
\begin{enumerate}[leftmargin=*]
\item The entries of ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ are i.i.d. random variables and there exists constants $C, \dot{R}, \widehat{R} > 0$ such that for $i = 1, \hdots, d$
\begin{equation} \begin{gathered} \label{eq:ridge_initial}
{\mathbb E}\,[{\boldsymbol x}_0] = {\mathbb E}\,[\widetilde{{\boldsymbol x}}] = 0, \quad {\mathbb E}\,[(x_0)_i^2] = \tfrac{1}{d} \dot{R}^2, \quad {\mathbb E}\,[\widetilde{x}_i^2] = \tfrac{1}{d} \widehat{R}^2,\\
{\mathbb E}\,[ (x_0)^4_i ] \le \tfrac{1}{d^2} C, \quad \text{and} \quad {\mathbb E}\,[\widetilde{x}_i^4] \le \tfrac{1}{d^2} C.
\end{gathered} \end{equation}
\item The entries of noise vector are i.i.d. random variables satisfying the following for $i = 1, \hdots, n$ and for some constants $\widetilde{C}, \widetilde{R} > 0$
\begin{equation}
{\mathbb E}\,[{\boldsymbol \eta}] = 0, \quad {\mathbb E}\,[\eta_i^2] = \widetilde{R}^2, \quad \text{and} \quad {\mathbb E}\,[\eta_i^4] \le \widetilde{C}.
\end{equation}
\end{enumerate}
\end{assumption}
The difference between Assumption~\ref{assumption: Vector} and Assumption~\ref{assumption:ridge_vector} is that \eqref{eq:ridge_initial} guarantees the initial vector ${\boldsymbol x}_0$ and the signal $\widetilde{{\boldsymbol x}}$ are independent. One relates $R^2$ to $\dot{R}^2$ and $\widehat{R}^2$ by $R^2 = \dot{R}^2 + \widehat{R}^2$. First, the gradient of \eqref{eq:ridge_regression} is
\begin{equation} \label{eq:grad_ridge_regression}
\nabla f({\boldsymbol x}) = {\boldsymbol H} ({\boldsymbol x} - \widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma {\boldsymbol x} = ({\boldsymbol H} + \gamma {\boldsymbol I}) ({\boldsymbol x}-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma \widetilde{{\boldsymbol x}} = {\boldsymbol M} ({\boldsymbol x}-\widetilde{{\boldsymbol x}}) - \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma \widetilde{{\boldsymbol x}},
\end{equation}
where the matrix ${\boldsymbol M} \stackrel{\text{def}}{=} {\boldsymbol H} + \gamma {\boldsymbol I} $ and ${\boldsymbol I}$ is the identity matrix.
Under Assumptions~\ref{assumption:ridge_vector} and \ref{assumption: spectral_density}, we derive a recurrence expression for the iterates of gradient-based algorithms analogous to Proposition~\ref{prop: polynomials_methods}.
\begin{proposition}[Prop.~\ref{prop: polynomials_methods} for ridge regression] Consider a gradient-based method with coefficients that depend continuously on $\lambda^-_{{\boldsymbol M}}$ and $\lambda^+_{{\boldsymbol M}}$. Define the sequence of polynomials $\{P_k, Q_k\}_{k = 0}^{\infty}$ recursively by
\begin{equation} \begin{gathered} \label{eq:recursive_noise_poly_ridge_regression}
P_0({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = {\boldsymbol I} \quad \text{and} \quad P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = {\boldsymbol I} - {\boldsymbol M} Q_{k}({\boldsymbol M}; \lambda^{\pm}_{{\boldsymbol M}})\\
Q_0({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \bm{0} \quad \text{and} \quad Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \sum_{i=0}^{k-1} c_{k-1,i} \big [ {\boldsymbol M} Q_i({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) - {\boldsymbol I} \big ]\,.
\end{gathered} \end{equation}
These polynomials $P_k$ and $Q_k$ are referred to as the \emph{residual} and \emph{iteration} polynomials respectively.
We express the difference between the iterate at step $k$ and $\widetilde{{\boldsymbol x}}$ in terms of these polynomials:
\begin{equation} \label{eq:recursive_noise_poly_1_ridge_regression}
{\boldsymbol x}_k - \widetilde{{\boldsymbol x}} = P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \cdot \frac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} - \gamma Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}}\,.
\end{equation}
\end{proposition}
The proof of this proposition follows the same argument as in Proposition~\ref{prop: polynomials_methods}, replacing the gradient of the least squares problem \eqref{eq:LS_main} with the gradient for the ridge regression problem \eqref{eq:grad_ridge_regression}. The polynomials $P_k$ and $Q_k$ are exactly the same as in Proposition~\ref{prop: polynomials_methods} but applied to the matrix ${\boldsymbol M}$ instead of ${\boldsymbol H}$ (see Section~\ref{sec:Ex_polynomials} for examples of the polynomials $P_k$ and $Q_k$ for various first-order algorithms). Given the resemblance to the least squares problem, it follows that one can relate the residual polynomial to the squared norm of the gradient.
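As a concrete instance (a standard special case, included here for illustration), gradient descent with step size $\alpha$ applied to \eqref{eq:ridge_regression} has
\begin{equation*}
P_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = ({\boldsymbol I} - \alpha {\boldsymbol M})^k \quad \text{and} \quad Q_k({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) = \alpha \sum_{i=0}^{k-1} ({\boldsymbol I} - \alpha {\boldsymbol M})^i,
\end{equation*}
and a telescoping argument verifies the relation $P_k = {\boldsymbol I} - {\boldsymbol M} Q_k$ from \eqref{eq:recursive_noise_poly_ridge_regression}.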
\begin{proposition}[Prop.~\ref{prop:gradient_polynomial} for ridge regression] \label{prop:gradient_polynomial_ridge_regression} Suppose the iterates $\{{\boldsymbol x}_k\}_{k=0}^\infty$ are generated from a gradient based method. Let $\{P_k\}_{k=0}^\infty$ be a sequence of polynomials defined in \eqref{eq:recursive_noise_poly_ridge_regression}. Then the following identity exists between the iterates and its residual polynomial,
\begin{equation} \begin{gathered} \label{eq:grad_optimality_cond_app_ridge_regression}
\| \nabla f({\boldsymbol x}_k) \|^2 = ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}}) + \tfrac{{\boldsymbol \eta}^T {\boldsymbol A}}{n} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + \gamma^2 \widetilde{{\boldsymbol x}}^T P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}}\\
-2({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n} + 2 \gamma ({\boldsymbol x}_0-\widetilde{{\boldsymbol x}})^T {\boldsymbol M} P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \widetilde{{\boldsymbol x}} - 2 \gamma \widetilde{{\boldsymbol x}}^T P_k^2({\boldsymbol M}; \lambda_{{\boldsymbol M}}^{\pm}) \tfrac{{\boldsymbol A}^T {\boldsymbol \eta}}{n}.
\end{gathered}
\end{equation}
\end{proposition}
As in the least squares problem, one can replace $\lambda_{{\boldsymbol M}}^{\pm}$ in the polynomials with $\lambda^{\pm} + \gamma$ since ${\boldsymbol M} = {\boldsymbol H} + \gamma {\boldsymbol I}$. Under Assumptions~\ref{assumption: spectral_density} and \ref{assumption:ridge_vector}, using the same technique as in Proposition~\ref{proposition:conditional} for the least squares problem, we derive the following.
\begin{proposition}[Prop.~\ref{thm: probability_convergence} for ridge regression] \label{thm: probability_convergence_ridge} Suppose the vectors ${\boldsymbol x}_0, \widetilde{{\boldsymbol x}},$ and ${\boldsymbol \eta}$ and the matrix ${\boldsymbol A}$ satisfy Assumptions~\ref{assumption:ridge_vector} and \ref{assumption: spectral_density}, respectively.
The following holds
\begin{align} \label{eq:grad_convergence_prob_ridge}
\big | \|\nabla f({\boldsymbol x}_k)\|^2 - \big ( \underbrace{\textcolor{teal}{\dot{R}^2} \textcolor{black}{\tfrac{1}{d} \text{\rm tr}({\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda^{\pm}) )}}_{\text{initialization}} + \underbrace{\textcolor{teal}{\widehat{R}^2} \tfrac{1}{d} \text{\rm tr}({\boldsymbol H}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})) }_{\text{signal}} + \underbrace{\textcolor{purple}{\widetilde{R}^2} \textcolor{black}{\tfrac{1}{n} \text{\rm tr}({\boldsymbol H} P_k^2({\boldsymbol M} ; \lambda^{\pm}))}}_{\text{noise}} \big ) \big | \Prto[d] 0.
\end{align}
\end{proposition}
We remark that we used the independence between ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ to obtain \eqref{eq:grad_convergence_prob_ridge}. This independence leads to two terms in the gradient corresponding to the initialization and the signal. As the polynomials ${\boldsymbol M}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})$, ${\boldsymbol H}^2 P_k^2({\boldsymbol M}; \lambda^{\pm})$, and ${\boldsymbol H} P_k^2({\boldsymbol M}; \lambda^{\pm})$ are polynomials in ${\boldsymbol H}$ (the identity ${\boldsymbol I}$ commutes with ${\boldsymbol H}$), Proposition~\ref{proposition: moments} still holds. Therefore, the equivalent to Theorem~\ref{thm: concentration_main} for ridge regression follows (recall, Theorem~\ref{thm: concentration_main_main_ridge} in Section~\ref{sec:ridge_regression_main}).
\textbf{Theorem.} \rm{(Concentration of the gradient for ridge regression)}
\textit{Under Assumptions~\ref{assumption:ridge_vector} and~\ref{assumption: spectral_density} the norm of the gradient concentrates around a deterministic value:
\begin{equation} \begin{aligned} \label{eq: something_1_main_ridge} \vspace{0.25cm}
\hspace{-0.28cm} \|\nabla f({\boldsymbol x}_k)\|^2 \Prto[d] &\textcolor{teal}{\overbrace{\dot{R}^2}^{\text{initial.}}} \! \!\! \int { \underbrace{(\lambda + \gamma)^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } + \textcolor{teal}{\overbrace{\widehat{R}^2}^{\text{signal}}} \! \!\! \int { \underbrace{\lambda^2 P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{\mathop{}\!\mathrm{d}\mu}^{\text{model}} } \\
& \quad \quad + \textcolor{purple}{\overbrace{ \widetilde{R}^2} ^{\text{noise}} } r \int { \underbrace{\lambda P_k^2(\lambda + \gamma; \lambda^{\pm})}_{\text{algorithm}}} \textcolor{mypurple}{\overbrace{ \mathop{}\!\mathrm{d}\mu}^{\text{model}} }. \end{aligned}
\end{equation}}
The equivalent to Theorem~\ref{thm: Halting_time_main} immediately follows by replacing $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$ with the right-hand side of \eqref{eq: something_1_main_ridge}.
\section{Derivation of the worst and average-case complexity} \label{sec: avg_derivations}
In this section, we derive an expression for the average-case complexity in the isotropic features model. Here the empirical spectral measure $\mu_{{\boldsymbol H}}$ converges to the Mar\v{c}enko-Pastur measure $\mu_{\mathrm{MP}}$ \eqref{eq:MP}.
The average-case complexity, $\tau_{\varepsilon}$, is controlled by the value of the expected gradient norm in \eqref{eq: something_1}. Hence to analyze the average-case rate, it suffices to derive an expression for this value, $\! \! \underset{d \rightarrow \infty}{\mathcal{E}} \! [\|\nabla f(\xx_{k})\|^2]\,$.
In light of \eqref{eq: something_1}, we must integrate the residual polynomials in Table~\ref{table:polynomials} against the Mar\v{c}enko-Pastur measure. By combining Theorem~\ref{thm: concentration_main} with the integrals derived in Appendix~\ref{apx:integral_computations}, we obtain the average-case complexities. Apart from Nesterov's accelerated method (convex), an \textit{exact} formula for the average-case rate is obtained. In the convex setting for Nesterov's method, the integral is difficult to compute directly, so we instead use the asymptotic polynomial in \eqref{eq:Bessel_asymptotic_main}. Hence for Nesterov's accelerated method (convex), we only obtain an asymptotic average-case rate for sufficiently large $k$ (see Appendix~\ref{apx:integral_computations}). Tables~\ref{tab:comparison_worst_avg_cvx} and~\ref{tab:comparison_worst_avg_str_cvx} summarize the asymptotic rates where both the iteration count and the problem size are large.
We now turn to the worst-case guarantees. We discuss below how to make the worst-case complexity comparable to the average-case complexity.
\subsection{Traditional worst-case complexity}
Recall the prior discussion on the dimension-dependent constants in the typical worst-case complexity bounds. We now make this precise below.
\paragraph{Worst-case complexity: strongly convex and noiseless non-strongly convex regimes.} Consider GD and note that the other methods admit a similar analysis. Recall the standard analytical worst-case bound for the strongly convex regime and the exact worst-case bound for the non-strongly convex setting \citep{taylor2017smooth}, respectively,
\begin{equation*} \begin{gathered}
\|\nabla f({\boldsymbol x}_k)\|^2 \le (\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 \left ( 1- \tfrac{\lambda_{{\boldsymbol H}}^-}{\lambda_{{\boldsymbol H}}^+} \right )^{2k} \quad \text{(strongly convex)}\\
\text{and} \quad \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \quad \text{(convex)}
\end{gathered}\end{equation*}
where ${\boldsymbol x}^{\star}$ is the minimal norm solution of \eqref{eq:LS}. For sufficiently large $d$, the largest and smallest eigenvalues of ${\boldsymbol H}$, $\lambda_{{\boldsymbol H}}^+$ and $\lambda_{{\boldsymbol H}}^-$, converge in probability to $\sigma^2 (1+\sqrt{r})^2$ and $\sigma^2(1-\sqrt{r})^2$ respectively. These are the top and bottom edges of the Mar\v{c}enko-Pastur distribution. We also note that in the noiseless setting $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2 = \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2$. Hence by Assumption~\ref{assumption: Vector} with $\widetilde{R}^2 = 0$, in expectation $\|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2 = R^2$. Moreover, when the matrix ${\boldsymbol H}$ is nonsingular, as in the strongly convex setting, the optimum $\|{\boldsymbol x}^{\star}\|^2$ does not grow as the dimension increases despite the noise. As a sequence of random variables in $d$, $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is tight. From these observations we derive the worst-case complexities.
\paragraph{Worst-case complexity: noisy non-strongly convex regime.} While discussing the worst-case complexity in Section~\ref{sec: average_case}, we noted a discrepancy in the noisy, non-strongly convex regime between the average rate and the exact worst complexity. For instance, the exact worst complexity for gradient descent (GD) \citep{taylor2017smooth} is
\begin{equation} \label{eq:worst_case_complexity}
\|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda^+_{{\boldsymbol H}})^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \quad \text{where ${\boldsymbol x}^{\star}$ is the optimum of \eqref{eq:LS}.}
\end{equation}
For sufficiently large $d$, the largest eigenvalue $\lambda^+_{{\boldsymbol H}}$ converges a.s. to $\lambda^+ = 4 \sigma^2$, the top edge of the support of $\mu_{\mathrm{MP}}$ (recall that in this regime $d = n$, so $r = 1$). Hence, to derive worst-case complexity bounds, it suffices to understand the behavior of the distance to the optimum.
The vectors ${\boldsymbol x}^\star$ and $\widetilde{{\boldsymbol x}}$ are different when noise is added to the signal. For simplicity, we consider the setting where the matrix ${\boldsymbol A}$ is invertible. Intuitively, the optimum ${\boldsymbol x}^{\star} \approx {\boldsymbol A}^{-1} {\boldsymbol b} = \widetilde{{\boldsymbol x}} + {\boldsymbol A}^{-1} {\boldsymbol \eta}$ where $\widetilde{{\boldsymbol x}}$ is the underlying random signal. Because the signal $\widetilde{{\boldsymbol x}}$ is scaled, Assumption~\ref{assumption: Vector} gives ${\mathbb E}\,[ \| {\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2] = R^2$. Therefore, the distance to the optimum ${\boldsymbol x}_0-{\boldsymbol x}^{\star}$ is controlled by the noise, which in turn is bounded below in terms of the reciprocal of the minimum eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$, namely \[\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2 \approx \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\|^2 + \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 \ge \frac{|{\boldsymbol u}_{\min}^T {\boldsymbol \eta}|^2}{\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})}, \]
where $(\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}), {\boldsymbol u}_{\min})$ is an eigenvalue-eigenvector pair corresponding to the minimum eigenvalue of ${\boldsymbol A}^T{\boldsymbol A}$. Unfortunately, the smallest eigenvalue is not well-behaved: its distribution is heavy-tailed and there does not exist any scaling under which the expectation of $\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1}$ is finite. Instead we show that the quantity $\frac{|{\boldsymbol u}_{\min}^T {\boldsymbol \eta}|^2}{\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})}$ grows at least as fast as $\widetilde{R}^2 d$. To do so, we appeal to a theorem in \citep{tao2010random}; that is, we assume that all moments of the entries of the matrix ${\boldsymbol A}$ are bounded, namely,
\begin{equation} \label{eq: bounded_moment}
\max_{i,j} \mathbb{E}[|A_{ij}|^k] < \infty \quad \text{for all $k \le 10^4$.}
\end{equation}
This bounded moment assumption is a mild condition on the entries; for instance, it includes any sub-exponential random variables. It should be noted that under the simple isotropic features model it is clear that $\|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2$ is \textit{dimension-dependent}, but the exact dependence is more complicated. Under the condition \eqref{eq: bounded_moment}, we can prove a bound, which gives the dependence on the problem size, for the growth rate of the distance to the optimum.
\begin{lemma}[Growth of $\|{\boldsymbol x}_0-{\boldsymbol x}^\star\|^2$] \label{lem:growth_dist_optimum} Suppose Assumptions~\ref{assumption: Vector} and \ref{assumption: spectral_density} hold such that the noise vector ${\boldsymbol \eta} \in {\mathbb R}^d$ and the entries of the data matrix ${\boldsymbol A} \in \mathbb{R}^{d \times d}$ satisfy bounded moments \eqref{eq: bounded_moment}. Let ${\boldsymbol x}^\star$ be the minimal norm solution to \eqref{eq:LS}. For any $\delta > 0$ there exists a constant $M_{\delta} > 0$ such that
\begin{equation} \liminf_{n \to \infty} \Pr \big ( \|{\boldsymbol x}_0- {\boldsymbol x}^{\star} \|^2 \ge d \cdot \widetilde{R}^2 M_{\delta} \big ) \ge 1-\delta. \label{eq: growth_norm} \end{equation}
\end{lemma}
\begin{proof} We begin by defining the constant $M_{\delta} > 0$. Here the dimensions are equal, $d = n$, and the $n \times n$ matrix ${\boldsymbol A}$ is invertible a.s., so without loss of generality the smallest eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$ is non-zero.
From \cite[Corollary 3.1]{edelman1988eigenvalues} and \cite[Theorem 1.3]{tao2010random}, we know that $n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ converges in distribution, where $\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ denotes the smallest eigenvalue of ${\boldsymbol A}^T {\boldsymbol A}$. It is immediately clear that $\log(n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}))$ also converges in distribution. By Theorem~3.2.7 in \cite{durrett2010probability}, the sequence of distribution functions $\{F_n(x) = \Pr( \log(n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})) \le x) \}$ is tight, that is, there exists a $C_{\delta} > 0$ such that
\[ \limsup_{n \to \infty} \Pr \big (n \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}) \not \in (e^{-C_{\delta}}, e^{C_{\delta}}] \big ) = \limsup_{n \to \infty} 1 - F_n(C_{\delta}) + F_n(-C_{\delta}) \le \tfrac{\delta}{2}. \]
In particular, we know that \begin{equation} \label{eq: avg_case_1} \limsup_{n \to \infty} \Pr \big ( n^{-1} ( \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}))^{-1} < e^{-C_{\delta}} \big ) \le \tfrac{\delta}{2}.
\end{equation}
Another way to see \eqref{eq: avg_case_1} is that $n\lambda_{\min}({\boldsymbol A}^T {\boldsymbol A})$ has a density supported on $[0, \infty)$ \citep{edelman1988eigenvalues}. For any random variable $X \sim \chi^2_1$, a chi-squared distribution with one degree of freedom, there exists a constant $\widehat{C}_{\delta} > 0$ such that \begin{equation} \label{eq: avg_case_3}
\Pr(X \le \widehat{C}_{\delta}) \le \tfrac{\delta}{2}.
\end{equation}
Let $M_{\delta} \stackrel{\text{def}}{=} \tfrac{1}{4} \min \{ e^{-2C_{\delta}}, \widehat{C}_{\delta}^2 \}$. With $M_{\delta}$ defined, we are now ready to prove \eqref{eq: growth_norm}. The matrix ${\boldsymbol A}$ is a.s. invertible so gradient descent converges to ${\boldsymbol x}^\star = {\boldsymbol A}^{-1} {\boldsymbol b}$. Next we observe that \eqref{eq: growth_norm} is equivalent to proving
\begin{equation} \label{eq: avg_case_2} \limsup_{n \to \infty} \Pr \big ( \| {\boldsymbol x}_0 - {\boldsymbol A}^{-1} {\boldsymbol b} \| < \widetilde{R} \sqrt{n M_{\delta}} \big ) \le \delta.
\end{equation}
Plugging in the value of ${\boldsymbol b}$ and using the reverse triangle inequality, we obtain
\[ \Pr \big ( \|{\boldsymbol x}_0- {\boldsymbol A}^{-1} {\boldsymbol b}\| < \widetilde{R}\sqrt{n M_{\delta}} \big ) \le \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < \widetilde{R}\sqrt{n M_{\delta}} + \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| \big ).\]
Using Markov's inequality, we can obtain a bound on $\|{\boldsymbol x}_0 - \widetilde{{\boldsymbol x}}\|$:
\[ \Pr \big ( \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| \ge \widetilde{R} \sqrt{nM_{\delta}} \big ) \le \frac{R^2}{n M_{\delta} \widetilde{R}^2}.\]
Consider now the event given by $\mathcal{S} \stackrel{\text{def}}{=} \{ \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| < \widetilde{R} \sqrt{nM_{\delta}}\, \} $. The law of total probability yields
\begin{equation} \begin{aligned} \label{eq: avg_case_4} \Pr &\big ( \|{\boldsymbol x}_0- {\boldsymbol A}^{-1} {\boldsymbol b}\| < \widetilde{R} \sqrt{n M_{\delta}} \big )\\
&\le \Pr \big (\mathcal{S}^c \big ) + \Pr \big ( \mathcal{S} \cap \{ \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < \widetilde{R} \sqrt{nM_{\delta}} + \|{\boldsymbol x}_0-\widetilde{{\boldsymbol x}}\| \} \big )\\
&\le \frac{R^2}{n M_{\delta} \widetilde{R}^2} + \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\| < 2 \widetilde{R} \sqrt{n M_{\delta}} \big ) = \frac{R^2}{n M_{\delta} \widetilde{R}^2} + \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n\widetilde{R}^2 M_{\delta} \big ).
\end{aligned} \end{equation}
A simple calculation gives that $n^{-1} \widetilde{R}^{-2} \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 \ge n^{-1} \big ( \lambda_{\min}({\boldsymbol A}^T {\boldsymbol A}) \big )^{-1} \widetilde{R}^{-2} ( {\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2$, where the unit-norm vector ${\boldsymbol u}_{\min}$ is the eigenvector of ${\boldsymbol A}^T {\boldsymbol A}$ associated with the smallest eigenvalue $\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})$. From this, we deduce the following inequalities
\begin{align*}
\Pr \big ( &\|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n \widetilde{R}^2 M_{\delta} \big ) \le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} \cdot \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \min \{ e^{-2C_{\delta}}, \widehat{C}^2_{\delta} \} \big )\\
&\le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} < \min \{ e^{-C_{\delta}}, \widehat{C}_{\delta} \} \big ) + \Pr \big ( \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \min \{ e^{-C_{\delta}}, \widehat{C}_{\delta} \}\big )\\
&\le \Pr \big ( n^{-1} \lambda_{\min}({\boldsymbol A}^T{\boldsymbol A})^{-1} < e^{-C_{\delta}} \big ) + \Pr \big ( \widetilde{R}^{-2} ({\boldsymbol u}_{\min}^T {\boldsymbol \eta})^2 < \widehat{C}_{\delta} \big ).
\end{align*}
Since ${\boldsymbol \eta}$ is Gaussian and ${\boldsymbol u}_{\min}$ has unit norm, we know that $\widetilde{R}^{-2} ({\boldsymbol u}^T_{\min} {\boldsymbol \eta})^2 \sim \chi^2_1$, a chi-squared distribution, so \eqref{eq: avg_case_3} holds, and we already showed that $n^{-1} (\lambda_{\min}({\boldsymbol A}^T{\boldsymbol A}))^{-1}$ satisfies \eqref{eq: avg_case_1}. Taking $\limsup$, we obtain
\[\limsup_{n \to \infty} \Pr \big ( \|{\boldsymbol A}^{-1} {\boldsymbol \eta}\|^2 < 4n \widetilde{R}^2 M_{\delta} \big ) \le \delta.\]
The inequality in \eqref{eq: avg_case_2} immediately follows after taking the limsup of \eqref{eq: avg_case_4}.
\end{proof}
Combining this lemma with equation \eqref{eq:worst_case_complexity}, we get with high probability that
\[ \|\nabla f({\boldsymbol x}_k)\|^2 \le \frac{(\lambda_{{\boldsymbol H}}^+)^2 \|{\boldsymbol x}_0-{\boldsymbol x}^{\star}\|^2}{(k+1)^2} \approx \frac{16 \sigma^4 \widetilde{R}^2 d}{(k+1)^2}.\]
By setting the right-hand side equal to $\varepsilon$, we get the worst-case complexity result.
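Explicitly (our back-of-the-envelope computation under the approximation above), solving $\frac{16 \sigma^4 \widetilde{R}^2 d}{(k+1)^2} = \varepsilon$ for $k$ gives
\begin{equation*}
k + 1 = 4 \sigma^2 \widetilde{R} \sqrt{\frac{d}{\varepsilon}},
\end{equation*}
so in the noisy non-strongly convex regime the worst-case complexity grows with the square root of the problem size $d$, in contrast to the dimension-independent average-case rate.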
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale = 0.4]{figures/gd-lr.pdf}
\includegraphics[scale = 0.4]{figures/nesterov-lr.pdf}
\includegraphics[scale = 0.4]{figures/sgd-ls.pdf}
\includegraphics[scale = 0.4]{figures/sgd-lr.pdf}
\end{center}
\caption{{\bfseries Halting time universality beyond least squares.} We compute the halting time on algorithms and models not covered by our theory and note that the convergence to a deterministic and universal halting time is also empirically observed in these settings.
For different model sizes $d$ ($x$-axis) we sample the vectors $\widetilde{{\boldsymbol x}}$, ${\boldsymbol x}_0$ and the matrix ${\boldsymbol A}$ ($\widetilde{R}^2 = 0.01$ and $r = 0.5$) and report the halting time ($y$-axis) and its standard deviation (shaded area) for GD and Nesterov (convex) on logistic regression, and for SGD on both least squares and logistic regression.
} \label{fig:halt_time_concentrates}\vspace{-1em}
\end{figure*}
\subsection{Adversarial Model}
Next we recall the adversarial model. Here we assume a noisy generative model for ${\boldsymbol b}$ (Assumption~\ref{assumption: Vector}). The adversary then chooses the matrix ${\boldsymbol A}$, without knowledge of ${\boldsymbol b}$, in such a way as to \textit{maximize the norm of the gradient} subject to the constraint that the convex hull of the eigenvalues of ${\boldsymbol H} = \tfrac{1}{n}{\boldsymbol A}^T {\boldsymbol A}$ equals $[\lambda^{-},\lambda^+]$. For comparison with the average-case analysis under isotropic features, we choose $\lambda^{\pm}$ to be the endpoints of the Mar\v{c}enko-Pastur law.
In light of Proposition~\ref{proposition:conditional}, the adversarial model seeks to solve the constrained optimization problem
\begin{equation} \begin{gathered} \label{eq:adversary_H}
\max_{{\boldsymbol H}} \Big \{ \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ] = \tfrac{R^2}{d} \text{tr}({\boldsymbol H}^2 P_k^2({\boldsymbol H}; \lambda_{{\boldsymbol H}}^{\pm})) + \tfrac{\widetilde{R}^2}{n} \text{tr} ( {\boldsymbol H} P_k^2({\boldsymbol H}; \lambda^{\pm}_{{\boldsymbol H}})) \Big \} \\ \text{subject to} \quad \lambda_{{\boldsymbol H}}^+ = \lambda^+ \, \text{and} \, \lambda_{{\boldsymbol H}}^- = \lambda^-,
\end{gathered} \end{equation}
where the largest (smallest) eigenvalue of ${\boldsymbol H}$ is restricted to the upper (lower) edge of Mar\v{c}enko-Pastur measure. The optimal ${\boldsymbol H}$ of \eqref{eq:adversary_H}, ${\boldsymbol H}_{\max}$, has all but two of its eigenvalues at
\begin{equation} \lambda^*_k \stackrel{\text{def}}{=} \argmax_{\lambda \in [\lambda^-, \lambda^+]} \Big \{ R^2 \lambda^2P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 \lambda P_k^2(\lambda; \lambda^{\pm}) \Big \} \, . \end{equation}
The other two eigenvalues must live at $\lambda^+$ and $\lambda^-$ in order to satisfy the constraints. The empirical spectral measure for this ${\boldsymbol H}_{\max}$ is exactly
\[ \mu_{{\boldsymbol H}_{\max}} = \frac{1}{d} \sum_{i=1}^d \delta_{\lambda_i} = \frac{1}{d} \cdot \delta_{\lambda^+} + \frac{1}{d} \cdot \delta_{\lambda^-} + \Big (1-\frac{2}{d} \Big ) \cdot \delta_{\lambda^*_k}. \]
Since this empirical spectral measure converges weakly to $\delta_{\lambda^*_k}$, the matrices ${\boldsymbol H}_{\max}$ and the spectral measure $\mu_{{\boldsymbol H}_{\max}}$ satisfy the conditions of Assumption~\ref{assumption: spectral_density}. Hence, Theorem~\ref{thm: concentration_main} holds and the maximum expected squared norm of the gradient as the model size goes to infinity equals
\begin{equation} \begin{aligned} \label{eq: adversary_worst_case}
\lim_{d \to \infty} \max_{{\boldsymbol H}} \, \mathbb{E} \big [ \|\nabla f({\boldsymbol x}_k)\|^2 \big ] &= \int \big [ R^2 \lambda^2 P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 r \lambda P_k^2(\lambda; \lambda^{\pm})\big ] \, \mathop{}\!\mathrm{d}\delta_{\lambda^*_k}(\lambda)\\
&= \max_{ \lambda \in [\lambda^-, \lambda^+] } R^2 \lambda^2 P_k^2(\lambda; \lambda^{\pm}) + \widetilde{R}^2 r \lambda P_k^2(\lambda; \lambda^{\pm})\,.
\end{aligned}
\end{equation}
We call the above expression the \textit{adversarial average-case complexity}. Table~\ref{tab:comparison_worst_avg_cvx} shows these convergence guarantees. We defer the derivations to Appendix~\ref{apx: adversarial_model}.
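To give the flavor of these derivations, consider gradient descent in the noiseless convex case ($\widetilde{R}^2 = 0$ and $\lambda^- = 0$); this is a sketch of ours with step size $1/\lambda^+$, so that $P_k(\lambda; \lambda^{\pm}) = (1-\lambda/\lambda^+)^k$. The inner maximization is solved by $\lambda_k^* = \tfrac{\lambda^+}{k+1}$, which yields
\begin{equation*}
\max_{\lambda \in [0, \lambda^+]} R^2 \lambda^2 \big (1 - \tfrac{\lambda}{\lambda^+} \big )^{2k} = \frac{R^2 (\lambda^+)^2}{(k+1)^2} \Big ( \frac{k}{k+1} \Big )^{2k} \simeq \frac{R^2 (\lambda^+)^2}{e^2 (k+1)^2} \quad \text{for large $k$,}
\end{equation*}
a sublinear $\mathcal{O}(k^{-2})$ adversarial average-case rate.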
\begin{remark} In the strongly convex setting, we omitted the adversarial average-case guarantees for brevity. For all the algorithms, the value of $\lambda_k^*$ occurs near or at the minimum eigenvalue $\lambda^-$. As such there is (almost) no distinction between the traditional worst-case guarantees and the adversarial guarantees.
\end{remark}
\section{Numerical Simulations} \label{sec:numerical_simulations}
To illustrate our theoretical results we report simulations using gradient descent (GD) and Nesterov's accelerated method (convex) \citep{nesterov2004introductory,Beck2009Fast} on the least squares problem under the isotropic features model. We further investigate the halting time in logistic regression as well as least squares with mini-batch stochastic gradient descent (SGD). See Appendix~\ref{apx:exp_details} for details.
\paragraph{Setup.} The vectors ${\boldsymbol x}_0$ and $\widetilde{{\boldsymbol x}}$ are sampled i.i.d. from the Gaussian $N({\boldsymbol{0}}, \tfrac{1}{d}{\boldsymbol I})$ whereas the entries of ${\boldsymbol A}$ are sampled either from a standardized Gaussian, a Bernoulli distribution, or a Student's \mbox{$t$-dis}tribution with 5 degrees of freedom, normalized so that they all have the same mean and variance. We train the following models:
\begin{itemize}[leftmargin=*]
\item \textbf{Least squares.} The least squares problem minimizes the objective function $f({\boldsymbol x}) = \tfrac{1}{2n} \|{\boldsymbol A} {\boldsymbol x} -{\boldsymbol b} \|^2$. The targets, ${\boldsymbol b} = {\boldsymbol A}\widetilde{{\boldsymbol x}} + {\boldsymbol \eta}$, are generated by adding a noise vector ${\boldsymbol \eta}$ to our signal, ${\boldsymbol A} \widetilde{{\boldsymbol x}}$. The entries of ${\boldsymbol \eta}$ are sampled from a normal, $N(0, \widetilde{R}^2)$, for different values of $\widetilde{R}^2$ (see the sketch after this list).
\item \textbf{Logistic regression.} For the logistic regression model we generate targets in the domain $(0, 1)$ using ${\boldsymbol b} = \sigma\left( {\boldsymbol A}\widetilde{{\boldsymbol x}} + {\boldsymbol \eta}\right)$ where $\sigma$ is the logistic function. The output of our model is ${\boldsymbol y} = \sigma\left({\boldsymbol A}{\boldsymbol x}\right)$, and the objective function is the standard cross-entropy loss:
\[f({\boldsymbol x}) = -\frac{1}{n}\sum_{i=1}^n \big [ b_i \log(y_i) + (1 - b_i) \log(1 - y_i) \big ].\]
\end{itemize}
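The following minimal Python sketch (our own illustrative code, not the actual experimental pipeline; the hyper-parameters are chosen for brevity) reproduces the basic least squares experiment: it samples one instance under the isotropic features model and records the halting time of gradient descent.
\begin{verbatim}
import numpy as np

def halting_time(d, r=0.5, noise=0.01, eps=1e-6, max_iter=100_000, seed=None):
    """One least squares instance under isotropic features; returns the
    number of GD steps until the squared gradient norm falls below eps."""
    rng = np.random.default_rng(seed)
    n = int(d / r)                                 # ratio r = d / n
    A = rng.standard_normal((n, d))                # isotropic features
    x_tilde = rng.standard_normal(d) / np.sqrt(d)  # signal ~ N(0, I/d)
    x = rng.standard_normal(d) / np.sqrt(d)        # initialization x_0
    eta = np.sqrt(noise) * rng.standard_normal(n)  # noise, E[eta_i^2] = noise
    b = A @ x_tilde + eta
    H = A.T @ A / n
    step = 1.0 / np.linalg.eigvalsh(H)[-1]         # step size 1 / lambda_max(H)
    for k in range(max_iter):
        grad = H @ x - A.T @ b / n                 # gradient of (1/2n)||Ax - b||^2
        if grad @ grad < eps:
            return k
        x -= step * grad
    return max_iter

# Halting times concentrate as d grows (cf. the figures):
print([halting_time(d=2**10, seed=s) for s in range(5)])
\end{verbatim}
In the actual experiments the top eigenvalue is approximated with 64 power iterations rather than a full eigendecomposition, as described next.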
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale =0.4]{figures/ratio-gd.pdf}
\includegraphics[scale = 0.4]{figures/ratio-sgd.pdf}
\end{center}
\caption{\textbf{Effect of the ratio $r = d/n$ on the halting time for various levels of noise $\widetilde{R}^2$.} The left plot shows the average halting time of gradient descent as a function of the ratio parameter $r$. As predicted by the theory, the halting time increases as ${\boldsymbol A}$ approaches a square matrix ($r \to 1$), and the difference between the linear rates ($r \neq 1$) and the sublinear rates ($r = 1)$ grows as the noise level increases. A total of 104,960 models were trained, keeping fixed the number of entries in the matrix $dn = 2^{24}$. In the right plot we show the same curve but for SGD instead, with a batch-size of $\frac{n}{8}$. We plot the curves for all values $r \neq 1$ with the value for $r = 1$ as a single point due to its large value.}
\label{fig:r}
\end{figure*}
\paragraph{Parameter settings.} In all simulations, the halting criterion is the first step at which the squared gradient norm falls below $\varepsilon$, that is, $\|\nabla f({\boldsymbol x})\|^2 < \varepsilon$, where $\varepsilon = 10^{-6}$ for GD and Nesterov and $\varepsilon = 10^{-4}$ for SGD. The step size for GD and Nesterov's accelerated method is fixed to $1/L$ where $L$ is the Lipschitz constant of the gradient. For least squares, $L=\lambda_{{\boldsymbol H}}^+$. We approximate $\lambda_{{\boldsymbol H}}^+$ by performing 64 steps of the power iteration method on the matrix ${\boldsymbol H}$, initialized with a constant vector of norm 1. For logistic regression, we set the step size to $4/\lambda_{{\boldsymbol H}}^+$.
In SGD, we sample rows from the matrix ${\boldsymbol A}$. The mini-batch size is a fixed fraction, $\frac{n}{16}$, of the data set size, so that the comparison of halting times across model sizes is consistent. When the models are over-parametrized ($n < d$), a strong growth condition~\citep{schmidt2013fast} holds. This means a scaling of the GD step size can be used to ensure convergence. In the under-parametrized setting, SGD does not converge to the optimum. In this case we chose a step size such that the expected squared gradient norm at the stationary point equals the halting criterion. See Appendix~\ref{sec:step_sizes} for derivations.
\paragraph{Results and conclusions.} Figure~\ref{fig:gd-ls} confirms our theoretical results: variability in the halting time decreases and the halting time converges to a deterministic quantity independent of the distribution of the data. Experimentally, the standard deviation decreased at a rate of $d^{-1/2}$, consistent with results in random matrix theory. For medium sized problems ($d = 2^5$), the heavy-tailed Student's t distribution occasionally produces ill-conditioned matrices resulting in large halting times. These ill-conditioned matrices disappear as the model size grows in large part because the maximum eigenvalue becomes stable.
More interestingly, our results extend to non-quadratic functions, such as logistic regression, as well as SGD (see Figure~\ref{fig:halt_time_concentrates}). Surprisingly, we see different behaviors between logistic and least square models for smaller matrices when using SGD. Moreover, we note that the large halting times seen in the Student's t distribution for GD on medium sized problems disappear when we instead run SGD.
Secondly, Figure~\ref{fig:r} evaluates the dependence of the halting time on the ratio $r$. As predicted by the theory, the halting time takes its maximum value (\textit{i.e.}, the algorithm is slowest) precisely when $r = 1$. For SGD, different step sizes are used in the over-parametrized and under-parametrized regimes, resulting in an asymmetric curve and a clear discontinuity at $r = 1$. We leave the study of these phenomena to future work.
\section*{Acknowledgements} The authors would like to thank our colleagues Nicolas Le Roux, Ross Goroshin, Zaid Harchaoui, Damien Scieur, and Dmitriy Drusvyatskiy for their feedback on this manuscript, and Henrik Ueberschaer for providing useful random matrix theory references.
\newpage
\section{Introduction}
The
\emph{Imaging X-ray Polarimetry Explorer}\footnote{\url{https://ixpe.msfc.nasa.gov}} (IXPE), a space-based mission selected by NASA, is expected to be launched in 2021. IXPE will conduct precise polarimetry in the X-ray energy band (between 2 and 8~keV), which has so far been a poorly investigated field; see \citep{galaxies6010033} for a recent review. The data collected by this mission will be important for the analysis of various
astrophysical sources, from stellar-mass black holes, neutron stars and pulsar wind nebulae, to supernovae remnants and active galactic nuclei.
IXPE exploits the so-called \emph{Gas Pixel Detector}~(GPD) design to perform measurements of linear polarization~\citep{2006NIMPA.560..425B}. In particular, when an X-ray photon is absorbed in the gas gap of the GPD, a photo-electron~(PE) is ejected, producing an ionization pattern that defines
a track. Each track is drifted by a uniform electric field to the \emph{Gas Electron Multiplier}~(GEM), which amplifies the charge while keeping the track shape unchanged~\citep{articleCosta}. The amplified charge is then read out through a grid of hexagonal pixels and the image of the PE is recorded. An example of a PE track image is given in Figure~\ref{fig:ixpetrack}: the green dot represents the impact point, where the X-ray converted into a PE, and the green line shows the emission direction of the secondary particle, which lies preferentially in the oscillation plane of the X-ray electric field.
\begin{wrapfigure}{r}{8.0cm}
\vspace{-8pt}
\centering
\includegraphics[width=7.5cm]{IXPE-track.png}
\caption{Example of a PE track:
The darker the color, the higher the charge value recorded.}
\label{fig:ixpetrack}
\vspace{-8pt}
\end{wrapfigure}
A correct reconstruction of the impact point is crucial for the imaging of observed extended sources in the sky, while the estimation of the PE emission direction is fundamental to determine the polarization of the incoming radiation. The reconstruction of IXPE events can thus be reduced to the estimation of two main parameters: (1) the impact point, and (2) the polarization angle $\phi$. Currently, an analytic approach based on a geometrical moments analysis is used to infer both the impact point and the polarization angle from the charge-weighted pixel content. This approach shows its weaknesses by reconstructing well (on average) only $\sim$20$\%$ of the events, losing mostly the low-energy ones, which are generally less featured (less elongated, more spot-like tracks).
The weakness of the analytical approach motivates the search for an alternative reconstruction method and, since the IXPE track reconstruction is based on images, the problem is very appealing from the deep learning point of view.
Deep learning has successfully been applied in various domains such as medical image analysis, remote sensing, or astronomy~\citep{LeCunBH2015}.
In this work, we report results of two first attempts to address the estimation tasks sketched above, i.e., we propose deep neural networks for~(1) the impact point estimation and for~(2) the estimation of the emission direction. We also outline strategies for further improvements of deep learning based models for both tasks.
\section{Data and Reconstruction efficiency}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.15\textwidth]{example1.png}
\hfill
\includegraphics[width=0.15\textwidth]{example2.png}
\hfill
\includegraphics[width=0.15\textwidth]{example3.png}
\hfill
\includegraphics[width=0.15\textwidth]{example4.png}
\hfill
\includegraphics[width=0.15\textwidth]{example5.png}
\hfill
\includegraphics[width=0.15\textwidth]{example6.png}
\end{center}
\caption{Examples of simulated images}
\label{fig:examples}
\end{figure}
For this study, we simulated IXPE observations of an unpolarized source emitting X-rays of energies uniformly distributed in the range of the IXPE sensitivity. In particular, Monte Carlo simulations were used to generate 500,000 PE track images. Each track image was labelled with the following set of parameters: (1) the energy of the incoming X-ray (E$_{X}$), (2) the coordinates $(j, i)$ of the pixel containing the impact point, and (3) the polarization angle~$\phi$.
\begin{wrapfigure}{r}{10.5cm}
\vspace{-8pt}
\centering
\includegraphics[width=10.5cm]{IXPE_modfact.png}
\caption{Left: Illustration of a 100\% polarized radiation as seen by IXPE. Right: Same as left image but for an unpolarized radiation.}
\label{fig:ixpetrack2}
\end{wrapfigure}
The generated images were subsequently normalized (i.e., pixel values in $[0,1]$) and upsampled to a cartesian grid (upsampling factor $\approx 2$, yielding equally-shaped square images). In Figure~\ref{fig:examples}, some examples of such generated images are shown. A separate additional test set of 35,838 observations for a $\pi/4$-polarized source emitting X-rays at 4~keV was processed in the same way.
A relevant characteristic is that the distribution of~$\phi$ for events collected while observing an X-ray source in the sky shows a $\cos^2$ modulation if the target is linearly polarized, while it is uniform for an unpolarized source, as illustrated in Figure \ref{fig:ixpetrack2}. The capability of an algorithm to reproduce this modulation in the final distribution of $\phi$ can be translated into an efficiency measure useful for comparing reconstruction algorithms. We define the modulation factor $\mu$ as the response to a fully polarized sample: the closer the modulation factor is to 1, the more efficient the algorithm. In addition to the modulation factor, we define the \textit{efficiency} $\varepsilon_{10}$ to compare the performance of different algorithms in predicting~$\phi$: it is given by the fraction of events whose predicted polarization angle lies within 10 degrees (an arbitrary but reasonable choice) of the true value.
Given the periodicity of the polarization angle distribution, a phase shift of $\pm \pi$ or $\pm 2\pi$ in the reconstructed angle is still acceptable, since the overall distribution is not altered (as long as the number of positively shifted events is balanced by the number of negatively shifted events).
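As an illustration, the $\varepsilon_{10}$ computation with this phase ambiguity could be sketched in NumPy as follows; the array names are ours, not taken from the analysis code, and angles are assumed to be in radians.
\begin{verbatim}
import numpy as np

def efficiency_10(phi_true, phi_pred):
    """Fraction of events whose angular residual is within 10 degrees,
    folding the residual modulo pi (shifts of +-pi are acceptable)."""
    delta = phi_pred - phi_true
    delta = (delta + np.pi / 2) % np.pi - np.pi / 2  # fold to [-pi/2, pi/2)
    return np.mean(np.abs(delta) < np.deg2rad(10.0))
\end{verbatim}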
\section{Angle and Impact Estimation}
\label{s.exper}
We propose two different neural network architectures to estimate (a) the emission direction (angle) and (b) the impact point of a given track, respectively.
\subsection{Polarization Angle Reconstruction}
For this subtask, we resort to an $M$-head ensemble of VGG-16 networks with $M=8$ heads, where all but the last block of CNN filters are shared~\citep{mhead2015,simonyan2014}. The network produces a normalized direction vector, and we use the cosine similarity between the network predictions and the ground-truth directions as the loss function.
Following \citep{mhead2015}, we assign a higher weight to the loss of the ensemble head which gives the most accurate prediction, and lower weights to the other heads, in order to prevent the network heads from becoming too similar. At inference time, the average over the ensemble heads is returned as the final prediction.
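A minimal PyTorch sketch of such a weighted multi-head cosine loss might read as follows; the particular weighting shown (weight \texttt{w\_best} on the currently best head, uniform weights on the rest) is one simple choice and not necessarily the exact configuration used in our training.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mhead_cosine_loss(preds, target, w_best=0.5):
    """preds: list of M tensors of shape (batch, 2), one unit direction
    per head; target: (batch, 2) ground-truth unit directions."""
    # per-head loss: 1 - cosine similarity, averaged over the batch
    losses = torch.stack(
        [(1.0 - F.cosine_similarity(p, target, dim=1)).mean()
         for p in preds])
    M = len(preds)
    weights = torch.full((M,), (1.0 - w_best) / (M - 1),
                         device=losses.device)
    weights[torch.argmin(losses)] = w_best  # emphasize the best head
    return (weights * losses).sum()
\end{verbatim}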
For typical low-energy events, accurate direction regression can be impossible; therefore, it is important to quantify the predictive uncertainty of the model, which we accomplish by ensembling. Ensembles of neural networks typically yield the best estimates of predictive uncertainty~\citep{fertig2019}, compared to methods such as dropout~\citep{gal2016} and SVI~\citep{blundel2015}, in addition to improving accuracy over single models. In Figure~\ref{fig:efficiency}, histograms of the reconstructions of the $\pi/4$-polarized test data, together with the $10$-degree efficiency estimates $\varepsilon_{10}$ and the modulation factors $\mu$, are provided for both the neural network and the classical reconstructions. The baseline model gives nearly the same efficiency as the state-of-the-art analytical method. Ideally, the goal is to outperform the analytical method; we discuss potential strategies for this in the conclusion.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\textwidth, height=4.5cm]{MC-pred_histo.png}
\hfill
\includegraphics[width=0.47\textwidth, height=4.5cm]{MC-anRecon_histo.png}
\end{center}
\caption{Left: neural network prediction efficiency. Right: state-of-the-art analytical method. The efficiency $\varepsilon_{10}$ and the modulation factor $\mu$ are reported on top of the plots.
}
\label{fig:efficiency}
\end{figure}
\subsection{Impact Point Reconstruction}
\begin{wrapfigure}{r}{0.5\textwidth}
\vspace{-60pt}
\begin{center}
\includegraphics[width=0.5\textwidth, height=4cm]{lossVsbatches.png}
\end{center}
\caption{Training and validation loss}
\label{fig:lossVsbatches}
\vspace{-30pt}
\end{wrapfigure}
For the impact point estimation task, we resort to the ResNet-34 model~\citep{he2016deep} with pre-trained weights (based on ImageNet). That is, we follow a transfer learning approach~\citep{pratt1993discriminability} and only fine-tune the last layer according to the new task.
The image samples are labeled with the true (simulated) impact point coordinates. For training the last layer, we used 60,000 events from the available track samples, of which 20\% were used for validation. A separate set of 15,000 events was used for testing the model.
For training the model, we used the mean squared error (MSE) as the loss function, an initial learning rate of $3\times 10^{-2}$ with a weight decay of $10^{-3}$, a batch size of 64, and five epochs. The size of the input images is $64 \times 64$. The training loss starts at a value of 0.0372 (Figure \ref{fig:lossVsbatches}) and follows a downward path as the number of processed batches increases.
The validation loss starts at a similar value of 0.0257 and decreases steadily. Near the end of training, the training and validation losses reach values of 0.011 and 0.010, respectively. We use the root mean squared error (RMSE) as the performance measure: on the test data, the model achieves an RMSE of 7.807 in $x$ and 7.368 in $y$.
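A minimal PyTorch sketch of this transfer-learning setup might read as follows; it is an illustration under stated assumptions (the optimizer choice, the \texttt{train\_loader}, and the replication of the single-channel images to three channels are ours), not the exact training code.
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-34; freeze the backbone, retrain the head
model = models.resnet34(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # regress (x, y)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.fc.parameters(),
                            lr=3e-2, weight_decay=1e-3)

for epoch in range(5):
    # images: (64, 3, 64, 64), single channel replicated to 3 channels
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
\end{verbatim}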
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\textwidth, height=2.5cm]{Prediction1.png}
\hfill
\includegraphics[width=0.47\textwidth, height=2.5cm]{Prediction2.png}
\end{center}
\caption{Subfigures 1 and 2 -- Left: ground truth; Right: regression prediction.
}
\label{fig:predictions}
\end{figure}
\section{Conclusions and Future Work}
\label{s.concl}
We have introduced the track reconstruction challenge for the IXPE mission and have shown that existing neural network architectures can achieve results close to the state-of-the-art reconstruction algorithms. In addition to comparable reconstruction efficiency, the machine learning techniques provide means of estimating the uncertainties associated with the predicted values, which is an important advantage over the analytic approach and allows us to place quality cuts on the final reconstructed data, enhancing the accuracy of the IXPE scientific observations. Furthermore, there are multiple directions for further research:
\begin{itemize}
\item Firstly, it is worth stressing that the current image pre-processing is not optimal: Since the original images are cropped around the cluster of pixels above the trigger threshold, they exhibit different sizes, meaning that our attempt to produce equally-sized images alters the aspect ratio of the actual tracks.
\item Secondly, since the sensor has hexagonal pixels, conventional `cartesian' convolutional filters in fact \emph{do not} yield equivariant feature maps when applied to the raw image data coming from the sensor and can, therefore, lead to a suboptimal performance. For the baseline experiments we used $2\times$ upsampling to a cartesian grid from the original hexagonal grid, but a better approach would be to use \emph{hexagonal} convolutions instead, which work with raw data and take the actual sensor grid shape into account. Hexagonal convolutions have been implemented, e.g., in the HexagDLy library~\citep{hexagdly_paper} for PyTorch. A further step in this direction would be to investigate hexagonal group convolutions~\citep{cohen2016,hoogeboom2018}, which capture rotational feature symmetries and result in higher parameter efficiency.
\item Thirdly, model calibration should be improved as well. We see in Figure~\ref{fig:efficiency} that, compared to the neural network, the analytical method results in a very clear sinusoidal shape of the histogram. Increasing the ensemble size and using alternative methods for sampling from the posterior distribution of directions could potentially reduce the irregularities for the neural network reconstructions. A possible improvement could be achieved by adding the information about the location of the impact point as input parameters in addition to the images.
\item Finally, the basic direction regression and hit point detection tasks can be combined in a single model for simultaneous prediction on both tasks, as is typically done for \emph{multi-task learning} \citep{Ruder2017} tasks. Multi-task learning, intuitively, adds additional supervision signals to the network, and such additional signals could lead to an overall model outperforming the individual models trained exclusively for single tasks.
\end{itemize}
We plan to investigate the aforementioned extensions and research directions in the near future.
\newpage
\section{Acknowledgements}
We want to thank the DarkMachines collaboration for bringing us together and for fruitful discussions. Michela Negro wants to acknowledge the IXPE team and in particular Niccol\'o Di Lalla and Alberto Manfreda for providing the simulated data samples.
\printbibliography
\end{document}
\section{Supplemental Material}
In the main text we make reference to our setup being described by a generalized Bose-Hubbard Hamiltonian \cite{mekhov2012}, where atoms in an optical lattice scatter light, which is treated as a dynamical quantum variable. Here we provide this Hamiltonian, and give a brief description of the origin of each term.
The full Hamiltonian of the system is given by
$$H=H_L+H_M+H_{LM}.$$
The light mode Hamiltonian
$$H_L=\displaystyle\sum_l \omega_l a^\dagger_l a_l $$
contains terms for the energy in the light modes. Photons in mode $l$ with frequency $\omega_l$ and mode function $u_l(\bm{r})$ are created (annihilated) by boson operators $a_l^\dagger$ ($a_l$). Note that here we have adopted natural units, such that $\hbar=1$.
The matter behaves according to the Bose-Hubbard Hamiltonian
$$H_M=-t\displaystyle\sum_{\langle i,j\rangle}(b_i^\dagger b_j+b_j^\dagger b_i) + U\displaystyle\sum_ib^\dagger_ib^\dagger_ib_ib_i,$$
where the first term represents tunneling between sites (the sum runs over nearest neighbours), and the second term represents $s$-wave scattering between atoms occupying the same site. We consider bosonic matter modes created (annihilated) at site $i$ by $b^\dagger_i$ ($b_i$). We treat the trapping potential classically, so that we may focus on the quantum nature of the external light fields.
The last term describes the light-matter interaction. It is derived from a many-atom generalisation of the Jaynes-Cummings model with adiabatic elimination of the excited states:
$$H_{LM}=\frac{1}{\Delta_a}\displaystyle\sum_{l,m}\displaystyle\sum_{i,j}J^{lm}_{ij}g_lg_ma_l^\dagger a_mb_i^\dagger b_j,$$
where
$$J_{ij}^{lm}=\int\mathrm{d}\bm{r}w(\bm{r}-\bm{r}_i)u^*_l(\bm{r})u_m(\bm{r})w(\bm{r}-\bm{r}_j),$$
and $w(\bm{r})$ are the Wannier functions of the lattice. The atom-light detuning is $\Delta_a=\omega-\omega_a$, where $\omega_a$ is the frequency of the atomic transition, $\omega$ is a central frequency set by the adiabatic elimination, and $g_l$ are the light-atom coupling strengths. This term governs the new properties present in the fully quantum treatment. While in a classical treatment of the light this term can still allow control over the range, magnitude and phase of tunneling terms through the mode functions $u(\bm{r})$, in the quantum case the light mode operators $a_l$ are also present, and this allows the interaction to feed back into the light modes, elevating light to a dynamical component of the system.
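As a concrete illustration of these coefficients, the following NumPy sketch evaluates a one-dimensional analogue of the overlap integral $J_{ij}^{lm}$, taking Gaussian approximations to the Wannier functions and plane-wave light modes; both choices are illustrative assumptions, not the model used above.
\begin{verbatim}
import numpy as np

def overlap_J(i, j, k_l, k_m, a=1.0, sigma=0.15, N=4001):
    """1D illustration of J_ij^{lm}: Gaussian Wannier functions w
    centered on sites x = i*a, plane waves u_l(x) = exp(i k_l x)."""
    x = np.linspace(-10.0 * a, 10.0 * a, N)
    w = lambda s: (np.exp(-s**2 / (2 * sigma**2))
                   / (np.pi * sigma**2)**0.25)
    integrand = (w(x - i * a) * np.exp(-1j * k_l * x)
                 * np.exp(1j * k_m * x) * w(x - j * a))
    return np.trapz(integrand, x)
\end{verbatim}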
\end{document}
\section{Introduction}\label{sec-intro}
A 1-dimensional solenoid is the inverse limit space of a sequence of covering maps,
\begin{equation}\label{eq-presentation1}
{\mathcal S}(\vec{m}) \myeq \underleftarrow{\lim} ~ \{\, q_{\ell}:{\mathbb S}^1 \to {\mathbb S}^1 \mid \ell \geq 1\}
\end{equation}
where $q_{\ell}$ is a covering map of the circle ${\mathbb S}^1$ of degree $m_{\ell} > 1$. Here, $\vec{m} = (m_1, m_2,\dots)$ denotes a sequence of integers with each $m_i \geq 2$. These continua (compact metric spaces) were introduced by van Dantzig \cite{vanDantzig1930} and Vietoris \cite{Vietoris1927}, and appear in many areas of mathematics.
Associated to $\vec{m}$ is a \emph{supernatural} number, or \emph{Steinitz} number, $\Pi[\vec{m}]$,
which is the formal product of the integers $\{m_i \mid i \geq 1\}$.
Chapter 2 of Wilson \cite{Wilson1998}, or Chapter~2.3 of Ribes and Zalesskii \cite{RZ2000}, gives a basic discussion of the arithmetic of supernatural numbers. In particular, a Steinitz number
can be rewritten as the formal product of its prime factors,
\begin{equation}\label{eq-steinitzorder}
\Pi = \Pi[\vec{m}] = m_1 \cdot m_2 \cdots m_i \cdots = \prod_{p \in \pi} \ p^{n(p)} \quad , \quad 0 \leq n(p) \leq \infty \ ,
\end{equation}
where
$\pi = \{2,3,5, \ldots\}$ is the set of distinct prime numbers. The non-negative integers $n(p)$ can be thought of as the ``coordinates'' of $\Pi$ along the ``axes'' given by the primes in $\pi$.
The Steinitz number $\Pi[\vec{m}]$ is called the \emph{Steinitz order} of the inverse limit ${\mathcal S}(\vec{m})$.
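For readers who prefer a computational picture, any finite truncation of the sequence $\vec{m}$ determines a map from primes to exponents; the following Python sketch (an illustration only, using \texttt{sympy} for factorization) computes the prime coordinates of the formal product $m_1 \cdots m_k$.
\begin{verbatim}
from collections import Counter
from sympy import factorint

def steinitz_order(ms):
    """Prime-exponent map of the product m_1 * ... * m_k for a
    finite truncation (m_1, ..., m_k) of the sequence."""
    pi = Counter()
    for m in ms:
        for p, n in factorint(m).items():
            pi[p] += n
    return dict(pi)

# Example: steinitz_order([2, 6, 10]) == {2: 3, 3: 1, 5: 1},
# i.e. the prime coordinates of 2^3 * 3 * 5 = 120.
\end{verbatim}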
The following equivalence relation appears naturally in the applications of Steinitz numbers to dynamical systems.
\begin{defn}\label{def-asymptequiv}
Given $\vec{m} = \{m_i \mid i \geq 1 ~ , ~ m_i > 1 \}$ and $\vec{n} = \{n_i \mid i \geq 1 ~ , ~ n_i > 1\}$ sequences of integers, we say that the Steinitz numbers $\Pi[\vec{m}]$ and $\Pi[\vec{n}]$ are \emph{asymptotically equivalent}, and we write $\Pi[\vec{m}] \mor \Pi[\vec{n}]$, if there exist integers $1 \leq m_0 < \infty$ and $1 \leq n_0 < \infty$ such that $n_0 \cdot \Pi[\vec{m}] = m_0 \cdot \Pi[\vec{n}]$. The asymptotic equivalence class of $\Pi[\vec{m}]$ is denoted by $\Pi_a[\vec{m}]$.
\end{defn}
Definition \ref{def-asymptequiv} says that two representatives of the same asymptotic equivalence class $\Pi_a[\vec{m}]$ differ by a finite number of prime factors with finite coordinates.
Bing observed in \cite{Bing1960} that for $1$-dimensional solenoids ${\mathcal S}(\vec{m})$ and ${\mathcal S}(\vec{n})$, if $\Pi[\vec{m}] \mor \Pi[\vec{n}]$ then the solenoids are homeomorphic. McCord showed in \cite[Section~2]{McCord1965} the converse, that if ${\mathcal S}(\vec{m})$ and ${\mathcal S}(\vec{n})$ are homeomorphic spaces, then $\Pi[\vec{m}] \mor \Pi[\vec{n}]$. Aarts and Fokkink gave in \cite{AartsFokkink1991} an alternate proof of this. Thus we have:
\begin{thm}\cite{AartsFokkink1991,Bing1960}\label{thm-onedimSol}
Solenoids ${\mathcal S}(\vec{m}) $ and ${\mathcal S}(\vec{n}) $ are homeomorphic if and only if ~ $\Pi[\vec{m}] \mor \Pi[\vec{n}]$.
\end{thm}
The results in this paper were motivated in part by the question, to what extent does Theorem~\ref{thm-onedimSol} generalize to higher dimensional solenoidal manifolds?
A sequence of \emph{proper finite covering} maps
${\mathcal P} = \{\, q_{\ell} \colon M_{\ell} \to M_{\ell -1} \mid \ell \geq 1\}$, where each $M_{\ell}$ is a compact connected manifold without boundary of dimension $n \geq 1$, is called a \emph{presentation} in \cite{DHL2016c}. The inverse limit
\begin{equation}\label{eq-presentationinvlim}
{\mathcal S}_{{\mathcal P}} \equiv \lim_{\longleftarrow} ~ \{ q_{\ell } \colon M_{\ell } \to M_{\ell -1}\} ~ \subset \prod_{\ell \geq 0} ~ M_{\ell} ~
\end{equation}
is the \emph{solenoidal manifold} associated to ${\mathcal P}$. The set ${\mathcal S}_{{\mathcal P}}$ is given the relative topology, induced from the product topology, so that ${\mathcal S}_{{\mathcal P}}$ is compact and connected.
By the definition of the inverse limit, for a sequence $\{x_{\ell} \in M_{\ell} \mid \ell \geq 0\}$, we have
\begin{equation}\label{eq-presentationinvlim2}
x = (x_0, x_1, \ldots ) \in {\mathcal S}_{{\mathcal P}} ~ \Longleftrightarrow ~ q_{\ell}(x_{\ell}) = x_{\ell-1} ~ {\rm for ~ all} ~ \ell \geq 1 ~.
\end{equation}
For each $\ell \geq 0$, there is a fibration $\widehat{q}_{\ell} \colon {\mathcal S}_{{\mathcal P}} \to M_{\ell}$, given by projection onto the $\ell$-th factor in \eqref{eq-presentationinvlim}, so $\widehat{q}_{\ell}(x) = x_{\ell}$. We also make use of the covering maps denoted by
$\overline{q}_{\ell} = q_{\ell} \circ q_{\ell -1} \circ \cdots \circ q_1 \colon M_{\ell} \to M_0$. Note that $\widehat{q}_0 = \overline{q}_{\ell} \circ \widehat{q}_{\ell}$.
Solenoidal manifolds, as a special class of continua, were first studied by McCord in \cite{McCord1965}, who showed that
the continuum ${\mathcal S}_{{\mathcal P}}$ is a foliated space with foliation ${\mathcal F}_{{\mathcal P}}$, in the sense of \cite{MS2006}, where the leaves of ${\mathcal F}_{{\mathcal P}}$ are coverings of the base manifold $M_0$ via the projection map $\widehat{q}_0 \colon {\mathcal S}_{{\mathcal P}} \to M_0$ restricted to the path-connected components of ${\mathcal S}_{{\mathcal P}}$. Solenoidal manifolds are \emph{matchbox manifolds} of dimension $n$ in the terminology of \cite{ClarkHurder2013}, and the terminology ``solenoidal manifolds'' was introduced by Sullivan \cite{Sullivan2014}.
The Heisenberg $H_3({\mathbb R})$-odometers studied by Danilenko and Lema\'{n}czyk in \cite{DL2016} are all solenoidal manifolds, equipped with the leafwise action of $H_3({\mathbb R})$.
The motivation for McCord's work in \cite{McCord1965} was the question of whether a solenoidal space must be a homogeneous continuum; that is, when does the group of self-homeomorphisms act transitively on the space?
This is a particular case of the more general problem of studying the space of homeomorphisms between solenoidal manifolds, and their invariants up to homeomorphism. This problem has been studied especially in the works \cite{AartsFokkink1991,CHL2019,HL2018a}.
In this work, we continue this study by associating a \emph{prime spectrum} to a solenoidal space, and studying its invariance properties.
Given a presentation ${\mathcal P}$, define the truncated presentation
${\mathcal P}_m = \{\, q_{\ell} \colon M_{\ell} \to M_{\ell -1} \mid \ell > m\}$, then it is a formality that the solenoidal manifolds ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal P}_m}$ are homeomorphic. Thus, homeomorphism invariants for solenoidal manifolds have an ``asymptotic'' character in terms of its presentation.
For a presentation ${\mathcal P}$ as in \eqref{eq-presentationinvlim}, let $m_{\ell} > 1$ denote the degree of the covering map
$q_{\ell } \colon M_{\ell } \to M_{\ell -1}$. The product $m_{1} \cdots m_{\ell}$ equals the degree of the covering map $\overline{q}_{\ell} \colon M_{\ell} \to M_0$.
\begin{defn}\label{def-steinitzpres}
The \emph{Steinitz order} of a presentation ${\mathcal P}$ is the Steinitz number
\begin{equation}\label{eq-highersteinitzorder}
\Pi[{\mathcal P}] = LCM \{ m_1 m_2 \cdots m_{\ell} \mid \ell > 0\} \ ,
\end{equation}
where LCM denotes the least common multiple of the collection of integers. The \emph{asymptotic Steinitz order} of ${\mathcal P}$ is the class $\Pi_a[{\mathcal P}]$ associated to $\Pi[{\mathcal P}]$.
\end{defn}
That is, the Steinitz order of a presentation ${\mathcal P}$ counts the number of appearances of distinct primes in the degrees of the covering maps $\overline{q}_{\ell} \colon M_{\ell} \to M_0$ for $\ell \geq 1$. Here the $LCM$ should be understood in the sense of Steinitz numbers; see Example \ref{ex-computeLCM} for more explanation.
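Continuing the prime-exponent representation sketched earlier, the $LCM$ in \eqref{eq-highersteinitzorder} is computed coordinatewise, taking at each prime the supremum of the exponents over the family; a minimal Python illustration:
\begin{verbatim}
def steinitz_lcm(orders):
    """LCM of a family of prime-exponent maps, coordinatewise:
    the exponent of p in the LCM is the supremum over the family."""
    primes = set().union(*orders)
    return {p: max(o.get(p, 0) for o in orders) for p in primes}

# Example: for covering degrees (m_1, m_2, m_3) = (2, 6, 10), the
# partial products are 2, 12 = 2^2 * 3 and 120 = 2^3 * 3 * 5, and
# steinitz_lcm([{2: 1}, {2: 2, 3: 1}, {2: 3, 3: 1, 5: 1}])
#     == {2: 3, 3: 1, 5: 1}.
\end{verbatim}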
Our first result is a direct generalization of one of the implications of Theorem~\ref{thm-onedimSol}.
\begin{thm}\label{thm-main1}
Let ${\mathcal S}_{{\mathcal P}}$ be a solenoidal manifold with presentation ${\mathcal P}$. Then the asymptotic order $\Pi_a[{\mathcal P}]$ depends only on the homeomorphism type of ${\mathcal S}_{{\mathcal P}}$, and so defines the \emph{asymptotic Steinitz order} of ${\mathcal S}_{{\mathcal P}}$ denoted by $\Pi_a[{\mathcal S}_{{\mathcal P}}]$.
\end{thm}
Note that McCord's proof in \cite[Section~2]{McCord1965} for $1$-dimensional solenoids uses Pontrjagin Duality, and his technique of proof is only applicable for the case when the fundamental group of $M_0$ is abelian.
One cannot expect a converse to the conclusion of Theorem~\ref{thm-main1} as in Theorem~\ref{thm-onedimSol}.
For example, if $M_0 = {\mathbb T}^n$ is the $n$-torus with $n > 1$, Example~\ref{ex-toral} constructs solenoidal manifolds over ${\mathbb T}^n$ which have equal asymptotic orders, but are not homeomorphic. Examples~\ref{ex-stable} and \ref{ex-wild} construct isospectral nilpotent Cantor actions whose suspension solenoids are not homeomorphic.
The proof of Theorem~\ref{thm-main1} is based on the study of the monodromy actions of solenoidal manifolds, and the fact that a homeomorphism between solenoidal manifolds induces a return equivalence between their global monodromy Cantor actions, as discussed in Section~\ref{subsec-morita}. The Steinitz order invariants for minimal equicontinuous Cantor actions studied in this work are of independent interest, and will be described next.
We say that $({\mathfrak{X}}, \Gamma, \Phi)$ is a \emph{Cantor action} if $\Gamma$ is a countable group, ${\mathfrak{X}}$ is a Cantor space, and $\Phi \colon \Gamma \times {\mathfrak{X}} \to {\mathfrak{X}}$ is a minimal action. The action $({\mathfrak{X}},\Gamma,\Phi)$ is \emph{equicontinuous} with respect to a metric $\dX$ on ${\mathfrak{X}}$, if for all $\e >0$ there exists $\delta > 0$, such that for all $x , y \in {\mathfrak{X}}$ with
$\displaystyle \dX(x,y) < \delta$ and all $\gamma \in \Gamma$, we have $\dX(\gamma x, \gamma y) < \e$.
This property is independent of the choice of the metric on ${\mathfrak{X}}$.
Let $\Phi(\Gamma) \subset \Homeo({\mathfrak{X}})$ denote the image subgroup for an action $({\mathfrak{X}}, \Gamma, \Phi)$.
When the action is equicontinuous, the closure $\overline{\Phi(\Gamma)} \subset \Homeo({\mathfrak{X}})$ in the \emph{uniform topology of maps} is a separable profinite group. We adopt the notation ${\mathfrak{G}}(\Phi) \equiv \overline{\Phi(\Gamma)}$. More generally, we typically use letters in fraktur font to denote profinite objects. Let $\widehat{\Phi} \colon {\mathfrak{G}}(\Phi) \times {\mathfrak{X}} \to {\mathfrak{X}}$ denote the induced action of ${\mathfrak{G}}(\Phi)$ on ${\mathfrak{X}}$, which is transitive as the action $({\mathfrak{X}}, \Gamma, \Phi)$ is minimal. For $\widehat{g} \in {\mathfrak{G}}(\Phi)$, we write its action on ${\mathfrak{X}}$ by $\widehat{g} \, x = \widehat{\Phi}(\widehat{g})(x)$.
Given $x \in {\mathfrak{X}}$, introduce the isotropy group at $x$,
\begin{align}\label{iso-defn2}
{\mathfrak{D}}(\Phi, x) = \{ \widehat{g} \in {\mathfrak{G}}(\Phi) \mid \widehat{g} \, x = x\} \subset \Homeo({\mathfrak{X}}) \ ,
\end{align}
which is a closed subgroup of ${\mathfrak{G}}(\Phi)$, and thus is either finite, or is an infinite profinite group.
As the action $\widehat{\Phi} \colon {\mathfrak{G}}(\Phi) \times {\mathfrak{X}} \to {\mathfrak{X}}$ is transitive, the conjugacy class of ${\mathfrak{D}}(\Phi,x)$ in ${\mathfrak{G}}(\Phi)$ is independent of the choice of $x$. The group ${\mathfrak{D}}(\Phi,x)$ is called the \emph{discriminant} of the action $({\mathfrak{X}},\Gamma,\Phi)$ in the authors works \cite{DHL2016c,HL2018a,HL2018b}, and is called a \emph{parabolic} subgroup (of the profinite completion of a countable group) in the works by Bartholdi and Grigorchuk \cite{BartholdiGrigorchuk2000,BartholdiGrigorchuk2002}.
The \emph{Steinitz order} $\Pi[{\mathfrak{G}}]$ of a profinite group ${\mathfrak{G}}$ is a supernatural number associated to a presentation of ${\mathfrak{G}}$ as an inverse limit of finite groups (see Definition~\ref{def-steinitzprofinite}, or \cite[Chapter 2]{Wilson1998} or \cite[Chapter~2.3]{RZ2000}).
The Steinitz order has been used in the study of the analytic representations of profinite groups associated to groups acting on rooted trees, for example in the work \cite{Kionke2019}. Parabolic subgroups of countable groups, acting on rooted trees, play an important role in the study of analytic representations of such groups, see for instance \cite{BartholdiGrigorchuk2000,BartholdiGrigorchuk2002}, and the importance of developing a similar theory for representations of profinite groups was pointed out in \cite{BartholdiGrigorchuk2002}.
Recall that for a profinite group ${\mathfrak{G}}$, an open subgroup ${\mathfrak{U}} \subset {\mathfrak{G}}$ has finite index \cite[Lemma 2.1.2]{RZ2000}. Given a collection of finite positive integers $S = \{n_i \mid i \in {\mathcal I}\}$, let $LCM(S)$ denote the least common multiple of the collection, in the sense of Steinitz numbers.
\begin{defn}\label{def-steinitzorderaction}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action, with choice of a basepoint $x \in {\mathfrak{X}}$. The \emph{Steinitz orders} of the action are defined as follows:
\begin{enumerate}
\item $\Pi[{\mathfrak{G}}(\Phi)] = LCM \{\# \ {\mathfrak{G}}(\Phi)/{\mathfrak{N}} \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\}$,
\item $\Pi[{\mathfrak{D}}(\Phi)] = LCM \{\# \ {\mathfrak{D}}(\Phi,x)/({\mathfrak{N}} \cap {\mathfrak{D}}(\Phi,x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\}$,
\item $\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)] = LCM \{\# \ {\mathfrak{G}}(\Phi)/({\mathfrak{N}} \cdot {\mathfrak{D}}(\Phi,x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\}$.
\end{enumerate}
\end{defn}
The next result shows that these Steinitz orders are invariants of the isomorphism class of the action, for the notion of isomorphism or conjugacy as given in Definition~\ref{def-isomorphism}.
\begin{thm}\label{thm-isoinvariance}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action. Then the Steinitz orders for the action are independent of the choice of a basepoint $x \in {\mathfrak{X}}$.
Moreover, these orders depend only on the isomorphism class of the action, and satisfy the Lagrange identity
\begin{equation}\label{eq-productorders}
\Pi[{\mathfrak{G}}(\Phi)] = \Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)] \cdot \Pi[{\mathfrak{D}}(\Phi)] \ ,
\end{equation}
where the multiplication is taken in the sense of supernatural numbers.
\end{thm}
For example, if $\Phi \colon {\mathbb Z} \times {\mathfrak{X}} \to {\mathfrak{X}}$ is a minimal equicontinuous action of the free abelian group $\Gamma = {\mathbb Z}$, which is the monodromy of a solenoid ${\mathcal S}(\vec{m}) $ as defined by \eqref{eq-presentation1}, then the Steinitz order of the closure of the action is given by $\Pi[{\mathfrak{G}}(\Phi)] = \Pi[\vec{m}]$. As the group $\Gamma = {\mathbb Z}$ is abelian, the discriminant subgroup ${\mathfrak{D}}(\Phi)$ is trivial, so $\Pi[{\mathfrak{D}}(\Phi)]$ is trivial, and $\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)] = \Pi[{\mathfrak{G}}(\Phi)]$. On the other hand, there are Cantor actions of the Heisenberg group with ${\mathfrak{D}}(\Phi)$ a Cantor group, and their Steinitz orders $\Pi[{\mathfrak{D}}(\Phi)]$ distinguish an uncountable number of such actions. (See the examples in Section~\ref{subsec-heisenberg}.)
Isomorphism is the strongest notion of equivalence for Cantor actions. \emph{Return equivalence}, as given in Definition~\ref{def-return}, is a form of ``virtual isomorphism'' for minimal equicontinuous Cantor actions, and is natural when considering Cantor systems arising from geometric constructions, as in \cite{HL2018a,HL2018b,HL2019a}.
\begin{thm}\label{thm-returnequivorder}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action. Then the relative asymptotic Steinitz order
$\Pi_a[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)]$ is an invariant of its return equivalence class.
\end{thm}
It is shown in Section~\ref{subsec-solenoids} that the Steinitz number $\Pi[{\mathcal P}]$ of a presentation in Theorem~\ref{thm-main1} equals the relative Steinitz order
$\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)]$ for the monodromy action of the solenoid ${\mathcal S}_{{\mathcal P}}$, so that Theorem~\ref{thm-main1} follows from Theorem~\ref{thm-returnequivorder} and the results of Sections~\ref{subsec-equivalence} and \ref{subsec-morita}.
The behavior under return equivalence of actions of the other two Steinitz orders $\Pi[{\mathfrak{G}}(\Phi)]$ and $\Pi[{\mathfrak{D}}(\Phi)]$ in Definition~\ref{def-steinitzorderaction} is more subtle. In particular, the constructions in Example~\ref{ex-almosttoral} show that their asymptotic classes need not be invariant under return equivalence.
\begin{defn}\label{def-primespectrum}
Let $\pi = \{2,3,5, \ldots\}$ denote the set of primes. Given $\Pi = \prod_{p \in \pi} \ p^{n(p)}$, define:
\begin{eqnarray*}
\pi(\Pi) ~ & = & ~ \{ p \in \pi \mid 0 < n(p) \} \ , ~ \emph{the prime spectrum of} \ \Pi , \label{eq-primespectrum}\\
\pi_f(\Pi) ~ & = & ~ \{ p \in \pi \mid 0 < n(p) < \infty \} \ , ~ \emph{the finite prime spectrum of} \ \Pi , \label{eq-finiteprimespectrum} \\
\pi_{\infty}(\Pi) ~ & = & ~ \{ p \in \pi \mid n(p) = \infty \} \ , ~ \emph{the infinite prime spectrum of} \ \Pi \ .\label{eq-infiniteprimespectrum}
\end{eqnarray*}
\end{defn}
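In the prime-exponent representation used in the earlier sketches (with \texttt{float('inf')} standing for $n(p) = \infty$), these three spectra can be read off directly; the following Python lines are an illustration only.
\begin{verbatim}
INF = float('inf')

def spectra(order):
    """Prime spectra of a prime-exponent map {p: n(p)}."""
    pi_all = {p for p, n in order.items() if n > 0}
    pi_fin = {p for p, n in order.items() if 0 < n < INF}
    pi_inf = {p for p, n in order.items() if n == INF}
    return pi_all, pi_fin, pi_inf

# Example: spectra({2: INF, 3: 2, 5: INF})
#     == ({2, 3, 5}, {3}, {2, 5})
\end{verbatim}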
Note that if $\Pi \mor \Pi'$, then $\pi_{\infty}(\Pi) = \pi_{\infty}(\Pi')$. The property that $\pi_f(\Pi)$ is an \emph{infinite} set is also preserved by asymptotic equivalence of Steinitz numbers.
A profinite group ${\mathfrak{G}}$ is said to have \emph{finite prime spectrum} if ~ $\pi(\Pi({\mathfrak{G}}))$ is a finite set of primes.
If $\pi(\Pi({\mathfrak{G}})) = \{p\}$, then ${\mathfrak{G}}$ is said to be a \emph{pro-$p$ group}, for which there is an extensive literature \cite{DdSMS1999,dSSS2000}.
The property that $\Pi({\mathfrak{G}})$ has finite prime spectrum is preserved by asymptotic equivalence.
\begin{thm}\label{thm-returnequivspectra}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action. Then the infinite prime spectra of the Steinitz orders
$\Pi[{\mathfrak{G}}(\Phi)]$, $\Pi[{\mathfrak{D}}(\Phi)]$ and $\Pi_a[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)]$ depend only on the return equivalence class of the action.
The same holds for the property that the finite prime spectrum of each of these Steinitz orders is an infinite set.
\end{thm}
This result suggests a natural question:
\begin{problem}\label{prob-nilpotentorder}
How do the dynamical properties of a minimal equicontinuous Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$ depend on the asymptotic Steinitz orders associated to the action?
\end{problem}
A basic dynamical property of a minimal equicontinuous Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$ is its degree of ``regularity'', as discussed in Section \ref{subsec-lqa}. The action is \emph{topologically free} if the set of all fixed points for the elements of the action is a meagre set (see Definition~\ref{def-topfree}.) The \emph{local quasi-analytic} property of an action, as in Definition~\ref{def-LQA}, is a local (generalized) version of the topologically free property, and does not require that the acting group $\Gamma$ be countable, so applies for profinite group actions in particular. We then have the following notion:
\begin{defn} \label{def-stable}
An equicontinuous Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$ is said to be \emph{stable} if the induced profinite action $\widehat{\Phi} \colon {\mathfrak{G}}(\Phi) \times {\mathfrak{X}} \to {\mathfrak{X}}$ is locally quasi-analytic. The action is said to be \emph{wild} otherwise.
\end{defn}
A stable Cantor action satisfies local rigidity, as discussed in the works \cite{CortezMedynets2016,HL2018b,HL2020a,Li2018}. On the other hand,
there are many examples of wild Cantor actions. The actions of weakly branch groups on the boundaries of their associated trees are always wild \cite{BGS2012,GL2019}. The work \cite{ALBLLN2020} gives the construction of wild Cantor actions exhibiting a variety of characteristic properties, using algebraic methods.
In this work, we give a partial solution to Problem~\ref{prob-nilpotentorder}.
A \emph{nilpotent Cantor action} is a minimal equicontinuous Cantor action $({\mathfrak{X}},\Gamma,\Phi)$, where $\Gamma$ contains a finitely-generated nilpotent subgroup $\Gamma_0 \subset \Gamma$ of finite index. The authors showed in \cite[Theorem~4.1]{HL2020a} that a nilpotent Cantor action is always locally quasi-analytic. Moreover, it was shown in \cite[Theorem~1.1]{HL2020a} that if the actions are both effective, then the property of being a nilpotent Cantor action is preserved by return equivalence, and thus also by continuous orbit equivalence of actions.
\begin{thm}\label{thm-nilstable}
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a nilpotent Cantor action, with discriminant ${\mathfrak{D}}(\Phi) \subset {\mathfrak{G}}(\Phi)$.
If the prime spectrum $\pi(\Pi({\mathfrak{D}}(\Phi)))$ is finite, then the action is stable. In particular, if the prime spectrum $\pi(\Pi[{\mathfrak{G}}(\Phi)])$ is finite, then the action is stable.
\end{thm}
The proof of Theorem \ref{thm-nilstable} yields the following corollary.
The \emph{multiplicity} of a prime $p$ in a Steinitz number $\Pi$ is the value of $n(p)$ in the formula \eqref{eq-steinitzorder}.
\begin{cor}\label{cor-niltopfree}
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a nilpotent Cantor action. If the Steinitz order $\Pi({\mathfrak{G}}(\Phi))$ has prime multiplicities at most $2$, for all but a finite set of primes, then the action is stable.
\end{cor}
The wild actions in Example~\ref{ex-wild} have finite multiplicities at least $3$ for an infinite set of primes.
The converse of Theorem \ref{thm-nilstable} need not hold, indeed, it is possible to construct actions of abelian groups with infinite prime spectrum which are necessarily stable, see Example \ref{ex-toral}, and also stable actions of nilpotent groups with infinite prime spectrum, see Example \ref{ex-stable}. The relation of the finite prime spectrum with the stability of an action depends on the \emph{Noetherian} property of its profinite completion, as explained in Section \ref{subsec-Noetheriandynamics}.
The celebrated Grigorchuk group (see \cite{BGS2012,Grigorchuk2011} for example) is a $p$-group for $p=2$, and its action on the boundary of the $2$-adic tree is minimal and equicontinuous, and moreover is a wild action. Thus, Theorem~\ref{thm-nilstable} cannot be generalized to Cantor actions of arbitrary finitely generated groups.
The authors asked in the works \cite{HL2018b,HL2020a} whether a locally quasi-analytic nilpotent Cantor action $({\mathfrak{X}},\Gamma,\Phi)$ can be wild, more precisely, do there exist actions $({\mathfrak{X}},\Gamma,\Phi)$ such that the action of $\Gamma$ on ${\mathfrak{X}}$ is locally quasi-analytic, while the action of the completion ${\mathfrak{G}}(\Phi)$ on ${\mathfrak{X}}$ is not locally quasi-analytic? Using the constructions in Example~\ref{ex-wild}, our final result gives an answer to this question, noting that a topologically-free Cantor action is locally quasi-analytic.
\begin{thm}\label{thm-lqaWILD}
There exists an uncountable number of topologically-free Cantor actions $({\mathfrak{X}},\Gamma,\Phi)$ of the Heisenberg group $\Gamma$, distinct up to return equivalence, that are wild.
\end{thm}
Section~\ref{sec-basics} recalls some basic facts about Cantor actions as required for this work.
Section~\ref{sec-supernatural} develops in more detail the properties of Steinitz orders for Cantor actions. This yields the proofs of
Theorems~\ref{thm-isoinvariance}, \ref{thm-returnequivorder} and \ref{thm-returnequivspectra}. Then in
Section~\ref{subsec-algmodel} we recall the construction of the group chain model for a minimal equicontinuous Cantor action, and the results of Section~\ref{subsec-steinitzalgmodel} show that their Steinitz orders can be calculated using these group chains. This is used to deduce the proof of Theorem~\ref{thm-main1} from Theorem~\ref{thm-returnequivorder} in Section~\ref{subsec-solenoids}.
Section~\ref{sec-nilpotent} considers the special case of nilpotent Cantor actions, and gives an application of the prime spectrum to this class of actions.
An essential part of the abstract study of minimal equicontinuous Cantor actions is to have explicit examples of the properties being studied and characterized. This we provide in Section~\ref{sec-examples}.
Example~\ref{ex-toral} gives the most basic construction of actions with prescribed prime spectrum for ${\mathfrak{G}}(\Phi)$. The ${\mathbb Z}^n$-actions constructed there show that for $n \geq 2$, the prime spectrum does not contain sufficient information about the action to distinguish the actions up to return equivalence.
Example~\ref{ex-trivial} recalls the construction from \cite{HLvL2020} of a ``balanced" self-embedding of the integer Heisenberg group into itself, which has the property that the discriminant group ${\mathfrak{D}}(\Phi)$ of the action is trivial, but the maps in the inverse limit formula for ${\mathfrak{D}}(\Phi)$ in \eqref{eq-discformula} are not surjective.
Example~\ref{ex-stable} gives the construction of nilpotent Cantor actions of the integer Heisenberg group with arbitrary finite or infinite prime spectrum, for which the discriminant group ${\mathfrak{D}}(\Phi)$ is non-trivial and the action is stable.
Example~\ref{ex-wild} gives the constructions of nilpotent Cantor actions for which the prime spectrum is any arbitrary infinite subset of the primes, and the action is wild. These examples are then used to give the proof of Theorem~\ref{thm-lqaWILD}.
\section{Cantor actions}\label{sec-basics}
We recall some of the basic
properties of Cantor actions, as required for the proofs of the results in Section~\ref{sec-intro}.
More complete discussions of the properties of equicontinuous Cantor actions are given in the text by Auslander \cite{Auslander1988}, the papers by Cortez and Petite \cite{CortezPetite2008}, Cortez and Medynets \cite{CortezMedynets2016}, and the authors' works, in particular \cite{DHL2016c} and \cite[Section~3]{HL2019a}.
\subsection{Basic concepts}\label{subsec-basics}
Let $({\mathfrak{X}},\Gamma,\Phi)$ denote an action $\Phi \colon \Gamma \times {\mathfrak{X}} \to {\mathfrak{X}}$. We write $g\cdot x$ for $\Phi(g)(x)$ when appropriate.
The orbit of $x \in {\mathfrak{X}}$ is the subset ${\mathcal O}(x) = \{g \cdot x \mid g \in \Gamma\}$.
The action is \emph{minimal} if for all $x \in {\mathfrak{X}}$, its orbit ${\mathcal O}(x)$ is dense in ${\mathfrak{X}}$.
\eject
Let $N(\Phi) \subset \Gamma$ denote the kernel of the action homomorphism $\Phi \colon \Gamma \to \Homeo({\mathfrak{X}})$. The action is said to be \emph{effective} if $N(\Phi)$ is the trivial group. That is, the homomorphism $\Phi$ is faithful, and one also says that the action is faithful.
An action $({\mathfrak{X}},\Gamma,\Phi)$ is \emph{equicontinuous} with respect to a metric $\dX$ on ${\mathfrak{X}}$, if for all $\e >0$ there exists $\delta > 0$, such that for all $x , y \in {\mathfrak{X}}$ and $g \in \Gamma$ we have that
$\displaystyle \dX(x,y) < \delta$ implies $\dX(g \cdot x, g \cdot y) < \e$.
The property of being equicontinuous is independent of the choice of the metric on ${\mathfrak{X}}$ which is compatible with the topology of ${\mathfrak{X}}$.
Now assume that ${\mathfrak{X}}$ is a Cantor space.
Let $\CO({\mathfrak{X}})$ denote the collection of all clopen (closed and open) subsets of ${\mathfrak{X}}$, which forms a basis for the topology of ${\mathfrak{X}}$.
For $\phi \in \Homeo({\mathfrak{X}})$ and $U \in \CO({\mathfrak{X}})$, the image $\phi(U) \in \CO({\mathfrak{X}})$.
The following result is folklore, and a proof is given in \cite[Proposition~3.1]{HL2018b}.
\begin{prop}\label{prop-CO}
For ${\mathfrak{X}}$ a Cantor space, a minimal action $\Phi \colon \Gamma \times {\mathfrak{X}} \to {\mathfrak{X}}$ is equicontinuous if and only if the $\Gamma$-orbit of every $U \in \CO({\mathfrak{X}})$ is finite for the induced action $\Phi_* \colon \Gamma \times \CO({\mathfrak{X}}) \to \CO({\mathfrak{X}})$.
\end{prop}
We say that $U \subset {\mathfrak{X}}$ is \emph{adapted} to the action $({\mathfrak{X}},\Gamma,\Phi)$ if $U$ is a \emph{non-empty clopen} subset, and for any $g \in \Gamma$,
$\Phi(g)(U) \cap U \ne \emptyset$ implies that $\Phi(g)(U) = U$. The proof of \cite[Proposition~3.1]{HL2018b} shows that given $x \in {\mathfrak{X}}$ and a clopen set $W$ with $x \in W$, there is an adapted clopen set $U$ with $x \in U \subset W$.
For an adapted set $U$, the set of ``return times'' to $U$,
\begin{equation}\label{eq-adapted}
\Gamma_U = \left\{g \in \Gamma \mid g \cdot U \cap U \ne \emptyset \right\}
\end{equation}
is a subgroup of $\Gamma$, called the \emph{stabilizer} of $U$.
Then for $g, g' \in \Gamma$ with $g \cdot U \cap g' \cdot U \ne \emptyset$ we have $g^{-1} \, g' \cdot U = U$, hence $g^{-1} \, g' \in \Gamma_U$. Thus, the translates $\{ g \cdot U \mid g \in \Gamma\}$ form a finite clopen partition of ${\mathfrak{X}}$, and are in 1-1 correspondence with the quotient space $X_U = \Gamma/\Gamma_U$. Then $\Gamma$ acts by permutations on the finite set $X_U$, and so the stabilizer group $\Gamma_U \subset \Gamma$ has finite index. Note that this implies that if $V \subset U$ is a proper inclusion of adapted sets, then the inclusion $\Gamma_V \subset \Gamma_U$ is also proper.
\begin{defn}\label{def-adaptednbhds}
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a minimal equicontinuous Cantor action.
A properly descending chain of clopen sets ${\mathcal U} = \{U_{\ell} \subset {\mathfrak{X}} \mid \ell \geq 0\}$ is said to be an \emph{adapted neighborhood basis} at $x \in {\mathfrak{X}}$ for the action $\Phi$, if
$x \in U_{\ell +1} \subset U_{\ell}$ is a proper inclusion for all $ \ell \geq 0$, with $\cap_{\ell > 0} \ U_{\ell} = \{x\}$, and each $U_{\ell}$ is adapted to the action $\Phi$.
\end{defn}
Given $x \in {\mathfrak{X}}$ and $\e > 0$, Proposition~\ref{prop-CO} implies there exists an adapted clopen set $U \in \CO({\mathfrak{X}})$ with $x \in U$ and $\diam(U) < \e$. Thus, one can choose a descending chain ${\mathcal U}$ of adapted sets in $\CO({\mathfrak{X}})$ whose intersection is $x$, from which the following result follows:
\begin{prop}\label{prop-adpatedchain}
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a minimal equicontinuous Cantor action. Given $x \in {\mathfrak{X}}$, there exists an adapted neighborhood basis ${\mathcal U}$ at $x$ for the action $\Phi$.
\end{prop}
\subsection{Equivalence of Cantor actions}\label{subsec-equivalence}
We next recall the notions of equivalence of Cantor actions which we use in this work.
The first and strongest is that of
{isomorphism} of Cantor actions, which is a generalization of the usual notion of conjugacy of topological actions. For $\Gamma = {\mathbb Z}$, isomorphism corresponds to the notion of ``flip conjugacy'' introduced in the work of Boyle and Tomiyama \cite{BoyleTomiyama1998}.
The definition below agrees with the usage in the papers \cite{CortezMedynets2016,HL2018b,Li2018}.
\begin{defn} \label{def-isomorphism}
Cantor actions $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ are said to be \emph{isomorphic} if there is a homeomorphism $h \colon {\mathfrak{X}}_1 \to {\mathfrak{X}}_2$ and group isomorphism $\Theta \colon \Gamma_1 \to \Gamma_2$ so that
\begin{equation}\label{eq-isomorphism}
\Phi_1(g) = h^{-1} \circ \Phi_2(\Theta(g)) \circ h \in \Homeo({\mathfrak{X}}_1) \ \textrm{for all} \ g \in \Gamma_1 \ .
\end{equation}
\end{defn}
The notion of \emph{return equivalence} for Cantor actions is weaker than the notion of isomorphism, and is natural when considering the Cantor systems defined by the holonomy actions for solenoidal manifolds, as considered in the works \cite{HL2018a,HL2018b,HL2019a}.
For a minimal equicontinuous Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$ and an adapted set $U \subset {\mathfrak{X}}$, by a small abuse of notation, we use $\Phi_U$ to denote both the restricted action $\Phi_U \colon \Gamma_U \times U \to U$ and the induced quotient action $\Phi_U \colon H_U \times U \to U$ for $H_U = \Phi(\Gamma_U) \subset \Homeo(U)$. Then $(U, H_U, \Phi_U)$ is called the \emph{holonomy action} for $\Phi$, in analogy with the case where $U$ is a transversal to a solenoidal manifold, and
$H_U$ is the holonomy group for this transversal.
\begin{defn}\label{def-return}
Two minimal equicontinuous Cantor actions $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ are \emph{return equivalent} if there exists
an adapted set $U_1 \subset {\mathfrak{X}}_1$ for the action $\Phi_1$ and
an adapted set $U_2 \subset {\mathfrak{X}}_2$ for the action $\Phi_2$,
such that the restricted actions $(U_1, H_{1,U_1}, \Phi_{1,U_1})$ and $(U_2, H_{2,U_2}, \Phi_{2,U_2})$ are isomorphic.
\end{defn}
If the actions $\Phi_1$ and $\Phi_2$ are isomorphic in the sense of Definition~\ref{def-isomorphism}, then they are return equivalent with $U_1 = {\mathfrak{X}}_1$ and $U_2 = {\mathfrak{X}}_2$. However, the notion of return equivalence is weaker even for this case, as the conjugacy is between the holonomy groups $H_{1,{\mathfrak{X}}_1}$ and $H_{2,{\mathfrak{X}}_2}$, and not the groups $\Gamma_1$ and $\Gamma_2$.
\subsection{Morita equivalence}\label{subsec-morita}
We next relate the notion of return equivalence of Cantor actions with that of Morita equivalence of pseudogroups, as induced by a homeomorphism between solenoidal manifolds. Let $h \colon {\mathcal S}_{{\mathcal P}} \to {\mathcal S}_{{\mathcal P}'}$ be a homeomorphism between solenoidal manifolds, defined by
$${\mathcal S}_{{\mathcal P}} \equiv \lim_{\longleftarrow} ~ \{ q_{\ell } \colon M_{\ell } \to M_{\ell -1}\} ~ \subset \prod_{\ell \geq 0} ~ M_{\ell} \quad , \quad
{\mathcal S}_{{\mathcal P}'} \equiv \lim_{\longleftarrow} ~ \{ q'_{\ell } \colon M'_{\ell } \to M'_{\ell -1}\} ~ \subset \prod_{\ell \geq 0} ~ M'_{\ell} ~ ,
$$
with foliations ${\mathcal F}_{{\mathcal P}}$ and ${\mathcal F}_{{\mathcal P}'}$ defined by the path-connected components of each space, respectively.
Let $\widehat{q}_0 \colon {\mathcal S}_{{\mathcal P}} \to M_0$ and $\widehat{q}_0' \colon {\mathcal S}_{{\mathcal P}'} \to M'_0$ be the corresponding projection maps.
Then for choices of basepoints $x \in {\mathcal S}_{{\mathcal P}}$ and $x' \in {\mathcal S}_{{\mathcal P}'}$, the Cantor fibers ${\mathfrak{X}} = \widehat{q}_0^{-1}(\widehat{q}_0(x))$ and ${\mathfrak{X}}' = (\widehat{q}'_0)^{-1}(\widehat{q}'_0(x'))$ are complete transversals to the foliations ${\mathcal F}_{{\mathcal P}}$ and ${\mathcal F}_{{\mathcal P}'}$, respectively.
The homeomorphism $h$ cannot be assumed to be fiber-preserving; that is, to satisfy $h({\mathfrak{X}}) = {\mathfrak{X}}'$. For example, the work \cite{CHL2019} studies the homeomorphisms between solenoidal manifolds induced by lifts of homeomorphisms between finite covering spaces $\pi \colon {\widetilde{M}}_0 \to M_0$ and $\pi' \colon {\widetilde{M}}_0' \to M_0'$ in which case the map $h$ need not even be continuously deformable into a fiber-preserving map.
Associated to the transversal ${\mathfrak{X}}$ for ${\mathcal F}_{{\mathcal P}}$ is a pseudogroup ${\mathcal G}$ modeled on ${\mathfrak{X}}$. The elements of ${\mathcal G}$ are local homeomorphisms between open subsets $U,V \subset {\mathfrak{X}}$ induced by the holonomy transport along the leaves of ${\mathcal F}_{{\mathcal P}}$. The construction of these pseudogroups for smooth foliations is discussed by Haefliger in \cite{Haefliger1984,Haefliger2002a}, for example. The adaptation of these ideas to matchbox manifolds, where the transverse space is a Cantor set, is discussed in detail in the works \cite{ClarkHurder2013,CHL2019}.
Associated to a non-empty open subset $W \subset {\mathfrak{X}}$, we can form the restricted pseudogroup ${\mathcal G}_W$ which consists of the elements of ${\mathcal G}$ whose domain and range are contained in $W$. As the foliation ${\mathcal F}_{{\mathcal P}}$ is minimal, that is, every leaf is dense in ${\mathcal S}_{{\mathcal P}}$, the pseudogroups ${\mathcal G}$ and ${\mathcal G}_W$ are Morita equivalent in the sense of Haefliger in \cite{Haefliger1984}.
The same remarks apply to the space ${\mathcal S}_{{\mathcal P}'}$ and so there is a restricted pseudogroup ${\mathcal G}'_{W'}$ for the pseudogroup ${\mathcal G}'$ modeled on ${\mathfrak{X}}'$ defined by the holonomy transport of ${\mathcal F}_{{\mathcal P}'}$.
The homeomorphism $h \colon {\mathcal S}_{{\mathcal P}} \to {\mathcal S}_{{\mathcal P}'}$ is necessarily leaf-preserving, and a basic fact is that there exist non-empty open sets $W \subset {\mathfrak{X}}$ and $W' \subset {\mathfrak{X}}'$ such that the homeomorphism $h$ induces an isomorphism between the restricted pseudogroups ${\mathcal G}_W$ and ${\mathcal G}'_{W'}$. This is discussed in detail in \cite[Section~2.4]{HL2018a}. Moreover, as the holonomy action of ${\mathcal G}$ on ${\mathfrak{X}}$ is equicontinuous, and likewise that of ${\mathcal G}'$ on ${\mathfrak{X}}'$, the open sets $W$ and $W'$ can be chosen to be clopen. Furthermore, ${\mathcal G}_W$ is the pseudogroup induced by a minimal equicontinuous group action on $W$, and likewise for the action of ${\mathcal G}'_{W'}$ on $W'$, so $h$ induces a return equivalence between these group actions in the sense of Definition~\ref{def-return}. Then by the remarks in Section~\ref{subsec-solenoids}, the algebraic model Cantor actions for the monodromy actions of ${\mathcal S}_{{\mathcal P}}$ and ${\mathcal S}_{{\mathcal P}'}$ are return equivalent.
\subsection{Regularity of Cantor actions}\label{subsec-lqa}
We next recall some regularity properties of Cantor actions. These are used in the proof of Theorem~\ref{thm-nilstable} and the analysis of the examples constructed in Section~\ref{sec-examples}.
An action $({\mathfrak{X}},\Gamma,\Phi)$ is said to be \emph{free} if for all $x \in {\mathfrak{X}}$ and $g \in \Gamma$, $g \cdot x = x$ implies that $g = e$, the identity of the group.
The notion of a \emph{topologically free} action is a generalization of free actions, introduced by Boyle in his thesis \cite{Boyle1983}, and later used in the works by Boyle and Tomiyama \cite{BoyleTomiyama1998} for the study of classification of general Cantor actions, by Renault \cite{Renault2008} for the study of the $C^*$-algebras associated to Cantor actions, and by Li \cite{Li2018} for proving rigidity properties of Cantor actions. We recall this definition.
Let $\Fix(g) = \{x \in {\mathfrak{X}} \mid g \cdot x = x \}$, and define the \emph{isotropy set}
\begin{equation}\label{eq-isotropy}
\Iso(\Phi) = \{ x \in {\mathfrak{X}} \mid \exists ~ g \in \Gamma ~ , ~ g \ne id ~, ~g \cdot x = x \} = \bigcup_{e \ne g \in \Gamma} \ \Fix(g) \ .
\end{equation}
\begin{defn}\cite{BoyleTomiyama1998,Li2018,Renault2008} \label{def-topfree}
$({\mathfrak{X}},\Gamma,\Phi)$ is said to be \emph{topologically free} if $\Iso(\Phi) $ is meager in ${\mathfrak{X}}$.
\end{defn}
Note that if $\Iso(\Phi)$ is meager, then $\Iso(\Phi)$ has empty interior. That is, if there exists a non-identity element $g \in \Gamma$ such that $\Fix(g)$ has interior, then the action is not topologically free.
The notion of a
\emph{quasi-analytic} action, introduced in the works of {\'A}lvarez L{\'o}pez, Candel, and Moreira Galicia \cite{ALC2009,ALM2016}, is an alternative formulation of the topologically free property which generalizes to group Cantor actions where the acting group can be countable or profinite.
\begin{defn}\label{def-qa}
An action $\Phi \colon H \times {\mathfrak{X}} \to {\mathfrak{X}}$, where
$H$ is a topological group and ${\mathfrak{X}}$ a Cantor space, is said to be \emph{quasi-analytic} if for each clopen set $U \subset {\mathfrak{X}}$
and each $g \in H$ such that $\Phi(g)(U) = U$ and the restriction $\Phi(g) | U$ is the identity map on $U$,
the homeomorphism $\Phi(g)$ acts as the identity on all of ${\mathfrak{X}}$.
\end{defn}
A topologically free action is quasi-analytic. Conversely, the Baire Category Theorem implies that a quasi-analytic effective action of a \emph{countable} group is topologically free \cite[Section~3]{Renault2008}.
A local formulation of the quasi-analytic property was introduced in the works \cite{DHL2016c,HL2018a}, and has proved very useful for the study of the dynamical properties of Cantor actions.
\begin{defn} \label{def-LQA}
An action $\Phi \colon H \times {\mathfrak{X}} \to {\mathfrak{X}}$, where
$H$ is a topological group and ${\mathfrak{X}}$ a Cantor metric space with metric $\dX$, is \emph{locally quasi-analytic} (or LQA) if there exists $\e > 0$ such that for any non-empty open set $U \subset {\mathfrak{X}}$ with $\diam (U) < \e$, and for any non-empty open subset $V \subset U$, if the action of $g \in H$ satisfies $\Phi(g)(V) = V$ and the restriction $\Phi(g) | V$ is the identity map on $V$, then $\Phi(g)$ acts as the identity on all of $U$.
\end{defn}
This reformulation of the notion of topologically free actions is the basis for the following notion.
\begin{defn}\label{def-stable2}
A minimal equicontinuous Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$ is said to be \emph{stable} if the action of its profinite closure ${\mathfrak{G}}(\Phi)$ on ${\mathfrak{X}}$ is locally quasi-analytic, and otherwise is a \emph{wild} action.
\end{defn}
Wild Cantor actions include the actions of weakly branch groups on their boundaries
\cite{BartholdiGrigorchuk2000,BartholdiGrigorchuk2002,BGS2012,DudkoGrigorchuk2017,Grigorchuk2011,Nekrashevych2005,Nekrashevych2016}, actions of higher rank arithmetic lattices on quotients of their profinite completions \cite{HL2018a}, and various constructions of subgroups of wreath product groups acting on trees \cite{ALBLLN2020}.
\section{Steinitz orders of Cantor actions}\label{sec-supernatural}
In this section, we recall the properties of the Steinitz orders of profinite groups from the texts \cite{RZ2000,Wilson1998}, then consider the invariance properties of the Steinitz orders associated to a minimal equicontinuous Cantor action. This yields
proofs of Theorems~\ref{thm-isoinvariance}, \ref{thm-returnequivorder} and \ref{thm-returnequivspectra}.
We then recall the algebraic model for a minimal equicontinuous action, and derive the Steinitz orders of a Cantor action in terms of this algebraic model. The algebraic models are used in the proof of Theorem~\ref{thm-main1} in Section~\ref{subsec-solenoids}, and for the constructions of examples in Section~\ref{sec-examples}.
\subsection{Abstract Steinitz orders}\label{subsec-sodefs}
We begin with the definitions and basic properties of the Steinitz orders associated to profinite groups.
\begin{defn}\label{def-steinitzprofinite}
Let ${\mathfrak{H}} \subset {\mathfrak{G}}$ be a closed subgroup of the profinite group ${\mathfrak{G}}$. Then
\begin{equation}\label{eq-relativeorder}
\Pi[{\mathfrak{G}} : {\mathfrak{H}}] = LCM \{\# \ {\mathfrak{G}}/({\mathfrak{N}} \cdot {\mathfrak{H}}) \mid {\mathfrak{N}} \subset {\mathfrak{G}}~ \text{clopen normal subgroup}\}
\end{equation}
is the \emph{relative Steinitz order} of ${\mathfrak{H}}$ in ${\mathfrak{G}}$. The \emph{Steinitz order} of ${\mathfrak{G}}$ is
$\Pi[{\mathfrak{G}}] = \Pi[{\mathfrak{G}} : \{\widehat{e}\}]$, where $\{\widehat{e}\}$ is the identity subgroup.
\end{defn}
\begin{ex}\label{ex-computeLCM}
{\rm
For readers unfamiliar with computations using Steinitz numbers, we provide an example computation of $LCM(a,b)$. Suppose $a$ and $b$ are Steinitz numbers. Then $a = \prod_{p \in \pi} p^{n(p)}$ and $b = \prod_{p \in \pi} p^{m(p)}$, where $\pi$ is the set of all prime numbers and the exponents lie in $\{0,1,2,\ldots\} \cup \{\infty\}$. Then
$$LCM(a,b) = \prod_{p \in \pi} p^{\max\{n(p),m(p)\}}.$$
In particular, if $\{m_\ell\}_{\ell \geq 1}$ is a sequence of integers, then $LCM\{m_1 \cdot m_2 \cdots m_\ell \mid 1 \leq \ell \leq k\} = m_1 \cdots m_k$, considered as a Steinitz number. Then $LCM\{m_1 \cdots m_\ell \mid \ell \geq 1\} = \prod_{p \in \pi} p^{n(p)}$ is a Steinitz number, where for each $p \in \pi$ the exponent $n(p)$ is the total number of times (possibly infinite) that $p$ occurs as a divisor of the terms in $\{m_\ell \mid \ell \geq 1\}$.
}
\end{ex}
We also note the profinite version of Lagrange's Theorem:
\begin{prop}\cite[Proposition~2.1.2]{Wilson1998} \label{prop-lagrange}
Let ${\mathfrak{K}} \subset {\mathfrak{H}} \subset {\mathfrak{G}}$ be closed subgroups of the profinite group ${\mathfrak{G}}$. Then
\begin{equation}\label{eq-relativeorder2}
\Pi[{\mathfrak{G}} : {\mathfrak{K}}] = \Pi[{\mathfrak{G}} : {\mathfrak{H}}] \cdot \Pi[{\mathfrak{H}} : {\mathfrak{K}}] \ ,
\end{equation}
where the multiplication is taken in the sense of Steinitz numbers.
\end{prop}
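To illustrate Proposition~\ref{prop-lagrange}, take ${\mathfrak{G}} = \widehat{\mZ}_p$ the $p$-adic integers, ${\mathfrak{H}} = p \, \widehat{\mZ}_p$ and ${\mathfrak{K}} = \{\widehat{e}\}$. Then $\Pi[{\mathfrak{G}} : {\mathfrak{H}}] = p$ and $\Pi[{\mathfrak{H}} : {\mathfrak{K}}] = \Pi[{\mathfrak{H}}] = p^{\infty}$, and the identity \eqref{eq-relativeorder2} reads
\begin{equation*}
\Pi[{\mathfrak{G}} : {\mathfrak{K}}] = p^{\infty} = p \cdot p^{\infty} = \Pi[{\mathfrak{G}} : {\mathfrak{H}}] \cdot \Pi[{\mathfrak{H}} : {\mathfrak{K}}] \ ,
\end{equation*}
an equality of Steinitz numbers.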
Now let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action, with basepoint $x \in {\mathfrak{X}}$. Recall the \emph{Steinitz orders} of the action, as in Definition~\ref{def-steinitzorderaction}:
\begin{itemize}
\item $\Pi[{\mathfrak{G}}(\Phi)] = LCM \{\# \ {\mathfrak{G}}(\Phi)/{\mathfrak{N}} \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\}$,
\item $\Pi[{\mathfrak{D}}(\Phi)] = LCM \{\# \ {\mathfrak{D}}(\Phi,x)/({\mathfrak{N}} \cap {\mathfrak{D}}(\Phi,x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\}$,
\item $\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi)] = LCM \{\# \ {\mathfrak{G}}(\Phi)/({\mathfrak{N}} \cdot {\mathfrak{D}}(\Phi,x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{open normal subgroup}\} $.
\end{itemize}
We consider the dependence of these Steinitz orders on the choices made and the conjugacy class of the action.
First note that the profinite group ${\mathfrak{G}}(\Phi)$ does not depend on a choice of basepoint, so this also holds for $\Pi[{\mathfrak{G}}(\Phi)]$.
Given basepoints $x,y \in {\mathfrak{X}}$ there exists $\widehat{g}_{x,y} \in {\mathfrak{G}}(\Phi)$ such that $\widehat{g}_{x,y} x = y$. Then the conjugation action of $\widehat{g}_{x,y}$ on ${\mathfrak{G}}(\Phi)$ induces a topological isomorphism of ${\mathfrak{D}}(\Phi,x)$ with ${\mathfrak{D}}(\Phi, y)$, and maps a clopen subset of ${\mathfrak{G}}(\Phi)$ to a clopen subset of ${\mathfrak{G}}(\Phi)$. Then from the definition, we have $\Pi[{\mathfrak{D}}(\Phi,x)] = \Pi[{\mathfrak{D}}(\Phi, y)]$, and $\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi, x)] = \Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi, y)]$.
Let $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ be isomorphic minimal equicontinuous Cantor actions. By
Definition~\ref{def-isomorphism}
there is a homeomorphism $h \colon {\mathfrak{X}}_1 \to {\mathfrak{X}}_2$ and group isomorphism $\Theta \colon \Gamma_1 \to \Gamma_2$ so that
\begin{equation}\label{eq-isomorphism33}
\Phi_1(g) = h^{-1} \circ \Phi_2(\Theta(g)) \circ h \in \Homeo({\mathfrak{X}}_1) \ \textrm{for all} \ g \in \Gamma_1 \ .
\end{equation}
Let $\Phi_2' = \Phi_2 \circ \Theta \colon \Gamma_1 \to \Homeo({\mathfrak{X}}_2)$; then the images are equal, $\Phi_2'(\Gamma_1) = \Phi_2(\Gamma_2)$, and hence so are their closures, ${\mathfrak{G}}(\Phi_2') = {\mathfrak{G}}(\Phi_2)$.
The identity \eqref{eq-isomorphism33} implies that $h$ induces a topological isomorphism between ${\mathfrak{G}}(\Phi_1)$ and ${\mathfrak{G}}(\Phi_2')$ and so also between ${\mathfrak{G}}(\Phi_1)$ and ${\mathfrak{G}}(\Phi_2)$.
Thus $\Pi[{\mathfrak{G}}(\Phi_1)] = \Pi[{\mathfrak{G}}(\Phi_2)]$.
Given $x \in {\mathfrak{X}}_1$, let $y = h(x) \in {\mathfrak{X}}_2$; by \eqref{eq-isomorphism33}, the map $h$ induces an isomorphism between ${\mathfrak{D}}(\Phi_1, x)$ and ${\mathfrak{D}}(\Phi_2, y)$, and maps clopen subsets of ${\mathfrak{G}}(\Phi_1)$ to clopen subsets of ${\mathfrak{G}}(\Phi_2)$.
Thus $\Pi[{\mathfrak{D}}(\Phi_1,x)] = \Pi[{\mathfrak{D}}(\Phi_2, y)]$
and $\Pi[{\mathfrak{G}}(\Phi_1) : {\mathfrak{D}}(\Phi_1, x)] = \Pi[{\mathfrak{G}}(\Phi_2) : {\mathfrak{D}}(\Phi_2, y)]$.
These observations complete the proof of Theorem~\ref{thm-isoinvariance}.
\subsection{Orders and return equivalence}\label{subsec-re}
We next consider how the Steinitz orders behave under return equivalence of actions, and obtain the proofs of Theorems~\ref{thm-returnequivorder} and \ref{thm-returnequivspectra}.
Let $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ be minimal equicontinuous Cantor actions, and assume that the actions are return equivalent. That is, we assume there exists
an adapted set $U_1 \subset {\mathfrak{X}}_1$ for the action $\Phi_1$ and
an adapted set $U_2 \subset {\mathfrak{X}}_2$ for the action $\Phi_2$,
such that the restricted actions $(U_1, H_{1,U_1}, \Phi_{1,U_1})$ and $(U_2, H_{2,U_2}, \Phi_{2,U_2})$ are isomorphic, with the isomorphism induced by a homeomorphism $h \colon U_1 \to U_2$.
Thus, the profinite closures
$${\mathfrak{H}}_1 = \overline{H_{1,U_1}} \subset \Homeo(U_1) \, \textrm{ and } \, {\mathfrak{H}}_2 = \overline{H_{2,U_2}} \subset \Homeo(U_2)$$
are isomorphic. Fix a basepoint $x_1 \in {\mathfrak{X}}_1$ and set $x_2 = h(x_1) \in U_2$, then the map $h$ induces an isomorphism between the isotropy subgroups of the restricted actions, ${\mathfrak{D}}(\Phi_1|U_1 , x_1)$ and ${\mathfrak{D}}(\Phi_2|U_2 , x_2)$.
Our first result is that the asymptotic relative Steinitz order is an invariant of return equivalence.
\begin{prop}\label{prop-relativeinv}
Let $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ be minimal equicontinuous Cantor actions which are return equivalent. Then
\begin{equation} \label{eq-asymporders3}
\Pi_a[{\mathfrak{G}}(\Phi_1):{\mathfrak{D}}(\Phi_1)] = \Pi_a[{\mathfrak{G}}(\Phi_2) : {\mathfrak{D}}(\Phi_2)] \ .
\end{equation}
\end{prop}
\proof
For $i=1,2$, consider the isotropy subgroup of $U_i$
\begin{equation}
{\mathfrak{G}}(\Phi_i)_{U_i} = \left\{ \widehat{g} \in {\mathfrak{G}}(\Phi_i) \mid \widehat{\Phi}_i(\widehat{g})(U_i) = U_i \right\} \ .
\end{equation}
Then ${\mathfrak{G}}(\Phi_i)_{U_i}$ is a clopen subgroup in ${\mathfrak{G}}(\Phi_i)$, so has finite index $m_i = [{\mathfrak{G}}(\Phi_i) : {\mathfrak{G}}(\Phi_i)_{U_i}] = [\Gamma_i : \Gamma_{i,U_i}]$. Note that for any $\widehat{g} \in {\mathfrak{D}}(\Phi_i,x_i)$ we have $\widehat{g} \cdot x_i = x_i$, so the action of $\widehat{g}$ preserves $U_i$, and thus ${\mathfrak{D}}(\Phi_i,x_i) \subset {\mathfrak{G}}(\Phi_i)_{U_i}$.
The induced map $\widehat{\Phi}_i|U_i \colon {\mathfrak{G}}(\Phi_i)_{U_i} \to {\mathfrak{H}}_i$ is onto, and
the kernel ${\mathfrak{K}}_i = \ker \{ \widehat{\Phi}_i|U_i \colon {\mathfrak{G}}(\Phi_i)_{U_i} \to {\mathfrak{H}}_i\}$ is a closed subgroup of ${\mathfrak{G}}(\Phi_i)_{U_i}$ with ${\mathfrak{K}}_i \subset {\mathfrak{D}}(\Phi_i , x_i)$, since every element of ${\mathfrak{K}}_i$ fixes $x_i$.
Let ${\mathfrak{M}}_i \subset {\mathfrak{H}}_i$ be an open subgroup with ${\mathfrak{D}}(\Phi_i|U_i , x_i) \subset {\mathfrak{M}}_i$; then ${\mathfrak{N}}_i = (\widehat{\Phi}_i|U_i)^{-1}({\mathfrak{M}}_i)$ is an open subgroup of ${\mathfrak{G}}(\Phi_i)_{U_i}$ with ${\mathfrak{K}}_i \subset {\mathfrak{D}}(\Phi_i, x_i) \subset {\mathfrak{N}}_i$. Here ${\mathfrak{D}}(\Phi_i,x_i)$ is the isotropy group of the action of ${\mathfrak{G}}(\Phi_i)$ on ${\mathfrak{X}}_i$, and ${\mathfrak{D}}(\Phi_i|U_i,x_i)$ is the isotropy group of the action of ${\mathfrak{H}}_i \subset \Homeo(U_i)$ on $U_i$.
Conversely, let ${\mathfrak{N}}_i \subset {\mathfrak{G}}(\Phi_i)_{U_i}$ be an open subgroup with ${\mathfrak{D}}(\Phi_i,x_i) \subset {\mathfrak{N}}_i$.
Then by \cite[Lemma 2.1.2]{RZ2000}, ${\mathfrak{N}}_i$ is closed with finite index in ${\mathfrak{G}}(\Phi_i)_{U_i}$ and hence also in ${\mathfrak{G}}(\Phi_i)$, so it is clopen hence compact.
Thus the image ${\mathfrak{M}}_i = \widehat{\Phi}_i|U_i({\mathfrak{N}}_i) \subset {\mathfrak{H}}_i$ is a closed subgroup of finite index. Then \cite[Lemma 2.1.2]{RZ2000} implies it is clopen in ${\mathfrak{H}}_i$, and
${\mathfrak{D}}(\Phi_i|U_i , x_i) \subset {\mathfrak{M}}_i$. It follows from Definition~\ref{def-steinitzorderaction} that, for $i=1,2$,
\begin{equation}\label{eq-relindex}
\Pi[{\mathfrak{G}}(\Phi_i)_{U_i} :{\mathfrak{D}}(\Phi_i, x_i)] = \Pi[{\mathfrak{H}}_i : {\mathfrak{D}}(\Phi_i |U_i , x_i)] \ .
\end{equation}
The homeomorphism $h \colon U_1 \to U_2$ conjugates the actions $(U_1, {\mathfrak{H}}_1, \widehat{\Phi}_1)$ and $(U_2, {\mathfrak{H}}_2, \widehat{\Phi}_2)$ so by the results in Section~\ref{subsec-sodefs} we have for the restricted actions
$$\Pi[{\mathfrak{H}}_1 : {\mathfrak{D}}(\Phi_1 |U_1 , x_1)] = \Pi[{\mathfrak{H}}_2 : {\mathfrak{D}}(\Phi_2 |U_2 , x_2)].$$
The equality of the asymptotic Steinitz orders in \eqref{eq-asymporders3} then follows.
\endproof
Theorem \ref{thm-returnequivorder} follows immediately from Proposition~\ref{prop-relativeinv}.
The equality \eqref{eq-relindex} is the key to the proof of Proposition~\ref{prop-relativeinv}. This identity is based on the property that the homomorphism from ${\mathfrak{G}}(\Phi_i)_{U_i}$ to ${\mathfrak{H}}_i$ has kernel ${\mathfrak{K}}_i \subset {\mathfrak{D}}(\Phi_i, x_i)$, so the contributions to the Steinitz orders of ${\mathfrak{G}}(\Phi_i)_{U_i}$ and ${\mathfrak{D}}(\Phi_i, x_i)$ from the subgroup ${\mathfrak{K}}_i$ cancel out in the relative order $\Pi[{\mathfrak{G}}(\Phi_i)_{U_i} : {\mathfrak{D}}(\Phi_i, x_i)]$. However, the absolute Steinitz orders $\Pi[{\mathfrak{G}}(\Phi_i)_{U_i}]$ and $\Pi[{\mathfrak{D}}(\Phi_i, x_i)]$ may indeed include a factor coming from the Steinitz order $\Pi[{\mathfrak{K}}_i]$. Example~5.3 in \cite{HL2020a} illustrates this.
For actions with trivial discriminant, Proposition~\ref{prop-relativeinv} has the following consequence:
\begin{cor}\label{cor-2}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action with trivial discriminant invariant. Then the asymptotic Steinitz order $\Pi_a[{\mathfrak{G}}(\Phi)]$ is a return equivalence invariant.
\end{cor}
\proof
In the notation of Proposition \ref{prop-relativeinv}, by assumption ${\mathfrak{D}}(\Phi_1, x_1)$ is the trivial group. For an adapted clopen set $U_1 \subset {\mathfrak{X}}_1$ with $x_1 \in U_1$, the group ${\mathfrak{D}}(\Phi_1 | U_1, x_1)$ is a quotient of ${\mathfrak{D}}(\Phi_1, x_1)$, hence is also trivial. Thus,
\begin{equation}\label{eq-localconjugacy}
\Pi_a[{\mathfrak{G}}(\Phi_1)] = \Pi_a[{\mathfrak{G}}(\Phi_1 | U_1)] = \Pi_a[{\mathfrak{G}}(\Phi_1 | U_1) : {\mathfrak{D}}(\Phi_1 | U_1, x_1)] \ .
\end{equation}
Let $({\mathfrak{X}}_2,\Gamma_2,\Phi_2)$ be return equivalent to $({\mathfrak{X}}_1,\Gamma_1,\Phi_1)$, then the restricted actions $(U_1,H_{1,U_1},\Phi_{1,U_1})$ and $(U_2,H_{2,U_2},\Phi_{2,U_2})$ are isomorphic, which induces a topological isomorphism of the discriminant groups ${\mathfrak{D}}(\Phi_1|U_1 , x_1)$ and ${\mathfrak{D}}(\Phi_2|U_2 , x_2)$, and implies that ${\mathfrak{D}}(\Phi_2|U_2,x_2)$ is trivial. Using this remark, a formula analogous to \eqref{eq-localconjugacy} for the action $({\mathfrak{X}}_2,\Gamma_2,\Phi_2)$, and Proposition~\ref{prop-relativeinv}, we obtain the claim.
\endproof
Now consider the behavior of the Steinitz orders $\Pi[{\mathfrak{G}}(\Phi)]$ and $\Pi[{\mathfrak{D}}(\Phi, x)]$ under return equivalence of actions. The idea is to use the observation that the action of ${\mathfrak{G}}(\Phi)$ on ${\mathfrak{X}}$ is effective (by definition) to construct an effective action of ${\mathfrak{D}}(\Phi, x)$ which can be related to a similar construction for a return equivalent action, and so obtain a comparison of their Steinitz orders.
This yields the proof of Theorem \ref{thm-returnequivspectra}.
Let $({\mathfrak{X}}_1, \Gamma_1, \Phi_1)$ and $({\mathfrak{X}}_2, \Gamma_2, \Phi_2)$ be minimal equicontinuous Cantor actions, and assume that the actions are return equivalent: for
an adapted set $U_1 \subset {\mathfrak{X}}_1$ for the action $\Phi_1$ and
an adapted set $U_2 \subset {\mathfrak{X}}_2$ for the action $\Phi_2$,
there is a homeomorphism $h \colon U_1 \to U_2$ which conjugates
the restricted actions $(U_1, H_{1,U_1}, \Phi_{1,U_1})$ and $(U_2, H_{2,U_2}, \Phi_{2,U_2})$.
For $i=1,2$, the action of ${\mathfrak{G}}(\Phi_i)$ on ${\mathfrak{X}}_i$ is effective, as ${\mathfrak{G}}(\Phi_i) \subset \Homeo({\mathfrak{X}}_i)$.
Recall that
$${\mathfrak{H}}_i = \overline{H_{i,U_i}} = \left\{ \widehat{\Phi}_i(\widehat{g}) \mid \widehat{g} \in {\mathfrak{G}}(\Phi_i)_{U_i} \right\} \subset \Homeo(U_i) \ .$$
Choose representatives $\{ h_{i,j} \in \Gamma_i \mid 1 \leq j \leq m_i\}$ of the cosets of $\Gamma_i/\Gamma_{i,U_i}$ with $h_{i,1}$ the identity element, and set
$U_{i,j} = \Phi_i(h_{i,j})(U_i)$. Thus $U_{i,1} = U_i$, and we have a partition ${\mathfrak{X}}_i = U_{i,1} \cup \cdots \cup U_{i,m_i}$.
Introduce the normal core of ${\mathfrak{G}}(\Phi_i)_{U_i}$ given by
\begin{equation}
{\mathfrak{N}}(\Phi_i) = \bigcap_{j=1}^{m_i} \ \Phi_i(h_{i,j})^{-1} \cdot {\mathfrak{G}}(\Phi_i)_{U_i} \cdot \Phi_i(h_{i,j}) \subset {\mathfrak{G}}(\Phi_i)_{U_i} \ ,
\end{equation}
which is a clopen subgroup of ${\mathfrak{G}}(\Phi_i)$ of finite index $n_i = [{\mathfrak{G}}(\Phi_i) : {\mathfrak{N}}(\Phi_i)]$, where $m_i$ divides $n_i$.
In particular, we have $[{\mathfrak{G}}(\Phi_i)_{U_i} : {\mathfrak{N}}(\Phi_i)] = n_i/m_i$.
The fact that ${\mathfrak{N}}(\Phi_i)$ is a normal subgroup of ${\mathfrak{G}}(\Phi_i)$ implies that the action of ${\mathfrak{N}}(\Phi_i)$ on the partition of ${\mathfrak{X}}_i$ maps each of the sets $U_{i,j}$ to itself.
Recall that $\widehat{\Phi}_i \colon {\mathfrak{G}}(\Phi_i) \to \Homeo({\mathfrak{X}}_i)$ is the action of the profinite completion of $({\mathfrak{X}}_i,\Gamma_i,\Phi_i)$, for $i=1,2$.
For $\widehat{g} \ne \widehat{e}$, the action of $\widehat{\Phi}_i(\widehat{g})$ on ${\mathfrak{X}}_i$ is non-trivial, so if $\widehat{g} \in {\mathfrak{N}}(\Phi_i)$ also, then for some $1 \leq j \leq m_i$ the restricted action of $\widehat{\Phi}_i(\widehat{g})$ on $U_{i,j}$ must be non-trivial. That is, for some $j$ we have
\begin{equation}
\widehat{g} \not\in \ker \left\{ \widehat{\Phi}_{i,j} \equiv \widehat{\Phi}_i | U_{i,j} \colon {\mathfrak{N}}(\Phi_i) \to \Homeo(U_{i,j}) \right\} \ .
\end{equation}
Define a representation ${\widehat{\rho}}_i$ of ${\mathfrak{N}}(\Phi_i)$ into a product of $m_i$ copies of ${\mathfrak{H}}_i$ by setting, for $\widehat{g} \in {\mathfrak{N}}(\Phi_i)$,
\begin{equation}\label{eq-prodrep}
{\widehat{\rho}}_i \colon {\mathfrak{N}}(\Phi_i) \to {\mathfrak{H}}_i \times \cdots \times {\mathfrak{H}}_i \quad , \quad {\widehat{\rho}}_i(\widehat{g}) = \widehat{\Phi}_i^1(\widehat{g}) \times \cdots \times \widehat{\Phi}_i^{m_i}(\widehat{g}) \ ,
\end{equation}
where we use that ${\mathfrak{N}}(\Phi_i)$ is normal in $ {\mathfrak{G}}(\Phi_i)$, so for $\widehat{g} \in {\mathfrak{N}}(\Phi_i)$ the following is well-defined:
$$\widehat{\Phi}_i^j(\widehat{g}) = \Phi_i(h_{i,j})^{-1} \circ \widehat{\Phi}_{i,j}(\widehat{g}) \circ \Phi_i(h_{i,j}) = \widehat{\Phi}_i(h_{i,j}^{-1} \ \widehat{g} \ h_{i,j}) | U_{i} \in {\mathfrak{H}}_i \ .$$
The kernel of ${\widehat{\rho}}_i$ is trivial by the above arguments, so there is an isomorphism ${\mathfrak{N}}(\Phi_i) \cong {\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))$.
This diagonal trick to obtain the injective map ${\widehat{\rho}}_i$ was first used in the proof of \cite[Theorem~1.2]{HL2020a}.
The index $n_i = [{\mathfrak{G}}(\Phi_i) : {\mathfrak{N}}(\Phi_i)]$ is finite, so we have \begin{equation}\label{eq-compare1}
\Pi[{\mathfrak{G}}(\Phi_i)] \mor \Pi[{\mathfrak{G}}(\Phi_i)_{U_i}] \mor \Pi[{\mathfrak{N}}(\Phi_i)] = \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))] \ .
\end{equation}
Let $p_{i,1} \colon {\mathfrak{H}}_i \times \cdots \times {\mathfrak{H}}_i \to {\mathfrak{H}}_i$ denote the projection onto the first factor.
Then the composition $p_{i,1} \circ {\widehat{\rho}}_i$ equals the restriction to ${\mathfrak{N}}(\Phi_i)$ of the map
$\widehat{\Phi}_{i,U_i} \colon {\mathfrak{G}}(\Phi_i)_{U_i} \to {\mathfrak{H}}_i$. Let
$${\mathfrak{L}}_i = \ker \ p_{i,1} \colon {\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i)) \to \widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i))$$
denote the kernel of the restriction of $p_{i,1}$. Then by
Proposition~\ref{prop-lagrange}, applied to the inclusions $\{\widehat{e}\} \subset {\mathfrak{L}}_i \subset {\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))$, the
identity \eqref{eq-relativeorder2} gives
$ \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))] = \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i)) : {\mathfrak{L}}_i] \cdot \Pi[{\mathfrak{L}}_i ]$.
By the first isomorphism theorem, $\widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i)) = {\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i)) /{\mathfrak{L}}_i$, so
$$ \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i)) : {\mathfrak{L}}_i] = \Pi[\widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i))],$$
and thus we have the inequality of Steinitz orders $\Pi[\widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i))] \leq \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))]$.
Now the fact that ${\mathfrak{N}}(\Phi_i)$ has finite index in ${\mathfrak{G}}(\Phi_i)_{U_i}$ implies the same holds for its image under $\widehat{\Phi}_{i,U_i}$, so we have
$\Pi[\widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i))] \mor \Pi[{\mathfrak{H}}_i]$.
Thus we have the estimate on Steinitz orders
\begin{equation}\label{eq-compare2}
\Pi[{\mathfrak{H}}_i] \mor \Pi[\widehat{\Phi}_{i,U_i}({\mathfrak{N}}(\Phi_i))] \leq \Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))] \ .
\end{equation}
On the other hand, from the embedding in \eqref{eq-prodrep} we have
\begin{equation}\label{eq-compare3}
\Pi[{\widehat{\rho}}_i({\mathfrak{N}}(\Phi_i))] \leq \Pi[{\mathfrak{H}}_i] \cdots \Pi[{\mathfrak{H}}_i] = \Pi[{\mathfrak{H}}_i]^{m_i} \ .
\end{equation}
Combining the estimates \eqref{eq-compare1}, \eqref{eq-compare2}, and \eqref{eq-compare3} we obtain that $\pi_{\infty}(\Pi[{\mathfrak{H}}_i]) = \pi_{\infty}(\Pi[{\mathfrak{N}}(\Phi_i)]) = \pi_{\infty}(\Pi[{\mathfrak{G}}(\Phi_i)])$. Moreover, $\pi_{f}(\Pi[{\mathfrak{H}}_i])$ and $\pi_{f}(\Pi[{\mathfrak{G}}(\Phi_i)])$ differ by at most a finite set of primes.
As ${\mathfrak{H}}_1$ and ${\mathfrak{H}}_2$ are topologically isomorphic, this shows that the prime spectra of ${\mathfrak{G}}(\Phi_1)$ and ${\mathfrak{G}}(\Phi_2)$
satisfy the claim of Theorem~\ref{thm-returnequivspectra}.
We can apply the same analysis as above to the isotropy subgroups ${\mathfrak{D}}(\Phi_1, x_1)$ and ${\mathfrak{D}}(\Phi_2, x_2)$ to obtain the stated relations between their prime spectra, completing the proof of Theorem~\ref{thm-returnequivspectra}.
\subsection{Algebraic model} \label{subsec-algmodel}
In this section we reformulate the abstract Definition~\ref{def-steinitzprofinite} of the Steinitz order invariants in terms of an algebraic model for a Cantor action. This provides an effective method of calculating and working with these invariants.
We first recall the construction of the algebraic models for an action $({\mathfrak{X}}, \Gamma, \Phi)$ and its profinite completion.
For $x \in {\mathfrak{X}}$, by Proposition~\ref{prop-adpatedchain} there exists an adapted neighborhood basis ${\mathcal U} = \{U_{\ell} \subset {\mathfrak{X}} \mid \ell \geq 0\}$ at $x$ for the action $\Phi$.
Let $\Gamma_{\ell} = \Gamma_{U_{\ell}}$ denote the stabilizer group of $U_{\ell}$.
Then we obtain a strictly descending chain of finite index subgroups
\begin{equation}\label{eq-groupchain}
{\mathcal G}^x_{{\mathcal U}} = \{\Gamma = \Gamma_0 \supset \Gamma_1 \supset \Gamma_2 \supset \cdots \} \ .
\end{equation}
Note that each $\Gamma_{\ell}$ has finite index in $\Gamma$, and is not assumed to be a normal subgroup. Also note that while the intersection of the chain ${\mathcal U}$ is a single point $\{x\}$, the intersection of the stabilizer groups in ${\mathcal G}^x_{{\mathcal U}}$ need not be the trivial group.
Next, set $X_{\ell} = \Gamma/\Gamma_{\ell}$ and note that $\Gamma$ acts transitively on the left on $X_{\ell}$.
The inclusion $\Gamma_{\ell +1} \subset \Gamma_{\ell}$ induces a natural $\Gamma$-invariant quotient map $p_{\ell +1} \colon X_{\ell +1} \to X_{\ell}$.
Introduce the inverse limit
\begin{eqnarray}
X_{\infty} & \equiv & \lim_{\longleftarrow} ~ \{ p_{\ell +1} \colon X_{\ell +1} \to X_{\ell} \mid \ell \geq 0 \} \label{eq-invlimspace}\\
& = & \{(x_0, x_1, \ldots ) \mid p_{\ell +1 }(x_{\ell + 1}) = x_{\ell} ~ {\rm for ~ all} ~ \ell \geq 0 ~\} ~ \subset \prod_{\ell \geq 0} ~ X_{\ell} \ , \nonumber
\end{eqnarray}
which is a Cantor space with the Tychonoff topology, and the actions of $\Gamma$ on the factors $X_{\ell}$ induce a minimal equicontinuous action on the inverse limit, denoted by $\Phi_x \colon \Gamma \times X_{\infty} \to X_{\infty}$. Denote the points in $X_{\infty}$ by
$x = (x_{\ell}) \in X_{\infty}$. There is a natural basepoint $x_{\infty} \in X_{\infty}$ given by the cosets of the identity element $e \in \Gamma$, so $x_{\infty} = (e \Gamma_{\ell})$. A basis of neighborhoods of $x_{\infty}$ is given by the clopen sets
\begin{equation}\label{eq-openbasis}
U_{\ell} = \left\{ x = (x_{\ell}) \in X_{\infty} \mid x_i = e \Gamma_i \in X_i~, ~ 0 \leq i < \ell ~ \right\} \subset X_{\infty} \ .
\end{equation}
For each $\ell \geq 0$, we have the ``partition coding map'' $\Theta_{\ell} \colon {\mathfrak{X}} \to X_{\ell}$ which is $\Gamma$-equivariant. The maps $\{\Theta_{\ell}\}$ are compatible with the quotient maps in \eqref{eq-invlimspace}, and so they induce a limit map $\Theta_x \colon {\mathfrak{X}} \to X_{\infty}$. The fact that the diameters of the clopen sets $\{U_{\ell}\}$ tend to zero implies that $\Theta_x$ is a homeomorphism. Moreover, $\Theta_x(x) = x_{\infty} \in X_{\infty}$.
\begin{thm}\cite[Appendix~A]{DHL2016a}
The map $\Theta_x \colon {\mathfrak{X}} \to X_{\infty}$ induces an isomorphism of the Cantor actions $({\mathfrak{X}},\Gamma,\Phi)$ and $(X_{\infty}, \Gamma, \Phi_x)$.
\end{thm}
The action $(X_{\infty}, \Gamma, \Phi_x)$ is called the \emph{odometer model} centered at $x$ for the action $({\mathfrak{X}},\Gamma,\Phi)$.
The dependence of the model on the choices of a base point $x \in {\mathfrak{X}}$ and adapted neighborhood basis ${\mathcal U}$ is discussed in detail in the works \cite{DHL2016a,FO2002,HL2018a,HL2019a}.
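For instance, taking $\Gamma = {\mathbb Z}$ and the chain $\Gamma_{\ell} = p^{\ell} {\mathbb Z}$ for a fixed prime $p$, we have $X_{\ell} = {\mathbb Z}/p^{\ell}{\mathbb Z}$, and the inverse limit $X_{\infty}$ is identified with the $p$-adic integers $\widehat{\mZ}_p$, on which ${\mathbb Z}$ acts by translation. This is the classical $p$-adic odometer, or adding machine.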
Next, we develop the algebraic model for the profinite action $\widehat{\Phi} \colon {\mathfrak{G}}(\Phi) \times {\mathfrak{X}} \to {\mathfrak{X}}$ of the completion ${\mathfrak{G}}(\Phi) \equiv \overline{\Phi(\Gamma)} \subset \Homeo({\mathfrak{X}})$.
Fix a choice of group chain $\{\Gamma_{\ell} \mid \ell \geq 0\}$ as above, which provides an algebraic model for the action $({\mathfrak{X}},\Gamma,\Phi)$.
For each $\ell \geq 1$, let $C_{\ell} \subset \Gamma_{\ell}$ denote the \emph{core} of $\Gamma_{\ell}$, that is, the largest subgroup of $\Gamma_{\ell}$ which is normal in $\Gamma$. So
\begin{equation}\label{eq-core}
C_{\ell} ~ = {\rm Core}(\Gamma_{\ell}) ~ = ~ \bigcap_{g \in \Gamma} ~ g \ \Gamma_{\ell} \ g^{-1} ~ \subset \Gamma_{\ell} ~ .
\end{equation}
As $\Gamma_{\ell}$ has finite index in $\Gamma$, the same holds for $C_{\ell}$. Observe that for all $\ell \geq 0$, we have $C_{\ell +1} \subset C_{\ell}$.
Introduce the quotient group $Q_{\ell} = \Gamma/C_{\ell}$ with identity element $e_{\ell} \in Q_{\ell}$. There are natural quotient maps $q_{\ell+1} \colon Q_{\ell +1} \to Q_{\ell}$, and we can form the inverse limit group
\begin{eqnarray}
\widehat{\Gamma}_{\infty} & \equiv & \lim_{\longleftarrow} ~ \{ q_{\ell +1} \colon Q_{\ell +1} \to Q_{\ell} \mid \ell \geq 0 \} \label{eq-invgroup}\\
& = & \{(g_{\ell}) = (g_0, g_1, \ldots ) \mid g_{\ell} \in Q_{\ell} ~ , ~ q_{\ell +1 }(g_{\ell + 1}) = g_{\ell} ~ {\rm for ~ all} ~ \ell \geq 0 ~\} ~ \subset \prod_{\ell \geq 0} ~ Q_{\ell} ~ , \label{eq-coordinates}
\end{eqnarray}
which is a Cantor space with the Tychonoff topology. The left actions of $\Gamma$ on the spaces $X_{\ell} = \Gamma/\Gamma_{\ell}$ induce a minimal equicontinuous action of $\widehat{\Gamma}_{\infty}$ on $X_{\infty}$, again denoted by $\widehat{\Phi} \colon \widehat{\Gamma}_{\infty} \times X_{\infty} \to X_{\infty}$. Note that the isotropy group of the identity coset for the action of $Q_{\ell} = \Gamma/C_{\ell}$ on $X_{\ell}= \Gamma/\Gamma_{\ell}$ is the subgroup $D_{\ell} = \Gamma_{\ell}/C_{\ell}$.
Denote the points in $\widehat{\Gamma}_{\infty}$ by
$\widehat{g} = (g_{\ell}) \in \widehat{\Gamma}_{\infty}$ where $g_{\ell} \in Q_{\ell}$. There is a natural basepoint $\widehat{e}_{\infty} \in \widehat{\Gamma}_{\infty}$ given by the cosets of the identity element $e \in \Gamma$, so $\widehat{e}_{\infty} = (e_{\ell})$ where $e_{\ell} = e C_{\ell} \in Q_{\ell}$ is the identity element in $Q_{\ell}$.
For each $\ell \geq 0$, let $\Pi_{\ell} \colon \widehat{\Gamma}_{\infty} \to Q_{\ell}$ denote the projection onto the $\ell$-th factor in \eqref{eq-invgroup}, so in the coordinates of \eqref{eq-coordinates}, we have $\Pi_{\ell}(\widehat{g}) = g_{\ell} \in Q_{\ell}$.
The maps $\Pi_{\ell}$ are continuous for the profinite topology on $\widehat{\Gamma}_{\infty}$, so the pre-images of points in $Q_{\ell}$ are clopen subsets. In particular, the fiber of $\Pi_{\ell}$ over $e_{\ell}$ is the normal subgroup
\begin{equation}\label{eq-opennbhds}
\widehat{C}_{\ell} = \Pi_{\ell}^{-1}(e_{\ell}) = \{(g_{i}) \in \widehat{\Gamma}_{\infty} \mid g_{i} = e_{i} \in Q_{i} ~ , ~ 0 \leq i \leq \ell \} \ .
\end{equation}
Then the collection $\{\widehat{C}_{\ell} \mid \ell \geq 1\}$ forms a basis of clopen neighborhoods of $\widehat{e}_{\infty} \in \widehat{\Gamma}_{\infty}$. That is, for each clopen set $\widehat{U} \subset \widehat{\Gamma}_{\infty}$ with $\widehat{e}_{\infty} \in \widehat{U}$, there exists $\ell_0 > 0$ such that $\widehat{C}_{\ell} \subset \widehat{U}$ for all $\ell \geq \ell_0$.
\begin{thm}\cite[Theorem~4.4]{DHL2016a}\label{thm-fundamentaliso}
There is an isomorphism ${\widehat{\tau}} \colon {\mathfrak{G}}(\Phi) \to \widehat{\Gamma}_{\infty}$ which conjugates the profinite action
$({\mathfrak{X}}, {\mathfrak{G}}(\Phi), \widehat{\Phi})$ with the profinite action
$(X_{\infty}, \widehat{\Gamma}_{\infty}, \widehat{\Phi})$. In particular, ${\widehat{\tau}}$
identifies the isotropy group ${\mathfrak{D}}(\Phi, x) = {\mathfrak{G}}(\Phi)_{x}$ with the inverse limit subgroup
\begin{equation}\label{eq-discformula}
D_{\infty} = \varprojlim \ \{q_{\ell +1} \colon \Gamma_{\ell +1}/C_{\ell+1} \to \Gamma_{\ell}/C_{\ell} \mid \ell \geq 0\} \subset \widehat{\Gamma}_{\infty}~ .
\end{equation}
\end{thm}
The maps $q_{\ell +1}$ in the formula \eqref{eq-discformula} need not be surjections, and thus the calculation of the inverse limit $D_{\infty}$ can involve some subtleties. For example, it is possible that each group $D_{\ell}$ is non-trivial for $\ell > 0$, and yet $D_{\infty}$ is the trivial group (see Example~\ref{ex-trivial}). This phenomenon leads to the following considerations.
Observe that the formula \eqref{eq-discformula} implies that the restriction of the projection map $\Pi_{\ell}$ to $D_{\infty}$ yields a map $\Pi_{\ell} \colon D_{\infty} \to D_{\ell} \equiv \Gamma_{\ell}/C_{\ell} \subset Q_{\ell}$.
Set
\begin{equation}\label{eq-discimage}
D_{\ell}^* = \Pi_{\ell}(D_{\infty}) \subset D_{\ell} \ .
\end{equation}
We recall a definition from \cite[Definition~5.6]{DHL2016a}:
\begin{defn}\label{def-normalform}
A group chain $\{\Gamma_{\ell} \mid \ell \geq 0\}$ in $\Gamma$ is in \emph{normal form} if $D_{\ell}^* = D_{\ell}$, for $\ell \geq 0$.
\end{defn}
Recall that if the group chain $\{\Gamma_{\ell} \mid \ell \geq 0\}$ is in normal form,
then each of the bonding maps $q_{\ell +1}$ in \eqref{eq-discformula} is a surjection. We note that, given any group chain ${\mathcal G} = \{\Gamma_\ell \mid \ell \geq 0\}$, by \cite[Proposition~5.7]{DHL2016a} there exists a group chain ${\mathcal G}' = \{\Gamma_{\ell}' \mid \ell \geq 0\}$ in normal form which is equivalent to ${\mathcal G}$; that is, up to a choice of infinite subsequences the group chains are intertwined, $\Gamma_0 \supset \Gamma_1 ' \supset \Gamma_1 \supset \Gamma_2' \supset \cdots$. As explained in \cite{DHL2016a}, the actions defined by equivalent group chains ${\mathcal G}$ and ${\mathcal G}'$ using formulas \eqref{eq-invlimspace}--\eqref{eq-invgroup} are isomorphic, and the homeomorphism implementing the isomorphism preserves the basepoint.
\subsection{Steinitz orders for algebraic models} \label{subsec-steinitzalgmodel}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action, choose $x \in {\mathfrak{X}}$ and an adapted neighborhood basis ${\mathcal U}$ at $x$, and let ${\mathcal G} = \{\Gamma_{\ell} \mid \ell \geq 0\}$ be the associated group chain formed by the stabilizer subgroups of the clopen sets $U_{\ell}$ in ${\mathcal U}$. We continue with the notation of Section~\ref{subsec-algmodel}.
For $\ell \geq 0$, we have the finite sets $X_{\ell} = \Gamma/\Gamma_{\ell}$, and the finite groups $Q_{\ell} = \Gamma/C_{\ell}$,
$D_{\ell} = \Gamma_{\ell}/C_{\ell}$ and $D_{\ell}^* = \Pi_{\ell}(D_{\infty}) \subset D_{\ell}$.
Introduce the sequences of integers:
\begin{equation}\label{eq-dims}
m_{\ell} = \# \ X_{\ell} \quad ; \quad n_{\ell} = \# \ Q_{\ell} \quad ; \quad k_{\ell} = \# \ D_{\ell} \quad ; \quad k_{\ell}^* = \# \ D_{\ell}^* \ .
\end{equation}
We make some elementary observations about these sequences of integers.
Lagrange's Theorem implies that $n_{\ell} = m_{\ell} k_{\ell}$ for $\ell \geq 0$, and we also have $k_{\ell}^* \leq k_{\ell}$.
Note that $m_{\ell +1} = m_{\ell} \cdot [\Gamma_{\ell} : \Gamma_{\ell+1}]$. As the inclusion $\Gamma_{\ell +1} \subset \Gamma_{\ell}$ is proper, we have $[\Gamma_{\ell} : \Gamma_{\ell+1}] > 1$ and so $\{ m_{\ell} \mid \ell \geq 0\}$ is a strictly increasing sequence.
Also, $C_{\ell +1} \subset C_{\ell}$, and $n_{\ell+1} = n_{\ell} \cdot [C_{\ell} : C_{\ell+1}]$ so $\{ n_{\ell} \mid \ell \geq 0\}$ is a non-decreasing sequence.
As $k_{\ell}^*$ is the order of the projection of $D_{\infty}$ into $Q_{\ell}$, the sequence $\{ k_{\ell}^* \mid \ell \geq 0\}$ is non-decreasing. For instance, when $D_\infty$ is a finite group, there exists $m \geq 0$ such that $k_\ell^* = k_{\ell+1}^*$ for all $\ell \geq m$.
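In the simplest case, these sequences can be computed directly. For the $p$-adic odometer, with $\Gamma = {\mathbb Z}$ and $\Gamma_{\ell} = p^{\ell} {\mathbb Z}$, every subgroup of ${\mathbb Z}$ is normal, so $C_{\ell} = \Gamma_{\ell}$, and hence $m_{\ell} = n_{\ell} = p^{\ell}$ while $k_{\ell} = k_{\ell}^* = 1$ for all $\ell \geq 0$.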
\begin{prop}\label{prop-steinitzinvariance}
Let $({\mathfrak{X}}, \Gamma, \Phi)$ be a minimal equicontinuous Cantor action.
Given a basepoint $x \in {\mathfrak{X}}$, and an adapted neighborhood basis ${\mathcal U}$ at $x$, let ${\mathcal G} = \{\Gamma_{\ell} \mid \ell \geq 0\}$ be the associated group chain formed by the stabilizer subgroups of the clopen sets $U_{\ell}$ in ${\mathcal U}$. Then the Steinitz orders for the action, as defined in Definition~\ref{def-steinitzorderaction}, can be calculated as follows:
\begin{enumerate}
\item \quad $\Pi[{\mathfrak{G}}(\Phi)] = LCM \ \{n_{\ell} \mid \ell \geq 0\}$,
\item \quad $\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi , x)] = LCM \ \{m_{\ell} \mid \ell \geq 0\}$,
\item \quad $\Pi[{\mathfrak{D}}(\Phi, x)] = LCM \ \{k^*_{\ell} \mid \ell \geq 0\} \leq LCM \ \{k_{\ell} \mid \ell \geq 0\}$ \ .
\end{enumerate}
\end{prop}
\proof
By Theorem~\ref{thm-fundamentaliso},
there is an isomorphism ${\widehat{\tau}} \colon {\mathfrak{G}}(\Phi) \to \widehat{\Gamma}_{\infty}$
which conjugates the profinite action $({\mathfrak{X}}, {\mathfrak{G}}(\Phi), \widehat{\Phi})$
with the profinite action $(X_{\infty}, \widehat{\Gamma}_{\infty}, \widehat{\Phi})$.
By the results of Section~\ref{subsec-sodefs}, it suffices to show that the formulas in Proposition \ref{prop-steinitzinvariance}, (1)-(3), hold for the action $(X_{\infty}, \widehat{\Gamma}_{\infty}, \widehat{\Phi})$.
Recall that $\widehat{C}_{\ell}$ is the normal clopen subgroup of $\widehat{\Gamma}_{\infty}$ defined in \eqref{eq-opennbhds}. Since the collection $\{\widehat{C}_\ell\}_{\ell \geq 0}$ forms a neighborhood basis for the identity in $\widehat{\Gamma}_\infty$, for any clopen normal subgroup ${\mathcal N} \subset \widehat{\Gamma}_{\infty}$ there exists $\ell > 0$ such that $\widehat{C}_{\ell} \subset {\mathcal N}$. It follows that
$\# (\widehat{\Gamma}_{\infty}/{\mathcal N})$ divides $\# (\widehat{\Gamma}_{\infty}/\widehat{C}_{\ell}) = \# Q_{\ell}$. Noting that $\widehat{C}_{\ell}$ is itself a clopen normal subgroup, we have
\begin{eqnarray}
\lefteqn{ LCM \{\# \ \widehat{\Gamma}_{\infty}/{\mathcal N} \mid {\mathcal N} \subset \widehat{\Gamma}_{\infty} ~ \text{clopen normal subgroup} \} = } \label{eq-reduction} \\
& & LCM \{\# \ \widehat{\Gamma}_{\infty}/\widehat{C}_{\ell} \mid \ell > 0\} = LCM \{\# \ Q_{\ell} \mid \ell > 0\} \ . \nonumber
\end{eqnarray}
Then by Definition \ref{def-steinitzorderaction},
\begin{eqnarray*}
\Pi[{\mathfrak{G}}(\Phi)] & = & LCM \{\# \ {\mathfrak{G}}(\Phi)/{\mathfrak{N}} \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{clopen normal subgroup}\} \\
& = & LCM \{\# \ \widehat{\Gamma}_{\infty}/{\mathcal N} \mid {\mathcal N} \subset \widehat{\Gamma}_{\infty} ~ \text{clopen normal subgroup} \} \\
& = & LCM \{\# \ Q_{\ell} \mid \ell > 0\} = LCM \{ \ n_{\ell} \mid \ell > 0\} \ .
\end{eqnarray*}
The proofs of the identities (2) and (3) in Proposition \ref{prop-steinitzinvariance} require an additional consideration. Introduce the closures of the subgroups $\Gamma_{\ell}$, for $\ell > 0$,
\begin{equation}
\widehat{\Gamma}_{\ell} = \overline{\Gamma_{\ell}} = \left\{ \widehat{g} = (g_{i}) \in \widehat{\Gamma}_{\infty} \mid g_i \in \Gamma_{\ell}/C_i \subset Q_i ~ \text{for all} ~ i \geq \ell \right\} \subset \widehat{\Gamma}_{\infty} \ .
\end{equation}
Then each $\widehat{\Gamma}_{\ell}$ is a clopen subset of $\widehat{\Gamma}_{\infty}$, and from the formula \eqref{eq-discformula} we have $D_{\infty} \subset \widehat{\Gamma}_{\ell}$ for all $\ell \geq 0$, and moreover, we have
\begin{equation}\label{eq-intersection}
D_{\infty} = \bigcap_{\ell > 0} ~ \widehat{\Gamma}_{\ell} \ .
\end{equation}
The equality in \eqref{eq-intersection} follows since the action on $X_{\infty}$ of an element $\widehat{g} \in \widehat{\Gamma}_{\ell}$ preserves the clopen set $U_{\ell}$ defined by \eqref{eq-openbasis}; thus if $\widehat{g} \in \widehat{\Gamma}_{\ell}$ for all $\ell > 0$, then its action fixes the intersection point $x_{\infty} = \cap_{\ell > 0} \ U_{\ell}$, and so $\widehat{g} \in D_{\infty}$.
Also, observe that for $\ell > 0$ we have the identity
\begin{equation}
\Gamma_{\ell} = \left\{ g \in \Gamma \mid \widehat{g} = (g,g, \ldots) \in \widehat{\Gamma}_{\ell} \right\} \ ,
\end{equation}
and consequently there is an isomorphism $\widehat{\Gamma}_{\infty}/\widehat{\Gamma}_{\ell} \cong \Gamma/\Gamma_{\ell}$.
Next, observe that given a clopen normal subgroup ${\mathcal N} \subset \widehat{\Gamma}_{\infty}$, by \eqref{eq-intersection} there exists $\ell$ such that $\widehat{\Gamma}_{\ell} \subset {\mathcal N} \cdot D_{\infty}$. Indeed, the sets $\{\widehat{\Gamma}_{\ell}\}$ form a decreasing chain of compact sets whose intersection $D_{\infty}$ is contained in the open set ${\mathcal N} \cdot D_{\infty}$, so $\widehat{\Gamma}_{\ell} \subset {\mathcal N} \cdot D_{\infty}$ for $\ell$ sufficiently large. Then the identity (2) in Proposition \ref{prop-steinitzinvariance} follows from the fact that $\widehat{\Gamma}_{\ell}$ is a clopen neighborhood of $D_{\infty}$, and reasoning as for \eqref{eq-reduction}, we have
\begin{eqnarray*}
\Pi[{\mathfrak{G}}(\Phi) : {\mathfrak{D}}(\Phi , x)] & = & LCM \{\# \ {\mathfrak{G}}(\Phi)/({\mathfrak{N}} \cdot {\mathfrak{D}}(\Phi , x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{clopen normal subgroup}\} \\
& = & LCM \{\# \ \widehat{\Gamma}_{\infty}/({\mathcal N} \cdot D_{\infty}) \mid {\mathcal N} \subset \widehat{\Gamma}_{\infty}~ \text{clopen normal subgroup}\} \\
& = & LCM \{\# \ \widehat{\Gamma}_{\infty}/\widehat{\Gamma}_{\ell} \mid \ell > 0\} = LCM \{\# \ \Gamma/\Gamma_{\ell} \mid \ell > 0\} \\
& = & LCM \{ \ m_{\ell} \mid \ell > 0\} \ .
\end{eqnarray*}
Similarly, the proof of the identity (3) in Proposition \ref{prop-steinitzinvariance} follows from the calculations:
\begin{eqnarray*}
\Pi[{\mathfrak{D}}(\Phi , x)] & = & LCM \{\# \ {\mathfrak{D}}(\Phi , x)/({\mathfrak{N}} \cap {\mathfrak{D}}(\Phi , x)) \mid {\mathfrak{N}} \subset {\mathfrak{G}}(\Phi)~ \text{clopen normal subgroup}\} \\
& = & LCM \{\# \ D_{\infty}/({\mathcal N} \cap D_{\infty}) \mid {\mathcal N} \subset \widehat{\Gamma}_{\infty} ~ \text{clopen normal subgroup} \} \\
& = & LCM \{\# \ D_{\infty}/ (\widehat{C}_{\ell} \cap D_{\infty}) \mid \ell > 0\} \\
& = & LCM \{\# \ \Pi_{\ell}(D_{\infty}) \mid \ell > 0\}
= LCM \{ k^*_{\ell} \mid \ell > 0\} \ .
\end{eqnarray*}
This completes the proof of Proposition~\ref{prop-steinitzinvariance}.
\endproof
As remarked in the discussion of Definition~\ref{def-normalform}, the condition that the chain ${\mathcal G}$ is descending does not impose sufficient restrictions on the behavior of the orders of the groups $D_{\ell} = \Gamma_\ell/C_\ell$ in order to compute $\Pi[{\mathfrak{D}}(\Phi , x)]$.
Rather, computing $LCM\{\# \ D_\ell \mid \ell \geq 0\} = LCM\{k_\ell \mid \ell \geq 0\}$ yields only an upper bound on the Steinitz order of $D_{\infty}$. However, if we are given that the chain ${\mathcal G}$ is in normal form, as in Definition~\ref{def-normalform}, then this indeterminacy is removed.
\begin{cor}
Let ${\mathcal G} = \{\Gamma_{\ell} \mid \ell \geq 0\}$ be a group chain in normal form which gives an algebraic model for a Cantor action $({\mathfrak{X}}, \Gamma, \Phi)$. Then we have
\begin{equation}\label{eq-SOnormalform}
\Pi[{\mathfrak{D}}(\Phi , x)] = LCM \{\# \ D_{\ell} \mid \ell > 0\} = LCM \{ k_{\ell} \mid \ell > 0\} \ .
\end{equation}
\end{cor}
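In particular, if each $\Gamma_{\ell}$ is a normal subgroup of $\Gamma$, then $C_{\ell} = \Gamma_{\ell}$ and each $D_{\ell}$ is trivial, so the chain is automatically in normal form, and \eqref{eq-SOnormalform} yields $\Pi[{\mathfrak{D}}(\Phi , x)] = 1$; that is, the discriminant of the action is trivial.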
It is often the case when constructing examples of Cantor actions, that the normal form property is guaranteed by the choices in the construction, and then \eqref{eq-SOnormalform} calculates the Steinitz order of the discriminant of the action.
\subsection{Steinitz orders of solenoidal manifolds}\label{subsec-solenoids}
We relate the asymptotic Steinitz order for a tower of coverings with the Steinitz order invariants for Cantor actions.
This yields the proof of Theorem~\ref{thm-main1}. We first recall some preliminary constructions for solenoidal manifolds.
Let $M_0$ be a compact connected manifold without boundary. Let
${\mathcal P} = \{\, q_{\ell} \colon M_{\ell} \to M_{\ell -1} \mid \ell \geq 1\}$ be a presentation as in Section~\ref{sec-intro}. Let ${\mathcal S}_{{\mathcal P}}$ be the inverse limit of this presentation as in \eqref{eq-presentationinvlim}. A point $x \in {\mathcal S}_{{\mathcal P}}$ is represented by a sequence, $x = (x_0, x_1, \ldots )$ with $x_{\ell} \in M_{\ell}$.
For each $\ell \geq 0$, projection onto the $\ell$-th factor in \eqref{eq-presentationinvlim} yields a fibration denoted by
$\widehat{q}_{\ell} \colon {\mathcal S}_{{\mathcal P}} \to M_{\ell}$, so $\widehat{q}_{\ell}(x) = x_{\ell}$.
Denote the iterated covering map by
$\overline{q}_{\ell} = q_{\ell} \circ q_{\ell -1} \circ \cdots \circ q_1 \colon M_{\ell} \to M_0$, and note that $\widehat{q}_0 = \overline{q}_{\ell} \circ \widehat{q}_{\ell}$.
Choose a basepoint $x_0 \in M_0$, and let ${\mathfrak{X}}_0 = \widehat{q}_0^{-1}(x_0)$ denote the fiber of the projection map $\widehat{q}_0$.
Then ${\mathfrak{X}}_0$ is a Cantor space, and the holonomy along the leaves of the foliation ${\mathcal F}_{{\mathcal P}}$ on ${\mathcal S}_{{\mathcal P}}$ induces
the \emph{monodromy action} of the fundamental group $\Gamma_0 = \pi_1(M_0, x_0)$ on ${\mathfrak{X}}_0$. This action is discussed in greater detail in many works, for example in \cite{CandelConlon2000}.
Choose a basepoint $x \in {\mathfrak{X}}_0$ and then for each $\ell \geq 0$, set $x_{\ell} = \widehat{q}_{\ell}(x) \in M_{\ell}$.
Then $\overline{q}_{\ell}(x_{\ell}) = x_0$ so we get induced maps of fundamental groups,
$(\overline{q}_{\ell})_{\#} \colon \pi_1(M_{\ell} , x_{\ell}) \to \pi_1(M_0, x_0) = \Gamma_0$. Let $\Gamma_{\ell} \subset \Gamma_0$ denote the image of this map, so $\Gamma_{\ell} \subset \Gamma_0$ is a subgroup of finite index. Note that $\overline{q}_{\ell} \colon M_{\ell} \to M_0$ is a normal covering map exactly when $\Gamma_{\ell}$ is a normal subgroup of $\Gamma_0$.
Let $(X_{\infty}, \Gamma_0, \Phi_x)$ be the Cantor action associated to the group chain ${\mathcal G}_{x} = \{\Gamma_{\ell} \mid \ell \geq 0 \}$ constructed in Section~\ref{subsec-algmodel} above. Then the monodromy action of $\Gamma_0$ on ${\mathfrak{X}}_0$ determined by the foliation on ${\mathcal S}_{{\mathcal P}}$ is conjugate to the action
$(X_{\infty}, \Gamma_0, \Phi_x)$, as discussed in \cite[Section~2]{DHL2016b} and \cite[Section~3.1]{DHL2016c}.
In particular, note that the degree of the covering map $\overline{q}_{\ell} \colon M_{\ell} \to M_0$ equals the index $[\Gamma_0 : \Gamma_{\ell}]$.
Thus, by the identity (2) in Proposition~\ref{prop-steinitzinvariance}, the Steinitz order $\Pi[{\mathcal P}]$ of ${\mathcal P}$ in Definition~\ref{def-steinitzpres} equals the relative Steinitz order $\Pi[{\mathfrak{G}}(\Phi_x) : {\mathfrak{D}}(\Phi_x , x)]$ of the action $(X_{\infty}, \Gamma_0, \Phi_x)$.
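For instance, if ${\mathcal P}$ is the presentation in which each $M_{\ell}$ is the circle ${\mathbb S}^1$ and each $q_{\ell}$ is the double covering, then ${\mathcal S}_{{\mathcal P}}$ is the dyadic solenoid, $\Gamma_0 = {\mathbb Z}$ with $\Gamma_{\ell} = 2^{\ell} {\mathbb Z}$, the discriminant is trivial, and $\Pi[{\mathcal P}] = \Pi[{\mathfrak{G}}(\Phi_x) : {\mathfrak{D}}(\Phi_x , x)] = 2^{\infty}$.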
Now suppose, for $i=1,2$, we are given a solenoidal manifold ${\mathcal S}_{{\mathcal P}_i}$ defined by the presentation ${\mathcal P}_i$ and there exists a homeomorphism $h \colon {\mathcal S}_{{\mathcal P}_1} \to {\mathcal S}_{{\mathcal P}_2}$. Then by the results of Section~\ref{subsec-morita}, the homeomorphism $h$ induces a return equivalence of their monodromy actions, and thus the algebraic models for these actions defined by ${\mathcal P}_1$ and ${\mathcal P}_2$ are return equivalent.
By Proposition~\ref{prop-relativeinv} we have
$\Pi_a[{\mathfrak{G}}(\Phi_1):{\mathfrak{D}}(\Phi_1)] = \Pi_a[{\mathfrak{G}}(\Phi_2) : {\mathfrak{D}}(\Phi_2)]$.
Proposition~\ref{prop-steinitzinvariance} identifies $\Pi_a[{\mathfrak{G}}(\Phi_i):{\mathfrak{D}}(\Phi_i)]$ with the asymptotic Steinitz order $\Pi_a[{\mathcal P}_i]$ and so we obtain the conclusion of Theorem~\ref{thm-main1}.
\section{Nilpotent actions}\label{sec-nilpotent}
In this section, we apply the notion of the Steinitz order of a nilpotent Cantor action to the study of its dynamical properties.
The proof of Theorem~\ref{thm-nilstable} is based on the special properties of the profinite completions of nilpotent groups,
in particular the uniqueness of their Sylow $p$-subgroups, and the relation of this algebraic property with the dynamics of the action.
\subsection{Noetherian groups}\label{subsec-noetherian}
Baer introduced the notion of a Noetherian group in his work \cite{Baer1956}. A countable group $\Gamma$ is said to be \emph{Noetherian} if every increasing chain of subgroups $\{H_i \mid i \geq 1 \}$ of $\Gamma$ has a maximal element $H_{i_0}$. Equivalently, $\Gamma$ is Noetherian if every increasing chain of subgroups in $\Gamma$ eventually stabilizes.
It is easy to see that the group ${\mathbb Z}$ is Noetherian, that a finite product of Noetherian groups is Noetherian, and that subgroups and quotient groups of a Noetherian group are Noetherian. Thus, a finitely-generated nilpotent group is Noetherian.
The notion of a Noetherian group has a generalization which is useful for the study of actions of profinite groups (see \cite[page 153]{Wilson1998}.)
\begin{defn} \label{def-noetherian}
A profinite group ${\mathfrak{G}}$ is said to be \emph{topologically Noetherian} if every increasing chain of \emph{closed} subgroups $\{{\mathfrak{H}}_i \mid i \geq 1 \}$ of ${\mathfrak{G}}$ has a maximal element ${\mathfrak{H}}_{i_0}$.
\end{defn}
We illustrate this concept with two canonical examples of profinite completions of ${\mathbb Z}$. First, let $\widehat{\mZ}_p$ denote the $p$-adic integers, for $p$ a prime. That is, $\widehat{\mZ}_p$ is the completion of ${\mathbb Z}$ with respect to the chain of subgroups
${\mathcal G} = \{\Gamma_{\ell} = p^{\ell} {\mathbb Z} \mid \ell \geq 1\}$. The closed subgroups of $\widehat{\mZ}_p$ are the trivial group and the clopen subgroups $p^i \cdot \widehat{\mZ}_p$ for $i \geq 0$, hence they satisfy the ascending chain condition in Definition \ref{def-noetherian}.
Next, let $\pi = \{p_i \mid i \geq 1\}$ be an infinite collection of distinct primes, and consider the descending chain of subgroups of ${\mathbb Z}$ given by ${\mathcal G}_{\pi} = \{\Gamma_{\ell} = p_1p_2 \cdots p_{\ell} {\mathbb Z} \mid \ell \geq 1\}$. Let $\widehat{\mZ}_{\pi}$ be the completion of ${\mathbb Z}$ with respect to the chain ${\mathcal G}_{\pi}$. Then we have a topological isomorphism
\begin{equation}
\widehat{\mZ}_{\pi} \cong \prod_{i \geq 1} \ {\mathbb Z}/p_i {\mathbb Z} \ .
\end{equation}
Let $H_{\ell} = {\mathbb Z}/p_1{\mathbb Z} \oplus \cdots \oplus {\mathbb Z}/p_{\ell} {\mathbb Z}$ be the direct sum of the first $\ell$-factors. Then $\{H_{\ell} \mid \ell \geq 1\}$ is an infinite increasing chain of finite subgroups of $\widehat{\mZ}_{\pi}$ which does not stabilize, so $\widehat{\mZ}_{\pi}$ is not topologically Noetherian.
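Note that $\Pi[\widehat{\mZ}_p] = p^{\infty}$, with prime spectrum the single prime $p$ of infinite multiplicity, while $\Pi[\widehat{\mZ}_{\pi}] = p_1 p_2 p_3 \cdots$, with infinite prime spectrum $\pi$ and each multiplicity equal to one. Thus the Noetherian behavior of these two completions matches the dichotomy of the following result.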
These two examples illustrate the idea behind the proof of the following result.
\begin{prop}\label{prop-nilpNoetherian}
Let $\Gamma$ be a finitely generated nilpotent group, and let $\widehat{\Gamma}$ be a profinite completion of $\Gamma$.
Then $\widehat{\Gamma}$ is topologically Noetherian if and only if the prime spectrum $\pi(\Pi[\widehat{\Gamma}])$ is finite.
\end{prop}
\proof
Recall some basic facts about profinite groups. (See for example, \cite[Chapter~2]{Wilson1998}.) For a prime $p$, a finite group $H$ is a $p$-group if every element of $H$ has order a power of $p$. A profinite group ${\mathfrak{H}}$ is a pro-$p$-group if ${\mathfrak{H}}$ is the inverse limit of finite $p$-groups. A Sylow $p$-subgroup ${\mathfrak{H}} \subset {\mathfrak{G}}$ is a maximal pro-$p$-subgroup \cite[Definition~2.2.1]{Wilson1998}.
If ${\mathfrak{G}}$ is pro-nilpotent, then for each prime $p$, there is a unique Sylow $p$-subgroup of ${\mathfrak{G}}$, which is normal in ${\mathfrak{G}}$ \cite[Proposition~2.4.3]{Wilson1998}. Denote this group by ${\mathfrak{G}}_{(p)}$. Moreover, ${\mathfrak{G}}_{(p)}$ is non-trivial if and only if
$p \in \pi(\Pi[{\mathfrak{G}}])$. It follows that there
is a topological isomorphism
\begin{equation}\label{eq-primeSylowdecomp}
{\mathfrak{G}} \cong \prod_{p \in \pi(\Pi[{\mathfrak{G}}])} ~ {\mathfrak{G}}_{(p)} \ .
\end{equation}
From the isomorphism \eqref{eq-primeSylowdecomp} it follows immediately that if the prime spectrum $\pi(\Pi[{\mathfrak{G}}])$ is infinite, then ${\mathfrak{G}}$ is not topologically Noetherian. To see this, list $\pi(\Pi[{\mathfrak{G}}]) = \{p_i \mid i = 1,2, \ldots \}$, then we obtain an infinite strictly increasing chain of closed subgroups,
$${\mathfrak{H}}_{\ell} = \prod_{i=1}^{\ell} \ {\mathfrak{G}}_{(p_i)} \ . $$
If the prime spectrum $\pi(\Pi[{\mathfrak{G}}])$ is finite, then the isomorphism \eqref{eq-primeSylowdecomp} reduces the proof that ${\mathfrak{G}}$ is topologically Noetherian to showing that if ${\mathfrak{G}}$ is topologically finitely generated, then each of its Sylow $p$-subgroups is topologically Noetherian. The group ${\mathfrak{G}}_{(p)}$ is pro-nilpotent and topologically finitely generated, so we can use the lower central series for ${\mathfrak{G}}_{(p)}$ and induction to reduce to the case where
${\mathfrak{H}}$ is a topologically finitely-generated abelian pro-$p$-group, and so is isomorphic to a finite product of copies of $\widehat{\mZ}_p$ and finite cyclic $p$-groups, which is topologically Noetherian.
The proof of Proposition~\ref{prop-nilpNoetherian} is completed by observing that a profinite completion $\widehat{\Gamma}$ of a finitely generated nilpotent group $\Gamma$ is a topologically finitely-generated nilpotent group, and we apply the above remarks.
\endproof
\begin{cor}\label{cor-vnilpotentNoetherian}
Let $\Gamma$ be a virtually nilpotent group, that is, there exists a finitely-generated nilpotent subgroup $\Gamma_0 \subset \Gamma$ of finite index.
Then a profinite completion $\widehat{\Gamma}$ of $\Gamma$ is topologically Noetherian if and only if its prime spectrum $\pi(\Pi[\widehat{\Gamma}])$ is finite.
\end{cor}
\proof
We can assume that $\Gamma_0$ is a normal subgroup of $\Gamma$. Then its closure $\widehat{\Gamma}_0 \subset \widehat{\Gamma}$ satisfies the hypotheses of Proposition~\ref{prop-nilpNoetherian}, and the Steinitz orders satisfy $\Pi[\widehat{\Gamma}_0] \mor \Pi[\widehat{\Gamma}]$.
As $\widehat{\Gamma}_0$ is topologically Noetherian if and only if $\widehat{\Gamma}$ is topologically Noetherian, the claim follows.
\endproof
\subsection{Dynamics of Noetherian groups}\label{subsec-Noetheriandynamics}
We next relate the topologically Noetherian property of a profinite group with the dynamics of a Cantor action of the group, to obtain proofs of Theorem~\ref{thm-nilstable} and Corollary~\ref{cor-niltopfree}. We first give the profinite analog of \cite[Theorem~1.6]{HL2018b}. We follow the outline of its proof.
\begin{prop}\label{prop-NLQA}
Let ${\mathfrak{G}}$ be a topologically Noetherian group.
Then a minimal equicontinuous action $({\mathfrak{X}},{\mathfrak{G}},\widehat{\Phi})$ on a Cantor space ${\mathfrak{X}}$ is locally quasi-analytic.
\end{prop}
\proof
First, we may assume that ${\mathfrak{G}} \subset \Homeo({\mathfrak{X}})$, so that the action $\widehat{\Phi}$ is effective; otherwise replace ${\mathfrak{G}}$ by its image $\widehat{\Phi}({\mathfrak{G}})$, which is again topologically Noetherian as a quotient of ${\mathfrak{G}}$. Suppose that the action $\widehat{\Phi}$ is not locally quasi-analytic. Then there exists an infinite properly decreasing chain of clopen subsets of ${\mathfrak{X}}$,
$\{U_1 \supset U_2 \supset \cdots \}$, which satisfies the following properties, for all $\ell \geq 1$:
\begin{itemize}
\item $U_{\ell}$ is adapted to the action $\widehat{\Phi}$ with isotropy subgroup ${\mathfrak{G}}_{U_{\ell}} \subset {\mathfrak{G}}$;
\item there is a closed subgroup $K_{\ell} \subset {\mathfrak{G}}_{U_{\ell+1}}$ whose restriction to $U_{\ell +1}$ is trivial, but whose restriction to $U_{\ell}$ is effective.
\end{itemize}
It follows that we obtain a properly increasing chain of closed subgroups $\{K_1 \subset K_2 \subset \cdots\}$ in ${\mathfrak{G}}$, which contradicts the assumption that ${\mathfrak{G}}$ is topologically Noetherian.
\endproof
We now give the proof of Theorem~\ref{thm-nilstable}.
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a nilpotent Cantor action.
Then there exists a finitely-generated nilpotent subgroup $\Gamma_0 \subset \Gamma$ of finite index, and we can assume without loss of generality that $\Gamma_0$ is normal. Let $\widehat{\Gamma}_0$ be the closure of $\Gamma_0$ in $\widehat{\Gamma}$ and let $x \in {\mathfrak{X}}$ be a basepoint. Note that the group $\widehat{\Gamma}$ has finite prime spectrum if and only if the group $\widehat{\Gamma}_0$ has finite prime spectrum. Thus, it suffices to show that the action of $\Gamma_0$ on the orbit ${\mathfrak{X}}_0 = \widehat{\Gamma}_0 \cdot x$ is stable. For simplicity of notation, we will simply assume that the given group $\Gamma$ is itself nilpotent.
The profinite closure ${\mathfrak{G}}(\Phi) = \overline{\Phi(\Gamma)}$ is pro-nilpotent, and we have the profinite action $({\mathfrak{X}},{\mathfrak{G}}(\Phi),\widehat{\Phi})$.
Suppose that the action $\widehat{\Phi}$ is not stable. Then there exists an increasing chain of closed subgroups $\{K_{\ell} \mid \ell \geq 1\}$, where $K_{\ell}$ acts trivially on the clopen subset $U_{\ell} \subset {\mathfrak{X}}$. Let $x \in \cap_{\ell > 0} \ U_{\ell}$; then each $K_{\ell} \subset {\mathfrak{D}}(\Phi, x)$, so ${\mathfrak{D}}(\Phi, x)$ contains a strictly increasing chain of closed subgroups. As we are given that the prime spectrum $\pi(\Pi[{\mathfrak{D}}(\Phi, x)])$ is finite, this contradicts the
conclusion of Proposition~\ref{prop-nilpNoetherian}. Hence, the action $\widehat{\Phi}$ must be locally quasi-analytic, as was to be shown.
The proof of Corollary~\ref{cor-niltopfree} is just an extension of that of Theorem~\ref{thm-nilstable}.
Let $({\mathfrak{X}},\Gamma,\Phi)$ be a nilpotent Cantor action for which the Steinitz order $\Pi[{\mathfrak{G}}(\Phi)]$ has prime multiplicities at most $2$ at all but a finite number of primes.
As before, we can assume without loss of generality that the group $\Gamma$ is nilpotent. Then we have the decomposition \eqref{eq-primeSylowdecomp} of ${\mathfrak{G}}(\Phi)$ into a product of its Sylow $p$-subgroups, and the corresponding product decomposition of the space
\begin{equation}\label{eq-Xproduct}
{\mathfrak{X}} \cong \prod_{p \in \pi(\Pi[{\mathfrak{G}}(\Phi)])} ~ {\mathfrak{X}}_{(p)} ~ = ~ \prod_{p \in \pi(\Pi[{\mathfrak{G}}(\Phi)])} ~ {\mathfrak{G}}(\Phi)_{(p)}/{\mathfrak{D}}(\Phi)_{(p)} \ .
\end{equation}
The factors in the product representation of ${\mathfrak{G}}(\Phi)$ in \eqref{eq-primeSylowdecomp} act on the corresponding factors in \eqref{eq-Xproduct}.
In particular,
the factors ${\mathfrak{G}}(\Phi)_{(p)}$ and ${\mathfrak{G}}(\Phi)_{(q)}$ commute when $p \ne q$, and thus their actions on ${\mathfrak{X}}$ commute.
Also note that if the multiplicity of $p$ is finite, then the corresponding Sylow $p$-subgroup ${\mathfrak{G}}(\Phi)_{(p)}$ is a finite group, and so the quotient space ${\mathfrak{X}}_{(p)}$ is a finite set.
Let ${\mathfrak{G}}(\Phi)_{(p)}$ be a $p$-Sylow subgroup with order at most $p^2$. Then ${\mathfrak{G}}(\Phi)_{(p)}$ is a nilpotent group of order at most $p^2$, so must be abelian.
Let ${\mathfrak{D}}(\Phi)$ denote the discriminant of the action $\Phi$. Its $p$-Sylow subgroup satisfies ${\mathfrak{D}}(\Phi)_{(p)} \subset {\mathfrak{G}}(\Phi)_{(p)}$.
If the multiplicity of $p$ is at most $2$, then for $\widehat{g} \in {\mathfrak{D}}(\Phi)$, the left action of its projection to ${\mathfrak{D}}(\Phi)_{(p)}$ fixes the basepoint in ${\mathfrak{X}}_{(p)}$, and as ${\mathfrak{G}}(\Phi)_{(p)}$ is abelian, the action fixes
all of the points in the finite quotient space ${\mathfrak{X}}_{(p)} = {\mathfrak{G}}(\Phi)_{(p)}/{\mathfrak{D}}(\Phi)_{(p)}$. As the action of a non-trivial element of ${\mathfrak{D}}(\Phi)_{(p)}$ must be non-trivial, this implies the projection is the identity element in ${\mathfrak{G}}(\Phi)_{(p)}$.
Thus, it suffices to show that the action of $\widehat{g}$ on the factors in \eqref{eq-Xproduct}
for which the prime multiplicity $n(p) \geq 3$ is stable. As there are at most a finite number of such factors, we are reduced to the situation in the proof of Theorem~\ref{thm-nilstable}, and so the action must be stable.
\section{Examples}\label{sec-examples}
We give in this section a collection of examples of nilpotent Cantor actions to illustrate the results and ideas of this work. Our guiding principle is to present the simplest examples in each class, which can then be made as complicated as desired following the basic design. All of these examples give rise to solenoidal manifolds with the specified prime spectrum, with base manifold an $n$-torus in Example~\ref{ex-toral}, or base manifold the standard compact nil-3-manifold for Examples~\ref{ex-trivial}, \ref{ex-stable} and \ref{ex-wild}.
\subsection{Toroidal actions}\label{subsec-toral}
We begin with the simplest examples of Cantor actions for which the prime spectra are not sufficient to distinguish the actions.
A \emph{toroidal Cantor action} is the action of $\Gamma = {\mathbb Z}^m$ on a ``diagonal'' profinite completion of ${\mathbb Z}^m$, for some $m \geq 1$. The classification of minimal equicontinuous actions of ${\mathbb Z}^m$ involves subtleties associated with the space of lattice chains in ${\mathbb R}^m$, as discussed in various works \cite{GPS2019,Li2018}. The diagonal actions, which we now define, suffice for illustrating the construction of actions with prescribed prime spectrum.
\begin{ex}\label{ex-toral}
{\rm
Consider the case $m=1$. Choose two disjoint sets of distinct primes,
$$\pi_f = \{q_1 , q_2, \ldots \} \quad , \quad \pi_{\infty} = \{p_1 , p_2, \ldots\}$$
where $\pi_f$ and $\pi_{\infty}$ can be chosen to be finite or infinite sets, and either $\pi_f$ is infinite, or $\pi_{\infty}$ is non-empty. Choose multiplicities $n(q_i) \geq 1$ for the primes in $\pi_f$.
For each $\ell > 0$, define a subgroup of $\Gamma = {\mathbb Z}$ by
$$\Gamma_{\ell} = \{q_1^{n(q_1)} q_2^{n(q_2)} \cdots q_{\ell}^{n(q_{\ell})} \cdot p_1^{\ell} p_2^{\ell} \cdots p_{\ell}^{\ell} \cdot n \mid n \in {\mathbb Z} \} \ , $$
with the understanding that if the prime $q_{\ell}$ or $p_{\ell}$ is not defined, then we simply set this term to be $1$.
The completion $\widehat{\Gamma}$ of ${\mathbb Z}$ with respect to this group chain admits a product decomposition into its Sylow $p$-subgroups
\begin{equation}\label{eq-pqlimit}
\widehat{\Gamma} ~ \cong ~ \prod_{i =1}^{\infty} \ {\mathbb Z}/q_i^{n(q_i)} {\mathbb Z} ~ \cdot ~ \prod_{p \in \pi_{\infty}} ~ \widehat{\mZ}_{(p)} \ ,
\end{equation}
where $\widehat{\mZ}_{(p)}$ denotes the $p$-adic completion of ${\mathbb Z}$.
Thus $\pi(\Pi[\widehat{\Gamma}]) = \pi_f \cup \pi_{\infty}$. As ${\mathbb Z}$ is abelian, we have ${\mathfrak{X}} = \widehat{\Gamma}$ and the discriminant group for the action of $\Gamma$ is trivial.
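For instance, taking $\pi_f = \{2, 3\}$ with multiplicities $n(2) = 1$, $n(3) = 2$, and $\pi_{\infty} = \{5\}$, the chain begins
$$\Gamma_1 = 2 \cdot 5 \, {\mathbb Z} \quad , \quad \Gamma_2 = 2 \cdot 3^2 \cdot 5^2 \, {\mathbb Z} \quad , \quad \Gamma_3 = 2 \cdot 3^2 \cdot 5^3 \, {\mathbb Z} \quad , \quad \ldots$$
and \eqref{eq-pqlimit} gives $\widehat{\Gamma} \cong {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/9{\mathbb Z} \times \widehat{\mZ}_{(5)}$, so the prime spectrum is $\{2,3,5\}$, with $5$ occurring with infinite multiplicity.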
}
\end{ex}
\begin{ex}\label{ex-almosttoral}
{\rm
We next give two extensions of the diagonal actions described in Example~\ref{ex-toral}.
First, we construct a
diagonal toroidal action of ${\mathbb Z}^m$ by making $m$ choices of prime spectra as above, then taking the product action. While the return equivalence class of a ${\mathbb Z}$-action on ${\mathfrak{X}} = \widehat{\Gamma}$ as in \eqref{eq-pqlimit} is determined by the asymptotic class $\Pi_a[\widehat{\Gamma}]$, as in Theorem~\ref{thm-onedimSol}, this need no longer hold for the product of such actions.
For example, the two profinite completions of ${\mathbb Z}^2 = {\mathbb Z} \oplus {\mathbb Z}$ given by
\begin{equation}
\widehat{\Gamma}_1 = \widehat{\mZ}_{(6)} \oplus \widehat{\mZ}_{(5)} \quad , \quad \widehat{\Gamma}_2 = \widehat{\mZ}_{(2)} \oplus \widehat{\mZ}_{(15)}
\end{equation}
have the same Steinitz orders, but are not isomorphic.
The second construction shows that the conclusion of Theorem~\ref{thm-returnequivspectra} is best possible, that is, return equivalence need not preserve the Steinitz order of the action.
Let $\pi_f = \{p_1, p_2, \ldots \}$ be a proper subset of primes, infinite in number and all distinct. Let $\widehat{\mZ}_{\pi_f}$ denote the completion of ${\mathbb Z}$ with respect to the primes $\pi_f$ where we choose multiplicity $n(p) = 1$ for each $p \in \pi_f$.
Then we have the odometer action $\Phi_1$ of ${\mathbb Z}$ on ${\mathfrak{X}}_1 = \widehat{\mZ}_{\pi_f}$.
Next, for $k \geq 2$, consider the action of ${\mathbb Z}^k = {\mathbb Z} \oplus \cdots \oplus {\mathbb Z}$ on ${\mathfrak{X}} = \widehat{\mZ}_{\pi_f} \oplus \cdots \oplus \widehat{\mZ}_{\pi_f}$.
Let $\Gamma = {\mathbb Z}^k \rtimes C_k$ where $C_k = {\mathbb Z}/k {\mathbb Z}$ is the cyclic group of order $k$, which acts on the factor ${\mathbb Z}^k$ by the automorphism which is a cyclic permutation of the basis vectors. Then $C_k$ also acts on ${\mathfrak{X}}$ by the corresponding cyclic permutation of the factors, and we use this to define an action $\Phi_2$ of $\Gamma$ on ${\mathfrak{X}}$.
The actions $\Phi_1$ and $\Phi_2$ are return equivalent. To see this, observe that the coset of the identity in $C_k$ determines a clopen subset of ${\mathfrak{X}}$, and the restriction of the action $\Phi_2$ to this coset is just the odometer action $\Phi_1$.
Suppose that $k$ is a prime which is not in $\pi_f$, then $\pi(\Pi[\Phi_2]) = \pi_f \cup \{k\} = \pi(\Pi[\Phi_1]) \cup \{k\}$, and so their prime spectra differ. If $k$ is a prime which is in $\pi_f$ then the prime spectra of the two actions agree, but their multiplicities do not.
One can also repeat this construction for any transitive subgroup of the permutation group $\Perm(k)$ on $k$ elements for $k \geq 2$, and so obtain that the prime spectra of the two actions differ by an arbitrary set of primes which are divisors of $k$.
}
\end{ex}
\subsection{Heisenberg actions}\label{subsec-heisenberg}
We next construct a selection of examples, given by the action of the integer Heisenberg group ${\mathcal H}$ on a profinite completion of the group. The group ${\mathcal H}$ is a cocompact lattice in the real Heisenberg group $H_3({\mathbb R})$, so the quotient $M = H_3({\mathbb R})/{\mathcal H}$ is a compact $3$-manifold, and the choice of a group chain in ${\mathcal H}$ defines a tower of coverings of $M$ whose inverse limit has monodromy action conjugate to the Cantor actions defined by the group chain.
Let ${\mathcal H}$ be represented as the upper triangular matrices in ${\rm GL}({\mathbb Z}^3)$. That is,
\begin{equation}\label{eq-cH}
{\mathcal H} = \left\{ \left[ {\begin{array}{ccc}
1 & a & c\\
0 & 1 & b\\
0 & 0 & 1\\
\end{array} } \right] \mid a,b,c \in {\mathbb Z}\right\} .
\end{equation}
In coordinates $(a,b,c) , (a',b',c') \in {\mathbb Z}^3$, the group operation $*$ and inverse are given by,
\begin{equation}\label{eq-Hrules}
(a,b,c)*(a',b',c')=(a+a',b+b',c+c'+ab') \quad , \quad (a,b,c)^{-1} = (-a, -b, -c +ab) \ .
\end{equation}
In particular, we have
\begin{equation}\label{eq-Hcomm}
(a,b,c) * (a',b',c')*(a,b,c)^{-1}=(a',b',c' + ab' -ba') \ .
\end{equation}
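For the reader's convenience, \eqref{eq-Hcomm} follows directly from \eqref{eq-Hrules}: the third coordinate of $(a+a',\, b+b',\, c+c'+ab') * (-a,-b,-c+ab)$ is
$$(c+c'+ab') + (-c + ab) + (a+a')(-b) ~ = ~ c' + ab' - a'b \ ,$$
while the first two coordinates reduce to $a'$ and $b'$.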
The work \cite{LSU2014} gives a complete discussion of the normal subgroups of ${\mathcal H}$.
\begin{ex}\label{ex-trivial}
{\rm
We construct a Cantor action of ${\mathcal H}$ on a profinite completion defined by a proper self-embedding of ${\mathcal H}$ into itself. The resulting action has trivial discriminant group, but the integers $k_{\ell}$ and $k^*_{\ell}$ defined in \eqref{eq-dims} are distinct.
The variety of such actions has been extensively studied in the authors' work \cite{HLvL2020}, joint with van Limbeek, as they all yield stable Cantor actions.
For a prime $p \geq 2$, define the self-embedding ${\varphi}_p \colon {\mathcal H} \to {\mathcal H}$ by
${\varphi}_p(a,b,c) = (pa, pb, p^2c)$. Then define a group chain in ${\mathcal H}$ by setting
$${\mathcal H}_{\ell} = {\varphi}_p^{\ell}({\mathcal H}) = \{(p^{\ell} a, p^{\ell}b, p^{2\ell}c) \mid a,b,c \in {\mathbb Z}\} \quad, \quad \bigcap_{\ell > 0} \ {\mathcal H}_{\ell} = \{e\} \ .$$
Formula \eqref{eq-Hcomm} implies that the normal core for ${\mathcal H}_{\ell}$ is given by
$$C_{\ell} = {\rm core}({\mathcal H}_{\ell}) = \{(p^{2\ell} a, p^{2\ell} b, p^{2\ell} c) \mid a,b,c \in {\mathbb Z}\} \ .$$
Thus, the finite quotient group is given by
\begin{equation}\label{eq-Qell}
Q_{\ell} = {\mathcal H}/C_{\ell} \cong \{( \overline{a}, \overline{b}, \overline{c}) \mid \overline{a}, \overline{b}, \overline{c}\in {\mathbb Z}/p^{2\ell}{\mathbb Z} \} \ .
\end{equation}
The profinite group $\widehat{{\mathcal H}}_{\infty}$ is the inverse limit of the quotient groups $Q_{\ell}$ so we have
$$\widehat{{\mathcal H}}_{\infty} = \{(\widehat{a}, \widehat{b}, \widehat{c}) \mid \widehat{a}, \widehat{b}, \widehat{c}\in \widehat{{\mathbb Z}}_{p^2} \}$$
with multiplication on each finite quotient induced by the formula \eqref{eq-Hrules}.
Note that the group ${\mathcal H}$ embeds into $\widehat{{\mathcal H}}_{\infty}$, since $p^{\ell}$ tends to infinity with $\ell$, so the intersection of the normal cores $C_{\ell}$ is trivial.
Next, we calculate the discriminant subgroup ${\mathcal D}_{\infty}$ for this action. First note
\begin{eqnarray}
{\mathcal H}_{\ell}/C_{\ell} & = & \{(p^{\ell} \overline{a}, p^{\ell} \overline{b}, 0) \mid \overline{a}, \overline{b} \ \in {\mathbb Z}/p^{\ell}{\mathbb Z} \} \ \subset \ Q_{\ell} \ , \label{eq-Qell1}\\
{\mathcal H}_{\ell+1}/C_{\ell+1} & = & \{(p^{\ell +1} \overline{a}, p^{\ell +1} \overline{b}, 0) \mid \overline{a}, \overline{b} \ \in {\mathbb Z}/p^{\ell +1}{\mathbb Z} \} \ \ . \label{eq-Qell=2}
\end{eqnarray}
Thus, $k_{\ell} = \# ({\mathcal H}_{\ell}/C_{\ell}) = p^{2\ell}$.
Note that ${\mathcal H}_{2\ell} \subset C_{\ell}$. So while each quotient ${\mathcal H}_{2\ell}/C_{2\ell}$ is non-trivial, its image under the composition of bonding maps in
\eqref{eq-discformula} is trivial in ${\mathcal H}_{\ell}/C_{\ell}$. Thus ${\mathcal D}_{\infty}$ is the trivial group, and so each $k_{\ell}^* = 1$.
}
\end{ex}
\begin{ex}[A toy model] \label{ex-toy}
{\rm
We describe a \emph{finite} action which is used to construct the next classes of Heisenberg actions which have non-trivial discriminant groups, and arbitrary prime spectra.
Fix a prime $p \geq 2$. For $n \geq 1$ and $0 \leq k < n$, we have the following finite groups:
\begin{equation}
G_{p,n} = \left\{ \left[ {\begin{array}{ccc}
1 & \overline{a} & \overline{c}\\
0 & 1 & \overline{b}\\
0 & 0 & 1\\
\end{array} } \right] \mid \overline{a},\overline{b},\overline{c} \in {\mathbb Z}/p^{n}{\mathbb Z}\right\} ~ , ~
H_{p,n,k} = \left\{ \left[ {\begin{array}{ccc}
1 & p^k \overline{a} & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{array} } \right] \mid \overline{a} \in {\mathbb Z}/p^{n}{\mathbb Z}\right\}
\end{equation}
Note that $\#[G_{p,n}] = p^{3n}$ and $\#[H_{p,n,k}] = p^{n-k}$.
Let $\overline{x} = (1,0,0), \overline{y} = (0,1,0), \overline{z}= (0,0,1) \in G_{p,n}$, then by formula \eqref{eq-Hcomm} we have
$\overline{x} \cdot \overline{y} \cdot \overline{x}^{-1} = \overline{y} \overline{z}$ and $\overline{x} \cdot \overline{z} \cdot \overline{x}^{-1} = \overline{z}$. That is, the adjoint action of $\overline{x}$ on the ``plane'' in the $(\overline{y},\overline{z})$-coordinates is a ``shear'' action along the $\overline{z}$-axis, and the adjoint action of $\overline{x}$ on the $\overline{z}$-axis fixes all points on the $\overline{z}$-axis.
Set $X_{p,n,k} = G_{p,n}/H_{p,n,k}$, then the isotropy group of the action of $G_{p,n}$ on $X_{p,n,k}$ at the coset $H_{p,n,k}$ of the identity element is $H_{p,n,k}$. The core subgroup $C_{p,n,k} \subset H_{p,n,k}$ consists of the elements of $H_{p,n,k}$ which fix every point in $X_{p,n,k}$. By the group law \eqref{eq-Hrules}, a non-trivial element of $H_{p,n,k}$ moves the coset of $\overline{y}$, so the identity is the only element of $H_{p,n,k}$ which fixes every point, and thus $C_{p,n,k}$ is trivial. Then $D_{p,n,k} = H_{p,n,k}/C_{p,n,k} = H_{p,n,k}$, and for each $g \in H_{p,n,k}$ its action fixes the multiples of $\overline{z}$.
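For example, with $p = 2$, $n = 2$ and $k = 1$, the group $G_{2,2}$ has order $2^6 = 64$, the subgroup $H_{2,2,1} = \{(0,0,0), (2,0,0)\}$ has order $2$, and $X_{2,2,1}$ has $32$ points; the non-trivial element $(2,0,0)$ fixes the cosets of the multiples of $\overline{z}$, but moves the coset of $\overline{y}$ to that of $(0,1,2)$.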
}
\end{ex}
In the following two classes of examples, given sets of primes $\pi_f$ and $\pi_{\infty}$, we embed an infinite product of finite actions as in Example~\ref{ex-toy} into a profinite completion ${\widehat{\cH}}_{\infty}$ of ${\mathcal H}$, which defines a nilpotent Cantor action $(X_{\infty}, {\mathcal H}, \Phi)$ on a quotient $X_{\infty} = {\widehat{\cH}}_{\infty}/D_{\infty}$.
This is possible due to the following result for pro-nilpotent groups, which is a consequence of \cite[Proposition~2.4.3]{Wilson1998}.
\begin{prop}\label{prop-factorization}
Let $\widehat{\Gamma}$ be a profinite completion of a finitely-generated nilpotent group $\Gamma$. Then there is a topological isomorphism
\begin{equation}\label{eq-factorization}
\widehat{\Gamma} \cong \prod_{p \in \pi(\Pi[\widehat{\Gamma}])} \ \widehat{\Gamma}_{(p)} \ ,
\end{equation}
where $\widehat{\Gamma}_{(p)} \subset \widehat{\Gamma}$ denotes the Sylow $p$-subgroup of $\widehat{\Gamma}$ for a prime $p$.
\end{prop}
\begin{ex}[Stable Heisenberg actions] \label{ex-stable}
{\rm
We construct Heisenberg actions with finite or infinite prime spectrum, using the product formula \eqref{eq-factorization}, and then show that they are stable.
Let $\pi_f $ and $\pi_{\infty}$ be two disjoint collections of primes, with $\pi_f$ a finite set, and $\pi_{\infty}$ a non-empty set.
Enumerate $\pi_f = \{q_1, q_2, \ldots, q_m\}$ then choose integers $1 \leq r_i \leq n_i$ for $1 \leq i \leq m$. Enumerate $\pi_{\infty} = \{p_1, p_2, \ldots\}$ with the convention (for notational convenience) that if $\ell$ is greater than the number of primes in $\pi_{\infty}$ then we set $p_{\ell} = 1$.
For each $\ell \geq 1$, define the integers
\begin{eqnarray}
M_{\ell} & = & q_1^{r_1} q_2^{r_2} \cdots q_m^{r_m} \cdot p_1^{\ell} p_2^{\ell} \cdots p_{\ell}^{\ell} \ , \\
N_{\ell} & = & q_1^{n_1} q_2^{n_2} \cdots q_m^{n_m} \cdot p_1^{\ell} p_2^{\ell} \cdots p_{\ell}^{\ell} \ .
\end{eqnarray}
For all $\ell \geq 1$, observe that $M_{\ell}$ divides $N_{\ell}$, and define a subgroup of ${\mathcal H}$, in the coordinates above,
\begin{equation}\label{eq-chain}
{\mathcal H}_{\ell} = \{ (a M_{\ell},b N_{\ell} ,c N_{\ell}) \mid a,b,c \in {\mathbb Z} \} \ .
\end{equation}
Its core subgroup is given by
$C_{\ell} = \{ (a N_{\ell},b N_{\ell} ,c N_{\ell}) \mid a,b,c \in {\mathbb Z} \}$.
Observe that
$${\mathbb Z}/N_{\ell} {\mathbb Z} \cong {\mathbb Z}/q_1^{n_1}{\mathbb Z} \oplus \cdots \oplus {\mathbb Z}/q_m^{n_m}{\mathbb Z} \oplus {\mathbb Z}/p_1^{\ell}{\mathbb Z} \oplus \cdots \oplus {\mathbb Z}/p_{\ell}^{\ell}{\mathbb Z} \ .$$
By Proposition~\ref{prop-factorization}, and in the notation of Example~\ref{ex-toy}, we have for $k_i = n_i - r_i$ that
\begin{equation}\label{eq-lqafactors}
{\widehat{\cH}}_{\infty} ~ \cong ~ \prod_{i=1}^m \ G_{q_i, n_i} ~ \cdot ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \quad , \quad D_{\infty} ~ \cong ~ \prod_{i=1}^m \ H_{q_i, n_i, k_i} \ .
\end{equation}
Then the Cantor space $X_{\infty} = {\widehat{\cH}}_{\infty}/D_{\infty}$ associated to the group chain $\{{\mathcal H}_{\ell} \mid \ell \geq 1\}$ is given by
\begin{equation}\label{eq-lqaspace}
X_{\infty} ~ \cong ~ \prod_{i=1}^m \ X_{q_i, n_i, k_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
In particular, as the first factor in \eqref{eq-lqaspace} is a finite product of finite sets, the second factor defines an open neighborhood $$U = \prod_{i=1}^m \ \{x_i\} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)}$$
where $x_i \in X_{q_i, n_i, k_i}$ is the basepoint given by the coset of the identity element.
That is, $U$ is a clopen neighborhood of the basepoint in $X_{\infty}$. The isotropy group of $U$ is given by
\begin{equation}
{\widehat{\cH}}_{\infty}|U ~ = ~ \prod_{i=1}^m \ H_{q_i, n_i, k_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
The restriction of ${\widehat{\cH}}_{\infty}|U$ to $U$ is isomorphic to the subgroup
\begin{equation}
K|U ~ = ~ \prod_{i=1}^m \ \{\overline{e}_i\} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} ~ \subset ~ \Homeo(U) \ ,
\end{equation}
where $\overline{e}_i \in G_{q_i, n_i}$ is the identity element. The group $K|U$ acts freely on $U$, and thus the action of ${\widehat{\cH}}_{\infty}$ on $X_{\infty}$ is locally quasi-analytic.
Moreover, the union $\pi = \pi_f \cup \pi_{\infty} = \pi(\Pi[{\widehat{\cH}}_{\infty}])$ is the prime spectrum of the action of ${\mathcal H}$ on $X_{\infty}$. If $\pi_\infty$ is infinite, then the prime spectrum of the action is infinite.
Note that the group ${\mathcal H}$ embeds into $\widehat{{\mathcal H}}_{\infty}$ as the integers $M_{\ell}$ and $N_{\ell}$ tend to infinity with $\ell$.
}
\end{ex}
\begin{ex}[Wild Heisenberg actions]\label{ex-wild}
{\rm
Let $\pi_f $ and $\pi_{\infty}$ be two disjoint collections of primes, with $\pi_f$ an infinite set and $\pi_{\infty}$ arbitrary, possibly empty.
Enumerate $\pi_f = \{q_1, q_2, \ldots\}$ and choose integers $1 \leq r_i \leq n_i$ for $1 \leq i < \infty$. Enumerate $\pi_{\infty} = \{p_1, p_2, \ldots\}$, again with the convention that if $\ell$ is greater than the number of primes in $\pi_{\infty}$ then we set $p_{\ell} = 1$.
As in Example~\ref{ex-stable}, for each $\ell \geq 1$, define the integers
\begin{eqnarray}
M_{\ell} & = & q_1^{r_1} q_2^{r_2} \cdots q_{\ell}^{r_{\ell}} \cdot p_1^{\ell} p_2^{\ell} \cdots p_{\ell}^{\ell} \ , \\
N_{\ell} & = & q_1^{n_1} q_2^{n_2} \cdots q_{\ell}^{n_{\ell}} \cdot p_1^{\ell} p_2^{\ell} \cdots p_{\ell}^{\ell} \ .
\end{eqnarray}
For $\ell \geq 1$, define a subgroup of ${\mathcal H}$, in the coordinates above,
\begin{equation}\label{eq-chain2}
{\mathcal H}_{\ell} = \{ (a M_{\ell},b N_{\ell} ,c N_{\ell}) \mid a,b,c \in {\mathbb Z} \} \ .
\end{equation}
Its core subgroup is given by
$C_{\ell} = \{ (a N_{\ell},b N_{\ell} ,c N_{\ell}) \mid a,b,c \in {\mathbb Z} \}$. For $k_i = n_i - r_i$ we then have
\begin{equation}\label{eq-lqafactors2}
{\widehat{\cH}}_{\infty} ~ \cong ~ \prod_{i=1}^{\infty} \ G_{q_i, n_i} ~ \cdot ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \quad , \quad D_{\infty} ~ \cong ~ \prod_{i=1}^{\infty} \ H_{q_i, n_i, k_i} \ .
\end{equation}
The Cantor space $X_{\infty} = {\widehat{\cH}}_{\infty}/D_{\infty}$ associated to the group chain $\{{\mathcal H}_{\ell} \mid \ell \geq 1\}$ is given by
\begin{equation}\label{eq-lqaspace2}
X_{\infty} ~ \cong ~ \prod_{i=1}^{\infty} \ X_{q_i, n_i, k_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
The first factor in \eqref{eq-lqaspace2} is an infinite product of finite sets, so fixing the first $\ell$ coordinates in this product determines a clopen subset of $X_{\infty}$. Let $x_i \in X_{q_i, n_i, k_i}$ denote the coset of the identity element, which is the basepoint in $X_{q_i, n_i, k_i}$. Then for each $\ell \geq 1$, we define a clopen set in $X_{\infty}$
\begin{equation}\label{eq-wildclopen}
U_{\ell} = \prod_{i=1}^{\ell} \ \{x_i\} ~ \times ~ \prod_{i=\ell+1}^{\infty} \ X_{q_i, n_i, k_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
Recalling the calculations in Example~\ref{ex-toy}, the subgroup $H_{q_i, n_i, k_i}$ is the isotropy group of the basepoint $x_i \in X_{q_i, n_i, k_i}$. Thus,
the isotropy subgroup of $U_{\ell}$ for the ${\widehat{\cH}}_{\infty}$-action is given by the product
\begin{equation}
{\widehat{\cH}}_{\infty}|_{U_{\ell}} ~ = ~ \prod_{i=1}^{\ell} \ H_{q_i, n_i, k_i} ~ \times ~ \prod_{i=\ell + 1}^{\infty} \ G_{q_i, n_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
For $j \ne i$, the subgroup $H_{q_i, n_i, k_i}$ acts as the identity on the factors $X_{q_j, n_j, k_j}$ in \eqref{eq-lqaspace2}.
Thus, the image of ${\widehat{\cH}}_{\infty}|_{U_{\ell}}$ in $\Homeo(U_{\ell})$ is isomorphic to the subgroup
\begin{equation}
Z_{\ell} ~ = ~{\widehat{\cH}}_{\infty}|U_{\ell} ~ = ~ \prod_{i=1}^{\ell} \ \{\overline{e}_i\} ~ \times ~ \prod_{i=\ell + 1}^{\infty} \ G_{q_i, n_i} ~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} ~ \subset ~ \Homeo(U_{\ell}) \ ,
\end{equation}
where $\overline{e}_i \in G_{q_i, n_i}$ is the identity element.
We next show that this action is not stable; that is, for any $\ell > 0$ there exists a clopen subset $V \subset U_{\ell}$ and non-trivial $\widehat{g} \in Z_{\ell}$ so that the action of $\widehat{g}$ restricts to the identity map on $V$. We can assume without loss of generality that $V= U_{\ell'}$ for some $\ell' > \ell$. Consider the restriction map for
the isotropy subgroup of $Z_{\ell}$ to $U_{\ell'}$, which is given by
$$\rho_{\ell, \ell'} \colon Z_{\ell}|_{U_{\ell'}} \to Z_{\ell'} \subset \Homeo(U_{\ell'}) \ .$$
We must show that there exists $\ell' > \ell$ such that this map has a non-trivial kernel.
We calculate this map in terms of the product representations above:
\begin{equation}\label{eq-restriction}
Z_{\ell}|_{U_{\ell'}} ~ = ~ \prod_{i=1}^{\ell} \ \{\overline{e}_i\} ~ \times ~ \prod_{i=\ell + 1}^{\ell'} \ H_{q_i, n_i,k_i} ~ \times ~ \prod_{i=\ell' + 1}^{\infty} \ G_{q_i, n_i}~ \times ~ \prod_{j=1}^{\infty} \ {\widehat{\cH}}_{(p_j)} \ .
\end{equation}
For $\ell < i \leq \ell'$, the group $H_{q_i, n_i,k_i}$ fixes the point $\prod_{i=1}^{\ell'} \ \{x_i\}$, and acts trivially on
$\prod_{i=\ell'+1}^{\infty} \ X_{q_i, n_i, k_i}$.
Thus, the kernel of the restriction map contains the second factor in \eqref{eq-restriction},
\begin{equation}
\prod_{i=\ell + 1}^{\ell'} \ H_{q_i, n_i,k_i} ~ \subset ~ \ker \left\{ \rho_{\ell, \ell'} \colon Z_{\ell}|_{U_{\ell'}} \to \Homeo(U_{\ell'}) \right\} \ .
\end{equation}
As this group is non-trivial for all $\ell' > \ell$, the action of ${\widehat{\cH}}_{\infty}$ on $X_{\infty}$ is not locally quasi-analytic, hence the action of ${\mathcal H}$ on $X_{\infty}$ is wild.
Moreover, the prime spectrum of the action of ${\mathcal H}$ on $X_{\infty}$ equals the union $\pi = \pi_f \cup \pi_{\infty}$.
}
\end{ex}
Finally, we give the proof of Theorem~\ref{thm-lqaWILD} using the construction in Example~\ref{ex-wild}, that is, we show that choices in Example \ref{ex-wild} can be made in such a way that the action of ${\mathcal H}$ on a Cantor set is topologically free while the action of ${\widehat{\cH}}_\infty$ is not stable. To do that, choose an infinite set of distinct primes $\pi_f = \{q_1, q_2, \ldots\}$, and let the set of infinite primes $\pi_{\infty}$ be empty.
Choose the constants $n_i =2$ and $k_i=1$ for all $i \geq 1$. Let $X_{\infty}$ be the Cantor space defined by \eqref{eq-lqaspace2}. Then the action of ${\mathcal H}$ is wild by the calculations in Example~\ref{ex-wild}.
We claim that this action is topologically free. Suppose not; then there exists an open set $U \subset X_{\infty}$ and $g \in {\mathcal H}$ such that the action of $\Phi(g)$ is non-trivial on $X_{\infty}$ but leaves the set $U$ invariant, and restricts to the identity action on $U$.
The action of ${\mathcal H}$ on $X_{\infty}$ is minimal, so there exists $h \in {\mathcal H}$ with $h \cdot x_{\infty} \in U$. Then $\Phi(h^{-1} g h)(x_{\infty}) = x_{\infty}$ and the action $\Phi(h^{-1} g h)$ fixes an open neighborhood of $x_{\infty}$. Replacing $g$ with $h^{-1} g h$ we can assume that $\Phi(g)(x_{\infty}) = x_{\infty} \in U$. From the definition \eqref{eq-wildclopen}, the clopen sets
\begin{equation}\label{eq-wildclopen2}
U_{\ell} = \prod_{i=1}^{\ell} \ \{x_i\} ~ \times ~ \prod_{i=\ell+1}^{\infty} \ X_{q_i, 2, 1}
\end{equation}
form a neighborhood basis at $x_{\infty}$, and thus there exists $\ell > 0$ such that $U_{\ell} \subset U$.
The group ${\mathcal H}$ diagonally embeds into ${\widehat{\cH}}_{\infty}$ so from the expression \eqref{eq-lqafactors2}, we have
$\displaystyle g = (g,g,\ldots,g) \in \prod_{i=1}^{\infty} \ G_{q_i, 2}$. The action of $\Phi(g)$ is factorwise, and $\Phi(g)(x_{\infty}) = x_{\infty}$ implies that $\displaystyle g \in D_{\infty} \cong \prod_{i=1}^{\infty} \ H_{q_i, n_i, k_i}$. The assumption that $\Phi(g)$ fixes the points in $U$ implies that it acts trivially on each factor $X_{q_i, 2, 1}$ for $i > \ell$. As each factor $H_{q_i, 2, 1}$ acts effectively on $X_{q_i, 2, 1}$ this implies that the projection of $g$ to the $i$-th factor group $H_{q_i, 2, 1}$ is the identity for $i > \ell$. This
implies that every entry above the diagonal in the matrix representation of $g$ in \eqref{eq-cH} is divisible by an infinite number of distinct primes $\{q_i \mid i \geq \ell\}$; since a non-zero integer has only finitely many prime divisors, the matrix $g$ must be the identity.
Alternately, observe that we have $\displaystyle g \in \prod_{i=1}^{\ell} \ H_{q_i, 2, 1}$. This is a finite product of finite groups, which implies that $g \in {\mathcal H}$ is a torsion element. However, ${\mathcal H}$ is torsion-free, hence $g$ must be the identity. Thus, the action of ${\mathcal H}$ on $X_{\infty}$ must be topologically free.
Finally, the above construction allows the choice of any infinite subset $\pi_f$ of distinct primes, and there are an uncountable number of such choices which are distinct up to asymptotic equivalence. Thus, by Theorem~\ref{thm-returnequivspectra} there are an uncountable number of topologically-free, wild nilpotent Cantor actions which are distinct up to return equivalence.
This completes the proof of Theorem~\ref{thm-lqaWILD}.
\begin{remark}\label{rmk-generalization}
{\rm
The constructions in Examples~\ref{ex-stable} and \ref{ex-wild} can be generalized to the integer upper triangular matrices in all dimensions, where there is much more freedom in the choice of the subgroups $H_{q_i, n_i,k_i}$. The above calculations become correspondingly more tedious, but yield analogous results. It seems reasonable to expect that similar constructions can be made for any finitely-generated torsion-free nilpotent (non-abelian) group $\Gamma$. That is, there are group chains in $\Gamma$ which yield wild nilpotent Cantor actions. Note that in the work \cite{HLvL2020} with van Limbeek, the authors showed that if $\Gamma$ is a finitely-generated nilpotent group which admits a proper self-embedding (said to be non-co-Hopfian, or renormalizable), then the iterated images of this self-embedding define a group chain for which the associated profinite action is quasi-analytic. Thus, wild Cantor actions are in a sense the furthest extreme from the actions associated to renormalizable groups.
}
\end{remark}
\vfill
\eject
\label{sec:intro}
Simulation of binary black hole mergers will play an important part in the prediction,
detection, and analysis of signals in gravitational
wave detectors. In the usual approach to computing the merger of black holes
(generically called the $3+1$ method), one has first to initiate the
simulation by producing consistent data. Four of the components of the
Einstein equation do not contain time derivatives of the spatial metric, nor
of the momentum of the 3-metric. These components, $G_{00}=0$ and
$G_{i0}=0$,
are thus called constraint equations, and they must be satisfied in any
specification of initial data. (We are interested in black hole
interactions, which are vacuum, i.e. matter-free, so the right side of the
Einstein equation is zero: $G_{\mu \nu}=0$.) As we recall in the next
section, a conformal decomposition
\cite{YP,MY,York,Mathews,Bowen+York,Choquet,Lichnerowicz} allows the
solution of these components to be put in the form of a set of four coupled
elliptic equations. These elliptic equations are the subject of our work. We
solve them via a multigrid method which applies concepts from
\cite{Hawley+Matzner} to
this problem. We demonstrate the accuracy of our data by considering
features discussed analytically by Wald \cite{WaldPRD}. Wald described the
spin-spin effect on the binding energy of two black holes in an analytic
perturbation scheme, where one hole is much more massive than the other.
Our computational technology is well suited to
simulating these effects for equal mass black holes, and we demonstrate
agreement in some aspects of the
computational spin-spin interactions with the analytic estimate,
for separations that are not small.
\section{$3+1$ Formulation of Einstein Equations}
We take a Cauchy formulation
(3+1) of the ADM type, after Arnowitt, Deser, and
Misner~\cite{ADM}. In such a method the 3-metric $g_{ij}$ and its momentum
$K_{ij}$ are specified at one initial time on
a spacelike hypersurface, and evolved into the future. The ADM metric is
\begin{equation}
{\rm d} s^2 = -(\alpha^2 - \beta_i \beta^i)\,{\rm d} t^2 + 2\beta_i \, {\rm d} t
\,{\rm d} x^i
+ g_{ij}\, {\rm d} x^i\, {\rm d} x^j
\label{eq:admMetric}
\end{equation}
where $\alpha$ is the lapse function and $\beta^i$ is the shift
3-vector; these gauge
functions encode the
coordinatization.\renewcommand{\thefootnote}{\fnsymbol{footnote}}\setcounter{footnote}{2}\fnsymbol{footnote}
\footnotetext[2]{Latin indices run $1,2,3$ and are lowered and raised
by $g_{ij}$ and its 3-d inverse $g^{ij}$. The time derivative will be
denoted by an overdot ($\dot{}$).}
The Einstein field equations contain both hyperbolic evolution equations
and elliptic constraint equations.
The constraint equations for vacuum in the ADM decomposition are:
\begin{eqnarray}
H = \frac{1}{2} [R - K_{ij}K^{ij} + K^2] &=& 0,
\label{eq:constraintH}
\end{eqnarray}
\begin{eqnarray}
H^i = \nabla_j \left( K^{ij} - g^{ij}K\right) &=& 0.
\label{eq:constraintK}
\end{eqnarray}
Eq. (\ref{eq:constraintH}) is known as the Hamiltonian constraint;
Eq. (\ref{eq:constraintK}) is the momentum constraint (three components).
Here $R$ is the 3-d Ricci scalar constructed from the 3-metric, and
$\nabla_j$ is the torsion-free
3-d covariant derivative compatible with $ g_{ij}$.
Initial data must satisfy these constraint
equations; one may not freely specify all components of $g_{ij}$ and
$K_{ij}$.
One of the evolution equations from the
Einstein system is
\begin{equation}
\dot g_{ij} = -2\alpha K_{ij} +\nabla_j\beta_{i} + \nabla_i\beta_{j},
\label{eq:gdot}
\end{equation}
and this will prove useful in our data setting procedure below.
\section{Data Form}
\label{sec:DataForm}
Solutions of the initial value problem have been addressed in the past by
several groups,
~\cite{YP,MY,York,Mathews,Bowen+York,Choquet,Lichnerowicz,Hawley+Matzner},
~\cite{Cook,Pfeiffer,Baumgarte,GGB1,GGB2,GGB3,Cook2,Pfeiffer2,CookReview,Shoemaker,Huq}.
It is the case that
until recently, most data have been constructed assuming that the initial
3-space is conformally flat. The method most commonly used is the approach
of Bowen and York~\cite{Bowen+York}, which chooses maximal spatial
hypersurfaces (for which the quantity $K \equiv {K^a}_a =0$), as well as
taking the spatial 3-metric to be conformally flat.
The chief advantage of the maximal spatial hypersurface approach is
numerical simplicity, as this choice decouples the Hamiltonian constraint
from the momentum constraint equations. Besides, for $K = 0$, if the
conformal background is flat Euclidean 3-space, then there are known
$K_{ij}$ that analytically solve the momentum constraint~\cite{Bowen+York}.
The constraints then reduce to one elliptic equation for the conformal
factor $\phi$.
Very recently substantial success has been achieved evolving
Bowen-York data using ``puncture'' methods \cite{Brownsville,Goddard}.
However, we generally use an alternative choice of background
3-metric, which is based on a metric constructed from single black hole Kerr
Schild data\cite{KerrSchild}; multiple black holes are constructed by a
superposition in the conformal background. It has been shown that this
process, while not exact for multiple black hole data, does contain much of
the physics. It clearly {\it is} exact for a
single black hole, even a spinning or
boosted black hole~\cite{Matzner:1999pt}.
\subsection{Kerr Schild Black Holes}
\label{subsec:KSform}
The Kerr-Schild~\cite{KerrSchild} form of a black hole solution describes
the spacetime of a single black hole with mass, $m$, and specific angular
momentum, $a = j/m$, in a coordinate system that is well behaved at the
black hole horizon:
\begin{equation}
{\rm d} s^{2} = \eta_{\mu \nu}\,{\rm d} x^{\mu}\, {\rm d} x^{\nu}
+ 2H(x^{\alpha}) l_{\mu} l_{\nu}\,{\rm d} x^{\mu}\,{\rm d}
x^{\nu},
\label{eq:1}
\end{equation}
where $\eta_{\mu \nu}$ is the metric of flat space, $H$ is a scalar
function of $x^\mu$, and $l_{\mu}$ is an (ingoing) null vector, null
with respect to both the flat metric and the full metric,
\begin{equation}
\eta^{\mu \nu} l_{\mu} l_{\nu} = g^{\mu \nu} l_{\mu} l_{\nu} = 0.
\label{eq:2}
\end{equation}
Comparing the Kerr-Schild metric with the ADM
decomposition~\eref{eq:admMetric}, we find that the $t=\hbox{\rm constant}$
3-space metric is: $g_{ij} = \delta_{ij} + 2 H l_i l_j$.
Further, by comparison to the ADM form, we have
\begin{equation}
\beta_i = 2 H l_0 l_i,
\label{eq:beta_ks}
\end{equation}
and
\begin{equation}
\alpha = \frac{1}{\sqrt{1 + 2 H l_0^2}}.
\end{equation}
Explicit forms of $H(x^\mu)$ and $l_\alpha(x^\nu)$ for Kerr black holes
are given in a number of references.
See \cite{KerrSchild},\cite{Matzner:1999pt},\cite{Marronetti}.
Many details of the algebraic manipulation of the Kerr-Schild
form are found in reference \cite{Huq}.
The extrinsic curvature can be computed from Eq.(\ref{eq:gdot}):
\begin{equation}
K_{ij} = \frac{1}{2\alpha}[\nabla_j\beta_{i} + \nabla_i\beta_{j}
- \dot g_{ij}] \ .
\label{eq:k_ks}
\end{equation}
Each term on the right hand side of this equation is known analytically; in
particular, for a black hole at rest, $\dot g_{ij}=0$.
\subsection{Boosted Kerr-Schild black holes}
The Kerr-Schild metric is form-invariant under a
boost, making it an ideal metric to describe moving
black holes. A constant Lorentz transformation
(the boost velocity, ${\bf v}$, is specified with respect to the background
Minkowski spacetime) $\Lambda^{\alpha}{}_{\beta}$ leaves the
4-metric in Kerr-Schild form, with $H$ and $l_{\mu}$
transformed in the usual manner:\\
\begin{eqnarray}
x'^{\beta} &=& \Lambda^\beta{}_\alpha x^{\alpha},\\
H'(x'^{\alpha}) &=& H\left( (\Lambda^{-1})^\alpha{}_\beta
\,\,x'^{\beta}\right),\\
l'_{\delta}(x'^{\alpha}) &=& \Lambda^{\gamma}{}_{\delta}\,\,
l_{\gamma}\left((\Lambda^{-1})^\alpha{}_\beta\,\, x'^{\beta}\right) .
\label{eq:ks_boost}
\end{eqnarray}
Note that $l'_{0}$ is no longer unity. As the initial solution
is stationary, the only time dependence comes in the
motion of the center, and the full metric is stationary with a Killing
vector reflecting the boost velocity.
The boosted Kerr-Schild data exactly represent a spinning and/or moving single
black hole.
\subsection{Background data for multiple black holes}
The structure of the Kerr-Schild metric suggests a natural extension
to generate the background data for multiple black hole spacetimes.
We first choose mass and angular momentum parameters for each hole,
and compute the respective $H$ and $l^\alpha$ in the appropriate
rest frame. These quantities are then boosted in the desired direction
and offset to the chosen position in the computational frame.
The computational grid is the center of momentum frame for the two holes,
making the velocity of the second hole a function of the two
masses and the velocity of the first hole.
We compute the
individual metrics and extrinsic curvatures in the coordinate system
of the computational domain:
\begin{eqnarray}
{}_A g_{ij} &=& \eta_{ij}
+ 2~{}_A H ~{}_A l_{i} ~{}_A l_{j},\\
{}_A K_i{}^m &=& \frac{1}{2\alpha} ~{}_A g^{mj}
\left( \nabla_j ~{}_A\beta_{i} + \nabla_i~{}_A \beta_{j}
- ~{}_A \dot g_{ij}\right).
\end{eqnarray}
The pre-index $A$ labels the black holes.
Background data for $N$ holes are then constructed in superposition:
\begin{eqnarray}
\tilde{g}_{ij} &=& \eta_{ij} + \sum_A^N 2~{}_A H {}_A l_i ~{}_A l_j ,\\
\tilde{K} &=& \sum_A^N ~{}_AK_i{}^i ,\\
\tilde{A}_{ij} &=& \tilde{g}_{n(i}~~\sum_A^N \left( {}_AK_{j)}{}^n
-\frac{1}{3} \delta_{j)}{}^n ~{}_AK_k{}^k\right) .
\label{eq:ks_super}
\end{eqnarray}
A tilde ( $\tilde{}$ ) indicates a background field tensor. Notice
that we do {\it not} use the attenuation functions introduced by
Bonning et al.\cite{Bonning}.
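As an illustration of this superposition (a schematic sketch only, not the production solver), the background 3-metric of \eref{eq:ks_super} for unboosted, non-spinning holes, for which $H = m/r$ and $l_i = x_i/r$, can be assembled as follows:
\begin{verbatim}
import numpy as np

def ks_H_l(m, center, x):
    # H and spatial l_i for a single unboosted, non-spinning
    # (a = 0) Kerr-Schild hole:  H = m/r,  l_i = x_i/r
    dx = np.asarray(x, dtype=float) - center
    r = np.sqrt(np.dot(dx, dx))
    return m / r, dx / r

def superposed_3metric(holes, x):
    # background 3-metric:  g~_ij = delta_ij + sum_A 2 H_A l_i l_j
    g = np.eye(3)
    for m, center in holes:
        H, l = ks_H_l(m, center, x)
        g += 2.0 * H * np.outer(l, l)
    return g

# two equal-mass holes at x = -5m and x = +5m (units with m = 1):
holes = [(1.0, np.array([-5.0, 0.0, 0.0])),
         (1.0, np.array([ 5.0, 0.0, 0.0]))]
print(superposed_3metric(holes, [0.0, 2.0, 0.0]))
\end{verbatim}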
To give the reader a feel for how closely the Kerr-Schild superposition
data resemble a true binary black hole spacetime, in Figure \ref{fig_comp_back_out}
we provide a graph
comparing the superposed Kerr-Schild background data with
the subsequent solutions of the constraint equations (described below).
\begin{figure}
\begin{center}
\includegraphics[width=6.0in, angle=0]{comp_back_out.eps}
\end{center}
\vspace{-1.0cm}
\caption{
Comparison of background Kerr-Schild superposition data (dashed lines)
with the final output of our elliptic constraint equation solver (solid
lines).
We see that the background is quite close to the physical solution.
These particular data were generated for two holes located at $x=\pm 5m$, with
spins $a_1=a_2=0.5$, with the spin of the $x=-5m$ hole tipped by rotation about the $x$ axis by $\theta_1=7\pi / 8$,
and an excision radius of $0.75m$.
}
\label{fig_comp_back_out}
\end{figure}
\section{Generating the physical spacetime}
We will consider in this paper physical applications which use
superposed Kerr-Schild backgrounds. When multiple black holes are
present, the background superposed Kerr-Schild data described in
the previous section are not solutions of the constraints,
Eqs.~(\ref{eq:constraintH})--(\ref{eq:constraintK}). Hence they do
not constitute a physically consistent data set. A physical spacetime
can be constructed by modifying the background fields with new
functions such that the constraints {\it are} satisfied. We adopt
the conformal transverse-traceless method of York and
collaborators~\cite{YP} which consists of a conformal decomposition
and a vector potential that adjusts the longitudinal components of
the extrinsic curvature. The constraint equations are then solved
for these new quantities such that the complete solution fully
satisfies the constraints. We do not consider ${\rm tr}K=0$, nor
conformally flat, solutions.
The physical metric, $g_{ij}$, and the trace-free part of the extrinsic
curvature, $A_{ij}$, are related to the background fields through a
conformal
factor
\begin{eqnarray}
g_{ij} &=& \phi^{4} \tilde{g}_{ij}, \label{confg1} \\
A^{ij} &=& \phi^{-10} (\tilde{A}^{ij} + \tilde{(lw)}^{ij}),
\label{eq:conf_field}
\end{eqnarray}
where $\phi$ is the conformal factor, and $\tilde{(lw)}^{ij}$
will be used to cancel any possible longitudinal contribution to the
superposed background extrinsic curvature.
$w^i$ is a vector potential, and
\begin{eqnarray}
\tilde{(lw)}^{ij} \equiv \tilde{\nabla}^{i} w^{j} + \tilde{\nabla}^{j} w^{i}
- \frac{2}{3} \tilde{g}^{ij} \tilde{\nabla_{k}} w^{k}.
\label{lw}
\end{eqnarray}
The trace $K$ is taken to be a given function
\begin{equation}
K = \tilde K.
\label{tk}
\end{equation}
Writing the Hamiltonian and momentum constraint equations in terms of
the quantities in
Eqs.~(\ref{confg1})--(\ref{tk}), we obtain four coupled
elliptic equations for the fields $\phi$ and $w^i$~\cite{YP}:
\begin{eqnarray}
\tilde{\nabla}^2 \phi &=& (1/8) \big( \tilde{R}\phi
+ \frac{2}{3} \tilde{K}^{2}\phi^{5} - \nonumber \\
& & \phi^{-7} (\tilde{A}{^{ij}} + (\tilde{lw})^{ij})
(\tilde{A}_{ij} + (\tilde{lw})_{ij}) \big), \label{ell_eqs1}
\\
\tilde{\nabla}_{j}(\tilde{lw})^{ij} &=& \frac{2}{3} \tilde{g}^{ij} \phi^{6}
\tilde{\nabla}_{j} K - \tilde{\nabla}_{j} \tilde{A}{^{ij}}.
\label{ell_eqs}
\end{eqnarray}
Our outer boundary condition for $\phi$, namely
\begin{equation}
\partial_{\rho} \left( \rho (\phi - 1) \right)|_{\rho \rightarrow
\infty} = 0
\label{eq:phi_boundary}
\end{equation}
enforces $\phi \rightarrow 1 $ at $\infty$, but does not specify the size
(or sign) of the $\rho^{-1}$ term in $\phi$. (Here $\rho^2 = x^2+y^2+z^2$.) We also take as boundary
conditions for the vector $w^i$:
\begin{eqnarray}
& & \partial_\rho (\rho w^{i} n_{i}) = 0,
\label{25} \\[.12in]
& & \partial_\rho \left( \rho^{2} w^{i} (\delta_{ij} - n_{i}n_{j})
\right) = 0\,,
\label{26}
\end{eqnarray}
where $n_i$ is the outward pointing unit spatial normal.
Condition (\ref{eq:phi_boundary}) is a {\it Robin} condition
commonly used for computational conformal factor determination.
Conditions (\ref{25}) and (\ref{26}) were derived by
Bonning et al.\cite{Bonning}.
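In discrete form, \eref{eq:phi_boundary} reads $(\phi - 1) + \rho\,\partial_{\rho}\phi = 0$ at the outer boundary. A minimal one-dimensional sketch of imposing this with a first-order one-sided difference (illustrative only; in practice the condition is applied on the faces of the Cartesian domain) is:
\begin{verbatim}
import numpy as np

def apply_robin_phi(phi, rho, h):
    # impose (phi - 1) + rho * dphi/drho = 0 at the last point of a
    # radial line, using (phi[N] - phi[N-1]) / h for the derivative:
    #   phi[N] * (1 + rho[N]/h) = 1 + rho[N] * phi[N-1] / h
    rN = rho[-1]
    phi[-1] = (1.0 + rN * phi[-2] / h) / (1.0 + rN / h)
    return phi

rho = np.linspace(1.0, 15.0, 141)
h = rho[1] - rho[0]
phi = 1.0 + 2.0 / rho       # trial data with phi -> 1 at infinity
print(apply_robin_phi(phi, rho, h)[-1])
\end{verbatim}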
\section{Numerical Methods}
We first discuss the computational code and tests, and
some code limitations.
The constraint equations \eref{ell_eqs1}, \eref{ell_eqs} are solved
with a
multigrid solver~\cite{Hawley+Matzner}.
The present code is essentially the same as that described in
\cite{Hawley+Matzner},
except that it has been extended to the full set of constraint equations,
and non-flat backgrounds, and features parallel processing.
The multigrid scheme is essentially a clever means of
eliminating successive wavelength-components of the error via the
use of relaxation at multiple spatial scales. It makes use of
some sort of local averaging procedure (e.g. Gauss-Seidel relaxation).
Such relaxation is extremely
effective at eliminating short-wavelength components of the error,
or in other words, at ``smoothing'' the error (i.e., the residual,
see below). However, relaxation fails to operate efficiently
on long-wavelength components of the error (components that involve
discretization points more than a few away from the
point at which the solution is sought).
Multigrid addresses the solution repeatedly on grids of different
discretization, achieving the same efficiency at smoothing every scale.
Because the implementation of the method is described in \cite{Hawley+Matzner},
we do not repeat it here.
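For orientation only, the skeleton of one V-cycle can be sketched on a 1-d Poisson model problem (a toy sketch, not the solver of \cite{Hawley+Matzner}, which handles the coupled system \eref{ell_eqs1}--\eref{ell_eqs} on a non-flat background):
\begin{verbatim}
import numpy as np

def smooth(u, f, h, iters=3):
    # Gauss-Seidel relaxation of u'' = f, with u = 0 at both ends
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i-1] + u[i+1] - h*h*f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (u[:-2] - 2.0*u[1:-1] + u[2:]) / (h*h)
    return r

def restrict_fw(r):
    # full-weighting restriction onto a grid with twice the spacing
    rc = np.zeros((len(r) + 1) // 2)
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    return rc

def v_cycle(u, f, h):
    u = smooth(u, f, h)
    if len(u) > 5:                       # recurse to coarser grids
        rc = restrict_fw(residual(u, f, h))
        ec = v_cycle(np.zeros_like(rc), rc, 2.0*h)
        u += np.interp(np.arange(len(u))/2.0,
                       np.arange(len(ec)), ec)   # linear prolongation
    return smooth(u, f, h)

# model problem u'' = -pi^2 sin(pi x), exact solution u = sin(pi x):
n = 129; x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
u, f = np.zeros(n), -np.pi**2 * np.sin(np.pi*x)
for _ in range(8):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi*x))))  # ~ discretization error
\end{verbatim}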
\section{Multigrid with Excised regions}
In our formulation, the black holes are represented by excised regions. Because
we work in
Cartesian coordinates, and because we want a completely general
implementation,
we do not typically expect that the excision will be defined by
overlapping points on the various grids of different resolution.
Our definition of the excision region is that on each grid, the inner
boundary consists of points that lie {\it just inside}, i.e. up
to one grid point inside, the analytic location of the inner boundary, as shown in
Figure \ref{fig_define_ex_reg}.
While there are exceptional configurations such as cubic excision defined
so that the excision boundary lies on points of the coarsest grid, this
definition means that generically the size of the excision is {\it larger}
on
the finer grids.
This definition of the inner boundary affects the way in which data are
restricted from fine grids to coarse grids. Away from the inner boundary,
weighted restriction is performed, as shown in left pane of Figure
\ref{fig_restrict_scheme}. However, if any of the points used in the
weighted average lie on an inner boundary, then these points are not used
and instead a simple ``copy" operation is performed as shown in the right pane
of Figure \ref{fig_restrict_scheme}. The inner boundary points
themselves may need to be filled in on coarse grids (since on the
fine grid they may be excised), and to do this we apply a weighted
``inward extrapolation'' using a parabolic fit to surrounding fine
grid points.
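The following fragment sketches this decision rule in one dimension (an illustration only; the full point classification and the parabolic inward extrapolation of the production code are omitted):
\begin{verbatim}
import numpy as np

INTERIOR, BOUNDARY, EXCISED = 0, 1, 2

def restrict_with_excision(fine, mask):
    # coarse value at fine index 2I: weighted average (1/4,1/2,1/4)
    # of fine neighbors, unless any stencil point is a boundary or
    # excised point, in which case fall back to a direct copy
    nc = (len(fine) + 1) // 2
    coarse = np.zeros(nc)
    for I in range(nc):
        i = 2 * I
        if 0 < i < len(fine) - 1 and \
           all(mask[j] == INTERIOR for j in (i-1, i, i+1)):
            coarse[I] = 0.25*fine[i-1] + 0.5*fine[i] + 0.25*fine[i+1]
        else:
            coarse[I] = fine[i]     # copy near boundaries/excision
    return coarse

fine = np.arange(9, dtype=float)
mask = np.full(9, INTERIOR); mask[0] = mask[-1] = BOUNDARY
print(restrict_with_excision(fine, mask))
\end{verbatim}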
\begin{figure}
\centering
\includegraphics[width=5.5in]{define_ex_reg.eps}
\caption{
Example of how the inner boundary is defined, showing points on a coarse
grid and a fine grid. Inner boundary points are those points which are
immediately interior to a circle of radius $r_{\rm ex}$. The large filled
circles show normal interior grid points (i.e., non-excised, non-boundary
points) on the coarse grid, and the large open circles show boundary points
on the coarse grid. The small filled and open circles show fine grid
interior points and boundary points, respectively. The small dots show
excised points on the fine and/or coarse grids, as appropriate. (Only one
quadrant of a full domain is shown in this picture for purposes of clarity.)
}
\label{fig_define_ex_reg}
\end{figure}
\begin{figure}
\centering
\centerline{\includegraphics[width=5.2in,height=1.23in]{restrict_scheme.eps}
}
\caption{1-D schematic of the restriction scheme near the inner
boundary.
The circles on the rightmost X's indicate that this is where the
Dirichlet conditions are applied, i.e., there {\em are} data on these
points.
One {\em could} use these points in weighted restriction even
in the case shown in the right panel. However, we choose never to use
these boundary data in weighted restriction, and instead do a
simple ``copy" operation. The boundary points themselves are
updated using either a direct copy,
or for coarse grid points over excised fine grid points
via an average over parabolic fits of
fine-grid data in all available directions. }
\label{fig_restrict_scheme}
\end{figure}
This scheme has been implemented in a parallel computing environment, using
{\it MPI} to communicate between processors. Each processor handles a part
(a {\it patch}) of the total domain. The patch is also logically surrounded by
``ghost zones". Because we deal with a finite-difference representation of
derivatives, the communication between processors requires the filling of
these ``ghost zones" on the borders of the patches, using values computed on
other processors, so that derivatives can be accurately computed near the
boundaries of the patches. This has implications for the way that smoothing
is handled in our simulation.
On a single processor Gauss-Seidel smoothing proceeds across the grid, and
the updates at any particular point involve some surrounding points that
have been updated and some that have not been. If the same scheme has been
implemented on two processors (say splitting the $x-$axis), the buffer
region of one patch will already have been updated when the smoother of the
other patch begins to use the equivalent points. The order and direction of
the filling of the ghost zones can lead to inconsistent behavior ({\it i.e.,} the result
will be different from the single processor result). One solution to this is
to insert ``wait" commands into the parallel code, so that processors wait
to carry out the process in the correct order. This has the effect of
slowing the execution, and loses the advantage of parallel processing. A
better approach is to use something like {\it red-black} Gauss-Seidel. (In
2-d the red-black pattern is like that on a checkerboard.) If the
differential operator involves only diagonal second derivatives (no mixed
partials) then each point is updated using only points of the opposite
color. Then all the reds can be updated before any of the blacks, and vice
versa. This ameliorates the ghost zone synchronization problem; the ghost
zones can be maintained in the correct state for every step. In this case
parallelization works as anticipated.
If the background is taken as flat space, then these conditions apply. But
we work with Kerr-Schild
forms of the metric which guarantee that there will be mixed partial
derivatives
in the operators, and the parallel synchronization problem reappears.
Our solution is to introduce what we call {\it rainbow} smoothing, in which
we make a total of {\it eight} passes (like the two passes in red-black smoothing)
over the grid, where each pass has a stride width of two over each of the
three dimensions of the grid.
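Schematically, one rainbow sweep visits the eight parity classes of grid points in turn; any two points updated in the same pass differ by an even amount in every coordinate, so neither appears in the other's finite-difference stencil, even when mixed partial derivatives are present. A sketch of the sweep ordering (the point update itself is left abstract) is:
\begin{verbatim}
from itertools import product

def rainbow_sweep(update_point, n):
    # one 'rainbow' Gauss-Seidel sweep over the interior of an n^3
    # grid: eight passes, one per parity class (ox, oy, oz), each
    # with stride 2 in every dimension
    for ox, oy, oz in product((0, 1), repeat=3):
        for i in range(1 + ox, n - 1, 2):
            for j in range(1 + oy, n - 1, 2):
                for k in range(1 + oz, n - 1, 2):
                    update_point(i, j, k)
        # in the parallel code, ghost zones are exchanged here,
        # once per pass

visits = []
rainbow_sweep(lambda i, j, k: visits.append((i, j, k)), 4)
print(len(visits))   # (4-2)^3 = 8 interior points, each visited once
\end{verbatim}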
\section{Verification of constraint Solution}
\label{sec:correctness}
To verify the solution of the discrete equations, we have examined the
code's convergence in some detail. We use a set of
completely independent ``residual evaluators"
for the full Einstein system (here applied only to the initial data),
originally constructed
by Anderson \cite{mattDiss}.
These evaluate the
Einstein tensor, working just from the metric produced by the
computational solution, to return fourth order accurate
results. They are completely different from the way
the equations are expressed in the constraint solver code.
Figure \ref{fig:ham_conv} shows
such a plot of convergence for
the Hamiltonian constraint in an equal mass binary black hole spacetime.
The holes are located at $\pm 5m$ on the $x$-axis, where $m$ is the mass of
one hole.
The elliptic equations were then solved on grids of sizes $385^3,$
$449^3,$ and $513^3,$ giving finest-grid resolutions of approximately
$m/12.8$, $m/15$, and $m/17$. We use a five-level multigrid hierarchy.
Figure \ref{fig:ham_conv} demonstrates almost perfect second order convergence, except
near the outer
boundary, where the convergence is apparently first order.
The second order convergence shows
that we have achieved a correct finite difference solution to the initial data
problem.
\begin{figure}
\begin{center}
\includegraphics[width=6.0in, angle=0]{ham_convergence.eps}
\end{center}
\vspace{-1.0cm}
\caption{
Convergence of the Hamiltonian constraint along the positive $x$ and $y$
axes.
We show values of the constraint obtained at three different finest-grid
resolutions, $385^3$, $449^3$ and $513^3$, scaled by appropriate ratios of
the mesh spacings consistent with second-order convergence. We see that
there
is good convergence everywhere except near the outer boundaries. Because of
this loss of convergence near the outer boundaries, we evaluate the ADM mass
over the surface of a cube with half-width $12m$. (In the left pane,
the vertical scale has been exaggerated in order to zoom in on the
``body'' of the domain, and cuts out the peaks immediately adjacent to
the excision regions.)
}
\label{fig:ham_conv}
\end{figure}
\section{Computational
Limitations on grid coarseness in excised black hole spacetimes}
In the examples given here, where we
work on a fixed spatial domain (say $\pm 15m$), the finest grid sizes of
$385^3$, $449^3$, $513^3$, used in the convergence test correspond to
coarsest-grid sizes of $25^3$, $29^3$, $33^3$, respectively. For the $33^3$
grid, the $\pm15m$ domain is discretized at about $1m$ resolution.
The problem of required resolution for black hole simulations has been
discussed in this context at least since the early {\it Grand Challenge}\cite{BBHGC}
efforts. Present computational resources allow much larger grid size than in
the Grand Challenge epoch, so the conflict appears at higher resolutions
and larger physical domains than previously, and we can do substantial
physics with the present configuration. Our approach will be to introduce a
multiresolution scheme to maintain required resolution near the central
``action", and allow coarsening further away. To accomplish this we are
investigating a mesh-refined multigrid, similar to that described by Brown
and Lowe\cite{brown}.
However, for the present work, we simply use very high resolution, the
highest that we can presently achieve on the computers available to us,
namely $513^3$ points using 32 processors.
\section{Spin-Spin Effects in Black Hole Interaction}
Wald \cite{WaldPRD} directly computes the force for stationary sources with
arbitrarily oriented spins. He considered a small black hole as a
perturbation in the field of a large hole. The result found
for the spin-spin contribution to the binding energy is
\begin{equation}
{E_b} = - \left( \frac{ \vec{S} \cdot \vec{S'} -
3(\vec{S} \cdot \hat{n})(\vec{S'} \cdot \hat{n})}{d\,^3} \right).
\label{BEWald}
\end{equation}
\noindent Here, $\vec{S}$, $\vec{S}'$ are the spin vectors of the
sources and $\hat{n}$ is the unit vector connecting the two sources,
and $d$ is any reasonable measure of separation that approaches
the Euclidean distance $\times (1+{\rm O}(d^{-1}))$ at large $d$
(such as the distance measured in the flat background used in the
initial data setting).
Dain~\cite{Dain}, using a definition of intrinsic mass that differs
from ours (see below), finds binding energy which agrees with Wald's
(\eref{BEWald}) at ${\rm O}(d^{-3})$.
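For later comparison with the computational results, \eref{BEWald} is straightforward to evaluate; the helper below is purely illustrative, with spins $S = ma$ (so $S = 0.5m^2$ for the $a = 0.5m$ holes used below) and $d$ the background separation:
\begin{verbatim}
import numpy as np

def wald_binding_energy(S1, S2, n_hat, d):
    # E_b = -( S1.S2 - 3 (S1.n)(S2.n) ) / d^3
    S1, S2, n = map(np.asarray, (S1, S2, n_hat))
    return -(np.dot(S1, S2) - 3.0*np.dot(S1, n)*np.dot(S2, n)) / d**3

# equal-mass holes, a = 0.5 m, separation d = 10 m along x;
# hole 2 spin along z, hole 1 spin tipped by theta in the y-z plane:
for theta in (0.0, np.pi/2.0, np.pi):
    S1 = 0.5 * np.array([0.0, np.sin(theta), np.cos(theta)])
    S2 = 0.5 * np.array([0.0, 0.0, 1.0])
    print(theta, wald_binding_energy(S1, S2, [1.0, 0.0, 0.0], 10.0))
# prints E_b = -2.5e-4 * cos(theta) in units of m
\end{verbatim}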
\subsection{Computational Spin-Spin Effects in Black Hole Binding Energy}
In order to investigate a computational implementation validating
\eref{BEWald}, we begin with a standard definition of the binding
energy for black hole interactions.
The total gravitational energy in a binary system can be computed
from the initial data using the ADM mass $M_{{\rm ADM}}$, which is
evaluated by a distant surface integral (see \eref{eq:adm_mass} below), and
gives
the Newtonian gravitational mass as measured ``at infinity". For
a measure of each hole's intrinsic mass, we use the horizon mass $M_{\rm
AH}$
defined by \eref{eq:mirr} below. Thus the binding energy, ${E_b}$, is defined as
\begin{equation}
{E_b} = M_{{\rm ADM}} - M_{\rm AH} - M'_{\rm AH}. \label{binding}
\end{equation}
The ADM mass is evaluated in an asymptotically flat region
surrounding the system of interest, and in Cartesian coordinates is given by
\begin{eqnarray}
M_{{\rm ADM}} &=& \frac{1}{16\pi} \oint \left( \frac{\partial g_{ji}}{\partial
x^{j}} - \frac{\partial g_{jj}}{\partial x^{i}} \right)
{\rm d} S^i \ .
\label{eq:adm_mass}
\end{eqnarray}
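Numerically, we evaluate this surface integral over the faces of a coordinate cube. A stripped-down sketch of such a quadrature (illustrative only: centered differences via \texttt{np.gradient}, a uniform grid, and no parallel decomposition) is:
\begin{verbatim}
import numpy as np

def adm_mass_cube(g, h, k):
    # integrate (d_j g_ji - d_i g_jj) dS^i over the six faces of the
    # cube lying k points in from the edge of an N^3 grid; g[a][b]
    # are N^3 arrays of 3-metric components, h is the grid spacing
    dg = [[np.gradient(g[a][b], h) for b in range(3)]
          for a in range(3)]             # dg[a][b][c] = d_c g_ab
    N = g[0][0].shape[0]
    total = 0.0
    for i in range(3):                   # face normal direction
        for side, sgn in ((k, -1.0), (N - 1 - k, +1.0)):
            idx = [slice(k, N - k)] * 3
            idx[i] = side
            face = tuple(idx)
            integrand = sum(dg[j][i][j][face] - dg[j][j][i][face]
                            for j in range(3))
            total += sgn * np.sum(integrand) * h * h
    return total / (16.0 * np.pi)

# check on Schwarzschild Kerr-Schild data, m = 1 (expect ~ 1):
N, L = 64, 30.0
x = np.linspace(-L, L, N); h = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
r = np.sqrt(X**2 + Y**2 + Z**2)
l = [X/r, Y/r, Z/r]
g = [[(1.0 if a == b else 0.0) + (2.0/r)*l[a]*l[b]
      for b in range(3)] for a in range(3)]
print(adm_mass_cube(g, h, 4))
\end{verbatim}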
The apparent horizon is the only structure available to measure the
intrinsic mass of a black hole.\renewcommand{\thefootnote}{\fnsymbol{footnote}}\setcounter{footnote}{3}\fnsymbol{footnote}
\footnotetext[3]{Dain \cite{Dain} considers black hole slicings
that have a second asymptotically flat infinity, and measures a
mass (an intrinsic mass for the black hole) at this second infinity.
This approach is impossible for the Kerr-Schild data we consider
because Kerr-Schild slices intersect the black hole singularity.}
Complicating this issue is the intrinsic
spin of the black hole; the relation is between horizon area and
{\it irreducible} mass:
\begin{equation}
A_{\rm H} = 16 \pi m_{irr}^2 = 8 \pi m \left( m + \sqrt{(m^2 -a^2)}\right).
\label{eq:mirr}
\end{equation}
As \eref{eq:mirr} shows, the irreducible mass is a function of both the
mass and the spin, and in general we have no completely unambiguous way
to specify the spin of the black holes in interaction. But, as was shown in
\cite{Bonning}, the spin evolves only very little until the black
holes are very close together. Further, the apparent horizon coincides
closely
with the event horizon unless the black holes have strong interaction.
Hence we assume that the individual spins
are correctly given by the spin parameters ${}_Aa$ specified in forming
the superposed Kerr-Schild background, and that the mass is that determined
by
\eref{eq:mirr} using the apparent horizon area.
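Concretely, given the apparent horizon area $A_{\rm H}$ and the assumed spin parameter $a$, \eref{eq:mirr} is easily inverted for the mass; an illustrative bisection helper (ours, not part of the horizon finder):
\begin{verbatim}
import numpy as np

def horizon_mass(A_H, a):
    # invert A_H = 8 pi m (m + sqrt(m^2 - a^2)) for m >= |a|;
    # the area is monotone increasing in m, so bisect
    area = lambda m: 8.0*np.pi*m*(m + np.sqrt(m*m - a*a))
    lo, hi = abs(a), abs(a) + np.sqrt(A_H)
    while area(hi) < A_H:
        hi *= 2.0
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if area(mid) < A_H else (lo, mid)
    return 0.5*(lo + hi)

# a Kerr hole with m = 1, a = 0.5 has
# A_H = 8 pi (1 + sqrt(0.75)); recover m:
print(horizon_mass(8.0*np.pi*(1.0 + np.sqrt(0.75)), 0.5))
\end{verbatim}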
The physical idea in determining the binding energy is that the
configuration is assembled from infinitely separated black holes,
which are initially on the $x$-axis and which initially have parallel
spins. (No energy is required to orient the coordinate system or to
adiabatically rotate the spins while the holes are infinitely separated.)
Thus these separated holes have their isolated total energy, i.e. $2m$,
for equal mass black holes.
Then one of the black
holes is adiabatically brought to a particular distance from the other,
for instance a coordinate distance of $10m$ as in some of our examples.
This is the base configuration from which our computations will start. We then
consider the change in the binding energy as we move the direction of the
spin axis of one of the black holes.
\subsection{ADM angular momentum}
Besides the mass, ADM formulae also exist for the momentum $P^{{\rm ADM}}_{k}$
and angular momentum $J^{{\rm ADM}}_{ab}$. These formul\ae\ are also
evaluated in an asymptotically flat region surrounding the system of
interest~\cite{Wald, note4}:
\begin{eqnarray}
\label{eq:adm_mom}
P^{{\rm ADM}}_{k} &=& \frac{1}{8\pi} \oint \left( K_{ki} - K^{b}{}_{b}
\delta_{ki}
\right){\rm d} S^i,\\
\label{eq:adm_ang_mom}
J^{{\rm ADM}}_{ab} &=& \frac{1}{8\pi} \oint \left( x_{a}K_{bi} - x_{b}K_{ai}
\right) {\rm d} S^i.
\end{eqnarray}
In the data we set, the total momentum is set to zero, so $P^{{\rm ADM}}_k=0$.
In general, we set data for arbitrarily spinning holes with arbitrary orbital
impact parameter, so in general the angular momentum $J^{{\rm ADM}}_{ab}$
is nonzero, and interesting. In the results presented below as
code tests, we seek initially non-moving black holes, so the total angular
momentum $J^{{\rm ADM}}_{ab}$ is simply the sum of the intrinsic angular momenta,
$\Sigma\, {}_A m\, {}_A a$.
\subsection{Computational Results}
We carried out several series of computational experiments to investigate
the spin-spin interaction. In particular we considered instantaneously
nonmoving black holes of equal
mass $m_1=m_2=m$, with equal spin parameter $a_1=a_2 = 0.5m$.
The background separation $d$ for each series was varied from $6m$ to $18m$.
For instance, we considered $d=10m$
(holes at coordinate location $x_1= -5m$, $x_2=5m$).
We varied the spin axis of one hole in
two different planes, resulting in two ``series'' of data.
The hole at $x=+5m$ (``hole number 2'') was maintained with spin $a_2$ aligned with the $z$-axis,
while the direction of the other spin $a_1$ was varied in a plane in
$\pi/8$ steps through $2 \pi$ from the $+z$-axis through the $-z$-axis and
on back to the $+z$-axis. The difference in the
two series is that in one case (the ``$yz$ series")
the spin remains in the $y$-$z$ plane; in the other (the ``$xz$ series") it
remains in the $x$-$z$ plane. These two configurations are displayed in Figure \ref{fig_xzyz}.
\begin{figure}
\centering
\includegraphics[width=5.75in]{xzyz.eps}
\caption{The two different BBH configurations investigated. In all cases, the black hole at $x=+d/2$ is held
fixed with a constant spin of $a_2=0.5$ in the $z$ direction. In the ``$xz$ series," the spin axis of the hole
at $x=-d/2$ is rotated in the $x$-$z$ plane ({\it i.e.} about the $y$ axis) by varying the angle $\theta_1$
{\it away from} the other hole (holding $\varphi_1=\pi/2$).
In the ``yz series," this axis is
rotated in the $y$-$z$ plane by varying $\theta_1$ clockwise about the positive $x$ axis.
We note that, as an historical artifact of the background generator code of \cite{Bonning}, angles are defined such
that $\varphi=0$ corresponds to the $y$ axis, {\it not} the (more typical) $x$ axis.
}
\label{fig_xzyz}
\end{figure}
The domains we used were typically $\pm 15m$, using $513^3$ grid points, and typically excising
a region of size $r_{ex}=0.9m$.
The ADM mass was evaluated on a cube with sides at $\pm 12m$ ({\it i.e.}, well inside
the ``convergence region'' shown in Figure \ref{fig:ham_conv}).
(Variations in domain size, resolution, and excision region size
were conducted to estimate the dependence of the resulting binding energy on these physically irrelevant
but computationally important parameters.
For example we conducted a series of runs with outer boundaries
at $\pm20m$ with $513^3$ points, evaluating the ADM mass at $\pm 17m$.)
The apparent horizon areas were
determined using Thornburg's horizon finder \cite{ThornburgAHFinder} in the Cactus
\cite{cactus-grid, cactus-tools, cactus-review, cactus-webpages,Goodale02a}
computational toolkit, via a post-processing run on our output files.
As shown in Figures
\ref{be_vs_theta_yz}
and
\ref{be_vs_theta_xz}
below, the angular
dependence for the binding energy behavior in the $yz$ case is close to that
predicted by Wald, \eref{BEWald}.
\begin{figure}
\centering
\includegraphics[width=4in]{be_vs_theta_rex.eps}
\vspace{-0.5cm}
\caption{(Normalized) Binding energy vs. spin angle for the $yz$ series.
Present in the graph are two curves corresponding to different
excision radii $r_{ex}$. For instance a least-squares fit
to the $r_{ex}=0.9m$ curve is
${E_b/(M_{AH1}+M_{AH2})} = -0.06794 - 1.396\times10^{-4}\cos\theta$.
For the $r_{ex}=0.75m$ curve, the amplitude of the cosine is $1.381\times10^{-4}$.
This cosine corresponds to the $ \vec{S} \cdot \vec{S'}$ term in (\ref{BEWald}).
We note that changing the excision radius changes the overall constant offset of the binding energy,
but does not have a large effect on the amplitude of the spin-spin interaction.
}
\label{be_vs_theta_yz}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{be_vs_theta_phi.eps}
\vspace{-0.5cm}
\caption{(Normalized) Binding energy vs. spin angle for the $xz$ series
(``$\varphi_1=\pi/2$''),
along with the $r_{ex}=0.9m$ curve from Figure \ref{be_vs_theta_yz} shown for
comparison. Also included is a least-squares fit to the
$xz$ series:
${E_b/(M_{AH1}+M_{AH2})} = -0.0679 - 1.390\times10^{-4}\cos\theta + 2.762\times10^{-4}\sin^2\theta$.
Note that the coefficient of the $\sin^2\theta$ term is roughly twice
that of the $\cos\theta$ terms, both in this graph and in Figure \ref{be_vs_theta_yz}.
Rather than this near factor of 2 we find numerically,
considerations based on the mass quadrupole moment
would suggest a factor closer to $3/2$, as indicated by (\ref{mass_quad}).
}
\label{be_vs_theta_xz}
\end{figure}
Figure \ref{be_vs_theta_yz} contains two tests of the $yz$ series, computed
identically except for a change in the excision radius. We see that the
spin dependence of the binding energy is unchanged, but there
is an offset in the average binding energy. This binding energy offset
($0.001m$ out of $-0.07m$)
is a well known -- but small -- dependence on the inner boundary condition
in the computation of initial data sets for binary black
hole systems (see, {\it e.g.} \cite{Pfeiffer2}, \cite{Bonning}). It implies an {\it accuracy} in the binding energy
(estimated from the value of the offset)
of less than $2\%$ of the total binding energy. On the other hand, the
behavior of the spin-dependence,
the cosine curve in the binding energy, implies a {\it precision} much smaller
than the peak-to-peak amplitude of the cosine curve; we estimate $0.02\%$,
one tenth of the peak-to-peak amplitude.
However, Figure \ref{be_vs_theta_xz} for the $xz$ series (where one of the spins tips
{\it away from} the other) reminds us that there are additional physical
effects in play.
{\it The Kerr solution has a quadrupole moment arising nonlinearly from its spin.}
In terms of the Newtonian potential for Kerr:
\begin{eqnarray}
\phi = -\frac{m}{d} -\frac{3}{2 d ^3} ma^2\cos^2\Theta + ... ;
\label{mass_quad_init}
\end{eqnarray}
the quadrupole term is the $\cos^2 \Theta$ term,
where $\Theta$ is the ``viewing'' angle at which one hole ``sees'' the other.
Given our configurations in which the spins by default are perpendicular to the line of
separation, $\Theta=\pi/2$ when $\theta=0$. Thus in terms of our spin-orientation
angle $\theta$ the effect may be written as
\begin{eqnarray}
\phi = -\frac{m}{d} -\frac{3}{2 d ^3} ma^2\sin^2\theta + ... \,.
\label{mass_quad}
\end{eqnarray}
Here $d$ is a radial
coordinate, defined so that angular dependence begins in the metric
only at $O(d^{-3})$\cite{KipMoments}. Hernandez \cite{walterHernandez}
expands the asymptotic Kerr-Schild form and comes
to the same result for the quadrupole moment of the Kerr black hole.
The quadrupole $\cos^2\Theta$ effect is not evident in the $yz$ series because
the hole at $+5m$ is always ``looking" at the equator of the hole at $-5m$, i.e. at
$\Theta=\pi/2$, so there is zero effect even as the hole at $-5m$ tilts.
However, since the $xz$ series tilts the hole at $x=-5m$
away from the hole at $x=+5m$, the fixed hole ``sees" different latitudes of the rotated hole
in the $xz$ series.
Bonning et al.\cite{Bonning} showed that Kerr-Schild data correctly predicts
the Newtonian binding energy $-mm'/{d}$. The total binding in a relativistic
calculation is this ${\rm O}(d^{-1})$ term, plus Wald's ${\rm O}(d^{-3})$
spin- spin interaction, plus the quadruole terms in the potential, plus any possible ${\rm O}(d^{-2})$
contribution to the solution.
Following Wald's notation then, the complete spin dependence may be written as
\begin{eqnarray}
{E_b} = - \left( \frac{ \vec{S}_1 \cdot \vec{S}_2 -
3(\vec{S}_1 \cdot \hat{n})(\vec{S}_2 \cdot \hat{n})}{d\,^3} \right)
+ \frac{3}{2d\,^3} \left(
\frac{m_2}{m_1} [\vec{S}_1\cdot\hat{n}]^2 + \frac{m_1}{m_2} [\vec{S}_2\cdot\hat{n}]^2 \right).
\label{HVMEq}
\end{eqnarray}
(Note that, for the configurations considered in this paper, $\vec{S}_2 \cdot \hat{n} = 0$.)
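As a quick numerical check of the amplitudes discussed below (a sketch in geometric units with $m=1$):
\begin{verbatim}
# Amplitudes predicted by (HVMEq) for equal masses m1 = m2 = 1,
# spin parameters a1 = a2 = 0.5, and separation d = 10.
m1 = m2 = 1.0
a1 = a2 = 0.5
d = 10.0
S1, S2 = m1 * a1, m2 * a2
wald_amp = S1 * S2 / d**3                   # 2.5e-4: cos(theta) term
quad_amp = 1.5 * (m2 / m1) * S1**2 / d**3   # 3.75e-4: sin^2(theta) term
\end{verbatim}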
Both the Wald spin-spin term and the quadrupole moment term in the
expansion of the potential are proportional to $d^{-3}$, though this
is correct only near infinity; for close distances (such as
$d \approx 10m$ considered here) one expects deviations from nonlinear
terms in the results. In fact the angular dependence
of these terms is remarkably accurately reproduced. Table \ref{table_coeffs} shows
the first {\it four} coefficients in fits to the binding energy; the
third and fourth power coefficients are substantially below the cosine
and cosine-squared coefficients. The Wald formula would
produce an amplitude of $0.5^2/10^3 = 2.5 \times 10^{-4}$ for our
case of equal spins of $a=0.5 m$ and separation of $d=10m$; the
actual coefficient from the fit (after multiplying by the sum of the horizon masses) is
$2.97 \times 10^{-4}$. This apparent agreement is somewhat of an accident, however,
since the expected dependence of $d^{-3}$ is not present in our data, as we will
show below.
The term
arising from the quadrupole term (the cosine squared term) suggests
a coefficient of $3.75 \times 10^{-4}$ ($1.5$ times the expected
amplitude of the spin-spin term). Our fit to the experiment
(the $xz$ series) produces $5.860\times 10^{-4}$.
The Wald formula, Equation (\ref{BEWald}), predicts no
difference in the cosine term ($A_1$ in Table \ref{table_coeffs}) between the $xz$ and
$yz$ series. (The ($\vec{S}\cdot\hat{n}$) term in Eq. (\ref{BEWald})
is zero for all experiments carried out because the
black hole on the positive $x$ axis has a fixed spin direction
parallel to the $z$ axis.) This is the behavior we find; compare the coefficients $B_1$ for
$\cos\theta$ in Tables \ref{table_coeffs_yz} and \ref{table_coeffs_xz}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\varphi_1$ & d & $A_0$ & $A_1$ & $A_2$ & $A_3$ & $A_4$ \\
\hline
0 (``$yz$ series")& 10 & -0.0679461 & -1.3985e-4 & -1.07477e-6 & 3.47174e-7 & -1.68858e-7 \\
0.175 & 10 & -0.067933 & -1.3891e-4 & -1.45311e-5 & -1.39774e-6 & 4.15175e-7 \\
1.57 (``$xz$ series")& 10 & -0.0676704 & -1.3787e-4 & -2.74078e-4 & -1.55601e-6 & -2.17319e-6 \\
\hline
\end{tabular}
\caption{
Table of coefficients for curve fits of the form
${E_b/(M_{AH1}+M_{AH2})} = A_0 + A_1\cos\theta + A_2\cos^2\theta + A_3\cos^3\theta + A_4\cos^4\theta$,
for a separation of $10m$.
(Here and below, we report several significant digits for the purposes of comparison, however due
to variations resulting from excision region size and other effects, one would rightly regard only
the first two digits as significant.)
This shows that terms higher in
order than $\cos^2\theta$ do not contribute significantly. Because
of this, we do not include powers higher than two in the trigonometric basis functions.
Also, given the considerations due to the mass quadrupole moment in Eq. (\ref{mass_quad}),
{\it all subsequent curve fits in this paper use the form
${E_b/(M_{AH1}+M_{AH2})} = B_0 + B_1\cos(\theta) + B_2\sin^2\theta$.} That is, the second order term will be taken as proportional to
$\sin^2\theta$, not $\cos^2\theta$. This results in an offset of the total binding energy ($A_0$).
}
\label{table_coeffs}
\end{table}
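These fits are ordinary linear least-squares problems in the coefficients; the following Python sketch illustrates the procedure (the data array below is a placeholder, not our measured values):
\begin{verbatim}
import numpy as np

# Fit E_b/(M_AH1 + M_AH2) = B0 + B1 cos(theta) + B2 sin(theta)^2.
theta = np.arange(17) * np.pi / 8.0          # pi/8 steps through 2 pi
e_b = (-0.0679 - 1.39e-4 * np.cos(theta)
       + 2.76e-4 * np.sin(theta)**2)         # placeholder data
basis = np.column_stack([np.ones_like(theta),
                         np.cos(theta),
                         np.sin(theta)**2])
(B0, B1, B2), *_ = np.linalg.lstsq(basis, e_b, rcond=None)
\end{verbatim}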
\begin{figure}
\centering
\includegraphics[width=4in]{cutspininhalf.eps}
\vspace{-0.5cm}
\caption{
The effects of varying the spin magnitude of one of the holes.
Symbols denote data points, lines denote curve fits. Notably, the
fit for the $xz$ series with $a_1=0.5$ is
${E_b/(M_{AH1}+M_{AH2})} = -0.0679464 -1.39041\times 10^{-4}\cos\theta + 2.76249\times 10^{-4}\sin^2\theta$,
while for the $xz$ series with $a_1=0.25$ it is
${E_b/(M_{AH1}+M_{AH2})} = -0.0677621 -6.94328\times 10^{-5}\cos\theta + 7.00104\times 10^{-5}\sin^2\theta$.
Thus reducing spin $a_1$ from 0.5 to
0.25 results in a reduction of the $\cos\theta$ term by a factor of
two, while the $\sin^2\theta$ term is reduced by nearly a factor
of four.
}
\label{cutspininhalf} \end{figure}
We tested the spin-squared dependence of the $\sin^2\theta$ term
by two methods. In one we considered $a=a_1=0.25m$ for the hole at
$x=-5m$, which was then tested in an abbreviated $xz$ series, while
$a_2$ was held at $0.5$ along the positive $z$ axis for the hole
at $x=+5m$. Figure \ref{cutspininhalf} shows the result; the effect
from the quadrupole term is quadratic in the reduced spin (its
amplitude is reduced by a factor of four), while the Wald spin-spin
interaction is linear in the reduced spin and its amplitude is
reduced by a factor of two. To further test the quadrupole dependence
we considered rotating the spin of the black hole at $x=-5m$ in a
plane turned by $\varphi_1=0.175{\rm rad} \approx 10^\circ$. The
coefficients of the nonlinear curve fit are listed on the second
line of Table \ref{table_coeffs} and are plotted as interpolating
lines in Figure \ref{be_vs_theta_xz}. They have the analytically
expected dependence.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Series & $d$ & $B_0$ & $B_1$ & $B_2$ \\
\hline
$yz$ & 6 & -0.0743885 & -0.000381203 & 4.68096e-6 \\
$yz$ & 7 & -0.0731073 & -0.000275675 & 3.11827e-6 \\
$yz$ & 8 & -0.0716993 & -0.000211891 & 3.06277e-6 \\
$yz$ & 10 & -0.0679473 & -0.000139598 & 1.25165e-6 \\
$yz$ & 12 & -0.0632223 & -9.97935e-5 & -3.78722e-7 \\
$yz$ & 14 & -0.0576223 & -7.85976e-5 & -8.578e-8 \\
$yz$ & 16 & -0.0517695 & -6.45692e-5 & -3.17302e-7 \\
$yz$ & 18 & -0.0459505 & -5.52391e-5 & -1.16e-6 \\
\hline
\end{tabular}
\caption{
Table of coefficients for curve fits of the form
${E_b/(M_{AH1}+M_{AH2})} = B_0 + B_1\cos\theta + B_2\sin^2\theta$, for the $yz$ series ($\varphi_1=0$),
for various BBH separations $d$ with both spins $a_1=a_2=0.5$,
using a domain size of $\pm 15m$ and $513^3$ fine grid points. Notice that,
as expected, $B_2$ is very small for all these fits.
}
\label{table_coeffs_yz}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Series & $d$ & $B_0$ & $B_1$ & $B_2$ \\
\hline
$xz$ & 6 & -0.0743885 & -0.000381203 & 0.000431581 \\
$xz$ & 7 & -0.0731073 & -0.000275674 & 0.000357985 \\
$xz$ & 8 & -0.0716993 & -0.00021189 & 0.000316921 \\
$xz$ & 10 & -0.0679464 & -0.000139041 & 0.000276249 \\
$xz$ & 12 & -0.0632223 & -9.97932e-5 & 0.000224196 \\
$xz$ & 14 & -0.0576223 & -7.85974e-5 & 0.000193869 \\
$xz$ & 16 & -0.0517695 & -6.45689e-5 & 0.000171335 \\
$xz$ & 18 & -0.0459505 & -5.52389e-5 & 0.000146205 \\
\hline
\end{tabular}
\caption{
Table of coefficients for curve fits of the form
${E_b/(M_{AH1}+M_{AH2})} = B_0 + B_1\cos\theta + B_2\sin^2\theta$, for the $xz$ series ($\varphi_1=1.57$),
for various BBH separations $d$,
using a domain size of $\pm 15m$ and $513^3$ fine grid points.
}
\label{table_coeffs_xz}
\end{table}
Figures \ref{amp_vs_sep_const} and \ref{amp_vs_sep} show our tests
of the separation-dependence of the binding energy. We expect the
constant term $B_0$ to fall off asymptotically as $1/d$, since it
corresponds to the $M/r$ term in the Newtonian limit. Instead we
find roughly linear behavior at the largest separations we are able
to compute. The amplitudes $B_1$ and $B_2$ also fall off differently
than expected. We expect both $B_1$, which corresponds to the
cosine term in (\ref{BEWald}), and $B_2$, which corresponds to the
mass quadrupole term, to scale as $d^{-3}$. Instead we find that
$B_1$ scales as $d^{-2}$, and that $B_2$ scales no faster than
$d^{-1}$. Since we expect the constant term $B_0$ to scale as $1/d$
(although, as in Figure \ref{amp_vs_sep_const}, we see that it does
not), dividing the amplitudes $B_1$ and $B_2$ by the constant $B_0$
does not significantly illuminate the results. These results for
the separation-dependence are likely affected by the outer boundaries
of our computational domain in unphysical ways. In the future we
hope to repeat these studies with higher resolution and larger
domains, using a multi-resolution (mesh refinement) version of our
code. For the present, we conducted an additional test to measure
the effects of the outer boundary, namely we looked for unphysical
effects by computing the difference between the ADM mass and the
horizon mass for a {\em single} black hole as we rotated its spin
axis. The variation we found was on the order of $10^{-6}$, which
would appear as a horizontal line in Figure \ref{be_vs_theta_yz}.
While this provides some assurance that our mass determination
methods are functioning to some level of expectation, this (single-hole)
effect is insufficient to explain the deviation from the expected
results in the binary simulations. Future refinements and larger
domains will hopefully clarify this issue.
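The fall-off statements above can be read off from log--log regressions; for instance, for $|B_1|$ against $d$, using the values of Table \ref{table_coeffs_xz} (a sketch):
\begin{verbatim}
import numpy as np

d = np.array([6., 7., 8., 10., 12., 14., 16., 18.])
B1 = np.array([3.81203e-4, 2.75674e-4, 2.11890e-4, 1.39041e-4,
               9.97932e-5, 7.85974e-5, 6.45689e-5, 5.52389e-5])
slope, intercept = np.polyfit(np.log(d), np.log(B1), 1)
# slope comes out much closer to -2 than to the asymptotic -3
\end{verbatim}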
\begin{figure}
\centering
\includegraphics[width=4in]{amp_vs_sep_const.eps}
\caption{
Variation of binding energy vs. BBH separation $d$ (holes at $x=\pm d/2$), showing the constant term $B_0$ in the curve fits shown
in Table \ref{table_coeffs_yz}.
The circles are computed using $513^3$ grid points on a domain of $\pm 15m$,
evaluating the ADM mass on a cube of half-width $12m$.
We note that, at large separations, the curve is roughly linear, in contrast to an
expected behavior of $1/d$. We speculate that this (unphysical) effect is due to the outer
boundary of the computational domain, in particular the behavior of the ADM mass as estimated at these
rather small outer radii. However it may simply be the case that the asymptotic behavior is only evident
at larger separations.
We expect that future work using our multiresolution code with larger domains
and higher resolutions should produce better agreement with the expected behavior.
}
\label{amp_vs_sep_const}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4in]{amp_vs_sep.eps}
\vspace{-0.5cm}
\caption{
Amplitude vs. separation for the constants $B_1$ in Tables
\ref{table_coeffs_yz} and \ref{table_coeffs_xz} (circles), and $B_2$
in Table \ref{table_coeffs_xz} (squares). Rather than seeing the
expected asymptotic $d^{-3}$ fall-off for each amplitude, we find
the cosine term $B_1$ has roughly $d^{-2}$ behavior, whereas the
$\sin^2$ term $B_2$ falls off no faster than $1/d$.
}
\label{amp_vs_sep}
\end{figure}
\section{Discussion}
To the extent checked, our computational tests of the spin-spin interaction in the binding
energy of binary black hole configurations verify
the angular dependence given by Equation (\ref{BEWald})
\cite{WaldPRD}. We verified the behavior given by Wald's $-(\vec{S}
\cdot \vec{S}')$ term, but since we always kept one black hole's
spin axis perpendicular to the separation axis, we made no attempt
to observe the $\vec{S}\cdot\hat{n}$ term in Wald's formula. At
the separations and domains available, our results did not show
asymptotic $d^{-3}$ fall-off with distance.
Additionally, we find an effect due to the mass quadrupole moment
of the black holes. This results in a higher-order (sine squared)
variation with spin angle than given in Wald's formula. Thus we combine
these two effects into a general equation (\ref{HVMEq}) for spin-spin interactions
of binary black holes. However,
in this case as well, the expected asymptotic fall off of $d^{-3}$
was not evident in our solutions. Future analysis will use a new
multiresolution version of our code to further pursue questions
raised by the results here, including moving to significantly larger
separations to evaluate the asymptotic behavior of the spin-dependent
interactions, and effects due to rotation of {\em both} holes' spin axes.
We also intend to investigate the spin-orbit coupling
and its bearing on evolutions such as \cite{Brownsville2}. We are
now beginning exploration of the constrained evolution approach in
spacetimes involving single moving, and multiple interacting black
holes. We find substantial improvement from constraint solving in
every simulation.
\section*{Acknowledgments}
We thank Evan Turner, Chris Hempel and Karl Schultz of the Texas
Advanced Computing Center at the University of Texas, where the
computations were performed. This work was supported by NSF grant
PHY~0354842, and by NASA grant NNG04GL37G. Portions of this work
were conducted at the Laboratory for High Energy Astrophysics,
NASA/Goddard Space Flight Center, Greenbelt, Maryland, with support
from the University Space Research Association.
\newpage
A problem raised by Evans and Ball in the 1980's (see for instance~\cite{Ball2}), and still open in its full generality, is the following: can one approximate a planar bi-Sobolev homeomorphism with diffeomorphisms, or piecewise affine homeomorphisms, in the bi-Sobolev sense? This would be rather important for applications in the context of the non-linear elasticity. This problem and its partial solutions have an interesting history, one can see for instance the papers~\cite{BMC,IwKovOnn,DP,HP} to have an overview of what is now known.\par
In all the available results, for instance in those cited above, the authors use quite different strategies, but a common ingredient is to divide the domain in simple ones, namely, triangles or squares, and then to work on each of them. And, in particular, it is often important to approximate the value of a homeomorphism on the boundary of the triangle or square. In other words, the much simpler one-dimensional task (i.e., approximating a function defined on a segment, or on the boundary of a square) is one of the ingredients to solve the two-dimensional one, actually usually a very easy ingredient. Let us state this more precisely: we have a function $\varphi:[0,1]\to \mathbb R^2$, and we look for a piecewise linear function $\varphi_\varepsilon:[0,1]\to\mathbb R^2$, which is uniformly close to $\varphi$ and which coincides with $\varphi$ at $0$ and $1$; this is of course very easy to reach. In addition, one has often the information that $\varphi$ is $L$-biLipschitz; in this case, it would be also interesting to have an estimate on the bi-Lipschitz constant of $\varphi_\varepsilon$ (which is surely bi-Lipschitz, since it is piecewise linear). Surprisingly enough, this does not come for free; in particular, in~\cite[Lemma~5.5]{DP} it was proved that one can obtain a $4L$-biLipschitz approximating function $\varphi_\varepsilon$, and the proof was simple but not straightforward.\par
The goal of the present paper is to obtain the sharp result in this direction, namely, that it is possible to have an approximating function $\varphi_\varepsilon$ which is $(L+\varepsilon)$-biLipschitz. To state it, let us first recall the definition of the bi-Lipschitz property.
\begin{definition}
The function $f: [0,1]\to \mathbb R^2$ is said to be \emph{$L$-biLipschitz} if for every $p,\, q\in [0,1]$
\[
\frac 1 L\, |p-q|\leq |f(p)-f(q)| \leq L |p-q|\,.
\]
Notice that the second inequality is the usual $L$-Lipschitz property; through this paper, we will refer to the first inequality as the \emph{inverse $L$-Lipschitz property}.
\end{definition}
Notice that a function can be $L$-biLipschitz only if $L\geq 1$, and actually if $L=1$ then the function must be linear (and then there is no need to approximate it with piecewise linear functions!). As a consequence, we can always assume that $L>1$. We can now state our main result.
\begin{theorem}\label{main}
Let $\varphi:[0,1]\to\mathbb R^2$ be an $L$-biLipschitz function, and $\varepsilon>0$. Then there exists an $(L+\varepsilon)$-biLipschitz function $\varphi_\varepsilon:[0,1]\to\mathbb R^2$ such that
\begin{align}\label{claimmain}
\varphi_\varepsilon(0)=\varphi(0)\,, && \varphi_\varepsilon(1)=\varphi(1)\,, && \|\varphi-\varphi_\varepsilon\|_{L^\infty}\leq \varepsilon\,,
\end{align}
and $\varphi_\varepsilon$ is finitely piecewise linear on $[0,1]$.
\end{theorem}
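To see what is at stake, consider the naive construction: sample $\varphi$ on a uniform grid and interpolate linearly. This immediately gives~(\ref{claimmain}) for a fine enough grid, but offers no obvious control on the inverse Lipschitz constant between non-node points, which is the whole difficulty of the theorem. A Python sketch of this naive construction (here \texttt{phi} is any sampled curve, returning points of $\mathbb R^2$ as arrays):
\begin{verbatim}
import numpy as np

def naive_interpolation(phi, n):
    # Piecewise linear interpolation of phi at n+1 uniform nodes.
    nodes = np.linspace(0.0, 1.0, n + 1)
    vals = np.array([phi(t) for t in nodes])
    def f(x):
        k = min(int(x * n), n - 1)    # node below x
        lam = x * n - k               # position within the segment
        return (1.0 - lam) * vals[k] + lam * vals[k + 1]
    return f
\end{verbatim}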
The plan of the paper is very simple. Sections~\ref{sect2} and~\ref{sect3} contain the main ingredients of the proof, namely, the study of how one can approximate a function near Lebesgue points for $\varphi'$, and the study of how to treat the remaining small intervals. These two ingredients will then be put together in Section~\ref{sect4}, which contains the proof of Theorem~\ref{main}. Finally, Section~\ref{sect5} is devoted to generalizing the result to the case of closed curves, that is, instead of functions defined on $[0,1]$ we consider functions defined on $\mathbb S^1$; this is obtained in Theorem~\ref{main2}.
\subsection{Notation}
In this paper we will use very little notation. We list the main things here for the sake of clarity. The length-measure is denoted by ${\mathcal H}^1$. Given two points $x,\, y\in \mathbb R^2$, we denote by $xy$ the segment joining them; depending on the situation, for the ease of notation we denote its length either by $|y-x|$, or by the quicker symbol $\segm{xy}$, or also by ${\mathcal H}^1(xy)$. Once a curve $\gamma:[a,b]\to\mathbb R^2$ is fixed, the arc between two points $P$ and $Q$ is denoted by $\arc{PQ}$, and its length is then ${\mathcal H}^1(\arc{PQ})$. In particular, for any $a\leq x<y\leq b$, $\arc{\gamma(x)\gamma(y)}$ is the arc connecting $\gamma(x)$ to $\gamma(y)$. Finally, given any three points $x,\,y,\, z\in\mathbb R^2$, we write $\angle xyz\in [0,\pi]$ to denote the angle between the segments $xy$ and $yz$.
\section{The ``Lebesgue intervals''\label{sect2}}
In this section we show that, on a small interval around a Lebesgue point for $\varphi'$, it is possible to replace the function $\varphi$ with a linear one. Since Rademacher Theorem ensures that almost every point of $[0,1]$ is a Lebesgue point for $\varphi'$, $\varphi$ being a Lipschitz function, we will eventually be able to repeat this argument on a large number of non-intersecting intervals which fill a big portion of $[0,1]$. In the end, we can prove the following result.
\begin{prop}\label{step1}
Let $\varphi:[0,1]\to \mathbb R^2$ be an $L$-biLipschitz function, and let $\varepsilon>0$. Then there exists an $(L+\varepsilon)$-biLipschitz function $\varphi_\varepsilon:[0,1]\to\mathbb R^2$ such that~(\ref{claimmain}) holds true, and $\varphi_\varepsilon$ is finitely piecewise linear on a finite union of intervals $A\subseteq [0,1]$ such that $|[0,1]\setminus A|\leq \varepsilon$.
\end{prop}
As we said above, the main brick to prove this result concerns the modification of $\varphi$ on a single small interval. Before stating it, we need the following piece of notation.
\begin{definition}\label{varphist}
Let $\varphi: [0,C]\to \mathbb R^2$ be a function, and let $0\leq s < t \leq C$. We set $\varphi_{st}:[0,C]\to \mathbb R^2$ the function defined as
\[
\varphi_{st}(x):= \left\{\begin{array}{ll}
\varphi(x) & \hbox{if $x\notin (s,t)$}\,,\\
\begin{aligned}\varphi(s) + \frac{x-s}{t-s} \, \big(\varphi(t)-\varphi(s)\big)\end{aligned}\qquad & \hbox{if $x\in (s,t)$}\,.
\end{array}\right.
\]
Moreover, we call $t^+=s+|\varphi(t)-\varphi(s)|/L$ and $\varphi_{st}^+:[0,C-(t-t^+)]\to \mathbb R^2$ the function
\[
\varphi_{st}^+(x):= \left\{\begin{array}{ll}
\varphi(x) & \hbox{if $x\leq s$}\,,\\
\begin{aligned}\varphi(s) + \frac{L(x-s)}{|\varphi(t)-\varphi(s)|} \, \big(\varphi(t)-\varphi(s)\big)\end{aligned}\qquad & \hbox{if $s<x<t^+$}\,,\\
\varphi(x+t-t^+) & \hbox{if $t^+\leq x\leq C-(t-t^+)$}\,.
\end{array}\right.
\]
\end{definition}
In words, the function $\varphi_{st}$ coincides with $\varphi$ outside the interval $(s,t)$, while the curve $\varphi$ in $(s,t)$ is replaced by the segment connecting $\varphi(s)$ to $\varphi(t)$. The function $\varphi_{st}^+$ behaves in the very same way, except that the segment is parametrized at the (maximal possible) speed $L$.
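A Python sketch of Definition~\ref{varphist} (here \texttt{phi} is a callable returning points of $\mathbb R^2$ as arrays; $\varphi(s)\neq\varphi(t)$ is guaranteed by injectivity):
\begin{verbatim}
import numpy as np

def phi_st(phi, s, t):
    # Replace phi on (s, t) by the segment from phi(s) to phi(t).
    def f(x):
        if x <= s or x >= t:
            return phi(x)
        lam = (x - s) / (t - s)
        return phi(s) + lam * (phi(t) - phi(s))
    return f

def phi_st_plus(phi, s, t, L):
    # Same segment, traversed at the maximal speed L; the domain
    # shrinks from [0, C] to [0, C - (t - t_plus)].
    chord = phi(t) - phi(s)
    t_plus = s + np.linalg.norm(chord) / L
    def f(x):
        if x <= s:
            return phi(x)
        if x < t_plus:
            return phi(s) + L * (x - s) * chord / np.linalg.norm(chord)
        return phi(x + (t - t_plus))
    return f, t_plus
\end{verbatim}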
\begin{lemma}\label{Lebesgue}
Let $\varphi:[0,1]\to \mathbb R^2$ be an $L$-biLipschitz function, and let $\varepsilon>0$ be small enough. For any $x\in (0,1)$ which is a Lebesgue point for $\varphi'$, there exists $\bar\ell=\bar\ell(x)>0$ such that, for any $\ell\leq\bar\ell$, there is a set $I_\ell(x)\subseteq (x-\ell,x+\ell)$ with
\begin{align}\label{lungIpm}
\big|I_\ell(x)\big| \geq (2-\varepsilon) \ell\,,
\end{align}
so that for every $s<t\in I_\ell(x)$ the function $\varphi_{st}$ is $(L+\varepsilon)$-biLipschitz and satisfies $\|\varphi-\varphi_{st}\|_{L^\infty}<\varepsilon$. Moreover, for every $s \in I_\ell(x)$ and $t_1,\, t_2 \in (x-\ell/\varepsilon,x+\ell/\varepsilon)$ with $t_1<s<t_2$, the directions of the segments $\varphi(t_1)\varphi(s)$ and $\varphi(s)\varphi(t_2)$ coincide up to an error $2\varepsilon$.
\end{lemma}
In this section we will first show Lemma~\ref{Lebesgue}, and then use it as a tool to obtain Proposition~\ref{step1}.
\proofof{Lemma~\ref{Lebesgue}}
We divide the proof in three steps for the sake of clarity.
\step{I}{The $L$-Lipschitz property of $\varphi_{st}$ and a uniform estimate for $\varphi-\varphi_{st}$.}
In this step we show that $\varphi_{st}$ is $L$-Lipschitz for any choice of $s,t$ in $[0,1]$, and we give an estimate for $\|\varphi-\varphi_{st}\|_{L^\infty}$. Let us fix $0\leq s\leq t\leq 1$ and take two arbitrary points $y\neq z\in [0,1]$: we have to check that
\[
\frac{|\varphi_{st}(y)-\varphi_{st}(z)|}{|y-z|} \leq L\,.
\]
This is clearly true if both $y,\,z \in [0,1]\setminus (s,t)$, since in this case $\varphi_{st}=\varphi$ at both $y$ and $z$; on the other hand, if both $y,\, z \in [s,t]$, then by definition
\begin{equation}\label{bothsides}
\frac{|\varphi_{st}(y)-\varphi_{st}(z)|}{|y-z|}
=\frac{|\varphi_{st}(s)-\varphi_{st}(t)|}{|s-t|}
=\frac{|\varphi(s)-\varphi(t)|}{|s-t|} \leq L\,.
\end{equation}
Finally, let us suppose that one of the points $y$ and $z$ belongs to $(s,t)$, and the other one to $[0,1]\setminus [s,t]$; by symmetry, we can assume $s<y<t<z$. Thus, by the above observations and by the triangular inequality we have
\[
|\varphi_{st}(y)-\varphi_{st}(z)| \leq
|\varphi_{st}(y)-\varphi_{st}(t)| + |\varphi_{st}(t)-\varphi_{st}(z)|
\leq L |y-t| + L |t-z| = L |y-z|\,.
\]
Concerning the uniform estimate, it says that
\begin{equation}\label{stimaLinfinito}
\|\varphi-\varphi_{st}\|_{L^\infty}\leq 2L |t-s|\,.
\end{equation}
Indeed, calling for brevity $d=|t-s|$, for any $x\in [s,t]$ one has $|\varphi(x)-\varphi(s) | \leq L |x-s| \leq L d$. As an immediate consequence, for any $y\in [s,t]$ we also get $|\varphi_{st}(y) - \varphi(s)| \leq Ld$, hence in turn $\|\varphi_{st}-\varphi\|_{L^\infty}\leq 2Ld$, that is, (\ref{stimaLinfinito}).
\step{II}{Definition of $\ell(x)$ and $I_\ell(x)$ and the estimate on the directions.}
Let $x$ be a Lebesgue point for $\varphi'$. By definition, for every $\delta>0$ there exists a strictly positive constant $\bar h=\bar h(x)<1/(4L)$ such that, for any $h<\bar h$,
\begin{equation}\label{Lebcond}
\hbox{\ }\Xint{\hbox{\vrule height -0pt width 10pt depth 1pt}}_{x-h}^{x+h} |\varphi'(z) - \varphi'(x)|dz < \delta\,.
\end{equation}
Let us now assume for simplicity that $\varphi'(x)$ is an horizontal vector, and for any $p<q$ in $(x-h,x+h)$, let us call $\tau_{pq}\in\mathbb S^1$ the direction of the segment $\varphi(p)\varphi(q)$. Since $|\varphi'(x)|\geq 1/L$, we immediately obtain that for any interval $(p,q)\subseteq (x-h,x+h)$ the following holds,
\begin{align}\label{predefofA}
M(p,q):=\hbox{\ }\Xint{\hbox{\vrule height -0pt width 10pt depth 1pt}}_p^q |\varphi'(z)-\varphi'(x)|\, dz < \frac \varepsilon {2L} && \Longrightarrow && |\tau_{pq}| < \varepsilon\,.
\end{align}
We want now to find a particular set $A\subseteq (x-3h,x+3h)$ such that
\begin{equation}\label{defofA}
M(p,q)< \frac \varepsilon{2L} \qquad \forall\, p,\, q \in (x-h,x+h):\, (p,q)\notin A\times A\,.
\end{equation}
Notice that we are asking $M(p,q)$ to be small as soon as \emph{at least one} between $p$ and $q$ belongs to $(x-h,x+h)\setminus A$. To define $A$, let us start simply by letting $A=\emptyset$ if $M(p,q)<\varepsilon/(2L)$ is true for every pair $p,\,q\in (x-h,x+h)$, so that~(\ref{defofA}) trivially holds.\par
Otherwise, let $p_1< q_1\in (x-h,x+h)$ be two points maximizing ${\mathcal H}^1(pq)$ among all the pairs for which $M(p,q)\geq \varepsilon/(2L)$: notice that this is possible by the fact that $\varphi'$ is an $L^1$ function on the compact interval $[x-h,x+h]$. Then, let us define $I_1 =(p_1^-,q_1^+)$, being
\begin{align}\label{def-+}
p_1^- = p_1 - (q_1 - p_1) \,, && q_1^+ = q_1 + (q_1 - p_1)\,.
\end{align}
Notice that by construction $I_1\subseteq (x-3h,x+3h)$. Now, if~(\ref{defofA}) is satisfied with $A=I_1$ we stop here, otherwise let $p_2<q_2\in (x-h,x+h)$ be two points maximizing ${\mathcal H}^1(pq\setminus I_1)$ among the pairs for which $M(p,q)\geq \varepsilon/(2L)$, and let $I_2=(p_2^-,q_2^+)$ where $p_2^-$ and $q_2^+$ are defined as in~(\ref{def-+}). Notice that, by definition, it is possible that $p_2$ or $q_2$ belong to $I_1$, but the intervals $(p_1,q_1)$ and $(p_2,q_2)$ are surely disjoint. Indeed, by the maximality in the definition of $p_1$ and $q_1$ we have that ${\mathcal H}^1(p_2q_2)\leq {\mathcal H}^1(p_1q_1)$; as a consequence, the intervals $(p_1,q_1)$ and $(p_2,q_2)$ could intersect only if both $p_2$ and $q_2$ belong to $I_1$: but then, ${\mathcal H}^1(p_2q_2\setminus I_1)=0$, against the maximality in the definition of $p_2$ and $q_2$. Moreover, as before, $I_2\subseteq (x-3h,x+3h)$. We continue our definition of the intervals $I_j$ recursively, being at any step $p_j<q_j\in (x-h,x+h)$ two points maximizing ${\mathcal H}^1\big(pq\setminus \cup_{i=1}^{j-1} I_i\big)$ among the pairs for which $M(p,q)\geq \varepsilon/(2L)$, noticing that the different intervals $(p_j,q_j)$ are disjoint, and stopping the construction if $A=\cup_{i=1}^j I_i$ satisfies~(\ref{defofA}).\par
Thus, either we stop after finitely many steps, and this means that~(\ref{defofA}) holds true being $A$ a finite union of intervals, or we end up with a sequence of intervals $I_j=(p_j^-,q_j^+),\, j\in\mathbb N$. Since all the different ``internal intervals'' $(p_j,q_j)$ are disjoint, the sum of the lengths is bounded, hence $|I_j|\to 0$ when $j\to \infty$. As a consequence, we can easily check that~(\ref{defofA}) holds true by setting
\[
A:= \Bigg\{ z\in (x-3h,x+3h):\, \liminf_{\nu\to 0} \frac{|(z-\nu,z+\nu)\cap \cup_j I_j |}{2\nu}> 0\Bigg\}\,,
\]
that is, $A$ is the set of points having strictly positive density with respect to $\cup_j I_j$. To do so, let us assume the existence of $p<q \in (x-h,x+h)$ such that at least one between $p$ and $q$ does not belong to $A$, but $M(p,q)\geq \varepsilon/(2L)$. We can immediately notice that
\begin{equation}\label{nullmeasure}
{\mathcal H}^1\big(pq \setminus \cup_j I_j \big)=0\,:
\end{equation}
indeed, if the above measure were some quantity $\xi>0$, then the fact that the interval $(p,q)$ was not chosen at the $j$-th step gives that
\[
{\mathcal H}^1\big(p_jq_j\setminus \cup_{i=1}^{j-1} I_i\big) \geq
{\mathcal H}^1\big(pq\setminus \cup_{i=1}^{j-1} I_i\big) \geq
{\mathcal H}^1\big(pq\setminus \cup_{i\in\mathbb N} I_i\big)=\xi\,,
\]
hence in particular $|I_j|\geq \xi$ for every $j$, while we have already noticed that $|I_j|\to 0$. On the other hand, (\ref{nullmeasure}) implies that both $p$ and $q$ have at least density $1/2$ for $\cup_j I_j$, so they both belong to $A$, against the assumption. Hence, the validity of~(\ref{defofA}) has been established.\par
We define then $\ell=\tilde\varepsilon h$ for some $\tilde\varepsilon=\tilde\varepsilon(\varepsilon,L) < \varepsilon$ to be specified later, and we set
\[
I_\ell(x) = (x-\ell,x+\ell) \setminus A\,.
\]
Keep in mind that, since $h$ is any positive constant smaller than $\bar h(x)$, then also $\ell$ can be chosen as any positive constant smaller than $\bar\ell(x)=\tilde \varepsilon \bar h(x)$. To conclude this step, we give an estimate of the length of $A$, namely,
\[\begin{split}
|A|&\leq \sum_{j=1}^{+\infty} |I_j|
= 3 \sum_{j=1}^{+\infty} {\mathcal H}^1(p_jq_j)
\leq \frac{6L}\varepsilon\, \sum_{j=1}^{+\infty} \int_{p_j}^{q_j} |\varphi'(z)-\varphi'(x)| \,d z\\
&\leq \frac{6L}\varepsilon\, \int_{x-h}^{x+h} |\varphi'(z)-\varphi'(x)| \, d z
<\frac{12Lh \delta}\varepsilon <h\varepsilon\tilde\varepsilon =\ell\varepsilon\,,
\end{split}\]
where we have used the definition of $A$, the fact that $M(p_j,q_j)\geq \varepsilon/(2L)$ for every $j\in\mathbb N$, the fact that all the intervals $(p_j,q_j)$ are disjoint, and~\eqref{Lebcond}, and where the last inequality holds true as soon as $\delta\leq \varepsilon^2\tilde\varepsilon/(12L)$. As a consequence, the validity of~(\ref{lungIpm}) follows.\par
To conclude this step, we take a point $s\in I_\ell(x)$ and two points $t_1,\,t_2 \in (x-h,x+h)$ with $t_1<s<t_2$. Applying~(\ref{defofA}) to both pairs $(t_1,s)$ and $(s,t_2)$, and keeping in mind~(\ref{predefofA}), we get that both the segments $\varphi(t_1)\varphi(s)$ and $\varphi(s)\varphi(t_2)$ are horizontal up to an error $\varepsilon$, thus in turn the two directions coincide up to an error $2\varepsilon$. Notice that, since $(x-\ell/\varepsilon,x+\ell/\varepsilon)\subseteq (x-h,x+h)$, in particular we have proved the last assertion of the claim about the directions.
\step{III}{The bi-Lipschitz property and the $L^\infty$ estimate for $\varphi_{st}$.}
To conclude the proof we only have to check that, whenever $x$ is a Lebesgue point for $\varphi'$ and the points $s$ and $t$ are in $I_\ell(x)$, the function $\varphi_{st}$ is $(L+\varepsilon)$-biLipschitz and satisfies $\|\varphi-\varphi_{st}\|_{L^\infty} < \varepsilon$.\par
The $L^\infty$ estimate comes directly by Step~I, keeping in mind~(\ref{stimaLinfinito}) and since by construction $2L|t-s| \leq 4\ell L < 4\varepsilon h L < \varepsilon$; moreover, Step~I ensures also the Lipschitz property, even with constant $L$ instead of $(L+\varepsilon)$: as a consequence, we only have to take care of the inverse Lipschitz inequality. In other words, we take $y,\, z\in [0,1]$ and we have to check that
\begin{equation}\label{biLip}
|\varphi_{st}(y)-\varphi_{st}(z) | \geq \frac {|y-z|}{L+\varepsilon} \,.
\end{equation}
If $y$ and $z$ are both in $[0,1]\setminus (s,t)$, then~(\ref{biLip}) is true --with $L$ in place of $L+\varepsilon$-- because $\varphi=\varphi_{st}$ at both $y$ and $z$, while if they are both in $[s,t]$ then the validity of~(\ref{biLip}) --again with $L$ in place of $L+\varepsilon$-- can be obtained exactly as in~(\ref{bothsides}). Without loss of generality, let us then consider the case when $s<y<t<z$, which we further subdivide in two cases.\par
If $z \in (x-h,x+h)$ then, as observed at the end of Step~II, the angle $\theta=\angle{\varphi(y)}{\varphi(t)}{\varphi(z)}$ is at most $2\varepsilon$. Recalling that the validity of~(\ref{biLip}) with $L$ in place of $L+\varepsilon$ is already known for both the pairs $(y,t)$ and $(t,z)$, we have then
\[\begin{split}
|\varphi_{st}(y)-\varphi_{st}(z)| &\geq \cos(\theta/2) \big(|\varphi_{st}(y)-\varphi_{st}(t)|+|\varphi_{st}(t)-\varphi_{st}(z)|\big)\\
&\geq \cos \varepsilon \, \frac{|y-t|+|t-z|}L
\geq \frac{|y-z|}{L+\varepsilon}\,,
\end{split}\]
which is valid up to take $\varepsilon$ small enough.\par
Finally, assume that $z>x+h$: in this case it is enough to observe that, also by~(\ref{stimaLinfinito}),
\[\begin{split}
\frac{|\varphi_{st}(y)-\varphi_{st}(z)|}{|y-z|}&=
\frac{|\varphi_{st}(y)-\varphi(z)|}{|y-z|}
\geq \frac{|\varphi(y)-\varphi(z)|}{|y-z|} - \frac{\|\varphi_{st}-\varphi\|_{L^\infty}}{|y-z|}\\
&\geq \frac 1L - \frac{4 L \ell}{h-\ell}
= \frac 1L - \frac{4 L \tilde\varepsilon}{1-\tilde\varepsilon}
> \frac 1{L+\varepsilon}\,,
\end{split}\]
up to have chosen $\tilde\varepsilon=\tilde\varepsilon(\varepsilon, L)$ small enough. Thus, the estimate~(\ref{biLip}) has been proved in any case and the proof is concluded.
\end{proof}
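All the $(L+\varepsilon)$ bounds appearing in this proof can be tested numerically; the following brute-force Python sketch estimates the best biLipschitz constant of a sampled (injective) curve on a grid:
\begin{verbatim}
import numpy as np

def bilipschitz_constant(phi, n=200):
    # Smallest L such that phi is L-biLipschitz on the sample grid.
    x = np.linspace(0.0, 1.0, n)
    pts = np.array([phi(t) for t in x])
    best = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pts[i] - pts[j]) / (x[j] - x[i])
            best = max(best, r, 1.0 / r)
    return best
\end{verbatim}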
\begin{definition}
Given an interval $J=(a,b)\subseteq [0,1]$ and $\varepsilon>0$, we call \emph{central part of $J$} the interval $J^\varepsilon$ given by
\[
J^\varepsilon := \bigg(\frac{a+b}2 - \frac \varepsilon 2\,(b-a), \frac{a+b}2 + \frac \varepsilon 2\,(b-a)\bigg)\,.
\]
Moreover, we say that $J$ is \emph{$\varepsilon$-admissible} if there exists $x\in J^\varepsilon$ such that $\bar{\ell}(x) > (b-a)/2$.
\end{definition}
\proofof{Proposition~\ref{step1}}
For any $N\in\mathbb N$, we write $[0,1]$ as the essentially disjoint union of the intervals $J_m = \big(m/N, (m+1)/N\big)$, with $0\leq m < N$. Moreover, we let $\tilde \varepsilon=\tilde\varepsilon(\varepsilon,L)<\varepsilon$ be a small constant, to be specified later. We split the proof in three steps for clarity.
\step{I}{A piecewise linear $(L+\tilde\varepsilon)$-biLipschitz function $\varphi_m$ on each $\tilde\varepsilon$-admissible interval.}
Let us start by considering an interval $J_m$ which is $\tilde\varepsilon$-admissible. Then, there exists a Lebesgue point $x_m\in J_m^{\tilde\varepsilon}$ for $\varphi'$ satisfying $\bar{\ell}(x_m)>1/2N$. Let now $\ell= {\rm dist} (x_m,[0,1]\setminus J_m)$: of course $\ell \leq 1/2N<\bar\ell(x_m)$, hence we can apply Lemma~\ref{Lebesgue} with $\tilde\varepsilon$ in place of $\varepsilon$, and get two points
\begin{align*}
x_m^- \in \bigg(\frac m N, \frac mN+\frac{2\tilde\varepsilon}N\bigg)\,, &&
x_m^+ \in \bigg(\frac {m+1}N-\frac{2\tilde\varepsilon}N,\frac {m+1}N \bigg)
\end{align*}
such that the function $\varphi_m=\varphi_{x_m^-x_m^+}$ of Definition~\ref{varphist} is $(L+\tilde\varepsilon)$-biLipschitz and satisfies $\|\varphi-\varphi_m\|_{L^\infty}< \tilde\varepsilon<\varepsilon$. Notice that $\varphi_m$ is piecewise linear on a subset of $J_m$ having length at least $(1-4\tilde\varepsilon)/N$. We underline now another $L^\infty$ estimate which holds for $\varphi-\varphi_m$, which will be needed later; namely, since $\varphi$ and $\varphi_m$ are bi-Lipschitz and they coincide at $x_m^-$, then for any $y\in (x_m^-,x_m^+)$ we have
\begin{equation}\label{betterest}
|\varphi_m(y)-\varphi(y)| \leq |\varphi_m(y)-\varphi_m(x_m^-)| + |\varphi(x_m^-)-\varphi(y)|
\leq (2L+\tilde\varepsilon)(y-x_m^-) < \frac{3L}N\,.
\end{equation}
\step{II}{The length of the non $\tilde\varepsilon$-admissible intervals $J_m$ is small.}
Let us consider an interval $J_m$ which is not $\tilde\varepsilon$-admissible. By definition, this means that no Lebesgue point $x$ in $J_m^{\tilde\varepsilon}$ satisfies $\bar{\ell}(x)>1/2N$, or equivalently that $J_m^{\tilde\varepsilon}$ is entirely contained in
\[
A_N=\bigg\{x\in [0,1]:\, \hbox{either $x$ is not a Lebesgue point for $\varphi'$, or } \bar{\ell}(x)\leq \frac 1{2N}\bigg\}\,.
\]
As a consequence, the union of the intervals which are not $\tilde\varepsilon$-admissible has length at most $|A_N|/\tilde\varepsilon$: hence, since $\varepsilon>0$ is fixed and since $\tilde\varepsilon$ will ultimately depend only on $\varepsilon$ and $L$, by Rademacher Theorem we can select $N\gg 1$ such that this union is as small as we wish.
\step{III}{Definition of the function $\varphi_{\varepsilon}$.}
We are now in position to define the desired function $\varphi_\varepsilon$. More precisely, we let $\varphi_\varepsilon=\varphi_m$ in every $\tilde\varepsilon$-admissible interval $J_m$, and $\varphi_\varepsilon=\varphi$ on the other intervals; thus, $\varphi_\varepsilon$ coincides with $\varphi$ on every interval which is not $\tilde\varepsilon$-admissible, as well as in the ``external'' portion $J_m\setminus (x_m^-,x_m^+)$ of the $\tilde\varepsilon$-admissible intervals.\par
First of all, observe that $\varphi_\varepsilon$ is piecewise linear on the union of the intervals $(x_m^-,x_m^+)$, hence --by Steps~I and~II-- on a portion of $[0,1]$ having measure larger than $1-5\tilde\varepsilon$ (thus in turn larger than $1-\varepsilon$ if $\tilde\varepsilon<\varepsilon/5$) as soon as $N\gg 1$.\par
Second, by construction we have $\varphi_\varepsilon(0)=\varphi(0)$ and $\varphi_\varepsilon(1)=\varphi(1)$; moreover, since in every $\tilde\varepsilon$-admissible interval $J_m$ one has $\|\varphi-\varphi_\varepsilon\|_{L^\infty}=\|\varphi-\varphi_m\|_{L^\infty}<\tilde\varepsilon$, while on each non $\tilde\varepsilon$-admissible interval one has $\varphi_\varepsilon=\varphi$, the $L^\infty$ estimate and thus the whole (\ref{claimmain}) has been established.\par
To conclude, we have only to check the $(L+\varepsilon)$-biLipschitz property of $\varphi_\varepsilon$. To do so, having fixed two points $y<z$ in $[0,1]$, we need to show that
\begin{align}\label{doubleineq}
|\varphi_\varepsilon(y) -\varphi_\varepsilon(z)| \leq (L+\varepsilon) |y-z|\,, && |\varphi_\varepsilon(y) -\varphi_\varepsilon(z)| \geq \frac 1{L+\varepsilon}\, |y-z|\,.
\end{align}
Since $\varphi$ is $L$-biLipschitz by assumption, and every $\varphi_m$ is $(L+\tilde\varepsilon)$-biLipschitz by Step~I, there is nothing to prove unless $y \in (x_m^-,\, x_m^+)$ and $z\in (x_n^-,x_n^+)$ for some $m<n$, being both the intervals $J_m$ and $J_n$ $\tilde\varepsilon$-admissible. In this case, the first inequality in~(\ref{doubleineq}) comes directly by the triangular inequality, being
\[\begin{split}
|\varphi_{\varepsilon}(y) - \varphi_{\varepsilon}(z)| &\leq |\varphi_{\varepsilon}(y) - \varphi_{\varepsilon}(x_{m}^{+})| + |\varphi_{\varepsilon}(x_{m}^{+}) - \varphi_{\varepsilon}(x_{n}^{-})| + |\varphi_{\varepsilon}(x_{n}^{-}) - \varphi_{\varepsilon}(z)| \\
&= |\varphi_m(y) - \varphi_m(x_{m}^{+})| + |\varphi(x_{m}^{+}) - \varphi(x_{n}^{-})| + |\varphi_n(x_{n}^{-}) - \varphi_n(z)|\\
&\leq (L+\tilde\varepsilon) | y- z |
\leq (L+\varepsilon) | y- z |\,.
\end{split}\]
To show the other inequality, it is convenient to distinguish two subcases, namely, whether $y$ and $z$ are very close, or not. More precisely, let us first assume that $z<x_m + 1/(2N\tilde\varepsilon)<x_m+\bar\ell(x_m)/\tilde\varepsilon$; in this case, by Lemma~\ref{Lebesgue} we know that the angle $\theta=\angle{\varphi_{\varepsilon}(y)}{\varphi_{\varepsilon}(x_{m}^{+})}{\varphi_{\varepsilon}(z)}$ satisfies $\theta> \pi-2\tilde\varepsilon$, so that as soon as $\tilde\varepsilon=\tilde\varepsilon(\varepsilon,L)$ is small enough we have
\[
|\varphi_{\varepsilon}(y) - \varphi_{\varepsilon}(z)| \geq \cos(\tilde\varepsilon)\big( |\varphi_{\varepsilon}(y) - \varphi_{\varepsilon}(x_{m}^{+})| + |\varphi_{\varepsilon}(x_{m}^{+}) - \varphi_{\varepsilon}(z)|\big)
\geq \frac{\cos(\tilde\varepsilon)}{L+\tilde\varepsilon}\, | y- z |
\geq \frac 1{L+\varepsilon}\, | y- z |\,.
\]
Finally, if $z\geq x_m+1/(2N\tilde\varepsilon)$, then of course $|y-z|\geq 1/(3N\tilde\varepsilon)$. As a consequence, since by~(\ref{betterest}) we have $\|\varphi-\varphi_\varepsilon\|_{L^\infty}<3L/N$, we get
\[
\frac{|\varphi_{\varepsilon}(y) - \varphi_{\varepsilon}(z)|}{| y-z |}
\geq \frac{|\varphi(y) - \varphi(z)|}{| y-z |} - \frac{2 \|\varphi-\varphi_\varepsilon\|_{L^\infty}}{| y-z |}
\geq \frac 1L - 18L\tilde\varepsilon \geq \frac 1{L+\varepsilon}\,,
\]
where the last inequality is again true for a suitable choice of $\tilde\varepsilon=\tilde\varepsilon(\varepsilon,L)$. The second inequality in~(\ref{doubleineq}) is thus proved in any case, and the proof is concluded.
\end{proof}
\begin{remark}\label{C1}
Notice that, if the function $\varphi$ is ${\rm C}^1$ up to the boundary on the interval $[0,1]$, then Lemma~\ref{Lebesgue} can be applied to any point of $[0,1]$, thus by a trivial compactness argument the proof of Proposition~\ref{step1} can be modified to get an $(L+\varepsilon)$-biLipschitz approximation of $\varphi$ which is finitely piecewise linear on the whole $[0,1]$.
\end{remark}
\section{The ``non Lebesgue intervals''\label{sect3}}
In this section we show that any $L$-biLipschitz function $\varphi$ can be modified inside any small interval $(a,b)$, shrinking a little bit this interval, becoming ${\rm C}^1$ there, and remaining globally $L$-biLipschitz. In the next section we will apply this result to the ``non Lebesgue intervals'', that is, the intervals which we were not able to treat in the last section. The main aim of the section is to prove the following result.
\begin{prop}\label{nLI}
Let $\varphi:[0,C]\to \mathbb R^2$ be an $L$-biLipschitz function, let $[a,b]\subseteq [0,C]$ be a given interval, and suppose that for some $\varepsilon>0$ the function $\varphi$ is linear on $(a-\varepsilon,a)\cap [0,C]$ and on $(b,b+\varepsilon)\cap [0,C]$, with $|\varphi'|=L$ on both these intervals. Then, there exists $a+(b-a)/L^2\leq b'\leq b$ and an $L$-biLipschitz function $\psi:[0,C-(b-b')]\to \mathbb R^2$ which is ${\rm C}^1$ on $[a,b']$ and satisfies
\begin{equation}\label{propprel}\left\{
\begin{array}{ll}
\psi(t) = \varphi(t) &\hbox{for every $0\leq t\leq a$}\,,\\
\psi(t) = \varphi(t+b-b') \qquad &\hbox{for every $b'\leq t\leq C-(b-b')$}\,.
\end{array}
\right.\end{equation}
\end{prop}
To obtain this result, the following two definitions will be useful.
\begin{definition}[Fast and short functions]\label{deffs}
Let $\varphi:[0,C]\to\mathbb R^2$ be an $L$-biLipschitz function and $[a,b]\subseteq [0,C]$ be a given interval. We say that a function $\psi:[0,C-(b-b')]\to \mathbb R^2$ is \emph{fast on $[a,b]$} if $a+(b-a)/L^2\leq b'\leq b$, $\psi$ satisfies~(\ref{propprel}),
\begin{align}\label{mildbiLip}
\frac 1L\, |z-y| \leq |\psi(z)-\psi(y)| \leq L |z-y|
&& \forall\, y \in [a,b']\,, z \notin [a,b'] \,,
\end{align}
and $|\psi'|\equiv L$ on $[a,b']$. Moreover, any $\psi$ which minimizes the value of $b'$ among all the functions fast on $[a,b]$, is said to be \emph{short on $[a,b]$}.
\end{definition}
In words, a ``fast'' function is a function which connects $\varphi(a)$ with $\varphi(b)$ always moving at maximal speed, and satisfying~(\ref{mildbiLip}), while a ``short'' function is the shortest possible fast function. Let us immediately make a very simple observation, which we will use often later.
\begin{lemma}\label{ifnotshort}
Let $\varphi:[0,C]\to\mathbb R^2$ be an $L$-biLipschitz function, and let $\psi:[0,C-(b-b')]\to\mathbb R^2$ be short on some interval $[a,b]\subseteq [0,C]$. Let also $a\leq r < s \leq b'$, and assume that $\psi$ is not a straight line between $\psi(r)$ and $\psi(s)$. Then, the inverse $L$-Lipschitz property for the function $\psi_{rs}^+$ fails for some $p\notin [a,b'-(s-s^+)]$ and $q\in (r,s^+)$, where $\psi_{rs}^+$ and $s^+$ are as in Definition~\ref{varphist}.
\end{lemma}
\begin{proof}
Let us consider the function $\psi_{rs}^+:[0,C-(b-b'')]\to \mathbb R^2$, with $b''=b'-(s-s^+)$, which of course satisfies~(\ref{propprel}). Since $\psi$ is not a straight line between $\psi(r)$ and $\psi(s)$, we have that $b''<b'$ and then, since $\psi$ is short on $(a,b)$, by definition we get that $\psi_{rs}^+$ cannot be fast on $(a,b)$. As a consequence, recalling~(\ref{mildbiLip}), we know that there must be some $p\notin [a,b'']$ and some $q\in [a,b'']$ such that the $L$-biLipschitz property for $\psi_{rs}^+$ fails at $p$ and $q$. However, we know that $\big|(\psi_{rs}^+)'\big|=L$ in $(a,b'')$, while outside $\big|(\psi_{rs}^+)'\big|\leq L$ since $\psi_{rs}^+$ coincides with $\varphi$ up to a translation of the variable, and $\varphi$ is $L$-biLipschitz. Thus, the $L$-Lipschitz property for $\psi_{rs}^+$ cannot fail, and we realize that the inverse $L$-Lipschitz property must fail at $p$ and $q$. By symmetry, we can also assume that $p<a$; hence, if $a\leq q \leq r$, then
\[
\frac{|\psi_{rs}^+(q)-\psi_{rs}^+(p)|}{q-p} =
\frac{|\psi(q)-\psi(p)|}{q-p} \geq \frac 1L\,,
\]
because the function $\psi$ is short and then in particular it satisfies~(\ref{mildbiLip}). Instead, if $s^+\leq q \leq b''$, then we have
\[\begin{split}
\frac{|\psi_{rs}^+(q)-\psi_{rs}^+(p)|}{q-p} &=
\frac{|\psi(q+(s-s^+))-\psi(p)|}{q-p}
=\frac{|\psi(q+(s-s^+))-\psi(p)|}{q+(s-s^+)-p}\,\frac{q-p+(s-s^+)}{q-p}\\
&\geq\frac{|\psi(q+(s-s^+))-\psi(p)|}{q+(s-s^+)-p}
\geq \frac 1L\,.
\end{split}\]
As a consequence, we obtain that $q$ must be in $(r,s^+)$, and the proof is concluded.
\end{proof}
Our next result tells that a short function always exists, and it is even $L$-biLipschitz: notice that this is not guaranteed by~(\ref{mildbiLip}), since there we check only some pairs $(y,z)$, namely, those for which $y$ is inside the interval $(a,b')$ and $z$ is outside it.
\begin{lemma}\label{short->biLip}
Let $\varphi:[0,C]\to \mathbb R^2$ be an $L$-biLipschitz function, and let $[a,b]\subseteq [0,C]$ be a given interval. Then, there exists a function $\psi$ short on $[a,b]$, and any such function is $L$-biLipschitz.
\end{lemma}
\begin{proof}
First of all, let us observe that the set of the fast functions is not empty. Indeed, the function $\varphi$ itself, reparametrized at speed $L$ in $(a,b)$, is fast: more precisely, let us set
\[
b' = a+ \frac{{\mathcal H}^1\big(\arc{\varphi(a)\varphi(b)}\big)}L\,,
\]
let $\sigma:[0,C]\to [0,C-(b-b')]$ be the one-to-one function given by
\[
\sigma(t) = \left\{
\begin{array}{ll}
t &\forall\, 0\leq t\leq a\,,\\
\begin{aligned} a+\frac{{\mathcal H}^1\big(\arc{\varphi(a)\varphi(t)}\big)}L \end{aligned} \quad &\forall\,a<t<b\,,\\
t-(b-b') & \forall\, b\leq t\leq C\,,
\end{array}\right.
\]
and set $\psi_1$ as $\psi_1(\sigma(t))=\varphi(t)$. We claim that $\psi_1$ is a fast function on $[a,b]$: everything is obvious by construction except the validity of~(\ref{mildbiLip}). But in fact, let $y\in (a,b')$ and $z>b'$ (if $z<a$, the very same argument applies). Since $|\psi_1'(t)|=L$ for $t\in (y,b')$, while for $b'<t<z$ one has $|\psi_1'(t)|=|\varphi'(t+b-b')|\leq L$ because $\varphi$ is $L$-biLipschitz, we get immediately the validity of the second inequality. Concerning the first one, we have just to recall that $|\sigma'|\leq 1$, so that
\[
b'-y =\sigma(b) - \sigma\big(\sigma^{-1}(y)\big)\leq b - \sigma^{-1}(y)\,,
\]
and then we directly get
\[
\frac{|\psi_1(z)-\psi_1(y)|}{|z-y|}=
\frac{|\varphi(z-b'+b)-\varphi(\sigma^{-1}(y))|}{z-b'+b'-y}
\geq \frac{|\varphi(z-b'+b)-\varphi(\sigma^{-1}(y))|}{z-b'+b-\sigma^{-1}(y)}\geq \frac 1L\,
\]
where in the last inequality we have used the bi-Lipschitz property of $\varphi$. So, also the first inequality in~(\ref{mildbiLip}) is proved and thus the claim is established.\par
To get the existence of a short function, it is enough to recall that all the fast functions are uniformly continuous on uniformly bounded intervals; thus, the existence follows directly from the Ascoli--Arzel\`a Theorem and the fact that any uniform limit of fast functions is also fast.\par
To conclude, we take a short function $\psi$ on $[a,b]$, and we have to show that $\psi$ is $L$-biLipschitz. We have already noticed that $|\psi'|\leq L$, so the Lipschitz property is obvious and we only have to care about the inverse Lipschitz property. To do so, let us take $y<z \in [0,C-(b-b')]$. If $y$ and $z$ are both smaller than $a$, or both larger than $b'$, this comes directly by the inverse Lipschitz property of $\varphi$; if one between $y$ and $z$ is smaller than $a$, and the other is larger than $b'$, the same argument applies since
\[
|\psi(z)-\psi(y)| = \big|\varphi(z+b-b') - \varphi(y)\big| \geq \frac 1L\, \big|(z+b-b')-y\big| \geq \frac 1L\, |z-y|\,;
\]
if exactly one between $y$ and $z$ is in $[a,b']$, the inequality is ensured by~(\ref{mildbiLip}). Summarizing, the only situation left to prove is the case $a<y<z<b'$.\par
Let us assume, by contradiction, that there exists $a<r<s<b'$ such that
\begin{equation}\label{contr1}
|\psi(s)-\psi(r)| < \frac 1L\, |s-r|\,,
\end{equation}
and consider the function $\psi_{rs}^+:[0,C-(b-b'')]\to \mathbb R^2$ given by Definition~\ref{varphist}. Notice that $\psi$ cannot be a straight line between $\psi(r)$ and $\psi(s)$, by~(\ref{contr1}) and the fact that $|\psi'|=L$ on $(a,b')$. Thus, Lemma~\ref{ifnotshort} ensures the existence of some $p\notin [a,b'']$ (and, without loss of generality, we can think $p<a$) and $q\in (r,s^+)$ such that
\begin{equation}\label{contr2}
|\psi_{rs}^+(q)-\psi_{rs}^+(p)| < \frac 1L\, (q-p)\,.
\end{equation}
Finally, making use of the validity of~(\ref{mildbiLip}) for $\psi$, together with~(\ref{contr1}) and~(\ref{contr2}), we get
\[\begin{split}
\frac 1L\, |s-p| &\leq |\psi(s)-\psi(p)|
\leq |\psi(s)-\psi_{rs}^+(q)| + |\psi_{rs}^+(q)-\psi(p)|\\
&= |\psi(s)-\psi(r)| - |\psi_{rs}^+(q)-\psi_{rs}^+(r)| + |\psi_{rs}^+(q)-\psi_{rs}^+(p)|
<\frac 1L\, (s-r) - L(q-r) + \frac 1L\, (q-p)\\
&\leq \frac 1L |s-p|\,,
\end{split}\]
and this contradiction shows that $\psi$ is $L$-biLipschitz, concluding the proof.
\end{proof}
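The reparametrization $\sigma$ used in the proof above is easy to realize numerically; a Python sketch on a polyline sampled from $\varphi$ between $a$ and $b$:
\begin{verbatim}
import numpy as np

def sigma_nodes(points, times, L):
    # points[k] = phi(times[k]); return sigma(times[k]) = times[0]
    # plus the arclength up to points[k], divided by L, so that the
    # re-parametrized curve moves at constant speed L. The last
    # entry of the returned array is the new endpoint b'.
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(steps)])
    return times[0] + arclen / L
\end{verbatim}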
Keep in mind that we aim to prove Proposition~\ref{nLI}, that is, we want to find some $L$-biLipschitz function $\psi$ which satisfies~(\ref{propprel}) and which is ${\rm C}^1$ on $[a,b']$. By definition, any function fast in $[a,b]$ already satisfies~(\ref{propprel}), and the lemma above ensures that any function short on $[a,b]$ is also $L$-biLipschitz. We will then get our proof once we show that any short function is necessarily ${\rm C}^1$ on $[a,b']$. To do so, we start with a couple of preliminary geometric estimates.
\begin{lemma}\label{3.4}
For all small constants $\ell$ and $\eta$ and for every $L\geq 1$, there exists $\bar\delta(\ell,\eta,L)\ll \ell$ satisfying the following properties. Let $P,\,Q,\,S$ be three points in $\mathbb R^2$ such that $\segm{PQ}\geq\ell/2$ and $\delta=\segm{QS}\leq\bar\delta$. Call also $\theta,\,\theta',\,\nu\in\mathbb S^1$ the directions of the segments $PQ$, $PS$ and $QS$ respectively. Then the following hold true:
\begin{gather}
|\theta-\theta'| \leq \frac \eta{L^2}\,,\label{prop1}\\
\theta\cdot \nu - \frac\eta{L^2} \leq \frac{\segm{PS}-\segm{PQ}}\delta \leq \theta\cdot \nu + \frac\eta{L^2}\,.\label{prop2}
\end{gather}
\end{lemma}
\begin{proof}
Once $\ell$, $\eta$ and $L$ are given, the existence of some $\bar\delta$ satisfying~(\ref{prop1}) is immediate by continuity; we will show that the same choice of $\bar\delta$ also gives~(\ref{prop2}).\par
Let us call $\tau:[0,\delta]\to \mathbb R^2$ the function given by $\tau(t)=Q + t\nu$, so that $\tau(0)=Q$ and $\tau(\delta)=S$; call also $\theta(t)$ the direction of the segment $P\tau(t)$, and observe that $|\theta(t)-\theta|\leq \eta/L^2$ by~(\ref{prop1}) applied to the triple $(P,Q,\tau(t))$. Hence,
\[
\segm{PS}-\segm{PQ}=\int_0^\delta \frac{d}{dt}\, \Big( \segm{P\tau(t)}\Big) \, dt
=\int_0^\delta \theta(t)\cdot \nu\,dt
= \delta \theta\cdot \nu + \int_0^\delta (\theta(t)-\theta)\cdot \nu\,dt\,,
\]
and the modulus of the latter integral is smaller than $\delta\eta/L^2$, hence~(\ref{prop2}) follows.
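Here, for completeness, we have used the elementary identity
\[
\frac{d}{dt}\, \Big( \segm{P\tau(t)}\Big) = \frac{\tau(t)-P}{|\tau(t)-P|}\cdot \tau'(t) = \theta(t)\cdot \nu\,,
\]
which holds true since $\tau'(t)=\nu$ and since $\theta(t)$ is precisely the unit vector pointing from $P$ towards $\tau(t)$.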
\end{proof}
\begin{lemma}\label{smallone}
Let $\ell$, $\eta$ and $L$ be fixed, let $\varphi:[0,C]\to\mathbb R^2$ be an $L$-biLipschitz function, and take three points $P=\varphi(p)$, $Q=\varphi(q)$, $R=\varphi(r)$, with $p<q<r$, satisfying $\segm{PQ}\geq \ell$ and $\delta:=\segm{QR}\leq \bar\delta(\ell,\eta,L)$. Assume that the function $\varphi^+_{qr}:[0,C-(r-r^+)]\to \mathbb R^2$ of Definition~\ref{varphist} does not satisfy the inverse $L$-Lipschitz property at the pair $(p,t)$ with some $q\leq t\leq r^+$. Then,
\begin{gather}
\frac 1{L^2}-2\,\frac\eta{L^2} \leq \theta\cdot \nu \leq \frac 1{L^2}+ 2\,\frac\eta{L^2}\,,\label{prop3}\\
\frac{\segm{PQ}}{|q-p|} \leq \frac 1 L + \frac{3\eta\delta}{|q-p| L^2}\,,\label{prop4}\\
\frac{{\mathcal H}^1(\arc{QR})}{\segm{QR}} \leq 1+6\eta\,,\label{prop5}
\end{gather}
where $\theta$ and $\nu$ are the directions of the segments $PQ$ and $QR$ respectively.
\end{lemma}
\begin{proof}
First of all, let us call for brevity $\sigma:=\segm{Q\varphi^+_{qr}(t)}$ and $Q_\sigma=\varphi^+_{qr}(t)$. The failure of the inverse $L$-Lipschitz property at $p$ and $t$, together with~(\ref{prop2}) applied with $S=Q_\sigma$ and with the fact that $\varphi$, instead, is $L$-biLipschitz, gives
\begin{equation}\label{ty}
\frac\sigma{L^2}+\frac{q-p}L=\frac{t-p}L>\segm{\varphi^+_{qr}(p)\varphi^+_{qr}(t)}=\segm{PQ_\sigma}
\geq\segm{PQ}+\sigma\bigg(\theta\cdot \nu - \frac\eta{L^2}\bigg)
\geq\frac{q-p}L+\sigma\bigg(\theta\cdot \nu - \frac\eta{L^2}\bigg)\,,
\end{equation}
which can be rewritten as
\[
\theta\cdot \nu < \frac 1{L^2} + \frac\eta{L^2}\,,
\]
so that one inequality in~(\ref{prop3}) is already proved.\par
Notice now that $\segm{PQ_\sigma}\geq \ell-\bar\delta>\ell/2$, so we can apply~(\ref{prop2}) also with $Q=Q_\sigma$ and $S=R$. By~(\ref{ty}) and the biLipschitz property of $\varphi$ again, we then have
\[\begin{split}
\frac {\sigma-L(r^+-q)}{L^2}+\frac{r-p}L
&\geq\frac {\sigma-L(r-q)}{L^2}+\frac{r-p}L
=\frac \sigma{L^2}+\frac{q-p}L>\segm{PQ_\sigma}\\
&\geq\segm{PR}-(\segm{QR}-\sigma)\bigg(\tilde\theta\cdot \nu + \frac\eta{L^2}\bigg)\\
&\geq \frac{r-p}L+(\sigma-L(r^+-q))\bigg(\tilde\theta\cdot \nu + \frac\eta{L^2}\bigg)\,,
\end{split}\]
where $\tilde\theta$ is the direction of $PQ_\sigma$. Since $L(r^+-q)=\segm{QR}\geq \sigma$, we deduce
\[
\tilde\theta\cdot\nu > \frac 1{L^2} -\frac\eta{L^2}\,.
\]
And since $|\tilde\theta-\theta|<\eta/L^2$ by~(\ref{prop1}), we conclude the validity of~(\ref{prop3}).\par
Property~(\ref{prop4}) can be directly deduced from~(\ref{ty}) and~(\ref{prop3}), since
\[
\frac{\segm{PQ}}{|q-p|} < \frac 1 L +\frac\sigma{|q-p|}\,\bigg(\frac 1{L^2} - \theta\cdot \nu + \frac\eta{L^2}\bigg)
\leq \frac 1 L +\frac{3\eta\sigma}{|q-p|L^2}
\leq \frac 1 L +\frac{3\eta\delta}{|q-p|L^2}\,.
\]
And finally, to get property~(\ref{prop5}), first we use that $\varphi$ is $L$-biLipschitz to get
\begin{equation}\label{quasopra}
\segm{PR}\geq \frac{r-p}{L} = \frac{q-p}L + \frac{r-q}L \geq \frac{q-p}L + \frac{{\mathcal H}^1(\arc{QR})}{L^2}\,,
\end{equation}
and then we use~(\ref{prop2}), (\ref{prop4}) and~(\ref{prop3}) to get
\[
\segm{PR} \leq \segm{PQ} + \delta \bigg( \theta\cdot \nu + \frac \eta{L^2}\bigg)
\leq \frac {q-p} L + \frac\delta{L^2} ( 1+ 6\eta)\,,
\]
which inserted in~(\ref{quasopra}) gives~(\ref{prop5}).
\end{proof}
Let us now present the main technical tool of this section: thanks to this result, we will be able to prove the regularity of any short map.
\begin{lemma}\label{bigone}
Let $\ell$, $\eta$ and $L$ be fixed, let $\varphi:[0,C]\to\mathbb R^2$ be an $L$-biLipschitz function with $|\varphi'|\equiv L$ in $(q,r)$, and take five points $P=\varphi(p)$, $Q=\varphi(q)$, $R=\varphi(r)$, $Q'=\varphi(q')$ and $Q''=\varphi(q'')$ with $p<q<q'<q''<r$, satisfying $\segm{PQ}\geq 2\ell$ and $\delta:=\segm{QR}\leq \bar\delta(\ell,\eta,L)$. Assume also that both $Q'$ and $Q''$ have distance at least $\delta/3$ from each of $Q$ and $R$, that $\eta$ is small with respect to $L-1$ and $1/L$, and that~(\ref{prop5}) holds true. Then, if the inverse $L$-Lipschitz property for the function $\varphi_{q'q''}^+$ is not satisfied by $P$ and some point in the segment $Q'Q''$, one has
\begin{equation}\label{oscest}
|\nu-\nu'| \leq \frac 12\, \min \bigg\{\frac 1{L^2}, 1-\frac 1{L^2}\bigg\}\qquad \Longrightarrow \qquad |\nu-\nu'| \leq 15\sqrt\eta\,,
\end{equation}
where $\nu$ and $\nu'$ are the directions of the segments $QR$ and $Q'Q''$ respectively.
\end{lemma}
\begin{proof}
First of all, we use property~(\ref{prop5}) for $Q$ and $R$, which is valid by assumption: we immediately get that every point in the curve $\arc{QR}$, hence in particular both $Q'$ and $Q''$, has distance less than $2\delta\sqrt\eta$ from the segment $QR$. Since $\segm{QQ'}\geq \delta/3$ and $\segm{Q''R}\geq \delta/3$, we deduce that
\begin{align}\label{almuno}
|\nu-\tilde\nu| \leq 6\sqrt\eta\,, && \segm{Q'Q''}\leq \frac \delta 2\leq \frac 32\, \segm{QQ'}\,,
\end{align}
where $\tilde \nu$ is the direction of $QQ'$. Moreover, the validity of~(\ref{prop5}) also implies that
\[
\segm{QR}(1+6\eta) \geq {\mathcal H}^1(\arc{QR})
= {\mathcal H}^1(\arc{QQ'})+{\mathcal H}^1(\arc{Q'R})
\geq {\mathcal H}^1(\arc{QQ'})+\segm{Q'R}
\geq {\mathcal H}^1(\arc{QQ'})+\segm{QR}-\segm{QQ'}\,,
\]
which, by the assumptions, in turn implies
\begin{equation}\label{1207}
{\mathcal H}^1(\arc{QQ'}) \leq 6\eta \segm{QR} + \segm{QQ'} \leq \segm{QQ'}(1+18\eta)\,.
\end{equation}
Let us now use the fact that the inverse $L$-Lipschitz property for the function $\varphi_{q'q''}^+$ fails for $P$ and some point in $Q'Q''$. As a consequence, we can apply Lemma~\ref{smallone} to the points $P,\, Q'$ and $Q''$, so by~(\ref{prop3}), calling $\theta'$ the direction of $PQ'$, we know
\begin{equation}\label{alkn}
\Big|\theta' \cdot \nu' - \frac 1{L^2}\Big| \leq 2\,\frac\eta{L^2}\,.
\end{equation}
Let us now assume that
\begin{equation}\label{soquesto}
|\nu-\nu'| \leq \frac 12\, \min\bigg\{\frac 1{L^2}, 1-\frac 1{L^2}\bigg\}\,,
\end{equation}
so that the proof will be concluded once we show that
\begin{equation}\label{bastaquesto}
|\nu-\nu'| \leq 15\sqrt\eta\,.
\end{equation}
First of all, putting together~(\ref{alkn}), (\ref{soquesto}) and~(\ref{almuno}), a simple geometric argument shows that the directions $\nu$ and $\tilde\nu$ are in the same quadrant with respect to $\theta'$ as soon as $\eta$ is small enough; more precisely, this holds true as soon as $\eta$ is much smaller than both $L-1$ and $1/L$ (keep in mind that $L>1$). As a consequence, a trigonometric argument immediately gives
\[
\big|\theta' \cdot \tilde\nu - \theta' \cdot \nu'\big| \geq \frac{|\tilde\nu-\nu'|^2}3\,.
\]
Just to fix ideas, we can assume that
\begin{equation}\label{siqu}
\theta' \cdot \tilde\nu - \theta' \cdot \nu' \geq \frac{|\tilde\nu-\nu'|^2}3\,,
\end{equation}
otherwise the argument below about $\segm{QQ'}$ has to be replaced by a completely similar argument about $\segm{RQ''}$.\par
Let us now collect all the information that we have: by (\ref{prop2}) applied to $P,\,Q'$ and $Q$, by~(\ref{prop4}) applied to $P,\,Q'$ and $Q''$, and by~(\ref{almuno}), we get
\[\begin{split}
\frac{q-p}L &\leq \segm{PQ} \leq \segm{PQ'}+\segm{QQ'} \bigg( \theta' \cdot (-\tilde\nu)+\frac \eta{L^2}\bigg)\\
&\leq \frac{q'-p}L + \frac{3 \eta \segm{Q'Q''}}{L^2} + \segm{QQ'} \bigg(\frac \eta{L^2} -\theta' \cdot \tilde\nu\bigg)
\leq \frac{q'-p}L + \frac{9 \eta \segm{QQ'}}{2L^2} + \segm{QQ'} \bigg(\frac \eta{L^2} -\theta' \cdot \tilde\nu\bigg)\\
&\leq\frac{q'-p}L + \segm{QQ'} \bigg(\frac{6\eta}{L^2} -\theta' \cdot \tilde\nu\bigg)\,.
\end{split}\]
This, also keeping in mind the fact that $|\varphi'|\equiv L$ in $(q,r)$ and~(\ref{1207}), implies
\[
\segm{QQ'} \bigg(\theta' \cdot \tilde\nu -\frac{6\eta}{L^2} \bigg)\leq \frac{q'-q}L
=\frac{{\mathcal H}^1(\arc{QQ'})}{L^2}
\leq \frac{\segm{QQ'}(1+18\eta)}{L^2}\,,
\]
which finally gives
\[
\theta' \cdot \tilde\nu \leq \frac 1{L^2} + \frac{24\eta}{L^2}\,.
\]
Inserting now this estimate and~(\ref{alkn}) in~(\ref{siqu}), we get
\[
|\tilde\nu-\nu'| \leq \frac{9\sqrt\eta}L < 9\sqrt\eta\,,
\]
which together with~(\ref{almuno}) finally gives~(\ref{bastaquesto}).
\end{proof}
Even though the estimate~(\ref{oscest}) is quite obscure, it gives an important piece of information: if the directions $\nu$ and $\nu'$ are not too far from each other, then they must actually be very close. We can now see that this implies the regularity of a short map $\varphi$. First of all, we prove the internal regularity in the open segment $(a,b)$.
\begin{lemma}\label{shortonC1}
Let $\varphi$ be an $L$-biLipschitz function, short on $[a,b]$. Then, $\varphi$ is of class ${\rm C}^1$ in the open interval $(a,b)$.
\end{lemma}
\begin{proof}
Let us take $a<a'<b'<b$, and let us fix some $\ell\ll 1$ such that
\[
\segm{PQ}>2\ell \quad \forall\, P=\varphi(p),\, Q=\varphi(q), \, p\notin [a,b],\, q\in (a',b')\,.
\]
We aim to show that $\varphi$ is of class ${\rm C}^1$ in $(a',b')$, and this will of course give the conclusion, since $a'$ and $b'$ are arbitrary.\par
Let us then fix a point $S=\varphi(s)$ with $s\in(a',b')$, and let also $\eta\ll 1$ be given. Define $\bar\delta=\bar\delta(\ell,\eta,L)$ according to Lemma~\ref{3.4}: we claim that there exists some direction $\nu\in\mathbb S^1$ for which
\begin{equation}\label{tgt}
\bigg|\frac{\varphi'(t)}L - \nu \bigg| \leq 16\sqrt\eta \qquad \forall\, t\in (a',b'):\, |t-s|\leq \frac{\bar\delta}{12L}\,.
\end{equation}
Since $s$ and $\eta$ are arbitrary, of course this will immediately imply the required ${\rm C}^1$ regularity of $\varphi$ in $(a',b')$.\par
\begin{figure}[thbp]
\input{figura.pdf_t}
\caption{Situation in Lemma~\ref{shortonC1}}\label{figure}
\end{figure}
Let us now take $Q=\varphi(q)$ and $R=\varphi(r)$ in such a way that $\segm{QR}=\bar\delta$ and that $s=(q+r)/2$, and let us call $\nu$ the direction of the segment $QR$: Figure~\ref{figure} depicts all the involved points. Notice that, by construction and since $\bar\delta\ll \ell$, both $q$ and $r$ are in $(a,b)$, and both $\segm{PQ}$ and $\segm{PR}$ are larger than $\ell$ whenever $P=\varphi(p)$ for some $p\notin [a,b]$. If $\varphi$ is linear on $QR$, then of course $\varphi'$ is constantly equal to $L\nu$ in $(q,r)$, thus~(\ref{tgt}) is already established. Otherwise, Lemma~\ref{ifnotshort} says that there must be some $p\notin(a,b-(r-r^+))$ such that the inverse $L$-Lipschitz property for $\varphi_{qr}^+$ fails at $p$ and at some point in $(q,r^+)$. Thus, we can apply Lemma~\ref{smallone} and in particular~(\ref{prop5}) is true.\par
As noticed in the proof of Lemma~\ref{bigone}, this implies that the whole curve $\arc{QR}$ has distance less than $2\bar\delta\sqrt\eta$ from the segment $QR$. As a consequence, if we call $Q^+$ and $R^-$ the first and the last point of $\arc{QR}$ having distance $\bar\delta/3$ from $Q$ and from $R$ respectively, we clearly have
\begin{equation}\label{put1}
|\tilde\nu-\nu| \leq \arctan(12\sqrt\eta) < 16\sqrt\eta\,,
\end{equation}
where $\tilde\nu$ is the direction of the segment $Q^+R^-$. Let us assume now that~(\ref{tgt}) is false, thus there exists some $t$ with $|t-s|\leq \bar\delta/12L$ such that
\begin{equation}\label{put2}
\bigg|\frac{\varphi'(t)}L - \nu \bigg| > 16\sqrt\eta\,.
\end{equation}
Observe that, by construction, $t$ must belong to the interval $(q^+,r^-)$. By continuity, (\ref{put1}) and~(\ref{put2}) imply the existence of $q^+<q'<t<q''<r^-$ such that
\begin{equation}\label{put3}
|\nu' - \nu | =16\sqrt\eta\,,
\end{equation}
where $\nu'$ is the direction of the segment $Q'Q''$. The function $\varphi$ cannot be a segment between $Q'$ and $Q''$, because otherwise we would have $\varphi'(t) = L\nu'$, contradicting~(\ref{put2}) and~(\ref{put3}). Again by Lemma~\ref{ifnotshort}, we deduce the existence of some new point $\widetilde P=\varphi(\tilde p)$ with $\tilde p \notin [a,b-(q''^+-q'')]$ such that the inverse $L$-Lipschitz property for $\varphi^+_{q'q''}$ fails at $\tilde p$ and some point between $q'$ and $q''^+$ (notice that there is no reason why this point $\widetilde P$ should coincide with the point $P=\varphi(p)$ from a few lines above).\par
We can then apply Lemma~\ref{bigone} with $P=\widetilde P$, and we get the validity of~(\ref{oscest}). Notice that, since $\eta$ has been taken arbitrarily small, by~(\ref{put3}) we can assume without loss of generality that
\[
|\nu-\nu'| = 16 \sqrt \eta \leq \frac 12\, \min \bigg\{\frac 1{L^2}, 1-\frac 1{L^2}\bigg\}\,.
\]
As a consequence, (\ref{oscest}) tells us that $|\nu-\nu'|\leq 15\sqrt\eta$, which clearly contradicts~(\ref{put3}). Therefore, the proof of~(\ref{tgt}) is complete and, as noticed above, the conclusion follows.
\end{proof}
Now, we can extend the regularity up to the endpoints of the interval $[a,b]$. We do this first for an interval compactly contained in $[0,C]$, and then for an interval reaching the boundary of $[0,C]$.
\begin{lemma}\label{rightleft}
Let $\varphi:[0,C]\to \mathbb R^2$ be an $L$-biLipschitz function, short on some $[a,b]\subset\subset [0,C]$, and assume that for some $\varepsilon\ll 1$ the function $\varphi$ is linear on $(a-\varepsilon,a)$ with $|\varphi'|\equiv L$. Then, $\varphi'$ is right-continuous at $a$.
\end{lemma}
\begin{proof}
Up to a rotation, we can also assume for simplicity that $\varphi$ is horizontal in $(a-\varepsilon,a)$ or, in other words, that $\varphi' = L{\rm e}_1$ in $(a-\varepsilon,a)$. Since by definition $|\varphi'|=L$ on the whole $(a,b)$, we have to find a direction $\bar\nu\in\mathbb S^1$ such that the directions of $\varphi'(t)$ converge to $\bar\nu$ when $t\searrow a$. First of all, let us define $\bar\theta = 2\arcsin (1/L^2)$, and notice that $\bar\theta$ converges to $\pi$ (resp., to $0$) if $L$ converges to $1$ (resp., to $+\infty$).
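For the reader's convenience, let us also record the elementary computation behind this definition, which will be useful in both steps below: given two unit vectors $\alpha_1,\,\alpha_2\in\mathbb S^1$ forming an angle $\gamma$, and given $x,\,y>0$, one has
\[
\frac{|x\alpha_1-y\alpha_2|^2}{(x+y)^2} = \frac{x^2+y^2-2xy\cos\gamma}{(x+y)^2} \geq \frac{1-\cos\gamma}2 = \sin^2 (\gamma/2)\,,
\]
with equality when $x=y$, since the difference between the two sides equals $(1+\cos\gamma)(x-y)^2/\big(2(x+y)^2\big)$. Hence, since the $L$-Lipschitz bound along two rays travelled at speed $L$ is always trivially true, a map moving at speed $L$ along two rays emanating from the same point in the directions $\alpha_1$ and $\alpha_2$ is $L$-biLipschitz if and only if $\sin(\gamma/2)\geq 1/L^2$, that is, if and only if $\gamma\geq\bar\theta$.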
\step{I}{For every $r\in (a,a+\varepsilon)$, the direction of the segment $AR$ is between $-(\pi-\bar\theta)$ and $\pi-\bar\theta$.}
Let us take a generic $r\in(a,a+\varepsilon)$, define $\sigma=\segm{AR}/L$, and notice that
\begin{equation}\label{nsd}
L(r-a) = {\mathcal H}^1(\arc{AR})\geq \segm{AR}\,, \qquad \hbox{thus} \qquad r-a\geq \sigma\,.
\end{equation}
Call now $P=\varphi(p)=\varphi(a-\sigma)$, and notice that $\segm{PA}=\segm{AR}$ by construction, since the above inequality in particular ensures that $\sigma<\varepsilon$. If we assume, by contradiction, that the direction of $AR$ is not between $-(\pi-\bar\theta)$ and $\pi-\bar\theta$, then in particular $\angle PAR<\bar\theta$. As a consequence, by~(\ref{nsd}) we have
\[
\segm{PR} < 2 \segm{AR}\sin (\bar\theta/2) = \frac{2\segm{AR}}{L^2} = \frac {2\sigma}L \leq \frac{r-p}L\,,
\]
and this gives a contradiction to the $L$-biLipschitz property of $\varphi$. Hence, the first step is concluded.
\step{II}{There exists $\bar\nu \in \big[-(\pi-\bar\theta),\pi-\bar\theta\big]$ such that the direction of $A\varphi(t)$ converges to $\bar\nu$ when $t\searrow a$.}
By compactness of $\mathbb S^1$, the directions of the segments $A\varphi(t)$ have at least one limit point for $t\searrow a$: the goal of this step is to show that there is actually only one such limit point.\par
Let us assume that there is more than one limiting direction: since the set of the limiting directions is clearly a connected subset of $\mathbb S^1$, which can only contain directions between $-(\pi -\bar\theta)$ and $\pi -\bar\theta$ by Step~I, we deduce in particular the existence of $\nu_1,\,\nu_2\in \mathbb S^1$, and of two sequences of points $R^i_n=\varphi(r^i_n)$ for $i\in \{1,\,2\}$ and $n\in\mathbb N$ satisfying
\begin{align*}
\nu_1,\, \nu_2 \in \big(-(\pi-\bar\theta),\pi-\bar\theta\big),\, &&
\nu_1 \neq \nu_2\,, &&
r^i_n \mathop{\searrow}\limits_{n\to \infty} a\,, &&
\frac{R^i_n-A}{|R^i_n-A|} = \nu_i\,.
\end{align*}
Let us now fix a constant $\ell$ much smaller than $L\varepsilon$. For every $\eta>0$, calling for brevity $r=r^1_n$ and $R=R^1_n$, as soon as $n$ is big enough we have that $\segm{AR}$ is smaller than $\bar\delta(\ell,\eta,L)$. As a consequence, since of course $\varphi$ is not linear between $a$ and $r$ but it is short on $(a,b)$, Lemma~\ref{ifnotshort} implies that the inverse $L$-Lipschitz property for the function $\varphi_{ar}^+$ must fail for some pair $P_n=\varphi^+_{ar}(p_n)$ and $S=\varphi^+_{ar}(s)$, where $p_n\notin [a,b-(r-r^+)]$ while $s\in (a,r^+)$. Notice now the following simple general trigonometric fact, which is exactly the computation recorded at the beginning of the proof: given two directions $\alpha_1,\,\alpha_2\in\mathbb S^1$, the map $\tau:\mathbb R\to\mathbb R^2$ defined as $\tau(x)=Lx\alpha_1$ for $x\geq 0$ and $\tau(x)=-Lx\alpha_2$ for $x<0$ is $L$-biLipschitz if and only if the angle between $\alpha_1$ and $\alpha_2$ is at least the angle $\bar\theta$ defined above. Thus, Step~I and the fact that $\nu_1\in \big(-(\pi-\bar\theta),\pi-\bar\theta\big)$ ensure that $p_n$ cannot belong to $(a-\varepsilon,a)$ if $r<a+\varepsilon$, which is of course true as soon as $n$ is big enough. As a consequence, we can apply Lemma~\ref{smallone} to the points $P=P_n,\, Q=A$ and $R$, and we get that
\begin{align*}
\bigg|\theta_n\cdot \nu_1 - \frac 1 {L^2} \bigg| \leq 2\,\frac\eta{L^2}\,, &&
\segm{AP_n} \leq \frac {|a-p_n|} L + \frac{3\eta\segm{AR}}{L^2}\,,
\end{align*}
where $\theta_n$ is the direction of the segment $P_nA$. If we now send $\eta\to 0$ and consequently $n\to \infty$, we find a point $P^1=\varphi(p^1)$ (which is a limit of some subsequence of the points $P_n$) such that, calling $\theta^1$ the direction of $P^1A$, we have
\begin{align}\label{titaglio}
\theta^1\cdot \nu_1 =\frac 1 {L^2}\,, &&
\segm{AP^1} =\frac {|a-p^1|} L\,.
\end{align}
The very same argument, using the direction $\nu_2$ in place of $\nu_1$, gives us of course another point $P^2=\varphi(p^2)$ satisfying
\begin{align}\label{ledita}
\theta^2\cdot \nu_2 =\frac 1 {L^2}\,, &&
\segm{AP^2} =\frac {|a-p^2|} L\,,
\end{align}
where $\theta^2$ is the direction of $P^2A$. Again recalling that the set of the limiting directions, among which we have chosen $\nu_1$ and $\nu_2$, is a connected subset of $\mathbb S^1$, from~(\ref{titaglio}), (\ref{ledita}) and the fact that $\nu_1\neq \nu_2$ we get that, up to swapping $\nu_1$ and $\nu_2$,
\begin{equation}\label{poverine}
\theta^1\cdot \nu_2 < \frac {1-\eta}{L^2}
\end{equation}
for some strictly positive constant $\eta$. Let us now again select a point $R=R^2_n$ with $n$ big enough so that $\segm{AR}\leq \bar\delta(\ell,\eta,L)$, and let us assume that $p^1<a$ (otherwise one just has to repeat the argument below swapping the roles of $A$ and $R$). Recalling that $\varphi$ is $L$-biLipschitz, so in particular the inverse $L$-Lipschitz property holds for the points $p^1$ and $r=r^2_n$, using again Lemma~\ref{3.4} with $P=P^1$, $Q=A$ and $S=R$, and by~(\ref{titaglio}) and~(\ref{poverine}), we have
\[\begin{split}
\frac{\segm{AR}}{L^2} &\leq \frac{r-a}L = \frac{r-p^1}L-\frac{a-p^1}L
\leq \segm{P^1R}-\frac{a-p^1}L
\leq \segm{P^1A} + \segm{AR}\bigg(\theta^1\cdot \nu_2 + \frac\eta{L^2}\bigg)-\frac{a-p^1}L \\
&= \segm{AR}\bigg(\theta^1\cdot \nu_2 + \frac\eta{L^2}\bigg)
<\frac{\segm{AR}}{L^2}\,,
\end{split}\]
and this contradiction shows the uniqueness of the limiting direction, hence this step is concluded.
\step{III}{Conclusion.}
We are now ready to conclude the proof. In Step~II we have already found a direction $\bar\nu$ such that
\[
\frac{A-\varphi(t)}{|A-\varphi(t)|} \to \bar\nu
\]
for $t\searrow a$. Hence, we only have to show that $\varphi'(t) \to L\bar\nu$ for $t\searrow a$: our argument will be very similar to that of Lemma~\ref{shortonC1}. Call again $\ell$ a constant much smaller than $L\varepsilon$, fix arbitrarily some $\eta\ll 1$, and consider the first portion of the curve $\varphi$, after $A$, of length $\bar\delta(\ell,\eta,L)$. We claim that for any point $\varphi(t)$ in this piece of curve one has
\begin{equation}\label{ultip}
|\varphi'(t) - L\bar\nu| \leq 16\sqrt\eta\,.
\end{equation}
Once we prove this, since $\eta$ is arbitrary the proof is concluded. Assume then the existence of some $t$ as before for which~(\ref{ultip}) is not satisfied, take a point $R=\varphi(r)$, with $r>a$, such that $\segm{AR}=2\segm{AT}$, where $T=\varphi(t)$, and take two more points $Q^+=\varphi(q^+)$ and $R^-=\varphi(r^-)$ with $a<q^+<r^-<r$ so that
\[
\segm{AQ^+} = \segm{R^-R} = \frac{\segm{AR}}3\,.
\]
The existence of $t$ implies that $\varphi$ is not linear between $A$ and $R$. Therefore, we can apply once again first Lemma~\ref{ifnotshort} and then Lemma~\ref{smallone} with $Q=A$, in particular getting the validity of~(\ref{prop5}). Exactly as in the proof of Lemma~\ref{shortonC1}, this implies that the direction $\nu$ of the segment $Q^+R^-$ satisfies
\[
|\nu-\bar\nu| < 16\sqrt\eta\,,
\]
hence by continuity we can find two points $Q'=\varphi(q')$ and $Q''=\varphi(q'')$ with $q^+<q'<q''<r^-$ such that $|\nu'-\bar\nu|=16\sqrt\eta$, where $\nu'$ is the direction of $Q'Q''$. And finally, the points $Q'$ and $Q''$ give a contradiction with~(\ref{oscest}) of Lemma~\ref{bigone}. This contradiction shows the validity of~(\ref{ultip}), and then the proof is concluded.
\end{proof}
\begin{lemma}\label{nowlast}
Let $\varphi:[0,C]\to \mathbb R^2$ be an $L$-biLipschitz function, short on some interval $[a,C]$. Then, $\varphi$ is of class ${\rm C}^1$ on the interval $(a,C]$.
\end{lemma}
\begin{proof}
The ${\rm C}^1$ regularity on the open interval $(a,C)$ has already been proved in Lemma~\ref{shortonC1}, thus we only have to take care of the situation near $b=C$. Our argument will be quite similar to what was already done in the proofs of Lemmas~\ref{bigone} and~\ref{shortonC1}, and is divided into two steps for simplicity.
\step{I}{The directions of the segments $QB$ converge to some $\nu\in \mathbb S^1$.}
First of all, we consider the segments $QB$, where as usual $Q=\varphi(q)$ and $B=\varphi(b)$. We aim to show that the directions of the segments $QB$ converge to some $\nu\in\mathbb S^1$ when $q\to b$. Suppose that this is false and notice that, by compactness, this means that the limit points of the directions of the segments $QB$ when $q\to b$ are a connected subset of $\mathbb S^1$ made of more than one point. We can then fix a distance $\ell$ such that $\segm{PB}\gg \ell$ for every $P=\varphi(p)$ with $p<a$, and $\eta$ much smaller than the diameter of the set of the limiting directions just mentioned. Let us now pick any $q<b$, with $q$ very close to $b$ so that $\segm{QB} \leq \bar\delta(\ell,\eta,L)$; the function $\varphi$ is of course not a segment between $q$ and $b$, thus by Lemma~\ref{ifnotshort} the function $\varphi^+_{qb}$ does not satisfy the inverse $L$-Lipschitz property at some pair $(p,t)$ with $p<a$ and $q<t<b$, so we can apply Lemma~\ref{smallone} and in particular we get
\begin{align}\label{punto1}
\Big|\theta\cdot \nu - \frac 1 {L^2}\Big| \leq 2\,\frac\eta{L^2}\,, &&
\frac{{\mathcal H}^1(\arc{QB})}{\segm{QB}} \leq 1+6\eta\,, &&
\segm{PQ} \leq \frac {q-p}L + \frac{3\eta\segm{QB}}{L^2}\,,
\end{align}
where $\theta$ and $\nu$ are the directions of the segments $PQ$ and $QB$ respectively. The very same argument can be applied to some other $Q'$ near $B$, getting another point $P'$ and
\begin{align}\label{punto2}
\Big|\theta'\cdot \nu' - \frac 1 {L^2}\Big| \leq 2\,\frac\eta{L^2}\,, &&
\frac{{\mathcal H}^1(\arc{Q'B})}{\segm{Q'B}} \leq 1+6\eta\,, &&
\segm{P'Q'} \leq \frac {q'-p'}L + \frac{3\eta\segm{Q'B}}{L^2}\,,
\end{align}
where $\theta'$ and $\nu'$ are the directions of $P'Q'$ and $Q'B$ respectively. Recall now that we are assuming that the limiting directions of the segments $QB$ form a nontrivial arc of $\mathbb S^1$: as a consequence, similarly to Step~II of the proof of Lemma~\ref{rightleft}, we can select two such directions $\nu,\,\nu'$, assume by symmetry that
\begin{equation}\label{punto2.5}
\theta' \cdot \nu > \frac 1{L^2} + 10\,\frac \eta {L^2}\,,
\end{equation}
and choose two points $Q,\, Q'$ corresponding to the directions $\nu$ and $\nu'$ in such a way that $b-q' \ll b-q$, so that ${\mathcal H}^1(\arc{QQ'})\approx {\mathcal H}^1(\arc{QB})$; then, also using~(\ref{punto1}), we get
\begin{align}\label{punto3}
\segm{Q'B} \leq \frac{\segm{QQ'}}3\,, &&
\segm{QQ'} \geq {\mathcal H}^1(\arc{QQ'})(1-6\eta)\,, &&
|\tilde\nu - \nu|\leq \frac \eta{L^2}\,,
\end{align}
where $\tilde\nu$ is the direction of $QQ'$. Using then the estimates~(\ref{punto3}), (\ref{punto2}) and~(\ref{punto2.5}), and applying Lemma~\ref{3.4} to the points $P'$, $Q'$ and $Q$, we obtain
\[\begin{split}
\frac {q'-p'}L + \frac{\eta\segm{QQ'}}{L^2} &\geq
\frac {q'-p'}L + \frac{3\eta\segm{Q'B}}{L^2} \geq \segm{P'Q'}
\geq \segm{P'Q} + \segm{QQ'} \bigg(\theta'\cdot \tilde\nu - \frac \eta{L^2} \bigg)\\
&\geq \frac{q-p'}L + \segm{QQ'} \bigg(\theta'\cdot \nu - 2\,\frac \eta{L^2} \bigg)
\geq \frac{q-p'}L + \segm{QQ'} \bigg(\frac 1{L^2}+ 8\,\frac \eta{L^2} \bigg)\,,
\end{split}\]
which implies, again recalling~(\ref{punto3}),
\[
\frac {q'-q}L \geq \segm{QQ'} \bigg(\frac 1{L^2}+ 7\,\frac \eta{L^2} \bigg)
\geq \frac{{\mathcal H}^1(\arc{QQ'})}{L^2}(1-6\eta)(1+7\eta)
= \frac{q'-q}L (1+\eta-42\eta^2)\,,
\]
which is impossible as soon as $\eta$ is chosen small enough. This concludes the proof of this step.
\step{II}{The derivative $\varphi'(q)$ converges to $L\nu$.}
In order to conclude the proof, we now have to check that $\varphi'(q)\to L\nu$ when $q\to b$, where $\nu$ is the direction found in Step~I. Suppose that this is not the case; then, since we already know that $\varphi'$ is continuous, with $|\varphi'|=L$, on the open interval $(a,b)$ by Lemma~\ref{shortonC1}, the set of limiting directions of the vectors $\varphi'(t)/L$ with $t\to b$ is a non-trivial arc of $\mathbb S^1$, containing of course the direction $\nu$ found in Step~I. Let us then pick a direction $\tilde\nu\neq \nu$ in the interior of this arc, satisfying
\[
|\nu-\tilde\nu| \leq \frac 14\, \min \bigg\{ \frac 1{L^2}, 1- \frac 1{L^2}\bigg\}\,,
\]
and let us select $\ell,\, \eta>0$ in such a way that $\segm{PB}\gg \ell$ for every $P=\varphi(p)$ with $p<a$, and that
\begin{equation}\label{seclas}
|\nu-\tilde\nu| > 16 \sqrt \eta\,.
\end{equation}
Let now $t<b$ be a point such that $b-t\ll \bar\delta(\ell,\eta,L)/L$, $\varphi'(t) =L \tilde\nu$ (which is possible since $\tilde\nu$ is in the interior of the arc containing all the limiting directions), and also such that
\begin{equation}\label{thilas}
\bigg|\frac{B-S}{\segm{SB}}-\nu \bigg| \leq \frac\eta{L^2} \qquad \forall\, S:\, \segm{SB} \leq 3 \segm{TB}\,;
\end{equation}
this last estimate is of course admissible thanks to Step~I. Moreover, let us fix $q<t$ so that $\segm{QB}=2\segm{TB}$. Since $\varphi$ cannot be a segment between $q$ and $b$, keeping in mind Lemma~\ref{ifnotshort} we can apply as usual Lemma~\ref{smallone} with $R=B$ and in particular we get that~(\ref{prop5}) holds true. Now, let $q'<t<q''$ be two points such that the direction $\nu'$ of the segment $Q'Q''$ satisfies
\begin{align}\label{foulas}
|\nu-\nu'| = 16 \sqrt\eta\,, && |\nu-\nu'| \leq \frac 12\, \min \bigg\{ \frac 1{L^2}, 1- \frac 1{L^2}\bigg\}\,.
\end{align}
Notice that two such points surely exist, thanks to~(\ref{seclas}) and the fact that the direction of $QB$ is very close to $\nu$ by~(\ref{thilas}). Moreover, since $\eta\ll |\nu-\nu'|$ and since the curve $\arc{QB}$ is very close to the segment $QB$ by the validity of~(\ref{prop5}), we find that $\segm{Q'Q''}\ll \delta=\segm{QB}$, hence in particular
\begin{align*}
\segm{QQ'}\geq \frac \delta 3\,, && \segm{QQ''}\geq \frac \delta 3\,, &&
\segm{Q'B}\geq \frac \delta 3\,, && \segm{Q''B}\geq \frac \delta 3\,.
\end{align*}
Finally, (\ref{thilas}) and~(\ref{foulas}) give that $\tilde\nu\neq \nu'$, and thus that $\varphi$ is not a segment between $Q'$ and $Q''$; hence, by Lemma~\ref{ifnotshort}, the inverse $L$-Lipschitz property for $\varphi^+_{q'q''}$ must fail for some $P=\varphi(p)$ with $p<a$ and some point between $Q'$ and $Q''$, so we can apply Lemma~\ref{bigone} with $R=B$. However, (\ref{foulas}) is clearly in contradiction with~(\ref{oscest}), and this shows that $\varphi'(q)\to L\nu$ for $q\to b$, as desired.
\end{proof}
Let us conclude this section by simply observing that Proposition~\ref{nLI} is an immediate consequence of Lemma~\ref{shortonC1}, Lemma~\ref{rightleft} and Lemma~\ref{nowlast} (and the symmetric counterparts of the last two): indeed, the internal regularity is given by Lemma~\ref{shortonC1}, while the regularity up to the boundary is achieved by applying Lemma~\ref{rightleft} or Lemma~\ref{nowlast} to the points of the boundary which are in $(0,1)$ or in $\{0,1\}$ respectively.
\section{Proof of Theorem~\ref{main}\label{sect4}}
This section is devoted to the proof of Theorem~\ref{main}. The idea is simply to put together Proposition~\ref{step1} and Proposition~\ref{nLI}; with the first result, one gets a biLipschitz function which is piecewise linear on most of $[0,1]$, and then the second result allows us to modify the function on the small regions which are left out.
\begin{proof}[Proof of Theorem~\ref{main}]
We take a small constant $\xi=\xi(L,\varepsilon)$, to be specified at the end. For the sake of simplicity, we divide the proof into a few steps.
\step{I}{The function $\varphi_1$ from Proposition~\ref{step1}.}
We start by applying Proposition~\ref{step1} to $\varphi$, thus getting an $(L+\xi)$-biLipschitz function $\varphi_1:[0,1]\to\mathbb R^2$, which satisfies~(\ref{claimmain}) and which is piecewise linear on a finite union $A$ of closed intervals which cover a portion of length at least $1-\xi$ of the whole $[0,1]$. Let us then write
\[
[0,1]\setminus A = \bigcup\nolimits_{i=1}^N J_i\,,
\]
where the $J_i$'s are a finite number of open intervals, satisfying
\[
\sum_{i=1}^N |J_i| = 1 - |A| \leq \xi\,.
\]
\step{II}{The function $\varphi_2$, which ``goes fast'' near the intervals $J_i$.}
In this step, we make a simple modification of $\varphi_1$, in order to be able to apply Proposition~\ref{nLI} later. To do so, we recall that the function $\varphi_1$ is finitely piecewise linear on a subset $A$ of $[0,1]$; hence, we can define a very small length $\ell>0$ such that any interval (contained in $A$) where $\varphi_1$ is linear is much longer than $\ell$. This is of course possible since such intervals are finitely many. In particular, the distance between any two consecutive ``bad'' intervals $J_i$ and $J_{i+1}$ is always much larger than $\ell$. Up to further decreasing $\ell$, we can also assume that
\begin{equation}\label{smallell}
2\ell N < \xi\,.
\end{equation}
Let us now define $A^-$ as the subset of $[0,1]$ made of all the points of $A$ whose distance from $[0,1]\setminus A$ is smaller than $\ell$: by construction, $A^-$ is a union of either $2N-2$, $2N-1$, or $2N$ small subintervals of $A$, on each of which $\varphi_1$ is linear; the exact number depends on whether $0$ and/or $1$ belong to $A$ or not. Let us now consider the function $\tau:[0,1]\to [0,C]$ defined by
\begin{align*}
\tau(0)=0\,, && \tau'(x) = 1 \quad \forall\, x\notin A^-\,, && \tau'(x) = \frac {|\varphi_1'(x)|}{L+\xi} \quad \forall\, x\in A^-\,.
\end{align*}
It follows immediately from the definition that $\tau'(x)\leq 1$ for every $x$, and that
\[
1 - 2N\ell\leq C \leq 1\,,
\]
which also by~(\ref{smallell}) implies
\[
x-\xi \leq \tau(x) \leq x \qquad \forall\, 0\leq x\leq 1\,,
\]
so that in particular
\begin{equation}\label{star}
1 - \xi \leq \tau(1) = C\,.
\end{equation}
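Let us briefly justify the inequalities above, observing at the same time that $\tau$ is invertible. Since $\varphi_1$ is $(L+\xi)$-biLipschitz, we have $|\varphi_1'|\geq (L+\xi)^{-1}$ almost everywhere, so $\tau'\geq (L+\xi)^{-2}>0$ on $A^-$ and $\tau$ is strictly increasing, hence invertible. Moreover, since $\tau'=1$ out of $A^-$ and $|A^-|\leq 2N\ell<\xi$ by~(\ref{smallell}), for every $x\in[0,1]$ we can estimate
\[
\tau(x) = x - \int_{A^-\cap[0,x]} \big(1-\tau'(s)\big)\,ds \geq x - |A^-| \geq x - \xi\,,
\]
which gives all the claimed inequalities at once.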
Finally, we define the function $\varphi_2: [0,C]\to \mathbb R^2$ as
\[
\varphi_2(x) = \varphi_1\big(\tau^{-1}(x)\big)\,.
\]
Since $\tau'\leq 1$, as already observed the inverse Lipschitz property for $\varphi_2$ behaves at least as well as the one for $\varphi_1$; hence, $\varphi_2$ satisfies the inverse $(L+\xi)$-Lipschitz property because so does $\varphi_1$. On the other hand, $|\varphi_2'(x)| = |\varphi_1'(\tau^{-1}(x))|\leq L+\xi$ as soon as $\tau^{-1}(x)\notin A^-$\,, while otherwise $|\varphi_2'(x)| =L+\xi$. As a consequence, also the $(L+\xi)$-Lipschitz property for $\varphi_2$ follows. Summarizing, the function $\varphi_2:[0,C]\to \mathbb R^2$ is an $(L+\xi)$-biLipschitz function, which is finitely piecewise linear on the whole $[0,C]$ except on $N$ intervals $\widetilde J_i$, where each $\widetilde J_i$ is simply $\tau(J_i)$, and so $|\widetilde J_i|=|J_i|$. Observe that, by construction, the function $\varphi_2$ is linear, with $|\varphi_2'|=L+\xi$, for a short while before and after each of the intervals $\widetilde J_i$, except of course before $\widetilde J_1$ if $0\in \widetilde J_1$, and after $\widetilde J_N$ if $C\in \widetilde J_N$. We conclude this step with a couple of observations. First of all, by construction, $\varphi_2$ is finitely piecewise linear on a subset $\widetilde A$ of $[0,C]$ which satisfies
\begin{equation}\label{reclen}
C - |\widetilde A| = C - |\tau(A)| = \big|\tau \big( [0,1] \setminus A\big)\big|
=| [0,1] \setminus A| = 1 - |A| \leq \xi\,.
\end{equation}
And moreover, for every $x\in [0,C]$, we have
\begin{equation}\label{rat}
\big|\varphi_2(\tau(x)) - \varphi(x)\big| = \big|\varphi_1(x)- \varphi(x)\big| \leq \xi
\end{equation}
recalling that~(\ref{claimmain}) holds for $\varphi_1$.
\step{III}{The function $\varphi_3$, which is short on each $\widetilde J_i$.}
We can now further modify the function $\varphi_2$. We simply apply Proposition~\ref{nLI} (with $L+\xi$ in place of $L$) to each of the intervals $\widetilde J_i$; more precisely, we first apply the proposition to the interval $\widetilde J_1$, finding a map $\varphi_3^1:[0,C_1]\to \mathbb R^2$ which is $(L+\xi)$-biLipschitz and satisfies~(\ref{propprel}). Then, we apply again Proposition~\ref{nLI} to the interval $\widetilde J_2^1$, which is simply the interval $\widetilde J_2$ translated by a distance $C-C_1$, so that $\varphi_3^1$ on $\widetilde J_2^1$ coincides with $\varphi_2$ on $\widetilde J_2$. Going on with the obvious recursion, after $N$ steps we have finally defined the function $\varphi_3: [0,C']\to \mathbb R^2$, which is $(L+\xi)$-biLipschitz, and which by construction is finitely piecewise linear on some subset $A'$ of $[0,C']$ satisfying
\[
C' - |A'| = C - |\widetilde A| \leq \xi\,,
\]
recalling~(\ref{reclen}). We can also say something more precise: $[0,C']\setminus A'$ is the union of $N$ intervals $J_i'$, and on the closure of each of them the function $\varphi_3$ is of class ${\rm C}^1$. In addition, since the function has been changed only on the intervals $\widetilde J_i$, and in doing so those intervals have been shrunk, there exists a function $\tilde\tau:[0,C]\to [0,C']$ such that
\begin{align*}
\tilde\tau(\widetilde A) = A'\,, && \tilde\tau'(x) = 1\quad \forall\, x\in \widetilde A\,,
\end{align*}
and by~(\ref{rat}) we get
\begin{equation}\label{ratt}
\big| \varphi_3\big(\tilde\tau(\tau(x))\big) - \varphi(x)\big| =
\big| \varphi_2(\tau(x)) - \varphi(x)\big| \leq \xi \qquad \forall\, x\in \big(\tilde\tau \circ \tau\big)^{-1} (A')\,.
\end{equation}
\step{IV}{The function $\varphi_4$, which is finitely piecewise linear.}
From the previous steps, we now have a function $\varphi_3:[0,C']\to \mathbb R^2$ which is finitely piecewise linear on almost the whole $[0,C']$, and which is ${\rm C}^1$ on the closure of each of the $N$ intervals $J_i'$ where it is not already piecewise linear.\par
As pointed out in Remark~\ref{C1}, it is elementary to modify a ${\rm C}^1$ function into a finitely piecewise linear one, up to increasing the biLipschitz constant by an arbitrarily small amount. Applying this argument $N$ times, to each of the intervals $J_i'$, we then get a finitely piecewise linear function $\varphi_4:[0,C']\to \mathbb R^2$, which is $(L+2\xi)$-biLipschitz and which of course satisfies $\varphi_4(0)=\varphi(0)$ and $\varphi_4(C')=\varphi(1)$. Moreover, since $\varphi_4=\varphi_3$ on $A'$, the estimate~(\ref{ratt}) also holds with $\varphi_4$ in place of $\varphi_3$.
\step{V}{The ``final'' function $\varphi_\varepsilon$.}
We are finally in a position to conclude the proof of our Theorem. The function $\varphi_4$ was already almost perfect, its only flaw being that it is defined on the interval $[0,C']$ instead of on $[0,1]$. Nevertheless, we can easily observe that
\begin{equation}\label{esob}
1-2\xi \leq C' \leq 1\,.
\end{equation}
Indeed, the fact that $C'\leq 1$ is obvious, since all our modifications of the map $\varphi$ either left the domain unchanged or shrank it. On the other hand, recalling~(\ref{reclen}) and~(\ref{star}), we have
\begin{equation}\label{abt}
C' \geq |A'| = |\widetilde A| \geq C-\xi \geq 1-2\xi\,,
\end{equation}
so the validity of~(\ref{esob}) is established.\par
The function $\varphi_\varepsilon$ will then be simply a reparameterization of $\varphi_4$; precisely, we set $\varphi_\varepsilon : [0,1]\to\mathbb R^2$ as
\[
\varphi_\varepsilon (x) = \varphi_4 ( C' x)\,.
\]
The function $\varphi_\varepsilon$ is then finitely piecewise linear by construction; of course it satisfies $\varphi_\varepsilon(0)=\varphi(0)$ and $\varphi_\varepsilon(1)=\varphi(1)$, and it is at most $(L+2\xi)(1+3\xi)$-biLipschitz, hence in particular $(L+\varepsilon)$-biLipschitz if $\xi(L,\varepsilon)$ is suitably small.\par
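Let us briefly verify the biLipschitz constant. Since $\varphi_4$ is $(L+2\xi)$-biLipschitz and $1-2\xi\leq C'\leq 1$ by~(\ref{esob}), for every $x\neq y$ in $[0,1]$ we have
\[
\frac{|\varphi_\varepsilon(x)-\varphi_\varepsilon(y)|}{|x-y|} = C'\, \frac{\big|\varphi_4(C'x)-\varphi_4(C'y)\big|}{|C'x-C'y|} \in \bigg[ \frac{1-2\xi}{L+2\xi}\,,\ L+2\xi\bigg]\,,
\]
and $(L+2\xi)/(1-2\xi)\leq (L+2\xi)(1+3\xi)$ as soon as $\xi\leq 1/6$.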
To conclude, we have then just to check that $\|\varphi_\varepsilon-\varphi\|_{L^\infty}\leq \varepsilon$. To do so, let us take a generic $z \in [0,1]$; first of all, we can find $x\in A'' = A'/C'$ such that, also by~(\ref{abt}),
\[
|z-x| \leq 1 - |A''| = 1 - \frac{|A'|}{C'} \leq 2\xi\,.
\]
Then, we define $y=(\tilde\tau\circ\tau)^{-1}(C'x)$ and, since
\[
y-2\xi \leq y - (1-C') \leq \tilde\tau(\tau(y))=C' x \leq y \,,
\]
and recalling that~(\ref{ratt}) also holds with $\varphi_4$ in place of $\varphi_3$, we deduce
\begin{align*}
|y-x| \leq |y-C'x| + |C'-1| \leq 4\xi\,, &&
\big|\varphi_\varepsilon(x) -\varphi(y)\big|
=\big|\varphi_4(C'x) - \varphi(y)\big|
\leq \xi\,.
\end{align*}
Recalling that $\varphi$ is $L$-biLipschitz while $\varphi_\varepsilon$ is $(L+\varepsilon)$-biLipschitz, the above estimates give us
\[\begin{split}
|\varphi_\varepsilon(z) - \varphi(z)| &\leq |\varphi_\varepsilon(z) - \varphi_\varepsilon(x)| + |\varphi_\varepsilon(x) - \varphi(y) | + |\varphi(y)-\varphi(z)|\\
&\leq (L+\varepsilon)|z -x| + \xi + L|y-z|
\leq (L+\varepsilon) 2\xi + \xi + 6 L \xi\,.
\end{split}\]
Since $z\in [0,1]$ was generic, and since the last quantity is smaller than $\varepsilon$ as soon as $\xi=\xi(L,\varepsilon)$ is small enough, the $L^\infty$ estimate has been established, and the proof is concluded.
\end{proof}
A straightforward consequence of our construction is the following.
\begin{corollary}\label{cormain}
Assume that $\varphi:[0,1]\to\mathbb R^2$ is an $L$-biLipschitz function, linear on $[0,a]$ and on $[1-a,1]$ for some $a\ll 1$. Fix two quantities $0<a'<a$ and $\varepsilon>0$. There exists an $(L+\varepsilon)$-biLipschitz function $\varphi_\varepsilon:[0,1]\to\mathbb R^2$ such that~(\ref{claimmain}) holds, $\varphi_\varepsilon$ is finitely piecewise linear on $[0,1]$, and $\varphi_\varepsilon$ coincides with $\varphi$ on the intervals $[0,a']$ and $[1-a',1]$.
\end{corollary}
\begin{proof}
To obtain the required function $\varphi_\varepsilon$, it is enough to check the proof of Theorem~\ref{main} and to modify it only very slightly.\par
Indeed, the first step of that proof simply consists in taking a function $\varphi_1$ given by Proposition~\ref{step1}. On the other hand, from a quick look at the proof of Proposition~\ref{step1}, it is obvious that $\varphi_1$ coincides with $\varphi$ in all the intervals $[m/N,(m+1)/N]$ where $\varphi$ is linear. As a consequence, we can select any $\delta>0$ and, up to taking $N$ much bigger than $1/a$ and $1/\delta$ in the proof of Proposition~\ref{step1}, we get a function $\varphi_1$ which coincides with $\varphi$ on $[0,a-\delta]$ and on $[1-(a-\delta),1]$, and in particular these two intervals are contained in the set $A$ of Step~I.\par
In the second step of the proof of Theorem~\ref{main}, we just modified $\varphi_1$ in order to make it faster near the ends of the good intervals, thus getting a new function $\varphi_2:[0,C]\to\mathbb R^2$. Up to taking $\ell$ smaller than $\delta$ there, we can do the same construction, and we have then that $\varphi_2=\varphi$ on $[0,a-2\delta]$ and that $\varphi_2(x+C-1)=\varphi(x)$ for each $x\in [1-(a-2\delta),1]$.\par
In the third and fourth steps of the proof, we defined a function $\varphi_4:[0,C']\to\mathbb R^2$, modifying $\varphi_2$ only in the bad intervals. Again, we can do exactly the same thing now, and we still have that $\varphi_4$ coincides with $\varphi$ on $[0,a-2\delta]$ and that $\varphi_4(x+C'-1)=\varphi(x)$ for each $x\in [1-(a-2\delta),1]$.\par
Finally, in the last step we defined the approximating function $\varphi_\varepsilon$, which was obtained simply by ``changing the velocity'' of $\varphi_4$, namely, we set $\varphi_\varepsilon(x)=\varphi_4(C'x)$. This time, we cannot do the same, since otherwise we would lose the information that $\varphi$ and $\varphi_\varepsilon$ coincide near $0$ and $1$; nevertheless, it is clear that a solution is just to define
\[
\varphi_\varepsilon(x) = \left\{\begin{array}{ll}
\varphi_4(x) &\hbox{for $0\leq x \leq a$}\,,\\[5pt]
\varphi_4\bigg(\begin{aligned} a+\frac{C'-2a}{1-2a}\,(x-a)\end{aligned}\bigg) \qquad &\hbox{for $a\leq x \leq 1-a$}\,,\\[10pt]
\varphi_4(x+C'-1) &\hbox{for $1-a\leq x \leq 1$}\,.
\end{array}\right.
\]
Indeed, with this definition, the very same arguments as in Step~V of the proof of Theorem~\ref{main} still ensure that $\varphi_\varepsilon$ is $(L+\varepsilon)$-biLipschitz and that~(\ref{claimmain}) holds; moreover, $\varphi_\varepsilon$ is finitely piecewise linear by definition. Finally, $\varphi_\varepsilon$ coincides with $\varphi$ on $[0,a-2\delta]$ and on $[1-(a-2\delta),1]$ thus, provided that we have chosen $\delta$ smaller than $(a-a')/2$, the proof is concluded.
\end{proof}
\section{Generalization to $\mathbb S^1$\label{sect5}}
In this section we generalize Theorem~\ref{main} in order to consider the case of a map defined on $\mathbb S^1$, instead of on $[0,1]$. To do so, we need the following standard definitions.
\begin{definition}
Let $\pi:\mathbb R\to \mathbb S^1$ be the map $\pi(t) = t \, ({\rm mod}\ 2\pi)$, let $\varphi:\mathbb S^1\to\mathbb R^2$ be any function, and let $a<b$ be any two real numbers such that $b-a<2\pi$. We denote by $\varphi^{ab}:[a,b]\to\mathbb R^2$ the function defined by $\varphi^{ab}(t) = \varphi(\pi(t))$ for any $a\leq t\leq b$. We say that the function $\varphi$ is \emph{finitely piecewise linear} if so is the function $\varphi^{ab}$ for any choice of $a$ and $b$.
\end{definition}
The goal of this last section is to prove the following statement.
\begin{theorem}\label{main2}
Let $\varphi:\mathbb S^1\to\mathbb R^2$ be an $L$-biLipschitz function, and $\varepsilon>0$. Then, there exists a finitely piecewise linear, $(L+\varepsilon)$-biLipschitz function $\varphi_\varepsilon:\mathbb S^1\to\mathbb R^2$ such that $\|\varphi-\varphi_\varepsilon\|_{L^\infty}\leq \varepsilon$.
\end{theorem}
\begin{proof}
First of all, let us fix a small positive quantity $\varepsilon'$, to be specified later, and let us also fix a small $\theta>0$ such that
\begin{equation}\label{deftheta}
1-\varepsilon' \leq \frac{2\sin(\theta/2)}\theta \leq 1\,.
\end{equation}
We divide the proof into a few steps for clarity.
\step{I}{The Lebesgue points for $\varphi$ and the function $\varphi_1$.}
Let $p\in \mathbb S^1$ be a Lebesgue point for $\varphi$, that is, for any $a<z<b$ such that $p=\pi(z)\in \big(\pi(a),\pi(b)\big)$ the point $z$ is a Lebesgue point for $\varphi^{ab}$. In particular, let us choose $z\in\mathbb R$ so that $p=\pi(z)$, and let us set for a moment $a=z-\theta/2$ and $b=z+\theta/2$. Notice that, by~(\ref{deftheta}), for any $a<x<y<b$ one has
\begin{equation}\label{stes}
(1-\varepsilon') |y-x| \leq |\pi(y)-\pi(x)|\leq |y-x|\,.
\end{equation}
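Let us briefly justify~(\ref{stes}): identifying $\mathbb S^1$ with the unit circle of $\mathbb R^2$, for any $a<x<y<b$ the quantity $|\pi(y)-\pi(x)|$ is the length of the chord subtended by an arc of length $y-x\leq \theta$, that is,
\[
|\pi(y)-\pi(x)| = 2\sin\bigg(\frac{y-x}2\bigg)\,,
\]
and since the function $t\mapsto 2\sin(t/2)/t$ is decreasing on $(0,\pi)$ and converges to $1$ as $t\to 0$, both inequalities in~(\ref{stes}) follow at once from~(\ref{deftheta}).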
Let us then concentrate on the function $\varphi^{ab}$, which is easily $L_1$-biLipschitz with $L_1=L(1-\varepsilon')^{-1}$ thanks to~(\ref{stes}). The point $z$ is a Lebesgue point for $\varphi^{ab}$, hence we can apply Lemma~\ref{Lebesgue} to it, using $[a,b]$ in place of $[0,1]$ and with constant $\varepsilon'$ in place of $\varepsilon$. We then get a constant $\bar \ell$ and the sets $I_\ell(z)$ for $\ell\leq\bar\ell$. We can arbitrarily select $\ell\ll \theta$ and two points $s<z<t$ in $I_\ell(z)$, and the lemma ensures that the function $\psi:[a,b]\to\mathbb R^2$ defined as $\psi=\varphi^{ab}_{st}$ is $L_2=(L_1+\varepsilon')$-biLipschitz and satisfies $\|\psi-\varphi^{ab}\|_{L^\infty}\leq \varepsilon'$. We can then define the function $\tilde\varphi:\mathbb S^1\to\mathbb R^2$ as $\tilde\varphi=\varphi$ outside the arc $\pi(a)\pi(b)$, and $\tilde\varphi(q) = \psi(\pi^{-1}(q))$ inside. By construction we have that $\|\varphi-\tilde\varphi\|_{L^\infty}\leq \varepsilon'$, and moreover $\tilde\varphi$ is $L_3$-biLipschitz with $L_3=L_2(1-\varepsilon')^{-1}$: this can be obtained by arguing exactly as in Lemma~\ref{Lebesgue}, and keeping in mind that $\ell\ll\theta$.\par
We now want to use a similar argument with many Lebesgue points, instead of just one. To do so, let us select finitely many Lebesgue points $p_1,\, p_2,\, \dots \, , \, p_M$ in $\mathbb S^1$, such that every arc $\arc{p_ip_{i+1}}$ in $\mathbb S^1$ has length less than $\theta$ and does not contain any other point $p_j$ (of course, as usual we denote $p_{M+1}\equiv p_1$). For each of these points, say $p_i$, we can repeat the argument above, selecting $\ell$ much smaller than the minimal distance between two of the points $p_j$, and finding a function $\tilde\varphi_i$ which is a segment between $\tilde\varphi_i(s_i)=\varphi(s_i)$ and $\tilde\varphi_i(t_i)=\varphi(t_i)$. Hence, each function $\tilde\varphi_i$ coincides with $\varphi$ on all of $\mathbb S^1$ except for a small arc $\arc{s_it_i}$ around $p_i$, and these arcs are pairwise disjoint. We can then define $\varphi_1:\mathbb S^1\to\mathbb R^2$ as the function which coincides with $\tilde\varphi_i$ in every $\arc{s_it_i}$, and with $\varphi$ otherwise. Arguing as in Section~\ref{sect2}, in particular keeping in mind that each length $\ell$ has been chosen much smaller than the distance between different points $p_j$, we obtain immediately that $\varphi_1$ is $L_4$-biLipschitz, with $L_4=L_3+\varepsilon'$. Moreover, by construction $\varphi_1$ is linear on each arc $(s_i,t_i)$, and $\|\varphi_1-\varphi\|_{L^\infty}\leq \varepsilon'$.
\step{II}{Modification in each arc $\arc{p_ip_{i+1}}$ and the function $\varphi_\varepsilon$.}
We now restrict our attention to the function $\varphi_1$ on the arc $\arc{p_ip_{i+1}}$. This is an $L_4$-biLipschitz function, and its image is a segment at the beginning and at the end, that is, from $p_i$ to $t_i$ and from $s_{i+1}$ to $p_{i+1}$. Let us then take two real numbers $a<b$ with $b-a<2\pi$ and $\pi(a)=p_i$, $\pi(b)=p_{i+1}$, and let us call $\psi=\varphi_1^{ab}$. Moreover, let $a^+,\, b^- \in (a,b)$ be such that $\pi(a^+)=t_i$ and $\pi(b^-)=s_{i+1}$: hence, the function $\psi : [a,b]\to\mathbb R^2$ is an $L_5$-biLipschitz function with $L_5=L_4(1-\varepsilon')^{-1}$, again by~(\ref{stes}), and it is linear in $[a,a^+]$ and in $[b^-,b]$. We can then apply Corollary~\ref{cormain}, so we get a piecewise linear function $\tilde\psi:[a,b]\to\mathbb R^2$, biLipschitz with constant $L_6=L_5+\varepsilon'$, such that $\|\tilde\psi-\psi\|_{L^\infty}\leq \delta$, and coinciding with $\psi$ (thus, linear) on the two intervals $[a,a']$ and $[b',b]$, for two points $a<a'<a^+$ and $b^-<b'<b$, and for a suitable constant $\delta>0$ to be specified later.\par
We then define $\varphi_2^i:\mathbb S^1\to\mathbb R^2$ as the function which coincides with $\varphi_1$ outside the arc $\arc{p_ip_{i+1}}$, and with $\tilde\psi\circ \pi^{-1}$ inside it. Notice that the function $\varphi_2^i$ is piecewise linear in the arc $\arc{p_ip_{i+1}}$, and in particular it is linear and coincides with $\varphi_1$ in the small arcs $\arc{p_it_i'}$ and $\arc{s_{i+1}'p_{i+1}}$, where the points $t_i'\in \arc{p_it_i}$ and $s_{i+1}'\in\arc{s_{i+1}p_{i+1}}$ are $\pi(a')$ and $\pi(b')$ respectively. Moreover, we also have $\|\varphi_2^i-\varphi_1\|_{L^\infty}=\|\tilde\psi-\psi\|_{L^\infty}\leq \delta$.\par
Finally, we repeat the same construction for each $1\leq i\leq M$, and we define the final function $\varphi_\varepsilon$ as the function coinciding with $\varphi_2^i$ in each arc $\arc{p_ip_{i+1}}$, so that $\|\varphi_\varepsilon-\varphi_1\|_{L^\infty}\leq \delta$.
\step{III}{Conclusion.}
It only remains to check that the function $\varphi_\varepsilon$ satisfies the requirements of the Theorem. By construction we have that $\varphi_\varepsilon$ is finitely piecewise linear, and moreover
\[
\|\varphi_\varepsilon-\varphi\|_{L^\infty}\leq \|\varphi_\varepsilon-\varphi_1\|_{L^\infty}+\|\varphi_1-\varphi\|_{L^\infty}\leq \delta+\varepsilon'\,,
\]
so we have $\|\varphi_\varepsilon-\varphi\|_{L^\infty}\leq \varepsilon$ as soon as we have chosen $\delta$ and $\varepsilon'$ small enough. To conclude, we only have to check that $\varphi_\varepsilon$ is $(L+\varepsilon)$-biLipschitz.\par
To do so, let us take two points $x,\, y\in \mathbb S^1$. Suppose first that they belong to the same arc $\arc{p_ip_{i+1}}$. Then, by construction we have, setting $L_7=L_6(1-\varepsilon')^{-1}$,
\begin{equation}\label{last1}\begin{split}
|\varphi_\varepsilon(y)-\varphi_\varepsilon(x)|
&=|\varphi_2^i(y)-\varphi_2^i(x)|
=\big|\tilde\psi(\pi^{-1}(y))-\tilde\psi(\pi^{-1}(x))\big|
\leq L_6 |\pi^{-1}(y)-\pi^{-1}(x)|\\
&\leq L_6(1-\varepsilon')^{-1} |y-x|
=L_7|y-x|\,,
\end{split}\end{equation}
since $\tilde\psi$ is $L_6$-biLipschitz by Step~II and again by~(\ref{stes}).
Suppose now, instead, that $x$ and $y$ belong to two different arcs; in particular, let us take $x\in \arc{p_ip_{i+1}}$ and $y\in \arc{p_jp_{j+1}}$. We can divide this case into two subcases, depending on whether or not the equality $\varphi_\varepsilon=\varphi_1$ holds at both $x$ and $y$. If $\varphi_\varepsilon(x)=\varphi_1(x)$ and $\varphi_\varepsilon(y)=\varphi_1(y)$, then since $\varphi_1$ is $L_4$-biLipschitz we have that
\begin{equation}\label{last2}
|\varphi_\varepsilon(y)-\varphi_\varepsilon(x)|=
|\varphi_1(y)-\varphi_1(x)| \leq L_4 |y-x|\,.
\end{equation}
Finally, assume (by symmetry) that $\varphi_\varepsilon(x)\neq \varphi_1(x)$. By construction, this implies that $x\in\arc{t_i's_{i+1}'}$; since $y\notin \arc{p_ip_{i+1}}$, we derive that $|y-x|\geq \eta$, where we define
\[
\eta = \min \Big\{ |d-c|:\, \exists\ 1\leq h \leq M,\, d\notin \arc{p_hp_{h+1}},\, c \in \arc{t_h's_{h+1}'}\Big\}\,.
\]
Notice that $\eta$ is strictly positive, since the arcs $\arc{p_hp_{h+1}}$ are only finitely many. Moreover, notice that we are free to choose $\delta$ depending on $\eta$, thanks to the construction of Step~II. As a consequence, recalling that $\varphi_1$ is $L_4$-biLipschitz and that $\|\varphi_\varepsilon-\varphi_1\|_{L^\infty}\leq \delta$, we have for this last case
\begin{equation}\label{last3}
\frac{|\varphi_\varepsilon(y)-\varphi_\varepsilon(x)|}{|y-x|} \leq \frac{|\varphi_1(y)-\varphi_1(x)|}{|y-x|} + \frac{2\delta}{\eta}
\leq L_4 + \frac{2\delta}{\eta}\leq L_4+\varepsilon'\,,
\end{equation}
where the last inequality is true as soon as $\delta$ has been chosen small enough.\par
We are then in a position to conclude: it is straightforward to check that all the constants $L_j$ for $1\leq j\leq 7$ converge to $L$ when $\varepsilon'$ goes to $0$; then, the estimates~(\ref{last1}), (\ref{last2}) and~(\ref{last3}) give that $\varphi_\varepsilon$ is $(L+\varepsilon)$-biLipschitz as soon as $\varepsilon'$ is small enough.
\end{proof}
\section{Introduction}
\label{sec:intro}
Air pollution is one of the most serious threats to human health and the environment.
In order to mitigate air pollution, we need to measure air quality accurately and at very high spatial and temporal resolution, especially within urban areas.
Fixed monitoring stations have been deployed to measure the concentration of air pollutants. Given the high cost of the necessary instruments, the number of such installations is limited.
Although fixed stations can collect measurements with high temporal resolution, their spatial resolution is very low; hence, there is a need to spatially infer the concentration of air pollutants.
Recent advances in sensors, IoT platforms, and mobile communications enable deploying low-cost mobile monitoring stations, e.g., by mounting sensors on vehicles.
Examples include the air quality monitoring system using the public transport network in Zurich~\cite{hasenfratz2014pushing}, the system using Google street-view cars in Oakland, CA~\cite{apte2017high}, and imec's City-of-Things platform that uses postal trucks~\cite{latre2016city}.
Deploying mobile stations increases the spatial density of air quality measurements;
however, their temporal resolution per location is low since the vehicles are moving. In addition, there are still locations not covered by the vehicles.
This makes the computational inference of missing air quality measurements across the spatial and temporal dimensions a problem of high interest.
A number of methods have been proposed to infer the air pollutant concentration using measurements collected by \textit{fixed monitoring stations}.
They are based on either \textit{physical} models or \textit{data-driven} solutions~\cite{cheng2018neural}.
In the former approach, the complex physical dispersion processes of air pollutants are modeled using observed data and empirical assumptions~\cite{kim2012urban,arystanbekova2004application,mensink1970aurora}.
Methods in this category, however, often require the availability of additional information, e.g., the distribution of pollution sources and accurate weather models~\cite{airquality15}.
Furthermore, the assumptions behind them might not hold given the variability of urban landscapes~\cite{cheng2018neural}.
Data-driven methods do not rely on strong assumptions;
instead, they utilize diverse local data, such as meteorological information, points of interest and traffic information, to infer the concentration of air pollutants.
By leveraging the recent advances in deep learning, in particular, data-driven methods have achieved good inference performance~\cite{cheng2018neural,qi2018deep,fan2017spatiotemporal}.
Only very limited work has focused on air quality inference using data collected by mobile stations~\cite{hasenfratz2014pushing}.
In this paper, we use the City-of-Things platform from imec \cite{latre2016city} to retrieve street-level air quality data measured using mobile stations in Antwerp, Belgium.
Given the available data, we infer the air quality in unmeasured locations across time and space.
We follow a data-driven approach and formulate the air quality inference problem as a graph-based matrix completion problem. Specifically, we exploit the topology of Antwerp's street network and propose a novel deep learning model based on variational graph autoencoders; we refer to our model as AVGAE.
The model effectively captures the spatio-temporal dependencies in the measurements, without using other types of data, such as traffic or weather, apart from the street-network topology.
Experiments on real data from the City-of-Things platform
show that our method outperforms various reference models.
To summarize, our main contributions in this paper are: (\textit{i}) we formulate air quality inference as a graph-based matrix completion problem
and propose a variational graph autoencoder for accurate inference. To the best of our knowledge, this is the first work to explore graph-based neural network models in the context of air quality inference;
(\textit{ii}) the proposed model effectively incorporates the temporal and spatial correlations
via a temporal smoothness constraint and graph convolutional operations; (\textit{iii}) we carry out comprehensive experiments on real-world datasets to evaluate the proposed model showing its superior performance compared to existing models.
The rest of this paper is organized as follows: Section~\ref{sec:related_work} reviews the related work, and Section~\ref{sec:method} states the problem and presents our model. Section~\ref{sec:experiments} describes the experiments, and Section~\ref{sec:conclusion} concludes the paper.
\section{Related Work}
\label{sec:related_work}
\subsection{Air Quality Inference}
Unmeasured air pollution in locations or time instances can be estimated using simple interpolation or resampling techniques~\cite{fan2017spatiotemporal,li2011spatiotemporal}. However, given the dynamics of air pollutants, these techniques tend to produce high estimation errors.
Alternatively, one can use kriging-based variogram models to capture the variance in air pollution data with respect to the geodesic distance~\cite{kitanidis1997introduction,xie2017review}.
As a purely spatial interpolation method, however, this approach does not capture the temporal correlation in the air quality data.
In recent years, we have witnessed the rise of machine-learning-based methods.
In~\cite{zheng2013u}, a co-training approach with temporal and spatial classifiers is proposed for classifying discrete air quality indices (AQIs); yet, this model cannot be used to infer the real-valued concentration of air pollutants.
Deep-neural-network-based models have been proposed for air quality inference in~\cite{cheng2018neural,qi2018deep}.
These models exploit the spatio-temporal correlations in the concentration of air pollutants either by incorporating additional information in the model---from traffic, weather, etc.---or by imposing objective constraints. Unlike these methods, our work utilizes a graph variational autoencoder to estimate the concentration of air pollutants across space and time, and provides higher estimation performance without considering additional information.
Alternatively, the authors of~\cite{hsieh2015inferring} proposed a model to infer the air quality using a graph-based semi-supervised approach. The work considers an affinity graph of locations and deploys a label propagation mechanism to predict the air quality. This work is similar to ours in terms of formulating the air quality inference problem on graphs; however, instead of label propagation, we propose an end-to-end graph convolutional model, which is more flexible. It is worth noting that in~\cite{qi2018deep,zheng2013u,hsieh2015inferring} the considered area is divided into a uniform grid, whereas in the proposed approach we aggregate measurements non-uniformly across the street network (see Section~\ref{sec:formulation}).
\subsection{Matrix Completion on Graphs}
Matrix completion is a fundamental problem in machine learning, which focuses on inferring unknown entries of matrices~\cite{davenport2016overview}.
Applications of matrix completion include recommender systems~\cite{nguyen2018extendable}, cellular network planning~\cite{chouvardas2016method} and air quality inference~\cite{yu2017low}, to name a few.
Recently, a number of studies have addressed the problem of matrix completion with tools from graph signal processing~\cite{monti2017geometric,van2017graph,kalofolias2014matrix,huang2018matrix,huang2018rating} with applications in recommender systems.
Our method is related to these approaches but it includes specific components tailored to the problem of air quality inference from mobile measurements. In the experimental section, we compare the performance of our method against~\cite{nguyen2018extendable,monti2017geometric} and demonstrate its superior performance in inferring air quality data.
\subsection{Variational Graph Autoencoders}
Variational autoencoders (VAEs)~\cite{kingma2013auto} are generative models that have lately received considerable attention. The study in~\cite{liang2018variational} proposed a VAE with fully connected neural network layers with application in collaborative filtering, a particular application of matrix completion. Furthermore, variational inference on graphs has been proposed for link prediction~\cite{kipf2016variational}. Our model is different from~\cite{kipf2016variational,liang2018variational} in that we propose a variational graph autoencoder, which can express the spatial and temporal dependencies across air pollution measurements. Furthermore, the data in~\cite{liang2018variational} is assumed to follow a discrete multinomial distribution, whereas in our model, the data follows a continuous distribution.
\section{Method}
\label{sec:method}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{model}
\caption{The proposed variational graph autoencoder architecture for air quality inference (AVGAE).
The input of AVGAE consists of the incomplete matrix $\bm{X}$ and the matrix of geocoordinates $\bm{S}$.
The light gray row in $\bm{X}$ indicates a location without measurements across time, dark gray cells represent unmeasured locations at a given time instance, and the entries with a red font are reconstructed known entries on which we evaluate the loss function.
The function blocks $\bm{f}_{\text{GCN}}$ represent GCN layers. The encoder outputs the parameters $\bm{\mu}$, $\bm{\sigma}$ of a Gaussian distribution. The output matrix $\tilde{\bm{X}}$ approximates the known entries and contains the inferred unknown entries.}
\label{fig:model1}
\end{figure*}
\subsection{Problem Formulation and Notation}
\label{sec:formulation}
We focus on air quality inference at the street network of urban areas---namely, we consider only locations on streets---using measurements on the concentration of air pollutants collected by sensor-equipped vehicles moving around a specific urban area; the problem statement adheres to the smart cities concept. Each vehicle makes measurements while moving on the city street network, resulting in high spatial measurement density; in contrast, the measurements at a specific location have low temporal resolution.
As the time and location associated to a measurement are continuous,
it is convenient to aggregate the measurements at discrete time instances and locations.
We uniformly divide the time span of the data into equal slots of duration $\tau$ (e.g., one hour).
In a given timeslot $t$, we gather all measurements within a pre-defined geographical distance $r$ from a given spatial location $p$ on the street network and take their median as the measurement at location $p$ at timeslot $t$.
The street network information is obtained from OpenMapTiles\footnote{https://openmaptiles.com/downloads/europe/belgium/antwerp/}. Hence, the aggregation across space is non-uniform and is adapted to the considered locations on the street network.
The above aggregation process results in a measurement matrix $\bm{X} \in \mathbb{R}^{N \times T}$,
with $N$ the number of considered geographical locations and $T$ the number of timeslots.
An entry $\bm{X}_{ij}$, with $i = 1,\dots,N$ and $j = 1,\dots,T$, corresponds to the measurements
at the $i^{\text{th}}$ location and the $j^{\text{th}}$ timeslot.
$\bm{X}$ is a highly incomplete matrix with the set of known entries denoted by $\Omega$. Our task is to predict the air pollution concentration values in the unknown entries using the measurements (known entries) and the street-network topology.
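For concreteness, the following Python sketch illustrates one way to implement this aggregation; the input format (a list of timestamped, geotagged readings) and all helper names are illustrative assumptions rather than part of a released implementation.
\begin{verbatim}
# Minimal sketch of the spatio-temporal aggregation step.
# Assumptions: `records` is a list of (timestamp_h, lat, lon, value)
# tuples with timestamps in hours, and `locations` is the list of
# (lat, lon) street-network points; both are placeholders.
import math
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Geodesic distance in meters between two geocoordinates."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def aggregate(records, locations, tau=1.0, r=100.0, T=720):
    """Median-aggregate raw readings into an N x T matrix X.

    Unknown entries are left as NaN; the known entries form Omega.
    """
    N = len(locations)
    buckets = [[[] for _ in range(T)] for _ in range(N)]
    for t_h, lat, lon, val in records:
        j = int(t_h // tau)          # timeslot index
        if not 0 <= j < T:
            continue
        for i, (plat, plon) in enumerate(locations):
            if haversine_m(lat, lon, plat, plon) <= r:
                buckets[i][j].append(val)
    X = np.full((N, T), np.nan)
    for i in range(N):
        for j in range(T):
            if buckets[i][j]:
                X[i, j] = np.median(buckets[i][j])
    return X
\end{verbatim}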
For notational consistency, in the rest of the paper, we use bold-faced uppercase letters for matrices,
bold-faced lowercase letters for vectors and regular lowercase letters for scalar variables.
Both regular uppercase and lowercase Greek letters denote constants.
\subsection{Variational Autoencoders}
VAEs build on the assumption that the data points in a dataset can be drawn from a distribution conditioned by latent variables; furthermore, the latent variables follow a prior distribution, e.g., the Gaussian distribution. VAEs attempt to learn a deterministic function that transforms the Gaussian distribution to the distribution of the observed data.
Let $\bm{x}$ denote an example in the dataset and $\bm{z}$ the vector containing the latent variables. The inference process is modelled by
\begin{equation}
q(\bm{z} | \bm{x}) = \mathcal{N}\big( \bm{\mu},\bm{\sigma} \big),
\end{equation}
\noindent where $\bm{\mu} = f_{\mu}(\bm{x},\bm{\Theta_1})$ and $\bm{\sigma} = f_{\sigma}(\bm{x},\bm{\Theta_2})$ are parameters of the Gaussian distribution. The generative process is characterized by
\begin{equation}
p(\bm{x} | \bm{z}) \propto f_z( \bm{z}, \bm{\Phi}).
\end{equation}
It should be noted that $f_{\mu}, f_{\sigma}$ and $f_z$ are parameterized functions and their parameters $\bm{\Theta_1}, \bm{\Theta_2}$ and $\bm{\Phi}$ can be learned from data. These functions are often implemented by neural network layers.
To find the parameters, one needs to minimize the following objective:
\begin{equation}\label{core}
\mathcal{L} = -\mathbb{E}_{q(\bm{z} \vert \bm{x})} \big[ \log p(\bm{x}|\bm{z}) \big] + \mathcal{D}\big[ q(\bm{z}|\bm{x}) \lVert p(\bm{z}) \big].
\end{equation}
In~\eqref{core}, one can interpret the first term as the reconstruction error and the second term as a regularization constraint. The second term is the Kullback-Leibler (KL) divergence between $q(\bm{z}|\bm{x})$ and the prior $p(\bm{z}) = \mathcal{N}(\bm{0},\bm{I})$, which can be computed with a closed form formula~\cite{kingma2013auto}.
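As an illustration, the following Python sketch evaluates~\eqref{core} for a diagonal Gaussian posterior, using the closed-form KL divergence against a standard normal prior; the reconstruction term \texttt{recon\_nll} is a placeholder for $-\log p(\bm{x}|\bm{z})$ and is an assumption of the sketch.
\begin{verbatim}
# Illustrative computation of the VAE objective, assuming a diagonal
# Gaussian posterior q(z|x) = N(mu, diag(sigma^2)) and a N(0, I) prior.
import numpy as np

def kl_gaussian_std_normal(mu, sigma):
    """Closed-form KL[ N(mu, diag(sigma^2)) || N(0, I) ]."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def vae_loss(recon_nll, mu, sigma):
    # First term: reconstruction error; second term: KL regularizer.
    return recon_nll + kl_gaussian_std_normal(mu, sigma)
\end{verbatim}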
\subsection{Variational Graph Autoencoders}\label{gvae}
Variational graph autoencoders (VGAEs)~\cite{kipf2016variational} adhere to the VAE concept and utilize graph convolutional layers (GCN) for the parameterized functions $f_{\mu}, f_{\sigma}$ and $f_z$.
Given a graph $\mathcal{G} = (\mathcal{V},\mathcal{E})$ with an adjacency matrix $\bm{A} \in \mathbb{R}^{N \times N}$ and a degree matrix $\bm{D} \in \mathbb{R}^{N \times N}$, $N = \vert \mathcal{V} \vert$, a graph convolutional layer~\cite{kipf2016semi} is expressed as
\begin{equation}
f_{\text{GCN}}(\bm{X}) = \sigma \big( \tilde{\bm{D}}^{-\frac{1}{2}} \tilde{\bm{A}} \tilde{\bm{D}}^{-\frac{1}{2}} \bm{X} \bm{W} \big)
\end{equation}
\noindent where $\tilde{\bm{A}} = \bm{A} + \bm{I}_N$, $\tilde{\bm{D}}$ is the diagonal degree matrix with $\tilde{\bm{D}}_{ii} = \sum_j \tilde{\bm{A}}_{ij}$, $\bm{X} \in \mathbb{R}^{N \times T}$ is the input signal summarized in a matrix, $\bm{W} \in \mathbb{R}^{T \times D}$ is the corresponding weight matrix with $D$ being the GCN layer's dimensionality, and $\sigma$ indicates a nonlinear function. By stacking multiple GCN layers, more complex functions can be constructed. In what follows, we propose a particular architecture tailored to the air quality inference task.
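A minimal NumPy sketch of a single GCN layer is given below for illustration; in practice $\bm{W}$ is trainable and the computation is carried out by a deep learning framework.
\begin{verbatim}
# A minimal dense GCN layer; for illustration only.
import numpy as np

def gcn_layer(A, X, W, activation=np.tanh):
    """Compute sigma( D^{-1/2} (A+I) D^{-1/2} X W )."""
    N = A.shape[0]
    A_tilde = A + np.eye(N)                     # add self-loops
    d = A_tilde.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return activation(A_hat @ X @ W)
\end{verbatim}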
\subsection{The Proposed AVGAE Architecture}
The architecture of our model, which we refer to as AVGAE, is depicted in Fig.~\ref{fig:model1}.
We build a graph of $N$ nodes by considering the geodesic distance among the $N$ corresponding discretized locations on the street network. Two nodes are connected if the geodesic distance between them is smaller than a predefined threshold $\delta$, or if they belong to the same road segment. The weight of a connection is the inverse of the geodesic distance in meters computed by the Haversine formula~\cite{van2012heavenly}. Furthermore, we summarize the locations' geocoordinates in a matrix $\bm{S}$. Our model is described by the following set of equations:
\begin{align}
\bm{\mu} &= \text{GCN}_{\mu}(\bm{X, S, \Theta_1} ) \label{eq:gcn_mu} \\
\bm{\sigma} &= \text{GCN}_{\sigma}(\bm{X, S, \Theta_2} ) \label{eq:gcn_sigma} \\
\bm{Z} &\sim \mathcal{N} (\bm{\mu},\bm{\sigma}) \label{eq:z} \\
\bm{\tilde{X}} &= \text{GCN}_z(\bm{Z}, \bm{\Phi}) \label{eq:gcn_x}
\end{align}
In~\eqref{eq:gcn_mu},~\eqref{eq:gcn_sigma} and~\eqref{eq:gcn_x}, $\text{GCN}_{\mu}$, $\text{GCN}_{\sigma}$ and $\text{GCN}_z$ are functions obtained by stacking GCN layers, and $\bm{\Theta}_1$, $\bm{\Theta}_2$ and $\bm{\Phi}$ are parameters that can be learned from the data.
$\bm{S}$ is the geocoordinates matrix, which is horizontally concatenated with $\bm{X}$.
Our model utilizes two separate branches for training $\bm{\mu}$ and $\bm{\sigma}$, thereby allowing to select proper activation functions for $\bm{\mu}$ and $\bm{\sigma}$ (the selected functions are mentioned in Section~\ref{sec:experiments}).
It is worth mentioning that our model is capable of inferring values at locations that are not measured by vehicles, which are illustrated by an empty row in matrix $\bm{X}$ in Fig.~\ref{fig:model1}. This is because the proposed model captures the spatial correlation between the unobserved and observed locations through their geocoordinates and the street network's topology.
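The following sketch illustrates the forward pass of~\eqref{eq:gcn_mu}--\eqref{eq:gcn_x} with single-layer GCN branches and the reparameterization trick; the weight matrices and the pre-normalized adjacency \texttt{A\_hat} are illustrative placeholders, and the actual model stacks several GCN layers as described in Section~\ref{sec:experiments}.
\begin{verbatim}
# Sketch of the AVGAE forward pass: two encoder branches produce mu and
# sigma, z is sampled via the reparameterization trick, and one decoder
# GCN reconstructs X. Single-layer GCNs stand in for the stacked ones.
import numpy as np

rng = np.random.default_rng(0)

def avgae_forward(A_hat, X, S, W_mu, W_sigma, W_dec):
    """A_hat: pre-normalized adjacency; X: N x T data; S: N x 2 coords."""
    H = np.concatenate([X, S], axis=1)          # horizontal concatenation
    mu = A_hat @ H @ W_mu                       # GCN branch for mu
    sigma = 1.0 / (1.0 + np.exp(-(A_hat @ H @ W_sigma)))  # sigmoid branch
    eps = rng.standard_normal(mu.shape)
    Z = mu + sigma * eps                        # reparameterization trick
    X_tilde = A_hat @ Z @ W_dec                 # decoder GCN, no activation
    return X_tilde, mu, sigma
\end{verbatim}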
The loss function of our model is defined in~\eqref{eq:loss}.
We modify~\eqref{core} by using the mean absolute error (MAE) regularized by a KL divergence term.
Even though the MAE is not everywhere differentiable, we find that using its sub-gradient is sufficient for optimization with gradient descent.
The temporal dependency between measurements imposes an additional smoothness constraint:
\begin{multline}\label{eq:loss}
\mathcal{L}(\bm{X},\bm{\Theta}_1,\bm{\Theta}_2,\bm{\Phi}) = \frac{1}{\vert \Omega \vert} \sum_{(i,j) \in \Omega} \vert \tilde{\bm{X}}_{ij} - \bm{X}_{ij} \vert + \\ \beta \mathcal{D}\big[ q(\bm{z}|\bm{x}) \lVert p(\bm{z}) \big] + \gamma \sum_{(i,j)} \ \sum_{k \in \mathcal{T}(i,j)} e^{-\vert j - k \vert} (\tilde{\bm{X}}_{ij} - \tilde{\bm{X}}_{i,k})^2
\end{multline}
In~\eqref{eq:loss}, $\beta$ and $\gamma$ are positive tuning parameters and $\mathcal{T}(i,j)$ is the neighborhood of the entry $\bm{X}_{i,j}$ with respect to the temporal dimension.
The width of the neighborhood $w_{\mathcal{T}}$ is a parameter that is fine-tuned experimentally.
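For illustration, the loss~\eqref{eq:loss} can be evaluated as in the following sketch, which assumes a diagonal Gaussian posterior for the closed-form KL term and a boolean mask encoding the known entries $\Omega$; the default coefficients follow the experimental setting of Section~\ref{sec:experiments}.
\begin{verbatim}
# Sketch of the loss: masked MAE over known entries, a KL regularizer,
# and the exponentially weighted temporal smoothness term.
import numpy as np

def avgae_loss(X, X_tilde, mask, mu, sigma, beta=0.1, gamma=0.8, w=3):
    mae = np.abs(X_tilde - X)[mask].mean()      # only on known entries
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))
    smooth = 0.0
    for k in range(1, w + 1):                   # temporal neighbors j +/- k
        diff = X_tilde[:, k:] - X_tilde[:, :-k]
        smooth += np.exp(-k) * np.sum(diff**2)
    return mae + beta * kl + gamma * smooth
\end{verbatim}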
We minimize the loss function with respect to the training entries using stochastic gradient descent, where we use the reparameterization technique in~\cite{kingma2013auto}, and we deploy the dropout regularization technique to mitigate overfitting.
After training, we obtain the reconstructed data matrix $\tilde{\bm{X}}$ containing predicted values for the unknown entries.
\section{Experiments}
\label{sec:experiments}
\begin{table*}[t]
\centering
\caption{Air quality inference results.}
\label{table:classification_result}
\begin{tabular}{c | c c | c c}
\hline \hline
& \multicolumn{2}{|c|}{$\text{NO}_2$} & \multicolumn{2}{|c}{$\text{PM}_{2.5}$} \\
\hline
& MAE & RMSE & MAE & RMSE \\
\hline
\hline
Kriging linear~\cite{kitanidis1997introduction} & 18.19 & 28.43 & 3.28 & 7.98 \\
\hline
Kriging exponential~\cite{kitanidis1997introduction} & 15.86 & 25.58 & 2.89 & 7.43 \\
\hline
KNN-based collaborative filtering~\cite{koren2010factor} & 20.92 & 32.67 & 3.60 & 7.47 \\
\hline
SVD~\cite{mnih2008probabilistic} & 27.35 & 38.32 & 7.41 & 13.40 \\
\hline
NMF~\cite{luo2014efficient} & 71.67 & 82.34 & 6.75 & 13.09 \\
\hline
NMC~\cite{nguyen2018extendable} & 22.12 & 32.83 & 3.99 & 8.35 \\
\hline
RGCNN~\cite{monti2017geometric} & 48.6 & 60.11 & 6.2 & 15.4 \\
\hline \hline
AVGAE (Our method) & \textbf{14.92} & \textbf{24.33} & \textbf{2.56} & \textbf{6.42} \\
\hline \hline
\end{tabular}
\end{table*}
\subsection{The Dataset}
We rely on the City-of-Things platform~\cite{latre2016city} to obtain air quality measurements in the city of Antwerp, Belgium. The platform makes use of 24 cars equipped with mobile monitoring devices. We retrieve the measurements during May 2018 for two air pollutants, that is, $\text{NO}_2$ and $\text{PM}_{2.5}$.
As described in Section~\ref{sec:formulation}, we first apply aggregation as a data preprocessing step, where we choose $\tau = 1$ hour and $r = 100$ meters. It is worth mentioning that these parameters can be made smaller, leading to near real-time inference with a finer spatial resolution. After processing, we obtain 3630 and 4086 discrete locations for the $\text{NO}_2$ and $\text{PM}_{2.5}$ datasets, respectively. Each location is specified by the pair of latitude and longitude geocoordinates. Moreover, for each pollutant, a location is associated with a measurement vector of $\text{T} = 30 \times 24 = 720$ dimensions, which is the number of hours during the considered period. The description of the dataset is presented in Table~\ref{table:dataset}.
\begin{table}[t]
\centering
\caption{Description of the $\text{NO}_2$ and $\text{PM}_{2.5}$ datasets.
The units for $\text{NO}_2$ and $\text{PM}_{2.5}$ are parts per billion (ppb) and $\mu \text{g}/\text{m}^3$, respectively.
}
\label{table:dataset}
\begin{tabular}{ c | c | c }
\hline \hline
& $\text{NO}_2$ & $\text{PM}_{2.5}$ \\
\hline \hline
Number of locations & 3630 & 4086 \\
\hline
Duration in hours & 720 & 720 \\
\hline
Max concentration & 633.65 & 189.03 \\
\hline
Min concentration & 0.16 & 0.07 \\
\hline
Mean concentration & 85.50 & 9.83 \\
\hline
\% of known entries versus all & 0.60 & 0.56 \\
\hline \hline
\end{tabular}
\end{table}
\subsection{Experimental Setting}
To evaluate the proposed method, we randomly divide the known entries into training and test sets. That is, 90\% of the known entries is used for training and the rest is reserved for testing. We use two common evaluation metrics, namely, the root mean squared error (RMSE) and the mean absolute error (MAE). To obtain robust results, we repeat this procedure with 5 random divisions and report average results.
To create the graph, we set the distance threshold to $\delta = 200$ m. The parameters of the AVGAE are chosen experimentally: we set the learning rate to $\alpha = 0.005$, the KL divergence coefficient to $\beta = 0.1$, the temporal smoothness coefficient to $\gamma = 0.8$, the temporal neighborhood width to $w_{\mathcal{T}} = 3$ and the dropout rate to 0.4. For all GCN layers, we use the same dimensionality, that is, $D = 512$.
We use 4 GCN layers for the encoder and 1 GCN layer for the decoder. We employ ReLU to activate the GCN layers of the encoder, except for the last GCN layer of the $\bm{\sigma}$ branch, where the sigmoid function is used because $\bm{\sigma}$ should contain strictly positive entries. Because the output is unbounded, no activation function is needed for the GCN layer of the decoder.
As reference benchmarks, we have selected two well-established kriging-based models, that is, the linear and exponential models~\cite{kitanidis1997introduction}. A kriging model is applied per column of the matrix~$\bm{X}$ (corresponding to a timeslot) using the geocoordinates information in~$\bm{S}$. Furthermore, we consider various state-of-the-art matrix completion methods, including KNN-based collaborative filtering~\cite{koren2010factor}, SVD-based matrix completion~\cite{mnih2008probabilistic}, non-negative matrix factorization~\cite{luo2014efficient}, and extendable neural matrix completion~\cite{nguyen2018extendable}. These models perform completion under an assumption on~$\bm{X}$, e.g., a low-rank prior. Furthermore, we compare against the graph-based matrix completion method in~\cite{monti2017geometric}; specifically, the RGCNN model, where the graph for the row-factor matrix is the same as in our AVGAE model and the hyper-parameters are kept as in~\cite{monti2017geometric}. For the implementation, we rely on PyKrige\footnote{https://pykrige.readthedocs.io/en/latest/index.html} for the kriging models and Surprise\footnote{https://surprise.readthedocs.io/en/stable/index.html} for the reference matrix completion techniques. The implementations of~\cite{nguyen2018extendable,monti2017geometric} are available online.
All models have been trained on our dataset.
\subsection{Results and Analysis}
The air quality inference results of the different methods are shown in Table~\ref{table:classification_result}. Kriging-based methods provide good estimation accuracy, particularly the exponential model. This is because such models properly capture the spatial correlation in the air
quality measurements with respect to the geodesic distance.
On the other hand, matrix completion models assume that there are
hidden factors characterizing rows (i.e., discrete locations) and columns (i.e., timeslots).
While this assumption is appropriate for other problems such as recommendation systems, it does
not properly capture the spatio-temporal correlation in the concentration of air pollutants.
It is evident that our AVGAE model achieves the best performance for both the RMSE and MAE metrics and for both pollutants ($\text{NO}_2$ and $\text{PM}_{2.5}$).
Conversely to kriging models, AVGAE effectively captures \textit{both} the temporal and spatial correlations in the data, and leverages the underlying graph structure of the street network. Furthermore, unlike the reference matrix completion models, either graph-based or not, AVGAE\ adheres to an autoencoder model, which provides good performance in reconstruction problems.
\section{Conclusion}
\label{sec:conclusion}
Measuring the concentration of air pollutants with mobile stations is a promising approach to achieve hyperlocal air quality monitoring. The measurements collected by such mobile stations, however, have very low temporal resolution per location and there are still unmeasured locations.
We formulated the air quality inference problem in this setting as a matrix completion problem on graphs, and proposed a variational graph autoencoder model to solve it. The proposed model was experimentally shown to effectively capture the spatio-temporal correlation in the measurements, resulting in better air quality inference compared to various state-of-the-art kriging and matrix completion methods.
\bibliographystyle{IEEEbib}
\section{Introduction}
We consider the problem of crowdsourced labeling, which has diverse applications in image labeling, video annotation, and character recognition~\cite{raykar2010learning,von2008recaptcha,welinder2010multidimensional}.
Workers in the crowdsourcing system are given simple tasks and asked to provide a binary label to each assigned task.
Since workers may provide incorrect labels to some of the tasks and worker reliabilities are usually unknown, the main challenge in the crowdsourced labeling is to infer true labels from noisy answers collected from workers of unknown reliabilities.
To resolve such challenges and to design inference algorithms with provable performance guarantees, many previous works considered a simple yet meaningful error model for workers' answers. One of the most widely studied models is the single-coin Dawid-Skene model~\cite{dawid1979maximum}, where each worker is modeled by his/her own reliability level and the worker provides a correct answer to any task with probability depending on the worker's reliability level, regardless of the types of assigned tasks. For such a model, various inference algorithms were proposed to first estimate the worker reliabilities from the collected answers and to use them to infer correct labels by using expectation maximization (EM)~\cite{gao2013minimax,liu2012variational,zhou2012learning}, message passing~\cite{karger2014budget}, or spectral methods~\cite{dalvi2013aggregating, zhang2014spectral}.
However, this error model does not capture some realistic scenarios where worker's ability to provide a correct label could change depending on the types of the assigned tasks and the workers' expertise \cite{8437703,9174227,kim2020crowdsourced,shah2020permutation}.
In this work, we consider a $d$-type specialization model, which was introduced in~\cite{shah2018reducing}. This model assumes that each worker and each task is associated with a single type (among $d$ different types), and a worker provides an answer better than a random guess if the task type matches the worker type and otherwise, the worker just provides a random guess.
The inference algorithm proposed in~\cite{shah2018reducing} is composed of two stages. At the first stage, the workers are clustered based on similarity on their answers, and at the second stage the task label is estimated by first finding a cluster of the matched type and aggregating the answers only from the chosen cluster while ignoring the answers from other clusters.
In this work, we generalize the $d$-type specialization model to the case where a worker provides a correct answer with probability $q\in[1/2,p)$ even when the worker type and the task type do not match; when the types match, the answer is correct with a higher probability $p\in(q,1]$. Different from the algorithm in~\cite{shah2018reducing}, we do not throw away the answers from the clusters of unmatched types but use them with proper weights to achieve the optimal accuracy in label estimation. We propose two algorithms in this paper. Our first algorithm requires no information on the worker/task types, only the parameters $(p,q)$, and it achieves the best known performance regardless of the regime of the reliability parameters $(p,q)$ or the number of types $d$. We then propose a second algorithm that does not require even the $(p,q)$ values; instead, the parameters are estimated from the workers' answers and used to estimate the correct labels. We empirically show that our second algorithm achieves performance as good as that of the first algorithm in diverse parameter regimes.
Furthermore, we empirically demonstrate that under the generalized $d$-type specialization model our two proposed algorithms outperform the state-of-the-art inference algorithms developed for the Dawid-Skene model.
\section{Problem Formulation}\label{sec2}
In this work, we consider a $d$-type specialization model for crowdsourced labeling.
We assume that there exist $m$ binary tasks and $n$ workers. Denote the set of tasks and the set of workers by ${\mathcal T}$ and ${\mathcal W}$, respectively. Let ${\mathcal W}_z$ denote the set of workers of type $z\in[d]$. For $i \in {\mathcal T}$, let $a_i \in \lbrace -1, 1 \rbrace$ denote the true label of the $i$-th task, and let $t_i, w_j \in [d]$ denote the type of the $i$-th task and that of the $j$-th worker, respectively, where $[d]:= \{ 1, \dots, d \}$.
We assume that the type of each task and the type of each worker are uniformly distributed over $[d]$.
The set of workers assigned to task $i$ is denoted by ${\mathcal N}_i$. Let $m_{ij}$ be the $j$-th worker's answer to the task $i$. If task $i$ is not assigned to worker $j$, then $m_{ij}=0$, and if it is assigned
\begin{equation} \label{eq2.1}
{m_{ij}} =
\begin{cases}
a_i & \text{with probability $f_{ij}$}, \\
-a_i & \text{with probability $1-f_{ij}$}.
\end{cases}
\end{equation}
We assume that $m_{ij}$'s are independent for all $i,j$. The $d$-type specialization model we consider further assumes that
\begin{equation}
f_{ij}=
\begin{cases}
p,&\text{if }t_i=w_j, \\
q,&\text{o.w.}
\end{cases}
\end{equation}
where $p> q\geq 1/2$.
Different from~\cite{shah2018reducing} where the value $q$ was fixed to 1/2, here we consider a general $q\in[1/2,p)$.
For $i \in {\mathcal T}$, let $\hat a_i \in \{-1, 1\}$ denote the inferred label of the $i$-th task.
The performance metric we consider is the expected fraction of errors in the inferred labels, i.e., $\mathbb{E}[\frac{1}{m}\sum_{i=1}^m\mathbbm{1}(\hat{a}_i\neq a_i)]=\frac{1}{m}\sum_{i=1}^m\pr(\hat{a}_i\neq a_i)$.
We aim to minimize the number of queries per task while achieving
\begin{equation}\label{eqn:err}
\frac{1}{m}\sum_{i=1}^m\pr(\hat{a}_i\neq a_i)\leq\alpha_c,\quad\text{for some }\alpha_c\in(0,1).
\end{equation}
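To make the model concrete, the following Python sketch generates synthetic answers according to~\eqref{eq2.1}; all names are illustrative, and assigning a fixed number $L$ of random workers per task is one possible assignment scheme assumed here for simplicity.
\begin{verbatim}
# Sketch of the d-type specialization model: worker j answers task i
# correctly with probability p when types match and q otherwise.
import numpy as np

rng = np.random.default_rng(0)

def generate_answers(m, n, d, p, q, L):
    a = rng.choice([-1, 1], size=m)            # true binary labels
    t = rng.integers(0, d, size=m)             # task types
    w = rng.integers(0, d, size=n)             # worker types
    M = np.zeros((m, n), dtype=int)            # 0 means "not assigned"
    for i in range(m):
        workers = rng.choice(n, size=L, replace=False)
        for j in workers:
            f = p if t[i] == w[j] else q       # answer fidelity f_ij
            correct = rng.random() < f
            M[i, j] = a[i] if correct else -a[i]
    return M, a, t, w
\end{verbatim}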
\section{Performance Baselines}\label{sec3}
In this section, we first review performance baselines of previous works and outline our contributions.
\subsection{Oracle Weighted Majority Voting and Majority Voting}
As the first performance baseline, we consider a general weighted majority voting, which aggregates answers with weights to generate the label estimate.
For weighted majority voting, the decision is given by
\begin{equation}\label{eqn:wmv_estimator}
\hat{a}_i^{\textsf{WMV}}=\text{sign}\left(\sum_{j\in {\mathcal N}_i}\mu_{ij}m_{ij} \right),
\end{equation}
where $\mu_{ij}$ is the weight for the answer from the $j$-th worker to the $i$-th task.
By using Hoeffding's inequality (or Corollary 5 in~\cite{li2014error}), it can be shown that the weighted majority voting guarantees
\begin{equation}\label{eqn:wmv_P}
\pr(\hat{a}_i^{\textsf{WMV}}\neq a_i)\leq \exp\left(-\frac{\gamma^2_{\textsf{WMV}}}{2}|{\mathcal N}_i|\right)
\end{equation}
where
\begin{equation}\label{eqn:wmv_P1}
\gamma_{\textsf{WMV}}=\frac{\sum_{j\in{\mathcal N}_i} \mu_{ij}(2f_{ij}-1)}{\|\mu_{i*}\|_2 \cdot\sqrt{|{\mathcal N}_i|}}
\end{equation}
for $\mu_{i*} = ( \mu_{i1}, \dots,\mu_{i \vert {\mathcal N}_i \vert} )$.
By Cauchy-Schwarz inequality, the weight $\mu_{ij}$ that maximizes $\gamma_\textsf{WMV}$ is $\mu_{ij}\propto (2f_{ij}-1)$.
When we choose ${\mathcal N}_i \subset {\mathcal W}$ at random, effectively, $1/d$ fraction of answers are given with fidelity $f_{ij}=p$ and the rest with $f_{ij}=q$.
Thus, when $\{f_{ij}\}$ is known, i.e., when the task types $\{t_i\}$ and the worker types $\{w_j\}$ as well as the reliability parameters $(p,q)$ are known at the inference algorithm, by choosing $\mu_{ij}\propto (2f_{ij}-1)$ the oracle weighted majority voting can achieve~\eqref{eqn:wmv_P} with
$
\gamma_{\textsf{WMV}}^*=\frac{\sqrt{(2p-1)^2+(d-1)(2q-1)^2}}{\sqrt{d}}.
$
The required number of queries per task to achieve~\eqref{eqn:err} for the oracle weighted majority voting is thus
\begin{equation}\label{eqn:wmv_Ld}
L_{\sf oracle}=\frac{2d}{(2p-1)^2+(d-1)(2q-1)^2}\ln\left(\frac{1}{\alpha_c}\right).
\end{equation}
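For illustration, the oracle weighted majority voting~\eqref{eqn:wmv_estimator} with weights $\mu_{ij}\propto(2f_{ij}-1)$ can be implemented as in the following sketch, where the fidelity matrix \texttt{F} is assumed to be known; the function is a standalone sketch, not our exact implementation.
\begin{verbatim}
# Sketch of oracle weighted majority voting with mu_ij = 2 f_ij - 1.
import numpy as np

def weighted_majority_vote(M, F):
    """M: m x n answer matrix (0 = not assigned); F: m x n fidelities."""
    weights = np.where(M != 0, 2.0 * F - 1.0, 0.0)
    scores = np.sum(weights * M, axis=1)
    return np.where(scores >= 0, 1, -1)        # estimated labels
\end{verbatim}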
As another baseline, we can consider the simple majority voting that aggregates all the answers with equal weights, i.e.,
$
\hat{a}_i^{\textsf{MV}}=\text{sign}\left(\sum_{j\in {\mathcal N}_i}m_{ij} \right).
$
The majority voting gives
$
\pr(\hat{a}_i^{\textsf{MV}}\neq a_i)\leq \exp\left(-\frac{\gamma_{\textsf{MV}}^2}{2}|{\mathcal N}_i|\right)
$
where
$
\gamma_{\textsf{MV}}=\frac{((2p-1)+(d-1)(2q-1))}{{d}}.
$
To achieve the targeted recovery accuracy~\eqref{eqn:err} with the majority voting, the required number of queries per task is
\begin{equation}\label{eqn:mv_Ld}
L_{\sf mv}=\frac{2d^2}{((2p-1)+(d-1)(2q-1))^2}\ln\left(\frac{1}{\alpha_c}\right).
\end{equation}
We can easily check that $L_{\sf oracle}$ in~\eqref{eqn:wmv_Ld} is less than or equal to $L_{\sf mv}$ in~\eqref{eqn:mv_Ld}.
However, the oracle result is achievable only when the worker types and the task types, as well as the reliability parameters $(p,q)$, are all known to the inference algorithm.
\subsection{Inference Algorithm from~\cite{shah2018reducing}: Clustering and Majority Voting from the Workers of a Matched Cluster}\label{sec:prev_3}
We review the algorithm in~\cite{shah2018reducing}, proposed for the $d$-type specialization model with $p>q=1/2$.
The parameters $\zeta$, $r$, and $l$ of this algorithm can be chosen later to guarantee the recovery condition~\eqref{eqn:err}.
\medskip
\noindent\textbf{Algorithm~\cite{shah2018reducing}}: This algorithm is composed of two stages.
\begin{itemize} [leftmargin=*]
\item\textit{Stage 1 (Clustering Workers by Types):} Let ${\mathcal S}\subset {\mathcal T}$ represent randomly chosen $r$ tasks from the set ${\mathcal T}$.
Assign each task in ${\mathcal S}$ to all $n$ workers. Given the answers $m_{ij}$ for $i\in{\mathcal S}$, cluster workers \textit{sequentially}: for a worker $j\in[n]$, if there exists a cluster of workers ${\mathcal Q}\subset[j-1]$ such that for each $j'\in {\mathcal Q}$
\begin{equation}\label{eqn:cluster1}
\frac{1}{r}\sum_{i\in{\mathcal S}}\mathbbm{1}(m_{ij}=m_{ij'})>\zeta,
\end{equation}
then assign $j$ to ${\mathcal Q}$; otherwise, create a new cluster containing $j$. Let $\{{\mathcal V}_1,\dots,{\mathcal V}_c\}$ be the resulting clusters of $[n]$ workers.
For each task $i\in{\mathcal T}\backslash {\mathcal S}$ and cluster $z\in[c]$, assign task $i$ to $l$ workers sampled uniformly at random from the set ${\mathcal V}_z$. The total number of workers assigned to task $i$ is $lc$.
\item\textit{Stage 2 (Type Matching and Majority Voting):} For each task $i\in{\mathcal T}$, find a cluster of the matched type by
\begin{equation}\label{eqn:palg_typematching}
z^*(i)=\argmax_{z\in[c]}\left|\sum_{j\in{\mathcal N}_i \cap {\mathcal V}_z }m_{ij}\right|,
\end{equation}
and estimate the label for the task $i$ by the majority voting from the answers only from the set ${\mathcal V}_{z^*(i)}$:
\begin{equation}\label{eqn:palg_est}
\hat{a}_i=\text{sign}\left(\sum_{j\in {\mathcal N}_i \cap {\mathcal V}_{z^*(i)}} m_{ij}\right).
\end{equation}
\end{itemize}
\medskip
The main idea of this algorithm is to cluster workers by finding subsets of workers having similarity (larger than some threshold $\zeta$) in their answers for the initially assigned $|{\mathcal S}|=r$ tasks. After assigning the rest of the tasks ${\mathcal T}\backslash {\mathcal S}$ to total $lc$ workers from $c$ clusters, the final decision is made by the majority voting from the answers only from one cluster believed to be composed of workers having the same type as the task.
The parameters $\zeta$, $r$, and $l$ of this algorithm can be chosen to guarantee the recovery condition~\eqref{eqn:err}. We note that the choice of $\zeta$, which is $\frac{1}{2}+\frac{(p-q)^2}{d}$ in~\cite{shah2018reducing}, requires a prior knowledge of the model parameter $p,q$.
We can easily generalize the analysis of this original algorithm to a general $q\geq 1/2$ by selecting a proper choice of $\zeta$, $r$, $n$ and $l$, and can show that the required number of queries per task $\frac{1}{m}(nr+ld(m-r))$ to achieve the recovery condition~\eqref{eqn:err} can be bounded as
\begin{equation}\label{eqn:Ld_palg}
L_{\sf type}= \min \left\{ \frac{2d}{\frac{(p-q)^2}{2}+\frac{(2q-1)^2}{2}} \ln\frac{6d+3}{\alpha_c}, \frac{2d}{\frac{(p-q)^2}{2}} \ln \frac{6d}{\alpha_c} \right\}
\end{equation}
when $r=\frac{d^2}{2(p-q)^4}\ln\frac{3n(n-1)}{2\alpha_c}$, $n\geq \max\left\{8d\ln\frac{3d}{\alpha_c},L_{\sf type}\right\}$, $m\geq cn^3$, and $l = \frac{1}{2d} L_{\sf type} $ for some constant $c>0$.
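The sequential clustering of \textit{Stage 1} can be sketched as follows; \texttt{S} collects the answers to the $r$ probe tasks, and the threshold \texttt{zeta} plays the role of $\zeta$ in~\eqref{eqn:cluster1}. This sketch is illustrative and not the exact implementation of~\cite{shah2018reducing}.
\begin{verbatim}
# Sketch of Stage 1: workers are scanned sequentially and a worker
# joins an existing cluster only if its answer agreement on the r
# probe tasks exceeds the threshold zeta with every member.
import numpy as np

def sequential_clustering(S, zeta):
    """S: r x n answer matrix on probe tasks; returns worker clusters."""
    r, n = S.shape
    clusters = []                              # each cluster: worker list
    for j in range(n):
        placed = False
        for Q in clusters:
            agree = [np.mean(S[:, j] == S[:, jp]) for jp in Q]
            if all(a > zeta for a in agree):   # similarity threshold test
                Q.append(j)
                placed = True
                break
        if not placed:
            clusters.append([j])
    return clusters
\end{verbatim}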
\medskip
\begin{rem}[Our contributions]
When $q=1/2$ and $d$ is large, the clustering-based algorithm can guarantee the recovery condition~\eqref{eqn:err} with the number of queries per task scaling as $\frac{d}{(2p-1)^2}\ln \frac{d}{\alpha_c}$, whereas the majority voting requires $\frac{d^2}{(2p-1)^2}\ln\frac{1}{\alpha_c}$ queries per task. This demonstrates the benefit of using the clustering-based algorithm for $q=1/2$.
The gain comes from aggregating a selected subset of answers from a matched cluster; in contrast, even though the majority voting aggregates almost $d$ times as many answers, since $(d-1)l$ of them are just random guesses, these answers degrade the overall inference performance, especially when $d$ is large.
On the other hand, for any $q>1/2$, the clustering-based algorithm requires a much larger number of queries, $\frac{d}{(p-q)^2+(2q-1)^2}\ln \frac{d}{\alpha_c}$, compared to that of majority voting, $\frac{d^2}{((2p-1)+(d-1)(2q-1))^2}\ln\frac{1}{\alpha_c}\approx\frac{1}{(2q-1)^2}\ln\frac{1}{\alpha_c}$, since the clustering-based algorithm does not utilize the $(d-1)l$ answers from unmatched clusters even though these answers can still provide some useful information about the true task label when $q>1/2$.
Motivated by this observation, in the next section we propose two new algorithms, still based on clustering, but that aggregates the answers from all the clusters with proper weights.
In particular, our second algorithm uses a new clustering method based on semidefinite programming (SDP) \cite{ames2014guaranteed,hajek2016achieving,vinayak2016similarity,lee2020hypergraph}, which does not require the knowledge of the reliability parameters $p,q$, and we also suggest estimators $\hat{p},\hat{q}$ calculated from the clustering result, which then can be used for weighted majority voting of workers' answers.
\end{rem}
\section{Main Results}\label{sec4}
\subsection{First Algorithm: When Parameters $(p,q)$ are Known}
We first consider the case when $(p,q)$ are known so that we can use the optimal weighted majority voting after the clustering step in {\it Stage 1} of Algorithm~\cite{shah2018reducing}.
With a general $q\in[1/2,p)$, {\it Stage 2} of Algorithm~\cite{shah2018reducing} should be modified as below so that the optimal weighted majority voting can be performed.
\medskip
\noindent\textbf{Algorithm 1 (for the known $(p,q)$ case)}: This algorithm is composed of two stages. \textit{Stage 1} for worker clustering is the same as that of Algorithm \cite{shah2018reducing}, which is summarized in Section~\ref{sec:prev_3}. \textit{Stage 2} is modified as below.
\begin{itemize} [leftmargin=*]
\item\textit{Stage 2 (Type Matching and Weighted Majority Voting):} For each task $i\in{\mathcal T}$, find a cluster of the matched type $z^*(i)$ by~\eqref{eqn:palg_typematching}
and set the weights $\mu_{ij}$ for answers $m_{ij}$, $j\in{\mathcal N}_i$, by
\begin{equation}\label{eqn:weights_cluster}
\mu_{ij}=
\begin{cases}
2p-1,&\text{ for }j\in {\mathcal V}_{z^*(i)},\\
2q-1,&\text{ for }j\in {\mathcal N}_i \backslash {\mathcal V}_{z^*(i)}.
\end{cases}
\end{equation}
Estimate the label for the task $i$ by the weighted majority voting~\eqref{eqn:wmv_estimator} with weights~\eqref{eqn:weights_cluster} based on the worker clustering and the type matching; a code sketch of this stage is given right after this list.
\end{itemize}
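A compact sketch of \textit{Stage 2} above is given below; it assumes the answers to task $i$ have already been grouped by cluster, and all names are illustrative.
\begin{verbatim}
# Sketch of Stage 2 of Algorithm 1: pick the cluster whose aggregate
# answer for task i has the largest magnitude, then vote with weight
# 2p-1 inside that cluster and 2q-1 elsewhere.
import numpy as np

def stage2_label(answers_by_cluster, p, q):
    """answers_by_cluster: list of 1-D arrays with answers in {-1, +1}."""
    sums = [np.sum(ans) for ans in answers_by_cluster]
    z_star = int(np.argmax(np.abs(sums)))      # matched-type cluster
    score = 0.0
    for z, ans in enumerate(answers_by_cluster):
        weight = (2 * p - 1) if z == z_star else (2 * q - 1)
        score += weight * np.sum(ans)
    return 1 if score >= 0 else -1
\end{verbatim}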
\begin{thm}
\textit{With Algorithm 1, for any $\alpha_c \in (0,1)$, when $m \ge cn^3$ for some constant $c>0$, the recovery of task labels is guaranteed with the expected accuracy~\eqref{eqn:err}, with the number of queries per task
\begin{equation} \label{eqn:Ld_alg1}
L_{\sf Alg1} = \frac{2d}{{(p-q)^2}/{2} + \gamma_u} \ln \frac{6d+3}{\alpha_c}
\end{equation}
where
\begin{equation} \label{eqn:gammau}
\gamma_u=\frac{(2(2p-1)(2q-1)+(d-2)(2q-1)^2)^2}{2((2p-1)^2+(d-1)(2q-1)^2)}.
\end{equation}}
\end{thm}
\begin{rem} Note that Algorithm 1 guarantees the recovery condition~\eqref{eqn:err} with a reduced number $L_{\sf Alg1}$ of queries per task compared to that of {Algorithm~\cite{shah2018reducing}} in~\eqref{eqn:Ld_palg}. Especially, the gap increases as $q(<p)$ increases.
Compared to the required number~\eqref{eqn:mv_Ld} of queries for majority voting, we can see that the proposed algorithm requires the same order $\Theta \left(\ln\frac{d}{\alpha_c} \right)$ (ignoring the $\ln d$ overhead) of queries when $q>1/2$ and $d\to\infty$, while that of {Algorithm~\cite{shah2018reducing}} required $\Theta \left(d\ln \frac{d}{\alpha_c}\right)$ queries per task.
\end{rem}
\begin{proof}
With the two-stage algorithm, the workers are first clustered, and for a given task, the cluster of the matched type is inferred.
We first analyze the clustering error. For any two workers $(a,b)$ having the same type, $\pr(m_{ia}=m_{ib}|w_a=w_b)=\frac{p^2+(1-p)^2}{d}+\frac{(d-1)(q^2+(1-q)^2)}{d},
$
while for two workers of different types, $\pr(m_{ia}=m_{ib}|w_a\neq w_b)=\frac{2(pq+(1-p)(1-q))}{d}+\frac{(d-2)(q^2+(1-q)^2)}{d}.
$
By setting $\zeta$ in~\eqref{eqn:cluster1} as the mean of the two values, we can bound $\pr(\frac{1}{r}\sum_{i\in{\mathcal S}}\mathbbm{1}(m_{ia}=m_{ib})<\zeta| w_a=w_b)\leq \exp\left(-\frac{2(p-q)^4}{d^2}r\right)$ and $\pr(\frac{1}{r}\sum_{i\in{\mathcal S}}\mathbbm{1}(m_{ia}=m_{ib})\geq \zeta| w_a\neq w_b)\leq \exp\left(-\frac{2(p-q)^4}{d^2}r\right)$ by using Chernoff bound. By union bound, the clustering error is then bounded by
\begin{equation}\label{eqn:cluster_er}
\pr(\text{Clustering error})\leq { n \choose 2} \exp\left(-\frac{2(p-q)^4}{d^2}r\right).
\end{equation}
We also need to guarantee that the number of workers per type is at least $l$. Since the number of workers per type is distributed as $\text{Binomial}(n,\frac{1}{d})$, by using the Chernoff bound and the union bound,
\begin{equation}
\begin{split}\label{eqn:type_er}
\pr(\cup_{z\in [d]} \{|\mathcal{V}_z|\leq l\})
\leq d\exp\left(-\frac{1}{2}\left(1-\frac{ld}{n}\right)^2 \frac{n}{d}\right).
\end{split}
\end{equation}
Next, we bound the type matching error.
Let $S_{iz}:=\sum_{j\in\mathcal{N}_i\cap \mathcal{W}_z}\mathbbm{1}(m_{ij}=+1)$. Note that $S_{iz}$ is distributed as Binomial($|\mathcal{N}_i\cap \mathcal{W}_z|,p$) if $t_i=z$ and $a_i=1$; Binomial($|\mathcal{N}_i\cap \mathcal{W}_z|,q$) if $t_i\neq z$ and $a_i=1$; Binomial($|\mathcal{N}_i\cap \mathcal{W}_z|,1-p$) if $t_i=z$ and $a_i=-1$; and Binomial($|\mathcal{N}_i\cap \mathcal{W}_z|,1-q$) if $t_i\neq z$ and $a_i=-1$.
Therefore, if every $S_{iz}$ is concentrated within $\frac{1}{2}(p-q)l$ of its mean, then~\eqref{eqn:palg_typematching} provides the correctly matched type.
By the union bound over $z\in[d]$, the type matching error is thus bounded above by
\begin{equation}\label{eqn:type_match_err}
\pr(z^*(i)\neq t_i)\leq 2d\exp\left(-\frac{(p-q)^2l}{2}\right).
\end{equation}
We then analyze the label estimation error. When the clustering is perfect but the type matching is wrong, the weight defined in~\eqref{eqn:weights_cluster} is not equal to the desired weight $\mu_{ij} = 2 f_{ij} - 1$, and the estimation error is bounded above by the case when the weight is higher ($\mu_{ij}=2p-1$) for a cluster that is incorrectly matched to the task, and lower ($\mu_{ij}=2q-1$) for the cluster having the same type as the task, i.e., $\pr \left( \hat a_i^{\textsf{WMV}} \ne a_i \right) \le \exp \left( - \gamma_u l \right)$ with $\gamma_u$ as defined in~\eqref{eqn:gammau}. On the other hand, when the clustering and the type matching are both correct, the estimation error for $\hat a_i^{\sf WMV}$ is equal to that of the oracle weighted majority voting, $\exp (-\gamma_m l)$ for $\gamma_m = \frac{(2p-1)^2+(d-1)(2q-1)^2}{2}$. It can be shown that $\exp(-\gamma_m l) \le \exp (-((p-q)^2/2 + \gamma_u) l )$.
By combining the above analysis, the expected fraction of label errors $\mathbb{E}\left[\frac{1}{m}\sum_{i=1}^m\mathbbm{1}(\hat{a}_i\neq a_i)\right]$ is bounded above by
\begin{equation}
\begin{split}
&{ n \choose 2} \exp\left(-\frac{2(p-q)^4}{d^2}r\right)+d\exp\left(-\frac{1}{2}\left(1-\frac{ld}{n}\right)^2 \frac{n}{d}\right)\\
&\qquad\quad+(2d+1)\exp\left(-\frac{(p-q)^2l}{2}\right) \cdot \exp(-\gamma_u l).
\end{split}
\end{equation}
To limit the fraction of errors to $\alpha_c$, we can choose $r=\frac{d^2}{2(p-q)^4}\ln\frac{3n(n-1)}{2\alpha_c}$, $l= \frac{1}{(p-q)^2/2+\gamma_u}\ln\frac{6d+3}{\alpha_c}$ and $n\geq \max\left\{8d\ln\frac{3d}{\alpha_c}, \frac{2d}{(p-q)^2/2+\gamma_u}\ln\frac{6d+3}{\alpha_c}\right\}$. The total number of queries per task is $\frac{1}{m}(nr+ld(m-r))\leq ld+\frac{nr}{m}$, and the second term is dominated by the first term when $m\ge c n^3$ for some constant $c>0$. Thus, the total number of queries per task is bounded by $L_{\sf Alg1}$ in~\eqref{eqn:Ld_alg1}.
\end{proof}
\vspace{-0.3cm}
\subsection{Second Algorithm: When Parameters $(p,q)$ are Unknown}
In this section, we propose a new algorithm that does not require the knowledge of reliability parameters $(p,q)$.
For the purpose, we change both the clustering algorithm in \textit{Stage 1} and the weighted majority voting in \textit{Stage 2} of {Algorithm 1}.
\medskip
\noindent\textbf{Algorithm 2 (for the unknown $(p,q)$ case)}:
\begin{itemize}[leftmargin=*]
\item\textit{Stage 1 (Clustering Workers by Types):}
\begin{itemize}
\item Data preparation: after assigning each of $|{\mathcal S}|=r$ tasks to all $n$ workers, construct a data matrix $\boldsymbol{S}\in\{-1,1\}^{r\times n}$, and define the similarity matrix ${\boldsymbol A}={\boldsymbol S}^T{\boldsymbol S}$ while zeroing out the diagonal entries of ${\boldsymbol A}$.
\item Parameter estimation for within-cluster and cross-cluster edge densities: compute the two largest eigenvalues of ${\boldsymbol A}$. Denote them by $\lambda_1$ and $\lambda_2$. Set $\hat{p}_c=\frac{\lambda_1+(d-1)\lambda_2}{n-d}$ and $\hat{q}_c=\frac{\lambda_1-\lambda_2}{n}$.
\item Clustering Based on SDP (Algorithm 1 in~\cite{lee2020hypergraph}): With a tuning parameter $\lambda=\frac{\hat{p}_c+\hat{q}_c}{2}$, solve the SDP problem
\begin{equation}
\begin{split}
\label{eq9}
\max_{\mathbf{X} \in \mathbb{R}^{n \times n}} \ &\langle \mathbf{A} - \lambda \mathbf{1}_{n \times n}, \mathbf{X} \rangle \\
\textnormal{subject to } &\mathbf{X} \succeq \mathbf{O};\ \langle \mathbf{I}_n, \mathbf{X} \rangle = n; \\
&0 \leq \mathbf{X}_{ij} \leq 1,\ \forall i, j \in [n].
\end{split}
\end{equation}
Employ the approximate $k$-medoids clustering algorithm (Algorithm 1 in~\cite{fei2018exponential}) on the optimal solution $\hat {\boldsymbol X}_{\sf SDP}$ of SDP to extract an explicit clustering, $\{{\mathcal V}_1, \dots, {\mathcal V}_d\}$.
\end{itemize}
\item \textit{Stage 2 (Type Matching and Weighted Majority Voting):}
for each task $i\in{\mathcal T}$, find the cluster of matched type $z^*(i)$ by~\eqref{eqn:palg_typematching}.
\begin{itemize}
\item Randomly split each cluster: for each $z\in[d]$, randomly split the workers in ${\mathcal V}_z$ into ${\mathcal V}^{(1)}_z$ and ${\mathcal V}^{(2)}_z$ with probabilities $\beta$ and $1-\beta$, respectively, where $\beta>0$ is a sufficiently small probability.
Let ${\mathcal W}^{(1)}=\cup_{z=1}^d {\mathcal V}^{(1)}_z$, ${\mathcal W}^{(2)}=\cup_{z=1}^d {\mathcal V}^{(2)}_z$, and $({\mathcal V}^{(1)}_z)^c={\mathcal W}^{(1)}\backslash {\mathcal V}^{(1)}_z$ for $z\in[d]$.
\item Estimate ${p}$ and ${q}$: for $z^*(i)$ in~\eqref{eqn:palg_typematching}, define $\mathcal{M}(i):={\mathcal N}_i \cap {\mathcal V}^{(1)}_{z^*(i)} $ and $\mathcal{U}(i):={\mathcal N}_i \cap ({\mathcal V}^{(1)}_{z^*(i)})^c$, i.e., $\mathcal{M}(i)$ ($\mathcal{U}(i)$) is the set of workers in ${\mathcal W}^{(1)}$ who answered task $i$ and are believed to have the matched (unmatched) type.
Define $\hat{p}=\frac{1}{m}\sum_{i=1}^m\hat{p}_i$ and $\hat{q}=\frac{1}{m}\sum_{i=1}^m\hat{q}_i$ where
\begin{equation}
\begin{split}\nonumber
\hat{p}_i&=\max\left\{\frac{\sum\limits_{j\in\mathcal{M}(i) }\mathbbm{1}(m_{ij}=1)}{|\mathcal{M}(i)|},\frac{\sum\limits_{j\in\mathcal{M}(i) }\mathbbm{1}(m_{ij}=-1)}{|\mathcal{M}(i) |}\right\},\\
\hat{q}_i&=\max\left\{\frac{\sum\limits_{j\in\mathcal{U}(i) }\mathbbm{1}(m_{ij}=1)}{|\mathcal{U}(i)|},\frac{\sum\limits_{j\in\mathcal{U}(i) }\mathbbm{1}(m_{ij}=-1)}{|\mathcal{U}(i) |}\right\}.
\end{split}
\end{equation}
\item Set the weights $\mu_{ij}$ as in~\eqref{eqn:weights_cluster} by replacing $p$ by $\hat{p}$ and $q$ by $\hat{q}$, and estimate the label for the task $i$ by the weighted majority voting
$
\hat{a}_i^{\textsf{WMV}}=\text{sign}\left(\sum_{j\in {\mathcal N}_i \cap {\mathcal W}^{(2)}}\mu_{ij}m_{ij} \right).
$
\end{itemize}
\end{itemize}
\begin{rem}
We remark that {Algorithm 2} does not require any prior information about the reliability parameters $(p,q)$ or the task/worker types.
\textit{Stage 1} of {Algorithm 2} clusters workers by applying SDP to the similarity matrix with the tuning parameter $\lambda$ chosen from the data, and \textit{Stage 2} of {Algorithm 2} first finds a matched cluster and uses this information to obtain the estimates $(\hat{p},\hat{q})$ of the model parameters $(p,q)$, which then can be used for the weighted majority voting.
\end{rem}
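For concreteness, \textit{Stage 1} of Algorithm 2 can be sketched with an off-the-shelf convex solver as follows; the sketch uses the generic \texttt{cvxpy} interface to solve~\eqref{eq9} and omits the final approximate $k$-medoids step of~\cite{fei2018exponential}, so it is illustrative rather than our exact implementation.
\begin{verbatim}
# Sketch of the SDP relaxation in Stage 1 of Algorithm 2. The tuning
# parameter lambda is set from the spectral estimates p_c, q_c.
import numpy as np
import cvxpy as cp

def sdp_cluster(S, d):
    """S: r x n probe-answer matrix in {-1, +1}; returns X_SDP."""
    n = S.shape[1]
    A = (S.T @ S).astype(float)
    np.fill_diagonal(A, 0)                     # zero out the diagonal
    eigvals = np.sort(np.linalg.eigvalsh(A))[::-1]
    lam1, lam2 = eigvals[0], eigvals[1]
    p_c = (lam1 + (d - 1) * lam2) / (n - d)    # within-cluster estimate
    q_c = (lam1 - lam2) / n                    # cross-cluster estimate
    lam = (p_c + q_c) / 2.0                    # tuning parameter
    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.trace(X) == n, X >= 0, X <= 1]
    objective = cp.Maximize(cp.sum(cp.multiply(A - lam, X)))
    cp.Problem(objective, constraints).solve()
    return X.value                             # feed to k-medoids next
\end{verbatim}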
The performance of the clustering algorithm is guaranteed by the lemma below.
\begin{lem} \label{lem1} \textit{
Suppose the tuning parameter $\lambda$ in the SDP~\eqref{eq9} obeys the bound $ \frac{1}{4} rp_m + \frac{3}{4} rp_u \leq \lambda \leq \frac{3}{4} rp_m + \frac{1}{4} rp_u$ where $p_m:=((2p-1)^2+(d-1)(2q-1)^2)/d$ and $p_u:=(2(2p-1)(2q-1)+(d-2)(2q-1)^2)/d$. Then, there is a universal constant $c_1>0$ such that Stage 1 of Algorithm 2 achieves the strong consistency with probability at least $1-4n^{-1}$ if $r \geq c_1 \frac{d^2 (\ln n)^2}{(p_m-p_u)^2}$.}
\end{lem}
\iffalse
We next analyze \textit{Stage 2} of Algorithm 2,
assuming the perfect clustering of workers from Stage 1, i.e., ${\mathcal V}_z={\mathcal W}_z$ for all $z\in[d]$.
Consider the performance of the weighted majority voting with the estimators $\hat{p}$ and $\hat{q}$.
By applying Chernoff bound to the estimators $\hat{p}=\frac{1}{m}\sum_{i=1}^m\hat{p}_i$ and $\hat{q}=\frac{1}{m}\sum_{i=1}^m\hat{q}_i$, we can show that $|\hat{p}-\mathbb{E}[\hat{p}]|\leq\epsilon$ and $|\hat{q}-\mathbb{E}[\hat{q}]|\leq\epsilon$ for an arbitrary small $\epsilon>0$ with high probability for large enough number $m$ of tasks. Note that $\mathbb{E}[\hat{p}]=\mathbb{E}[\hat{p}_i]$ and $\mathbb{E}[\hat{q}]=\mathbb{E}[\hat{q}_i]$ for all $i\in{\mathcal T}$.
Conditioned on the correct type matching event for task $i$, we have $\mathbb{E}[\hat{p}_i|z^*(i)=t_i]=p$ and $\mathbb{E}[\hat{q}_i|z^*(i)=t_i]=q$. On the other hand, conditioned on the incorrect type matching event for task $i$, we have $\mathbb{E}[\hat{p}_i|z^*(i)\neq t_i]=q$ and $\mathbb{E}[\hat{q}_i|z^*(i)\neq t_i]=\frac{p+(d-2)q}{(d-1)}$.
When we define the probability of incorrect type matching as $\Delta:=\pr(z^*(i)\neq t_i)$, we have $p':=\mathbb{E}[\hat{p}_i]=p-\Delta(p-q)$ and $q':=\mathbb{E}[\hat{q}_i]=q+\Delta\frac{(p-q)}{d-1}.$
Here we assume that $p\neq q$ to exclude a trivial case that $p'=p$ and $q'=q$.
Note that $\mathbb{E}[\hat{p}| z^*(i)\neq t_i]=\frac{1}{m}q+\frac{m-1}{m}p'\to p'$ as $m\to\infty$ and $\mathbb{E}[\hat{q}| z^*(i)\neq t_i]=\frac{1}{m}\frac{p+(d-2)q}{(d-1)}+\frac{m-1}{m}q'\to q'$ as $m\to\infty$.
Conditioned on the incorrect type matching ($z^*(i)\neq t_i$) for the task $i$, by using~\eqref{eqn:wmv_P} (and the fact that $\{\mu_{ij}\}$ and
$\{m_{ij}: j\in{\mathcal N}_i\cap {\mathcal W}^{(2)}\}$ are independent due to the random splition of workers in Stage 2 of the algorithm) we can show that the weighted majority voting~\eqref{eqn:weights_cluster} with $\hat{p}\to p'=p-\Delta(p-q)$ and $\hat{q}\to q'=q+\Delta\frac{(p-q)}{d-1}$ guarantees that the expected error fraction is bounded above by
$\exp\left(-\left(\gamma_u+\epsilon_1\right)l\right)
$ for some $\epsilon_1>0$ depending on $(p,q,d,\Delta)$ assuming $\Delta\ll 1$. Compared to the estimation error from type matching for the known $(p,q)$ case, i.e., $\exp\left(-\gamma_u l\right)$ with $\gamma_u$ in~\eqref{eqn:gammau}, the weighted majority voting with $(\hat{p},\hat{q})$ achieves a slightly larger exponent by $\epsilon_1>0$, conditioned on incorrect type matching ($z^*(i)\neq t_i$). This can be explained from the fact that conditioned on the incorrect type matching it is better to use $(p',q')$, which satisfy $p'<p$ and $q'>q$, since this results in putting a smaller weight on the answers from the cluster that is \textit{incorrectly} believed to be a matched cluster for the task.
Since the number $ld$ of queries per task required to achieve the desired recovery accuracy is dominated by the condition on $ld$ under incorrect type matching, by following an analysis similar to that of {Theorem 1}, we can show that {Algorithm 2} achieves the targeted recovery accuracy~\eqref{eqn:err} with the number of queries per task as in~\eqref{eqn:Ld_alg1}.
\end{oproof}
\vspace{-0.3cm}
\fi
\section{Numerical Results} \label{sec5}
We provide simulation results to show that the proposed algorithms outperform other baselines in diverse parameter regimes.
In Fig.~\ref{fig:comp1}, we compare our algorithms ({Alg. 1 and 2}) with majority voting, oracle weighted majority voting, and {Alg.~\cite{shah2018reducing}} in terms of
the error fraction of inferred tasks versus the number of queries per task when $d=3$. The results are averaged over 30 Monte Carlo runs.
When $q=1/2$ (left figure), {Alg. 1} coincides with {Alg.~\cite{shah2018reducing}}, and both outperform majority voting. We can observe that {Alg. 2}, which uses the estimates $(\hat{p},\hat{q})$, achieves performance on par with that of {Alg. 1}.
When $q>1/2$ (right figure), our algorithms show the best performance.
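The qualitative behavior in Fig.~\ref{fig:comp1} can be reproduced with a short Monte Carlo loop. The sketch below is illustrative only (hypothetical parameter values, with oracle clustering and type matching assumed), not the paper's exact simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, d, n_tasks = 0.9, 0.6, 3, 1000   # hypothetical parameters

def wmv(ans, matched, p, q):
    """Weighted majority vote with log-likelihood-ratio weights."""
    w = np.where(matched, np.log(p / (1 - p)), np.log(q / (1 - q)))
    return 1 if np.sum(w * ans) >= 0 else -1

def error_fraction(l):
    """Empirical error of weighted majority voting with l*d queries per task."""
    errors = 0
    for _ in range(n_tasks):
        truth = rng.choice([-1, 1])
        # l answers from the matched cluster, l*(d-1) from the other clusters
        reliab = np.array([p] * l + [q] * (l * (d - 1)))
        ans = np.where(rng.random(reliab.size) < reliab, truth, -truth)
        errors += wmv(ans, reliab > (p + q) / 2, p, q) != truth
    return errors / n_tasks

for l in (1, 2, 4, 8):
    print(l * d, error_fraction(l))    # error fraction decays with the budget
```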
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{sim1}
\caption{Comparisons of label recovery accuracy for five different algorithms.}
\label{fig:comp1}
\vspace{-0.3cm}
\end{figure}
In Fig.~\ref{fig:comp2}, the performance of the proposed algorithm is compared with that of state-of-the-art algorithms developed for the single-coin Dawid-Skene model, which assumes that the worker reliability does not change with the task type. These algorithms perform worse than the proposed algorithm when the data is generated under the worker-type specialization model, which may reflect more realistic scenarios in diverse crowdsourcing applications.
\begin{figure}
\centering
\includegraphics[width=7cm]{sim3}
\caption{Comparison of the proposed algorithm with the state-of-the-art algorithms designed for the single-coin Dawid-Skene model.}
\label{fig:comp2}
\vspace{-0.4cm}
\end{figure}
\section{Conclusions}\label{sec6}
We considered crowdsourced labeling under a $d$-type specialization model with general reliability parameters $p> q\geq 1/2$. When the $(p,q)$ values are known but the types of tasks/workers are not, our proposed algorithm (Alg. 1) recovers binary tasks up to any given accuracy $(1-\alpha_c)\in(0,1)$ with a number of queries per task that scales as $\Theta(d\ln\frac{d}{\alpha_c})$ when $q=1/2$ and as $\Theta(\ln\frac{d}{\alpha_c})$ when $q>1/2$.
We also proposed an algorithm (Alg. 2) that requires no information about the reliability parameters nor the task/worker types, and empirically showed that it performs as well as the algorithm with known reliability parameters $(p,q)$.
\appendices
\bibliographystyle{IEEEtran}
The phase-space formulation of quantum mechanics provides a complete framework that echoes classical statistical mechanics.
Quantum states and quantum operators are described within this formulation by continuous functions of the pair of canonical variables $x$ and $p$.
These variables traditionally refer to the position and momentum observables, but are also isomorphic to the conjugate quadrature components of a mode of the electromagnetic field (we use this quantum optics nomenclature in the present paper). The conversion from quantum operators to quantum phase-space distributions is carried out via the Wigner-Weyl transform \cite{Case2008}, which maps any linear operator $\hat{A}$ into a distribution $A(x,p)$ as
\begin{equation}
A(x,p)=
\dfrac{1}{\pi\hbar}
\int
\exp\left(2ipy/\hbar\right)
\bra{x-y}
\hat{A}
\ket{x+y}
\mathrm{d}y ,
\end{equation}
where $\hbar$ denotes the Planck constant (we set $\hbar =1$ in the remainder of this paper). Accordingly, the Wigner function of a quantum state is the Wigner-Weyl transform of its density operator $\hat{\rho}$, written as $W(x,p)$. The Wigner function comes as close to a probability distribution in phase space as allowed by quantum mechanics. It indeed shares most properties of a classical probability distribution. Notably, the marginal distributions of $W(x,p)$ coincide with the probability distributions for $x$ and $p$, respectively $\rho_x(x)=\bra{x}\hat{\rho}\ket{x}$ and $\rho_p(p)=\bra{p}\hat{\rho}\ket{p}$, as it can easily be shown that $\int W(x,p)\, \mathrm{d}p=\rho_x(x)$ and $\int W(x,p)\, \mathrm{d}x=\rho_p(p)$.
Also, the expectation value of any operator $\hat{A}$ in state $\hat{\rho}$ is straightforwardly computed from its Wigner function through the overlap formula \cite{Leonhardt2010}:
\begin{equation}
\langle \hat{A}\rangle
=
\mathrm{Tr}\left[\hat{A}\, \hat{\rho}\right]
=
2\pi\iint A(x,p)\, W(x,p)\, \mathrm{d}x \, \mathrm{d}p.
\label{eq:overlap_formula}
\end{equation}
However, it is well known that the Wigner function is not a true probability distribution as it lacks positiveness \cite{Zyczkowski2004}. For example, all pure non-Gaussian states have a Wigner function that admits negative regions as a consequence of the Hudson theorem \cite{Hudson1974}. This is the price to pay to the Heisenberg uncertainty principle, which forbids the joint definition of noncommuting variables $x$ and $p$. Hence, several common functionals of probability distributions, such as the Shannon differential entropy, become in general ill defined if applied to Wigner functions.
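As an illustration of these definitions, the following sketch (our own illustrative code, using a brute-force discretization of the Wigner-Weyl transform with $\hbar=1$) evaluates the Wigner function of the first excited Fock state $\ket{1}$, whose wave function is $\psi_1(x)=\sqrt{2}\,x\,\pi^{-1/4}e^{-x^2/2}$: the transform reaches the negative value $-1/\pi$ at the origin, while the marginal $\int W(x,p)\,\mathrm{d}p$ still reproduces $\rho_x(x)=|\psi_1(x)|^2$.

```python
import numpy as np

def wigner(psi, xs, ps):
    """Discretized Wigner-Weyl transform of a pure-state wave function."""
    y = np.linspace(-6, 6, 801)
    dy = y[1] - y[0]
    W = np.empty((len(xs), len(ps)))
    for i, x in enumerate(xs):
        f = psi(x - y) * np.conj(psi(x + y))
        for j, p in enumerate(ps):
            W[i, j] = np.real(np.sum(np.exp(2j * p * y) * f)) * dy / np.pi
    return W

psi1 = lambda x: np.sqrt(2.0) * x * np.pi**-0.25 * np.exp(-x**2 / 2)  # |1>
xs = np.linspace(-4, 4, 161)         # xs[80] = 0
ps = np.linspace(-6, 6, 241)         # ps[120] = 0
W = wigner(psi1, xs, ps)
print(W[80, 120], -1 / np.pi)        # negativity at the origin: -1/pi
marg_x = W.sum(axis=1) * (ps[1] - ps[0])
print(np.max(np.abs(marg_x - psi1(xs)**2)))  # marginal recovers |psi_1(x)|^2
```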
In contrast, there exists a well-known distribution in quantum phase space that behaves as a genuine probability distribution, namely the Husimi $Q$ function \cite{Leonhardt2010}, defined as $Q(\alpha)= \bra{\alpha}\hat{\rho}\ket{\alpha} /\pi$.
It corresponds to the probability to measure state $\hat{\rho}$ in a coherent state $\ket{\alpha}$.
Remember that a coherent state $\ket{\alpha}$ is an eigenstate of the annihilation operator $\hat{a}=\left(\hat{x}+i\hat{p}\right)/\sqrt{2}$ with eigenvalue $\alpha$.
Splitting the complex parameter $\alpha$ into two real parameters $x$ and $p$ such that $\alpha=x+ip$ gives
\begin{equation}
Q(x,p) = \dfrac{1}{\pi}
\bra{x+ip}\hat{\rho}\ket{x+ip} \, .
\label{eq:def_husimi}
\end{equation}
Despite lacking the nice properties of the Wigner function such as the overlap formula \eqref{eq:overlap_formula}, the Husimi function has the advantage of being positive, hence it admits a properly defined entropy. The Shannon differential entropy of the Husimi function is indeed known as the Wehrl entropy and is defined as $h\left(Q\right)=-\iint Q(x,p)\ln Q(x,p)\, \mathrm{d}x \, \mathrm{d}p$.
This entropy is at the core of the Wehrl conjecture \cite{Wehrl1979}, later proven by Lieb \cite{Lieb1978,Lieb2002}, which states that the Wehrl entropy is lower-bounded by $\ln\pi+1$ and that the only minimizers of $h(Q)$ are the coherent states \footnote{The Wehrl conjecture is also often written as $h\left(Q\right)\ge 1$, which simply originates from a different convention. If the Husimi Q-function is defined without the $1/\pi$ prefactor but, instead, this additional prefactor is inserted in the definition of $h\left(Q\right)$, we get an additive constant $-\ln\pi$ which shifts the lower bound to 1.}.
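These statements are straightforward to check numerically. The sketch below is illustrative code; it uses the closed form of the Husimi function of the Fock state $\ket{n}$, $Q_n=e^{-|\alpha|^2}|\alpha|^{2n}/(\pi\,n!)$, which follows from Eq. \eqref{eq:def_husimi}, and confirms that the vacuum saturates the Wehrl bound $\ln\pi+1\simeq 2.1447$ while $\ket{1}$ lies strictly above it.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import factorial

def Q_fock(n):
    """Husimi function of |n>, as a function (p, x) -> Q(x, p)."""
    return lambda p, x: (np.exp(-(x*x + p*p)) * (x*x + p*p)**n
                         / (np.pi * factorial(n)))

def wehrl_entropy(Q):
    """Shannon differential entropy of a Husimi function (numerical)."""
    f = lambda p, x: 0.0 if Q(p, x) == 0.0 else -Q(p, x) * np.log(Q(p, x))
    val, _ = dblquad(f, -8, 8, -8, 8)
    return val

print(wehrl_entropy(Q_fock(0)), np.log(np.pi) + 1)  # vacuum saturates the bound
print(wehrl_entropy(Q_fock(1)))                     # strictly above ln(pi) + 1
```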
Interestingly, there is a link between the Husimi function and Wigner function of a state, as can simply be understood using the quantum optics language.
To this purpose, recall that the vacuum state $\ket{0}$ [or ground state of the harmonic oscillator $\hat{H} = \left(\hat{p}^2+\hat{x}^2\right)/2$ in natural units] admits the Wigner function $W_0\left(x,p\right)=\exp\left(-x^2-p^2\right)/\pi$.
Since a coherent state $\ket{\alpha}$ is a displaced vacuum state, its Wigner function is then $W_\alpha(x,p)=W_0\left(x',p'\right)$, where $x'=x-\sqrt{2}\, \mathrm{Re}(\alpha)$ and $p'=p-\sqrt{2}\, \mathrm{Im}(\alpha)$.
Using this, the Husimi function can be expressed from the overlap formula \eqref{eq:overlap_formula} as:
\begin{eqnarray}
\hspace{-0.5cm} Q(x,p) &=& \dfrac{1}{\pi} \mathrm{Tr}\left[ \ket{x+ip}\!\bra{x+ip} \, \hat{\rho}\right] \nonumber \\
&=& 2\iint
W_0\left(\tilde{x}-\sqrt{2}x,\tilde{p}-\sqrt{2}p\right)
W(\tilde{x},\tilde{p}) \,
\mathrm{d}\tilde{x} \,
\mathrm{d}\tilde{p}.
\end{eqnarray}
Thus, it appears that $Q$ is a convolution between $W$ and $W_0$, with a rescaling factor of $\sqrt{2}$.
In the language of random variables (and provided $W$ is non-negative), we could say that if $(\tilde{x},\tilde{p})$ is distributed according to $W$ and $(x_0,p_0)$ is distributed according to $W_0$, then $(x,p)$ is distributed according to $Q$, with
\begin{equation}
x = \left(\tilde{x}-x_0\right)/\sqrt{2} \quad \textrm{and} \quad p = \left(\tilde{p}-p_0\right)/\sqrt{2} .
\end{equation}
This is a familiar relation in quantum optics, describing the action of a beam splitter of transmittance $\eta=1/2$ onto the state $\hat{\rho}$ and the vacuum state. Defining $\hat{\sigma}$ as the reduced state of the corresponding output of the beam splitter, as shown in Fig. \ref{fig:bbs_vacuum},
we conclude that the Wigner function of $\hat{\sigma}$ is precisely the Husimi function of $\hat{\rho}$, namely,
\begin{equation}
W_{\hat{\sigma}}(x,p) =Q_{\hat{\rho}}(x,p),
\label{eq:fundamental}
\end{equation}
where
\begin{equation}
\hat{\sigma}=
\mathrm{Tr}_2\left[\hat{U}_{\frac{1}{2}}\left(\hat{\rho}\otimes\ket{0}\bra{0}\right)\hat{U}_\frac{1}{2}^\dagger\right] .
\label{eq:sigma_output_bbs_vacuum}
\end{equation}
Here $\hat{U}_{\frac{1}{2}}$ denotes the beam-splitter unitary of transmittance $\eta=1/2$, while $\mathrm{Tr}_2$ denotes a reduced trace over one of the modes, say the second mode.
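Equation \eqref{eq:fundamental} is readily verified numerically from the convolution expression above. In the illustrative sketch below we take $\hat{\rho}=\ket{1}\!\bra{1}$, whose Wigner function is $W_1(x,p)=(2x^2+2p^2-1)\,e^{-x^2-p^2}/\pi$, and recover its Husimi function $Q_1(x,p)=(x^2+p^2)\,e^{-x^2-p^2}/\pi$ at an arbitrary phase-space point:

```python
import numpy as np
from scipy.integrate import dblquad

W0 = lambda x, p: np.exp(-x*x - p*p) / np.pi                        # vacuum
W1 = lambda x, p: (2*(x*x + p*p) - 1) * np.exp(-x*x - p*p) / np.pi  # |1>

def husimi_from_convolution(x, p):
    """Right-hand side of the convolution formula for Q(x, p)."""
    f = lambda pt, xt: 2 * W0(xt - np.sqrt(2)*x, pt - np.sqrt(2)*p) * W1(xt, pt)
    val, _ = dblquad(f, -8, 8, -8, 8)
    return val

x, p = 1.0, 0.5
print(husimi_from_convolution(x, p))                 # ~ 0.1140
print((x*x + p*p) * np.exp(-(x*x + p*p)) / np.pi)    # Q_1(x, p): same value
```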
From Eq. \eqref{eq:fundamental}, it appears that the entropy of the Wigner function of $\hat{\sigma}$ is nothing else but the Wehrl entropy of $\hat{\rho}$ in this particular setup.
A natural question then arises: can we give an intrinsic meaning to the entropy of a Wigner function, independently of this particular setup?
\begin{figure}
\includegraphics[width=5cm]{pictures/Fig1_husimi.png}
\caption{
Reduced output state $\hat{\sigma}$ of a balanced beam splitter (of transmittance $\eta=1/2$) when the input state is $\hat{\rho}$, as described in Eq. \eqref{eq:sigma_output_bbs_vacuum}.
The Wigner function of $\hat{\sigma}$ coincides with the Husimi $Q$ function of $\hat{\rho}$; hence it is positive. Consequently, the Wigner entropy of $\hat{\sigma}$ is equal to the Wehrl entropy of $\hat{\rho}$.}
\label{fig:bbs_vacuum}
\end{figure}
In this paper, we will answer in the affirmative. First, let us notice that the setup of Fig. \ref{fig:bbs_vacuum} ensures that the output state $\hat{\sigma}$ always has a positive Wigner function (see Appendix \ref{apd:bbs_wigner_positive}). In general, we will denote the quantum states admitting a positive Wigner function [i.e., states such that $W(x,p)\ge 0$, $\forall x,p$] as \textit{Wigner-positive} states. For such states, it is possible to compute the Shannon differential entropy of their Wigner function. We make the leap and define the \textit{Wigner entropy} of any Wigner-positive state $\hat{\rho}$ as
\begin{equation}
h\left(W\right) = -\iint W(x,p) \, \ln W(x,p) \, \mathrm{d}x \, \mathrm{d}p
\label{eq-def-Wigner-entropy}
\end{equation}
where
\begin{equation}
W(x,p) = \frac{1}{\pi} \int \exp\left(2ipy\right) \bra{x-y} \hat{\rho} \ket{x+y} \mathrm{d}y
\label{eq-def-Wigner-function}
\end{equation}
is the Wigner function of $\hat{\rho}$. We argue that, although it is limited to Wigner-positive states, the Wigner entropy is a natural measure in order to characterize quantum uncertainty in phase space: it bears information about the uncertainty of the marginal distributions of the $x$ and $p$ variables as well as their correlations in phase space. In contrast with the Wehrl entropy, it is not the classical entropy of the outcome of a specific measurement, namely, a joint $(x,p)$ measurement (called heterodyne detection or eight-port homodyne detection in quantum optics). Of course, in the special case where a Wigner-positive state can be prepared using the setup of Fig. \ref{fig:bbs_vacuum}, its Wigner entropy can be viewed simply as the Wehrl entropy of the corresponding input state, but the definition goes further and the Wigner entropy remains relevant for Wigner-positive states that \textit{cannot} be built in this way.
The Wigner entropy $h(W)$ enjoys interesting properties. First, unlike the Wehrl entropy $h(Q)$, it is invariant under symplectic transformations (displacement, rotation, and squeezing) in phase space. Such transformations, which are ubiquitous in quantum optics, correspond to the set of all Gaussian unitaries in state space. We stress that a sensible measure of phase-space uncertainty must remain invariant under symplectic transformations since these are also area-preserving transformations in phase space. In contrast, $h(Q)$ is greater for squeezed states than for coherent states. As it can be understood from Fig. \ref{fig:bbs_vacuum}, this preference simply originates from the fact that one input of the balanced beam splitter is itself a coherent state. Second, the Wigner entropy $h(W)$ can be related to the entropy of the marginal distributions $h\left(\rho_x\right)$ and $h\left(\rho_p\right)$, but also encompasses the $x$-$p$ correlations. Shannon information theory establishes a relation between the entropy of a joint distribution and its marginal entropies, namely, $h(x,p)=h(x)+h(p)-I$, where $I\ge 0$ is the mutual information \cite{Cover1991}. Applied to the Wigner entropy, this gives the inequality $h(W)\leq h\left(\rho_x\right)+h\left(\rho_p\right)$.
This means that a lower bound on the Wigner entropy implies in turn a lower bound on the sum of the marginal entropies.
In the light of these considerations, we introduce a conjecture on the Wigner entropy, which resembles the Wehrl conjecture. As anticipated in \cite{Hertz2017}, we conjecture that the Wigner entropy of any Wigner-positive state $\hat{\rho}$ satisfies
\begin{equation}
h\left(W\right)\geq\ln\pi+1.
\label{eq:wig_conj}
\end{equation}
As we will show, this bound is reached by all Gaussian pure states, which appears consistent with the Hudson theorem \cite{Hudson1974}.
It implies (but is stronger than) the entropic uncertainty relation of Białynicki-Birula and Mycielski \cite{Bialynicki1975}, namely, $h\left(\rho_x\right)+h\left(\rho_p\right)\geq\ln\pi+1$.
Importantly, conjecture \eqref{eq:wig_conj} also implies the Wehrl conjecture since we have shown that the Husimi function of any state $\hat {\rho}$ is the Wigner function of some Wigner-positive state $\hat {\sigma}$ in a particular setup (see Fig. \ref{fig:bbs_vacuum}). However, the converse is not true as there exist Wigner-positive states whose Wigner function cannot be written as the Husimi function of a physical state (an example will be shown in Sec. \ref{sec:results}).
The paper is organized as follows. In Sec. \ref{sec:wig_entropy}, we start by recalling some basics of the symplectic formalism and then define the Wigner entropy of a Wigner-positive state as a distinctive information-theoretical measure of its uncertainty in phase space. In Sec. \ref{sec:wig_pos}, we discuss the characterization of the set of Wigner-positive states and focus on the particular subset of phase-invariant Wigner-positive states. Then, in Sec. \ref{sec:results}, we turn to the main conjecture and provide a proof for some special case of phase-invariant Wigner-positive states, namely the passive states. Finally, we conclude in Sec. \ref{sec:conclusion} and provide an example application of the Wigner entropy, namely the Wigner entropy-power inequality. Further, in Appendix~\ref{sect-wigner-renyi}, we extend the Wigner entropy and define the Wigner-R\'enyi entropy of Wigner-positive states. We also discuss a natural extension of the conjectured lower bound. In Appendix~\ref{apd:bbs_wigner_positive}, we present a quantum-optics-inspired method for generating a large variety of Wigner-positive states with a balanced beam splitter, extending on Fig.~\ref{fig:bbs_vacuum}. Appendix~\ref{apd:mixture_2phot} is devoted to the detailed analysis of the set of Wigner-positive states when considering the Fock space restricted to two photons as this provides a helpful illustration of our results. Finally, Appendix~\ref{apd:formula_extremal_states} provides more details on the derivation of the formula [Eq. \eqref{eq:extremal_states_formula}] at the heart of our proof.
\section{Wigner entropy of a state}
\label{sec:wig_entropy}
In this paper, we restrict our considerations to a single bosonic mode (one harmonic oscillator) for simplicity, although the definition of the Wigner entropy and the corresponding conjecture should extend to the multidimensional case. Let us briefly review the symplectic formalism for one bosonic mode. Let $\mathbf{\hat{x}} = (\hat{x}, \hat{p})^\intercal$ be the vector of quadrature operators (or position and momentum canonical operators) satisfying $[\hat{x}_j,\hat{x}_k]= i \, \Omega_{jk}$, with the matrix
\begin{equation}
\mathbf{\Omega} =\begin{pmatrix}
0 & 1
\\
-1 & 0
\end{pmatrix}
\end{equation}
being the symplectic form. The coherence vector (also called the displacement vector) of a state $\hat{\rho}$ is defined as
\begin{equation}
\mathbf{c} = \langle \mathbf{\hat{x}} \rangle \coloneq \mathrm{Tr}( \mathbf{\hat{x}} \, \hat{\rho}),
\end{equation}
where $\langle \cdot \rangle$ stands for the expectation value in state $\hat{\rho}$, while the covariance matrix $\mathbf{\Gamma}$ of state $\hat{\rho}$ is defined as
\begin{equation}
\Gamma_{jk}= \frac{1}{2} \langle \{ \hat{x}_j - \langle \hat{x}_j \rangle , \hat{x}_k - \langle \hat{x}_k \rangle \} \rangle
\end{equation}
where $\{ \cdot,\cdot \}$ stands for the anticommutator. With this convention, the covariance matrix of the vacuum state is $\mathrm{diag}(1/2,1/2)$, consistently with the purity formula used below.
The set of Gaussian states contains those for which the Wigner function $W(x,p)$ is Gaussian; hence these states are completely characterized by their first- and second-order moments $\mathbf{c}$ and $\mathbf{\Gamma}$. The set of Gaussian unitaries in state space is isomorphic to the set of symplectic transformations in phase space. Formally, a symplectic transformation is an affine map on the space of quadrature operators which is defined by a symplectic matrix $\mathbf{S}$ and a displacement vector $\mathbf{d}$, namely,
\begin{equation}
\mathbf{\hat{x}} \to \mathbf{S}\mathbf{\hat{x}}+\mathbf{d}.
\end{equation}
The symplectic matrix $\mathbf{S}$ is a real matrix that must preserve the symplectic form, that is, $\mathbf{S} \mathbf{\Omega} \mathbf{S}^\intercal = \mathbf{\Omega}$, which implies in particular that $\det\mathbf{S}=1$. The displacement vector $\mathbf{d}$ is an arbitrary real vector. The first- and second-order moments of a state $\hat{\rho}$ evolve under such a symplectic transformation as
\begin{equation}
\mathbf{c} \to \mathbf{S} \mathbf{c} +\mathbf{d} \, , \qquad \mathbf{\Gamma} \to \mathbf{S} \mathbf{\Gamma} \mathbf{S}^\intercal \, .
\end{equation}
In the special case of Gaussian states, this completely characterizes the evolution of the state under the Gaussian unitary.
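A minimal numerical check of these properties (illustrative code; the matrices below describe single-mode squeezing and phase rotation) is the following:

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])          # symplectic form

def squeeze(r):
    return np.diag([np.exp(-r), np.exp(r)])

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

S = rotate(0.3) @ squeeze(0.7)                        # a symplectic matrix
print(np.allclose(S @ Omega @ S.T, Omega))            # S Omega S^T = Omega
print(np.isclose(np.linalg.det(S), 1.0))              # det S = 1

Gamma = 0.5 * np.eye(2)                               # vacuum covariance matrix
Gamma_out = S @ Gamma @ S.T                           # evolution under S
print(np.linalg.det(Gamma_out))                       # det Gamma preserved: 1/4
```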
The core of this paper is the definition of an information-theoretical measure of uncertainty in phase space, which we call the Wigner entropy $h(W)$, where $h(\cdot)$ denotes the Shannon differential entropy functional and $W(x,p)$ is the Wigner function of $\hat{\rho}$ [see Eqs. \eqref{eq-def-Wigner-entropy} and \eqref{eq-def-Wigner-function}]. As already mentioned, it only applies to Wigner-positive states since, otherwise, the definition of the entropy entails the logarithm of a negative number. We note it as a functional of $W$ but, of course, it is eventually a functional of the state $\hat{\rho}$ since $W$ itself depends on $\hat{\rho}$.
In contrast with the Shannon entropy of a discrete variable, the Shannon differential entropy of a continuous variable does not have an absolute meaning (it depends on the scale of the variable) and it becomes negative if the probability distribution is highly peaked \cite{Cover1991}. However, when applied to a Wigner function, a natural scale is provided here by the area $\hbar$ of a unit cell in phase space. Hence, the Wigner entropy has a meaning \textit{per se} and it is legitimate to conjecture a lower bound, namely, Eq.~\eqref{eq:wig_conj}, when setting $\hbar=1$. Further, it is natural to extend on this and consider a lower bound on the differential R\'enyi entropy of the Wigner function of any Wigner-positive state, a quantity that we define as the Wigner-R\'enyi entropy (see Appendix \ref{sect-wigner-renyi}).
The Wigner entropy $h(W)$ has the nice property to be invariant under symplectic transformations. Consider the symplectic transformation $\mathbf{\hat{x}} \to \mathbf{\hat{x}'} = \mathbf{S}\mathbf{\hat{x}}+\mathbf{d}$ and let us denote as $W$ and $W'$ the Wigner function of the input and output states, respectively. The change of variables corresponding to this transformation gives
\begin{equation}
W'(x',p') = \frac{ W(x,p)} {|\det\mathbf{S}|},
\end{equation}
which indeed implies that
\begin{eqnarray}
h(W') &=& -\iint W'(x',p') \, \ln W'(x',p') \, \mathrm{d}x' \, \mathrm{d}p' \nonumber \\
&=& -\iint W(x,p) \ln \left( \frac{ W(x,p)} {|\det\mathbf{S}|} \right) \, \mathrm{d}x \, \mathrm{d}p \nonumber \\
&=& h(W) + \ln |\det\mathbf{S}| \nonumber \\
&=& h(W),
\end{eqnarray}
where we have used the fact that $W$ is normalized and the fact that $\mathbf{S}$ is a symplectic matrix ($\det\mathbf{S}=1$).
Note that this invariance can also be understood as a sole consequence of the fact that symplectic transformations conserve areas in phase space since $\det\mathbf{S}=1$. Indeed, for any functional $F$, we have
\begin{eqnarray}
\lefteqn{ \iint F\big( W'(x',p') \big) \, \mathrm{d}x' \, \mathrm{d}p' } \hspace{1cm} \nonumber \\
&&= \iint F \left( \frac{ W(x,p)} {|\det\mathbf{S}|} \right) \, |\det\mathbf{S}| \, \mathrm{d}x \, \mathrm{d}p \nonumber \\
&&= \iint F\big( W(x,p) \big) \, \mathrm{d}x \, \mathrm{d}p.
\end{eqnarray}
The special case of Gaussian states is very easy to deal with. A straightforward calculation shows that the Wigner entropy of a Gaussian state $\hat{\rho}$ is given by
\begin{equation}
h(W)= \ln \left( 2 \pi \sqrt{\det\mathbf{\Gamma}} \right) + 1 = \ln ( \pi / \mu ) + 1,
\end{equation}
where $\mu=\mathrm{Tr} \hat{\rho}^2 = 1/ (2 \sqrt{\det\mathbf{\Gamma}}) \le 1$ stands for the purity of the state.
All Gaussian states that are connected with a symplectic transformation obviously conserve their purity since $\det\mathbf{\Gamma'} = \det (\mathbf{S} \mathbf{\Gamma} \mathbf{S}^\intercal) = \det\mathbf{\Gamma}$, which confirms that their Wigner entropy is invariant. The lowest value of $h(W)$ among Gaussian states is then reached for pure states ($\mu=1$) and is given by $\ln \pi + 1$, as expected. This is the value of the Wigner entropy of all coherent states and squeezed states (regardless the squeezing parameter, squeezing orientation, and coherence vector). Accordingly, the Gaussian pure states would be the minimum-Wigner-uncertainty states. The difficult task remains, however, to prove that non-Gaussian Wigner-positive states cannot violate this lower bound (see Sec. \ref{sec:results}).
Provided this conjecture is valid, the Wigner function of any Wigner-positive state can be classically simulated from the Wigner function of the vacuum state (or any other Gaussian pure state). More precisely, information theory tells us that the difference $\Delta = h(W)- \ln \pi - 1$ can be viewed as the number of independent equiprobable random bits that are needed, on average, to generate deterministically one random $(x,p)$ instance drawn from the Wigner function of state $\hat{\rho}$ from one random $(x,p)$ instance drawn from the Wigner function of the vacuum state (or any Gaussian pure state). Of course, this result holds in the asymptotic limit only, that is, around $N\times \Delta$ bits of extra randomness are needed for converting $N$ random instances of $(x,p)\sim W_0$ into $N$ random instances of $(x,p)\sim W$ by deterministic means when $N\to \infty$.
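As a numerical sanity check of the Gaussian formula (illustrative code), one can integrate the Wigner function of a thermal state with mean photon number $\bar{n}$, for which $\mathbf{\Gamma}$ is $(\bar{n}+\frac{1}{2})$ times the identity and $\mu=1/(2\bar{n}+1)$:

```python
import numpy as np
from scipy.integrate import dblquad

nbar = 0.5                           # mean photon number of a thermal state
g = nbar + 0.5                       # Gamma = g * identity, det(Gamma) = g^2
W = lambda p, x: np.exp(-(x*x + p*p) / (2*g)) / (2*np.pi*g)

hW, _ = dblquad(lambda p, x: -W(p, x) * np.log(W(p, x)), -10, 10, -10, 10)
mu = 1.0 / (2.0 * g)                 # purity, = 1/(2*sqrt(det Gamma))
print(hW)                            # numerical Wigner entropy
print(np.log(2*np.pi*g) + 1, np.log(np.pi/mu) + 1)   # both match hW
```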
\section{Wigner-positive states}
\label{sec:wig_pos}
As explained in Sec. \ref{sec:wig_entropy}, the Wigner entropy naturally appears as an information-theoretic measure of uncertainty in phase space, but is only properly defined for positive Wigner functions. For this reason, we devote this section to the quantum states with positive Wigner functions, which we call \textit{Wigner-positive} states. Note that Wigner positivity is a particular case of $\eta$-positivity for $\eta=0$ \cite{Narcowich1989, Brocker1995}.
Quantum Wigner-positive states of a single mode are described by a Wigner function $W(x,p)$ that respects the condition
\begin{equation}
W(x,p)\geq 0
\qquad
\forall x,p.
\end{equation}
Restricting to pure states, the set of Wigner-positive states is well known: the Hudson theorem establishes that Gaussian pure states are the only pure quantum states with a positive Wigner function \cite{Hudson1974}. When it comes to mixed states, however, the situation becomes more difficult since the mixing of states enables one to build non-Gaussian Wigner-positive states. The characterization of the set of Wigner-positive mixed states has been attempted \cite{Brocker1995,Mandilara2009}, but the resulting picture is somehow complex.
Just like writing a necessary and sufficient condition for a Wigner function to correspond to a positive-semidefinite density operator is a hard task, it appears cumbersome to express a necessary and sufficient condition for a density operator to be associated with a positive Wigner function.
On a more positive note, the set of Wigner-positive states is convex since a mixture of Wigner-positive states is itself Wigner positive. Taking advantage of this property, we may focus on the extremal states of the convex set, as pictured in Fig. \ref{fig:convex_set_extremal_points}. These are the states that cannot be obtained as a mixture of other states of the set. Conversely, any state of the set can be generated as a mixture of these extremal states. This brings a simplification in the proof of the main conjecture, namely, expressing a lower bound on the Wigner entropy of an arbitrary Wigner-positive state (see Sec. \ref{sec:results}). Indeed, the Shannon entropy being concave, the entropy of a mixture is lower-bounded by the entropy of its components, that is,
\begin{equation}
h(p_1 W_1+ p_2 W_2)\geq p_1 \, h(W_1) + p_2 \, h(W_2),
\end{equation}
where $p_1$ and $p_2$ are positive reals such that $p_1+p_2=1$. Hence, it is sufficient to prove the lower bound on $h(W)$ for the extremal Wigner-positive states in order to have a proof over the full set.
\begin{figure}
\includegraphics[width=3cm]{pictures/convex_set.png}
\caption{Schematic view of a convex set. The black and red points form all together the boundary of the convex set, while the red points are the extremal points on this boundary (note the existence of isolated extremal points as well as of a continuum of extremal points).}
\label{fig:convex_set_extremal_points}
\end{figure}
\subsection*{Several classes of Wigner-positive states}
To be more specific, we define several sets of Wigner-positive quantum states, which will be useful in the rest of this paper. As we will see, conjecture \eqref{eq:wig_conj} is trivially verified for some of them, while it remains hard to prove for others.
\begin{itemize}
\item $\mathcal{Q}$ :
\textit{Physical quantum states}
\\
It is the convex set of all single-mode quantum states.
Their density operator $\hat{\rho}$ satisfies the three physicality conditions: Hermiticity, positive semidefiniteness and unit trace.
Of course, they can have partly negative Wigner functions.
\item $\mathcal{Q}_+$ :
\textit{Wigner-positive quantum states}
\\
It is the subset of states in $\mathcal{Q}$ that have positive Wigner functions.
It is a convex set. All states within this set have a well-defined Wigner entropy and are the subject of conjecture \eqref{eq:wig_conj}.
\item $\mathcal{G}$ :
\textit{Gaussian states}
\\
It is the subset of states in $\mathcal{Q}_+$ that have a Gaussian Wigner function. It does not form a convex set since the mixture of Gaussian states does not need to be Gaussian, so we refer to its convex hull as $\mathcal{G}_c$.
\item $\mathcal{C}$ :
\textit{Classical states}
\\
According to Glauber's definition, classical states are mixtures of coherent states.
They are characterized by a positive Glauber-Sudarshan $P$ function.
By definition, $\mathcal{C}$ is a convex set and $\mathcal{C}\subset\mathcal{G}_c$ since coherent states are Gaussian states.
\end{itemize}
The extremal states of $\mathcal{C}$ and $\mathcal{G}_c$ are respectively coherent states and Gaussian pure states.
For these two sets, conjecture \eqref{eq:wig_conj} is trivially verified since the Wigner entropy of Gaussian pure states is precisely $\ln\pi+1$ and since the entropy is concave.
Unfortunately, the convex closure of Gaussian states $\mathcal{G}_c$ covers only a small fraction of the set of Wigner-positive states $\mathcal{Q}_+$.
As evidence of this, we construct a wider set of Wigner-positive states by exploiting a technique relying on a balanced beam splitter (hence the name $\mathcal{B}$ for this set).
\begin{itemize}
\item $\mathcal{B}$ :
\textit{Beam-splitter states}
\\
These are the states $\hat{\sigma}$ resulting from the setup depicted in Fig. \ref{fig:bbs_schema}. More precisely, a beam-splitter state $\hat{\sigma}$ denotes the reduced output state of a beam splitter with transmittance $\eta=1/2$ fed by a tensor product of two arbitrary states $\hat{\rho}_A$ and $\hat{\rho}_B$,
\begin{equation}
\hat{\sigma} =
\mathrm{Tr}_2\left[
\hat{U}_{\frac{1}{2}}
\left(
\hat{\rho}_A\otimes\hat{\rho}_B
\right)
\hat{U}_{\frac{1}{2}}^{\dagger}
\right] .
\label{eq:bbs_wig_positive}
\end{equation}
We show in Appendix \ref{apd:bbs_wigner_positive} that state $\hat{\sigma}$ always possesses a positive Wigner function, regardless of $\hat{\rho}_A$ and $\hat{\rho}_B$. The sole condition is that the input state is a tensor product and the beam splitter is balanced ($\eta=1/2$).
\end{itemize}
\begin{figure}
\includegraphics[width=5cm]{pictures/Fig3_beamsplitter_states.png}
\caption{Beam-splitter state ${\hat\sigma}$ obtained at the output of a balanced beam splitter (of transmittance $\eta=1/2$). If the input is an arbitrary product state $\hat{\rho}_A\otimes\hat{\rho}_B$, then the reduced state of the output ${\hat\sigma}$ is guaranteed to be Wigner positive. This generalizes Fig. \ref{fig:bbs_vacuum}, where $\hat{\rho}_B=\ket{0}\!\bra{0}$. The set of beam-splitter states is denoted as $\mathcal{B}$. The convex hull of these states, denoted as $\mathcal{B}_c$, is obtained by sending any separable state (i.e., a mixture of product states) into a balanced beam splitter and tracing over one of the output modes. The whole set $\mathcal{B}_c$ is strictly included in the set of Wigner-positive states $\mathcal{Q}_+$. }
\label{fig:bbs_schema}
\end{figure}
It can be shown with a simple argument that the set of Gaussian states $\mathcal{G}$ is a subset of $\mathcal{B}$.
Indeed, it is well known that the product of two identical copies of a Gaussian state $\hat{\gamma}$ is invariant under the action of a beam splitter (assuming the coherence vector vanishes \cite{Weedbrook2012}). We have the identity $\hat{U}_{\eta}\left(\hat{\gamma}\otimes\hat{\gamma}\right)\hat{U}_{\eta}^\dagger = \hat{\gamma}\otimes\hat{\gamma}$, where $\hat{\gamma}$ is any single-mode Gaussian state and $\hat{U}_\eta$ is the unitary of a beam splitter with transmittance $\eta$. One can then easily reconstruct the set of Gaussian states with the above setup, so it follows that $\mathcal{G}\subset\mathcal{B}$. Note that it is easy to build beam-splitter states as in Fig. \ref{fig:bbs_schema} that are not Gaussian states; hence this is a strict inclusion relation. The analog relation also applies to the respective convex hull of these sets, namely, $\mathcal{G}_c\subset\mathcal{B}_c$. Unfortunately, the set $\mathcal{B}_c$ does not coincide with $\mathcal{Q}_+$ as we will see that there exist Wigner-positive states that do not belong to $\mathcal{B}_c$ (see, e.g., the dark blue region in Fig.~\ref{fig:wig_pos_2}). In summary, we have the following chain of strict inclusion relations:
\begin{equation}
\mathcal{Q}\supset\mathcal{Q}_+\supset\mathcal{B}_c\supset\mathcal{G}_c\supset\mathcal{C} \,
\end{equation}
as pictured in Fig. \ref{fig:quantum_sets}.
\subsection*{Phase-invariant states in $\mathcal{Q}_+$}
As it appears, the set $\mathcal{Q}_+$ of Wigner-positive states remains hard to encompass and characterize efficiently. Therefore, in order to make a concrete step towards the proof of conjecture \eqref{eq:wig_conj}, we restrict our attention in this paper to a class of quantum states known as phase-invariant states. Phase-invariant states have a Wigner function that is invariant under rotation, so they are fully characterized by their radial Wigner function. Such states have the advantage of being easily characterized in state space as they can be written as mixtures of Fock states, which are eigenstates of the harmonic oscillator.
The wave function and the Wigner function of the $n^{\mathrm{th}}$ Fock state (starting at $n=0$ for vacuum) are the following:
\begin{align}
\psi_n(x) &= \pi^{-\frac{1}{4}}2^{-\frac{n}{2}}\left(n!\right)^{-\frac{1}{2}}H_n(x)\exp\left(-\frac{x^2}{2}\right),
\label{eq:wave_function_fock}
\\
W_n(x,p) &= \frac{1}{\pi}(-1)^n L_n\left(2x^2+2p^2\right)\exp\left(-x^2-p^2\right),
\label{eq:wigner_function_fock}
\end{align}
where $H_n$ and $L_n$ are respectively the $n^{\mathrm{th}}$ Hermite and Laguerre polynomials. A phase-invariant state is thus expressed as the mixture
\begin{equation}
\hat{\rho} = \sum_{k=0}^{\infty} p_k \ket{k}\bra{k}
\end{equation}
with $\ket{k}$ denoting the $k^{\mathrm{th}}$ Fock state, so that it is fully described by the probability vector $\mathbf{p}\in\mathbb{R}^{\mathbb{N}}$, with components $p_k$. In order to be an acceptable probability distribution, $\mathbf{p}$ must satisfy the physicality conditions
\begin{equation}
p_k\geq 0\quad\forall k,
\qquad
\sum\limits_{k=0}^{\infty}p_k = 1.
\label{eq:physicality}
\end{equation}
We call $\mathbb{S}$ the restriction of $\mathbb{R}^{\mathbb{N}}$ satisfying the physicality conditions \eqref{eq:physicality}.
Any vector $\mathbf{p}$ that belongs to $\mathbb{S}$ corresponds to a unique phase-invariant state in $\mathcal{Q}$.
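For numerical work with phase-invariant states, Eqs. \eqref{eq:wave_function_fock} and \eqref{eq:wigner_function_fock} translate directly into a few lines of Python (illustrative code using SciPy's Hermite and Laguerre evaluators):

```python
import numpy as np
from scipy.special import eval_hermite, eval_laguerre, factorial

def psi(n, x):
    """Wave function of the n-th Fock state."""
    norm = np.pi**-0.25 * 2.0**(-n/2) / np.sqrt(factorial(n))
    return norm * eval_hermite(n, x) * np.exp(-x*x/2)

def W(n, x, p):
    """Wigner function of the n-th Fock state."""
    return ((-1)**n * eval_laguerre(n, 2*(x*x + p*p))
            * np.exp(-(x*x + p*p)) / np.pi)

print(W(1, 0, 0), W(2, 0, 0))        # (-1)^n / pi at the origin
x = np.linspace(-6, 6, 2001)
print(np.sum(psi(3, x)**2) * (x[1] - x[0]))   # normalization check: ~ 1
```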
\begin{figure}
\includegraphics[width=7cm]{pictures/Fig4_quantum_sets.png}
\caption{Pictorial representation of the various sets considered here. The full set of quantum states is denoted as $\mathcal{Q}$, while the set of Wigner-positive states is denoted as $\mathcal{Q}_+$. Then, $\mathcal{B}_c$ stands for the convex hull of the set $\mathcal{B}$ of beam-splitter states, while $\mathcal{G}_c$ stands for the convex hull of the set $\mathcal{G}$ of Gaussian states. Further, $\mathcal{C}$ stands for the set of classical states. Within all these sets, we distinguish the states that are phase invariant, which are characterized by a probability vector $\mathbf{p}\in\mathbb{S}$. For states in $\mathcal{Q}_+$, the vector $\mathbf{p}\in\mathbb{S}_+$, while for states in $\mathcal{B}_c$, the vector $\mathbf{p}\in\mathbb{S}_{\mathrm{b}}$. To be rigorous, we note that it is unknown whether the phase-invariant restriction of $\mathcal{B}_c$ might also contain some states such that $\mathbf{p}\notin\mathbb{S}_{\mathrm{b}}$. We have rigorously proven this is not the case for states up to two photons only (see below). Note also that the areas of all the above sets should not be understood quantitatively as they are arbitrary and only meant here to illustrate the chain of inclusion. }
\label{fig:quantum_sets}
\end{figure}
Now, we turn to the phase-invariant states in $\mathcal{Q}_+$. In order to check that the phase-invariant state that is characterized by a vector $\mathbf{p}\in\mathbb{S}$ is Wigner-positive, we need to verify that the corresponding mixture of Fock states has a positive Wigner function everywhere in phase space. This is done by using Eq. \eqref{eq:wigner_function_fock}, so that the Wigner-positivity condition on $\mathbf{p}$ reads as
\begin{equation}
\sum\limits_{k=0}^{\infty}
p_k \, (-1)^k \, L_k(t)
\geq 0\qquad\forall t\geq 0,
\label{eq:wigner_positivity}
\end{equation}
where we define $t=2x^2+2p^2$.
Let us also define the usual radial parameter $r=\sqrt{x^2+p^2}$, so that each value of $t$ corresponds to a specific value of $r$ through the relation $t=2r^2$.
When condition \eqref{eq:wigner_positivity} is fulfilled for some $t$, the Wigner function is non-negative at $r=\sqrt{t/2}$.
We call $\mathbb{S}_+$ the restriction of $\mathbb{S}$ satisfying the Wigner-positivity conditions \eqref{eq:wigner_positivity}, so that any vector $\mathbf{p}$ in $\mathbb{S}_+$ is associated with a unique phase-invariant Wigner-positive state in $\mathcal{Q}_+$.
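In practice, condition \eqref{eq:wigner_positivity} can be tested on a finite grid of $t$, as in the following sketch (illustrative code; the grid bounds and tolerance are our choices, adequate for probability vectors with rapidly decaying tails). For instance, $\mathbf{p}=(\frac{1}{2},\frac{1}{2},0,\dots)$ gives $\frac{1}{2}-\frac{1}{2}L_1(t)=\frac{t}{2}\geq 0$: a state sitting exactly on the boundary of $\mathbb{S}_+$.

```python
import numpy as np
from scipy.special import eval_laguerre

def is_wigner_positive(p_vec, t_max=60.0, n_t=6001, tol=-1e-12):
    """Grid test of sum_k p_k (-1)^k L_k(t) >= 0 for all t >= 0."""
    t = np.linspace(0.0, t_max, n_t)
    f = sum(pk * (-1)**k * eval_laguerre(k, t) for k, pk in enumerate(p_vec))
    return bool(np.min(f) >= tol)

print(is_wigner_positive([0.5, 0.5]))   # (|0><0| + |1><1|)/2: True (boundary)
print(is_wigner_positive([0.4, 0.6]))   # p_1 > 1/2, so W(0) < 0: False
```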
The characterization of $\mathbb{S}_+$ can be carried out as follows. Each value of $t$ in Eq. \eqref{eq:wigner_positivity} gives the equation of a hyperplane dividing $\mathbb{S}$ in two halves [$\mathbf{p}$ must be located on one side of the hyperplane to guarantee that $W(r)\ge 0$ for the corresponding $r$]. Two hyperplanes associated respectively with $t$ and $t+\mathrm{d}t$ intersect in a (lower-dimensional) hyperplane which is at the boundary of the convex set $\mathbb{S}_+$.
When $t$ goes from $0$ to $\infty$, the collection of all these intersections forms a locus of points which determines the curved boundary of $\mathbb{S}_+$.
Mathematically, the condition that a point $\mathbf{p}\in\mathbb{S}$ belongs to the curved boundary of $\mathbb{S}_+$ is equivalent to the following condition:
\begin{equation}
\exists t \quad\text{such that}\quad
\begin{cases}
\sum\limits_{k=0}^{\infty} p_k \, (-1)^k \, L_k(t) = 0
\\[1.2em]
\sum\limits_{k=0}^{\infty} p_k \, (-1)^k \, \dfrac{\mathrm{d}}{\mathrm{d}t}L_k(t) = 0
\end{cases}
\label{eq:boundary_nonflat}
\end{equation}
Note that since $\mathbb{S}_+$ is convex, all the points in its curved boundary are extremal points.
However, other isolated extremal points may exist, as illustrated in Fig. \ref{fig:convex_set_extremal_points}.
\begin{figure}
\includegraphics[width=6cm]{pictures/Fig4_sigma_mn.png}
\caption{Beam-splitter state ${\hat\sigma}(m,n)$ obtained at the output of a balanced beam splitter (of transmittance $\eta=\frac{1}{2}$) that is fed with Fock states of $m$ and $n$ photons. The states ${\hat\sigma}(m,n)$ are Wigner-positive phase-invariant states; hence they belong to the set $\mathbb{S}_+$.}
\label{fig:sigma-states}
\end{figure}
\subsection*{Phase-invariant beam-splitter states in $\mathcal{B}$}
The above considerations reflect the fact that characterizing the set of phase-invariant Wigner-positive states (associated with $\mathbf{p}\in\mathbb{S}_+$) remains complex. For this reason, we consider a subset of states that are built by using a balanced beam splitter, following the same idea as for the construction of set $\mathcal{B}$ but injecting phase-invariant Fock states at the input. As pictured in Fig. \ref{fig:sigma-states}, we define the beam-splitter state $\hat{\sigma}(m,n)$ as the reduced output state of a balanced beam splitter fed by $m$ and $n$ photons at its two inputs, that is,
\begin{equation}
\hat{\sigma}(m,n) =
\mathrm{Tr}_{2}
\left[
\hat{U}_{\frac{1}{2}}
\left(
\ket{m}\bra{m}
\otimes
\ket{n}\bra{n}
\right)
\hat{U}_{\frac{1}{2}}^\dagger
\right].
\label{eq:def_sigma_bs}
\end{equation}
Thus, any state $\hat{\sigma}(m,n)$ is Wigner positive and phase invariant. It is a mixture of Fock states with mixture coefficients given in Appendix \ref{apd:bbs_wigner_positive}. We denote as $\mathbb{S}_{\mathrm{b}}$ the set of probability vectors $\mathbf{p}$ corresponding to all mixtures of states $\hat{\sigma}(m,n)$. It is clear that $\mathbb{S}_{\mathrm{b}} \subset \mathbb{S}_{+}\subset \mathbb{S}$, as depicted in Fig. \ref{fig:quantum_sets} and discussed below.
Interestingly, the Wigner function associated with any state $\hat{\sigma}(m,n)$ happens to have a minimum value that reaches precisely zero [except for $\hat{\sigma}(0,0)$, which is simply the vacuum state]. In fact, it is shown in Appendix \ref{apd:bbs_wigner_positive} that whenever $m\neq n$, the Wigner function of $\hat{\sigma}(m,n)$ always cancels at the origin in phase space. This suggests that the states $\hat{\sigma}(m,n)$ are the extremal states of the set of phase-invariant Wigner-positive states (those associated with $\mathbb{S}_{+}$). However, as we will show in the following example, the situation is more tricky as this set also admits other extremal states that are not of the form $\hat{\sigma}(m,n)$. Hence, we will see that $\mathbb{S}_{\mathrm{b}} \subset \mathbb{S}_{+}$ is a strict inclusion and there exist phase-invariant Wigner-positive states that cannot be written as mixtures of beam-splitter states $\hat{\sigma}(m,n)$.
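One way to see this vanishing at the origin is via the overlap formula \eqref{eq:overlap_formula} combined with the convolution picture of Fig. \ref{fig:bbs_schema}: since $W_{\hat{\sigma}(m,n)}(0,0)=2\iint W_m W_n\,\mathrm{d}x\,\mathrm{d}p=\delta_{mn}/\pi$, the Wigner function of $\hat{\sigma}(m,n)$ is zero at the origin whenever $m\neq n$. A numerical confirmation (illustrative code):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def W_fock(n, r):
    """Radial Wigner function of the n-th Fock state."""
    return (-1)**n * eval_laguerre(n, 2*r*r) * np.exp(-r*r) / np.pi

def w_sigma_origin(m, n):
    """W_{sigma(m,n)}(0,0) = 2 * integral of W_m * W_n over the plane."""
    val, _ = quad(lambda r: 2 * W_fock(m, r) * W_fock(n, r) * 2*np.pi*r, 0, 12)
    return val

print(w_sigma_origin(2, 2) * np.pi)   # ~ 1, i.e., delta_{mn}/pi for m = n
print(w_sigma_origin(2, 0))           # ~ 0: vanishes at the origin for m != n
```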
\subsection*{Example : restriction to two photons}
\begin{figure}
\includegraphics[width=7cm]{pictures/Fig6_mix2photons.png}
\caption{Two-dimensional representation of $\mathbb{S}^2$ (large white triangle) and $\mathbb{S}^{2}_{+}$ (blue zone, including dark and light blue). The points $a$, $b$, $c$, and $d$ correspond to the beam-splitter states whose convex closure yields $\mathbb{S}^{2}_{\mathrm{b}}$, represented as the light blue zone. The dark blue zone stands for the subset of phase-invariant Wigner-positive states that cannot be expressed as mixtures of beam-splitter states. As discussed in Sec.~\ref{sec:results}, the triangle $a$-$b$-$e$ encompasses the set of passive states while the triangle $a$-$b$-$d$ encompasses the states whose Wigner function coincides with the Husimi $Q$ function of a state.}
\label{fig:wig_pos_2}
\end{figure}
Let us denote by $\mathbb{S}^n$ and $\mathbb{S}^{n}_{+}$ the restriction of respectively $\mathbb{S}$ and $\mathbb{S}_{+}$ that have components $p_k = 0$ for $k>n$.
As an example, let us consider the set $\mathbb{S}^2$, which corresponds to mixtures of Fock states up to $n=2$, that is,
\begin{equation}
\hat{\rho} =
(1-p_1-p_2)\ket{0}\bra{0}+
p_1\ket{1}\bra{1}+
p_2\ket{2}\bra{2}
\label{eq:rho_mixt_2fock}
\end{equation}
with $p_1,p_2\ge 0$ and $p_1+p_2\le 1$.
We are interested in the Wigner-positive subset of $\mathbb{S}^2$, namely, $\mathbb{S}^2_+$.
Restricting ourselves to $n=2$ makes it possible to represent $\mathbb{S}^2_+$ in a two-dimensional plane with coordinates $p_1$ and $p_2$ (see Fig. \ref{fig:wig_pos_2}).
The mathematical description of $\mathbb{S}^2_+$ was also given in \cite{Brocker1995}, but we analyze it here from a physical perspective, through the prism of quantum optics.
Since the beam splitter conserves the total photon number, we know that only the states $\hat{\sigma}(m,n)$ such that $m+n\leq 2$ belong to $\mathbb{S}^2_+$.
These states are expressed as
\begin{equation}
\begin{split}
\hat{\sigma}_a &\equiv \hat{\sigma}(0,0) = \hspace{0.9em}\ket{0}\bra{0}
\\
\hat{\sigma}_b &\equiv \hat{\sigma}(1,0) =
\frac{1}{2}\ket{0}\bra{0}+\frac{1}{2}\ket{1}\bra{1}
\\
\hat{\sigma}_c &\equiv \hat{\sigma}(1,1) =
\frac{1}{2}\ket{0}\bra{0}+\frac{1}{2}\ket{2}\bra{2}
\\
\hat{\sigma}_d &\equiv \hat{\sigma}(2,0) =
\frac{1}{4}\ket{0}\bra{0}+\frac{1}{2}\ket{1}\bra{1}+\frac{1}{4}\ket{2}\bra{2}
\end{split}
\end{equation}
and their corresponding Wigner functions are displayed in Figs. \ref{fig:wigner_sigma_states} and \ref{fig:wigner_sigma_states_radial}. We observe that the minimum value of the Wigner functions always reaches zero (except for the vacuum state ${\hat \sigma}_a$), which reflects that these are extremal states of the set of Wigner-positive phase-invariant states (associated with $\mathbb{S}^2_+$).
This is confirmed in Fig. \ref{fig:wig_pos_2}, where the four beam-splitter states are represented by points $a$, $b$, $c$, and $d$: they are indeed extremal points of the convex set $\mathbb{S}^2_{+}$, which appears as the blue zone (including light and dark blue). However, as we will see, they are not the only extremal points of $\mathbb{S}^2_{+}$.
The complete characterization of $\mathbb{S}^2_+$ can be done by using the Wigner-positivity conditions \eqref{eq:wigner_positivity} and \eqref{eq:boundary_nonflat}. The derivation is done in Appendix \ref{apd:mixture_2phot} and leads to the following conditions on $p_1$ and $p_2$:
\begin{equation}
\begin{cases}
p_1\leq \dfrac{1}{2}
\\[1em]
p_2\leq \dfrac{1}{4}+\dfrac{1}{4}\sqrt{1-4p_1^2}
\end{cases}
\label{eq:domain_S2plus}
\end{equation}
Any state in the form \eqref{eq:rho_mixt_2fock} is Wigner positive if and only if its components $p_1$ and $p_2$ satisfy conditions \eqref{eq:domain_S2plus}.
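Conditions \eqref{eq:domain_S2plus} are easily cross-checked against a direct minimization of the radial Wigner function, $W(t)\propto\left[(1-p_1-p_2)-p_1 L_1(t)+p_2 L_2(t)\right]e^{-t/2}$ with $L_1(t)=1-t$ and $L_2(t)=1-2t+t^2/2$ (illustrative code; the grid and the boundary-exclusion margin are our choices):

```python
import numpy as np

def min_radial(p1, p2):
    """Minimum over t >= 0 of the radial Wigner function (up to 1/pi)."""
    t = np.linspace(0.0, 40.0, 4001)
    poly = (1 - p1 - p2) - p1*(1 - t) + p2*(1 - 2*t + t*t/2)
    return np.min(poly * np.exp(-t/2))

def inside_domain(p1, p2):
    return p1 <= 0.5 and p2 <= 0.25 + 0.25*np.sqrt(max(1 - 4*p1*p1, 0.0))

rng = np.random.default_rng(2)
checked = 0
for _ in range(2000):
    p1, p2 = rng.random(), rng.random()
    m = min_radial(p1, p2)
    if p1 + p2 <= 1 and abs(m) > 1e-3:   # skip points too close to the boundary
        assert inside_domain(p1, p2) == (m > 0)
        checked += 1
print(checked, "random mixtures consistent with conditions on (p1, p2)")
```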
\begin{figure}
\includegraphics[width=7.5cm]{pictures/Fig5_Wigner_functions.png}
\caption{Wigner functions of the four beam-splitter states $\hat{\sigma}_a$, $\hat{\sigma}_b$, $\hat{\sigma}_c$, and $\hat{\sigma}_d$, denoted respectively as $W_a(x,p)$, $W_b(x,p)$, $W_c(x,p)$, and $W_d(x,p)$. These four states are Wigner-positive phase-invariant states, but, in addition, their Wigner functions touch precisely zero (except for the vacuum state $\hat{\sigma}_a$) as is more evident from Fig. \ref{fig:wigner_sigma_states_radial}. This fact reflects that these are extremal states of the set of Wigner-positive phase-invariant states (associated with $\mathbb{S}^2_+$). }
\label{fig:wigner_sigma_states}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{pictures/Fig5bis_Wigner_radial.png}
\caption{Radial Wigner functions of the four phase-invariant beam-splitter states $\hat{\sigma}_a$, $\hat{\sigma}_b$, $\hat{\sigma}_c$, and $\hat{\sigma}_d$, denoted respectively as $W_a(r)$, $W_b(r)$, $W_c(r)$, and $W_d(r)$. As advertised, the minimum value of these Wigner functions touches zero [except for the vacuum state $\hat{\sigma}_a$, for which $W_a(r)\to 0$ as $r\to\infty$]. }
\label{fig:wigner_sigma_states_radial}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{pictures/Fig7_tangent.png}
\caption{Expressing the positivity of the radial Wigner function $W(r)$ for increasing values of $r$ corresponds to a continuum of straight lines, which are all tangents of ellipse \eqref{eq:ellipse}. As an illustration, we plot as dashed lines the tangents associated with $W(r)=0$ for $r=0$, $r=1/\sqrt{2}$, $r=1$, $r=\sqrt{2}$, and $r\to \infty$. For instance, expressing $W(0)\ge 0$ implies $p_1\le 1/2$, while expressing $W(1)\ge 0$ implies $p_2\le 1/2$. For $r>1$, the positivity condition becomes redundant, and, at the limit $r\to\infty$, it gives $p_2\ge 0$, which is equivalent to the physicality condition.}
\label{fig:rotating-tangent}
\end{figure}
Several observations can be made from Fig. \ref{fig:wig_pos_2}. First, state ${\hat \sigma}_a$, which coincides with the vacuum state, is a trivial extremal state of $\mathbb{S}^2_+$ even if its Wigner function does not reach zero. As already mentioned, ${\hat \sigma}_b$, ${\hat \sigma}_c$, and ${\hat \sigma}_d$ are other extremal states of $\mathbb{S}^2_+$, as witnessed by the fact that their Wigner function vanishes at some location in phase space. The convex set $\mathbb{S}^2_+$ has three facets.
Two of them correspond to the physicality conditions \eqref{eq:physicality}, i.e., $p_1\geq 0$ and $p_2\geq 0$.
The third one corresponds to condition \eqref{eq:wigner_positivity} where we have set $t=0$, which gives us $p_0+p_2\geq 1/2$ or equivalently $p_1\leq 1/2$. Note that the points in these facets belong to the boundary of $\mathbb{S}^2_+$ but are not extremal.
This can be easily understood for the third facet corresponding in Fig. \ref{fig:wig_pos_2} to the segment connecting ${\hat \sigma}_b$ to ${\hat \sigma}_d$, which both admit a zero of their Wigner function at the same location (i.e., the origin). Note also that, in general, the set $\mathbb{S_+}$ always has a facet corresponding to
\begin{equation}
\sum\limits_{k\ \mathrm{even}} p_k= \dfrac{1}{2},
\label{eq:boundary_flat}
\end{equation}
which expresses the positivity of the Wigner function at $r=0$ (recall that $t = 2r^2$).
As pictured in Fig. \ref{fig:rotating-tangent}, expressing the positivity of the radial Wigner function for increasing values of $r$ yields a continuum of straight lines, whose locus of intersecting points forms an ellipse centered at $(0,1/4)$, namely,
\begin{equation}
\left(\frac{p_1}{1/2}\right)^2 + \left(\frac{p_2-1/4}{1/4}\right)^2 = 1 \, .
\label{eq:ellipse}
\end{equation}
The resulting constraints on $p_1$ and $p_2$ for all $r$'s are summarized by Eq. \eqref{eq:domain_S2plus}.
Overall, Fig. \ref{fig:wig_pos_2} shows that the subspace $\mathbb{S}^{2}_{\mathrm{b}}$, which is spanned by the extremal states ${\hat \sigma}_a$, ${\hat \sigma}_b$, ${\hat \sigma}_c$, and ${\hat \sigma}_d$, covers a large region of $\mathbb{S}^{2}_{+}$ (indicated in light blue) so any point in this region can thus be generated by a convex mixture of them. However, $\mathbb{S}^{2}_{+}$ also includes a small region (indicated in dark blue) that is located under the ellipse defined by Eq. \eqref{eq:ellipse} and above the straight line $c$-$d$. This region is thus outside the polytope $\mathbb{S}^{2}_{\mathrm{b}}$ generated by the $\hat{\sigma}$ states, which confirms that $\mathbb{S}^{2}_{+}$ also admits a continuum of extremal points along this ellipse.
Note finally that it is not trivial that $\mathbb{S}^2_{\mathrm{b}}$ coincides with the two-photon phase-invariant restriction of $\mathcal{B}_c$ (i.e., the phase-invariant states with up to two photons within the convex hull of beam-splitter states of $\mathcal{B}$). Indeed, $\mathbb{S}^2_\mathrm{b}$ is defined as the convex hull of beam-splitter states built from (phase-invariant) Fock states in Fig.~\ref{fig:sigma-states} with up to two photons, that is, the convex hull of $\lbrace \hat{\sigma}_a,\hat{\sigma}_b,\hat{\sigma}_c,\hat{\sigma}_d\rbrace$.
Since it is possible to create beam-splitter states in the setup of Fig.~\ref{fig:bbs_schema} that are phase-invariant starting from two input states that are not phase invariant (e.g., two squeezed states with orthogonal squeezing produce a thermal state), it might \textit{a~priori} be possible to build states within the two-photon phase-invariant restriction of $\mathcal{B}_c$ that do not belong to $\mathbb{S}^2_\mathrm{b}$. However, a simple argument convinces us otherwise. First, notice that we may restrict to pure input states without loss of generality. Since the output is a mixture with up to two photons, we must consider input states that are either in the form
\begin{equation}
\ket{\psi}=\ket{0} \otimes \left(a_0\ket{0}+a_1\ket{1}+a_2\ket{2}\right),
\label{eq_first-case}
\end{equation}
or
\begin{equation}
\ket{\psi}=\left(b_0\ket{0}+b_1\ket{1}\right)\otimes\left(c_0\ket{0}+c_1\ket{1}\right).
\label{eq_second-case}
\end{equation}
In case \eqref{eq_first-case}, the first input is the vacuum, which is phase invariant, so that the output state is phase invariant only if the second input state is also phase invariant. This is easy to understand given that the output Wigner function is a (scaled) convolution of the two input Wigner functions. In case \eqref{eq_second-case}, a straightforward calculation shows us that the output state is phase invariant only if at least one of the coefficients $b_0$, $b_1$, $c_0$, or $c_1$ vanishes.
This implies that one of the two input states must be phase invariant, which in turn implies that the other input must be phase invariant as well in order to ensure the phase invariance of the output. As a result, the two-photon phase-invariant restriction of $\mathcal{B}_c$ coincides with the set $\mathbb{S}^2_\mathrm{b}$ (it is unknown, however, whether this remains true for more than two photons, that is, whether the phase-invariant restriction of $\mathcal{B}_c$ corresponds to the set $\mathbb{S}_\mathrm{b}$ in general). Since we have found phase-invariant Wigner-positive states outside $\mathbb{S}^{2}_{\mathrm{b}}$, this confirms that $\mathcal{B}_c$ is strictly included in $\mathcal{Q}_+$, as advertised earlier (see Fig. \ref{fig:quantum_sets}).
\section{Conjectured lower bound}
\label{sec:results}
The conjectured lower bound on the Wigner entropy reads
\begin{equation}
h\left(W_{\! \hat{\rho}}\right)\geq\ln\pi+1 \qquad \forall \hat{\rho}\in \mathcal{Q}_+
\label{eq:to-be-proven}
\end{equation}
Note that an extended version for the Wigner-R\'enyi entropy is also discussed in Appendix \ref{sect-wigner-renyi}. We wish to prove Eq. \eqref{eq:to-be-proven} for all Wigner-positive states in $ \mathcal{Q}_+$ but it appeared in Sec. \ref{sec:wig_pos} that this set is hard to characterize. In this section, we will present the central result of our paper, namely, a proof of this conjecture over a subset of phase-invariant Wigner-positive states with thermodynamical relevance that are called \textit{passive states}.
As a side result, we will exhibit an unexpectedly simple relation between the extremal passive states and the beam-splitter states $\hat{\sigma}(m,n)$, which guides us to test the conjecture over the much larger set $\mathbb{S}_{\mathrm{b}}$ of phase-invariant Wigner-positive states.
Before doing so, let us discuss the implication of the conjecture in the restricted subspace of phase-invariant Wigner-positive states associated with $\mathbb{S}^{2}_{+}$. First, as a consequence of Eq. \eqref{eq:fundamental}, we know that the Wigner functions of ${\hat \sigma}_a$, ${\hat \sigma}_b$, and ${\hat \sigma}_d$ coincide respectively with the Husimi $Q$ functions of $\ket{0}$, $\ket{1}$, and $\ket{2}$. Hence, the (proven) Wehrl conjecture applied to $\ket{0}$, $\ket{1}$, and $\ket{2}$ implies that the Wigner entropy of ${\hat \sigma}_a$, ${\hat \sigma}_b$, and ${\hat \sigma}_d$ is indeed lower bounded by $\ln\pi+1$. Further, this naturally extends to the subspace spanned by ${\hat \sigma}_a$, ${\hat \sigma}_b$, and ${\hat \sigma}_d$, corresponding to the triangle $a$-$b$-$d$ in Fig. \ref{fig:wig_pos_2}. Thus, the states that are located in the blue region but do not belong to this triangle are Wigner-positive states whose Wigner function cannot be expressed as a physical $Q$ function. This underlies the fact that conjecture \eqref{eq:wig_conj} is stronger than the Wehrl conjecture. In particular, let us prove that the Wigner function of state ${\hat \sigma}_c$ cannot be written as the $Q$ function of a physical state. Reasoning by contradiction, assume there exists an input state $\hat{\rho}$ in the setup of Fig.~\ref{fig:bbs_vacuum} such that the resulting output state is ${\hat \sigma}_c$. First, since the transformation on $\hat{\rho}$ is a (scaled) convolution with a (Gaussian) rotation-invariant function, the Wigner function of $\hat{\rho}$ must necessarily be rotation invariant in order to get the rotation-invariant Wigner function associated with ${\hat \sigma}_c$. Thus, $\hat{\rho}$ must be phase invariant, that is, a mixture of Fock states. Second, since ${\hat \sigma}_c$ does not contain more than two photons, it is clear that $\hat{\rho}$ can only be a mixture of $\ket{0}$, $\ket{1}$, and $\ket{2}$. However, the output state corresponding to any such mixture precisely belongs to the triangle $a$-$b$-$d$, which does not contain $c$. Hence, no such state $\hat{\rho}$ exists.
\subsection*{Passive states}
Passive states are defined in quantum thermodynamics as the states from which no work can be extracted through unitary operations \cite{Pusz1978}.
If $\hat{\rho}_p$ is the density operator of a passive state, then the following relation holds true for any unitary operator $\hat{U}$:
\begin{equation}
\mathrm{Tr}
\left[
\hat{\rho}_p \hat{H}
\right]
\leq
\mathrm{Tr}
\left[
\hat{U}\hat{\rho}_p \hat{U}^\dagger \hat{H}
\right],
\end{equation}
where $\hat{H}$ is the Hamiltonian of the system. Passive states are useless in the sense that it is not possible to decrease their energy by applying a unitary (since a unitary conserves the entropy, any work extraction should come with a decrease of internal energy).
It can be shown that passive states are decreasing mixtures of energy eigenstates, in the sense that if the eigenstates are labeled with increasing energy, the associated probabilities must be non-increasing \cite{Lenard1978}. In the present paper, we are considering eigenstates of the harmonic oscillator, which are the Fock states.
A passive state is then written as
\begin{equation}
\hat{\rho}_p = \sum_{k=0}^{\infty} p_k \ket{k}\bra{k} \qquad \mathrm{with~} p_k\geq p_{k+1} .
\label{eq:def_passive_states_in_Fock}
\end{equation}
Among the set of passive states, \textit{extremal} passive states are defined as equiprobable mixtures of the low-energy eigenstates up to some threshold.
We refer to the $n^{\mathrm{th}}$ extremal passive state as $\hat{\varepsilon}_n$ and to its Wigner function as $E_n$.
They are defined as follows:
\begin{equation}
\begin{split}
\hat{\varepsilon}_n &= \dfrac{1}{n+1}\sum\limits_{k=0}^{n}\ket{k}\bra{k}
\\[1em]
E_n(x,p)&=\dfrac{1}{n+1}\sum\limits_{k=0}^{n}
W_k(x,p)
\label{eq:def_extremal_states}
\end{split}
\end{equation}
The states $\hat{\varepsilon}_n$ are called extremal \footnote{Note that the extremal passive states $\hat{\varepsilon}_n$ are very different from the extremal Wigner-positive states, such as the states $\hat{\sigma}(m,n)$: the extremality of $\hat{\varepsilon}_n$ pertains to a set of states that are defined in state space, while the extremality of $\hat{\sigma}(m,n)$ pertains to a distinct set of states that are defined in phase space.} in the sense that any passive state $\hat{\rho}_p $ can be expressed as a unique convex mixture of extremal passive states, namely,
\begin{equation}
\hat{\rho}_p = \sum\limits_{k=0}^{\infty} e_k \, \hat{\varepsilon}_k \, ,
\label{eq:passive-in-terms-of-extremal-passive}
\end{equation}
where $p_k$ and $e_k$ are probabilities that are linked through the relation $e_k = (k+1)\left(p_k-p_{k+1}\right)$.
In the special case of phase-invariant states within the restricted space with up to two photons, the set of passive states corresponds to the triangle $a$-$b$-$e$ in Fig. \ref{fig:wig_pos_2}, which belongs to $\mathbb{S}^2_+$ as expected. Of course, $a$, $b$, and $e$ correspond respectively to the extremal passive states $\hat{\varepsilon}_0$, $\hat{\varepsilon}_1$, and $\hat{\varepsilon}_2$.
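As a concrete illustration, the correspondence between the Fock probabilities $p_k$ and the extremal-passive weights $e_k$ is easily checked numerically. The following minimal Python sketch (the truncated distribution is an arbitrary example of ours) verifies that the weights $e_k=(k+1)(p_k-p_{k+1})$ are nonnegative, sum to one, and reproduce the $p_k$ through $p_k=\sum_{n\geq k} e_n/(n+1)$:
\begin{verbatim}
import numpy as np

# Arbitrary example: a decreasing Fock distribution p_k (hence passive),
# truncated at k = 3 (p_k = 0 beyond).
p = np.array([0.4, 0.3, 0.2, 0.1, 0.0])

# Weights of the extremal-passive decomposition: e_k = (k+1)(p_k - p_{k+1}).
e = (np.arange(len(p) - 1) + 1) * (p[:-1] - p[1:])
assert np.all(e >= 0) and np.isclose(e.sum(), 1.0)

# Reconstruct p_k = sum_{n >= k} e_n/(n+1) and compare.
p_rec = np.array([sum(e[n] / (n + 1) for n in range(k, len(e)))
                  for k in range(len(e))])
assert np.allclose(p_rec, p[:-1])
print(e)  # -> [0.1 0.2 0.3 0.4]
\end{verbatim}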
\subsection*{Proof of the conjecture for passive states}
Let us prove the lower bound \eqref{eq:to-be-proven} for the subset of passive states $\hat{\rho}_p$.
First, note that passive states are known to be Wigner positive \cite{Bastiaans1983}, a fact that will become clear from Eq. \eqref{eq:extremal_states_formula}. Thus, their Wigner entropy is well defined. Second, notice that, as a consequence of the concavity of entropy, it is sufficient to prove the conjecture for all extremal passive states $\hat{\varepsilon}_n$.
The main tool that we will use to carry out our proof is a formula that we have derived from an identity involving Laguerre and Hermite polynomials \cite{Szeg1939}, making a nontrivial link between the Wigner functions and wave functions of the lowest $n+1$ Fock states. It reads as follows (to the best of our knowledge, it has never appeared as such in the literature):
\begin{equation}
\sum\limits_{k=0}^{n}W_k(x,p) =
\sum\limits_{k=0}^{n}\psi_{k}(x)^2 \, \psi_{n-k}(p)^2,
\label{eq:extremal_states_formula}
\end{equation}
where $W_k$ and $\psi_k$ are respectively the Wigner function and wave function of the $k^{\text{th}}$ Fock state as defined in Eqs. \eqref{eq:wave_function_fock} and \eqref{eq:wigner_function_fock}. As a by-product, note that Eq. \eqref{eq:extremal_states_formula} immediately implies that all extremal passive states $\hat{\varepsilon}_n$ admit a positive Wigner function; hence the Wigner function of an arbitrary passive state is necessarily positive.
More details on the derivation of Eq. \eqref{eq:extremal_states_formula} can be found in Appendix \ref{apd:formula_extremal_states}.
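As a sanity check, independent of that derivation, Eq. \eqref{eq:extremal_states_formula} is easily verified numerically. The short Python sketch below (ours; it assumes the standard $\hbar=1$ expressions for $\psi_k$ and $W_k$, which we take to match Eqs. \eqref{eq:wave_function_fock} and \eqref{eq:wigner_function_fock}) compares both sides on a grid for $n=5$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite, eval_laguerre, factorial

def psi_sq(k, x):
    """|psi_k(x)|^2 for the k-th Fock state (hbar = 1)."""
    h = eval_hermite(k, x) * np.exp(-x**2 / 2)
    return (h / (np.pi**0.25 * np.sqrt(2.0**k * factorial(k))))**2

def W(k, x, p):
    """Wigner function of the k-th Fock state."""
    r2 = x**2 + p**2
    return (-1)**k / np.pi * np.exp(-r2) * eval_laguerre(k, 2 * r2)

n = 5
x, p = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))
lhs = sum(W(k, x, p) for k in range(n + 1))
rhs = sum(psi_sq(k, x) * psi_sq(n - k, p) for k in range(n + 1))
print(np.max(np.abs(lhs - rhs)))  # vanishes up to machine precision
\end{verbatim}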
Let us denote the $x$ and $p$ probability densities of the $n^\text{th}$ Fock state as $\rho_n(x) = \vert\psi_n(x)\vert^2$ and $\rho_n(p) = \vert\psi_n(p)\vert^2$.
Their corresponding Shannon differential entropy is defined as $h\left(\rho_k(x)\right) = -\int\rho_k(x)\ln\rho_k(x)\, \mathrm{d}x$ and $h\left(\rho_k(p)\right) = -\int\rho_k(p)\ln\rho_k(p)\, \mathrm{d}p$. In the following, we refer to these quantities as $h\left(\rho_k\right) \equiv h\left(\rho_k(x)\right)=h\left(\rho_k(p)\right)$.
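These marginal entropies are straightforward to evaluate numerically. The sketch below (ours, with an arbitrary grid) computes $h(\rho_k)$ for the first few Fock states and anticipates the entropic uncertainty relation invoked at the end of the proof, $2\,h(\rho_k)\geq\ln\pi+1$, whose gap vanishes for the vacuum $k=0$ and is strictly positive for $k\geq 1$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_hermite, factorial

x = np.linspace(-12, 12, 40001)
dx = x[1] - x[0]

for k in range(6):
    psi = eval_hermite(k, x) * np.exp(-x**2 / 2) \
          / (np.pi**0.25 * np.sqrt(2.0**k * factorial(k)))
    rho = psi**2
    m = rho > 1e-300  # avoid log(0)
    h = -np.sum(rho[m] * np.log(rho[m])) * dx
    print(k, h, 2 * h - (np.log(np.pi) + 1))  # gap >= 0, = 0 at k = 0
\end{verbatim}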
We are now ready to lower bound the Wigner entropy of the $n^{\text{th}}$ extremal passive state $\hat{\varepsilon}_n$ by using Eq. \eqref{eq:extremal_states_formula}:
\begin{equation}
\begin{split}
h\left(E_n \right)
&=
h\left(\dfrac{1}{n+1} \sum\limits_{k=0}^{n}W_k(x,p) \right)
\\
&=
h\left(\dfrac{1}{n+1}\sum\limits_{k=0}^{n}\psi_k(x)^2\psi_{n-k}(p)^2\right)
\\
&\geq
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
h\big(\rho_k(x)\rho_{n-k}(p)\big)
\\
&=
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
\big(h\left(\rho_k\right)+h\left(\rho_{n-k}\right)\big)
\\
&=\dfrac{2}{n+1}\sum\limits_{k=0}^{n}
h\left(\rho_k\right)
\\
&\geq \ln\pi+1
\end{split}
\label{eq:development_extremal_states_wig_conj}
\end{equation}
The first inequality in Eq. \eqref{eq:development_extremal_states_wig_conj} results from the concavity of the entropy. Then, we use the fact that the entropy of a product distribution is the sum of the marginal entropies. Finally, we apply the entropic uncertainty relation of Białynicki-Birula and Mycielski \cite{Bialynicki1975} on Fock states, namely, $2\, h\left(\rho_k\right)\geq\ln\pi+1$, $\forall k$. We have thus proven the conjecture for all extremal passive states and this proof naturally extends to the whole set of passive states. $\qed$
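The above chain of inequalities can also be confronted directly with numerics. Since $E_n$ is rotation invariant, its Wigner entropy reduces to the radial integral $h(E_n)=-2\pi\int_0^\infty E_n(r)\ln E_n(r)\,r\,\mathrm{d}r$. The following sketch (ours; grid and cutoff are arbitrary) checks that the gap $h(E_n)-(\ln\pi+1)$ vanishes for $n=0$ and is positive beyond:
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre

r = np.linspace(0, 12, 60001)
dr = r[1] - r[0]

def h_extremal(n):
    """Wigner entropy of the n-th extremal passive state."""
    E = sum((-1)**k * eval_laguerre(k, 2 * r**2) for k in range(n + 1))
    E *= np.exp(-r**2) / ((n + 1) * np.pi)
    m = E > 1e-300
    return -2 * np.pi * np.sum(E[m] * np.log(E[m]) * r[m]) * dr

for n in range(5):
    print(n, h_extremal(n) - (np.log(np.pi) + 1))  # gap >= 0, = 0 at n = 0
\end{verbatim}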
Let us now prove that a slightly tighter lower bound can be derived for the Wigner entropy of passive states by exploiting Eq. \eqref{eq:passive-in-terms-of-extremal-passive}, namely the fact that these states can be expressed as convex mixtures of extremal passive states $\hat{\varepsilon}_n$ (in place of decreasing mixtures of Fock states). We denote the Wigner function of the passive state $\hat{\rho}_p$ as $W_P(x,p)$ and bound its Wigner entropy as follows:
\begin{equation}
\begin{split}
h\left(W_P\right) &= h\left(\sum\limits_{k=0}^{\infty}e_k \, E_k(x,p)\right)
\\
&\geq
\sum\limits_{k=0}^{\infty}
e_k \,
h\big(E_k(x,p)\big)
\\
&=
\sum\limits_{k=0}^{\infty}
(k+1)\left(p_k-p_{k+1}\right)
h\big(E_k(x,p)\big)
\\
&\geq
\sum\limits_{k=0}^{\infty}
(k+1)\left(p_k-p_{k+1}\right)
\dfrac{2}{k+1}
\sum\limits_{j=0}^{k}
h\left(\rho_j\right)
\\
&= 2
\sum\limits_{k=0}^{\infty}
\sum\limits_{j=0}^{k}
\left(p_k-p_{k+1}\right)
h\left(\rho_j\right)
\\
&= 2
\sum\limits_{j=0}^{\infty}
\sum\limits_{k=j}^{\infty}
\left(p_k-p_{k+1}\right)
h\left(\rho_j\right)
\\
&= 2
\sum\limits_{j=0}^{\infty}
p_j \, h\left(\rho_j\right). \qed
\end{split}
\label{eq:second_proof}
\end{equation}
The first inequality in \eqref{eq:second_proof} comes from the concavity of entropy over the convex set of extremal states, while the second inequality is obtained from Eq. \eqref{eq:development_extremal_states_wig_conj}. The final expression is a stronger lower bound on the Wigner entropy of any passive state which reads as
\begin{equation}
h\left( \sum\limits_{k=0}^{\infty}
p_k \, W_k \right)
\geq 2
\sum\limits_{k=0}^{\infty}
p_k \, h\left(\rho_k\right)
\label{eq:lower_bound_passive_states}
\end{equation}
and is valid whenever the probabilities $p_k$ are decreasing, that is, $p_k\geq p_{k+1}$.
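The bound \eqref{eq:lower_bound_passive_states} is likewise easy to test for any given decreasing distribution. The sketch below (the four-component $p_k$ is an arbitrary example of ours) evaluates the left-hand side as a radial integral and the right-hand side from the marginal entropies $h(\rho_k)$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre, eval_hermite, factorial

p = np.array([0.4, 0.3, 0.2, 0.1])  # decreasing => passive

# Left-hand side: Wigner entropy of the mixture (radial integral).
r = np.linspace(0, 12, 60001); dr = r[1] - r[0]
Wmix = sum(pk * (-1)**k * eval_laguerre(k, 2 * r**2) * np.exp(-r**2)
           / np.pi for k, pk in enumerate(p))
m = Wmix > 1e-300
lhs = -2 * np.pi * np.sum(Wmix[m] * np.log(Wmix[m]) * r[m]) * dr

# Right-hand side: 2 sum_k p_k h(rho_k) from the marginal entropies.
x = np.linspace(-12, 12, 40001); dx = x[1] - x[0]
def h_marg(k):
    psi = eval_hermite(k, x) * np.exp(-x**2 / 2) \
          / (np.pi**0.25 * np.sqrt(2.0**k * factorial(k)))
    rho = psi**2; mm = rho > 1e-300
    return -np.sum(rho[mm] * np.log(rho[mm])) * dx
rhs = 2 * sum(pk * h_marg(k) for k, pk in enumerate(p))

print(lhs, rhs, lhs >= rhs)  # the bound holds for this passive state
\end{verbatim}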
It is tempting to extrapolate that the bound \eqref{eq:lower_bound_passive_states} remains valid beyond the set of passive states.
We know indeed that there exist phase-invariant Wigner-positive states that are not passive states (in Fig. \ref{fig:wig_pos_2}, these are the states within the light blue region that do not belong to the triangle $a$-$b$-$e$). As long as the coefficients $p_k$ are such that the corresponding state is Wigner positive, it has a well-defined Wigner entropy and we may expect that the lower bound \eqref{eq:lower_bound_passive_states} applies. Unfortunately, our numerical simulations have shown that relation \eqref{eq:lower_bound_passive_states} does not hold in general for nonpassive (Wigner-positive) states. Of course, we conjecture that relation \eqref{eq:to-be-proven} does hold for such states and we have not found any counterexample.
\subsection*{Relation between the extremal passive states and the beam-splitter states}
Let us now highlight an interesting relation between extremal passive states $\hat{\varepsilon}_n$ and the beam-splitter states $\hat{\sigma}(m,n)$ that we defined in Sec. \ref{sec:wig_pos}. To this purpose, we consider a mixed quantum state of two modes (or harmonic oscillators) which we denote as $\hat{\tau}_n$. It is defined as an equal mixture of all two-mode states with a total photon number (or energy) equal to $n$, namely,
\begin{equation}
\hat{\tau}_n =
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
\ket{k}\bra{k}\otimes\ket{n-k}\bra{n-k}.
\label{eq:def_tau_2modes}
\end{equation}
This state is maximally mixed over the set of states with total energy $n$, so that it is invariant under any unitary transformation that preserves the total energy.
In particular, it is invariant under the action of a balanced beam splitter, which implies the identity $\hat{U}_{1/2}\ \hat{\tau}_n \ \hat{U}_{1/2}^\dagger = \hat{\tau}_n$.
After partial tracing over the second mode, we obtain
\begin{equation}
\mathrm{Tr}_2
\left[
\hat{\tau}_n
\right]
=
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
\ket{k}\bra{k} \, ,
\end{equation}
which is simply the extremal passive state $\hat{\varepsilon}_n$.
Alternatively, exploiting the invariance under $\hat{U}_{1/2}$ and recalling the definition of the beam-splitter states $\hat{\sigma}(m,n)$, we have
\begin{equation}
\mathrm{Tr}_2
\left[
\hat{\tau}_n
\right]
=
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
\hat{\sigma}(k,n-k).
\end{equation}
This establishes an interesting link between the extremal passive states and the beam-splitter states, namely,
\begin{equation}
\hat{\varepsilon}_n =
\dfrac{1}{n+1}
\sum\limits_{k=0}^{n}
\hat{\sigma}(k,n-k).
\end{equation}
Expressed in terms of Wigner functions, this translates into
\begin{equation}
\sum\limits_{k=0}^{n}
W_k(x,p)
=
\sum\limits_{k=0}^{n}
S_{(k,n-k)}
(x,p),
\label{eq:sum_fock_sum_sigma}
\end{equation}
where $S_{(m,n)}$ denotes the Wigner function of $\hat{\sigma}(m,n)$.
It is instructive to compare Eq.~\eqref{eq:sum_fock_sum_sigma} with Eq.~\eqref{eq:extremal_states_formula}.
Extremal passive states $\hat{\varepsilon}_n$ are defined as mixtures of Fock states [see Eq.~\eqref{eq:def_passive_states_in_Fock}], each of which (except for the vacuum) has a Wigner function that takes negative values. This is at the heart of the difficulty of proving the conjecture: we cannot give a meaning to the Wigner entropy of a Fock state (except for the vacuum), so the convex decomposition of a state into Fock states cannot be used to bound its Wigner entropy. In this context, both Eqs. \eqref{eq:extremal_states_formula} and \eqref{eq:sum_fock_sum_sigma} have the crucial interest of providing a decomposition of the Wigner function of an extremal passive state into a sum of positive functions.
However, with Eq.~\eqref{eq:extremal_states_formula}, these positive functions do not correspond to \textit{physical} Wigner functions. Numerical simulations indeed show that in general $\psi_k(x)^2\psi_{n-k}(p)^2$ is not a physically acceptable Wigner function (it is positive but does not correspond to a positive-semidefinite density operator).
On the contrary, Eq. \eqref{eq:sum_fock_sum_sigma} exhibits the decomposition of an extremal passive state into states $\hat{\sigma}(m,n)$, which are Wigner-positive quantum states as we have shown.
The set spanned by the states $\hat{\sigma}(m,n)$ associated with $\mathbb{S}_{\mathrm{b}}$ is obviously bigger than the set of passive states and offers a nice playground for testing our conjecture. Indeed, each state $\hat{\sigma}(m,n)$ is Wigner positive, so it has a well-defined Wigner entropy. Figure \ref{fig:bbs_entropy} displays the Wigner entropy of the states $\hat{\sigma}\left(m,n\right)$ as computed numerically up to $m,n=30$. As expected, the minimum Wigner entropy $\ln\pi+1$ is reached for the vacuum state $\hat{\sigma}\left(0,0\right)=\ket{0}\bra{0}$, so it follows that the conjecture holds for the whole set $\mathbb{S}_{\mathrm{b}}$ due to the concavity of the entropy. Of course, this is based on numerical evidence since we do not have an analytical proof that $h(S_{(m,n)})\ge \ln\pi+1$.
Further, although the set $\mathbb{S}_{\mathrm{b}}$ is much bigger than the set of passive states, it still does not encompass the whole set of phase-invariant Wigner-positive states $\mathbb{S}_{+}$, as evidenced by Fig. \ref{fig:wig_pos_2}.
\begin{figure}
\includegraphics[width=8cm]{pictures/entropies_bbs_outputs.png}
\caption{Wigner entropy of the beam-splitter states $\hat{\sigma}(m,n)$ as computed numerically for $m,n=0,1,\dots, \,30$. It appears that the Wigner entropy increases monotonically for increasing values of $m$ and $n$.}
\label{fig:bbs_entropy}
\end{figure}
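For completeness, let us sketch how the data of Fig. \ref{fig:bbs_entropy} can be reproduced. Under the standard phase-space action of a balanced beam splitter, the Wigner function of $\hat{\sigma}(m,n)$ is a scaled convolution, $S_{(m,n)}(x,p)=2\,(W_m\ast W_n)(\sqrt{2}x,\sqrt{2}p)$, and the change of variables $\mathbf{u}=\sqrt{2}\,\mathbf{r}$ then gives $h(S_{(m,n)})=h(W_m\ast W_n)-\ln 2$, so that no rescaling of the numerical grid is needed. A minimal Python sketch (ours; grid size, cutoff, and positivity threshold are arbitrary choices) reads:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import eval_laguerre

L, N = 16.0, 1024
x = np.linspace(-L, L, N, endpoint=False)
d = x[1] - x[0]
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2

def W(k):
    """Wigner function of the k-th Fock state on the grid."""
    return (-1)**k / np.pi * np.exp(-R2) * eval_laguerre(k, 2 * R2)

def h_sigma(m, n):
    C = fftconvolve(W(m), W(n), mode='same') * d**2  # W_m * W_n
    mask = C > 1e-12  # C is nonnegative up to numerical noise
    return -np.sum(C[mask] * np.log(C[mask])) * d**2 - np.log(2)

print(h_sigma(0, 0) - (np.log(np.pi) + 1))  # ~ 0: vacuum saturates
print(h_sigma(1, 1), h_sigma(2, 1))  # entropy grows with m and n
\end{verbatim}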
\section{Conclusion}
\label{sec:conclusion}
We have promoted the Wigner entropy of a quantum state as a distinct information-theoretical measure of its uncertainty in phase space. Although it is, by definition, restricted to Wigner-positive states, the fact that such states form a convex set makes it a useful physical quantity. Since it is a concave functional of the state, we naturally turn to its lower bound over the convex set of Wigner-positive states. We conjecture that this lower bound is $\ln \pi +1$, which is the value taken on by the Wigner entropy of all Gaussian pure states. The latter then play the role of minimum Wigner-uncertainty states.
This conjecture is consistent with the Hudson theorem, whereby all Wigner-positive pure states must be Gaussian states, thus states reaching the value $\ln \pi +1$. The conjecture also implies a lower bound on the sum of the marginal entropies of $x$ and $p$; hence it results in a tightening of the entropic uncertainty relation due to Bialynicki-Birula and Mycielski that is very natural from the point of view of Shannon information theory (the Wigner entropy accounts for $x$-$p$ correlations since it is the joint entropy of $x$ and $p$).
Of course, it also implies the Heisenberg uncertainty relation formulated in terms of variances of $x$ and $p$. Furthermore, the conjecture implies (but is stronger than) the Wehrl conjecture, famously proven by Lieb. It is supported by several elements. First, we have provided in Sec. \ref{sec:results} an analytical proof for a subset of phase-invariant Wigner-positive states, namely, the passive states. Second, this was complemented by a semianalytical proof (partly relying on numerics) for the larger set of phase-invariant states associated with $\mathbb{S}_{\mathrm{b}}$. Third, we also carried out an extensive numerical search for counterexamples in $\mathcal{Q}_+$ but could not find any.
Given that the Wigner entropy is only properly defined for Wigner-positive states, we have also been led to investigate the structure of such states in Sec. \ref{sec:wig_pos}. We have put forward a general technique to produce Wigner-positive states using a balanced beam splitter. In particular, we have focused on the beam-splitter states $\hat{\sigma}(m,n)$ and have highlighted their connection with the (smaller) set of passive states and (larger) set of phase-invariant Wigner-positive states. We have also found an unexpectedly simple relation between the states $\hat{\sigma}(m,n)$ and the extremal passive states.
The Wigner entropy enjoys various reasonable properties; in particular it is invariant over all symplectic transformations in phase space or equivalently all Gaussian unitaries in state space. Its excess with respect to $\ln \pi +1$ is an asymptotic measure of the number of random bits that are needed to generate a sample of the Wigner function from the vacuum state. More generally, since the Wigner entropy is the Shannon differential entropy of the Wigner function, viewed as a genuine probability distribution, it inherits all its key features. For example, we may easily extend to Wigner entropies the celebrated entropy power inequality \cite{Cover1991}, which relates to the entropy of the convolution of probability distributions. Consider the setup of Fig. \ref{fig:bbs_schema} where the input state is again a product state $\hat{\rho}_A\otimes\hat{\rho}_B$ but the beam splitter now has an arbitrary transmittance $\eta$, so that the output state reads
\begin{equation}
\hat{\sigma} =
\mathrm{Tr}_{2}
\left[
\hat{U}_{\eta}
\left(
\hat{\rho}_A\otimes\hat{\rho}_B
\right)
\hat{U}_{\eta}^\dagger
\right].
\end{equation}
Let us restrict to the special case where both $\hat{\rho}_A$ and $\hat{\rho}_B$ are Wigner-positive states, which of course implies that $\hat{\rho}_A\otimes\hat{\rho}_B$ is Wigner positive as well as ${\hat\sigma}$ (even if $\eta\ne 1/2$). Thus, $\hat{\rho}_A$, $\hat{\rho}_B$, and ${\hat\sigma}$ all have a well-defined Wigner entropy, which we denote respectively as $h_A$, $h_B$, and $h_{\textrm{out}}$. Since the beam splitter effects the affine transformation $x_{\textrm{out}} = \sqrt{\eta} \, x_A + \sqrt{1-\eta}\, x_B$ and $p_{\textrm{out}} = \sqrt{\eta} \, p_A + \sqrt{1-\eta}\, p_B$ in phase space, we may simply treat this as a convolution formula for probability distributions. Hence, the entropy power inequality directly applies to the Wigner entropy. Defining the Wigner entropy-power \footnote{We define the Wigner entropy-power $N$ of a (Wigner-positive) state as the entropy power of the Wigner function of the state. Here, the entropy power is defined following Shannon information theory for a \textit{pair} of continuous variables $x$ and $p$, namely $N=(2\pi e)^{-1}\textrm{e}^{h(x,p)}$, where $h$ stands for the Shannon differential entropy. } of the two input states as
\begin{equation}
N_A=(2\pi e)^{-1} \textrm{e}^{h_A}, \quad N_B=(2\pi e)^{-1} \textrm{e}^{h_B},
\end{equation}
and the Wigner entropy-power of the output state as
\begin{equation}
N_{\textrm{out}} =(2\pi e)^{-1} \textrm{e}^{h_{\textrm{out}} },
\end{equation}
we obtain the \textit{Wigner entropy-power inequality}
\begin{equation}
N_{\textrm{out}} \ge \eta \, N_A + (1-\eta) N_B .
\end{equation}
This is equivalent to a nontrivial lower bound on the Wigner entropy of the output state $\hat{\sigma}$, namely, $h(W_{\hat{\sigma}}) \ge h(W_{\hat{\sigma}_G})$, where $\hat{\sigma}_G$ denotes the Gaussian output state obtained if each input state is replaced by the phase-invariant Gaussian state (i.e., thermal states) with the same Wigner entropy. This illustrates the physical significance of the Wigner entropy.
Defining the Wigner entropy for Wigner-positive states might also be a good starting point for investigating the states that are \textit{not} within $\mathcal{Q}_+$ and whose Wigner function admits a negative region, hence indicating their nonclassicality and potential computational advantage (we recall that Wigner-positive states are efficiently simulatable classically \cite{Eisert2012}). Just as the characterization of separable states helps understand the advantage offered by entanglement and leads to a resource theory of entanglement, we may envisage building a resource theory of Wigner negativity based on Wigner entropies along the lines of the resource theory of quantum non-Gaussianity \cite{Zhuang2018,Albarelli2018}, going beyond witnesses of Wigner negativity \cite{Chabaud2021}.
Finally, a natural extension of the present work is to consider more than a single harmonic oscillator (or bosonic mode) as we expect that all properties of the Wigner entropy and especially conjecture \eqref{eq:wig_conj} will generalize. Further, following the lines of a recent work \cite{Floerchinger2021}, we might investigate the detection of entanglement in continuous-variable states by defining a Wigner conditional entropy and Wigner mutual information. Let us mention that this work is part of a broader project. The key observation is that the Wigner function of Wigner-positive states can be interpreted as a true probability distribution. Hence, we can take advantage of this observation and adapt all standard tools of probability theory (here, we have applied Shannon information theory to define the Wigner entropy). In this context, the theory of majorization \cite{Marshall2011} has proved to be another powerful tool, and it notably allows one to formulate a generalization of the Wehrl conjecture \cite{Lieb2002}. In a forthcoming paper \cite{VanHerstraeten2021Major}, we use the theory of majorization to state a stronger conjecture on the uncertainty content of Wigner functions. This enables us, for instance, to demonstrate analytically the lower bound on $h(W)$ for all phase-invariant Wigner-positive states in $\mathbb{S}^{2}_{+}$, including the dark blue region.
\subsection*{Note added}
We have learned that our method for generating positive Wigner functions with a 50:50 beam splitter as explained in Appendix \ref{apd:bbs_wigner_positive} has recently also been described in \cite{Becker2019}.
\subsection*{Acknowledgments}
The authors warmly thank Christos Gagatsos, Anaelle Hertz, Michael G. Jabbour and Karol \.Zyczkowski for helpful discussions on this subject. Z.V.H. acknowledges a fellowship from the FRIA foundation (F.R.S.-FNRS). N.J.C. acknowledges support by the F.R.S.-FNRS under Project No. T.0224.18 and by the EC under project ShoQC within ERA-NET Cofund in Quantum Technologies (QuantERA) program.
The connection between early cosmology and high-energy physics dates back to the discovery of the expansion of the Universe. Going backwards in time, the Universe contracts and the energy density increases more and more until it eventually reaches Planckian values and General Relativity breaks down. The underlying assumption behind this general argument is that the stress-energy tensor $T_{\mu\nu}$ satisfies the null energy condition (NEC), which states that $T_{\mu\nu} k^\mu k^\nu \geq 0$ for every null vector $k^\mu$. For a perfect fluid this is equivalent to the inequality $\rho + p \geq 0$, where $\rho$ and $p$ are respectively the energy density and the pressure. For a Friedmann-Robertson-Walker (FRW) metric this inequality implies that the energy density, and therefore the Hubble parameter $H$, decreases as the Universe expands, as the covariant conservation of the stress-energy tensor reads $\dot \rho = - 3 H (\rho +p)$.
If a violation of the NEC were possible, then a Pandora's box of non-standard cosmologies would open up and in particular the contraction of the Universe going backwards in time would not necessarily lead to higher and higher energy densities. The realization that energy conditions can be violated, although ``standard'' matter satisfies them, has a notable history in cosmology. Indeed the strong energy condition, equivalent for a perfect fluid to the inequality $\rho + 3 p \geq 0$, implies that the expansion of the Universe is always decelerating $\ddot a <0$, which resonates well with the Newtonian intuition. The evidence of the present acceleration, however, strongly indicates that the fluid which now dominates the Universe violates the strong energy condition and the same happens in the past during inflation, the most compelling theory of the early Universe. Given that these two important revolutions in cosmology are based on a violation of the strong energy condition, it is natural to wonder whether we are missing something taking the NEC for granted.
It is worth emphasizing that the NEC is usually taken for granted not due to lack of imagination, but because of its exceptional robustness---it is especially hard to construct consistent effective field theories that violate it \cite{thomas}. Moreover, perhaps less decisively, the NEC protects standard properties of black-hole thermodynamics.
In this paper we describe a novel cosmological scenario in which the NEC is grossly violated, and the Universe starts from a very low energy state, asymptotic to Minkowski in the far past. As pointed out in \cite{Nicolis:2009qm}, the violation of the NEC is possible in the context of the recently studied Galileon theories \cite{Nicolis:2008in}. For these theories the usual relation between the violation of the NEC and the presence of pathological instabilities \cite{thomas} is avoided, due to the presence of higher derivative interactions. A similar situation happens in the context of Ghost Condensate theories \cite{ArkaniHamed:2003uy,markus}, with the important difference that here we will have a strong violation of the NEC, i.e.~$\dot H \gg H^2$, while in the previous models, only a moderate violation of the NEC is compatible with the stability of the system \cite{markus}.
A general class of Galileon theories, as we will see in Section \ref{background}, gives rise to a very peculiar evolution of the scale factor with the Hubble parameter becoming larger and larger as the Universe expands: $H \propto (-t)^{-3}$ as $t \to 0^-$. This implies that most of the energy is created suddenly, with the scale factor blowing up as $a \sim \exp(1/t^2)$, in a sort of Genesis which (partially) justifies our dubbing of the scenario. As $\rho$ increases, the system will eventually exit the regime of validity of the Galileon effective field theory and here we assume that the energy gets transferred to more conventional degrees of freedom in a reheating process, similarly to what happens in inflation. This background evolution is completely stable and it represents a dynamical attractor. Notably the Universe evolves to this expanding phase even if it is initially contracting, a behaviour which is only possible because of the violation of the NEC. It is remarkable that this scenario in some sense explains why the Universe is now expanding, while we are usually forced to postulate initial conditions with a large positive $H$, which then goes on decaying for the entire evolution.\footnote{Of course our `auto-expanding' solution run backwards in time is also a solution---i.e.~we also have solutions that contract for all times and approach Minkowski space in the future. What we want to stress is that we are driven towards our expanding solution starting from an unusually large basin of attraction, which includes contracting initial conditions as well.
The time-reversed solutions we just alluded to start outside this basin.}
The study of perturbations in Section \ref{perturbations} and Appendix \ref{details} shows various peculiarities of this model. One such peculiarity is that the energy density of the background solution vanishes in the limit in which gravity is decoupled, $\rho \propto 1/\mpl^2$. Another peculiar feature is that the leading adiabatic solution does not correspond to the standard $\zeta =$ const.~mode, but to a constant time shift of the unperturbed solution (this will be shown in Newtonian gauge in section \ref{Newtonian}). Such a mode is going to decay during the standard post-reheating FRW evolution and this will allow us to conclude that the fluctuations of the Galileon field do not give rise to any relevant cosmological perturbations on large scales. Actually in Appendix \ref{squeezing} we will show that the Galileon perturbations are not amenable to any classical interpretation as they experience no relevant squeezing.
Fortunately, another source of scale invariant perturbations is naturally present in our model, as we explain in Section \ref{fake}. The Galileon Lagrangian is invariant under the conformal group SO(4,2) and the time dependent solution breaks it down to SO(4,1), the isometry group of de Sitter space. The only way another field can couple to the Galileon while respecting the conformal symmetry is by treating the Galileon as a dilaton, that is through a fictitious, conformally flat, metric. Therefore all other fields will perceive the Galileon background as a ``fictitious" de Sitter space and their dynamics will be essentially the same as for inflation, even though the Einstein metric at the time when cosmological perturbations are generated is virtually flat. In particular a massless scalar will acquire a scale-invariant spectrum of perturbations. These isocurvature perturbations can then be converted to adiabatic in a variety of ways, exactly as it happens for inflation. This novel mechanism to produce a scale invariant spectrum of perturbations shares some similarities with the attempts to explain the present acceleration through the universal coupling of matter to a scalar field. In both cases, an approximate de Sitter space is realized not in the Einstein metric but in the Jordan one.
However, it is crucial to keep in mind that this ``fake'', Jordan-frame de Sitter space seen by fluctuations, is by no means helping us solve the horizon problem and that the peculiar cosmological history we outlined here and that we are going to describe at length in the paper happens in the {\em Einstein} frame. Indeed our system violates the NEC even in the absence of dynamical gravity \cite{Nicolis:2009qm}; if we do turn on dynamical gravity, coupling it minimally to our system, this NEC-violating stress-energy tensor will generate an Einstein-frame NEC-violating geometry. Related to this, we also notice that the Galileon was originally introduced as a possible explanation of the present acceleration \cite{Nicolis:2008in}, benefiting from its natural implementation of the Vainshtein screening mechanism at short scales. Here we are using it in a completely different context, motivated by its healthy violation of the NEC.
It is important to stress a problem with our model: superluminality. Perturbations around the SO(4,1) invariant background move at the speed of light due to the large amount of residual symmetry, and actually gravity corrections to the solution make them slightly subluminal. On the other hand, if we allow for large departures from the background, perturbations around the new solution will move superluminally. In Section \ref{super}, we will discuss this issue and its implications applying to our case the general discussion of \cite{Nicolis:2009qm}. Conclusions are drawn in Section \ref{conclusions}.
\section{The background}\label{background}
Our starting point is the simplest version of the conformal Galileon minimally coupled to gravity: the Lagrangian for the scalar field is just the sum of the kinetic term and the Galilean invariant cubic interaction plus the $(\partial \pi)^4$ term needed to recover conformal invariance \cite{Nicolis:2008in}
\be \label{minimal}
{\cal S}_\pi = \int \! d^4 x \, \sqrt{-g} \bigg[ f^2 e^{2 \pi} (\di \pi)^2 + \frac{f^3}{\Lambda^3} (\di \pi)^2 \Box \pi
+ \frac{f^3}{2 \Lambda^3} (\di \pi)^4 \bigg] \; ,
\ee
where Lorentz indices are contracted with $g_{\mu\nu}$ and the $\Box$ operator is built from covariant derivatives\footnote{We are using the mostly plus signature.}. Notice that the conformal symmetry of the $\pi$ Lagrangian is explicitly broken by the coupling with gravity.
We could add all Galilean-invariant interactions together with their conformal completions \cite{Nicolis:2008in}, and in fact a fully consistent NEC-violating system obeying all requirements of \cite{Nicolis:2009qm} will have them. However our analysis and results below would not be affected in an essential way, since virtually all our results follow from the symmetry structure of the theory. One important difference is that, with our minimal Lagrangian, the kinetic term has the wrong, ghost-like sign, around the trivial background $\pi=0$, while it is healthy in more general Galilean Lagrangians \cite{Nicolis:2009qm}. This instability is not relevant for us as we are going to be interested in a different background solution, but it may become important if the system eventually evolves to $\pi = 0$ after reheating.
In the present analysis we stick to the minimal theory (\ref{minimal}), for the simplicity of the computations involved.
We will comment on the effect of adding higher order Galilean terms when relevant.
The signs have been chosen so that if gravity is decoupled this action has a solution in Minkowski spacetime of the form
\be
\label{pidesitter}
e^{\pi_{{\rm dS}}} = -\frac{1}{H_0 t} \;, \qquad -\infty < t < 0 \; ,
\ee
provided that
\be \label{H0}
H_0^2 = \frac{2 \Lambda^3}{3 f} \; .
\ee
Such a solution spontaneously breaks the conformal group $SO(4,2)$ down to the de Sitter group $SO(4,1)$.
More importantly for our purposes, this ``de Sitter" field configuration violates the NEC but has no instabilities \cite{Nicolis:2009qm}. The $\pi$ stress-energy tensor can be easily computed from the action (\ref{minimal}) as $T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta S_\pi}{\delta g_{\mu\nu}}$,
\bea \label{Tmn}
T_{\mu\nu}
& = & - f^2 e^{2 \pi}\big[ 2\di_\mu \pi \di_\nu \pi - g_{\mu\nu} (\di \pi)^2
\big] \nonumber\\
&-& \frac{f^3}{\Lambda^3} \big[ 2 \, \di_\mu \pi \di_\nu \pi \Box \pi - \big(\di_\mu \pi \, \di_\nu ( \di \pi)^2 + \di_\nu \pi \, \di_\mu ( \di \pi)^2 \big)
+ g_{\mu\nu} \, \di_\alpha \pi \, \di^\alpha ( \di \pi)^2
\big] \nonumber \\
&-& \frac{f^3}{2 \Lambda^3} \big[ 4 ( \di \pi)^2 \di_\mu \pi \di_\nu \pi - g_{\mu\nu} ( \di \pi)^4
\big] \; .
\eea
By plugging the solution (\ref{pidesitter}) into this expression with $g_{\mu\nu}= \eta_{\mu\nu}$ we find that it has vanishing energy density---this is a consequence of scale-invariance which is left unbroken by the background \cite{Nicolis:2009qm}---and negative pressure $\propto - 1/t^4$.
We defer a thorough stability analysis for this solution until the next section. For the moment, let us consider the dynamics of {\em homogeneous} perturbations $\delta \pi(t)$. From the Lagrangian (\ref{minimal}) we immediately get the equation for $\delta \pi$ in the linear regime:
\be\label{perturbdec}
\delta \ddot \pi - \frac{2}{t} \delta \dot \pi -\frac{4}{t^2} \delta \pi = 0 \;;
\ee
the two independent solutions are $\delta \pi \sim 1/t$ and $\delta \pi \sim t^4$. The latter decays away for $t \to 0^-$ and is thus no source of worry. The former blows up at late times, but in fact it just describes the same background solution we are interested in, slightly shifted in time \cite{Nicolis:2009qm}:
\be
\pi_{\rm dS} (t + \epsilon) \simeq \pi_{\rm dS} (t) + \dot \pi_{\rm dS} (t ) \cdot \epsilon = \pi_{\rm dS} (t) - \frac \epsilon t \; .
\ee
We conclude that a generic homogeneous initial condition that corresponds to a small departure from $\pi_{\rm dS}$ will be diluted away at late times.
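Incidentally, the two solutions quoted above are easily verified symbolically; the following sketch (ours, using sympy) confirms that both $1/t$ and $t^4$ annihilate the left-hand side of eq.~(\ref{perturbdec}):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t', negative=True)

def residual(sol):
    """Left-hand side of the homogeneous perturbation equation."""
    return sp.simplify(sp.diff(sol, t, 2) - 2/t * sp.diff(sol, t)
                       - 4/t**2 * sol)

print(residual(1/t), residual(t**4))  # -> 0 0
\end{verbatim}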
Now let's reintroduce the coupling to gravity: the presence of a non-zero stress-energy tensor will source a gravitational field and $\pi=\pi_{\rm dS}$ with flat metric will no longer be a solution. However, since the pressure also vanishes at early times ($t \rightarrow -\infty$), the gravity-free solution is recovered in this limit,
while as time goes on corrections to this asymptotic behavior will become larger and larger. As we are interested in cosmological solutions of the form
\be
ds^2 = -dt^2 + a^2(t) d \vec x^2 \, , \qquad \qquad \pi=\pi_0(t)
\ee
we have just to solve Friedmann's equations for the Hubble rate $H$,
\bea
H^2 & = & \sfrac{8 \pi}{3} G \, \rho \label{F1}\\
\dot H & = & -4\pi G (\rho + p) \, \label{F2}
\eea
and the $\pi$ e.o.m. will be automatically satisfied.
From eq.~(\ref{Tmn}) we get the energy density and the pressure,
\bea
\rho & = & - f^2 \left[ e^{2\pi} \dot \pi^2 - \frac{1}{H_0^2} \big( \dot \pi^4 + 4 H \dot \pi^3 \big) \right] \label{rho} \\
p & = & - f^2 \left[ e^{2\pi} \dot \pi^2 - \frac{1}{3 H_0^2} \big( \dot \pi^4 - \sfrac43 \sfrac{d}{d t} \dot \pi^3 \big) \right] \;,
\eea
where we used eq.~(\ref{H0}) to write $\Lambda^3$ in terms of $H_0$.
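One can check symbolically that, with gravity switched off ($H=0$), the solution (\ref{pidesitter}) gives a vanishing energy density and the negative pressure $p=-2f^2/(H_0^2 t^4)$ quoted earlier. A minimal sympy sketch (ours) plugging $e^{\pi_{\rm dS}}=-1/(H_0 t)$ into the expressions above:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
f, H0 = sp.symbols('f H_0', positive=True)

epi = -1 / (H0 * t)  # e^{pi_dS}
dpi = -1 / t         # dot(pi_dS)

rho = -f**2 * (epi**2 * dpi**2 - dpi**4 / H0**2)
p = -f**2 * (epi**2 * dpi**2
             - (dpi**4 - sp.Rational(4, 3) * sp.diff(dpi**3, t))
             / (3 * H0**2))

print(sp.simplify(rho))  # -> 0
print(sp.simplify(p))    # -> -2*f**2/(H_0**2*t**4)
\end{verbatim}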
We cannot solve Friedmann's equations analytically. Still, we can compute the asymptotic behaviors of the solution at early and late times. As we said, at early times the solution is approximately the unperturbed one $\pi=\pi_{\rm dS}$ and $H=0$, with small corrections proportional to $G= \frac{1}{8 \pi M_{\rm Pl}^2}$, which we can calculate perturbatively. Since $\rho \sim {\cal O}(G) \ll p = p_{\rm dS} + {\cal O}(G)$, the leading contribution to eq.~(\ref{F2}) is $\dot H \simeq -4\pi G \, p_{\rm dS} $ and this expression can be integrated to find the Hubble rate at early times:
\be\label{earlytimeH}
H \simeq - \frac{1}{3} \frac{f^2}{M_{\rm Pl}^2} \cdot \frac{1}{H_0^2 t^3} \quad {\rm for }\; t \to -\infty \; .
\ee
This result has a number of unusual properties: {\em i)} we can add to the value of $H$ an arbitrary integration constant and still have a solution of eq.~(\ref{F2}); we will discuss this possibility shortly, while for the moment we set the constant to zero; {\em ii)} because $\rho \rightarrow 0$ in the limit $M_{\rm Pl} \rightarrow \infty$, the Hubble rate is proportional to $1/M^2_{\rm Pl}$, unlike for standard cosmological scenarios where it scales like $1/M_{\rm Pl}$; {\em iii)} $H$ increases with time as a consequence of the NEC violation, with a rate $\dot H \gg H^2$.
Having computed the value of $H$ we can plug it into (\ref{F1}), or equivalently into the scalar equation of motion, and extract the ${\cal O}(G)$ correction to $\pi_{\rm dS}$
\be\label{earlytimepi}
t \to -\infty \qquad \pi_0 \simeq \pi_{\rm dS} - \frac{1}{2} \frac{f^2}{M_{\rm Pl}^2} \cdot \frac{1}{H_0^2 t^2}
\ee
(we are choosing $\pi_0 \to \pi_{\rm dS}$ for $t \to -\infty$ as initial condition, as required by consistency of the approximations we have adopted so far.)
At late times, for $t^2 \lesssim \frac{f^2}{M_{\rm Pl}^2} \frac{1}{H_0^2}$ the above approximation breaks down. Numerically integrating the Friedmann's equations shows that both $\pi$ and $H$ diverge at some positive $t_0 \sim H_0^{-1} \frac{f}{M_{\rm Pl}}$. Then, assuming they diverge like some powers of $(t_0 -t)$, we get their asymptotic behaviors:
\bea
\label{polephase}
t \to t_0 \quad && e^{\pi_0} \simeq \frac{8}{\sqrt{3}} \frac{f}{M_{\rm Pl}} \cdot \frac{1}{H_0^2 (t_0 - t)^2} \\ \label{polephase2}
&& H \simeq \frac{16}{3} \frac{f^2}{M_{\rm Pl}^2} \cdot \frac{1}{H_0^2 (t_0 - t)^3} \;,
\eea
which indeed match the actual numerical solutions. This gives the peculiar evolution of the scale factor
\be
a(t) \sim \exp{\left[\frac{8 f^2}{3 H_0^2 M_{\rm Pl}^2} \frac{1}{(t_0-t)^2}\right]} \; .
\ee
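The coefficients in eqs.~(\ref{polephase}) and (\ref{polephase2}) can be checked at leading order: inserting the power-law ansatz into the Friedmann equation (\ref{F1}), the dominant terms, scaling as $(t_0-t)^{-6}$, must balance. The following sympy sketch (ours; recall $8\pi G = 1/M_{\rm Pl}^2$) confirms that they do:
\begin{verbatim}
import sympy as sp

# tau = t_0 - t
tau, f, H0, M = sp.symbols('tau f H_0 M_Pl', positive=True)

epi = 8 / sp.sqrt(3) * f / (M * H0**2 * tau**2)      # e^{pi_0}
H = sp.Rational(16, 3) * f**2 / (M**2 * H0**2 * tau**3)
dpi = 2 / tau                                        # dot(pi_0)

# energy density of eq. (rho)
rho = -f**2 * (epi**2 * dpi**2 - (dpi**4 + 4 * H * dpi**3) / H0**2)

# leading (t_0 - t)^{-6} part of H^2 - rho/(3 M_Pl^2) must vanish:
print(sp.limit(tau**6 * (H**2 - rho / (3 * M**2)), tau, 0))  # -> 0
\end{verbatim}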
\begin{figure}[!!!t]
\begin{center}
\includegraphics[scale=0.45]{at2.ps}
\end{center}
\caption{\small {\em The cosmological evolution in our model.}}
\label{fig:cartoon}
\end{figure}
We have then the following scenario, represented in figure \ref{fig:cartoon}. The Universe starts at $t \to -\infty$ in a quiescent state, with flat metric and $\pi = \pi_{\rm dS}(t)$. Asymptotically in the past this configuration has zero stress-energy tensor, and so it is a solution. Then, as time goes by, a negative pressure arises, $p \sim -1/ t^4$, which makes the Universe start expanding and the energy density grow. The actual solution then departs from the original ``de Sitter'' configuration. $H$, $\rho$, and $p$ grow more and more, until $\pi$ becomes strongly coupled. At that moment the effective theory of $\pi$ breaks down, and we cannot predict what happens next. We can imagine that at that point, most of the available energy gets converted into radiation, the Universe reheats, and the standard radiation-dominated era takes over. In this era the Galileon $\pi$ may evolve to the $\pi =$ const.~solution, or cease to be a good degree of freedom.
The strong-coupling scale of the theory ``runs'' with $\phi \equiv H_0 e^\pi$, and it is \cite{Nicolis:2009qm}
\be \label{strongcoupling}
\Lambda_{\rm strong} \sim \frac{\phi}{g^{1/3}} \; , \qquad g \equiv H_0/f \; .
\ee
Notice that this estimate does not take into account gravity and the associated explicit breaking of conformal symmetry. It may thus get modified when dynamical gravity is included. Our effective theory breaks down---at the latest---when typical energies become larger than $\Lambda_{\rm strong}$. A good measure of a cosmological solution's typical energy is the freeze-out frequency for fluctuations: if fluctuations are strongly coupled at freeze-out, the background solution is hardly consistent.
In the phase where things are blowing up, eqs.~(\ref{polephase}, \ref{polephase2}), freeze-out happens at frequencies of order $H$. The highest Hubble rate we can get before strong-coupling/reheating thus comes from equating eq.~(\ref{polephase2}) with the strong-coupling scale (\ref{strongcoupling}). We get an impressive
\be
H_{\rm max} \sim M_{\rm Pl} \; ,
\ee
at which GR breaks down anyway. Because of this, we don't want to go this high in $H$, so we will assume that our effective theory breaks down before becoming strongly coupled, which is also very much in line with the arguments presented in \cite{Nicolis:2009qm}.
Notice that all other measures of the solution's typical energy---for instance $\dot \phi/\phi$ etc.---are smaller that the one we have adopted, and are thus less constraining.
In conclusion, the reheating temperature is essentially unconstrained in our model.
At this point one may ask whether the background discussed so far is an attractor. As in the previous discussion in the absence of gravity, we can study the homogeneous perturbations $ \pi (t) = \pi_0(t) + \delta \pi(t) = \pi_{\rm dS}(t) - \frac{1}{2}\frac{f^2}{M^2_{\rm Pl}} \frac{1}{(H_0 t)^2} + \delta \pi(t) $. To expand linearly in $\delta \pi$ the only conditions needed are $\delta \pi \ll \pi_0$ and $\delta \pi \ll 1$. However even in the linearized approximation there are two possible regimes, depending on whether the perturbations are also smaller than the gravitational corrections suppressed by $1/M^2_{\rm Pl}$ or not.
Let's start from the scalar equation of motion:
\be\label{pieom}
e^{2\pi} (\ddot \pi + \dot \pi^2) - \frac{2}{H_0^2} \dot \pi^2 \ddot \pi = -3 e^{2 \pi} H \dot \pi +\frac{1}{H_0^2}\big[ 4 H \dot \pi \ddot \pi + 2 \dot \pi^2 (3 H^2 + \dot H)+ 2H \dot \pi^3 \big] \; .
\ee
Keeping $\delta\pi$ fixed and going to early times we can neglect gravity corrections and the linearized equation of motion for $\delta \pi$ reduces to the one studied before, eq. (\ref{perturbdec}), with the two solutions $\{1/t; \, t^4 \}$.
Notice another unfamiliar feature of this background: since $\rho_{\rm dS}=0$ and we are in a regime where gravitational contributions are small, linear perturbations to the scalar give a contribution to $H$ larger than that of the background.
In the opposite regime $\delta \pi \ll \frac{f^2}{M^2_{\rm Pl}} \frac{1}{(H_0 t)^2}$ terms proportional to $H$, $\dot H$ on the right-hand side of (\ref{pieom}) will give a contribution to the linear equation for the perturbations. Since $H^2$, contrary to the previous case, is now dominated by the background solution, we can expand in equation (\ref{F1}) $\delta H^2 = 2H \delta H$ together with $\pi=\pi_0 + \delta \pi$ to find $\delta H = - \delta \pi/ t - \delta \dot \pi$. Using this expression in the RHS of (\ref{pieom}) gives the perturbations' equation in this regime:
\be\label{perturb2}
\delta \ddot \pi + \frac{\delta \dot \pi}{t} - \frac{\delta \pi}{t^2} =0 \; .
\ee
The two solutions are $\{ 1/t; \, t \}$; we have again the shift in time and a decaying solution that implies convergence to the attractor.
We can now study the most general solution for the Hubble rate. Suppose we start with an initial perturbation $\delta \pi = A t^4$; this gives a constant contribution to the energy density $\rho= -10 A f^2/H_0^2$, which corresponds to the integration constant for $H$ we alluded to below eq.~(\ref{earlytimeH}).
Since this can be positive or negative, a generic initial condition can produce a background that starts in expansion or even in contraction.
No matter which sign we choose for the initial condition, the perturbation decays as time goes on and we are eventually driven to the original expanding background.
Notice that as we approach the unperturbed solution, $\delta\pi$ will move from the first regime---where it dominates the energy density and has $\delta\pi \propto t^4$ as solution---to the second one, where $\delta\pi$ is smaller than the perturbation induced by gravity. In the intermediate regime we have no analytic control and one may be worried that we do not recover the unperturbed solution eventually. However we know that the equation $\dot H \simeq -4 \pi G \, p_{\rm dS}$ is always a good approximation since $\rho \ll p$. It tells us that $H$ follows the evolution discussed above up to small corrections,
even in the transition between the two limiting regimes for $\delta \pi$, where we don't have an explicit solution for homogeneous perturbations.
In conclusion the NEC violating solution is an attractor for general homogeneous initial conditions close to the de Sitter solution: $\delta\pi \ll \pi_{\rm dS}$.
\section{\label{perturbations}Scalar perturbations}
Let us now move away from homogeneity and discuss scalar perturbations. To begin with, let us first assume again that gravity is decoupled, $\mpl \to \infty$.
Since the solution (\ref{pidesitter}) spontaneously breaks the conformal group $SO(4,2)$ down to the de Sitter one $SO(4,1)$, the Lagrangian for small perturbations will be invariant under the de Sitter symmetries, whereas the broken symmetries will be non-linearly realized.
In particular the fluctuation $\xi(x)$ defined via $\pi(x) = \pi_{\rm dS}(t + \xi(x))$ is the Goldstone boson associated with the spontaneously broken time-translational invariance $t \to t+ \epsilon$, which is now realized non-linearly as $\xi \to \xi + \epsilon$.
Indeed from eq.~(\ref{minimal}) we get the quadratic Lagrangian for $\xi$
\be
\label{phiaction}
{\cal L}_{\xi} = - \frac{f^2}{H_0^2} \frac{1}{t^4} (\partial \xi)^2 \; ,
\ee
which is manifestly shift-invariant.
The kinetic energy is positive, thus ensuring stability for the background solution against short-wavelength perturbations. For long-wavelength ones instead we get back to eq.~\eqref{perturbdec},
now written in terms of $\xi \,$:
\be
\ddot \xi_k - \frac4t \, \dot \xi_k = 0 \;, \qquad k \ll 1/t \; .
\ee
The solutions are
\be
\xi_k \sim t^5 , \: {\rm const} \; .
\ee
The constant one dominates at late times and simply describes---now manifestly---the original background solution slightly translated in time. We thus conclude that, in the absence of gravity, the solution $\pi_{\rm dS}$ is an attractor also for initial perturbations with non-vanishing gradients.
We now turn on gravity and in general we expect that the dynamics of scalar perturbations at large distances will be modified by their mixing with the scalar sector of $g_{\mu\nu}$.
Let's see how this works explicitly.
Suppose we have the background solution
$\pi_0(t)$, $H(t)$. Then, if we consider small fluctuations, it is particularly convenient to work in `unitary gauge':
\be
\pi (\vec x, t) = \pi_0 (t) \; .
\ee
This fixes time-diff invariance: we are defining the equal-time surfaces as the equal-$\pi$ ones, and the pace of time as that of the unperturbed solution. In this case there are no fluctuations in $\pi$, and the scalar fluctuation is in the metric tensor. We will fix the space diffs later.
Following \cite{maldacena} and \cite{markus}, we use ADM variables for the metric: the induced 3D metric $g_{ij}$, the lapse $N \equiv 1/\sqrt{-g^{00}}$, and the shift $N_i \equiv g_{0i}$. It is straightforward (see Appendix \ref{details}) to write the full action, the Einstein-Hilbert one plus the Galileon part, using these variables:
\bea
S & = & S_g + S_\pi \nonumber \\
S_g & = & \sfrac{1}{2} \mpl^2 \int \! d^4 x \, \sqrt{g_3} N \big[ R_3 + \big( K_{ij}K^{ij} - K^i {}_i {}^2 \big) \big] \label{Sg} \\
S_\pi & = & f^2 \int \! d^4 x \, \sqrt{g_3} N \bigg[
- e^{2 \pi_0}\dot \pi_0^2 \, \frac{1}{N^2} + \frac{4\dot \pi_0^3}{9 H_0^2} \, \frac{1}{N^3} K^i {}_i + \frac{\dot \pi_0^4}{3 H_0^2} \, \frac{1}{N^4}
\bigg] \;, \label{piADM}
\eea
where, here and henceforth, spatial indices are raised and lowered via the spatial metric $g_{ij}$, and we have used the extrinsic curvature of constant-$t$ hypersurfaces
\be
K_{ij} \equiv \frac{1}{2N} \big[ \di_t g_{ij} - \nabla_i N_j - \nabla_j N_i \big] \; .
\ee
In fact the structure of the action is largely constrained by symmetry considerations \cite{markus}. The background solution spontaneously breaks time translations (diffs) as well as Lorentz boosts. This is made explicit by working with ADM variables, and, in unitary gauge, by allowing for explicit functions of time in the action. Then, we just have to write down all possible operators compatible with the residual symmetries, namely time- and space-dependent spatial diffs, $x^i \to x^i + \xi^i (t, \vec x)$, and 3D rotations.
The generic Lagrangian for matter ($\pi$, in our case) then is \cite{markus}
\bea
S_\pi & = & \int \! d^4 x \, \sqrt{g_3}N \nonumber
\left[ - \frac1{8\pi G} \, \dot H \frac{1}{N^2} - \frac1{8\pi G} (3 H^2 + \dot H) \right. \\
& + & \left. \frac12 M^4(t) \, (\delta N)^2 - \hat M^3(t) \, \delta E^i {}_i \delta N
+ \dots \right] \label{ADMaction}\; .
\eea
The terms in the first line are the only `tadpoles' there are: they start linear in the metric perturbations, thus yielding a non-trivial stress energy tensor on the background solution. As a result their coefficients are uniquely determined in terms of the background $H(t)$ by the Friedmann equations. The terms in the second line start quadratic in the fluctuations, and their coefficients are unconstrained. $\delta N$ is obviously the fluctuation in $N$, $N = 1 + \delta N$, whereas the tensor $\delta E_{ij}$ is, apart from an extra factor of $N$, the fluctuation in the extrinsic curvature of constant-$t$ surfaces,
\be
E_{ij} \equiv N K_{ij} \;, \qquad \delta E_{ij} \equiv E_{ij} - a^2 H g_{ij} \; .
\ee
Finally, the dots stand for higher-derivative terms---which in our case vanish, because of the magic properties of our conformally invariant Lagrangian---and for interaction terms, cubic and higher in the metric fluctuations---which we are not interested in. At the quadratic, two-derivative level the action (\ref{ADMaction}) is all we need.
Our action \eqref{piADM} can indeed be recast in the form \eqref{ADMaction} (see Appendix \ref{details}) with
\be \label{M4}
M^4(t) = \frac43 \frac{f^2}{H_0^2}\left(2 \dot\pi_0^4 + \dot\pi_0^2 \ddot\pi_0 + 9 H \dot\pi_0^3\right) \; ,
\qquad \hat M^3(t) = \frac43 \frac{f^2}{H_0^2} \dot\pi_0^3 \; .
\ee
In the following, however, we keep the analysis as general as possible, because then we can
apply it to other conformally invariant Lagrangians as well.
However from the expression for $\hat M^3$ above we see why we can violate the `theorem' of ref.~\cite{markus}: there it was assumed that the rate at which the Lagrangian coefficients---in particular $\hat M^3(t)$---vary with time is at most of order $H$. Here however at early times $H \sim f^2/(\mpl^2 H_0^2 t^3)$, whereas
$(1/\hat M^3) \partial_t \hat M^3 \sim 1/t$, which is much larger than $H$. Thus our example does not satisfy the hypotheses of the theorem. On the other hand, at late times the rate of $\hat M^3(t)$ is slower than the Hubble rate. However at late times $\dot H \ll H^2$, which, according to ref.~\cite{markus} is compatible with a ghost-free violation of the NEC. In conclusion: there is no contradiction with our NEC-violating system being free of instabilities throughout.
Before proceeding, it is time to comment on what changes if we allow for a more general conformal Galilean Lagrangian to start with. The NEC violating background solution will be the same \cite{Nicolis:2009qm}, apart from numerical factors and this fixes the first line of \eqref{ADMaction}. Galilean theories give rise to equation of motion containing at most two derivatives on each field \cite{Nicolis:2008in}, so that at the quadratic level the operators $(\delta N)^2$ and $\delta E^i_i \delta N$ are the only possible ones: all the others would give rise to equation of motion with more than two derivatives on $\xi$ \cite{markus}. With a proper choice of the coefficients of the Galilean Lagrangian the fluctuations around the NEC violating solution are healthy, as in our example eq.~\eqref{minimal}, and it is also possible, at the same time, to have stable perturbations around the $\pi = 0$ background \cite{Nicolis:2009qm}, i.e.~to flip the worrisome sign of the first term of eq.~\eqref{minimal}. Also the time dependence of eq.~\eqref{M4} will remain the same in the first phase $|t| \gg t_0$ for a general Galileon theory, so that all our conclusions can be straightforwardly applied to the more general case as well.
We finally move to compute the quadratic action for the propagating scalar mode. This was already done in ref.~\cite{markus} for the Lagrangian (\ref{ADMaction}), but only in the $M^4, \hat M^3 = {\rm const}$ case, which as we argued is quite different from ours.
Following Maldacena \cite{maldacena}, the spatial diffs can be fixed for instance by imposing
\be \label{zetagauge}
g_{ij} = a^2(t) \big[ (1+2 \zeta ) \delta_{ij} + \gamma_{ij} \big] \;, \qquad
\di_i \gamma_{ij} = 0\; , \quad \gamma_{ii} = 0 \; .
\ee
The transverse traceless matrix $\gamma_{ij}$ corresponds to tensor modes, which we will discuss below. For the moment we can consistently set $\gamma_{ij} = 0$, since 3D rotations are left unbroken by the background solution, thus preventing any mixing between scalar and tensor modes at the quadratic level, ${\bf 2} \otimes {\bf 0} \not\supset {\bf 0}$. $\zeta$ parametrizes the only scalar propagating d.o.f. As we will now see, the remaining metric components, $g_{00}$ and $g_{0j}$, can be expressed as functions of $\zeta$ through the constraint equations.
These are the variations of the full action $S = S_g + S_\pi$ with respect to $N$ and $N^j$ (see their explicit form in the Appendix \ref{details}). At zeroth-order in the fluctuations, the constraints are solved by the background solution. In particular, the Hamiltonian constraint reduces to Friedmann equation, whereas the momentum constraint is trivial. To get the quadratic Lagrangian for $\zeta$, we need to solve the constraints at first order in the perturbations $\zeta$, $\delta N$, and $N^j$. Defining $N^j = \di_j \beta$
(we can set to zero the transverse vector piece in $N^j$, for the same reason as for the tensor modes), we get
\bea
\label{deltaNzeta}
\delta N & = & \sfrac{2 \mpl^2}{2\mpl^2H - \hat M^3} \, \dot \zeta \\ \label{psizeta}
\nabla^2 \beta & = & - \sfrac{2 \mpl^2}{2\mpl^2 H - \hat M^3} \, \sfrac1{a^2} \nabla^2 \zeta
+ \sfrac{- 4\mpl^4 \dot H - 12 \mpl^2 H \hat M^3 + 3 \hat M ^6 + 2 \mpl^2 M^4 }{(2 \mpl^2H - \hat M^3 )^2} \, \dot \zeta \;.
\eea
The quadratic action for $\zeta$ is then obtained by plugging these back into the original action, eq.~(\ref{Sg}) plus eq.~(\ref{ADMaction}). After some integrations by parts we get
\be \label{zeta_action}
S_\zeta = \int \! d^4x \, a^3 \left[ A(t) \, \dot \zeta^2 -
B(t) \, \sfrac1{a^2} \big( \vec \nabla \zeta \big)^2 \right] \; ,
\ee
where
\bea
A(t) & = & \frac{\mpl^2 \big( - 4\mpl^4 \, \dot H - 12 \mpl^2 \, H \hat M^3 + 3 \hat M ^6 + 2 \mpl^2 \, M^4 \big)}{\big(2 \mpl^2 H - \hat M^3 \big)^2}
\label{A(t)}\\
B(t) & = & \frac{\mpl^2 \big( - 4\mpl^4 \, \dot H + 2 \mpl^2 \, H \hat M^3 - \hat M ^6 + 2 \mpl^2 \, \di_t \hat M^3 \big)}{\big(2 \mpl^2 H - \hat M^3 \big)^2}
\label{B(t)}
\; .
\eea
As a check, notice that for a cosmology driven by a minimally coupled scalar with a standard kinetic term and non-derivative interactions, we have $\hat M^3 = M^4 = 0$. This implies $A(t) = B(t) = - \mpl^2 \, (\dot H/ H^2)$, which is Maldacena's result \cite{maldacena}.
At early times---or equivalently at leading order in $1/\mpl^2$---we have (see eqs.~(\ref{earlytimeH}) and (\ref{M4}))
\be
\label{expearly}
H \simeq - \frac13 \frac{f^2}{\mpl^2} \cdot \frac{1}{H_0^2 t^3} \;, \quad M^4 \simeq \frac{4 f^2}{H_0^2} \frac1{t^4} \; , \quad \hat M^3 \simeq - \frac{4 f^2}{3 H_0^2} \frac1{t^3} \; , \quad \qquad |t | \gg \frac{f}{\mpl} H_0^{-1} \; ,
\ee
so that $A(t)$ and $B(t)$ above reduce to
\be
A(t) = B(t) = \frac{9 \mpl^4 H_0^2}{f^2} \, t^2 \; .
\ee
Also, given the smallness of $H$, the scale factor can be approximated as constant, $a(t) = 1 + {\cal O}(1/t^2)$. Therefore at early times we have
\be
\label{zetaaction}
S_\zeta = \frac{9 \mpl^4}{f^2} \int \! d^4x \, (H_0 t)^2 \left[ \dot \zeta^2 -
\big( \vec \nabla \zeta \big)^2 \right] \; .
\ee
It is easy to deduce the spectrum of $\zeta$ directly from this action. The action is invariant under
\be
t, \vec x \to \lambda t, \lambda \vec x \qquad \zeta \to \frac{1}{\lambda^2} \zeta \;;
\ee
as a result, the equal-time 2-point function of $\zeta$ must have the general form
\be \label{generic_form}
\langle \zeta (t, 0) \zeta(t, \vec x) \rangle = \frac{f^2} {\mpl^4 H_0^2} \frac{1}{|\vec x|^4} F\big( |\vec x| / t \big) \; ,
\ee
where $F$ is a generic function, with no additional dependence on the model parameters (the normalization prefactor comes from the overall constant appearing in the action (\ref{zetaaction}).) Notice that it is crucial that we look at symmetries of the action rather than simply at symmetries of the equation of motion (under which the action might change by an overall multiplicative constant). Indeed, if we want the $n$-point function to have the same transformation properties as $\zeta^n$, the vacuum state has to be invariant under the symmetries considered. This is the case, barring spontaneous breaking, if the action is invariant.
At short distances, $|\vec x| \ll t$, we have to recover the standard Minkowski 2-point function for a (non-canonically normalized) massless field:
\be
F\big( |\vec x| / t \big) \sim \frac{ |\vec x|^2}{t^2} \;, \quad |\vec x| \ll |t| \; .
\ee
To get the behavior of $F$ at large distances, we use the fact that the quantum $\zeta$ solves the classical equations of motion. For long wavelengths, or equivalently late times, we have the two behaviours
\be
\zeta \sim {\rm const}\;,\;\frac1t \;.
\ee
The second solution dominates, so that $\langle \zeta \zeta \rangle \sim 1/t^2$ also at late times. This implies\footnote{Using the same method for inflation, one would start with the action in conformal time $\eta$ of the form \cite{maldacena}
\be
\label{inflactionconf}
S= \mpl^2 \epsilon \int d^4 x \;\frac{1}{H^2 \eta^2} \left[\zeta'^2-(\nabla\zeta)^2\right] \;,
\ee
where $H$ and the slow-roll parameter $\epsilon$ can be taken as constants at leading order in slow-roll. This action is invariant under
\be
\eta, \vec x \to \lambda \eta, \lambda \vec x \qquad \zeta \to \zeta \;,
\ee
which implies
\be \label{generic_form_infl}
\langle \zeta (\eta, 0) \zeta(\eta, \vec x) \rangle = \frac{H^2} {\mpl^2 \epsilon} F\big( |\vec x| / \eta \big) \;,
\ee
with $F \sim \eta^2/|\vec x|^2$ for $|\vec x| \ll |\eta|$ to reproduce the Minkowski result and $F \sim $ const for $|\vec x| \gg |\eta|$ to reproduce the time evolution $\zeta \sim$ const one deduces from the equation of motion. In Fourier space this gives the celebrated $1/k^3$ spectrum.
}
\be \label{2pf}
\langle \zeta (t, 0) \zeta(t, \vec x) \rangle \propto \frac{f^2} {\mpl^4 H_0^2} \frac{1}{|\vec x|^2 t^2} \; ,
\ee
which is a very blue spectrum, going as $k^{-1}$ in Fourier space. Indeed the standard calculation in terms of modes (see Appendix \ref{squeezing}) gives
\be
\langle \zeta(t,\vec k) \zeta(t,\vec k') \rangle = (2\pi)^3 \delta(\vec k + \vec k') \frac{1}{18} \frac{f^2}{\mpl^4 H_0^2}\frac{1}{2 k} \frac1{t^2} \;.
\ee
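As a consistency check, using the (regularized) flat-space transform $\int \! \frac{d^3k}{(2\pi)^3} \, e^{i \vec k \cdot \vec x}/k = 1/(2\pi^2 |\vec x|^2)$, this corresponds in position space to
\be
\langle \zeta (t, 0) \zeta(t, \vec x) \rangle = \frac{1}{72 \pi^2} \, \frac{f^2}{\mpl^4 H_0^2} \, \frac{1}{|\vec x|^2 t^2} \;,
\ee
fixing the proportionality constant in eq.~(\ref{2pf}).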
Although the spectrum clearly shows that the Galileon perturbations are irrelevant on large scales, the reader may be puzzled. Indeed, the two independent solutions of the equation of motion are
\be
\label{zetamodes}
\frac{\sin {k t}}{k t} \; , \quad \frac{\cos {k t}}{kt}
\ee
which, in the long-wavelength limit, respectively reduce, as we discussed, to
\be
\zeta \sim {\rm const}\;,\;\frac1t \;.
\ee
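Indeed, the equation of motion following from (\ref{zetaaction}) for a single Fourier mode reads
\be
\di_t \big( t^2 \dot \zeta_k \big) + t^2 k^2 \zeta_k = 0 \qquad \Longleftrightarrow \qquad \ddot u_k + k^2 u_k = 0 \;, \quad u_k \equiv t \, \zeta_k \;,
\ee
whose solutions $u_k \propto \sin kt, \, \cos kt$ reproduce (\ref{zetamodes}).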
The first solution describes the celebrated
conservation of $\zeta$ on super-horizon scales\footnote{See
\cite{Cheung:2007sv} for a proof of the conservation of $\zeta$ at
all orders in perturbation theory, that applies to the Lagrangian
studied here.}. However, this solution is irrelevant as
the second mode dominates at late times. On the other hand we saw
that, in the absence of gravity, we have an attractor and we thus
expect to flow to the adiabatic solution $\zeta$ = const.
To understand why it is not the $\zeta= {\rm const}$ mode that dominates, in the next Section we move to Newtonian gauge, where things will (hopefully) become clearer. That Section may however be skipped without loss of continuity.
As to the cosmological (ir)relevance of adiabatic perturbations, in Appendix \ref{squeezing} we study the quantum state of each Fourier mode until it eventually comes back into the Hubble radius, calculating the amount of squeezing induced by the cosmological evolution and comparing our case with inflation. We will see that no appreciable squeezing is produced and all Fourier modes are practically in the ground state when relevant for observations. This unambiguously shows that there are no sizable perturbations.
\section{\label{Newtonian}The two adiabatic modes in Newtonian gauge}
To clarify the situation it is better to write things in a way that has a smooth limit when gravity is decoupled, which is clearly not the case in the gauge we are using, where perturbations of the scalar are set to zero. Things are much clearer in Newtonian gauge
\be
ds^2 = -(1+ 2 \Phi) dt^2 + a^2(t) (1- 2 \Psi) dx^2 \;.
\ee
Instead of doing an explicit change of gauge we can equivalently use the Bardeen potentials (for a recent review on cosmological perturbation theory see \cite{Malik:2008im}), i.e.~the gauge invariant combinations which coincide with the scalar perturbations in Newtonian gauge. These are expressed in terms of our variables
as
\begin{eqnarray}
\Phi & = &\delta N + (a^2 \beta)^{\bf\dot{}} \label{Phi} \\
\Psi & = & -\zeta - H a^2 \beta \;. \label{Psi}
\end{eqnarray}
Also the perturbation of the scalar field in Newtonian gauge can be written as a gauge invariant combination which in our notation reduces to
\be
\label{piGI}
\xi = a^2 \beta \;.
\ee
First of all, let us verify that in this gauge one has a smooth
$\mpl \to \infty$ limit \footnote{The mixing with gravity is important at all scales, but it fades away in the $\mpl \to \infty$ limit with fixed $f$ and $H_0$. Notice however that if one uses \eqref{zetaaction} to write the action for $\varphi$ in the spatially flat gauge, $\zeta = - \varphi \cdot H /\dot \pi_0$, the result does not reduce to the one without gravity, eq.~\eqref{phiaction}, sending $\mpl \to \infty$. The reason for this unexpected result is that if we take $\mpl \to \infty$ in a generic gauge, the spacetime does become flat, but we are not guaranteed that it be written in standard coordinates with metric $\eta_{\mu\nu}$. Indeed it is straightforward to check, starting from \eqref{deltaNzeta} and \eqref{psizeta} and doing the change of gauge, that in spatially flat gauge
\be
\label{ADMflat}
\delta N_{\varphi} = - \frac{\dot\zeta}{H} - \left(\frac{\zeta}{H}\right)^{\bf{\dot{}}} \qquad
\beta_{\varphi} = \frac{2}{a^2 H} \zeta + \frac{9 \mpl^2 H_0^2 t^2 }{f^2} \frac1{\nabla^2} \dot \zeta \;.
\ee
The limit must be taken while keeping $\varphi$, i.e.~$\zeta/H$, constant; we see that both $\delta N$ and $\beta$ remain finite in spatially flat gauge when $\mpl \to \infty$: the metric does not become $\eta_{\mu\nu}$.
This does not happen if one considers a model with a scalar field with a minimal kinetic term: in this case $\delta N$ and $\beta$ go to zero when $\mpl \to \infty$.}.
We have to check that in this limit, keeping
the amplitude of the scalar mode perturbation $\xi$ fixed, the metric
becomes Minkowski, i.e.~$\Phi$ and $\Psi$ go to zero. From
eq.~\eqref{psizeta} one can see that in the $\mpl \to \infty$ limit
with fixed $\xi$, which is equivalent (via eq.~\eqref{piGI}) to
fixed $\beta$, $\zeta$ goes to zero. As $H$ also goes to zero, $\Psi$
in eq.~\eqref{Psi} goes to zero. For $\Phi$ the limit is not so
evident as both terms in eq.~\eqref{Phi} remain finite. However one
can check that they cancel in the limit of decoupled gravity, by using
the equation of motion of $\zeta$ derived from the action
\eqref{zetaaction}. As spacetime approaches Minkowski when $\mpl \to
\infty$, also the equation of motion for the scalar will reduce to that in the absence of gravity.
We are now in a position to compare with the homogeneous perturbations studied in Section \ref{background}. Rewriting $\xi$ in terms of $\zeta$ using \eqref{psizeta} and the expressions \eqref{expearly} we have
\be
\label{hatpi}
\xi = a^2 \beta = \frac{\zeta}{H} + \frac{\dot H}{H^2} \frac{a^2}{\nabla^2} \dot \zeta \;.
\ee
From this we see that, at long wavelengths, the leading solution $\zeta \propto 1/t$ corresponds to $\xi ={\rm const}$---the attractor we found studying homogeneous perturbations. Let us check that the same holds for the metric as well. In the absence of anisotropic stress $\Psi$ equals $\Phi$ \footnote{Actually this equality is not so straightforward to obtain. If one expresses $\Phi$ as a function of $\zeta$ using \eqref{Phi} and \eqref{hatpi}, one gets
\be
\Phi = - \frac{\dot H}{H^2} \zeta + \left(\frac{\dot H}{H^2} \frac{a^2}{\nabla^2} \dot\zeta\right)^{\bf{\cdot}} \;.
\ee
There is a partial cancellation between the two terms using the equation of motion of $\zeta$ and the term $\dot H/ H^2 \zeta$ cancels (notice that $\dot H/ H^2 \gg 1$ in the limit we are considering.) To retain terms ${\cal O}(1) \times \zeta$---which are needed to check whether $\Phi = \Psi$---one should go beyond the approximation we are using. We do not do so, but we use the expression of $\Psi$ in which no cancellation occurs, taking for granted that subleading corrections will enforce $\Phi = \Psi$.}. For the $\zeta \sim 1/t$ mode, we can neglect the first term of the right-hand side of eq.~\eqref{Psi} so that
\be
\Psi \simeq - H \xi \qquad {\rm with} \quad \xi = {\rm const.}
\ee
As $\xi$ = const describes the unperturbed solution shifted in time by an amount $\xi$, we see that the metric also describes the unperturbed FRW shifted in time by $\xi$. Indeed $a^2(t + \xi) \simeq (1 + 2 H \xi) a^2$ (\footnote{In order to cast the metric in FRW form, one should also do a redefinition of the time coordinate. It is easy to check that the resulting effect on the metric is suppressed by ${\cal O}(H t)$ with respect to $\Psi$.}).
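As a concrete check, take $\zeta = c/t$ with $c$ constant; using the early-time expressions (\ref{expearly}) (which give $\dot H/H^2 = 9 \mpl^2 H_0^2 t^2/f^2$) and $a \simeq 1$, eq.~(\ref{hatpi}) yields
\be
\xi = - \frac{3 \mpl^2 H_0^2}{f^2} \, c \, t^2 + \frac{9 \mpl^2 H_0^2}{f^2} \, \frac{c}{k^2} \;,
\ee
which indeed approaches a constant at late times and long wavelengths.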
What is quite confusing is that we are used in standard inflation
to identifying the existence of an attractor with the $\zeta = {\rm const}$ mode's being the leading one at late times. Why does this not happen here? To
understand this different behaviour, it is useful to follow \cite{weinberg} and study the most generic adiabatic mode in Newtonian gauge. One can do so by considering the residual gauge freedom of Newtonian gauge at $k=0 \,$:
\be
t \to t + \epsilon(t) \qquad x^i \to x^i (1- \lambda) \;,
\ee
with constant $\lambda$.
Under these transformations the potentials transform as
\be
\Psi \to \Psi + H \epsilon - \lambda \qquad \Phi \to \Phi - \dot\epsilon \;.
\ee
These are just gauge modes. To become the $k \to 0$ limit of physical
solutions, they must satisfy Einstein equations at infinitesimal but non-vanishing $k$. These will set $\Phi = \Psi$, assuming
we have no anisotropic stress. Using this, one can find
the most generic adiabatic mode solving for $\epsilon(t)$
\be
\label{epsilonW}
\epsilon(t) = \frac{\lambda}{a(t)} \int^t_0 a(t') dt' + \frac{c}{a(t)}\;.
\ee
A generic adiabatic mode is thus fixed by two constants: $\lambda$ and
$c$. Usually one neglects $c$ as $\epsilon$ is dominated by the
integral, which grows as $t$ for any power-law expansion of the form
$a(t) \propto t^{2/(3(1+w))}$ with $1+w>0$. This is the standard
adiabatic mode. Notice that the constant $\lambda$ is the value
of $\zeta$ for the constant solution \cite{weinberg}: indeed it parameterizes a rescaling of the spatial
coordinates. For a constant $w$, one obtains a constant value for
$\Phi$
\be
\Phi = \Psi = - \frac{1+w}{\frac53 + w} \lambda \;,
\ee
which is the standard relation between the Newtonian potential and the
conserved quantity $\zeta$.
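Explicitly, for $a \propto t^p$ with $p = 2/(3(1+w))$ and $c = 0$, eq.~(\ref{epsilonW}) gives $\epsilon = \lambda t/(p+1)$, so that
\be
\Phi = - \dot \epsilon = - \frac{\lambda}{p+1} \;, \qquad \Psi = H \epsilon - \lambda = \Big( \frac{p}{p+1} - 1 \Big) \lambda = - \frac{1+w}{\frac53 + w} \, \lambda \;,
\ee
consistently with $\Phi = \Psi$.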
Our situation, however, is quite different. Taking $a \simeq 1$, we see that
the first term in eq.~\eqref{epsilonW} still goes as $t$, but now this
implies that it becomes subdominant as $t$ gets close to zero. The dominant adiabatic
mode is now given by $\epsilon = c/a$, which in our
approximation amounts to $\epsilon \sim c$, corresponding to a constant time-shift. This makes perfect
sense: as the model we are studying has a smooth $\mpl \to \infty$
limit, we expect it to be dominated by an adiabatic mode that reduces to a
constant time shift, which is what we get in the absence of gravity.
It is immediate to work out the precise relation between the coefficients $\lambda$ and $c$, describing the general long wavelength solution in Newtonian gauge and the two modes of $\zeta$, eq.~\eqref{zetamodes}. We can relate $\Psi$ with $\zeta$ using \eqref{Psi} and \eqref{hatpi}: we see, as already discussed, that the $\zeta \propto \cos{kt}/t$ corresponds to a constant $\epsilon$ and has $\lambda =0$. On the other hand the mode $\zeta = A \sin{kt}/kt$ gives $\epsilon \simeq A t$ with $c=0$ and $\lambda =A$: as expected $\lambda$ corresponds to the amplitude of the constant $\zeta$ mode.
This adiabatic mode enjoys similar properties as the standard $\zeta
= {\rm const}$ one. Independently of any details about the future evolution,
$\epsilon \propto 1/a$ remains a solution. This implies that this mode
is completely irrelevant for observations, as it quickly decays away
in the standard FRW evolution.
To complete the picture, in Appendix \ref{details} we reproduce in Newtonian gauge the results for homogeneous perturbations of Section \ref{background}.
\section{\label{fake}A second scalar in the fake de Sitter}
The conclusion of the previous Sections is that the fluctuations of the $\pi$ scalar are cosmologically irrelevant. We are therefore forced to look for an alternative mechanism to give rise to the observed scale invariant spectrum of primordial perturbations. We do not have to try very hard as the model itself naturally suggests one. The fictitious metric
\be
g_{\mu\nu}^{(\pi)} \equiv e^{2 \pi(x)} \eta_{\mu\nu} \;,
\ee
with $\pi$ following the unperturbed solution \eqref{pidesitter}, describes de Sitter space. Notice that any coupling of additional degrees of freedom with $\pi$ will have to go through the metric above to preserve the conformal invariance of the theory and, for de Sitter, any tensor constructed with the metric is proportional to the metric itself. This means that a second scalar $\sigma$ coupled to $\pi$ will behave as in de Sitter space. If $\sigma$ is massless---which can be ensured by a shift symmetry $\sigma \to \sigma + {\rm const}$---its spectrum will be scale invariant,
\be
\langle \sigma(\vec k) \sigma(\vec k') \rangle = (2\pi)^3 \delta(\vec k + \vec k') \frac{H_0^2}{2 k^3} \;,
\ee
while the inclusion of a small mass term will tilt the spectrum either way. It is straightforward to check that the corrections coming from the evolution of the ``real'' metric $g_{\mu\nu}$ are exponentially small, for modes of cosmological interest. This means that a massless $\sigma$ will acquire an {\em exactly} scale invariant spectrum, which is still marginally compatible with the data \cite{Pandolfi:2010dz}. We stress that gravity breaks the conformal symmetry of the Galileon Lagrangian, and this may induce small corrections to the scale invariant spectrum above.
The conversion of $\sigma$ fluctuations into adiabatic ones can happen through one of the mechanisms that have been studied at length for inflation: $\sigma$ may change the way $\pi$ reheats the Universe \cite{Dvali:2003em,Dvali:2003ar}, or become relevant at a later epoch \cite{Lyth:2001nq}. As the conversion mechanism is model dependent, unfortunately we cannot infer the value of $H_0$ from the data. However, the experimental signatures are the typical ones for a ``second field'' mechanism: large, local non-Gaussianities and, possibly, isocurvature perturbations.
As it is common to most alternatives to inflation, the gravitational wave spectrum is very blue in our model, and unobservable. Gravity waves are just sensitive to what the ``real'' metric is doing; given the rapid increase of $H$ we will have a very blue spectrum of tensor modes. Indeed given that $a$ can be approximated as a constant while $H$ blows up, each mode gets frozen---$k/a \sim H$---at an amplitude of order of the Minkowski quantum uncertainty. Therefore we have a spectrum $\propto k^{-1}$, very suppressed on cosmological scales.
\section{\label{super}Faster than light}
We finally analyze the worrisome feature of our scenario: superluminality. It was shown in \cite{Adams:2006sv, Nicolis:2009qm, Nicolis:2008in} that for DGP- and Galileon-like theories superluminal excitations may be generically expected about non-trivial solutions. Let's briefly review the general arguments of \cite{Nicolis:2009qm} and check whether they apply to our case as well.
Consider our model (\ref{minimal}) in the absence of gravity ($M_{\rm Pl} \to \infty$, $g_{\mu\nu} \to \eta_{\mu\nu}$) and about the trivial configuration with $\pi = 0$ (\footnote{As we discussed, fluctuations around $\pi =0$ are ghost-like for the action \eqref{minimal}, but this sign can be flipped starting from a more general Galileon Lagrangian and preserving the NEC violating solution we are interested in. Let us assume thus that the sign is the healthy one: the discussion below remains unaltered.}). $\pi$ excitations are of course exactly luminal, for Lorentz invariance is unbroken. Now turn on localized sources so as to create a weak stationary $\pi_0 (\vec x)$ field. By `weak' we mean that $\pi$'s self-interactions are unimportant to determine the field configuration, that is the solution obeys
\be \label{Laplace}
\nabla^2 \pi_0 \simeq 0 \; ,
\ee
outside the sources. The quadratic Lagrangian for small perturbations $\delta \pi$ about this solution is
\be
\delta_2 {\cal L} = f^2 \, G^{\mu\nu} \, \di_\mu \delta\pi \,\di_\nu \delta \pi + \dots ,
\ee
where the (inverse) effective metric is
\be
G^{\mu\nu} = \eta^{\mu\nu} \Big(1+ \frac{4}{3 H_0^2} \Box \pi_0 \Big) - \frac{4}{3} \frac{1}{H_0^2} \di^\mu \di^\nu \pi_0 \; , \label{effective_inverse}
\ee
and the dots stand for non-derivative terms for $\delta \pi$, as well as for corrections that are at least quadratic in the background field $\pi_0$ and derivatives thereof. The causal structure is determined by the highest derivative terms in the quadratic Lagrangian---therefore non-derivative ones like mass terms are irrelevant for our discussion. Also irrelevant is the correction proportional to $\eta_{\mu\nu}$ in (\ref{effective_inverse})---at the order in $\pi_0$ we are keeping, it is just an overall conformal factor, which does not affect the light-cone aperture. In conclusion, the propagation of $\delta \pi$ is constrained by the light-cone of
\be \label{effective_metric}
G_{\mu\nu} \simeq \eta_{\mu\nu} + \frac43 \frac{1}{H_0^2} \di_\mu \di_\nu \pi_0 \; .
\ee
Because of (\ref{Laplace}), this is narrower than the Minkowski light-cone in some directions, but wider in others \cite{Nicolis:2009qm}.
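A minimal way to see this: for a stationary background the only non-vanishing components of $\di_\mu \di_\nu \pi_0$ are $\di_i \di_j \pi_0$, which by (\ref{Laplace}) is traceless and therefore has eigenvalues $\mu_i$ of both signs. Along an eigendirection, null rays of (\ref{effective_metric}) propagate with
\be
c^2 \simeq \Big( 1 + \frac{4 \mu_i}{3 H_0^2} \Big)^{-1} \simeq 1 - \frac{4 \mu_i}{3 H_0^2} \;,
\ee
which exceeds one wherever $\mu_i < 0$.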
Notice that this conclusion only relies on the presence of the Galileon cubic interaction $(\di \pi)^2 \Box \pi$, regardless of its sign---which in fact can be changed by redefining $\pi \to - \pi$.
At the classical level there is no way out---generic weak-field solutions admit superluminal excitations.
At the quantum level, however, we have to make sure that the effect is measurable within the effective theory. Roughly speaking, this is the case if signals of frequency lower than the UV cutoff can gain an order one phase-shift over exactly luminal signals of the same frequency. Suppose $\di \di \pi_0$ in (\ref{effective_metric}) can be approximated as constant over some distance $L$, and let's make a $\delta \pi$ signal travel such a distance. In order for our weak-field approximation to be valid throughout this region we need
\be \label{weak_field}
\di \di \pi_0 \ll H_0^2 \; , \qquad \di \pi_0 (L) \sim\di \di \pi_0 \times L \ll H_0 \; ,\qquad \pi_0 (L) \sim \di \di \pi_0 \times L^2 \ll 1 \; .
\ee
The superluminal shift in the velocity is of order $\delta c \sim \di \di \pi_0 / H_0^2 $, corresponding to an overall phase-shift for $\delta \pi$
\be
\delta {\rm phase} \sim \delta c \, \omega L \sim \frac{\di \di \pi_0 \, L}{ H_0}\,\frac{\omega}{H_0} \; ,
\ee
where $\omega$ is the signal's frequency.
The phase shift is much smaller than $\omega/H_0$ in the weak field regime---see eq.~(\ref{weak_field})---and becomes of order $\omega/H_0$ if we stretch the linear approximation to the limit. This means that the source of superluminality we identified (there may be others) is ineffective if we declare that the effective theory breaks down at energies/frequencies of order $H_0$, so that the would-be superluminal effect is not measurable consistently within the EFT. Notice that $H_0$ is well below the strong-coupling scale of the theory \cite{Nicolis:2009qm}.
The above discussion applies to small deformations of the trivial $\pi=0$ background. But what about our cosmological solution? As we will see, essentially the same conclusion applies. However the scale $H_0$ will be replaced by $1/t$---not surprisingly given our solution's scale-invariance. Consider first our solution in the absence of gravity, eq.~(\ref{pidesitter}). Given the high-degree of symmetry of the original Lagrangian (SO(4,2)) as well as of the solution (SO(4,1)), small fluctuations about this configuration are exactly luminal \cite{Nicolis:2009qm}. We now want to run the same argument as above---turning on a weak-field deformation of this background and studying its small excitations---but for simplicity we want to do so in a patch small enough so that we can approximate the ``fake'' de Sitter metric $e^{2 \pi_{\rm dS}} \eta_{\mu\nu}$ as flat. Given the homogeneity of de Sitter space, all points are equivalent, and we can choose for instance $\vec x = 0$, $t = -H_0^{-1}$ as the center of our patch. We can then perform a special conformal transformation combined with a translation---recall that our Lagrangian is conformally invariant---to rewrite the de Sitter background as
\cite{Fubini:1976jm, Nicolis:2008in}
\be
e^{2 \pi_{\rm dS}} = \frac{1}{1+\frac14 H_0^2 \, x^\mu x_\mu} \; ,
\ee
where now $x^\mu$ is measured from our new origin
\footnote{Notice that performing conformal transformations does not perturb the causal structure, even though they do not commute with the Lorentz group, because they only affect the metric by a conformal factor.}.
This rewriting of the solution makes it immediate to carry out our analysis in a small patch centered at the origin. First of all, the de Sitter conformal factor reduces to
\be
e^{2 \pi_{\rm dS}} \simeq 1 + {2 \pi_{\rm dS}} \simeq 1 - \sfrac14 H_0^2 \, x^\mu x_\mu \; .
\ee
Second, at lowest order in $H_0^2 x^2$, the full non-linear dynamics of perturbations about the $\pi_{\rm dS}$ background are given by a Galileon Lagrangian whose coefficients are of the same order as the original ones \cite{Nicolis:2008in}. In other words, expanding a generic Galileon Lagrangian about a $\pi \propto x^\mu x_\mu$ solution yields another Galileon Lagrangian with similar coefficients. As a consequence, as long as we restrict to distances from the origin smaller than $H_0^{-1}$, our analysis above applies unaltered. We thus have superluminal excitations, which do not really have a chance of yielding measurable superluminal effects if the effective theory breaks down at frequencies of order $H_0$, or below.
However here the appearance of $H_0$ stems uniquely from our choosing to expand about $t=-H_0^{-1}$. This is made manifest by working with the field
\be
\phi = H_0 e^\pi \; ,
\ee
in terms of which the Lagrangian takes the form \cite{Nicolis:2009qm}
\be
{\cal L} = \frac{f^2}{H_0^2} \phi^4 F\Big( \frac{\di \phi}{\phi^2}, \frac{\di \di \phi}{\phi^3} \Big) \; ,
\ee
where $F$ is a polynomial with order-one coefficients.
The overall dimensionless factor $f^2/H_0^2$ has no effect at the level of classical equations of motion. Then the only scale present in the Lagrangian is the local value of $\phi$. Our de Sitter solution (in the original coordinates) corresponds to
\be
\phi_{\rm dS} = -\frac{1}{t} \; .
\ee
So, the fact that to avoid superluminality in a neighborhood of $t = -H_0^{-1}$ we have to impose a frequency cutoff of order $H_0$, implies that to avoid superluminality about a generic $t$ the UV cutoff has to be $1/|t|$. Notice that cosmological perturbations of the Galileon field freeze out at frequencies precisely of order $1/|t|$.
Therefore, if we decide to ban superluminality from our effective theory, we also lose predictivity for cosmological observables.
The other possibility---swallowing the presence of measurable superluminal effects in a Lorentz-invariant effective theory---does not necessarily lead to inconsistencies. As long as the effective theory is free of closed time-like curves (which for our model has not been proven nor disproven yet), there are no pathologies from the low-energy viewpoint (see for example \cite{Babichev:2007dw} for an optimistic point of view). However, physically measurable superluminality certainly implies that the effective theory at hand cannot arise as the low-energy limit of a microscopic theory with the standard relativistic causal structure, such as a renormalizable Lorentz-invariant QFT for instance \cite{Adams:2006sv}.
The introduction of dynamical gravity perturbs the above analysis, and to some extent its conclusions too. To begin with, our cosmological solution (\ref{pidesitter}) gets modified: slightly at early times (eq.~(\ref{earlytimepi})), drastically at late ones (eq.~(\ref{polephase})). As a consequence its de Sitter symmetry is gone, and small $\delta \pi$ perturbations are no longer exactly luminal, even in the absence of the sources we needed to run the above arguments. Second, given the peculiar structure of the Galilean self-interactions, the mixing of $\delta \pi$ with scalar gravitational perturbations is relevant at all scales (see a related discussion in \cite{markus}), and cannot be ignored even when studying sub-horizon perturbations.
We thus have to use eq.~(\ref{zeta_action}), which is the quadratic action for the propagating scalar mode about the FRW background, taking into account gravitational corrections. The propagation speed for short-wavelength excitations as measured by a comoving observer in terms of the background FRW metric is
\be
c^2_\zeta = \frac{B(t)}{A(t)} \; .
\ee
In other words, if $c^2_\zeta$ thus defined is larger than one, $\zeta$ excitations exit the FRW light-cone. Plugging the approximate solutions we found in sect.~\ref{background} into eqs.~(\ref{A(t)}, \ref{B(t)}), we get
\bea
t \to -\infty : & \quad & c^2_\zeta \simeq 1- \frac{32}{9} \frac{f^2}{\mpl^2} \frac{1}{H_0^2 t^2 } \\
t \to t_0 : & \quad & c^2_\zeta \to 0 \; .
\eea
In both regimes the correction to the propagation speed is {\em sub}-luminal---extremely so at late times
\footnote{The vanishing of $c^2_\zeta$ for $t$ approaching $t_0$ signals that, in such a limit, higher-derivative corrections to the perturbations' gradient energy cannot (and should not) be neglected, pretty much like for the ghost condensate \cite{ArkaniHamed:2003uy}.}.
This relaxes our conclusions above somewhat: since the cosmological background introduces a subluminal offset into the excitations' speed, we now need perhaps small but finite perturbations to overturn this offset and make excitations about a new background superluminal.
Although this is certainly more welcome than the opposite result, in practice it is not very helpful: at very early times the gravitational correction to the propagation speed goes to zero---like all other gravitational effects in our model. This means that, at least at early times, we have to live with superluminality, or give up the model.
\section{\label{conclusions}Conclusions and Outlook}
We are putting forward a model for our Universe's early cosmology that departs strikingly from the conventional inflationary picture. Schematically: there is no Big Bang in our past; spacetime is flat at $t \to -\infty$; related to this, the Universe is initially devoid of any form of energy; energy and the associated Hubble expansion get created by a NEC-violating sector.
The main virtue of the model lies not in its sheer radicalness---we are certainly not the first authors to come up with ``phantom''-like equations of state---but in its being able to associate such a radicalness with a healthy effective field theory coupled to gravity, and in the consequent robustness of the scenario. Our theory---the Galileon or more precisely its conformally invariant generalization---is well-behaved classically as well as quantum mechanically, even for strongly non-linear background solutions. The structure of the Lagrangian is protected by symmetries. More relevant for us, the system
retains stability even when the stress-energy tensor violates the NEC \cite{Nicolis:2009qm}. Partially as a consequence of this, the cosmological solution we outlined is an attractor---the universe wants to follow it even when initially displaced from it---which implies that our model solves the horizon and flatness problems as well as standard inflation does. Remarkably, expansion is not put in as an initial condition but follows from generic initial conditions, including contracting ones---within a bounded basin of attraction of course.
As to density perturbations, in its minimal incarnation our model does {\em not} produce sizable adiabatic perturbations on cosmological scales. However the symmetry structure is so constraining that postulating the existence of extra light scalars {\em unavoidably} yields nearly scale-invariant spectra for them---which can later be converted into adiabatic perturbations via any of the standard conversion mechanisms available on the market. The downside is that, like for standard multi-field inflationary models, predictions are more model-dependent than for single-field slow-roll inflation, although the presence of a sizeable local non-Gaussianity is rather robust.
The only reservation we have about welcoming our model as a compelling alternative to inflation concerns superluminality.
The generic presence of superluminal excitations about non-trivial solutions indicates that our model cannot arise as the low-energy limit of a standard relativistic UV-complete theory, like e.g.~a renormalizable Lorentz-invariant QFT. Depending on one's personal taste and attitude, reactions to giving up Lorentz invariance in the UV may range from disgust to excitement.
Minimally, it is fair to say that it makes us depart from known territory, especially when gravity is involved. We therefore feel that it deserves special care. It is interesting to note that superluminality in our model is tied to the presence of the DGP-like interaction $(\di \pi)^2 \Box \pi$, which in turn is forced upon us by demanding that scattering amplitudes obey standard properties of $S$-matrix theory \cite{Nicolis:2009qm}. However it is possible, in principle, that suitable deformations of the theory exist where such an interaction is absent and where superluminality is gone as well.\footnote{Given the non-renormalization theorem of \cite{LPR}, such a ``tuning'' would be preserved by quantum corrections.}
Of course we would like to maintain the nice features of our cosmological scenario---among which the near de Sitter invariance of the solution. So, instead of considering a conformal completion of the Galileon theory, we may consider a different symmetry group containing the de Sitter one as a subgroup and that reduces to the Galileon symmetry group in the appropriate limit. An obvious choice is the 5D Poincar\'e group ISO(4,1) \cite{Nicolis:2008in, Nicolis:2009qm, dRT}.
This possibility certainly deserves further study.
\section*{Acknowledgements}
It is a pleasure to thank Riccardo Rattazzi for collaboration in the early stages of this project and Niayesh Afshordi for useful discussions.
\begin{appendix}
\section{\label{details}Details on adiabatic perturbations}
This Appendix complements the study of adiabatic perturbations of Sections \ref{perturbations} and \ref{Newtonian}.
{\em Unitary gauge action}. To deduce eq.~\eqref{piADM}, we use that in unitary gauge the various terms in the action read
\bea
\sqrt{-g} \, e^{2\pi} (\di \pi)^2 & = & - e^{2 \pi_0}\dot \pi_0^2 \, \sqrt{g_3} \frac{1}{N}\\
\sqrt{-g} \, (\di \pi)^4 & = & \dot \pi_0^4 \, \sqrt{g_3} \frac{1}{N^3} \\
\sqrt{-g} \, \Box \pi (\di \pi)^2 & = & -2 \dot \pi_0^2 \ddot \pi_0 \, \sqrt{g_3} \frac{1}{N^3} + \dot \pi_0^3 \, \sqrt{g_3} \frac{1}{N} \left[ N^i \di_i \frac{1}{N^2} - \di_t \frac{1}{N^2} \right] \label{dgp_term}\; ,
\eea
where $g_3 \equiv \det g_{ij}$, we used that $ \sqrt{-g} = N \sqrt{g_3}$, and in the last line we integrated by parts. Also we made use of $g^{0i} = N^i / N^2$, where and henceforth spatial indices are raised and lowered with the spatial metric $g_{ij}$. We can rewrite the terms in brackets in terms of the extrinsic curvature of constant-$t$ hypersurfaces,
\be
K_{ij} \equiv \frac{1}{2N} \big[ \di_t g_{ij} - \nabla_i N_j - \nabla_j N_i \big] \; .
\ee
Indeed after straightforward manipulations and integrating by parts we get
\be
\dot \pi_0^3 \, \sqrt{g_3} \frac{1}{N} \left[ N^i \di_i \frac{1}{N^2} - \di_t \frac{1}{N^2} \right] = 2 \dot \pi_0^2 \ddot \pi_0 \, \sqrt{g_3} \frac{1}{N^3} + \sfrac23 \dot \pi_0^3 \, \sqrt{g_3} \frac{1}{N^2} K^i {}_i \; ,
\ee
where we used that $\di_t \sqrt{g_3} = \frac12 \sqrt{g_3} \, g^{ij} \, \di_t g_{ij}$. The first piece cancels exactly the first term in eq.~(\ref{dgp_term}) and we are left with eq.~\eqref{piADM} (\footnote{As $\ddot\pi \dot\pi^2$ is a total derivative, each term in $(\partial\pi)^2\Box\pi$ contains at least two spatial derivatives. That's why it is not surprising that it can be written solely in terms of an operator containing the extrinsic curvature $K$, which contains two spatial derivatives on $\pi$.}).
{\em Unitary action in the standard form}. To cast eq.~\eqref{piADM} in the form \eqref{ADMaction} we can expand the third term of \eqref{piADM} as $1/N^4 = 2/N^2 -1 + 4 \delta N^2 + \ldots$. The second term can be rewritten as
\be
\frac{1}{N^3} K^i_i = \delta\frac{1}{N^3}\delta K_i^i + K^i_i +3 H \frac{1}{N^3} -3 H \;.
\ee
Notice that $K^i_i$ appears as an additional tadpole term besides the ones in eq.~\eqref{ADMaction}. However one can get rid of it using the identity \cite{Cheung:2007st}\footnote{Notice there is a sign error in the last term of eq.~(80) of \cite{Cheung:2007st}.}
\be
\int d^4 x \sqrt{-g} \, f(t) K^\mu_\mu = \int d^4 x \sqrt{-g} \, f(t) \nabla_\mu n^\mu = -\int d^4 x \sqrt{-g} \,\partial_\mu f(t) n^\mu= - \int d^4 x \sqrt{-g} \,\dot f (t) \frac{1}{N} \;.
\ee
In this way one can write the action in the form \eqref{ADMaction} and check that the coefficients of the tadpole terms can indeed be written in terms of $H$ and $\dot H$ using the expression of the $\pi$ stress-energy tensor.
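As a check of the expansion of $1/N^4$ used above: writing $N = 1 + \delta N$, both sides agree through quadratic order,
\be
\frac{1}{N^4} = 1 - 4 \, \delta N + 10 \, \delta N^2 + \ldots = \frac{2}{N^2} - 1 + 4 \, \delta N^2 + \ldots \;,
\ee
since $2/N^2 = 2 - 4 \, \delta N + 6 \, \delta N^2 + \ldots$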
{\em Constraint equations}. The explicit form of the constraint equations is
\bea
\mpl^2 \bigg[R_3 - \frac1{N^2}\big( E_{ij}E^{ij} - \big(E^i {}_i\big)^2 \big) + \frac{2}{N^2} \dot H
-2 \big( 3H^2 + \dot H\big) \bigg] + 2 M^4 \delta N
- 2 \hat M^3 \delta E^i {}_i = 0 && \\
\nabla_i \bigg[ \mpl^2 \frac1N \big(E^i{}_j - \delta^i_j \, E^k {}_k \big) - \delta^i_j \, \hat M^3 \delta N\bigg] = 0 &&
\eea
whose solution at linear order gives \eqref{deltaNzeta} and \eqref{psizeta}.
{\em Homogeneous perturbations in Newtonian gauge}. Let us check that in Newtonian gauge we find the same two regimes of perturbations we found in Section \ref{background}: one when the perturbation dominates the energy density (this is equivalent to decoupling gravity, as the energy density of the background vanishes when $\mpl \to \infty$) and one when perturbations are small and only perturb the background energy density. The first regime is obtained simply by sending $\mpl \to \infty$. In eq.~\eqref{hatpi} the leading solution in $\zeta$ gives, as we saw, the $\xi = {\rm const}$ mode, while the solution $\zeta \propto \sin{kt}/kt$ gives the decaying solution $\xi \sim t^5$, as can be easily verified. Actually this identification holds not only at long wavelengths, but is exact in the limit $\mpl \to \infty$: the second mode of $\zeta$ in eq.~\eqref{zetamodes} matches with the solution of eq.~\eqref{phiaction} that is constant at small $k$, while the first one matches with the $\xi$ mode that goes as $t^5$ at small $k$, without mixing. It is much trickier to study the regime when perturbations do not dominate the energy density. From Section \ref{background} we expect the decaying mode to give $\xi \sim t^2$ in this case, but as we said the decaying mode of $\zeta$ gives $\xi \sim t^5$. The point is that one has to be careful about the order of the two limits $\mpl \to \infty$ and $k \to 0$. Indeed, if one keeps higher orders in $1/\mpl^2$ in the action for $\zeta$, eq.~\eqref{zetaaction}, and in the relation between $\zeta$ and $\xi$, eq.~\eqref{hatpi}, and also takes into account the change in the time variable needed to compare with the FRW solutions of Section \ref{background}, the decaying mode goes as
\be
\xi \propto \frac{H_0^2 t^5 k^2}{5 f^2} \mpl^2 + \left(-\frac{13 \pi}{5 k} + \frac{13}{15} \pi t^2 k\right) + \ldots
\ee
where the dots stand for terms of higher order in $1/\mpl^2$ and $k$. We see that the $\xi \sim t^5$ solution dominates if one sends $\mpl \to \infty$ at fixed $k$. On the other hand sending $k \to 0$ at fixed $\mpl$ we have a constant term, which describes a mixing with the dominant $\xi = {\rm const}$ mode and the $t^2$ behaviour we were looking for.
\section{\label{squeezing} Squeezing and absence thereof}
As we discussed, the perturbations of the Galileon field $\pi$ have quite peculiar properties, rather different from the standard inflationary scenario. The best way to pin down the status of these perturbations is to determine the quantum state of each Fourier mode, when it comes back into the Hubble radius, for example during a radiation dominated phase. Indeed, in the linear approximation, the field is just a collection of harmonic oscillators, each with a time dependent Lagrangian, as the background we are perturbing around is time dependent.
This causes each Fourier mode to be in a squeezed state when it gets back into the Hubble radius. The amount and direction of squeezing uniquely fix the state of the perturbation and also tell us whether a classical interpretation in terms of classical stochastic variables is possible.
In order to follow the evolution of each Fourier mode until it comes back into the Hubble radius, it is useful to have a unified description in which perturbations around a homogeneous, isotropic Universe are always described, in each phase of the evolution of the Universe, by an action for the same scalar variable. We will do so using $\zeta$ as such a variable. In this Appendix we start by calculating the squeezing status of perturbations in the case of inflation and then compare it to our Galileon case. Of course the case of inflation is quite well known, but the way it is presented here is, to our knowledge, new.
The action for $\zeta$ during inflation is of the form \cite{maldacena}
\be
\label{inflaction}
S= \mpl^2 \int d^4 x \;a^3 \epsilon \left[\dot\zeta^2- \frac{1}{a^2} (\nabla\zeta)^2\right] \;,
\ee
where $\epsilon = \dot\phi^2/(2 H^2 \mpl^2)$ can be taken as a constant at leading order in slow-roll. The field $\zeta$ can be decomposed in terms of annihilation and creation operators as
\be
\label{zetaaa}
\zeta(t,\vec x) = \int \frac{d^3k}{(2\pi)^3} \left(\zeta_{\vec k}^{\rm cl}(t) a_{\vec k} + \zeta_{\vec k}^{{\rm cl}*}(t) a^\dagger_{\vec k}\right)\;.
\ee
As the field satisfies the equations of motion, the functions $\zeta_{\vec k}^{\rm cl}(t)$ are solutions of the equations of motion that at very early times, when the mode is much shorter than the Hubble radius, reduce to the Minkowski form:
\be
\zeta_{\vec k}^{\rm cl}(t) = \frac{1}{\sqrt{2 \epsilon} \mpl} \cdot \frac{H}{\sqrt{2k^3}}\left(1 - i \frac{k}{a H}\right) e^{i \frac{k}{a H} + i \vec k \vec x} \;.
\ee
Let us define a scalar product between classical solutions of the field equation
\be
\label{scalar}
\langle \zeta_1^{\rm cl} ; \zeta_2^{\rm cl} \rangle \equiv - i \int d^3x \left( \zeta_1^{\rm cl} \Pi_2^{\rm cl *} - \zeta_2^{\rm cl *} \Pi_1^{\rm cl}\right) \;,
\ee
where $\Pi^{\rm cl}$ is the momentum conjugate to $\zeta$. For the action \eqref{inflaction}, $\Pi^{\rm cl} = 2 \mpl^2 a^3 \epsilon \dot\zeta^{\rm cl}$. It is important to notice that this scalar product is time independent as a consequence of the equations of motion. The solutions $\zeta^{\rm cl}$ are normalized as
\be
\begin{split}
\label{norms}
\langle \zeta_{\vec k}^{\rm cl} ; \zeta_{\vec k'}^{\rm cl} \rangle & = (2 \pi)^3 \delta(\vec k - \vec k') \\
\langle \zeta_{\vec k}^{\rm cl *} ; \zeta_{\vec k'}^{\rm cl *} \rangle & = - (2 \pi)^3 \delta(\vec k - \vec k') \\
\langle \zeta_{\vec k}^{\rm cl} ; \zeta_{\vec k'}^{\rm cl *} \rangle & = 0 \;.
\end{split}
\ee
The action describing scalar perturbations during a phase dominated by a barotropic fluid with $p = w \rho$ is given by \cite{Boubekeur:2008kn}
\be
S= \mpl^2 \int d^4 x \;a^3 \frac{3(1+w)}{2w} \left[\dot\zeta^2- \frac{w}{a^2} (\nabla\zeta)^2\right] \;.
\ee
Let us concentrate for example on a period of radiation dominance, $w=1/3$, which gives an evolution $a \propto t^{1/2}$, $H = 1/(2 t)$. One can still perform an expansion analogous to \eqref{zetaaa},
\be
\label{zetaaa2}
\zeta(t,\vec x) = \int \frac{d^3k}{(2\pi)^3} \left(\tilde\zeta_{\vec k}^{\rm cl}(t) \tilde a_{\vec k} + \tilde\zeta_{\vec k}^{{\rm cl}*}(t) \tilde a^\dagger_{\vec k}\right)\;.
\ee
Now the appropriate solutions of the equation of motion are given by
\be
\tilde \zeta_{\vec k}^{\rm cl}(t) = \frac{1}{\sqrt{12} \mpl}\frac{1}{\sqrt{2k/\sqrt{3}}} \cdot \frac{i}{a} e^{- i \frac{1}{\sqrt{3}}\frac{k}{a H} + i \vec k \vec x} \;.
\ee
These functions reduce to the Minkowski result in the limit in which the modes are well within the Hubble radius, and they are normalized in the same way as in \eqref{norms}. The choice of phase of these solutions is made for later convenience.
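Indeed, for $w = 1/3$ one has $a \propto \eta$ in conformal time and $k/(aH) = k \eta$; the action above then implies that $v \equiv a \, \zeta$ obeys
\be
v'' + \frac{k^2}{3} \, v = 0 \;,
\ee
so that $\tilde \zeta_{\vec k}^{\rm cl} \propto a^{-1} e^{-i k \eta/\sqrt{3}}$, as in the expression above.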
Equating the two expansions \eqref{zetaaa} and \eqref{zetaaa2} for each Fourier mode we get
\be
\label{2Fourier}
\zeta_{\vec k}^{\rm cl} a_{\vec k} + \zeta_{-\vec k}^{{\rm cl}*} a^\dagger_{-\vec k} = \tilde\zeta_{\vec k}^{\rm cl} \tilde a_{\vec k} + \tilde\zeta_{-\vec k}^{{\rm cl}*} \tilde a^\dagger_{-\vec k}
\ee
The two sets of modes are related by
\be
\label{modesrel}
\tilde \zeta_{\vec k}^{\rm cl} = \alpha_{\vec k} \zeta_{\vec k}^{\rm cl} + \beta_{\vec k} \zeta_{-\vec k}^{\rm cl *}
\ee
where the coefficients can be generically calculated using the scalar product\footnote{Notice that the scalar product \eqref{scalar} is well defined at any time, and time independent. Of course the explicit expression for the momentum $\Pi$ in terms of $\dot \zeta$ depends on the action for $\zeta$ valid at any given moment.}
\be
\alpha_{\vec k} = \langle \tilde \zeta_{\vec k}^{\rm cl} ; \zeta_{\vec k}^{\rm cl}\rangle \qquad \beta_{\vec k} = -\langle \tilde \zeta_{\vec k}^{\rm cl} ; \zeta_{-\vec k}^{\rm cl *}\rangle \; ,
\ee
but at small $k$ we can use a quicker method---see below.
Plugging eq.~\eqref{modesrel} into \eqref{2Fourier} gives the relation among the two sets of creation and annihilation operators
\be
a_{\vec k} = \alpha_{\vec k} \tilde a_{\vec k} + \beta^*_{\vec k} \tilde a^\dagger_{-\vec k} \;.
\ee
We assume the system to be in the vacuum at the beginning, i.e.~in a state which is annihilated by $a_{\vec k}$. This state does not evolve in time as we are putting the time evolution in the operators. In terms of the ``radiation dominance'' operators $\tilde a_{\vec k}$, the state of each harmonic oscillator is annihilated by a linear combination of $\tilde a_{\vec k}$ and $\tilde a^\dagger_{\vec k}$: it is a squeezed state.
Taking the norm of both sides of eq.~\eqref{modesrel} we derive that the coefficients $\alpha$ and $\beta$ must satisfy
\be
\label{bogolubov}
|\alpha_{\vec k}|^2-|\beta_{\vec k}|^2 =1
\ee
which is equivalent to the condition that both sets of creation and annihilation operators satisfy the standard commutation rules.
The simplest way to relate the modes $\zeta^{\rm cl}$ and $\tilde\zeta^{\rm cl}$ is to notice that their real parts become constant in the long wavelength limit, while their imaginary parts have constant conjugate momenta in the same limit. Given the incompatibility of these two conditions---constant field vs.~constant momentum---real and imaginary parts do not mix in the long wavelength limit, that is
\be
\zeta_{\vec k}^{\rm cl} + \zeta_{\vec k}^{\rm cl *} = A_k (\tilde\zeta_{\vec k}^{\rm cl} + \tilde\zeta_{\vec k}^{\rm cl *}) \; , \qquad \zeta_{\vec k}^{\rm cl} - \zeta_{\vec k}^{\rm cl *} = B_k (\tilde\zeta_{\vec k}^{\rm cl} - \tilde\zeta_{\vec k}^{\rm cl *}) \; , \qquad k \to 0 \;.
\ee
Condition \eqref{bogolubov} implies that the two coefficients $A_k$ and $B_k$ are the inverse of each other: $A_k = B_k^{-1}$. $A_k$ and $B_k$ can be calculated using the explicit expression of the modes: it is enough in the two cases to compare the value of $\zeta$ and $\Pi$ which is approached in the long wavelength limit. One gets
\be
\label{Ainfl}
A_k = 3^{3/4} \sqrt\frac{2}{\epsilon} \frac{a_{\rm rd}^2 H_{\rm rd}H_{\rm infl} }{k^2} \qquad B_k = \frac{1}{A_k} \;.
\ee
These expressions are time-independent, as they should be. It is easy to realize that, neglecting numerical factors, one has
\be
A_k \sim \frac{a_{\rm in}}{a_{\rm out}} \gg 1 \;,
\ee
the ratio between the scale factors when the modes come back inside the Hubble radius and when they left it. The uncertainty in the real part of $\zeta$ is huge compared to that we would have in the ``radiation dominance vacuum''. Conversely the uncertainty in the imaginary part is very suppressed with respect to the vacuum state. If we neglect this minuscule uncertainty in the imaginary part, assuming that fluctuations in every observable will be dominated by the huge uncertainty in the real part, we can treat the quantum field as a classical stochastic variable.
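A sketch of the estimate $A_k \sim a_{\rm in}/a_{\rm out}$: during radiation dominance $a^2 H$ is constant and can be evaluated at re-entry, when $k/\sqrt 3 = a_{\rm in} H_{\rm rd}$; using also $k = a_{\rm out} H_{\rm infl}$ at horizon exit, eq.~(\ref{Ainfl}) gives
\be
A_k = 3^{1/4} \sqrt{\frac{2}{\epsilon}} \, \frac{a_{\rm in} H_{\rm infl}}{k} = 3^{1/4} \sqrt{\frac{2}{\epsilon}} \, \frac{a_{\rm in}}{a_{\rm out}} \;.
\ee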
Let us now come back to our model, and carry out exactly the same procedure. Now the $\zeta_{\vec k}^{\rm cl}$ modes must be calculated from the action \eqref{zetaaction} and they are given by
\be
\zeta_{\vec k}^{\rm cl} = \frac{f}{3 \sqrt{2} \mpl^2 H_0} \frac{1}{\sqrt{2k}} \frac{i}{t} e^{-i k t + i \vec k \vec x} \;,
\ee
while the radiation dominance modes $\tilde\zeta_{\vec k}^{\rm cl}$ remain, obviously, the same. Again the real part of the modes gives $\zeta$ = const in the long wavelength limit, while the imaginary part has $\Pi$ = const. Matching these constants allows us to calculate the coefficients $A$ and $B$
\be
\label{Agali}
A_k = \frac{f}{\mpl H_0} a_{\rm rd}^2 H_{\rm rd} \frac{\sqrt{2}}{3^{1/4}} \qquad B_k = \frac{1}{A_k} \;.
\ee
Let us assume that the Galileon dominated phase ends when $H \simeq (\mpl/f)\, H_0$, i.e.~when the approximation of treating gravity as a perturbation breaks down, and that immediately after we have a radiation dominated phase. Evaluating $A_k$ at the transition between the two phases we get
\be
A_k \sim B_k \sim 1 \;.
\ee
There is no relevant squeezing of the modes, so that during radiation dominance the uncertainties are close to the standard zero point quantum fluctuations. Of course in this case the perturbations are completely negligible and no classical interpretation is possible\footnote{The complete absence of squeezing is accidental and does not occur if we modify the matching between the Galileon dominated regime and the standard decelerating evolution, to take into account the intermediate phase when gravity cannot be treated as a small perturbation to the Galileon dynamics, eqs.~\eqref{polephase} and \eqref{polephase2}. The same will happen if we replace radiation dominance with a different decelerated evolution. However, by comparing \eqref{Agali} with \eqref{Ainfl}, we see that the squeezing parameters are very ``blue'' compared with the inflationary result, and therefore always irrelevant on cosmological scales.}.
A sizable generation of perturbations needs a large squeezing. This, in inflation, is closely related to the existence of a dynamical attractor which makes the $\zeta = {\rm const}$ mode dominate. Our model shows that the existence of an attractor does not by itself guarantee a sizable generation of perturbations: during radiation dominance the modes are essentially in their vacuum state.
\end{appendix}
\section{Introduction}
\subsection{ The Reconstruction Conjecture (RC)}
\paragraph*{} The Reconstruction Conjecture (RC) is one of the most celebrated unsolved problems in Discrete Mathematics and Combinatorics circles. It was first conjectured by Ulam and Kelly in 1941 as stated in the survey paper by Bondy \cite{b1}.
\subsubsection{ Original Definition}
\paragraph*{} Ulam \cite{u1} states the following problem:
\begin{quote}
``Suppose that in two sets $A$, $B$; each of $n$ elements, there is defined a distance function $\rho$ for every pair of distinct points, with values either $1$ or $2$ and $\rho(x,x) =0$. Assume that for every subset of $n-1$ points of $A$; there exists an isometric system of $n-1$ points of $B$, and that the number of distinct subsets isometric to any given subset of $n-1$ points is same in $A$ as in $B$. Are $A$ and $B$ isometric?''
\end{quote}
\subsubsection{ Modified Definition of the Graph Reconstruction Conjecture}
Reconstruction Conjecture can be restated as:
\begin{quote}
``A simple finite graph $G$ with at least three points can be reconstructed uniquely (up to isomorphism) from its collection of vertex deleted subgraphs $G_i$.''
\end{quote}
This conjecture was termed by Harary \cite{h3} a ``graphical disease'', along with the 4-Color Conjecture and the characterization of Hamiltonian graphs. The term ``diseases'' comes from the fact that such problems can be formulated very easily and concisely, and most identified diseases are understandable to undergraduates. They are highly contagious, thereby attracting the attention of both professional and lay mathematicians.
\paragraph*{} The reconstruction problems provide a fascinating study of the structure of graphs. The identification of structure of a graph is the first step in its reconstruction. We can determine various invariants of a graph from its subgraphs, which in turn tell us about the structure of the graph.
\subsection{Basic Terminologies}
\paragraph*{} The key terms in the paper are introduced below. For terms not defined here, we shall use the terminology followed in Harary \cite{h3}.
\begin{definition} [Deck of a Graph and its Cards]
Any graph $G$ has a vertex set $V(G)$ and an edge set $E(G)$. A card is any vertex-deleted-subgraph of $G$, with $G_i$ representing the unlabelled subgraph of $G$ with the $i^{th}$ vertex and its coincident edges removed. The deck of the graph $G$ is the multiset of all cards of $G$.
\end{definition}
\begin{definition} [$k$-periphery of a subtree]
Given a tree $T$ with vertex set $V$ and an arbitrary subtree $T_s$, the distance of $v$ $\in$ $V$ from $T_s$ is defined to be the length of the shortest path connecting $v$ to some $v'$ $\in$ $T_s$. The $k$-periphery of $T_s$ is defined to be the set of vertices at distance $k$ from $T_s$.
\end{definition}
\begin{definition}[Peripheral Vertices of a Graph]
The eccentricity $\varepsilon_G(v)$ of a vertex $v$ in a graph $G$ is the maximum distance from $v$ to any other vertex. Vertices with maximum eccentricity are called peripheral vertices.
\end{definition}
\begin{definition} [Power of a Graph]
Let $G$ be a graph on $p$ points $v_1$, $v_2$,..., $v_p$. The $k$-th power of $G$, denoted by $P_k(G)$, is a graph on points $u_1$, $u_2$,..., $u_p$ where $u_i$ and $u_j$ ($i \neq j$) are adjacent if and only if $v_i$ and $v_j$ are at distance at most $k$ in $G$. We also say that $G$ is a $k$-th root of $P_k(G)$.
\end{definition}
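For instance, if $G$ is the path $v_1 v_2 v_3 v_4$, then $P_2(G)$ adds the edges $\{v_1,v_3\}$ and $\{v_2,v_4\}$ to $G$, while $P_3(G)$ further adds $\{v_1,v_4\}$, so that $P_3(G) \cong K_4$ and the path is a cube root of $K_4$.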
\paragraph* {}In general, a graph may have more than one $k$th root. The uniqueness of a tree as the square root of a graph has been proven independently by Ross and Harary \cite{r2}, and a simpler proof, using the reconstruction conjecture, has been given by Gupta \cite{s1}\cite{s2}. The uniqueness of a tree as the cube root of a graph has been established by Yerra et al. \cite{a1}. In Section 3, we use a different approach to show the uniqueness of a tree as the cube root of a graph $G$, except when $G$ is a complete graph, in which case $G$ does not have a unique tree root. Further, Yerra et al. \cite{a1} showed that for any $n \ge 4 $, there exist non-isomorphic trees $T_1$ and $T_2$ such that $T_1^n \cong T_2^n$.
\begin{definition} [End Deleted Tree]
The end deleted tree $\xi$ of a given tree $T$ is defined to be the tree obtained by deleting all the leaf nodes of $T$.
\end{definition}
\begin{definition} [Weighted Tree]
\label{WT}
A weighted tree is a tree having weights associated with every vertex. The weight on vertex $i$ represents the number of branches emanating from vertex $i$ in tree $T$ [Fig. \ref{tree_cube},\ref{weighted_tree}]. Any tree $T$ is equivalent to a weighted tree having weights associated with every vertex of $\xi$, the end deleted tree of $T$.
\end{definition}
\begin{definition}[$i$th order leaf nodes]
The set of $i$th order leaf nodes $L_i$ is defined to be the set of vertices which are leaf nodes of the $i$-times end deleted tree of $T$; i.e., a vertex $v \in L_i$ if $v$ is a leaf node in the $i$-times end deleted tree. In the base case, $L_0$ is the set of leaf nodes of $T$.
\end{definition}
\begin{definition} [Distance between two edges]
If $e_1\equiv\{u_1,u_2\}$ and $e_2\equiv\{v_1,v_2\}$ are two edges in the graph then the distance $d(e_1,e_2)$ between them is defined to be $n+1$, where $n = min\{d(u_1,v_1),d(u_2,v_1),d(u_1,v_2),d(u_2,v_2)\}$.
\end{definition}
\begin{definition} [Distance between an edge and a vertex]
If $\{u_1,u_2\}$ is an edge $e$ and $v$ is a vertex in a Graph $G$, then the distance $d(e,v)$ between $e$ and $v$ is defined as $d(e,v) = min\{d(u_1,v),d(u_2,v)\}$.
\end{definition}
\begin{definition} [$k$-span of a vertex] Let $k$ be a natural number. If $v$ is a vertex then the $k$-span of vertex $v$, $S(v,k)$, is defined to be the set of all vertices at distance up to $k$ from $v$, i.e.,
\begin{center} $S(v,k) = \{u|d(u,v) \le k\}$ \end{center}
\end{definition}
\begin{definition} [Span of an edge]
Let $k$ be a natural number. If $e = \{v_1,v_2\}$ is an edge, then the $k$-span of edge $e$, $S(e,k)$ is defined to be the set union of $k$-spans of its end-points, i.e.,
\begin{center} $S(e,k) = S(v_1,k) \cup S(v_2,k)$ \end{center}
\end{definition}
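For instance, in the path $u \, v_1 \, v_2 \, w$ we have $S(v_1,1) = \{u,v_1,v_2\}$, while for the edge $e = \{v_1,v_2\}$ the span $S(e,1) = S(v_1,1) \cup S(v_2,1)$ is the whole vertex set $\{u,v_1,v_2,w\}$.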
\subsection{ Discussion about the Problem }
The statement of the reconstruction conjecture excludes the trivial graph $K_1$, graphs on two vertices and infinite graphs \cite{b1}\cite{n1}. The decks of the two graphs on two points, $K_2$ and $K_2'$, each consist of a pair of $K_1$'s, yet the two graphs are non-isomorphic. For every infinite cardinal $\alpha$, there exists a graph with $\alpha$ edges which is not uniquely reconstructible from its family of edge deleted subgraphs \cite{c1}.
Apart from these two exceptions which prohibit the conjecture from encompassing all graphs, unique reconstructibility is conjectured for all other graphs.
One of the ways for tackling the RC is known as the reconstructive approach, and is followed in many of the proofs of the conjecture for specific classes. While reconstructing a class of graphs using this approach, the problem of reconstruction partitions into two subproblems, namely recognition: showing that membership in that class is determined by the deck, and weak reconstruction: showing that no two non-isomorphic members of the class have the same deck.
The reconstruction conjecture has been proved for trees by Kelly \cite{k1}, and for squares of trees by Gupta \cite{s1}\cite{s2}. Apart from these, the conjecture has been proved for a number of graph classes such as unicyclic graphs \cite{m2}, regular graphs \cite{n2} and disconnected graphs \cite{h1}. Though the problem can be stated very simply, due to the lack of a nice set of characterizing invariants it has still not been proven for very important classes of graphs like bipartite graphs and planar graphs. For further study of this conjecture, the reader is referred to the surveys by Bondy \cite{b1} and Harary \cite{h2}.
\section{ Reconstruction Conjecture For Cube of Trees }
\subsection{\label{2.1} Overview of Proof Technique}
Section 2 lists basic properties of cubes of trees. In Section 3, a characterization of cubes of trees is given. It also shows uniqueness of tree as a cube root of a graph $G$, except when $G$ is a complete graph. Section 4 proves recognizability and weak reconstruction of graphs isomorphic to cubes of trees, utilizing reconstructibility of trees from their peripheral vertex deleted subgraphs.
\subsection{\label{2.2} Properties of Third Power }
Listed below are few properties of third power of a graph:
\begin{lemma}
\label{Lemma 2.1}
Let $e = \{v_1,v_2\}$ be an edge. If $v_1,v_2$ $\in$ $L_{i+1}$, $i \ge 0$, then the subgraph induced by the $1$-span of $e$ is a clique $\kappa$. Any graph $G(\cong P_3(T))$ has cliques only of this type. The edge $e$ is called a clique edge, and the clique $\kappa$ is said to be centered about $e$.
\end{lemma}
\begin{proof}
This lemma follows directly from Lemma 3.1.2.1 in \cite{s3}.
\end{proof}
\begin{definition} [Clique Distance]
The distance $d(S_1,S_2)$ between two cliques $S_1$ and $S_2$ in $G$ is defined to be the distance between clique edges of $S_1$
and $S_2$.
\end{definition}
\begin{definition} [Terminal Edge]
An edge $e$ = $\{u,v\}$ is called a terminal edge if at least one of $deg(u)$ and $deg(v)$ is 1.
\end{definition}
\begin{definition} [$k$th order Terminal edges]
The terminal edges of $k$-times end deleted tree of tree $T$ are called $k$th order terminal edges of $T$ .
\end{definition}
\begin{definition} [Terminal Clique]
A clique $S$ is said to be a terminal clique if $S$ is the $k$-span of an edge $e = \{v_1,v_2\}$ where $k = \frac{n - 1}{2}$ and either $v_1 \in L_k$ or $v_2 \in L_k$.
\end{definition}
\begin{lemma}
\label{Lemma 2.2}
$S$ is a terminal clique of graph $G$ iff there exists a unique clique $S'$ such that $\forall v \in S$, either $v \in S'$ or $v$ belongs to no clique other than $S$. Further, the clique edge of $S$ is adjacent to the clique edge of $S'$.
\end{lemma}
\begin{proof}
This lemma follows directly from Theorem 3.2.2.1 in \cite{s3}.
\end{proof}
\begin{lemma}
\label{Lemma 2.3}
If $v$ is a terminal vertex in $G(\cong P_3(T))$, then $v$ is a leaf node in $T$.
\end{lemma}
\begin{proof}
This lemma follows directly from Lemma 3.2.2.2 in \cite{s3}.
\end{proof}
\begin{lemma}
\label{Lemma 2.4} If $V_T = \{v_1, v_2, v_3, \ldots, v_k\}$ is the set of terminal vertices of a graph $G(\cong P_3(T))$ and $V_s$ is any subset of $V_T$, then there exists a tree $T_1$ such that $G-V_s \cong P_3(T_1)$.
\end{lemma}
\begin{proof}
This lemma follows directly from Lemma 3.2.2.3 in \cite{s3}.
\end{proof}
\begin{definition} [$k$th order terminal cliques]
If $G(\cong P_3(T))$ is a graph and $V_T$ the set of its terminal vertices, then the terminal cliques of $G'(\cong G-V_T)$ are defined to be the terminal cliques of $1$st order. The $k$th order terminal cliques of $G$ are obtained by extending this to the $k$-times terminal-vertex-deleted graph of $G$.
\end{definition}
\begin{definition} [Tree of Cliques]
\label{TOC}
A tree of cliques is the subgraph $T'$ of $T$ every edge of which forms a clique in $G(\cong P_3(T))$.
\end{definition}
\begin{lemma}
\label{Lemma 2.5}
The tree of cliques $T'$ of any graph $G(\cong P_3(T))$ is the end-deleted tree of $T$.
\end{lemma}
\begin{proof}
This lemma follows directly from Lemma 3.3.2.1 in \cite{s3}.
\end{proof}
\begin{figure*}
\centering
\includegraphics[width=10cm]{tree_cube.pdf}
\caption{\small{Tree $T$}}
\label{tree_cube}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=5cm]{weighted_tree.pdf}
\caption{\small{Weighted tree $\xi$ equivalent to tree $T$ (Fig.~\ref{tree_cube}).}}
\label{weighted_tree}
\end{figure*}
\section{Characterization of Third Power of a Tree}
\paragraph*{} Consider the formation of cliques in $P_3(T)$. Any clique can be seen as centered around an edge, with branches emanating from both end points.
By Definition~\ref{WT}, there is a one-to-one mapping between any tree and its weighted tree representation. That is, any tree $T$ is equivalent to its weighted end-deleted tree $\xi$. For example, consider the following two scenarios: (i) in the tree of Fig.~\ref{tree_cube}, the vertices $1,2,\ldots,9$ form a single clique in $P_3(T)$; (ii) an equivalent representation of this tree is the weighted tree shown in Fig.~\ref{weighted_tree}. Vertices labelled $4$ and $5$ in Fig.~\ref{tree_cube} correspond to vertices $a$ and $b$ in Fig.~\ref{weighted_tree}, with weights 3 and 5 respectively. The weights in the tree can be visualized as the number of branches emanating from the corresponding vertices. Thus both (i) and (ii) have the same third power.
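The clique structure described above can also be inspected computationally. The sketch below (our own illustration; the example tree is an arbitrary choice) builds the third power of a tree with networkx and lists its maximal cliques, which by Lemma~\ref{Lemma 2.1} are exactly the $1$-spans of the edges of the end-deleted tree:
\begin{verbatim}
import networkx as nx

T = nx.path_graph(6)   # the path 0-1-2-3-4-5
G = nx.power(T, 3)     # third power: adjacent iff distance <= 3 in T

# Maximal cliques, each centered about an edge of the end-deleted tree:
for clique in sorted(map(sorted, nx.find_cliques(G))):
    print(clique)      # [0,1,2,3], [1,2,3,4], [2,3,4,5]
\end{verbatim}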
\begin{lemma}
\label {Lemma 3.1}
For any Tree $T$, the end-deleted tree $\xi$ is isomorphic to the tree of cliques consisting of the edges forming cliques in $G(\cong P_3(T))$.
\end{lemma}
\begin{proof}
This lemma follows directly from Definition~\ref{TOC} and Lemma~\ref{Lemma 2.5}.
\end{proof}
\begin{theorem}
\label{Theorem 3.1}
Let $T$ be a tree. Then $v_i$ is an end point of $T$ if and only if $P_3(T_{v_i})$ is isomorphic to ${(P_3(T))}_{v_i}$, where $G_{v_i}$ represents the vertex-deleted graph obtained after removing $v_i$ from the graph $G$.
\end{theorem}
\begin{proof}
Let $v_i$ be an end point of $T$. Any edge $v_lv_m$ in $P_3(T_{v_i})$ is also present in $(P_3(T))_{v_i}$, where subscripts indicate vertex-deleted graphs, since deleting a vertex cannot decrease distances. Conversely, since there does not exist any path $v_lv_iv_m$ in $T$ (as $v_i$ is an end point), there are no two points $v_l$ and $v_m$ which are adjacent in $(P_3(T))_{v_i}$ but not adjacent in $P_3(T_{v_i})$.
For the second part, let $T$ be a tree and $v_i$ a point of it such that $P_3(T_{v_i})$ is isomorphic to ${(P_3(T))}_{v_i}$. ${(P_3(T))}_{v_i}$ is connected for all $v_i$, but $P_3(T_{v_i})$ is connected only if $v_i$ is an end point.
\end{proof}
\begin{corollary}
\label{Corollary 3.1}
Let $T$ be a tree. Then $v_i$ is an end point of $T$ if and only if $P_3(T_{v_i})$ is isomorphic to ${P_3(T)}_{v_i}$.
\end{corollary}
\begin{proof}
It follows directly from Theorem~\ref{Theorem 3.1}.
\end{proof}
This theorem establishes a one-to-one mapping between trees and their third powers. It follows the approach suggested by Yerra et al. \cite{a1}.
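Theorem~\ref{Theorem 3.1} is also easy to verify experimentally on small trees. The following sketch (again our own illustration, with an arbitrarily chosen tree) checks for every vertex whether $P_3(T_{v_i})\cong (P_3(T))_{v_i}$, which by the theorem holds exactly for the end points:
\begin{verbatim}
import networkx as nx

def cube(G):
    return nx.power(G, 3)

T = nx.Graph([(0,1), (1,2), (2,3), (3,4), (2,5), (5,6), (1,7)])

for v in T.nodes:
    lhs = cube(T.subgraph(set(T) - {v}).copy())  # P_3(T - v)
    rhs = cube(T).subgraph(set(T) - {v})         # (P_3(T)) - v
    print(v, T.degree(v) == 1, nx.is_isomorphic(lhs, rhs))
    # The last two booleans agree on every line.
\end{verbatim}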
\begin{theorem}
\label{Theorem 3.2}
A tree $T$ can be uniquely determined from its third power $P_3(T)$ ($\not\cong K_p$).
\end{theorem}
\begin{proof}
If $P_3(T)$ is the complete graph $K_p$, then $T$ could be any tree of diameter less than $4$. We shall prove this theorem by induction on $|T|$ for $P_3(T)\not\cong K_p$, where $|T|$ is the number of vertices in $T$.
The hypothesis of the theorem is trivially true for $|T|=1$ and $|T|=2$. Assume it to be true for $|T|\le r$, and let $|T|=r+1$. Consider the set $\mathbb{S}$ of point-deleted subgraphs of $P_3(T)$. Gupta et al. \cite{s3}, in their discussion of the characterization of powers of trees, show that it is possible to select a subset $\mathbb{M}$ from $\mathbb{S}$ consisting of those subgraphs which are cubes of some trees. $T_{v_i}$ is a tree only when $v_i$ is an end point, and in this case, from Corollary~\ref{Corollary 3.1} we have,
\begin{center}
$P_3(T_{v_i})\cong{(P_3(T))}_{v_i}$
\end{center}
So $\mathbb{M}$ is precisely the set of ${(P_3(T))}_{v_i}$ where $v_i$ is an end point of $T$, by Theorem~\ref{Theorem 3.1}. By the induction hypothesis, $T_{v_i}$ can be uniquely determined since $|T_{v_i}|=r$. The result now follows by induction, as a tree is uniquely reconstructible from its end-point-deleted subgraphs (\cite{h4}).
\end{proof}
\section{Recognition and Weak Reconstruction}
Harary et al. \cite{h4} have shown that trees are reconstructible from their end-vertex-deleted subgraphs. In our approach, we shall use this reconstruction procedure as a black box $\bar{B}$. Given the set of end-vertex-deleted subgraphs, $\bar{B}$ will uniquely return the tree; in case the input deck does not belong to a tree, the black box outputs an error. Let $\mathbb{C}$ denote the class consisting of all graphs isomorphic to the third power of some tree.
\begin{lemma}
\label{Lemma 4.1}
$\mathbb{C}$ is weakly reconstructible.
\end{lemma}
\begin{proof}
We are given a set $\mathbb{S}$ of subgraphs $G_1,G_2,\ldots,G_n$, known to be the deck of some $G\in \mathbb{C}$. We have to reconstruct $G$ uniquely. Using the characterization of tree powers discussed in Gupta et al. \cite{s3}, it is possible to select a subset $\mathbb{M}$ from $\mathbb{S}$ consisting of those subgraphs which are cubes of some tree. $T_{v_i}$ is a tree only when $v_i$ is an end point, and in this case, from Corollary~\ref{Corollary 3.1} we have $P_3(T_{v_i})\cong{(P_3(T))}_{v_i}$. So $\mathbb{M}$ is precisely the set of ${(P_3(T))}_{v_i}$ where $v_i$ is an end point of $T$, by Theorem~\ref{Theorem 3.1}.
Using the set $\mathbb{M}$, the black box $\bar{B}$, and the fact that the original deck corresponds to some member of $\mathbb{C}$, we can reconstruct $T$ and then $G \equiv P_3(T)$ uniquely. Due to the unique reconstruction, we can conclude that no two non-isomorphic members of $\mathbb{C}$ have the same deck; hence $\mathbb{C}$ is weakly reconstructible.
\end{proof}
\begin{lemma}
\label{Lemma 4.2}
$\mathbb{C}$ is recognizable.
\end{lemma}
\begin{proof}
We are given a set $\mathbb{S}$ of subgraphs $G_1,G_2,\ldots,G_n$. In order for $\mathbb{C}$ to be recognizable, we have to give a boolean answer to the question: does the deck $\mathbb{S}$ correspond to a graph in $\mathbb{C}$? We consider both cases in the paragraphs below.
If $\mathbb{S}$ indeed corresponds to a graph in $\mathbb{C}$, we are guaranteed to obtain the unique reconstruction $G \in \mathbb{C}$ by the strategy employed in the proof of Lemma~\ref{Lemma 4.1}. Using the Deck Checking Algorithm \cite{k2}, we can assert that $\mathbb{S}$ resulted from the reconstructed $G$, and ``true'' is returned as the boolean answer.
Now consider the other case, where the deck $\mathbb{S}$ does not correspond to a graph in $\mathbb{C}$. On using the black box $\bar{B}$ over $\mathbb{M}$, we have two subcases: it will either give an error, or return a tree $T_x$. In case of error, we return ``false'' directly. In the other subcase, we obtain $G_x = P_3(T_x)$. On using the Deck Checking Algorithm \cite{k2}, $\mathbb{S}$ will not match the reconstructed $G_x$, since $\mathbb{S}$ did not correspond to a graph in $\mathbb{C}$, so again we return ``false''.
\end{proof}
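For intuition, the deck-checking step used above can be realized for small graphs by brute force: match the given cards against the deck of the reconstructed candidate, card by card, up to isomorphism. The helper below is our own illustrative implementation (exponential-time in general; the cited Deck Checking Algorithm \cite{k2} is far more efficient). Since graph isomorphism is an equivalence relation, the greedy matching is exact:
\begin{verbatim}
import networkx as nx

def deck(G):
    # All one-vertex-deleted subgraphs (the "cards") of G.
    return [G.subgraph(set(G) - {v}).copy() for v in G.nodes]

def deck_check(cards, G):
    # True iff `cards` equals the deck of G as a multiset,
    # up to isomorphism of the individual cards.
    remaining = deck(G)
    if len(cards) != len(remaining):
        return False
    for card in cards:
        for i, cand in enumerate(remaining):
            if nx.is_isomorphic(card, cand):
                del remaining[i]
                break
        else:
            return False
    return True

G = nx.cycle_graph(5)
print(deck_check(deck(G), G))                 # True
print(deck_check(deck(nx.path_graph(5)), G))  # False
\end{verbatim}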
\begin{lemma}
\label {Lemma 4.3}
RC is true for the class of graphs isomorphic to the cube of a tree, except for complete graphs.
\end{lemma}
\begin{proof}
This follows directly from Lemma~\ref{Lemma 4.1} and Lemma~\ref{Lemma 4.2}.
\end{proof}
\paragraph*{}
An alternate result shows that the reconstruction conjecture holds trivially for complete graphs \cite{m3}. The following result follows:
\begin{theorem}
\label{Theorem 4.1}
RC is true for $\mathbb{C}$, the class of graphs isomorphic to the cube of a tree.
\end{theorem}
\section{Conclusion and Future Work}
\paragraph*{} Trees were proven to be reconstructible by Kelly \cite{k1}, and squares of trees by Gupta \cite{s1}\cite{s2}. In this paper, we have proved the conjecture for graphs isomorphic to the cube of a tree. It would be interesting to prove the conjecture for higher powers of trees.
As discussed in Yerra et al. \cite{a1}, for any $n \ge 4$ there exist non-isomorphic trees $T_1$ and $T_2$ such that $P_n(T_1) \cong P_n(T_2)$. Thus the uniqueness argument no longer holds when proving the conjecture for graphs isomorphic to fourth (or higher) powers, and proving RC for such classes requires a different approach. In general, we would like to prove RC for graphs isomorphic to any power $n$ of a tree.
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{I}{n} big data applications, data is increasingly described in the form of graphs, which are processed in an iterative manner. For example, search services (such as Google~\cite{Google}) use the PageRank algorithm to sort results, social networks (such as Facebook~\cite{Facebook}) use clustering algorithms to analyze user communities, knowledge sharing sites (such as Wikipedia~\cite{Wikipedia}) use Named Entity Recognition algorithms to identify text information, and video sites (such as Netflix~\cite{Netflix} and Anysee~\cite{Anysee}) use Collaborative Filtering algorithms to provide film and television recommendations. Relevant studies indicate that the computational and storage characteristics of graph computing make it difficult for data-oriented parallel programming models to provide efficient support: the lack of description of the correlation between data items and the inefficient support for iterative calculations can result in severalfold, even dozens of times, performance loss. The urgent need for efficient graph computation systems has made this one of the most important issues to be solved in the field of parallel and distributed processing. Current graph processing strategies~\cite{GraphLab, PowerSwitch, HybirdGraph, Gemini, GraphChi, NXgraph, Mosaic} still suffer from the following inefficiencies: (1) high cache miss rate; (2) large I/O access overhead; (3) slow convergence on large-scale graph data.
We profiled the causes of the low performance of existing representative graph systems. Due to the small-world phenomenon, graph vertices obey a power-law distribution: a few vertices connect to the vast majority of vertices, while the vast majority of vertices need to transfer state through these few. Therefore, frequent visits and updates are needed for these core vertices, while the other vertices converge quickly and are accessed with low frequency, giving rise to the problems mentioned above. For this reason this paper adopts dynamic incremental graph partitioning, which is explained in detail in Section 3.
Currently, some work has already been done on graph partitioning for power-law graphs, but most of it is based on a distributed environment and regards the underlying computing nodes as equivalent. Most graph processing methods treat the underlying graph data as black boxes, lacking research on dynamic graph partitioning and graph processing based on the graph structure. In the real world, however, the graph structure is constantly changing. As iterations proceed, a large number of graph vertices in a partition may converge; frequent accesses to a small number of active vertices may then result in repetitive loading of the entire partition, including the convergent vertices, even though these convergent vertices require no access or processing. This leads to severe waste of memory bandwidth and cache. Existing methods do not consider the structural features of each partition, so the graph algorithm requires more updates to converge and each update incurs a large overhead.
The vertex degree and state degree have a particularly critical influence on the convergence of graph vertices, and they also determine the processing order of the vertices. Consider the example of the PowerSwitch system~\cite{PowerSwitch} shown in Figure~\ref{fig:PowerSwitch}: vertex 1 has a large degree and is more active. Theoretically, an asynchronous method should be adopted to increase its convergence speed, as a large number of vertices ($v_2,v_3,v_4,v_5$) require state transfer through this active vertex. After updating its own data asynchronously, each vertex immediately propagates the update by sending messages, so that its neighbors can compute with the latest data. The vertices ($v_2,v_4,v_6$) have lower degree and will converge quickly, so there is little value in adopting asynchronous processing to increase their convergence rate; synchronous processing should be adopted instead to reduce the cache miss rate and the time required for state updates.
First, graph structures are diverse, and processing them in a uniform way yields very different performance. Second, the structure formed by the unconverged vertices changes constantly at run time, causing large performance fluctuations. For these reasons, this paper proposes graph processing methods based on graph structure perception. We incrementally obtain the structural characteristics of the subgraph formed by unconverged vertices and adaptively adopt a suitable processing method for each graph partition according to the underlying operating environment (processor load, cache miss rate, etc. in each partition). More specifically, the main contributions of this work are summarized as follows:
\begin{itemize}
\item This paper analyzes the problems in state-of-the-art distributed graph processing systems and points out that current systems lack processing targeted at the graph structure, which affects system performance.
\item This paper proposes structure-centered graph partitioning and processing. According to the graph structure (vertex heat, etc.), the graph is partitioned in a dynamic incremental manner, and the partitions are scheduled for processing according to their state degrees.
\item This paper uses graph structure perception, combined with feature analysis at run time, to switch each graph partition to the appropriate processing method.
\item We apply the method on top of Gemini, a state-of-the-art system. Experiments with five applications on five real-world graphs show that our approach significantly outperforms the existing implementation, improving performance by a factor of 2.
\end{itemize}
The remainder of this paper is organized as follows. Section 2 analyzes the defects of existing graph processing systems and motivates the dynamic graph partitioning and adaptive graph processing optimization strategies. Section 3 presents the dynamic graph partitioning method, followed by the adaptive graph processing method in Section 4. Section 5 shows experimental results. Related work is surveyed in Section 6, and finally, Section 7 concludes this work.
\section{Background and Motivation}
With the advent of the big data era, more and more data applications need to be expressed in the form of vertices and edges and processed through iterations. State-of-the-art graph processing systems mainly concentrate on load balancing and communication overhead across various runtime environments, ignoring the structural features of the input data, which have a great impact on system performance. First, the assorted graph structures being processed may lead to immense performance differences under a unified method. Second, structural variations of the vertices that have not converged during operation bring about volatile performance.
\begin{figure}[h]
\centering
\includegraphics[scale=0.25]{fig1_PowerSwitch.pdf}
\caption{Ineffective graph processing of partitions}
\label{fig:PowerSwitch}
\end{figure}
Graph processing methods are very sensitive to the graph structure, and graph processing systems therefore show quite different performance on diverse data sets. Most previous graph processing methods treat the underlying data as black boxes, adopting neither partitioning nor processing strategies accordingly. Current research on graph computing models is mainly carried out along two lines: one focuses on performance optimization for a certain pattern, while the other provides a common interface for the two patterns (synchronous mode and asynchronous mode) that allows the user to choose according to the algorithmic features. Three issues arise due to the ignorance of graph structure in the above models: low cache hit ratios, high input/output overhead, and slow convergence on large-scale data.
\subsection{Disadvantages of Existing Methods}
To study the performance loss, we select some typical graph processing algorithms: PR (PageRank), CC (Connected Components), SSSP (Single-Source Shortest Paths), BFS (Breadth-First Search) and BC (Betweenness Centrality), along with commonly used graph data sets: amazon-2008, WikiTalk and twitter-2010, to evaluate the performance differences among algorithms and data sets. We set up experiments on an 8-node high-performance cluster interconnected with an Infiniband EDR network (with up to 100Gbps bandwidth), each node containing two Intel Xeon E5-2670 v3 CPUs (12 cores and 30MB L3 cache per CPU) and 128 GB DRAM. We run 100 iterations on Gemini.
Figure~\ref{fig:6} shows the vertex convergence of six data sets with different structures under four algorithms through the iterations, together with the detailed cache miss rate for different algorithms under different data sets. As shown there, for the same data set the structure of the subgraph composed of non-convergent vertices changes continuously; traditional methods fail to reflect this diversity and dynamic change, applying instead a single integrated partitioning and processing method. Such strategies depress the convergence rate of the whole algorithm: during iteration, some less active vertices have already converged while others remain active, which keeps the entire partition loaded uninterruptedly and leads to a decline in the cache hit rate (see Figure~\ref{fig:3}).
\subsection{Optimization Strategy}
We argue that the inefficiency of traditional strategies is mainly illustrated by the following three points:
(1) Static graph partitioning. Structural change caused by vertex convergence is not considered. After one iteration, a large number of vertices in each partition may converge while several vertices remain active, which results in frequent loading of whole cache blocks, eventually wasting memory bandwidth, reducing the cache hit rate, and causing frequent IO.
(2) Unified message processing mechanism. In terms of graph processing, the structural differences among graph partitions are not considered by the message passing models of existing systems; instead, they adopt a unified message processing mechanism. Some graph processing systems, such as $PowerSwitch$~\cite{PowerSwitch}, allow switching execution modes between synchronous and asynchronous, but operate indistinguishably on all blocks. When a synchronous message passing mechanism is adopted, the convergence speed of graph partitions with more active vertices is limited; when asynchronous, a high cache miss rate occurs in partitions with fewer active vertices.
(3) Equal treatment of all graph partitions. All partitions are treated the same, as if given the same weight; nevertheless, it is known that natural graphs are subject to skewed power-law degree distributions, meaning a small portion of the vertices connects the bulk of the edges. Therefore, frequent IO and a high cache miss rate arise when vertices are partitioned evenly.
For the reasons mentioned above, we present a novel graph structure-aware technique that obtains the structure of the non-convergent vertices by analysis and then incrementally partitions the graph. After dynamic partitioning, we schedule the processing order of graph partitions and, for each iteration, adaptively choose an appropriate way to process them. In summary, we make the following contributions:
\begin{itemize}
\item Our partitioning method separates the hot vertices from the cold, giving the vertices with frequent updates and significant changes a higher priority so that they reach convergence faster, eventually reducing the average number of updates an input graph needs to converge.
\item Graph partitions with a dramatic drop-off in active vertices are repartitioned after a specific number of iterations. This method, on the one hand, takes into account the load imbalance caused by the change of graph structure, and on the other hand, controls the computation overhead caused by vertex migration during dynamic partitioning.
\item We put highly active vertices with frequent updates into the same cache blocks, since these vertices will be loaded into memory at the same time. By doing so, we reduce the overhead caused by inactive vertices and their loading times.
\end{itemize}
\section{Dynamic Graph Partition}
Due to the small-world phenomenon, graph vertices obey a power-law distribution: a few vertices connect with the vast majority of vertices, while the vast majority of vertices need to transfer state through these few. Therefore, frequent visits and updates are needed for these core vertices while the other vertices rapidly reach convergence and are accessed with low frequency, giving rise to the problems mentioned above. Consequently, according to the changes in graph structure caused by the convergence of some vertices during iteration, partitions are redivided in this paper: the less active vertices are moved together to decrease their computation frequency through dynamic incremental partitioning, thereby reducing the I/O overhead caused by active vertices and lowering the cache miss rate.
\begin{figure}[h]
\centering
\includegraphics[scale=0.18]{example_graph.png}
\caption{Example graph}
\label{fig:6}
\end{figure}
\subsection{Active Degree and State Degree}
Before getting to the details, let us first introduce the concepts used by our targeted graph processing. As graph data increases dramatically, researchers divide it into several partitions and assign closely related vertices to the same partition in order to accelerate convergence. The input graph is represented by $G = (V, E)$, where $V$ is the set of all vertices and $E$ the set of edges connecting them. Current graph processing systems store the updated messages in the vertices by default, and the edges exist as fixed values; therefore, the vertex degree is regarded as a fixed value in the computation.
\begin{table}
\begin{center}
\setlength\tabcolsep{1pt}
\begin{tabular}{cc}
\toprule
Symbol&Definition\\
\midrule
$D_i (v_i)$&In-degree of vertex $i$\\
$D_o (v_i)$&Out-degree of vertex $i$\\
$D(v_i)$&Degree function of vertex $i$\\
$D_{Max}(V)$&The maximum degree of all vertices\\
$SD(v_i)$&State degree of vertex $i$\\
$AD(v_i)$&Active degree of vertex $i$\\
$I_1$&Iteration interval for re-partitioning the partitions\\
$I_2$&Iteration interval for scheduling cold partitions to compute\\
$T_1$&Threshold on vertex active degree\\
$T_2$&Convergence threshold\\
\bottomrule
\end{tabular}
\caption{Definitions of symbols}
\label{fig:3}
\end{center}
\end{table}
{\bf Degree}\quad In this paper, $D_i(v_i)$ denotes the in-degree of vertex $i$. The larger the in-degree, the more easily the vertex is affected by its neighbors; that is, only when most neighbor vertices converge can the vertex tend to converge. Therefore, in practical computation, vertices with large in-degree should be processed later to reduce the number of unnecessary updates. $D_o(v_i)$ denotes the out-degree of vertex $i$. The greater the out-degree, the more vertices are affected by its updated state; that is, only when the vertex converges can its neighbors tend to converge. Hence, in practical computation, vertices with large out-degree should be processed with priority to accelerate the convergence of the entire graph. Based on the above, this paper puts forward a vertex degree function, used to quantify the static structural features of graph vertices, with the following formula:
\begin{align}
D(v_i) = D_o(v_i) + \alpha*D_i(v_i)
\end{align}
The parameter $\alpha$ ($0.5<\alpha<1$) is adjustable and is tuned dynamically according to the data set in the actual computation in order to achieve optimal performance. Selecting an appropriate value can be challenging; the basis for selection is that $\alpha$ is adjusted according to the structure of the input graph data. In the case of a road-network data set, each vertex has even in-edge and out-edge distributions and most vertices have similar activity; the entire graph is evenly distributed, and $\alpha$ tends to 0.5. However, in the case of a data set of Weibo user followings, a few celebrities have a large number of followers while most people have few, which leads to data skew; this amplifies the influence of vertex out-edges on the convergence of the entire graph, so $\alpha$ tends to 1 accordingly.
{\bf Active Degree}\quad The activity of a vertex depends not only on its degree function but also on the structure of its neighborhood. In order to predict the initial activity of each vertex in an input graph, so that the graph can be optimally partitioned under the condition of load balancing while improving the efficiency of subsequent iterations, this paper quantifies the structural activity of vertices and uses it as a reference factor for the initial partitioning of the data graph. It relies on the in-degree $D_i(v_i)$ and out-degree $D_o(v_i)$ of the vertex as well as the degrees $D(v_k)$ of its neighbors. To this end, we use the hot-cold notion as in HotGraph and present our active degree function, namely the following $AD(v_i)$:
\begin{equation}
AD(v_i) = D(v_i) + \frac{\sum_{v_k}^{V} D(v_k)}{\sqrt{D_{Max}(V)} * D(v_i)}
\end{equation}
\begin{figure*}[!tb]
\centering
\includegraphics[scale=0.15]{Initial_graph_partition.png}
\caption{Initial Chunk-based partitioning}
\label{fig:5}
\end{figure*}
$D(v_k)$ denotes the degrees of the neighbors of $v_i$, while $D_{Max}(V)$ denotes the maximum degree over all vertices. Here we explore the feasibility of extending the HotGraph design with a fine-grained quantification of graph structure; the major difference is decoupling the in-degree and out-degree of vertices. Note that unlike in HotGraph, $D(v_i)$ in this paper acts as a degree function, taking both in-degree and out-degree into consideration and extending the approach to the more common case of directed graphs.
$T_1$ is set as the active degree threshold and is determined from a user-defined sample size and hot-vertex ratio, following $T$ in HotGraph. For example, if the number of vertices is 10000, the user-defined sample size is 1000 and the hot-vertex ratio $R$ is 0.1, then the active degree threshold is $AD(V) = AD(v_{100})$, i.e., the active degree of the 100th vertex in the sample.
Vertices with active degree $AD(v_i)$ greater than $AD(V)$ are marked as hot and stored in hot partitions; vertices with active degree smaller than $AD(V)$ are marked as cold and stored in cold partitions. The hot and cold partitions are physically composed of cache blocks, so the hot and cold vertices are stored in multiple cache blocks. For instance, if vertices with active degree greater than 50 are hot and number 200, the remaining 2000 are cold, and one cache block can store 100 vertices, then there are 2 hot partitions and 20 cold partitions. In particular, vertices with degree 0 neither affect nor are affected by other vertices, and converge within one iteration; this paper uniformly places them into a region with continuous addresses, called the dead partition.
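To make the quantities above concrete, the following sketch (our own illustration; the random graph, $\alpha$, sample size and hot ratio are arbitrary placeholder choices, not values prescribed by this paper) computes $D(v)$ and $AD(v)$, estimates the threshold from a sample, and labels each vertex hot, cold, or dead:
\begin{verbatim}
import math
import networkx as nx

ALPHA = 0.7                  # tunable, 0.5 < alpha < 1
SAMPLE, RATIO = 1000, 0.1    # user-defined sample size and hot ratio

def degree(G, v):            # D(v) = D_o(v) + alpha * D_i(v)
    return G.out_degree(v) + ALPHA * G.in_degree(v)

def active_degree(G, v, d_max):
    d = degree(G, v)
    nbrs = set(G.successors(v)) | set(G.predecessors(v))
    return d + sum(degree(G, u) for u in nbrs) / (math.sqrt(d_max) * d)

G = nx.gnp_random_graph(5000, 0.001, directed=True, seed=7)
dead = {v for v in G if G.in_degree(v) == 0 and G.out_degree(v) == 0}
live = [v for v in G if v not in dead]
d_max = max(degree(G, v) for v in live)
ad = {v: active_degree(G, v, d_max) for v in live}

sample = sorted(ad.values(), reverse=True)[:SAMPLE]
T1 = sample[int(len(sample) * RATIO)]   # threshold from the sample
hot = [v for v in live if ad[v] >= T1]
cold = [v for v in live if ad[v] < T1]
print(len(hot), len(cold), len(dead))
\end{verbatim}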
{\bf State Degree}\quad $AD(V)$ evaluates the activity of vertices according to the characteristics of the input graph structure. As vertices converge during iteration, their activity changes; the state degree $SD(v_i)$ therefore represents the change in the activity of a vertex across iterations. A higher state degree means the vertex state changes more, and more active vertices have more influence on their neighbors: only when such a vertex converges can its neighbors tend to converge. Otherwise, vertices with low state degree would continue to be updated unnecessarily. For different algorithms, the definition and calculation of the state degree differ; we elaborate on the state degree formulas for common graph algorithms in Section 3.3.
The partition state degree $PSD(j)$ is the average of the accumulated state degrees of all vertices in the partition. Since vertices are separated according to their active degree, the state degree of hot vertices is high and that of cold vertices is low, which avoids the situation where a few vertices with high state degree raise the state degree of a partition otherwise full of low-state-degree vertices. It is therefore reasonable to regard the average of the accumulated vertex state degrees as the state degree of the whole partition.
The vertex state degree $SD(v_i)$ and the partition state degree $PSD(j)$ are applied to evaluate the activity of graph vertices and partitions, respectively. So that the whole graph can converge rapidly, and so that vertices with high state degree are loaded together to reduce cache misses, vertices and partitions with high state degree are processed with priority.
The vertex active degree $AD(v_i)$ and state degree $SD(v_i)$ play an important role in vertex separation. This paper details how the input graph is initially partitioned based on vertex activity in Section 3.2, and how it is repartitioned based on vertex state in Section 3.3.
\subsection{Activity-based Partitioning}
In order to reduce the average number of updates that an input graph needs to converge, this paper proposes a structure-aware graph partitioning strategy which is not only general but also scales well: the larger the data, the better the performance. According to the in-degrees and out-degrees of the vertices, the active degree $AD(v_i)$ is calculated, the vertices are sorted by $AD(v_i)$ in descending order, and they are partitioned in this order. The size of each partition is an exact multiple of the cache block size. This partitioning is performed only once, when the data is input, so the vertices are reordered only once in the whole process and the expense of the initial partitioning is amortized over all iterations. It not only helps to improve the cache hit rate but also lessens the amount of computation; it can be shown that the performance improvement far exceeds the extra expense. Moreover, for large-scale input data the per-iteration overhead becomes smaller, which gives the system good scalability.
In the initial iteration, vertices with state degree 0 are identified and put into the dead partition. We create a first vertex degree table to store the in-degrees and out-degrees of the vertices, and a second vertex degree table to store the positions of neighbor vertices. The first and second vertex value tables store the current and previous computation values, respectively; based on the values stored in these two tables, the vertex state degrees and partition state degrees can be obtained. To store the partition IDs and partition state degrees, we create two further tables: the ID table and the partition state degree table. After this, hot and cold partitions are separated based on the heat of the vertices. As soon as all vertices are marked and assigned to specific partitions, the partition state degree table is initialized and the initial partitioning is output.
\begin{algorithm}
\begin{footnotesize}
\caption{Initial Activity-based partitioning}
\label{algorithm:PNPFI}
\begin{algorithmic}[1]
\Procedure{Active\_Based Partition}{\emph{v$_i$, D$_o$(v$_i$),D$_i$(v$_i$)}}
\State expected chunk size $\leftarrow$ remaining edges $/$ remaining partitions
\While{ $V$ has unvisited vertex v$_i$}
\If{ \emph{D$_i$(v$_i$) = $0$ and D$_o$(v$_i$) = $0$}}
\State \emph{P$_{dead}$} $\leftarrow$ \emph{v$_i$}
\EndIf
\If{ \emph{AD(v$_i$) $\geqslant$ T$_1$}}
\State $hot\ edges$ $\leftarrow$ ${hot\ edges} \cup \emph{D$_o$(v$_i$)}$
\If{ ${hot\ edges}$ $>$ expected chunk size}
\State $hot\ partitions$ $\leftarrow$ $hot\ partitions$ $+\ 1$
\EndIf
\State \emph{hot\ partitions} $\leftarrow$ \emph{D$_o$(v$_i$)}
\State \emph{P$_{hot}$} $\leftarrow$ \emph{v$_i$}
\EndIf
\If{ \emph{AD(v$_i$) $\leqslant$ T$_1$}}
\State $cold\ edges$ $\leftarrow$ ${cold\ edges} \cup \emph{D$_o$(v$_i$)}$
\If{ ${cold\ edges}$ $>$ expected chunk size}
\State $cold\ partitions$ $\leftarrow$ $cold\ partitions$ $+\ 1$
\EndIf
\State \emph{cold\ partitions} $\leftarrow$ \emph{D$_o$(v$_i$)}
\State \emph{P$_{cold}$} $\leftarrow$ \emph{v$_i$}
\EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
Figure~\ref{fig:5} gives an example of chunk-based partitioning, showing the vertex set on three nodes with their corresponding dense-mode edge sets. Knowing the active degree of each vertex and sorting them in descending order of $AD(V)$, we separate the vertices into two kinds of partitions, $P_{cold}$ and $P_{hot}$, each made up of equal cache blocks. To read data conveniently, the size of a cache block is designed as an integral multiple of the cache page size. A vertex with state degree 0 neither receives messages from its neighbors nor transfers updates, and converges in a single iteration; for this reason, we filter out such vertices first and process them separately. That is, they are separated out when the vertex state degrees are calculated, stored separately, and computed first in the adaptive scheduling period. Once their single iteration is done, nothing further is done with them, which reduces the expense of later iterations.
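A runnable rendering of the chunk construction in Algorithm~1 can look as follows (a sketch under our own simplifying assumptions: the vertices arrive already sorted by descending active degree, and a chunk closes once its accumulated out-degree reaches the expected chunk size):
\begin{verbatim}
def chunk_partition(vertices, out_degree, num_partitions):
    # vertices: sorted by descending active degree
    total = sum(out_degree[v] for v in vertices)
    remaining_edges, remaining_parts = total, num_partitions
    chunks, current, edges = [], [], 0
    for v in vertices:
        expected = remaining_edges / max(remaining_parts, 1)
        current.append(v)
        edges += out_degree[v]
        if edges >= expected and remaining_parts > 1:
            chunks.append(current)          # close this chunk
            remaining_edges -= edges
            remaining_parts -= 1
            current, edges = [], 0
    if current:
        chunks.append(current)
    return chunks

verts = ["a", "b", "c", "d", "e", "f"]
deg = {"a": 9, "b": 7, "c": 4, "d": 2, "e": 1, "f": 1}
print(chunk_partition(verts, deg, 3))
# [['a'], ['b', 'c'], ['d', 'e', 'f']]
\end{verbatim}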
\begin{figure*}[!tb]
\centering
\includegraphics[scale=0.28]{Dynamic_graph_partition_PG.png}
\caption{Dynamic Structure-based graph partition for PageRank}
\label{fig:7}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[scale=0.26]{cold_and_hot.png}
\caption{The comparison of cold partitions and hot partitions}
\label{fig:coldhot}
\end{figure}
Since the edge data and in/out-degrees are constant, we can preprocess the input data and distinguish hot vertices from cold vertices by the degree function, which helps to increase the cache hit rate, decrease the I/O expense, and lessen the number of iterations. Hot vertices becoming cold is the common trend during iteration; only a few cold vertices become hot under the influence of their neighbors. Heat-based graph partitioning is thus essential to improving system performance. However, as vertices converge during the computation, the graph structure changes, so the initial partitioning can no longer guarantee that highly active vertices are computed with priority during iteration. For this reason, dynamic incremental graph partitioning is proposed: the initial partitions are separated again according to the states of the vertices.
\subsection{Structure-based Partitioning}
After a certain number of iterations, the number of hot vertices plummets as hot vertices continuously converge. To reduce the overhead, the hot partitions are repartitioned, which itself does not incur much expense: only vertices with a marked variance need to be updated, and the time complexity is $O(n)$.
The accumulated vertex state degrees are collected every $I_1$ iterations to obtain the average state degree of each hot and cold partition, and to determine whether there are hot partitions with value smaller than the threshold $T_1$ or cold partitions with value larger than $T_1$. A hot partition with decreasing activity is remarked as cold and, similarly, a cold partition with increasing activity is remarked as hot. Because the partitioning in the previous section is done according to active degree, the state degree of hot vertices is generally high while that of cold vertices is low, so the phenomenon will not occur that a few vertices with high state degree raise the state degree of a partition consisting mostly of low-state-degree vertices. It is therefore reasonable to use the average vertex state degree within a partition as the state degree of the entire partition.
However, for some graph algorithms such as $PageRank$, the graph data shows an overall tendency from dense to sparse, and the case where cold vertices become hot does not arise. In order to reduce the space occupied by the program, a border variable $barrier$ is maintained to separate cold and hot vertices; as hot blocks gradually become cold, the barrier moves accordingly. Compared with the universal partitioning method mentioned above, which requires maintaining a table of tag variables, this method only needs to maintain a single $Vertex\_ID$ variable. However, for graph algorithms such as $SSSP$, the graph data first tends to become dense and then sparse as a whole; that is, cold vertices first become hot and then converge, so a single barrier variable cannot represent the tendency and the universal method proposed above is required.
\begin{algorithm}
\begin{footnotesize}
\caption{Dynamic Structure-based Partition}
\label{algorithm:PNPFI}
\begin{algorithmic}[1]
\Function{Process\_Vertex}{\emph{v$_i$}, \emph{curr[ ]}, \emph{nexr[ ]}}
\State \emph{\#Pragma omp parallel reduction($+$:reducer)}
\While{not all \emph{active vertices} have been visited}
\State \emph{local\_reducer} $\leftarrow$ \emph{local\_reducer} $+$ \emph{Process(v$_i$, curr[v$_i$], next[v$_i$])}
\State \emph{v$_i++$}
\EndWhile
\State \emph{reducer} $\leftarrow$ \emph{reducer} $+$ \emph{local\_reducer}
\State \emph{end Pragma}
\State \emph{global\_reducer} $\leftarrow$ \emph{global\_reducer} $+$ \emph{reducer}
\State \Return{\emph{global\_reducer}}
\EndFunction
\State
\Procedure{Structed\_Based Partition}{\emph{barrier}, \emph{curr$[$ $]$}, \emph{nexr$[$ $]$}}
\If{\emph{iteration} == \emph{I$_1$}}
\For{\emph{P$_{hot}$} and \emph{P$_{cold}$} have all been processed}
\For{\emph{v$_i$} belongs to \emph{Partition i}}
\State Process\_Vertex(\emph{v$_i$}, \emph{curr[ ]}, \emph{nexr[ ]})
\EndFor
\If{\emph{SD(P$_i$)} \emph{$<$} \emph{T$_1$} and \emph{P$_{hot}$}}
\State \emph{P$_{cold}$} $\leftarrow$ \emph{Partition i}
\State $barrier$ $\leftarrow$ $i$
\EndIf
\If{\emph{SD(P$_i$)} \emph{$>=$} \emph{T$_1$} and \emph{P$_{cold}$}}
\State \emph{P$_{hot}$} $\leftarrow$ \emph{Partition i}
\EndIf
\EndFor
\EndIf
\EndProcedure
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
Figure~\ref{fig:7} shows the process of dynamic graph partitioning for $PageRank$. From the accumulated state degree of each vertex, the average state degree of each hot partition is calculated; the partitions whose average state degree is less than $T_1$ are found, and the value of $barrier$ is then changed to the ID of the first vertex of that region. This method only re-separates the hot vertices and has no effect on the cold vertices. As the computation proceeds, the hot vertices become fewer and fewer, so the scale of repartitioning obviously shrinks.
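For algorithms like $PageRank$, where partitions only cool down, the barrier update described above reduces to a single forward scan. A minimal sketch follows (our own illustration, with the barrier simplified to a partition index rather than a $Vertex\_ID$):
\begin{verbatim}
def update_barrier(partition_sd, barrier, T1):
    # partition_sd: average state degree per partition in layout order;
    # the hot region starts at index `barrier`. Advance the barrier
    # past every partition that has cooled below T1.
    while barrier < len(partition_sd) and partition_sd[barrier] < T1:
        barrier += 1
    return barrier

print(update_barrier([0.2, 0.1, 3.5, 2.0], 0, 1.0))  # -> 2
\end{verbatim}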
When computing $PageRank$, the rank values propagate along the edges and are divided among them, so the result at a vertex is related to its in-edges and out-edges; the in-degree and out-degree directly affect vertex convergence. Hence, the difference in $Rank$ can be used to evaluate the activity of a vertex. That is, for $PageRank$, the state degree is defined as the accumulation of the differences between the current and previous results. As an example, assume the first result defaults to 1, with no accumulation yet. If the second result is 5, the difference is 4 and the accumulation is 4. If the third result is 7, the difference from the previous result 5 is 2, so the accumulation becomes $4+2=6$.
\begin{equation}
\Delta_{PG} = \sum \left|Rank_{curr} - Rank_{next}\right|
\end{equation}
Figure~\ref{fig:8} shows the process of dynamic graph partitioning for $SSSP$. From the accumulated state degree of each vertex, the average state degree of each partition is calculated: partitions whose average state degree is no less than $T_1$ are marked as hot, and the others as cold.
For $SSSP$ the same method applies, except that the shortest-path calculation is an accumulative quantity, so evaluating activity by the difference is not suitable. Instead, the smaller of the two distance values between consecutive results is used, and its accumulation decides whether the activity of a vertex has changed. Consequently, for $SSSP$ the state degree is defined as the accumulation of the smaller value between the current and previous results. The analogous definition holds for $CC$, which takes a maximum: there the state degree is the accumulation of the larger value between the current and previous results. We omit a separate example for $CC$.
\begin{equation}
\Delta_{SSSP} = min\{Edge\_data_{curr} , Edge\_data_{next}\}
\end{equation}
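Both state-degree rules can be accumulated in a few lines per iteration; the sketch below is our own illustration, with the vertex values held in two arrays following the curr/next convention of the pseudocode above:
\begin{verbatim}
def accumulate_state_degree(sd, curr, nxt, algorithm):
    # Add this iteration's contribution to each vertex state degree.
    for v in range(len(sd)):
        if algorithm == "pagerank":   # |Rank_curr - Rank_next|
            sd[v] += abs(curr[v] - nxt[v])
        elif algorithm == "sssp":     # min of the two distance values
            sd[v] += min(curr[v], nxt[v])
        elif algorithm == "cc":       # max of the two label values
            sd[v] += max(curr[v], nxt[v])
    return sd
\end{verbatim}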
\begin{figure*}[!tb]
\centering
\includegraphics[scale=0.28]{Dynamic_graph_partition_SSSP.png}
\caption{Dynamic Structure-based graph partition for SSSP}
\label{fig:8}
\end{figure*}
The interval of dynamic structure-based partitioning is positively correlated with the number of iterations: as the iteration count goes up, the interval between repartitionings increases. Under the condition of guaranteeing correct algorithm results and avoiding extra expense, the convergence rate is improved and the cache miss rate decreased.
\section{Adaptive Partition Scheduling}
In the adaptive scheduling period, because vertices converge at different rates and their state degrees differ, the vertices that change more frequently and by larger amounts are given priority when the accumulated state degrees are computed; this increases the convergence rate of the graph vertices and shortens the running time of the algorithm. Every $I_1$ iterations the hot partitions are re-separated. After separation, if hot partitions remain, we adaptively schedule hot and cold partitions for computation; if no hot partition remains but the whole graph has not yet converged, the cold partitions with the highest state degree are computed.
When hot partitions remain after repartitioning, in each iteration the $n$ cold partitions and $m$ hot partitions with the highest state degrees are processed. The value of $m+n$ matches the number of CPU threads; for example, if the number of CPUs is 10, $m+n$ is 10. In an $I_2$ iteration, the values of $m$ and $n$ are decided by the algorithm and usually satisfy $m>n$: each time, the $m$ highest-state-degree cache partitions are chosen from the hot partitions and the $n$ highest-state-degree cache partitions from the cold partitions. Otherwise, only the highest-state-degree hot partitions are used; thus $n$ equals 0 and $m$ equals the number of CPUs, i.e. 10 in the example. Within a partition, vertices are stored in ID order, so computing a specific partition means reading its vertices in ascending ID order.
The sum of the state degrees of all partitions is computed from the values stored in the partition state degree table. The smaller the state degree, the closer the vertices are to convergence: when the sum of the partition state degrees is smaller than a minimum value $T_2$, the entire graph is regarded as converged. Therefore, when the sum of the state degrees of all partitions is smaller than the convergence threshold, the computation ends and the result is output. The specific value of the convergence threshold is defined by the user, with default value 0.000001.
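Putting the scheduling rule and the convergence test together, a per-iteration driver might look like the following sketch (our own illustration; $m$, $n$, $I_2$ and the partition lists are placeholders):
\begin{verbatim}
import heapq

T2 = 1e-6   # convergence threshold, default 0.000001

def schedule(hot, cold, psd, m, n, iteration, I2):
    # Pick the partitions to process this iteration.
    key = lambda p: psd[p]
    if hot:
        if iteration % I2 == 0:   # an I2 iteration: mix in cold partitions
            return heapq.nlargest(m, hot, key=key) + \
                   heapq.nlargest(n, cold, key=key)
        return heapq.nlargest(m + n, hot, key=key)
    return heapq.nlargest(m + n, cold, key=key)

def converged(psd):
    # The whole graph is regarded as converged when the sum of all
    # partition state degrees falls below T2.
    return sum(psd.values()) < T2
\end{verbatim}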
According to a preferred mode of execution, the stated graph processing method further includes the step of judging whether it is the initial iteration. In the first iteration, the dead partition is computed first, then the blocks with the highest state degree in the hot partitions are scheduled for computation, and the convergence of the entire graph is determined after the iteration based on the sum of the state degrees of all partitions. If the entire graph does not converge, the subsequent iterations proceed.
\begin{algorithm}
\begin{footnotesize}
\caption{Adaptive Partition Scheduling}
\label{algorithm:PNPFI}
\begin{algorithmic}[1]
\Function{Process\_Active}{\emph{m, n}, Partition \emph{P$_{hot}$, P$_{cold}$}}
\State \emph{threads} = \emph{numa\_num\_configured\_cpus()}
\For{each Partition \emph{p}}
\For{Vertex \emph{v$_i$} belongs to Partition \emph{p}}
\State SD(p) $\leftarrow$ SD(p) + Process\_Vertices(Process($v_i$), V)
\EndFor
\EndFor
\If{ Still remains \emph{P$_{hot}$}}
\If{iterations == I$_2$}
\State \emph{actives vertices} $\leftarrow$ \emph{m} * \emph{P$_{hot}$} + \emph{n} * \emph{P$_{cold}$}
\Else
\State \emph{actives vertices} $\leftarrow$ \emph{threads} * \emph{P$_{hot}$}
\EndIf
\EndIf
\If{ Only remains \emph{P$_{cold}$}}
\State \emph{actives vertices} $\leftarrow$ \emph{threads} * \emph{P$_{cold}$}
\EndIf
\State \Return{\emph{actives vertices}}
\EndFunction
\State
\Procedure{Scheduling}{\emph{active vertices}}
\For{each untraversed Partition \emph{p}}
\State Send \emph{edge} in Partition \emph{p} to other nodes
\EndFor
\For{each \emph{edge} in Partition \emph{p} not yet received}
\State Receive \emph{edge} in Partition \emph{p} from other nodes
\EndFor
\If{ \emph{P$_{hot}$}}
\State master $\leftarrow$ mirror vertex update
\EndIf
\If{ \emph{P$_{cold}$}}
\State mirror $\leftarrow$ master vertex update
\EndIf
\EndProcedure
\end{algorithmic}
\end{footnotesize}
\end{algorithm}
\begin{figure}[tb]
\centering
\includegraphics[scale=0.36]{Adaptively_schedule.png}
\caption{Adaptive Partition Scheduling}
\label{fig:9}
\end{figure}
One of the challenges of adaptive scheduling is to ensure that the hot partitions are sufficiently computed. When the number of hot partitions is greater than the number of machine threads, a single iteration cannot cover all hot partitions; scheduling must therefore ensure that, once the activity of one hot partition decreases, the hotter partitions can still be computed. It should be noted that the process by which a hot partition is repeatedly computed and its activity declines towards that of a cold partition is a long one: due to the complexity of their structure, hot partitions require more computation to converge than cold partitions, even as the number of computations increases. When all hot partitions tend to converge, the entire graph tends to converge and the graph algorithm is close to the end of its computation.
In addition, regarding the convergence threshold: (1) whether the algorithm actually converges has nothing to do with the convergence threshold; $T_2$ is just a value for judging whether the current computation has converged, since the completion time cannot be known while the algorithm is running. The state degree is therefore sampled at regular intervals and compared with $T_2$. Consequently, it is not the case that the smaller $T_2$ is set, the faster the algorithm converges. (2) Setting different convergence thresholds for different algorithms or applications does not make much sense, because the state degree of every algorithm reaches 0 at convergence, and going from 0.000001 to 0 takes a relatively long time. To improve performance, a state degree of 0.000001 is considered as convergence, and the convergence threshold is fixed at 0.000001, keeping the algorithm result within the tolerance range.
\section{Related Work}
Previous graph processing systems, whether distributed~\cite{Pregel, X-Pregel, GraphLab, PowerGraph, Giraph, PowerSwitch, HybirdGraph, Gemini} or on a single multi-core platform~\cite{GraphChi, TurboGraph, VENUS, GridGraph, NXgraph, Mosaic}, have done plenty of work on effective graph processing, such as load balancing and reducing communication overhead. Most of those approaches treat graph data as a black box: graphs are managed as a combination of vertices and edges (i.e., vertex-centric or edge-centric) rather than through their logical structure, whose differences generate performance variability. In this section, we give a brief summary of the related categories of prior work.
\textbf{Distributed Systems:} Pregel~\cite{Pregel} divides the graph by hashing the vertex id, which ensures load balancing. Yet Pregel uses a message communication paradigm, and the number of messages that need to be processed becomes huge when vertices have many adjacent points. Moreover, Pregel performs inefficiently on power-law graphs and only allows global synchronization. X-Pregel~\cite{X-Pregel} optimizes Pregel's messaging mechanism by reducing the number of messages that need to be delivered in every iteration. Giraph~\cite{Giraph} adds more features compared to Pregel, including master computation, out-of-core computation, etc., but the poor locality of its data accesses limits its effectiveness.
GraphLab~\cite{GraphLab} follows the vertex-centric GAS model, but its partitions are still obtained by random division; moreover, its shared-memory storage strategy may become a performance bottleneck for large graphs. PowerGraph~\cite{PowerGraph} works well on power-law graphs, but like GraphLab considers no special optimizations for speeding up I/O access. PowerSwitch~\cite{PowerSwitch} proposes an adaptive graph processing method based on PowerGraph, adaptively switching between synchronous and asynchronous processing modes according to the number of vertices processed per unit time to achieve the best performance. However, it treats all vertices of a graph the same, without handling convergence according to the vertices during iteration. PREDIcT~\cite{PREDIcT} proposes an experimental methodology for predicting the runtime of iterative algorithms, which optimizes cluster resource allocation among multiple workloads of iterative algorithms.
Maiter~\cite{Maiter} proposes delta-based accumulative iterative computation, which reduces costs and accelerates calculations. HybridGraph~\cite{HybirdGraph} puts forward an algorithm that adaptively switches between pull and push, focusing on performing graph analysis on a cluster IO-efficiently. Compared to GraphLab, PowerGraph employs a vertex-cut mechanism to reduce the network cost of sending requests and transferring messages, at the expense of the space cost of vertex replication. GrapH~\cite{GrapH} focuses on minimizing overall communication costs by using an adaptive edge migration strategy to avoid frequent communication over expensive network links. Gemini~\cite{Gemini} is a computation-centric distributed graph processing system that uses a hybrid pull/push approach to facilitate state updates and messaging of graph vertices.
\textbf{Single-machine Systems:} GraphChi~\cite{GraphChi} is a vertex-centric graph processing system that improves IO access efficiency with its Parallel Sliding Windows processing strategy. However, the outgoing edges of all vertices have to be loaded into memory before computation, resulting in unnecessary transfer of disk data; also, all memory blocks have to be scanned when accessing neighboring vertices, which leads to inefficient graph traversal. TurboGraph~\cite{TurboGraph} proposes a Pin-And-Slide model to solve this problem. PAS has no delay in dealing with local graph data, but only applies to certain parallel algorithms. Compared to the two above, VENUS~\cite{VENUS} extends to nearly every algorithm and enables streamlined processing, performing computation while the data is streaming in. Moreover, it uses a fixed buffer to cache the v-shard, which reduces random IO.
GridGraph~\cite{GridGraph} uses a 2-level hierarchical partitioning scheme to reduce the amount of data transfer, enable streamlined disk access, and maintain locality. But it requires more disk data transfer with its TurboGraph-like updating strategy, and it cannot fully utilize the parallelism of multi-thread CPUs without sorted edges. NXgraph~\cite{NXgraph} proposes the Destination-Sorted Sub-Shard (DSSS) structure to store the graph, with three updating strategies: SPU, DPU and MPU; it adaptively chooses a suitable one to fully utilize the memory space and reduce the amount of data transfer. It achieves higher locality than the v-shards in VENUS~\cite{VENUS}, reduces the amount of data transfer, and enables a streamlined disk access pattern. Mosaic~\cite{Mosaic} combines fast host processors for concentrated memory-intensive operations with coprocessors for compute- and I/O-intensive components.
Traditional graph systems, whether shared-memory or distributed, fail to take into consideration the variability of graph structure, which appears through the continual convergence of vertices during iteration and plays a significant role in program optimization. In this light, we present a novel graph structure-aware technique that provides adaptive graph partitioning and processing scheduling according to the variation of the graph structure. Our strategy reduces the overhead caused by inactive vertices and their loading times, and speeds up the convergence rate.
\section{Conclusion}
In this paper, we adopted a structure-centric distributed graph processing method. Through graph structure perception, the structural features of unconverged vertices are obtained incrementally, and suitable graph processing methods are scheduled adaptively. Our development reveals that (1) dynamic incremental partitioning based on vertex degree and state degree can significantly reduce I/O resource overhead and cache miss rate, and (2) the computation and communication overhead of less active vertices can be reduced by setting priorities for graph partitions and scheduling them in a predetermined order, which accelerates algorithm convergence as well. Our experimental results on a variety of data sets with different graph structural features demonstrate the efficiency, effectiveness and scalability of our approach in comparison to state-of-the-art graph processing systems.
\bibliographystyle{IEEEtran}
|
1,941,325,220,429 | arxiv | \section{Introduction}
Let $(X,\mathcal{T})$ be a topological dynamical system (TDS for short), where $(X,d)$ is a compact
metric space with compatible metric $d$ and $\mathcal{T}$ a continuous $L$-action on $X$.
The set $\mathcal{M}(X)$ denotes the compact convex set of all Borel probability measures, and
$\mathcal{M}(X,\mathcal{T})$ the compact convex set of $\mathcal{T}-$invariant Borel probability measures. We denote by $\mathbb{Z}_+$ the
set of all non-negative integers.
Entropy is one of the most widely used notions in the characterization of the complexity of topological dynamical systems.
Among the notions of entropy, there are two classical ones which are topological entropy
and measure-theoretical entropy. In 1958 Kolmogorov \cite{KL} introduced the definition of
measure-theoretical entropy for an invariant measure and in 1965 Adler \textit{et al} \cite{AK}
defined topological entropy. Through the work of Goodwyn \cite{GY}, Goodman \cite{GA} and Misiurewicz \cite{MI},
a classical variational principle was established. The topological variational principle
establishes that topological entropy is supremum over all $\mu\in\mathcal{M}(X,T)$ of
the measure-theoretical entropy.
In 1973 Bowen \cite{BO} introduced the topological entropy $h_{top}^B(T,K)$ for any set $K$ in a TDS $(X,T)$
resembling Hausdorff dimension. He also proved the remarkable result that
$h_{top}^B(T,G_{\mu})={h}_\mu(T)$ for ergodic measure~$\mu$,
where $G_{\mu}$ denotes the set of \textit{generic} points of $\mu$, and ${h}_\mu(T)$ is
the measure-theoretical entropy. Bowen's topological entropy plays a key role in topological dynamical systems and
dimension theory, see Pesin \cite{PE}. In 1983 Brin and Katok \cite{BK} gave a topological
version of the Shannon-McMillan-Breiman theorem with a local decomposition of
the measure-theoretical entropy. Recently, Feng and Huang \cite{FH} gave a certain variational
relation between Bowen's topological entropy and measure-theoretic entropy for arbitrary
non-invariant compact sets, i.e.
\begin{equation*}
h_{top}^B(T,K)=\sup\{\underline{h}_\mu(T):\mu\in \mathcal{M}(X),\mu(K)=1\}
\end{equation*}
where $K$ is any non-empty compact subset of $(X,T)$, $\underline{h}_\mu(T)$ is
the measure-theoretical lower entropy of Borel probability measure $\mu$ (see \cite{BK,FH}).
The name \textbf{slow entropy} was introduced into dynamical systems by
Katok and Thouvenot \cite{KT}, and by Hochman \cite{HO} for $\mathbb{Z}^k$-actions.
In this paper, we use the Carath\'{e}odory dimension structure to study slow entropy. Inspired by Bowen's \cite{BO} definition of topological entropy, we define a new topological slow entropy $h_{top}^S(\mathcal{T},Z)$ of any subset $Z\subseteq X$ for higher-dimensional $\mathbb{Z}^d$-actions in the manner of Hausdorff dimension; it varies, or converges, more slowly than Bowen's entropy. We also give a modification
of Brin and Katok's definition of measure-theoretical lower entropy, i.e. a measure-theoretical slow entropy $\underline{h}_\mu^S(\mathcal{T})$ for higher-dimensional $\mathbb{Z}^d$-actions. We prove that if the Bowen entropy is positive, the topological slow entropy must be infinite.
We prove a variational principle for slow entropies:
\begin{equation*}
h_{top}^S(\mathcal{T},K)=\sup\{\underline{h}_\mu^S(\mathcal{T}):\mu\in \mathcal{M}(X),\mu(K)=1\},
\end{equation*}
where $K$ is any non-empty compact subset of $X$.
The paper is organized as follows. In Sect. 2 we give the
definition of topological slow entropy for $\mathbb{Z}^d-$actions in the form of Hausdorff dimension, topological slow entropy using open covers, and some properties. In Sect. 3 some examples in a symbolic dynamical system are given.
In Sect. 4 measure-theoretical slow entropy for $\mathbb{Z}^d$-actions is presented. We prove a variational principle: the topological slow entropy is the supremum, over all Borel probability measures, of
the measure-theoretical slow entropy.
\section{Slow entropies and related properties}
In this section, we give definitions and some related properties of two slow entropies of subsets in a topological dynamical system: topological slow entropy for $\mathbb{Z}^d$-actions in the form of dimension, and topological slow entropy using open covers for $\mathbb{Z}_+$-actions.
Firstly, we introduce the notion of $L$-action found in \cite{YA} for convenience. Let $(X,\mathcal{T})$ be a TDS.
A family of continuous transformations
$\mathcal{T}=\{T^h: X\rightarrow X\}_{h \in L}$ is called a continuous $L$-action, with $L=\mathbb{Z}^d$ or $\mathbb{Z}_+^d$, $d\geq1$,
if $\mathcal{T}$ satisfies $T^{h+k}=T^h\circ T^k$ for $h,k\in L$, and $T^0$ is the identity map.
For $k\in L$ and $H\subset L$, we set $H+k=\{h+k: h\in H\}$. For $n\in \mathbb{Z}_+$, let
$$H_n:=\{h=(h_1,h_2,\cdots,h_d)\in L:|h_i|<n,1\leq i\leq d\},$$
and $\lambda_n:=\sharp H_n$, where $\sharp G$ denotes the cardinality of the set $G$.
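As a direct count from the definition of $H_n$: if $L=\mathbb{Z}^d$, each coordinate $h_i$ ranges over $\{-(n-1),\ldots,n-1\}$, so $\lambda_n=(2n-1)^d$; if $L=\mathbb{Z}_+^d$, each coordinate ranges over $\{0,1,\ldots,n-1\}$ and $\lambda_n=n^d$. In either case $\log\lambda_n\sim d\log n$ as $n\rightarrow\infty$.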
Secondly, the definition of the Bowen ball for $\mathbb{Z}^d$-actions is as follows: for
$n\in \mathbb{N},x\in X,\epsilon>0$, we denote by
$B_{n}(x,\epsilon)$ the open Bowen ball of radius $\epsilon>0$ in the metric $d_{n}$ around
$x,$ i.e.
\begin{equation*}
d_n(x,y)=\max_{h\in H_n}d(T^hx,T^hy),
\end{equation*}
\begin{equation*}
B_n(x,\epsilon)=\{y \in X:d_n(x,y)< \epsilon\}.
\end{equation*}
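For example, when $d=1$ and $L=\mathbb{Z}_+$, the metric $d_n$ reduces to the usual Bowen metric $d_n(x,y)=\max_{0\leq j<n}d(T^jx,T^jy)$, and $B_n(x,\epsilon)$ is the classical Bowen ball.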
\subsection{Dimension definition of topological slow entropy.}
We now give the definition of slow entropy with $\mathbb{Z}^d$ or $\mathbb{Z}_+^d$-actions. Let $(X,d)$ be a compact metric space
and $\mathcal{T}$ be a continuous $L$-action on $X$ with $L=\mathbb{Z}^d$ or $\mathbb{Z}_+^d$, $d\geq1$.
For $Z\subset X,s\geq0,N\in\mathbb{N},\epsilon>0$. Define
\begin{equation*}
M^\mathcal{T}(Z,s,N,\epsilon)=\inf_{\mathcal{G}}\bigg\{\sum\limits_{i}\bigg(\frac{1}{\lambda_{n_i}}\bigg)^s\bigg\},
\end{equation*}
where the infimum is taken over all finite or countable families $\mathcal{G}:=\{B_{n_i}(x_i,\epsilon)\}$ such that
$x_i\in X, n_i\geq N$ and $\bigcup_{i}B_{n_i}(x_i,\epsilon)\supset Z$.
Clearly, $M^\mathcal{T}(Z,s,N,\epsilon)$ does not decrease as $N$ increases and $\epsilon$ decreases, hence the
following limit exists:
\begin{equation*}
M^\mathcal{T}(Z,s,\epsilon)=\lim\limits_{N\rightarrow\infty}M^\mathcal{T}(Z,s,N,\epsilon),
\end{equation*}
Then we can easily see that there exists a critical value $h^S_{top}(\mathcal{T},Z,\epsilon)\geq0$ such that
\begin{equation*}
h^S_{top}(\mathcal{T},Z,\epsilon)=\inf\{s:M^\mathcal{T}(Z,s,\epsilon)=0\}=\sup\{s:M^\mathcal{T}(Z,s,\epsilon)=\infty\}.
\end{equation*}
Finally we set
\begin{equation*}
h^S_{top}(\mathcal{T},Z)=\lim\limits_{\epsilon\rightarrow0}h^S_{top}(\mathcal{T},Z,\epsilon),
\end{equation*}
which is called the \textit{topological slow entropy} of $Z$ with respect to $\mathcal{T}$.
This quantity $h^S_{top}(\mathcal{T},\bullet)$ is defined in a way which resembles the Hausdorff dimension, and it satisfies most of the same properties as Bowen entropy~\cite{BO}
for $\mathbb{Z}_+$-actions. For convenience, we write $M(Z,s,N,\epsilon)$ and $M(Z,s,\epsilon)$ instead of $M^\mathcal{T}(Z,s,N,\epsilon)$ and $M^\mathcal{T}(Z,s,\epsilon)$ when no confusion can arise.
\begin{prop}
\noindent{\rm (i)} If $d=1$, then $h^S_{top}(T^m,Z)=h^S_{top}({T},Z)$ for $m>0;$

\noindent{\rm (ii)} If $Z_1\subset Z_2\subset X$, then $h^S_{top}(\mathcal{T},Z_1)\leq h^S_{top}(\mathcal{T},Z_2);$

\noindent{\rm (iii)} $h^S_{top}(\mathcal{T},\bigcup_{i=1}^{\infty}Z_i)=\sup_{i}h^S_{top}(\mathcal{T},Z_i)$; if $Z\subset X$ is a countable set, then $h^S_{top}(\mathcal{T},Z)=0.$
\end{prop}
\begin{proof}
We only prove item (i); the proofs of the others are omitted, since they follow from the definition without difficulty.
Suppose $\mathcal{G}=\{B_{n_i^T}(x_i,\epsilon)\}$ is a finite or countable family with $x_i\in X$, $n_i^T\geq N$ and $\bigcup_i B_{n_i^T}(x_i,\epsilon)\supset Z$, taken with respect to $T$ in the case $d=1$.
Suppose
\begin{eqnarray*}
B_{n_i^{T^m}}(x_i,\epsilon)&=&
\{y\in X: \text{max}_{0\leq j<n_i^{T^m}}d(T^{mj}x_i,T^{mj}y)<\epsilon\}\\
&= & \bigcap_{j=0}^{n_i^{T^m}-1}T^{-mj}B(T^{mj}x_i,\epsilon).
\end{eqnarray*}
We may suppose $n_i^{T^m}\geq N$ as well; then $$n_i^{T^m}= \Big\lceil\frac{n_i^T}{m}\Big\rceil\geq \max\Big\{N,\frac{n_i^T}{m}-\frac{m-1}{m}\Big\},$$
where $\lceil a\rceil$ denotes the smallest integer not less than the real number $a$. Obviously,
$$B_{n_i^{T^m}}(x_i,\epsilon)\supset \bigcap_{p=0}^{n_i^T-1}T^{-p}B(T^px_i,\epsilon)= B_{n_i^T}(x_i,\epsilon),$$
which implies ~$\bigcup_i B_{n_i^{T^m}}(x_i,\epsilon)\supset Z$. Then
\begin{eqnarray*}
\sum_i\bigg(\frac{1}{n_i^{T^m}}\bigg)^s&\leq&
\sum_i\bigg(\frac{m}{n_i^T-(m-1)}\bigg)^s\bigg(\frac{n_i^T}{n_i^T}\bigg)^s\\
&\leq&m^s\bigg(\frac{N}{N-(m-1)} \bigg)^s\sum_i\bigg(\frac{1}{n_i^{T}}\bigg)^s.
\end{eqnarray*}
Taking the infimum of both sides, and we get
\begin{equation*}
M^{T^m}(Z,s,N,\epsilon)\leq m^s\bigg(\frac{N}{N-(m-1)} \bigg)^sM^T(Z,s,N,\epsilon).
\end{equation*}
By letting $N\rightarrow +\infty$, then $ M^{T^m}(Z,s,\epsilon)\leq m^s M^T(Z,s,\epsilon)$, and then
$$h^S_{top}(T^m,Z,\epsilon)\leq h^S_{top}(T,Z,\epsilon).$$
Letting $\epsilon\rightarrow0$, we have $h^S_{top}(T^m,Z)\leq h^S_{top}(T,Z)$.
Next, we prove the reverse inequality. Because $T$ is uniformly continuous, for every $\epsilon>0$ there exists $\delta>0$ such that
\begin{equation*}
d(x,y)<\delta \Rightarrow \text{max}_{0\leq j\leq m-1}d(T^jx,T^jy)<\epsilon.
\end{equation*}
Suppose $\mathcal{G}=\{B_{n_i^{T^m}}(x_i,\delta)\}$, with $x_i\in X$ and $n_i^{T^m}\geq N$, is a family of Bowen balls corresponding to $T^m$ with
$\bigcup_{B\in \mathcal{G}}B\supset Z$; then
\begin{equation*}
\bigcup_{i}B_{mn_i^{T^m}}(x_i,\epsilon)\supset Z.
\end{equation*}
In fact, for arbitrary $z\in Z$, there exists some ${n_{i_0}^{T^m}}\geq N$ such that $z\in B_{n_{i_0}^{T^m}}(x_{i_0},\delta)$, i.e.
\begin{equation*}
d(T^{mk}z,T^{mk}x_{i_0})<\delta, \ \ 0\leq k\leq n_{i_0}^{T^m}-1.
\end{equation*}
So \begin{equation*}
d(T^{m{k}+j}z,T^{m{k}+j}x_{i_0})<\epsilon, \ \ 0\leq k\leq n_{i_0}^{T^m}-1, \ \ 0\leq j\leq m-1.
\end{equation*}
then $z\in B_{mn_{i_0}^{T^m}}(x_{i_0},\epsilon)$, and then $\bigcup B_{mn_{i}^{T^m}}(x_{i},\epsilon)\supset Z$.
We notice that the family $\{B_{mn_i^{T^m}}(x_i,\epsilon)\}$ is itself one of the finite or countable families $\mathcal{G}^{'}=\{B_{n_j^T}(x_j,\epsilon)\}$ covering $Z$. So we get
\begin{equation*}
\inf_{\mathcal{G}}\sum_i\bigg(\frac{1}{n_i^{T^m}} \bigg)^s= m^{-s}\inf_{\mathcal{G}}\sum_i\bigg(\frac{1}{mn_i^{T^m}} \bigg)^s\geq m^{-s}\inf_{\mathcal{G}^{'}}\sum_j\bigg(\frac{1}{n_j^{T}} \bigg)^s.
\end{equation*}
and
\begin{equation*}
M^{T^m}(Z,s,N,\delta)\geq m^{-s}M^T(Z,s,mN,\epsilon).
\end{equation*}
Letting $N\rightarrow +\infty$, hence $ M^{T^m}(Z,s,\delta)\geq m^{-s} M^T(Z,s,\epsilon)$. And noticing $\delta,\epsilon$ arbitrary,
we get $h^S_{top}(T^m,Z)\geq h^S_{top}(T,Z)$.
\end{proof}
We denote by $h^B_{top}(T,\bullet)$ the Bowen topological entropy defined by using Bowen balls. For details, see [18, Page 74].
\begin{prop}
\noindent{\rm (i)} For a $\mathbb{Z}_+$-action and $Z\subset X$, $h^S_{top}(T,Z)\geq h^B_{top}(T,Z);$

\noindent{\rm (ii)} For a $\mathbb{Z}_+$-action, if $h^B_{top}(T,Z)>0$, then $h^S_{top}(T,Z)=+\infty$.
\end{prop}
\begin{proof}
(i) is obvious. We only prove (ii). Suppose $h_{top}^B(T,Z)=a>0$. From the definition of Bowen entropy,
for arbitrary $0<\delta<a$, there exists $\epsilon_0>0$ such that for arbitrary $0<\epsilon<\epsilon_0$ we have
\begin{equation*}
0<a-\delta< h_{top}^B(T,Z,\epsilon)<a+\delta,
\end{equation*}
which implies $M^B(Z,a-\delta,\epsilon)=+\infty$. Because $M^B(Z,a-\delta,N,\epsilon)$ increases with $N$,
\begin{equation*}
M^B(Z,a-\delta,N,\epsilon)\rightarrow+\infty,~ as ~N\rightarrow\infty.
\end{equation*}
Then, for any family $\mathcal{G}$ of Bowen balls covering $Z$ with all $n_i\geq N$, $\sum_i e^{-n_i(a-\delta)}\rightarrow+\infty$ as $N\rightarrow\infty$. Fix an arbitrary real number $K>0$.
Because $\frac{e^{-n_i(a-\delta)}}{(\frac{1}{n_i})^K}=\frac{n_i^K}{e^{n_i(a-\delta)}}\rightarrow 0$ as $n_i\rightarrow\infty$,
for arbitrary $\varepsilon_1>0$ there is some $n^{'}$ such that for $n_i>n^{'}$,
$\bigg(\frac{1}{{n_i}}\bigg)^K>\frac{1}{\varepsilon_1}e^{-n_i(a-\delta)}$.
Hence $\sum_i\bigg(\frac{1}{n_i}\bigg)^K\rightarrow+\infty$, and thus
\begin{equation*}
\inf_{\mathcal{G}}\bigg\{\sum\limits_{i}\bigg(\frac{1}{{n_i}}\bigg)^K\bigg\}\rightarrow+\infty,
\end{equation*}
that is, $M^S(Z,K,N,\epsilon)\rightarrow+\infty$ as $N\rightarrow\infty$, and hence $M^S(Z,K,\epsilon)=+\infty.$
So \begin{equation*}
h^S(T,Z,\epsilon)\geq K.
\end{equation*}
Because $\epsilon$ and $K$ are arbitrary, hence $h^S(T,Z)=+\infty$. Thus we complete the proof.
\end{proof}
We now give an equivalent definition of $h^S_{top}({T},\bullet)$ for the case $d=1$, which comes from Bowen \cite{BO}. Let $(X,T)$ be a TDS and
$\mathcal{U}\in C_X^o$ a finite open cover of $X$. For $Z\subset X$ and $K\subset X$, let
\begin{equation*}
n^T_{\mathcal{U}}(K)=
\begin{cases}
0& \text{if $K\nsucceq \mathcal{U}$},\\
+\infty & \text{if $T^iK\succeq \mathcal{U}$ for all $i\in \mathbb{Z}_+$},\\
k& \text{otherwise, where $k=\max\{j\in \mathbb{N}: T^i K\succeq \mathcal{U},\ i =0,1,\cdots,j-1\}$}.
\end{cases}
\end{equation*}
For $k\in \mathbb{N}$, we define
\begin{equation*}
\mathfrak{G}(T,\mathcal{U},Z,k)=\{\mathcal{E}:\mathcal{E}\text{ is a countable family of subsets of }X,\ Z\subseteq \bigcup\mathcal{E},\ \mathcal{E}\succeq\mathcal{U}_0^{k-1} \}.
\end{equation*}
Then for each $s\in \mathbb{R}$, set
\begin{equation*}
m_{T,\mathcal{U}}(Z,s,k)=\inf_{\mathcal{E}\in \mathfrak{G}(T,\mathcal{U},Z,k)}\sum_{E\in \mathcal{E}}\bigg(\frac{1}{n^T_{\mathcal{U}}(E)}\bigg)^s.
\end{equation*}
We write $m_{T,\mathcal{U}}(Z,s,k)=0$ for the case $\emptyset=\mathcal{E}\in \mathfrak{G}(T,\mathcal{U},Z,k)$ by convention. When $k\rightarrow\infty$,
$m_{T,\mathcal{U}}(Z,s,k)$ increases, therefore we could define
\begin{equation*}
m_{T,\mathcal{U}}(Z,s)=\lim_{k\rightarrow\infty} m_{T,\mathcal{U}}(Z,s,k).
\end{equation*}
We notice that if $s_1\geq s_2$, then $m_{T,\mathcal{U}}(Z,s_1)\leq m_{T,\mathcal{U}}(Z,s_2)$. We define
the \textit{Bowen topological slow entropy} $h^{BS}_{\mathcal{U}}(T,Z)$ of $\mathcal{U}$ with respect to $T$ as follows:
\begin{equation*}
h^{BS}_{\mathcal{U}}(T,Z)=\inf\{s:m_{T,\mathcal{U}}(Z,s)=0\}=\sup\{s:m_{T,\mathcal{U}}(Z,s)=\infty\},
\end{equation*}
and \textit{Bowen topological slow entropy} of $Z$ as follows:
\begin{equation*}
h^{BS}_{top}(T,Z)=\sup_{\mathcal{U}\in \mathcal{C}_X^o}h^{BS}_{\mathcal{U}}(T,Z).
\end{equation*}
\begin{prop}
\begin{equation*}
h_{top}^{BS}(T,Z)=\lim_{\delta\rightarrow0}h_{top}^S(T,Z,\delta).
\end{equation*}
\end{prop}
\begin{proof}
It can be similarly proved using techniques in \cite{PE}.
\end{proof}
\subsection{Definition of topological slow entropy using open covers.}
Let $(X,T)$ be a TDS and
$\mathcal{U}\in C_X^o$ a finite open cover of $X$. Set $N(\mathcal{U},Z)$ to be the minimal cardinality of
sub-families $\mathcal{V}\subset\mathcal{U}$ with $\bigcup \mathcal{V}\supset Z$, where $\bigcup \mathcal{V}=\bigcup_{V\in \mathcal{V}}V$.
And we write $N(\mathcal{U},\emptyset)=0$. Obviously, $N(\mathcal{U},Z)=N(T^{-1}\mathcal{U},Z)$. Let
\begin{equation*}
h_{\mathcal{U}}^S(T,Z)=\limsup_{n\rightarrow \infty}\frac{1}{\log n}\log N(\mathcal{U}_0^{n-1},Z).
\end{equation*}
$h_{\mathcal{U}}^S(T,Z)$ increases with respect to refinement of $\mathcal{U}$. We define the topological slow entropy of $Z$ by
\begin{equation*}
h^S(T,Z)=\sup_{\mathcal{U}\in C_X^o} h_{\mathcal{U}}^S(T,Z).
\end{equation*}
\noindent{\textbf{Remark}}: From the definition of topological slow entropy and topological entropy using open covers, there exists a relation:
\begin{equation*}
\frac{1}{n}\log N(\mathcal{U}_0^{n-1},Z)=\frac{1}{\log n}\log N(\mathcal{U}_0^{n-1},Z)\cdot \frac{\log n}{n}.
\end{equation*}
Notice that $\frac{\log n}{n}\rightarrow 0$ as $n\rightarrow\infty$. Therefore, if $h_{top}(T,Z)>0$, then $h^S(T,Z)=+\infty$; if $h^S(T,Z)<+\infty$, then
$h_{top}(T,Z)=0$.
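For a concrete illustration of the scales that the two quantities separate: if the covering numbers grow polynomially, say $N(\mathcal{U}_0^{n-1},Z)\asymp n^s$ for some $s>0$, then
\begin{equation*}
\lim_{n\rightarrow\infty}\frac{1}{\log n}\log N(\mathcal{U}_0^{n-1},Z)=s, \qquad \lim_{n\rightarrow\infty}\frac{1}{n}\log N(\mathcal{U}_0^{n-1},Z)=0,
\end{equation*}
so the slow entropy distinguishes between systems of polynomial complexity, all of which have zero topological entropy.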
\begin{prop}
\begin{equation*}
h_{\mathcal{U}}^{BS}(T,Z)\leq h^S_{\mathcal{U}}(T,Z),\text{and then}\ \ h_{top}^{S}(T,Z)\leq h^S(T,Z).
\end{equation*}
\end{prop}
\begin{proof}
We only prove~$h_{\mathcal{U}}^{BS}(T,Z)\leq h^S_{\mathcal{U}}(T,Z)$.
Suppose $$\mathcal{A}_n\subseteq\{U_{i_0}\cap T^{-1}U_{i_1}\cap\cdots\cap T^{-n+1}U_{i_{n-1}}:U_{i_{k}}\in \mathcal{U}\}$$
is a subfamily of minimal cardinality such that $\bigcup\mathcal{A}_n\supseteq Z$, so that $\sharp\mathcal{A}_n=N(\mathcal{U}_0^{n-1},Z)$. Obviously, for every $A\in \mathcal{A}_n$ we have $n_{T,\mathcal{U}}(A)\geq n$, and for $s>0$ we get
\begin{equation*}
m_{T,\mathcal{U}}(Z,s,n)\leq \sum_{A\in \mathcal{A}_n}\bigg(\frac{1}{n_{T,\mathcal{U}}(A)}\bigg)^s\leq \sum_{A\in \mathcal{A}_n}\bigg(\frac{1}{n}\bigg)^s=N(\mathcal{U}_0^{n-1},Z)\bigg(\frac{1}{n}\bigg)^s.
\end{equation*}
So
\begin{eqnarray*}
m_{T,\mathcal{U}}(Z,s)&\leq&
\limsup_{n\rightarrow\infty}N(\mathcal{U}_0^{n-1},Z)\bigg(\frac{1}{n}\bigg)^s\\
&=& \limsup_{n\rightarrow\infty}\exp(-\log n(s-\frac{1}{\log n}\log N(\mathcal{U}_0^{n-1},Z))).
\end{eqnarray*}
And if $s>h^S_{\mathcal{U}}(T,Z)=\limsup_{n\rightarrow\infty}\frac{1}{\log n}\log N(\mathcal{U}_0^{n-1},Z)$, then $m_{T,\mathcal{U}}(Z,s)=0$. Therefore,
\begin{equation*}
h_{\mathcal{U}}^{BS}(T,Z)\leq h^S_{\mathcal{U}}(T,Z).
\end{equation*}
\end{proof}
\section{Examples in a symbolic dynamical system.}
We take examples with a continuous $L=\mathbb{Z}^d_+$-action to explain the new
topological slow entropy in a symbolic dynamical system with a special
metric.
Suppose a finite alphabet
$A=\{1,2,\cdots,k\}$, where $k\geq2$, and let $$A^{L}=\{1,2,\cdots,k\}^{L}=\{(\omega_h)_{h\in L}:\omega_h\in A,h\in L\}.$$
Suppose $\omega,\omega'\in A^L$, let $$n(\omega,\omega')=\min\{k:\omega_h=\omega'_h (h\in H_{k-1}), \omega_h\neq \omega'_h~(\text{for some}~h\in H_k\backslash H_{k-1})\}.$$
Endow $A^{L}$ with the metric
$d(\omega,\omega')=\frac{1}{\lambda_{n(\omega,\omega')}}$ for $\omega,
\omega'\in A^L$, then $d(\bullet,\bullet)$ is a compatible metric for $A^L$.
$A^{L}$ is the one-sided
full shift on $k$ symbols; i.e., for $h\in L$, we define the shift action $\sigma^h:A^L\rightarrow A^L$ as
\begin{equation*}
(\sigma^h(\omega))_k=\omega_{h+k},k\in L
\end{equation*}
and then $\mathcal{T}=\{\sigma^h\}_{h\in L}$ is a continuous $L$-action on $A^L$.
\begin{prop}
For any subset $Z\subseteq A^{{L}}$,
$h^S_{top}(\mathcal{\sigma},Z)=\text{dim}_H Z$, where $\text{dim}_H Z$ denotes
the Hausdorff dimension of $Z$ in $(A^L,d)$ (see \cite{MA}).
\end{prop}
\begin{proof}
In fact, for $m\in \mathbb{N},\omega\in A^L$ we
set a cylinder set as $$C_m(\omega)=\{\omega'\in A^L: \omega_h=\omega'_h,h\in H_m\}.$$
It is not hard to see that the $s$-Hausdorff outer measure of $Z$ can be written as
$$H(Z,s)=\lim_{\delta\rightarrow0}\inf_{\mathcal{G}}\sum_i\text{diam}(C_{m_i}(\omega_i))^s,$$
where the infimum is taken over all finite or countable families $\mathcal{G}:=\{C_{m_i}(\omega_i)\}$ which
cover $Z$ with $\sup_i\text{diam}(C_{m_i}(\omega_i))<\delta$. Let $\epsilon>0$ be sufficiently small and
choose $n\in \mathbb{N}$ such that $\frac{1}{\lambda_{n+1}}\leq\epsilon<\frac{1}{\lambda_{n}}$. So it also
follows from the choice of the metric $d(\bullet,\bullet)$ that
$B_k(\omega,\epsilon)=C_{k+n-1}(\omega)$ for all $k\in \mathbb{Z}_+$ and $\text{diam}(C_{j}(\omega))=\lambda_{j+1}^{-1}$. Comparing the two definitions, we get the result.
\end{proof}
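As a quick illustration (for $d=1$ and $L=\mathbb{Z}_+$, so that $\lambda_n=n$): the full shift $A^{\mathbb{Z}_+}$ is covered by exactly $k^{m}$ cylinders $C_m(\omega)$, each of diameter $\lambda_{m+1}^{-1}=(m+1)^{-1}$, and for every $s>0$
\begin{equation*}
k^{m}(m+1)^{-s}\rightarrow+\infty \quad\text{as } m\rightarrow\infty;
\end{equation*}
a mass distribution argument with the uniform Bernoulli measure then shows $\text{dim}_H A^{\mathbb{Z}_+}=+\infty$, hence $h^S_{top}(\sigma,A^{\mathbb{Z}_+})=+\infty$. This agrees with Proposition 2.2(ii), since the full shift has positive Bowen entropy $\log k$.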
\begin{prop}
For any real number $0\leq t <+\infty$, there exists a compact subset $E\subset A^{L}$, such that
$h^S_{top}(\mathcal{\sigma},E)=t$.
\end{prop}
\begin{proof}
From Mattila \cite{MA}, we know the following fact: if the Hausdorff dimension of a set $A$ is $s$, then for any $0<t<s$ there exists a compact subset $E\subset A$ such that $\text{dim}_H E=t$. From Proposition 3.1, we get $h^S_{top}(\sigma,E)=\text{dim}_H E$, which means $h^S_{top}(\sigma,E)=t$.
\end{proof}
\begin{prop}
In the symbolic system $(A^{\mathbb{Z}_+},\sigma,d)$, for any non-empty subset $Z\subset A^{\mathbb{Z}_+}$, $\overline{\text{dim}}_B Z= h^S(\sigma,Z)$, where $\overline{\text{dim}}_B Z$ denotes the upper box dimension of $Z$ (see \cite{MA}).
\end{prop}
\begin{proof}
Suppose $Z\subset A^{\mathbb{Z}_+}$. For $0<\epsilon<+\infty$, let $N(Z,\epsilon)$ be the smallest number of $\epsilon$-balls needed to cover $Z$:
\begin{equation*}
N(Z,\epsilon)=\min\{k:Z\subset \bigcup^k_{i=1}B(\omega_i,\epsilon) \text{~for some}~\omega_i\in A^{\mathbb{Z}_+}\}.
\end{equation*}
Let $\mathcal{U}=\{A_1,\cdots,A_k\}$ be the generator given, for $\omega=(x_0,x_1,\cdots)$, by $$A_j=\{\omega: x_0=j\},$$ where every $A_j$ is a clopen set for $j=1,\cdots,k$.
Suppose $\frac{1}{m+1}\leq \epsilon<\frac{1}{m}$ for some $m>1$. For any $\omega_i\in A^{\mathbb{Z}_+}$, the Bowen ball $B_n(\omega_i,\epsilon)$ in the metric $d(\bullet,\bullet)$ is the following cylinder set:
\begin{equation*}
C_{n+m-1}(\omega_i)= B_n(\omega_i,\epsilon), \text{and}~B_n(\omega_i,\epsilon)\subset B(\omega_i,\epsilon).
\end{equation*}
Noticing $C_{n+m-1}(\omega_i)=\bigcap_{j=0}^{n+m-1}\sigma^{-j}A_{i_j}\in \mathcal{U}_{0}^{n+m-1}$, the union of elements of $\mathcal{U}_{0}^{n+m-1}$ covers $Z$, then
\begin{equation}\label{3.1}
N(Z,\epsilon)\leq N(\mathcal{U}_{0}^{n+m-1},Z).
\end{equation}
Similarly, $B(\omega_i,\epsilon)\subset C_{m-1}(\omega_i)=\bigcap_{j=0}^{m-1}\sigma^{-j}A_{i_j}\in \mathcal{U}_{0}^{m-1}$, and then
\begin{equation}\label{3.2}
N(\mathcal{U}_{0}^{m-1},Z)\leq N(Z,\epsilon).
\end{equation}
Combining (3.1) and (3.2), we therefore obtain
\begin{equation*}
\frac{\log (m-1)}{\log (m+1)} \cdot\frac{\log N(\mathcal{U}_{0}^{m-1},Z)}{\log (m-1)}\leq\frac{\log N(Z,\epsilon)}{-\log \epsilon}\leq\frac{\log N(\mathcal{U}_{0}^{n+m-1},Z)}{\log (n+m-1)}\cdot\frac{\log (n+m-1)}{\log m}.
\end{equation*}
Letting $n,m\rightarrow +\infty$, $\epsilon\rightarrow 0$ and taking upper limits, we get
\begin{equation*}
\overline{\text{dim}}_B Z= h^S(\sigma,Z).
\end{equation*}
\end{proof}
Let $(X,d)$ be a compact metric space and $Z\subset X$. For $\epsilon>0$, a family $\{U_\alpha\}_{\alpha\geq1}$ is called an $\epsilon$-cover of $Z$ if $\mid U_\alpha \mid\leq\epsilon$ for every $\alpha$ and $\bigcup_\alpha U_\alpha\supset Z$, where $\mid U \mid$ denotes the diameter of $U$.
\begin{lem}
Let $(X,d)$ and $(Y,\rho)$ be compact metric spaces and $f:X\rightarrow Y$ a map. If there exists $\delta_0>0$ such that for every $0<\delta\leq\delta_0$ and every $B\subset X$ with $\mid B\mid \leq \delta$, the restriction $f\mid_B $ is a bi-Lipschitz map, i.e.
\begin{equation*}
c_1 d(x,y)\leq \rho (f(x),f(y))\leq c_2 d(x,y) \quad \text{for all } x,y\in B, \text{ where } c_1,c_2>0.
\end{equation*}
Then for any $Z\subset X$, we have $dim_H(f(Z))=dim_H(Z)$.
\end{lem}
\begin{proof}
Suppose $\{U_i\}_{i\geq1}$ is a $\delta$-cover of $Z$. Since $f\mid_{U_i}$ is a bi-Lipschitz map for arbitrary $i$,
\begin{equation*}
\mid f(Z\cap U_i)\mid\leq c_2 \mid Z\cap U_i\mid\leq c_2\delta,
\end{equation*}
then $\{f(Z\cap U_i)\}$ is a $\epsilon:=c_2\delta-$cover of $f(Z)$. For $\forall s>0$,
\begin{equation*}
\sum_i\mid f(Z\cap U_i)\mid^s\leq c_2^s\sum_i\mid Z\cap U_i\mid^s\leq c_2^s\sum_i\mid U_i\mid^s,
\end{equation*}
which implies that
\begin{equation*}
H^s_\epsilon(f(Z))\leq H^s_\delta(Z).
\end{equation*}
Taking $\delta,\epsilon\rightarrow0$, we get $H^s(f(Z))\leq H^s(Z)$.
If $s>\text{dim}_H Z$, then $H^s(f(Z))\leq H^s(Z)=0$, that is $\text{dim}_H f(Z)\leq s$ for all $s>\text{dim}_H Z$, so $\text{dim}_H f(Z)\leq \text{dim}_H Z$.
Noticing that the bi-Lipschitz mapping $f\mid_{B}$ has an inverse mapping $f^{-1}\mid_B:f(B)\rightarrow B$ and using the above result, we have $\text{dim}_H f(Z)=\text{dim}_H Z$.
\end{proof}
\begin{prop}
In the symbolic system $(A^{\mathbb{Z}_+},\sigma,d)$, for arbitrary $t>0$, there exists an $F_\sigma$ subset $E$ with $\sigma E \subset E$ such that $h^S_{top}(\sigma,E)= t$.
\end{prop}
\begin{proof}
From Proposition 3.2 and Lemma 3.1, for any $t>0$ there is a compact subset $Z\subset A^{\mathbb{Z}_+}$ with $\textmd{dim}_H \sigma Z=\textmd{dim}_H Z=h^S_{top}(\sigma,Z)= t$. We set $E=\bigcup^\infty_{i=0}\sigma^iZ$; then $E$ is an $F_\sigma$ subset, $\sigma E \subset E$, and $\textmd{dim}_H E=\textmd{dim}_H Z$, which implies $h^S_{top}(\sigma,E)=t$.
\end{proof}
\section{The variational principle for slow entropies.}
We firstly introduce the measure-theoretic slow entropy. The notion of weighted topological slow entropy is presented, which is important to prove the variational principle.
\subsection{Definition of measure-theoretic slow entropy}
Brin and Katok \cite{BK} defined the local, or measure-theoretic, entropy for $\mathbb{Z}_+$-actions as follows:
Suppose $\mu\in\mathcal{M}(X)$, define
\begin{equation*}
\underline{h}_\mu(T,x)=\lim\limits_{\epsilon\rightarrow0}\liminf\limits_{n\rightarrow\infty}-\frac{1}{n}\log \mu(B_n(x,\epsilon));\qquad
\overline{h}_\mu(T,x)=\lim\limits_{\epsilon\rightarrow0}\limsup\limits_{n\rightarrow\infty}-\frac{1}{n}\log \mu(B_n(x,\epsilon)).
\end{equation*}
\begin{equation*}
\underline{h}_\mu(T)=\int\underline{h}_\mu(T,x)d\mu(x);\overline{h}_\mu(T)=\int\overline{h}_\mu(T,x)d\mu(x).
\end{equation*}
They also proved the proposition:
For $\mu\in\mathcal{M}(X,T)$ and $\mu$-a.e. $x$, $\underline{h}_\mu(T,x)=\overline{h}_\mu(T,x)$, and
$\int\underline{h}_\mu(T,x)d\mu(x)=h_{\mu}(T).$
Hence for $\mu\in\mathcal{M}(X,T)$, $\underline{h}_\mu(T)=\overline{h}_\mu(T)=h_{\mu}(T).$
Now, we give a modification of the measure-theoretic lower entropy for higher-dimensional $\mathbb{Z}^d$-actions.
Suppose $\mu\in\mathcal{M}(X)$, define
\begin{equation*}
\underline{h}_\mu^S(\mathcal{T},x)=\lim\limits_{\epsilon\rightarrow0}\liminf\limits_{n\rightarrow\infty}-\frac{1}{\log \lambda_n}\log \mu(B_n(x,\epsilon));
\end{equation*}
\begin{equation*}
\underline{h}_\mu^S(\mathcal{T})=\int\underline{h}_\mu^S(\mathcal{T},x)d\mu(x).
\end{equation*}
We call $\underline{h}_\mu^S(\mathcal{T},x)$ the \textit{measure-theoretic slow entropy} of the point $x$ with respect to $\mathcal{T}$,
and $\underline{h}_\mu^S(\mathcal{T})$ the \textit{measure-theoretic slow entropy} of $X$ with respect to $\mathcal{T}$.
\noindent{\textbf{Remark}}: From the definition of measure-theoretic slow entropy, it is easy to know
$$-\frac{1}{\lambda_n}\log \mu(B_n(x,\epsilon))=-\frac{1}{\log \lambda_n}\log \mu(B_n(x,\epsilon))\cdot \frac{\log \lambda_n}{\lambda_n},$$
because $\frac{\log \lambda_n}{\lambda_n}\rightarrow0$ as $n\rightarrow\infty$. Hence if $\underline{h}_\mu^S(\mathcal{T})$ is finite, then $h_{\mu}(\mathcal{T})=0$;
and if $h_{\mu}(\mathcal{T})>0$, then $\underline{h}_\mu^S(\mathcal{T})$ must be infinite.
\subsection{Weighted topological slow entropy.}
For any positive function $f:X\rightarrow [0,\infty), N\in\mathbb{N},\epsilon>0$, we define
\begin{equation*}
W(f,s,N,\epsilon)=\inf\sum_{i}c_i\bigg(\frac{1}{\lambda_{n_i}}\bigg)^s,
\end{equation*}
where the infimum is taken over all finite or countable families $\{(B_{n_i}(x_i,\epsilon),c_i)\}$ such that
$x_i\in X, n_i\geq N,0<c_i<\infty$ and $\sum_ic_i\chi_{B_i}\geq f$.
For $Z\subset X,f=\chi_{Z}$, set $W(Z,s,N,\epsilon)=W(\chi_Z,s,N,\epsilon)$. Clearly, the function $W(Z,s,N,\epsilon)$
does not decrease as $N$ increases and $\epsilon$ decreases. So the following limits exist:
\begin{equation*}
W(Z,s,\epsilon)=\lim\limits_{N\rightarrow\infty}W(Z,s,N,\epsilon), W(Z,s)=\lim\limits_{\epsilon\rightarrow0}W(Z,s,\epsilon).
\end{equation*}
It's not difficult to prove that there exists a critical value of parameter $s$, which we will denote by $h_{top}^W(\mathcal{T},Z)$, such that
\begin{equation*}
W(Z,s)=\left\{
\begin{array}{ll}
0, & s>h_{top}^W(\mathcal{T},Z) ;\\
\infty, & s<h_{top}^W(\mathcal{T},Z).
\end{array}
\right.
\end{equation*}
We call $h_{top}^W(\mathcal{T},Z)$ the \textit{weighted topological slow entropy} of $Z$ with respect to $\mathcal{T}$.
\subsection{Equivalence of $h_{top}^S$ and $h_{top}^W$.}
\begin{prop}
\noindent{\rm (i)} For any $s\geq0$, $N\in\mathbb{N}$ and $\epsilon>0$, both $M(\cdot,s,N,\epsilon)$ and $W(\cdot,s,N,\epsilon)$
are outer measures on $X$.

\noindent{\rm (ii)} For any $s\geq0$, both $M(\cdot,s)$ and $W(\cdot,s)$ are metric outer measures on $X$.
\end{prop}
\begin{prop}
Suppose $Z\subset X$, for any $s\geq0,\epsilon,\delta>0$, we have $$M(Z,s+\delta,N,6\epsilon)\leq W(Z,s,N,\epsilon)\leq M(Z,s,N,\epsilon)$$
for large enough $N$. And then $h_{top}^S(\mathcal{T},Z)=h_{top}^W(\mathcal{T},Z)$.
\end{prop}
\begin{lem}\cite{MA}
Let $(X,d)$ be a compact metric space and $\mathcal{B}=\{B(x_i,r_i)\}_{i\in\mathcal{I}}$ be a family open of (or closed) balls
in $X$. Then there exists a finite or countable subfamily $\mathcal{B}^{'}=\{B(x_i,r_i)\}_{i\in\mathcal{I}^{'}}$ of pairwise disjoint
balls in $\mathcal{B}$ such that $$\bigcup_{B\in\mathcal{B}}B\subseteq\bigcup_{i\in\mathcal{I}^{'}}B(x_i,5r_i).$$
\end{lem}
\noindent{\it Proof of Proposition 4.2.} We follow the argument in \cite{FH}, adapted to $L$-actions. Let $Z\subset X$, $s\geq
0$ and $\varepsilon,\delta>0$. Setting $f=\chi_Z$ and $c_i\equiv 1$ in the definition of the weighted topological slow entropy,
we have $W(Z,s,N,\epsilon)\leq M(Z,s,N,\epsilon)$ for all $N\in \mathbb{N}$.
Next, we prove $M(Z,s+\delta,N,\epsilon)\leq W(Z,s,N,\epsilon)$ for large enough $N$.
Let $\{(B_{n_i}(x_i,\epsilon),c_i)\}_{i\in \mathcal{I}}$ be a family such that $\mathcal{I}\subseteq \mathbb{N}$,
$x_i\in X$, $0<c_i<\infty$, $n_i\geq N$, and
\begin{equation}\label{2.3}
\sum_i c_i\chi_{B_i}\geq\chi_Z,
\end{equation}
here $B_i:=B_{n_i}(x_i,\epsilon)$. We claim that
\begin{equation}\label{2.4}
M(Z,s+\delta,N,6\epsilon)\leq\sum_{i\in\mathcal{I}}c_i\bigg(\frac{1}{\lambda_{n_i}}\bigg)^s
\end{equation}
which implies $M(Z,s+\delta,N,6\epsilon)\leq W(Z,s,N,\epsilon)$.
We denote $$\mathcal{I}_n=\{i\in \mathcal{I}:n_i=n\},$$
and $$\mathcal{I}_{n,k}=\{i\in \mathcal{I}_n:i\leq k\}$$ for $n\geq N,k\in\mathbb{N}.$
We write $B_i:=B_{n_i}(x_i,\epsilon),5B_i:=B_{n_i}(x_i,5\epsilon)$ for $i\in\mathcal{I}$.
Obviously we may assume $B_i\neq B_j$ for $i\neq j$. For $t>0$, set
\begin{equation*}
Z_{n,t}=\{x\in Z:\sum _{i\in \mathcal{I}_n}c_i\chi_{B_i}(x)>t\}\ \
\end{equation*}
and
\begin{equation*}
Z_{n,t,k}=\{x\in Z:\sum _{i\in \mathcal{I}_{n,k}}c_i\chi_{B_i}(x)>t\}.
\end{equation*}
We divide the proof of (4.2) into the following three steps.
Step 1. For each $n\geq N$, $k\in \mathbb{N}$ and $t>0$, there exists a finite set $\mathcal{J}_{n,k,t}\subseteq\mathcal{I}_{n,k}$
such that the balls $B_i$, $i\in\mathcal{J}_{n,k,t}$, are pairwise disjoint, $Z_{n,t,k}\subseteq\cup_{i\in\mathcal{J}_{n,k,t}}5B_i$,
and
\begin{equation*}
\#(\mathcal{J}_{n,k,t})\bigg(\frac{1}{\lambda_n}\bigg)^s\leq\frac{1}{t}\sum_{i\in \mathcal{I}_{n,k}}c_i\bigg(\frac{1}{\lambda_n}\bigg)^s.
\end{equation*}
We will use the method of Federer \cite{FE} (see also Mattila \cite{MA}), adapted to $L$-actions. Since $\mathcal{I}_{n,k}$ is
finite, by approximating the $c_i$'s from above, we may assume that each $c_i$ is a positive rational, and then by multiplying with
a common denominator we may assume that each $c_i$ is a positive integer. Let $m$ be the least integer with $m\geq t$.
Denote $\mathcal{B}=\{B_i,i\in\mathcal{I}_{n,k}\}$, and define ${u:\mathcal{B}\rightarrow\mathbb{Z}}$, by $u(B_i)=c_i$.
Since $B_i\neq B_j$ for $i\neq j$, $u$ is well defined. We define by induction integer-valued functions $v_0,v_1,\cdots,v_m$
on $\mathcal{B}$ and sub-families $\mathcal{B}_1,\mathcal{B}_{2},\ldots,\mathcal{B}_m$ of $\mathcal{B}$,
starting with $v_0=u$. Using Lemma 4.1 repeatedly, we define inductively for $j=1,\cdots,m$ subfamilies $\mathcal{B}_{j}$ of pairwise disjoint balls in
$\mathcal{B}$ such that
\begin{equation*}
\mathcal{B}_j\subset\{B\in\mathcal{B}:v_{j-1}(B)\geq1\},
\end{equation*}
\begin{equation*}
Z_{n,t,k}\subseteq\cup_{B\in\mathcal{B}_j}5B
\end{equation*}
and the functions $v_j$ such that
$$v_j(B)=\left\{\begin{array}{ll}
v_{j-1}(B)-1,& \text{for}\ \ B\in\mathcal{B}_j ; \\
v_{j-1}(B),&
\text{for}\ \ B\in{\mathcal{B}\backslash\mathcal{B}_j\text{.}}\end{array}\right .$$
This is possible because, for $j<m$, $$Z_{n,t,k}\subseteq\{x:\sum_{B\in\mathcal{B}:B\ni x}v_j(B)\geq m-j\}, $$
whence every $x\in Z_{n,t,k}$ belongs to some ball $B\in \mathcal{B}$ with $v_j(B)\geq1$. Thus
\begin{eqnarray*}
\sum_{j=1}^{m}{\#(\mathcal{B}_{j})\bigg(\frac{1}{\lambda_n}\bigg)^{s}}&=&\sum_{j=1}^{m}{\sum_{B\in{\mathcal{B}}_{j}}(v_{j-1}(B)-v_{j}(B))\bigg(\frac{1}{\lambda_n}\bigg)^{s}}\\
&\leq & \sum_{B\in \mathcal{B}}{\sum_{j=1}^{m}(v_{j-1}(B)-v_{j}(B))\bigg(\frac{1}{\lambda_n}\bigg)^{s}} \\
&\leq& \sum_{B\in \mathcal{B}}u(B)\bigg(\frac{1}{\lambda_n}\bigg)^{s}=\sum_{i\in \mathcal{I}_{n,k}}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}.
\end{eqnarray*}
Choose $j_0\in\{1,\cdots,m\}$ so that $\#(\mathcal{B}_{j_0})$ is the smallest. Then
\begin{eqnarray*}
\#(\mathcal{B}_{j_0})\bigg(\frac{1}{\lambda_n}\bigg)^{s}&\leq&\frac{1}{m}\sum_{i\in \mathcal{I}_{n,k}}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}\leq \frac{1}{t}\sum_{i\in \mathcal{I}_{n,k}}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}.
\end{eqnarray*}
So $\mathcal{J}_{n,k,t}=\{i\in \mathcal{I}:B_i\in \mathcal{B}_{j_0}\}$ is desired.
Step 2. For each $n\in \mathbb{N}$ and $t>0$, we have
\begin{equation}\label{2.5}
m(Z_{n,t},s+\delta,N,6\epsilon)\leq\frac{1}{\lambda_n^{\delta}t}\sum_{i\in \mathcal{I}_n}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}.
\end{equation}
Assume $Z_{n,t}\neq\emptyset$, otherwise (4.3) is obvious. Since $Z_{n,t,k}\uparrow Z_{n,t}$ as $k\rightarrow\infty$, we have $Z_{n,t,k}\neq\emptyset$ for large
enough $k$. Let $\mathcal{J}_{n,k,t}$ be the sets constructed in Step 1. Then $\mathcal{J}_{n,k,t}\neq\emptyset$ for large
enough $k$. Set $E_{n,k,t}=\{x_i:i\in\mathcal{J}_{n,k,t}\}$. Note that the family of all non-empty compact subsets of $X$ is
compact with respect to the Hausdorff distance (Federer \cite{FE}, 2.10.21). It follows that there is a subsequence $(k_j)$ of natural
as $j\rightarrow\infty$. Since any two points in $E_{n,k,t}$ have a distance (with respect to $d_n$) not less than $\epsilon$,
so do the points in $E_{n,t}$. Thus $E_{n,t}$ is a finite set, moreover, $\#(E_{n,k_j,t})=\#(E_{n,t})$ when $j$ is large enough.
Hence $$\bigcup_{x\in E_{n,t}}B_n(x,5.5\epsilon)\supseteq
\bigcup_{x\in
E_{n,k_j,t}}B_n(x,5\epsilon)=\bigcup_{i\in\mathcal{J}_{n,k_j,t}}5B_i\supseteq
Z_{n,k_j,t},$$
when $j$ is large enough, and thus $\bigcup_{x\in E_{n,t}}B_n(x,6\epsilon)\supseteq Z_{n,t}$. Moreover, since
$\#{(E_{n,k_j,t})}=\#(E_{n,t})$ when $j$ is large enough, we have
\begin{equation*}
\#(E_{n,t})\bigg(\frac{1}{\lambda_n}\bigg)^{s}\leq \frac{1}{t}\sum_{i\in
\mathcal{I}_n}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}.
\end{equation*}
Therefore
\begin{equation*}
M(Z_{n,t},s+\delta,N,6\epsilon)\leq\#(E_{n,t})\bigg(\frac{1}{\lambda_n}\bigg)^{s+\delta}\leq\frac{1}{{\lambda_n}^{\delta}t}\sum_{i\in \mathcal{I}_n}c_i\bigg(\frac{1}{\lambda_n}\bigg)^{s}.
\end{equation*}
Step 3. For any $t\in(0,1)$, we have
\begin{equation*}
M(Z,s+\delta,N,6\epsilon)\leq\frac{1}{t}\sum_{i\in \mathcal{I}}c_i\bigg(\frac{1}{\lambda_{n_i}}\bigg)^{s},
\end{equation*}
which implies (4.2). In fact, fix $t\in(0,1)$. Then
$Z\subset\bigcup_{n=N}^\infty Z_{n,{\lambda_n}^{-\delta}t}$. Thus
by Proposition 4.1(i) and (4.3), we get
\begin{equation*}\begin{split}
M(Z,s+\delta,N,6\epsilon)&\leq\sum_{n=N}^{\infty}M(Z_{n,{\lambda_n}^{-\delta}t},s+\delta,N,6\epsilon)\\&
\leq\sum_{n=N}^{\infty}\frac{1}{t}{\sum_{i\in\mathcal{I}_n}c_i\bigg(\frac{1}{\lambda_n}\bigg)^s}=\frac{1}{t}\sum_{i\in\mathcal{I}}c_i\bigg(\frac{1}{\lambda_{n_i}}\bigg)^s.
\end{split}\end{equation*}
This completes the proof of the Proposition.
We now give a Frostman-type lemma in dynamical systems, which is important for our proof.
\begin{lem}
Let $K$ be a non-empty compact subset of $X$, and let $s\geq0$, $N\in\mathbb{N}$, $\epsilon>0$. Set
$c:=W(K,s,N,\epsilon)>0.$ Then there exists a Borel probability measure $\mu$ on $X$ such that
$\mu(K)=1$ and $$\mu(B_n(x,\epsilon))\leq \frac{1}{c}\bigg(\frac{1}{\lambda_n}\bigg)^s \quad \text{for all } x\in X \text{ and } n\geq N.$$
\end{lem}
\begin{proof} Clearly $c<\infty$. We define a function $p$ on the space $C(X)$
of continuous real-valued functions on $X$ by
\begin{equation*}
p(f)=\frac{1}{c}\text{W}(\chi_K\cdot f,s,N,\epsilon).
\end{equation*}
Let $\mathbf{1}\in C(X)$ denote the constant function $\mathbf{1}(x)\equiv 1$. It is easy to verify that
(1) $p(tf)=tp(f)$ for $f\in C(X)$ and $t\geq0$,

(2) $p(f+g)\leq p(f)+p(g)$ for $f,g\in C(X)$,

(3) $p(\mathbf{1})=1$, $0\leq p(f)\leq \parallel f\parallel_{\infty}$ for $f\in C(X)$, and $p(g)=0$ for $g\in C(X)$ with $g\leq0.$
By the Hahn-Banach Theorem, we can extend the linear functional $t\rightarrow tp(\mathbf{1})$, $t\in \mathbb{R}$, from the subspace
of constant functions to a linear functional $L:C(X)\rightarrow \mathbb{R}$ satisfying
\begin{equation*}
L(\mathbf{1})=p(\mathbf{1})=1 \ \ \text{and}\ \ -p(-f)\leq L(f)\leq p(f) \ \ \text{for all } f\in C(X).
\end{equation*}
If $f\in C(X)$ with $f\geq 0$, then $p(-f)=0$ and so $L(f)\geq0$. Hence we can use the Riesz representation Theorem
to find a Borel probability measure $\mu$ on $X$ such that $L(f)=\int f d\mu$ for $f\in C(X)$.
Next, we prove $\mu(K)=1$. For any compact set $E\subset X\backslash K$, by Urysohn Lemma there exists $f\in C(X)$
such that $0\leq f\leq1,f(x)=1$ for $x\in E$ and $f(x)=0$ for $x\in K$. Then $f\cdot\chi_K=0$ and thus $p(f)=0$.
Hence $\mu(E)\leq L(f)\leq p(f)=0$. This shows $\mu(X\backslash K)=0,$ that is $\mu(K)=1$.
Finally, we prove $\mu(B_n(x,\epsilon))\leq\frac{1}{c}(\frac{1}{\lambda_n})^s$ for all $x\in X$ and $n\geq N$.
In fact, for any compact set $E\subset B_n(x,\epsilon)$, by Urysohn lemma again, there is $f\in C(X)$, such that
$0\leq f\leq1, f(y)=1$ for $y\in E$ and $f(y)=0$ for $y\in X\backslash B_n(x,\epsilon)$. Then $\mu(E)\leq L(f)\leq p(f)$.
Since $\chi_K\cdot f\leq \chi_{B_n(x,\epsilon)}$, and $n\geq N$, we get $W(\chi_K\cdot f,s,N,\epsilon)\leq (\frac{1}{\lambda_n})^s$
and hence $p(f)\leq \frac{1}{c}(\frac{1}{\lambda_n})^s$. Therefore, we have $\mu(E)\leq\frac{1}{c}(\frac{1}{\lambda_n})^s$. It follows that
\begin{equation*}
\mu(B_n(x,\epsilon))=\sup\{\mu(E): E\subset B_n(x,\epsilon), \ \ \text{is compact}\}\leq\frac{1}{c}\bigg(\frac{1}{\lambda_n}\bigg)^s.
\end{equation*}
This completes the proof of the Lemma.
\end{proof}
\noindent{\textbf{Remark}:} There is a related slow entropy distribution principle. Using techniques in \cite{MW}, we have:
for any Borel set $E\subset X$ and Borel probability measure $\mu$ on $E$, if $\underline{h}_{\mu}^S(\mathcal{T},x)\leq s$ for all $x\in E$,
then $h^S_{top}(\mathcal{T},E)\leq s$; if $\underline{h}_{\mu}^S(\mathcal{T},x)\geq s$ for all $x\in E$ and $\mu(E)>0$, then $h^S_{top}(\mathcal{T},E)\geq s$.
\begin{thm}
Suppose $(X,\mathcal{T})$ be a TDS, $K\subset X$ be any non-empty compact subset. Then
\begin{equation*}
h^S_{top}(\mathcal{T},K)=\sup\{\underline{h}_\mu^S(\mathcal{T}):\mu\in \mathcal{M}(X),\mu(K)=1\}.
\end{equation*}
\end{thm}
\begin{proof} Firstly, we prove $h^S_{top}(\mathcal{T},K)\geq\underline{h}_\mu^S(\mathcal{T})$, for any $\mu\in \mathcal{M}(X),\mu(K)=1$.
We set
\begin{equation*}
\underline{h}_\mu^S(\mathcal{T},x,\epsilon)=\liminf\limits_{n\rightarrow\infty}-\frac{1}{\log \lambda_n}\log \mu(B_n(x,\epsilon))
\end{equation*}
for $x\in X,n\in\mathbb{N},\epsilon>0$. It's easy to see that $\underline{h}_\mu^S(\mathcal{T},x,\epsilon)$ is nonnegative and
increases as $\epsilon$ decreases. By the monotone convergence theorem, we get
\begin{equation*}
\lim\limits_{\epsilon\rightarrow0}\int\underline{h}_\mu^S(\mathcal{T},x,\epsilon)d\mu(x)=\int\underline{h}_\mu^S(\mathcal{T},x)d\mu(x)=\underline{h}_\mu^S(\mathcal{T}).
\end{equation*}
Thus, to show $h^S_{top}(\mathcal{T},K)\geq\underline{h}_\mu^S(\mathcal{T})$, it suffices to show $h^S_{top}(\mathcal{T},K)\geq\int\underline{h}_\mu^S(\mathcal{T},x,\epsilon)d\mu(x)$
for any $\epsilon>0$.
Now fix $\epsilon>0$ and $l\in\mathbb{N}$, and set
$u_l=\min\{l,\int\underline{h}_\mu^S(\mathcal{T},x,\epsilon)d\mu(x)-\frac{1}{l}\}$.
Then there exist a Borel set $A_l\subset X$ with $\mu(A_l)>0$ and $N\in \mathbb{N}$
such that
$$\mu(B_n(x,\epsilon))\leq \bigg(\frac{1}{\lambda_n}\bigg)^{u_l}, \forall x\in A_l, n\geq N.\eqno{(4.4)}$$
Let $\{B_{n_i}(x_i,\epsilon/2)\}$ be a finite or countable family such that $x_i\in X$, $n_i\geq N$, and $K\cap A_l\subset \bigcup_iB_{n_i}(x_i,\epsilon/2)$.
We may as well assume that for each $i$, $B_{n_i}(x_i,\epsilon/2)\bigcap(K\cap A_l)\neq\emptyset$, and select $y_i\in B_{n_i}(x_i,\epsilon/2)\bigcap(K\cap A_l)$.
Then by (4.4), we have
\begin{equation*}\begin{split}
\sum_i\bigg(\frac{1}{\lambda_{n_i}}\bigg)^{u_l}&\geq \sum_i\mu(B_{n_i}(y_i,\epsilon))\\&\geq \sum_i\mu(B_{n_i}(x_i,\epsilon/2))
\\&\geq\mu(K\cap A_l)=\mu(A_l)>0.
\end{split}\end{equation*}
So, we get
\begin{equation*}\begin{split}
M(K,u_l,\epsilon/2)&\geq M(K,u_l,N,\epsilon/2)\\&\geq M(K\cap A_l,u_l,N,\epsilon/2)
\\&\geq\mu(A_l)>0.
\end{split}\end{equation*}
Therefore, $h^S_{top}(\mathcal{T},K)\geq u_l$. Letting $l\rightarrow\infty$, we get $h^S_{top}(\mathcal{T},K)\geq\int\underline{h}_\mu^S(\mathcal{T},x,\epsilon)d\mu(x)$.
Thus $h^S_{top}(\mathcal{T},K)\geq\underline{h}_\mu^S(\mathcal{T})$.
We next prove $h^S_{top}(\mathcal{T},K)\leq\sup\{\underline{h}_\mu^S(\mathcal{T}):\mu\in
\mathcal{M}(X),\mu(K)=1\}$. We may as well assume
$h^S_{top}(\mathcal{T},K)>0$, otherwise the conclusion is obvious. By
Proposition 4.2, $h^S_{top}(\mathcal{T},K)=h_{top}^W(\mathcal{T},K)$. Suppose
$0<s<h^W_{top}(\mathcal{T},K)$; then there exist $\epsilon>0$ and $N\in\mathbb{N}$
such that $c=W(K,s,N,\epsilon)>0$. By Lemma 4.2, there exists
$\mu\in\mathcal{M}(X),\mu(K)=1$, such that
$$\mu(B_n(x,\epsilon))\leq \frac{1}{c}\bigg(\frac{1}{\lambda_n}\bigg)^s$$
for any $x\in X,n\geq N$. And then $\underline{h}_\mu^S(\mathcal{T},x)\geq
s$ for each $x\in X$. Therefore,
$\underline{h}_\mu^S(\mathcal{T})=\int\underline{h}_\mu^S(\mathcal{T},x)d\mu(x)\geq
s.$ The proof is completed.
\end{proof}
\begin{cro}
Suppose $(X,T)$ be a TDS. Then
\begin{equation*}
h^S_{top}(T,X)=\sup_{\mu\in \mathcal{M}(X)}\underline{h}_\mu^S(T).
\end{equation*}
\end{cro}
|
1,941,325,220,430 | arxiv | \section{Introduction}
In this paper, we consider the tasks of time series segmentation and
modeling. Formally, suppose that we observe a sequence of $T$
input/output pairs, represented as
\begin{equation}
(x_1,y_1), (x_2, y_2), \ldots, (x_T, y_T)
\end{equation}
for $x_t \in \mathbb{R}^n$ (which can even include functions of past
outputs of the time series to capture scenarios such as
autoregressive models) and $y_t \in \mathbb{R}^p$ (though we can
also consider other possible forms of the output vector, such as
categorical variables). Our goal is twofold: 1) to segment the time
series into (potentially non-contiguous) partitions $x_{\mathcal{I}_1},
\ldots, x_{\mathcal{I}_k}$, such that all the time points within each
partition can be modeled via a single probabilistic model $p(y_t |
x_t; \theta_i), \;\; \forall t \in \mathcal{I}_i$, parameterized by
$\theta_i \in \mathbb{R}^d$; and 2) to determine the
parameters $\theta_i$ of each different segment. This is an extremely
general problem formulation and captures many of the aims of time
series latent variable models like hidden Markov models and their many
extensions \cite{rabiner1989tutorial}, multiple change-point detection
methods \cite{basseville1993detection}, and switched dynamical systems
\cite{sun2006switched}.
A common approach to such tasks is what we generically refer to as a
latent state mixture model, with an illustrative example shown in
Figure \ref{fig-model} (left). In such models, we associate with each
time a discrete latent state $z_t \in \{1,\ldots,k\}$ that
``selects'' parameters to use for modeling the conditional
distribution $p(y_t | x_t, z_t) = p(y_t | x_t; \theta_{z_t})$. Under
such a model, the segmentation and modeling challenges above can be
addressed respectively by e.g., 1) computing the most likely sequence
of latent states; and 2) jointly inferring the distribution over hidden
states and model parameters through the classical EM algorithm.
Though such models are very powerful, the fact that the EM algorithm
can be highly susceptible to local optima often makes learning such
models difficult, especially for complex distributions with many
parameters. Furthermore, although the model over the latent
variables, $p(z_{t+1} | z_t; A)$, can capture complex dynamics in the
system, in practice a primary characteristic that these models must
capture is simply a ``stickiness'' property: the fact that a system
in state $z_t$ tends to stay in this state with relatively high
probability; indeed, including such properties in latent variable models has
been crucial to obtaining good performance \cite{fox2011sticky}.
In this work, we propose to capture very similar behavior, but in a
fully convex probabilistic framework. In particular, we directly
associate with each time point a \emph{separate} set of parameters,
$\theta_t$, but then encourage the parameters to remain constant over
time by penalizing the $\ell_2$ norm of the difference between
successive parameters. As is well-known from the group lasso setting
\cite{yuan2006model}, a penalty on the $\ell_2$ norm will encourage
group sparseness in the differences, i.e., a piecewise-constant
sequence of parameters; a graphical representation of this approach
is shown in Figure \ref{fig-model} (right). Formally, we jointly segment and
model the system by solving the (convex) optimization problem
\begin{equation}
\label{eq-prob}
\minimize_{\theta_1,\ldots,\theta_T} \;\; -\sum_{t=1}^T \log
p(y_t|x_t;\theta_t) + \lambda \sum_{t=1}^{T-1}\|\theta_{t+1} -
\theta_t\|_2.
\end{equation}
This penalty function on $\theta$ is sometimes referred to
as the group fused lasso \cite{alaiz2013group,bleakley2011group} or
the multivariate total variation penalty, but the key of our approach
is to apply such penalties to the underlying parameters of the
probability distribution rather than to the output signal itself.
The resulting model generalizes several existing
methods from the literature, including the standard group fused lasso
itself \cite{bleakley2011group}, time-varying linear regression
\cite{ohlsson2013identification} and auto-regressive modeling
\cite{ohlsson2010segmentation}, and $\ell_1$ mean and variance
filtering \cite{wahlberg2012admm}.
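To make \eqref{eq-prob} concrete, the following is a minimal reference implementation for the simplest special case: a scalar Gaussian observation model with known unit variance, where the negative log-likelihood reduces to squared loss and the group fused lasso reduces to the one-dimensional fused lasso. It uses the off-the-shelf \texttt{cvxpy} modeling package and is only a sketch for small $T$, not the fast solver we develop in the next section:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def segment_gaussian_mean(y, lam):
    # Special case of the segmentation problem: theta_t is a
    # time-varying mean, -log p(y_t; theta_t) = (y_t - theta_t)^2 / 2.
    T = len(y)
    theta = cp.Variable(T)
    loss = 0.5 * cp.sum_squares(y - theta)
    tv = cp.sum(cp.abs(theta[1:] - theta[:-1]))  # 1-D fused lasso
    cp.Problem(cp.Minimize(loss + lam * tv)).solve()
    return theta.value

# Toy usage: two segments with different means.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 0.3, 100), rng.normal(2.0, 0.3, 100)])
theta_hat = segment_gaussian_mean(y, lam=5.0)
change_points = np.where(np.abs(np.diff(theta_hat)) > 1e-3)[0]
\end{verbatim}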
Although the proposed approach is conceptually simple, there are a
number of elements needed to make this approach practical;
together, these make up the majority of our contribution in this work.
First, the optimization problem
\eqref{eq-prob} is challenging: it involves a $kT$ dimensional optimization
variable, potentially non-quadratic loss functions, a non-smooth
$\ell_2$ norm penalty, and achieving \emph{exact} sparseness in these
differences is crucial given that we want to use the method to split
the time series into distinct segments. Second, a
major advantage of latent variable models is that they can capture
disjoint segments, where the underlying parameters change and return
to previous values; this structure is inherently missed by the total
variation penalties, as there is no innate mechanism by which we can
``return'' to previous parameter values. Instead, we propose to
capture much of this same behavior via a two-pass algorithm that
clusters the parameters themselves using kernel density estimation.
Finally, the main message of this paper is an empirical one, that the
convex framework \eqref{eq-prob} can perform as well, in practice, as
much more complex latent variable models, while simultaneously being
much faster and easier to optimize. Thus, we present empirical
results on three real-world domains studied in the latent variable
modeling literature: segmenting honey bee motion, detecting and
modeling device energy consumption in a home, and segmenting motion
capture data. Together, these illustrate the wide applicability of
the proposed approach.
\begin{figure}[t]
\label{fig-model}
\centering
\includegraphics[width=2.4in]{generic_latent_ts}
\hspace{0.4in}
\includegraphics[width=2.4in]{segment_ts}
\caption{A generic latent variable time series model (left) and our
convex framework for segmentation and modeling (right).}
\end{figure}
\section{Efficient ADMM optimization method with fast Newton subroutines}
In this section, we develop fast algorithms for solving the probabilistic
segmentation problem \eqref{eq-prob} with different probabilistic
models $p(y_t | x_t;\theta_t)$, a necessary step for applying the model to
real data sets. Although the problem is convex, optimization is
complicated by the non-smooth nature of the total variation norm (i.e. exactly
the structure that promotes sparsity in the change points which we desire) and
the composite objective incorporating a probability distribution with a possibly
different set of parameters at each time point. The result is a
difficult-to-solve optimization problem, and we found that
off-the-shelf solvers performed poorly on even moderately sized
examples. Our approach to
optimization decomposes the objective into many smaller subproblems
using the alternating direction methods of multipliers (ADMM)
\cite{boyd2011distributed}---iterating between solving the subproblems
and taking a gradient step in the dual.
Omitting details for the sake of brevity (the derivations here are straightforward; for a thorough
description of ADMM, including a complete description of the algorithm, we refer readers to \cite{boyd2011distributed}, and a
similar form of ADMM, though just for quadratic loss, is also described in
\cite{wytock2014fast}), for problems of the form \eqref{eq-prob}, the
algorithm iteratively performs the following updates starting at some
initial $\theta_{1:T}^0$, $z_{1:T}^0$, and $u_{1:T}^0$:
\begin{equation}
\label{eq-admm}
\begin{split}
\theta^{k+1}_t & \leftarrow \argmin_{\theta_t} \;\; -\log p(y_t | x_t ;
\theta_t) + \frac{\rho}{2} \|\theta_t - z^k_t + u^k_t\|_2^2, \;\; \forall
t=1,\ldots,T \\
z^{k+1}_{1:T} & \leftarrow \argmin_{z_{1:T}} \;\; \lambda \sum_{t=1}^{T-1} \|z_t -
z_{t+1}\|_2 + \frac{\rho}{2} \sum_{t=1}^T \|\theta^{k+1}_t - z_t + u^k_t\|_2^2 \\
u^{k+1}_{t} & \leftarrow u^k_t + \theta^{k+1}_t - z^{k+1}_t, \;\; \forall
t=1,\ldots,T.
\end{split}
\end{equation}
ADMM is particularly appealing for such problems because the $z$
update here is precisely a group fused lasso, a problem for
which efficient second-order methods exist \cite{wytock2014fast}, and
because the $\theta_t$ updates are separate, which allows the method
to be trivially parallelized. In addition,
the proposed ADMM approach is appealing because it is extensible---for
example, we could encode
additional structure in the problem by penalizing the trace norm
\cite{recht2010guaranteed} of $[\theta_1,\ldots,\theta_T]$, an extension that is
straightforward, requiring only minor modifications to the algorithm
and the implementation of the proximal operator for this new penalty
(in this case, thresholding on singular values).
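For orientation, the overall structure of \eqref{eq-admm} fits in a few lines; in the sketch below the two proximal subroutines are hypothetical callbacks (\texttt{prox\_loss} would be one of the Newton solvers derived next, and \texttt{prox\_gfl} a group fused lasso solver such as that of \cite{wytock2014fast}):
\begin{verbatim}
import numpy as np

def admm_segment(prox_loss, prox_gfl, theta_init, rho=1.0, iters=200):
    # theta_init: T x d array, one parameter vector per time point.
    T = theta_init.shape[0]
    theta = theta_init.copy()
    z = theta_init.copy()
    u = np.zeros_like(theta_init)
    for _ in range(iters):
        for t in range(T):  # separable across t, hence parallelizable
            theta[t] = prox_loss(z[t] - u[t], t, rho)
        z = prox_gfl(theta + u, rho)  # group fused lasso subproblem
        u = u + theta - z             # scaled dual update
    return z  # exactly piecewise constant in t
\end{verbatim}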
While fast algorithms exist for the total variation norm, the novel element
required for our problem is efficient implementation of the proximal operators
for the log-loss term, that is, the $\theta$ updates in
\eqref{eq-admm}. In particular (observing that this term separates
over time points, we drop the subscript $t$), we derive efficient implementations
for the subproblems
\begin{equation}
\minimize_\theta -\log p(y|x;\theta) + \frac{\rho}{2}\|\theta -
\theta_0\|_2^2.
\end{equation}
Note that since our framework allows
for the possibility of different parameters at each time point, in each
iteration of ADMM we must solve this problem $T$ times resulting in different
estimates for $\theta_t$, a setting somewhat different than standard maximum
likelihood estimation in which we estimate one set of parameters for many data
points. Furthermore, as we highlight in the sequel, minimizing the log-loss term
over only a single observation gives rise to additional structure which we can
exploit. Next, we consider fast updates for the cause of a
multivariate conditional Gaussian (with unknown covariance/precision
matrix), a natural distribution for our model. The models are of
course also extensible to other probability models such as the softmax
model for discrete outputs, and the resulting method is very similar
to that presented below. Code for the full algorithm will be
included with the final version of the paper.
\subsection{Gaussian model}
Suppose we have a continuous output variable $y \in
\mathbb{R}^p$ and we model $y|x$ as
\begin{equation}
p(y|x;\Lambda,\Theta) = \frac{1}{Z(x)}\exp\left(-\frac{1}{2}y^T\Lambda y -
x^T\Theta y \right)
\end{equation}
with parameters $\Lambda \in \mathbb{R}^{p \times p}$ and $\Theta \in
\mathbb{R}^{n \times p}$; note that under this model, $\Theta$ and
$\Lambda$ take the place of $\theta$ with $\theta = [\vect(\Lambda)^T,
\vect(\Theta)^T]^T$ and for the rest of this section we simply consider the
parameters to be $\Lambda$ and $\Theta$ for ease of notation. This model is
equivalent to a multivariate Gaussian with $y|x \sim
\mathcal{N}(-\Lambda^{-1}\Theta^Tx,\Lambda^{-1})$ but this particular
parameterization with the scaled mean parameter is attractive as it admits
a convex regularized maximum likelihood estimation problem:
\begin{equation}
\minimize_{\Lambda,\Theta} -\log \det \Lambda +
y^T\Lambda y + x^T\Theta\Lambda^{-1}\Theta^T x + 2y^T\Theta^T x +
\frac{\rho}{2} \left(\|\Lambda-\Lambda_0\|_F^2 + \|\Theta-\Theta_0\|_F^2 \right).
\end{equation}
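Before proceeding, we note this subproblem is easy to transcribe into \texttt{numpy} for numerically checking the gradient expressions derived below against finite differences (a sketch only; here $\Lambda\in\mathbb{R}^{p\times p}$ is symmetric positive definite and $\Theta\in\mathbb{R}^{n\times p}$):
\begin{verbatim}
import numpy as np

def objective(Lam, Th, x, y, rho, Lam0, Th0):
    Li = np.linalg.inv(Lam)
    return (-np.linalg.slogdet(Lam)[1] + y @ Lam @ y
            + x @ Th @ Li @ Th.T @ x + 2.0 * y @ (Th.T @ x)
            + 0.5 * rho * (np.linalg.norm(Lam - Lam0, 'fro')**2
                           + np.linalg.norm(Th - Th0, 'fro')**2))

def gradients(Lam, Th, x, y, rho, Lam0, Th0):
    Li = np.linalg.inv(Lam)
    xx = np.outer(x, x)
    gLam = -Li + np.outer(y, y) - Li @ Th.T @ xx @ Th @ Li + rho * (Lam - Lam0)
    gTh = 2.0 * xx @ Th @ Li + 2.0 * np.outer(x, y) + rho * (Th - Th0)
    return gLam, gTh
\end{verbatim}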
This problem is convex and without the addition of the augmented
Lagrangian terms can be solved in closed form; however, with those terms no such
closed form exists and thus our approach is to develop a second-order Newton
method. We start by taking the gradient
\begin{equation}
\begin{split}
\nabla_\Lambda &= - \Lambda^{-1} + yy^T -
\Lambda^{-1}\Theta^Txx^T\Theta\Lambda^{-1} + \rho(\Lambda - \Lambda_0)\\
\nabla_\Theta &= 2xx^T\Theta\Lambda^{-1} + 2xy^T + \rho(\Theta-\Theta_0)
\end{split}
\end{equation}
and we solve for the Newton direction, parameterized by $(U,V)$ where $U$
represents the change in $\Lambda$ and $V$ in $\Theta$, by considering the
system of equations
\begin{equation}
\begin{split}
\Lambda^{-1}U\Lambda^{-1} +
\Lambda^{-1}\Theta^Txx^T\Theta\Lambda^{-1}U\Lambda^{-1} +
\Lambda^{-1}U\Lambda^{-1}\Theta^Txx^T\Theta\Lambda^{-1} \\
- \Lambda^{-1}V^Txx^T\Theta\Lambda^{-1}
- \Lambda^{-1}\Theta^Txx^TV\Lambda^{-1}
+ \rho U &= \nabla_\Lambda \\
2xx^TV\Lambda^{-1} - 2xx^T\Theta\Lambda^{-1}U\Lambda^{-1} + \rho V &= \nabla_\Theta
\end{split}
\end{equation}
which is a Sylvester-like equation that we could solve using the identity
$\vect(AXB) = (B^T \otimes A)\vect(X)$ where $\otimes$ denotes the Kronecker
product.
However, naively employing this approach requires inverting a $p(n+p)
\times p(n+p)$ matrix and thus is not computationally tractable for
reasonably sized problems. Instead, we simplify the system of
equations analytically so that solving for the Newton direction
requires only $O(p^3)$ operations. We proceed by taking the
eigendecomposition $\Lambda^{-1} = WSW^T$ and writing this system of
equations as
\begin{equation}
\begin{split}
S\tilde{U}S + S\tilde{\Theta}^Txx^T\tilde{\Theta}S\tilde{U}S +
S\tilde{U}S\tilde{\Theta}^Txx^T\tilde{\Theta}S \\
-S\tilde{V}^Txx^T\tilde{\Theta}S - S\tilde{\Theta}^Txx^T\tilde{V}S + \rho\tilde{U} &= W^T\nabla_\Lambda W \\
2xx^T\tilde{V}S -2xx^T\tilde{\Theta}S\tilde{U}S + \rho\tilde{V} &=
\nabla_\Theta W
\end{split}
\end{equation}
where $\tilde{U} = W^TUW$, $\tilde{V} = VW$ and $\tilde{\Theta} = \Theta
W$. Now using the $\vect$ operator we have
\begin{equation}
\label{eq-kron-newton}
\left[ \begin{array}{cc}
S \otimes S + S \otimes aa^T + aa^T \otimes S + \rho I & S \otimes
ax^T + (S \otimes ax^T)K_{np} \\
S \otimes xa^T + K_{pn}(xa^T \otimes S) & 2S \otimes xx^T + \rho I
\end{array} \right]
\left[
\begin{array}{cc}
\vect \tilde{U} \\ \vect \tilde{V}
\end{array} \right] =
\left[
\begin{array}{cc}
\vect{W^T\nabla_\Lambda W} \\ \vect \nabla_\Theta W
\end{array} \right]
\end{equation}
where $a = -S\tilde{\Theta}^Tx$ and $K_{np}$ is the commutation matrix
(see e.g. \cite{magnus1988matrix}). Although the matrix on the LHS of this linear system is
large, it is highly structured; specifically it can be factorized into
diagonal and low rank components, written as $D + AA^T$ with
\begin{align}
D = \left[ \begin{array}{cc}
S \otimes S + \rho I & 0 \\
0 & \rho I
\end{array} \right]
&&
A = \left[ \begin{array}{cc}
S^{1/2} \otimes a & a \otimes S^{1/2} \\
S^{1/2} \otimes x & S^{1/2} \otimes x
\end{array} \right].
\end{align}
Next, using the matrix inversion lemma we have
\begin{equation}
\label{eq-matrix-inversion}
(D + AA^T)^{-1} = D^{-1} - D^{-1}A(I + A^TD^{-1}A)^{-1}A^TD^{-1}
\end{equation}
and after a bit of algebra, we observe that
\begin{equation}
\label{eq-mat-inv}
(I + A^TD^{-1}A) = \left[ \begin{array}{cc}
X + C_1 & X + C_2 \\
X + C_2 & X + C_1
\end{array} \right]
\end{equation}
where
\begin{align}
X = \frac{1}{\rho}(x^Tx)S, && C_1 = \diag\left( \frac{(a \circ a)s^T}{ss^T +
\rho} 1 \right), && C_2 = \frac{S^{1/2}aa^TS^{1/2}}{ss^T + \rho}.
\end{align}
Using this form, we are able to compute the Newton direction
$(D+AA^T)^{-1}\vect(G)$, where $G$ is the RHS of
\eqref{eq-kron-newton}, without ever forming the Kronecker products
explicitly, resulting in a computational complexity of $O(p^3)$, the cost of inverting
\eqref{eq-mat-inv}.
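To make the use of \eqref{eq-matrix-inversion} concrete, the following Python sketch solves a generic diagonal-plus-low-rank system; it is a simplified stand-in for the actual Newton step, in which $D$ and $A$ additionally carry the Kronecker structure above and the products are formed implicitly:
\begin{verbatim}
import numpy as np

def solve_diag_plus_lowrank(d, A, g):
    """Solve (D + A A^T) u = g with D = diag(d), via the matrix
    inversion lemma, without ever forming the large matrix.

    d: (N,) positive diagonal; A: (N, r) with r << N; g: (N,).
    Cost is O(N r^2 + r^3) rather than O(N^3)."""
    Dinv_g = g / d                                # D^{-1} g
    Dinv_A = A / d[:, None]                       # D^{-1} A
    small = np.eye(A.shape[1]) + A.T @ Dinv_A     # I + A^T D^{-1} A
    return Dinv_g - Dinv_A @ np.linalg.solve(small, A.T @ Dinv_g)
\end{verbatim}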
\begin{comment}
\subsection{Softmax model}
We also consider discrete $y \in \{1,\ldots,k\}$ and a softmax model
\begin{equation}
p_i = p(y=i|x;\theta) = \frac{\exp\left({\theta^{(i)}}^Tx\right)}{1 + \sum_{i'=1}^{k-1}\exp\left({\theta^{(i')}}^Tx\right)}
\end{equation}
and $p_k = 1 - \sum_{i=1}^{k-1} p_i$. This model is parameterized by the
$(k-1)n$-dimensional vector $\theta = {({\theta^{(1)}}^T, \ldots,
{\theta^{(k-1)}}^T)}^T$ and minimizing the negative log-likelihood with
the addition of an augmented Lagrangian term takes the form
\begin{equation}
\minimize_{\theta} - \sum_{i=1}^{k-1}I(y=i){\theta^{(i)}}^Tx + \log \left( 1 + \sum_{i=1}^{k-1} \exp
\left({\theta^{(i)}}^Tx\right) \right) + \frac{\rho}{2}\|\theta - \theta_0\|_2^2.
\end{equation}
With small modification to well-known results (see
e.g. \cite{bohning1992multinomial}), we have the
gradient and Hessian
\begin{equation}
\begin{split}
\nabla_\theta &= (p - e_y) \otimes x + \rho(\theta - \theta_0)\\
\nabla^2_\theta &= (P - pp^T) \otimes xx^T + \rho I
\end{split}
\end{equation}
where $e_y$ denotes the basis vector with a $1$ at position $y$ if $y < k$ and
$P$ is the diagonal matrix with $P_{ii} = p_i$. As with the Gaussian distribution,
we write the Hessian as a diagonal plus low rank matrix $D + AA^T$ with
$D = \rho I$, $A = \left[ \begin{array}{cc} P^{1/2} \otimes x & p
\otimes x \end{array} \right]$. Again using the matrix inversion
lemma we can derive an efficient Newton update, in this case using the
explicit form
\begin{equation}
(I + A^TD^{-1}A) = \frac{x^Tx}{\rho} \left[ \begin{array}{cc}
P & P^{1/2}p \\
p^TP^{1/2} & p^Tp
\end{array} \right]
\end{equation}
and thus the computation of the Newton direction is $O(k^3)$.
\end{comment}
\section{Segment clustering via kernel density estimation}
As mentioned above, a notable disadvantage of our proposed convex
segmentation methods is that, unlike latent variable models, there is
no inherent notion of parameters being tied across disjoint segments
of the time series. Indeed, the effect of the above segmentation
model will be to determine the best single model for each segment
individually (modulo the regularization penalties). Although we
observe that, in real-world settings, this does not appear to be as
large a problem as might be imagined for learning the individual model
parameters themselves (the ``stickiness'' component mentioned above
typically means that there is enough data per segment to learn
good models), it is a substantial concern if the overall goal is to
make joint inferences about the nature of related segments in the same
time series or across multiple time series. To this end, we advocate
for a two-stage alternative to the latent variable model:
in the first stage, we compute the convex segmentation as above, and in
the second stage, we cluster the segments directly in parameter space,
via kernel density estimation.
While there are many methods to cluster points in Euclidean space, density
clustering using the kernel estimator is appealing as identifying modes in the
distribution over the parameter space fits well with our probabilistic model.
The intuitive idea is that given the true probability distribution over the parameter
space $p$ and a point $\theta \in \mathbb{R}^d$, we define the cluster for $\theta$
to be the mode found by following the gradient $\nabla p(\theta)$. In practice,
since we do not know the true underlying distribution, we replace $p$ with
$\hat{p}_h$, the kernel density estimator constructed with bandwidth $h$
\begin{equation}
\hat{p}_h(\theta) = \frac{1}{T} \sum_{t=1}^T \frac{1}{h^d} K \left(\frac{\|\theta-\theta_t\|}{h}\right)
\end{equation}
where $K$ is a smooth, symmetric kernel. The standard kernel choice (which we use) is the Gaussian
kernel; in this case, it is known that the number of modes of $\hat{p}_h$ is a
nonincreasing function of $h$ \cite{silverman1981using} and thus the clustering
is well-behaved. Furthermore, given this property, in practice we typically fix the number
of clusters based on the application and choose $h$ such that
kernel density clustering results in the desired number of
modes. However, we note that there are several possibilities for a more nuanced
selection of the bandwidth---for example, we could select $h$ based on the terms
of the objective function or the difference in norms between adjacent segments.
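A minimal Python sketch of this mode-following procedure, using mean-shift fixed-point iterations for the Gaussian kernel (function and variable names are our own), is:
\begin{verbatim}
import numpy as np

def kde_mode(theta, points, h, iters=200, tol=1e-8):
    """Follow the gradient of the Gaussian KDE from theta to a mode."""
    for _ in range(iters):
        w = np.exp(-np.sum((points - theta) ** 2, axis=1) / (2 * h ** 2))
        new = w @ points / w.sum()        # mean-shift update
        if np.linalg.norm(new - theta) < tol:
            break
        theta = new
    return theta

def cluster_segments(params, h, decimals=3):
    """Label each segment parameter vector by the mode it converges to."""
    modes = np.array([kde_mode(p, params, h) for p in params])
    _, labels = np.unique(np.round(modes, decimals), axis=0,
                          return_inverse=True)
    return labels
\end{verbatim}
In practice $h$ would be adjusted until the desired number of modes is obtained, exploiting the monotonicity property above.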
\section{Experimental results}
In this section, we evaluate the proposed method on several applications,
some of which were previously considered in the context of parametric and
nonparametric latent variable models using Bayesian inference
\cite{fox2011bayesian,fox2013joint,oh2008learning,xuan2007modeling}. In these
applications, we typically demonstrate equal or better performance with a
substantially different approach---unlike the latent variable models, our
method is fully convex and thus not subject to local optima. Following the
direction of previous work, we treat these tasks as unsupervised with the
parameter $\lambda$ controlling the trade-off between the complexity of
the model (number of change points) and the data
fit. In addition to considering our probabilistic method (``TV Gaussian''),
we also consider the previously proposed linear regression model
\cite{ohlsson2013identification} (``TV LR'') in which case the log-loss term
simply becomes the least squares penalty.
\begin{table}
\caption{Performance on dancing honey bees data set.}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline & 1 & 2 & 3 & 4 & 5 & 6 & Average \\
\hline HDP-VAR(1)-HMM unsupervised & 46.5 & 44.1 & 45.6 & 83.2 & 93.2 & 88.7 &
66.9 \\
HDP-VAR(1)-HMM partially supervised & 65.9 & 88.5 & 79.2 & 86.9 & 92.3 & 89.1 &
83.7 \\
SLDS DD-MCMC & 74.0 & 86.1 & 81.3 & 93.4 & 90.2 & 90.4 & 85.9 \\
PS-SLDS DD-MCMC & 75.9 & 92.4 & 83.1 & 93.4 & 90.4 & 91.0 & 87.7 \\
TV Linear regression & 54.4 & 47.7 & 79.6 & 78.8 & 76.1 & 75.5 & 68.9 \\
TV Gaussian & 82.2 & 83.3 & 76.1 & 91.1 & 93.1 & 93.1 & 86.5 \\
\hline
\end{tabular}
\label{tab-bees}
\end{center}
\end{table}
\begin{figure}[t]
\includegraphics{bees_data}
\includegraphics{bees_segments}
\includegraphics{bees_lambda_acc}
\includegraphics[width=2.73in]{bees_shift}
\caption{Segmentation results for honey bee sequence 1 with top plot showing
the observed heading angle of the bee along with the (unobserved) segmentation
provided by human experts. In middle, we show the segmentation provided by the
best choice of $\lambda$ and on bottom left the comparison of performance
vs. $\lambda$. On bottom right, the parameters for each segment (gray circles)
are clustered using kernel density estimation to identify three modes (red
circles).}
\label{fig-bees}
\end{figure}
{\bf Dancing honey bees.} Our first data set involves tracking honey bees from
video, a task that was first considered in \cite{oh2008learning} and
subsequently studied by \cite{xuan2007modeling,fox2011bayesian}. The data includes
the output of a vision system which identifies the bees' position and heading
angle over time and the task is to segment these observed values into three
distinct actions: turn left, turn right and ``waggle''---characterized by rapid
back and forth movement. It is known that these actions are used by the bees
to communicate about food sources, and for the 6 sequences provided we also have a ground truth
labeling of actions by human experts. In Figure \ref{fig-bees} (top) we show the
angle variable along with labels showing behavior changes; as can be seen from
the graph, we found the change in angle to be highly indicative of the bee's
behavior, to the extent that we model this time series ignoring the other
data. Specifically we take first order differences $y_t = \phi_{t+1} - \phi_t$
and represent this sequence probabilistically as $y_t \sim
\mathcal{N}(\mu_t,\sigma^2_t)$, expecting both the mean and variance to
change based on the bee's action.
In Table \ref{tab-bees} we compare the accuracy of our segmentation using the
best setting of $\lambda$ to that of previous work on this data set
\cite{oh2008learning,fox2011bayesian}. Although this is an optimistic
comparison, we observe that our method is not especially sensitive to $\lambda$
as can be seen in Figure \ref{fig-bees} (bottom left); in particular, while the
performance is particularly good for the best $\lambda$, there is a wide
range over which the model performs as well as or better than the other
methods. It should also be noted that with the exception of ``HDP-VAR(1)-HMM
unsupervised'' all of the considered approaches include some level of
supervision (e.g. first training on the other 5 sequences) while our method
is fully unsupervised with only a single tuning parameter. Next, considering the
distribution of the parameters using kernel density estimation, we see in
Figure \ref{fig-bees} (bottom right) that our method indeed identifies the 3
modes of the distribution corresponding to labeled actions: turning left/right
correspond to positive/negative mean while waggle has zero mean but
significantly larger variance. The change in variance offers one intuitive
explanation as to why the probabilistic model outperforms linear regression
on this data set since the latter does not model this behavior.
\begin{figure}[t]
\includegraphics[width=2.73in,height=1in]{pecan_fridge}
\includegraphics[width=2.73in,height=1in]{pecan_ac}
\includegraphics[width=2.73in,height=1in]{pecan_fridge_seg}
\includegraphics[width=2.73in,height=1in]{pecan_ac_seg}
\includegraphics[width=2.73in,height=1in]{pecan_fridge_shift}
\includegraphics[width=2.73in,height=1in]{pecan_ac_shift}
\caption{Segmentation results for power traces from the Pecan street data set
with refrigerator on left and A/C on right. We model each device with an AR(1) process
parameterized as $y_t = \alpha_t + \beta_ty_{t-1}$ and in the middle row show
a particular segmentation for each device using $\lambda= 0.1$ and $1$,
respectively. The bottom row shows the result of kernel density
estimation for clustering the parameter space, identifying 2 modes (off/on) for the A/C and 3
modes for the refrigerator---the third capturing the initial spike when the
refrigerator switches on.}
\label{fig-energy}
\end{figure}
{\bf Modeling sources of energy consumption.} In our next example, we consider
the task of modeling energy consumed by household appliances using data from
Pecan Street, Inc. (\url{http://www.pecanstreet.org/}) collected with current
sensors installed at the circuit level inside the home. In this data set each
device has a unique energy profile and our goal is to build accurate models
which can be used to understand energy consumption in order to improve
energy efficiency. In Figure \ref{fig-energy} (top) we show typical power traces for
two such devices, a refrigerator and an A/C unit---these devices are characterized by a
small number of states, and their energy usage demonstrates strong
persistence between being on/off which we capture with an AR(1) model
parameterized by $y_t = \alpha_t + \beta_ty_{t-1}$. Empirically (as in the
previous example) we found that the probabilistic approach
improved significantly upon the simpler linear regression model which
typically did not produce segments corresponding to logical device states (the on
state was often over-segmented). In contrast, Figure
\ref{fig-energy} (bottom) shows the estimated modes from probabilistic
segmentation which correspond to an off state, on state, and in the case of the
refrigerator, a state representing the initial spike in energy consumed when the device
transitions from off to on.
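A one-line Python sketch of the AR(1) regressor construction used here (our own helper, for illustration):
\begin{verbatim}
import numpy as np

def ar1_features(y):
    """Regressors for y_t = alpha_t + beta_t * y_{t-1}: x_t = [1, y_{t-1}],
    aligned with targets y_1, ..., y_{T-1}."""
    return np.column_stack([np.ones(len(y) - 1), y[:-1]]), y[1:]
\end{verbatim}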
\begin{figure}[t]
\includegraphics{mocap_data}
\includegraphics{mocap_seg}
\includegraphics[width=1.8in,height=1.8in]{mocap_cluster}
\includegraphics{mocap_dist_1}
\includegraphics{mocap_dist_2}
\caption{Segmentation results on motion capture with input data of
12 joint angles measured at 10Hz and manual segmentation on top; in middle,
the segmentation from the AR(2) Gaussian
model plus iterative reweighting. On bottom, we
compare accuracy on identifying actions (segmentation plus clustering) with
the reweighted Gaussian model performing the best. We examine this behavior on bottom
middle/right, by reordering $\theta_1,\ldots,\theta_T$ so that identical actions
are adjacent and consider distance between pairs $(\theta_t,\theta_{t'})$ with dark
signifying closer. Comparing middle (non-reweighted) vs. right (reweighted),
the evident block structure indicates that parameters for the same action are
relatively closer after reweighting, which (presumably) allows for better parameter estimation.}
\label{fig-mocap}
\end{figure}
{\bf Motion capture.} In our final example we consider segmenting motion capture
data, a task first proposed in \cite{fox2013joint} in conjunction with a
hierarchical nonparametric Bayesian model specifically designed to jointly model
behavior across subjects. We attempt to replicate that experimental setup here
which includes sub-selecting from 62 available measurements a representative set of 12 angles
characterizing the behavior of the subject and manually labeling the sequences
with one of 12 actions. A typical sequence is shown in
Figure \ref{fig-mocap} (top), which (after normalization) we take as the output
variables $y_t \in \mathbb{R}^{12}$. As the signal shows not only persistence
but also clear periodic structure, we model this as an AR(2) process resulting
in $x_t \in \mathbb{R}^{25}$ and a parameter space with much higher dimension
than in previous examples (in the Gaussian model, $\Lambda_t \in
\mathbb{R}^{12 \times 12}$ and $\Theta_t \in \mathbb{R}^{25 \times 12}$).
First, in Figure \ref{fig-mocap} (middle) we have an accurate segmentation
provided by the Gaussian model with the
additional step of a few iterations of iterative reweighting
\cite{candes2008enhancing}, an extension to the algorithm that previous authors
have suggested in the case of linear regression
\cite{ohlsson2010segmentation}. We found that while all methods
considered on this data set performed well on segmentation, the addition of the reweighting
step improved parameter estimation significantly resulting in the better
performance shown in Figure \ref{fig-mocap} (bottom left); here the comparison
depends on both accurate segmentation and parameter estimation in order
for density-based clustering to identify similar segments, which is required
to do well on this task.
The intuition behind the improvement from reweighting is shown in
Figure \ref{fig-mocap} (bottom right) which compares the distance of the
parameters after segmentation for the Gaussian model and the Gaussian model plus
reweighting---we see that when the parameters are allowed to
vary more significantly between segments (as a consequence of reweighting), the
parameters corresponding to the same action remain close
relative to the parameters for different actions. This is likely due to
better parameter estimation on the individual segments
from reducing the bias from total variation regularization. Overall, the
reweighted Gaussian model achieves accuracy of around 60\% which is comparable
to most previous results from \cite{fox2013joint} but somewhat worse than the
best model which is specifically designed for this task and benefits from highly
structured prior information.
\section{Conclusions and future work}
At a basic level, the techniques proposed in this paper center around
finding a convex (and hence, local-optima-free) approach to modeling
time series data in a manner that naturally segments the data into
different probabilistic models. While the proposed method works well
in many settings, numerous extensions are possible in the overall
framework. For instance, we can consider imposing additional
regularization on the joint space of parameters to enforce further
structure; we can generalize the setting to many other possible loss
functions; we can generalize the total variation penalty to the more
general $\ell_1$ trend filtering setting \cite{kim2009ell_1}, to
capture linear or higher order smooth segments in parameter space; and
we can extend the total variation penalty to non-adjacent time points,
potentially directly allowing for segmentation across non-contiguous
regions. Further, we can explore extensions to kernel density
estimation in the parameter space that explicitly model the evolution
of these parameters, allowing us to build a full generative model rather than
just finding modes as we do now. Together, we believe that this
combination of approaches can lead to time series methods competitive
with latent variable models in terms of their flexibility and
representational power, but which are substantially easier and more
efficient to build and learn.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
Over the past few years, the network multiple-input multiple-output (N-MIMO) technique \cite{M.V.Clark} has received significant attention due to its flexibility, power and capacity advantages over centralized architectures in fifth generation (5G) dense cellular networks. Multiple mobile terminals (MTs) may share the same radio resources and be served by the corresponding access points (APs), where the inter-cell interference can be effectively mitigated. This principle is also applied in the coordinated multipoint (CoMP) approach standardized in LTE-A \cite{LTE-A, R.Irmer}. The Cloud Radio Access Network (C-RAN) concept has been proposed in \cite{R.Irmer} with similar goals. However, a potential issue in these approaches is a significantly increased upload burden on the backhaul (referred to as fronthaul in C-RAN) network between the APs and the central processing unit (CPU) on the uplink, especially for wireless backhaul networks. Several methods have been studied in previous work to address this problem. In \cite{re-bkload0}, Wyner-Ziv compression is utilised to reduce the backhaul load. Iterative interference cancellation and compressive sensing algorithms are designed in \cite{re-bkload2}-\cite{nonidcran} as alternative solutions, but the total backhaul load typically remains several times the total user data rate. In \cite{Lattice_QTSun, DongF}, novel approaches based on physical layer network coding (PNC) have been designed that keep the total backhaul load equal to the total user data rate.
PNC is a scheme implemented at the APs in which each AP attempts to infer and forward combinations, over an algebraic field, of the signals that are transmitted simultaneously from multiple sources and superimposed in the received constellation. An important property of PNC is that the APs decode the joint messages from multiple sources to a linear function over the algebraic field rather than decoding each source symbol individually. From another perspective, PNC is a multiple-message compression technique that greatly improves network throughput while considerably reducing the cardinality of the relay outputs. Hence PNC is an appealing technique to serve wireless RAN infrastructures with high user density in 5G and beyond.
Previous work on PNC mainly focused on the two-way relay channel (TWRC) scenario, where the network throughput is easily doubled without routing operations \cite{LPNC}-\cite{DongF2}. The original PNC was proposed and designed for a TWRC based on BPSK \cite{Zhang2006}. Although only BPSK was used, PNC introduced an influential idea and has motivated much subsequent research, e.g. compute-and-forward (C$\&$F), which generalizes PNC from the TWRC to multiuser relay networks by utilizing structured nested lattice codes \cite{CF_Nazer}, and lattice network coding \cite{FengLNC}. However, lattice-based network coding with constructions A and D operates over a finite field, and the coset size of the quotient lattices is typically not binary-based \cite{Zhang2006}. Lattice codes are therefore disadvantageous for engineering applications, as non-binary codes over large prime fields are required.
In contrast to the previous work on PNC, we focus on designing PNC approaches with conventional $2^m$-ary digital modulation. When QAM modulation schemes are used at the MTs, PNC has to solve the so-called singular fading problem, which is typically unavoidable in the multiple access phase under some circumstances at each AP. Failure to resolve this problem results in degraded network performance. Koike-Akino \textit{et al.} \cite{Popovski, Popovski2} propose a scheme, namely denoise-and-forward, which employs a non-linear $5$QAM PNC mapping to mitigate all singular fade states, and gives good performance. Other research on this issue has focused on the design of linear functions over the integer finite field or ring, e.g. linear PNC (LPNC) \cite{ABurr}, which can only be optimised for a $q$-ary PNC mapping where $q$ is a prime in $\mathbb{Z}^+$. All these approaches, however, do not operate over binary systems, and hence cannot be readily applied in current mobile communication networks. The work in \cite{DongF} provides a solution for implementing PNC in binary systems with low modulation orders only.
In this paper, we propose an adaptive PNC with reduced backhaul load and unambiguous decoding for QAM modulation schemes and the main contributions are listed as follows
\begin{enumerate}
\item A PNC design guideline for uplink scenarios is proposed, along with a search algorithm based on this guideline to find the optimal coefficient mapping matrices, such that a) the global matrix, formed by the coefficient matrices from each AP, guarantees that all source symbols can be decoded at the CPU; b) the matrices stored at each AP can resolve all singular fade states; c) the number of coefficient matrices stored at each AP is minimised; d) the proposed algorithm generalises to QAM modulation schemes of different orders; e) the proposed algorithm can be applied in N-MIMO systems with multiple MTs and APs.
\item The whole scheme operates over binary systems with multiple MTs and APs. As discussed earlier regarding the advantages of coping with the singular fading problem in the multiple access stage, PNC plays a reliable role not only in the TWRC but also in a RAN serving multiple MTs. In this paper, we investigate the design criteria of engineering-applicable PNC over binary systems for the uplink of a 5G N-MIMO system and discuss how to address the singular fading problem with multiple MTs.
\item A regulated PNC approach which fulfils the low latency demand of practical networks is also presented in this paper. The regulated approach is developed based on the original search algorithm, and a lookup table mechanism is adopted to achieve low latency. In this approach, all the optimal coefficient mapping matrices that resolve the different singular fade state combinations are stored at the APs and the CPU, together with a table of their indexes. Instead of searching among the matrix candidates, the optimal matrix selection algorithm is replaced by a lookup in the index table. The drawback of this approach is discussed in this paper, and we provide a solution to overcome the problem.
\item The impact of estimated channel information on the optimal matrix selection, as well as the effect of a reduced number of singular fade states on the performance degradation, are studied. The proposed PNC mapping selection algorithm depends on the accuracy of the channel information at each AP, thus we test how the estimated channel affects the proposed algorithm. Using a reduced number of singular fade states lowers the computational complexity, but performance degradation is observed. We discuss these issues in this paper and give potential resolutions.
\end{enumerate}
The rest of this paper is organised as follows. The introduction of N-MIMO systems and definitions of PNC design criteria are given in Section II and III, respectively. The proposed binary matrix adaptive selection algorithm is derived in Section IV, followed by the discussion of methods to reduce the computational complexity of the proposed algorithm in Section V. Numerical results are given in Section VI and finally the conclusions are drawn in Section VII.
\section{System Model}
A two-stage uplink model of the N-MIMO system is illustrated in Fig. \ref{fig:system}. We assume the MTs and APs are all equipped with a single antenna for simplicity. At the first stage, $u$ MTs transmit symbols to $n$ APs during the same period, which is referred to as the multi-access stage. We have studied the impact of synchronisation errors in \cite{Acs}, so in this paper we assume perfect synchronisation for simplicity. Each AP receives data from all MTs and then infers and forwards a linear combination (referred to as the network coded symbols (NCS) in this paper) of the entire set of messages over a finite field or ring. The second stage is called the backhaul stage, where the $n$ APs forward the NCSs to the CPU via a lossless but capacity-limited `bit-pipe'. In this paper, the links in the multi-access stage are modelled as wireless links in order to fulfil the requirements of 5G systems, while the backhaul links may be wireless or deployed on wireline. The techniques presented in this paper are particularly suitable for wireless backhaul, which is normally more cost-effective.
Each MT employs a $2^m$-ary digital modulation scheme, where $m$ denotes the modulation order. Let $\mathscr{M}:\mathbb{F}_{2^m} \longrightarrow \Omega$ denote a one-to-one mapping function, where $\Omega$ is the set of all possible complex constellation points. Hence the message $\mathbf{w}_{\ell}\in \mathbb{F}_{2^m}$ at the $\ell^{\mathrm{th}}$ MT can be mapped to the complex symbol $s_{\ell} = \mathscr{M}(\mathbf{w}_{\ell})$, where $\mathbf{w}_{\ell} = [w_{\ell}^{(1)},\cdots,w_{\ell}^{(m)}]$ is an $m$-tuple with each element $w_{\ell}^{(i)}\in\mathbb{F}_2$.
The link between all MTs and the $j^{\mathrm{th}}$ AP forms a multiple access channel (MAC), where the $j^{\mathrm{th}}$ AP observes the noisy, faded and superimposed signals at a certain time slot, mathematically given by
\begin{align}
y_{j} = \sum_{\ell=1}^u h_{j,\ell}s_{\ell} + z_j, \label{equ:channel}
\end{align}
where $z_j$ denotes the additive complex Gaussian noise with zero mean and variance $\sigma^2$, and $h_{j,\ell}$ represents the channel fading coefficient between the ${\ell}^{\mathrm{th}}$ MT and the $j^{\mathrm{th}}$ AP, which is a random variable with Rayleigh distribution.
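For illustration, one MAC observation of (\ref{equ:channel}) can be simulated as follows (a sketch of our own, assuming unit-energy $4$QAM at the MTs and unit-variance Rayleigh fading):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
u, snr_db = 2, 20
sigma2 = 10 ** (-snr_db / 10)

qam4 = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s = rng.choice(qam4, size=u)                         # one symbol per MT
h = (rng.standard_normal(u) + 1j*rng.standard_normal(u)) / np.sqrt(2)
z = np.sqrt(sigma2/2) * (rng.standard_normal() + 1j*rng.standard_normal())
y_j = h @ s + z                                      # superimposed signal
\end{verbatim}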
\begin{figure}[t]
\centering
\begin{minipage}[t]{1\linewidth}
\centering
\includegraphics[width=1\textwidth]{System.eps}
\end{minipage}
\caption{\small The uplink system diagram.} \label{fig:system}
\end{figure}
\section{Design Criteria}
\label{sec:UD}
Before presenting the proposed PNC design criteria, we list three main constraints for a PNC to be engineering applicable:
\begin{enumerate}
\item The PNC decoding must operate over $\mathbb{F}_2$; thus, the NCS need to be binary-based.
\item The PNC mapping function must be well designed such that all singular fade states can be resolved.
\item The PNC mapping functions must ensure that CPU can unambiguously recover all source messages.
\end{enumerate}
We give details of the proposed design criteria for PNC in N-MIMO systems in this section, which address all three aforementioned constraints.
\subsection{Engineering Applicable PNC Function}
We are primarily concerned with the MAC phase between the $u$ MTs and the $j^{\mathrm{th}}$ AP in the design of the PNC mapping function. Instead of using PNC approaches performing linear combinations at the symbol level, such as \cite{LPNC}, we design a method to encode PNC directly at the bit level, which allows the APs to operate over a binary field for industrial application.
\textit{Definition 1:} The bit-level linear network coding function of the $j^{\mathrm{th}}$ AP for $u$ MTs is defined as
\begin{align}
\mathscr{N}_j: (\mathbf{M}_j,\mathbf{w}) \longrightarrow \mathbf{x}_j,
\end{align}
and mathematically expressed as
\begin{align}\label{eq:NCSt}
\mathbf{x}_j = \mathscr{N}_j(\mathbf{M}_j,\mathbf{w}) = \mathbf{M}_j \otimes \mathbf{w},
\end{align}
where $\mathbf{w}\triangleq [\mathbf{w}_1,\cdots,\mathbf{w}_u]^T$ denotes an $mu \times 1$ joint message set with $\mathbf{w}\in\mathbb{F}_2^{mu \times 1}$, and each $\mathbf{w}_\ell$ stands for a $1 \times m$ binary data vector at the $\ell^{\mathrm{th}}$ MT. $\mathbf{M}_j$ denotes a matrix in $\mathbb{F}_2^{t \times mu}$, where $t$ stands for the size of the network coded vector at the $j^{\mathrm{th}}$ AP and $t \geq m$, and $\otimes$ denotes multiplication over $\mathbb{F}_2$. $\mathbf{x}_j\in\mathbb{F}_2^{t \times 1}$ is called the network coded vector (NCV), which consists of all $t$ linear network coded bits
\begin{align}
\mathbf{x}_j = [x_j^{(1)},x_j^{(2)},\cdots,x_j^{(t)}]^T.
\end{align} $\square$
It is obvious that each coded bit $x_j^{(i)}$ is indeed a linear combination of all source bits over $\mathbb{F}_2$, thus,
\begin{align}
x_j^{(i)} = M_j^{(i,1)}\otimes w_{1}^{(1)}\oplus\cdots\oplus M_j^{(i,um)}\otimes w_{u}^{(m)},
\end{align}
where $\oplus$ denotes the addition operation over $\mathbb{F}_2$, and $M_j^{(i,1)}$ denotes the entry at the $i^{\mathrm{th}}$ row and the $1^{\mathrm{st}}$ column of $\mathbf{M}_j$.
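A minimal Python sketch of this GF($2$) mapping (the example matrix is arbitrary, chosen only for illustration):
\begin{verbatim}
import numpy as np

def ncv(M, w):
    """NCV x = M (x) w over GF(2); each coded bit is a binary linear
    combination of all mu stacked source bits (Definition 1)."""
    return (M @ w) % 2

# toy case: u = 2 MTs, m = 2 bits each (4QAM), t = 2 coded bits
M = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1]])      # bit-wise XOR of the two users
w = np.array([1, 0, 1, 1])        # stacked [w_1 ; w_2]
print(ncv(M, w))                  # -> [0 1]
\end{verbatim}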
\textit{Definition 2:} We define the constellation set which contains all possible superimposed symbols at the $j^{\mathrm{th}}$ AP over a given channel coefficient vector $\mathbf{h}_j\triangleq[h_{j,1},\cdots,h_{j,u}]$ as $\mathbf{s}_{j,\bigtriangleup}\triangleq[s^{(1)}_{j,\bigtriangleup},\cdots,s^{(2^{mu})}_{j,\bigtriangleup}]$, where
\begin{align}
s^{(\tau)}_{j,\bigtriangleup} = \sum_{\ell=1}^u h_{j,\ell}s_{\ell}, ~~\forall s_{\ell}\in\Omega,
~\tau = 1,2,\cdots,2^{mu}. \notag \label{equ:SIC.Definition}
\end{align}
\begin{theorem}
For the MAC link between $u$ MTs and the $j^{\mathrm{th}}$ AP, there exists a surjective function
\begin{align}
\mathrm{\Theta}: \mathbf{s}_{j,\bigtriangleup} \longrightarrow \mathbf{x}_j,
\end{align}
when the size of the NCV satisfies $t<mu$.
\end{theorem}
\begin{IEEEproof}
Since $\mathscr{M}$ is a bijective function, we have the following relationship
\begin{align}
\mathbf{x}_j \xLeftarrow{\mathscr{N}_j} \mathbf{w}\xleftrightarrows{\mathscr{M}}{\mathscr{M}^{-1}} \mathbf{s},
\end{align}
where $\xLeftarrow{}$ and $\xleftrightarrows{}{}$ represent surjective and bijective relationships, respectively. $\mathbf{s}\triangleq [s_1,\cdots,s_u]=[\mathscr{M}(\mathbf{w}_1), \cdots, \mathscr{M}(\mathbf{w}_u)]$ stands for the set that contains the modulated symbols at all MTs. Following (\ref{equ:SIC.Definition}), for each element in $\mathbf{s}$, there exists a superimposed constellation point $s_{j,\bigtriangleup}$ at a given channel coefficient vector $\mathbf{h}_j$, and this proves Theorem 1.
\end{IEEEproof}
We call $\mathrm{\Theta}$ the PNC mapping function, which maps a superimposed constellation point to an NCV and plays the key role in PNC encoding, where the PNC encoding estimates the possible NCV outcomes $\mathbf{x}_j$ for the $j^{\mathrm{th}}$ AP based on the received signal $y_j$. Let $\mathbf{X}_j$ denote the vector-valued random variable with realization $\mathbf{x}_j$. The \textit{a posteriori} probability of the event $\mathbf{X}_j = \mathbf{x}_j$ conditioned on the MAC output $Y_j=y_j$ is
\begin{align}
&\mathrm{Pr}(\mathbf{X}_j = \mathbf{x}_j|y_j,\mathbf{h}_j) \notag \\
=&\frac{\mathrm{Pr}(Y_j|\mathbf{X}_j=\mathbf{x}_j,\mathbf{h}_j)\mathrm{Pr}(\mathbf{X}_j=\mathbf{x}_j)}{\mathrm{Pr}(Y_j=y_j)} \notag \\
=&\frac{\sum\limits_{\forall\mathbf{w}:\mathscr{N}_j(\mathbf{M}_j,\mathbf{w})=\mathbf{x}_j}\mathrm{Pr}(Y_j|\mathbf{w},\mathbf{h}_j)\mathrm{Pr}(\mathbf{w})}{\mathrm{Pr}(Y_j=y_j)} \notag \\
=&\frac{\sum\limits_{\forall\mathbf{s}:\Theta(\mathbf{s}_{j,\bigtriangleup})=\mathbf{x}_j}\mathrm{Pr}(Y_j|\mathbf{S}_{j,\bigtriangleup}=\mathbf{s}_{j,\bigtriangleup})\mathrm{Pr}(\mathbf{S}=\mathbf{s}) }
{\mathrm{Pr}(Y_j=y_j)}. \label{equ:PostProb.bg}
\end{align}
The conditional probability density function is given by
\begin{align}
\mathrm{Pr}(Y_j|\mathbf{S}_{j,\bigtriangleup}=\mathbf{s}_{j,\bigtriangleup}) = \frac{1}{\sqrt{2\pi\sigma^2}}\mathrm{exp}\left(-\frac{|y_j-\mathbf{s}_{j,\bigtriangleup}|^2}{2\sigma^2} \right).
\end{align}
The \textit{a posteriori} L-value $L_{\mathbf{x}_j}$ for the event $\mathbf{X}_j = \mathbf{x}_j$ is
\begin{align}\label{equ:PostProb.ed}
L_{\mathbf{x}_j} = \log\left(\frac{\sum\limits_{\forall\mathbf{s}:\Theta(\mathbf{s}_{j,\bigtriangleup})=\mathbf{x}_j}\mathrm{Pr}(Y_j|\mathbf{S}_{j,\bigtriangleup}=\mathbf{s}_{j,\bigtriangleup})\mathrm{Pr}(\mathbf{S}=\mathbf{s}) }{\sum\limits_{\forall\mathbf{s}:\Theta(\mathbf{s}_{j,\bigtriangleup})=\mathbf{0}}\mathrm{Pr}(Y_j|\mathbf{S}_{j,\bigtriangleup}=\mathbf{s}_{j,\bigtriangleup})\mathrm{Pr}(\mathbf{S}=\mathbf{s}) } \right),
\end{align}
where $\mathbf{0}$ is a length-$t$ all-zero vector over $\mathbb{F}_2^{t \times 1}$.
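A brute-force Python sketch of the marginalization in (\ref{equ:PostProb.bg})--(\ref{equ:PostProb.ed}), assuming uniform priors (the normalising constant cancels in the L-value) and with names and interfaces of our own choosing, is:
\begin{verbatim}
import numpy as np

def ncv_likelihoods(y, h, M, constellation, bits, sigma2):
    """Sum Pr(y | superimposed point) over all joint messages that map
    to each NCV. bits[k] is the m-bit 0/1 label of constellation[k]."""
    u, acc = len(h), {}
    for idx in np.ndindex(*([len(constellation)] * u)):
        s_sup = sum(h[l] * constellation[idx[l]] for l in range(u))
        w = np.concatenate([bits[k] for k in idx])   # stacked source bits
        x = tuple((M @ w) % 2)                       # NCV label
        acc[x] = acc.get(x, 0.0) + np.exp(-abs(y - s_sup)**2 / (2*sigma2))
    return acc

def llr(acc, x):
    """A posteriori L-value of X = x against the all-zero NCV."""
    return np.log(acc[tuple(x)] / acc[(0,) * len(x)])
\end{verbatim}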
\subsection{Resolving the Singular Fading}
We have set up the PNC mapping approach in binary systems, which establishes the fundamental PNC system structure for practical engineering application. The next problem lies in how to resolve singular fading in the multiple access stage. In this section, we demonstrate that the PNC mapping function $\mathrm{\Theta}$ proposed above is capable of resolving all singular fade states with a simple design approach. We first define the singular fade states as follows.
\textit{Definition 3:} The singular fade state (SFS) at the $j^{\mathrm{th}}$ AP is defined as the channel fading coefficients $\mathbf{h}_j$ which makes $s_{j,\bigtriangleup}^{(\tau)} = s_{j,\bigtriangleup}^{(\tau^{\prime})}$ when $\tau\neq\tau^{\prime}$. $\square$
In other words, for a given channel coefficient vector $\mathbf{h}_j$, if two or more elements in the set $\mathbf{s}_{j,\bigtriangleup}$ are the same, $\mathbf{h}_j$ is an SFS. Normally, singular fading is unavoidable in the MAC stage, and multiuser detection is in principle infeasible if the $j^{\mathrm{th}}$ AP attempts to decode all source messages. PNC is capable of overcoming the SFS problem when the coincident superimposed constellation points are well labelled by the NCV $\mathbf{x}_j$, which helps the CPU to recover all source messages.
\textit{Definition 4:} If a set of constellation points received at APs are mapped to the same NCV, we call this set a \emph{cluster}, denoted as
\begin{align}
\mathbf{c}^{(\tau)} \triangleq [s^{(\tau_1)}_{j,\bigtriangleup}, s^{(\tau_{2})}_{j,\bigtriangleup}, \cdots],
\end{align}
where $s^{(\tau_i)}_{j,\bigtriangleup}$ denotes the $i^{\mathrm{th}}$ cluster member. In a singular fading, if the values of the cluster members are the same, then this cluster is called a \emph{clash}. $\square$
\textit{Definition 5:} Given two clusters $\mathbf{c}^{(\tau)}$ and $\mathbf{c}^{(\tau^\prime)}$ that the constellation points in which are mapped to different NCVs, then the minimum inter-cluster distance, also known as the minimum distance between these two different NCVs, is defined as
\begin{align}
d_{\mathrm{min}} = &\min_{\Theta(s_{j,\bigtriangleup}^{(\tau_i)})\neq \Theta(s_{j,\bigtriangleup}^{(\tau^{\prime}_k)}) } |s_{j,\bigtriangleup}^{(\tau_i)} - s_{j,\bigtriangleup}^{(\tau^{\prime}_k)}|^2, \\
\forall s^{(\tau_i)}_{j,\bigtriangleup} \in \mathbf{c}^{(\tau)}, &~\forall s^{(\tau^\prime_k)}_{j,\bigtriangleup} \in \mathbf{c}^{(\tau^\prime)}, ~i=1,2,..., ~k=1,2,... \notag \label{dmint}
\end{align}
\begin{theorem}
The PNC mapping function $\mathrm{\Theta}$ cannot resolve singular fading if the minimum inter-cluster distance $d_{\mathrm{min}} = 0$.
\end{theorem}
\begin{IEEEproof}
When $d_{\mathrm{min}} = 0$, the posterior probability of some outcomes of $\mathbf{X}_j$ will be very similar (in terms of (\ref{equ:PostProb.bg})). This definitely introduces the ambiguities in estimating the real NCV $\mathbf{x}_j$, especially when a superimposed constellation point labelled by one NCV is close to another point that is labelled by another NCV. Hence the singular PNC mapping function is in principle not capable of decoding the NCV reliably.
\end{IEEEproof}
Normally the dimension of the NCV $\mathbf{x}_j$ at the $j^{\mathrm{th}}$ AP is $t \times 1$, $m \leq t \leq mu$. When the number of sources increases (a large MAC), the singular fading problem becomes more severe and the method of calculating the SFS values changes. However, by simply increasing the dimension $t$ of the NCV (thus increasing the number of rows of $\mathbf{M}_j$), there always exists a non-singular PNC function capable of resolving a given kind of SFS.
\begin{remark} \label{remark:t}
We can obtain non-singular PNC mapping function $\mathrm{\Theta}_j$ for the $j^{\mathrm{th}}$ AP if the cardinality $t$ of the PNC encoding outcomes are determined in terms of the following criterion
\begin{align}
t = \argmin_{m\leq t<mu} \left\{d_{\mathrm{min}} - d_{\alpha} \geq 0\right\},
\end{align}
where $d_{\alpha}>0$ is a distance threshold.
\end{remark}
Remark \ref{remark:t} reveals the second design criterion for PNC mapping function $\mathrm{\Theta}_j$ over a $u$-MT and $2^m$-ary digital modulation MAC, which guarantees the reliable PNC encoding with the minimum possible cardinality expansion.
\subsection{Algebraic Work for Unambiguous Decodability}
We have set up two design guidelines for an engineering-applicable PNC approach in uplink scenarios. The next criterion is that the CPU must be able to unambiguously recover all source messages. We need to carefully design each $\mathbf{M}_j$, $j=1,2,\cdots,n$, so that $\mathbf{M} = [\mathbf{M}_1,\cdots,\mathbf{M}_n]^T$ includes a set of row coefficients which satisfies the following theorem:
\begin{theorem} \label{theorem:full.rank}
Assume $\mathbf{M}=M_{n\times n}(R)$, where the coefficients are from a commutative ring $R$. Source messages are drawn from a subset of $R$ and all source messages can be unambiguously decoded at the destination if and only if the determinant of the transfer matrix is a unit in $R$,
\begin{align}
\mathrm{det}(\mathbf{M}) = \mathcal{U}(R). \label{equ:det.unit}
\end{align}
\end{theorem}
\begin{IEEEproof} We first prove that (\ref{equ:det.unit}) gives the sufficient and necessary condition for a matrix $\mathbf{B}$ to be invertible over $R$. Suppose $\mathbf{B}$ is invertible, then there exists a matrix $\mathbf{C}\in M_{n\times n}(R)$ such that $\mathbf{B}\mathbf{C} = \mathbf{C}\mathbf{B} = \mathbf{I}_n$. This implies $1 = \mathrm{det}(\mathbf{I}_n) = \mathrm{det}(\mathbf{B}\mathbf{C}) = \mathrm{det}(\mathbf{B})\mathrm{det}(\mathbf{C})$. According to the definition of a unit, we say $\mathrm{det}(\mathbf{B})\in \mathcal{U}(R)$.
We know $\mathbf{B}\cdot \mathrm{adj}(\mathbf{B}) = \mathrm{adj}(\mathbf{B})\cdot \mathbf{B} = \mathrm{det}(\mathbf{B})\mathbf{I}_n$. If $\mathrm{det}(\mathbf{B})\in \mathcal{U}(R)$, we have
\begin{align}
\mathbf{B} \cdot (\mathrm{det}(\mathbf{B})^{-1}\mathrm{adj}(\mathbf{B}) ) &= (\mathrm{det}(\mathbf{B})^{-1}\mathrm{adj}(\mathbf{B}) ) \mathbf{B} \notag\\
&= \mathrm{det}(\mathbf{B})^{-1} \mathrm{det}(\mathbf{B})\,\mathbf{I}_n = \mathbf{I}_n.
\end{align}
Hence, $\mathbf{C} = (\mathrm{det}(\mathbf{B})^{-1}\mathrm{adj}(\mathbf{B}) )$ is the inverse of $\mathbf{B}$ since $\mathbf{B}\mathbf{C} = \mathbf{C}\mathbf{B} = \mathbf{I}_n$.
If $\mathbf{B}$ is invertible, then its inverse $\mathbf{B}^{-1}$ is uniquely determined. Assuming $\mathbf{B}$ has two inverses, say, $\mathbf{C}$ and $\mathbf{C}^{\prime}$, then
\begin{align}
\mathbf{B}\cdot \mathbf{C} &= \mathbf{C}\cdot \mathbf{B} = \mathbf{I}_n, \\
\mathbf{B}\cdot \mathbf{C}^{\prime} &= \mathbf{C}^{\prime}\cdot \mathbf{B} = \mathbf{I}_n,
\end{align}
hence we have
\begin{align}
\mathbf{C} = \mathbf{C}\cdot \mathbf{I}_n = \mathbf{C}\cdot \mathbf{B}\cdot \mathbf{C}^{\prime} = \mathbf{I}_n\cdot \mathbf{C}^{\prime} = \mathbf{C}^{\prime}.
\end{align}
It proves the uniqueness of the invertible matrix $\mathbf{B}$ over $R$.
Assume $\mathbf{a}\neq \mathbf{a^{\prime}}$, $\mathbf{B}\cdot \mathbf{a} = \mathbf{F}$, $\mathbf{B}\cdot \mathbf{a}^{\prime} = \mathbf{F^{\prime}}$, and $\mathbf{F} = \mathbf{F^{\prime}}$. This means
\begin{align}
\mathbf{a} = \mathbf{B}^{-1}\cdot \mathbf{F} = \mathbf{B}^{-1}\cdot \mathbf{F^{\prime}} = \mathbf{a^{\prime}}.
\end{align}
This contradicts $\mathbf{a}\neq \mathbf{a^{\prime}}$. Hence it ensures unambiguous decodability:
\begin{equation}
\mathbf{B}\cdot \mathbf{a} \neq \mathbf{B}\cdot \mathbf{a^{\prime}}, ~ \forall\mathbf{a}\neq \mathbf{a^{\prime}}.
\end{equation}
\end{IEEEproof}
\textit{Definition 6:} The ideal in $R$ generated by all $\nu\times\nu$ minors of $M_{m\times n}(R)$ is denoted by $I_{\nu}(M_{m\times n}(R))$, where $\nu=1,2,\cdots,r=\min\{m,n\}$. $\square$
A $\nu\times \nu$ minor of $M_{m\times n}(R)$ is the determinant of a $\nu\times\nu$ matrix obtained by deleting $m-\nu$ rows and $n-\nu$ columns. Hence there are $\binom{m}{\nu}\binom{n}{\nu}$ minors of size $\nu\times\nu$. $I_{\nu}(M_{m\times n}(R))$ is the ideal of $R$ generated by all these minors.
\textit{Design Criterion}: The destination is able to unambiguously decode $u$ source messages if:
\begin{enumerate}
\item $\max\left\{\nu\mid \mathrm{Ann}_R(I_{\nu}(\mathbf{M}_j)) = \langle 0 \rangle \right\}\geq u$, $\forall j=1,2,\cdots,n$,
\item $\mathbf{M}_j = \arg\max\limits_{\mathbf{M}_j} \left\{ I\left( \overrightarrow{Y}; \overrightarrow{F}_j \right) \right\}$,
\end{enumerate}
where $\langle x\rangle$ denotes the ideal generated by $x$.
Condition 1 can be proved as follows. According to Laplace's theorem, every $(\nu+1)\times (\nu+1)$ minor of $M_{m\times n}(R)$ must lie in $I_{\nu}(M_{m\times n}(R))$. This suggests an ascending chain of ideals in $R$:
\begin{align}
\langle 0 \rangle = I_{r+1}(\mathbf{M}_j)&\subseteq I_{r}(\mathbf{M}_j)\subseteq\cdots\subseteq I_{1}(\mathbf{M}_j)\notag\\
&\subseteq I_{0}(\mathbf{M}_j) =R. \label{equ:chain}
\end{align}
Computing the annihilator of each ideal in (\ref{equ:chain}) produces another ascending chain of ideals,
\begin{align}
\langle 0 \rangle &=\mathrm{Ann}_R(R)\subseteq\mathrm{Ann}_R(I_{1}(\mathbf{M}_j))\subseteq\cdots \notag\\
&\subseteq\mathrm{Ann}_R(I_{r}(\mathbf{M}_j))\subseteq\mathrm{Ann}_R(\langle 0 \rangle ) =R.
\end{align}
It is obvious that:
\begin{align}
&\mathrm{Ann}_R(I_{k}(\mathbf{M}_j))\neq \langle 0 \rangle \notag \\
\Rightarrow & \mathrm{Ann}_R(I_{k^\prime}(\mathbf{M}_j))\neq \langle 0 \rangle, ~~\forall k\leq k^\prime.
\end{align}
The maximum value of $\nu$ which satisfies $\mathrm{Ann}_R(I_{\nu}(\mathbf{M}_j))= \langle 0 \rangle$ guarantees that $I_{k}(\mathbf{M}_j)\in R$, $\forall k<\nu$. Hence we define the rank of $\mathbf{M}_j$ as $\mathrm{rk}(\mathbf{M}_j)=\max\left\{\nu\mid \mathrm{Ann}_R(I_{\nu}(\mathbf{M}_j)) = \langle 0 \rangle \right\}$. Suppose that $\mathbf{M}_k\in M_{m\times p}(R)$ and $\mathbf{M}_{k^\prime}\in M_{p\times n}(R)$, then $\mathrm{rk}(\mathbf{M}_k\mathbf{M}_{k^\prime})\leq \min\{\mathrm{rk}(\mathbf{M}_k),\mathrm{rk}(\mathbf{M}_{k^\prime})\}$, and we can easily prove that $0\leq \mathrm{rk}(M_{m\times n}(R))\leq \min\{m,n\}$. Thus, in order to guarantee there are at least $u$ unambiguous linear equations available at the CPU, $\mathrm{rk}(\mathbf{M}_j)$ must be at least $u$, $\forall j=1,2,\cdots,n$.
The special case of condition $1$ is that the entries of the coefficient matrix $\mathbf{M}_j\in M_{m\times n}(\mathbb{F})$ are from a finite field $\mathbb{F}$. Then condition $1$ of the above \textit{Design Criterion} may be restated as \textquotedblleft the maximum number of linearly independent rows (or columns)\textquotedblright~ since $\mathrm{Ann}_R(I_{\nu}(\mathbf{M}_j))= \langle 0 \rangle$ if and only if $I_{\nu}(\mathbf{M}_j)\neq 0$. In other words, the largest $\nu$ such that some $\nu\times \nu$ minor of $\mathbf{M}_j$ is a non-zero divisor represents how many reliable linear combinations the $j^{\mathrm{th}}$ layer may produce. Hence condition $1$ is a strict definition which ensures unambiguous decodability of the $u$ sources. Condition $2$ ensures that the selected coefficient matrix maximises the mutual information of the particular layer, finally giving the maximum overall throughput.
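In the binary case used throughout this paper, both the rank condition and the invertibility of the composed global matrix (Theorem \ref{theorem:full.rank}) reduce to rank computations over $\mathbb{F}_2$; a simple Python sketch of such a check (our own, by Gaussian elimination with XOR) is:
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over F_2."""
    A = np.array(M, dtype=np.uint8) % 2
    rows, cols = A.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # eliminate column c
        rank += 1
        if rank == rows:
            break
    return rank

def unambiguously_decodable(G):
    """G invertible over F_2 iff square and full rank, i.e. det(G) = 1."""
    G = np.asarray(G)
    return G.shape[0] == G.shape[1] and gf2_rank(G) == G.shape[0]
\end{verbatim}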
\section{Binary Matrix Adaptive Selection Algorithm Design}
\label{sec:CombInfo}
According to the design criteria proposed in the previous section, given a QAM modulation scheme, the optimal binary PNC mapping function has the following properties:
\begin{enumerate}
\item it maximises the minimum distance between different NCVs;
\item the composited global mapping matrix is invertible.
\end{enumerate}
In order to achieve these properties and ensure applicability in practical N-MIMO systems, we propose a binary matrix adaptive selection (BMAS) algorithm based on the design criteria introduced in Section III. The BMAS algorithm is divided into two stages. The first is called the Off-line search, in which an exhaustive search is implemented among all $m \times mu$ binary mapping matrices to find a set of candidate matrices which resolve all SFSs with property 1) above. The second is called the On-line search, in which a selection among the candidate matrices is executed in order to obtain an invertible global mapping matrix according to property 2). The computational complexity of the proposed algorithms is mainly caused by the Off-line search, especially for higher order modulation schemes, due to the increased number of SFSs and matrices; this Off-line search only needs to be performed once for each modulation scheme. The candidate mapping matrices found in the Off-line search are stored at the APs and the CPU in order to implement the On-line search during real-time transmission.
\subsection{Off-Line Search Algorithm}
Define a set $\mathbf{W}_{joint} \triangleq [\mathbf{w}_{jo_1}, \mathbf{w}_{jo_2}, \cdots,\mathbf{w}_{jo_N}]$ which contains all possible binary joint message combinations with $N=2^{um}$, so that each $\mathbf{w}_{jo_i}$ in this set stands for a $1 \times mu$ binary joint message vector from $u$ MTs, for $i=1,2,\cdots,N$. By applying a modulation scheme $\mathscr{M}$ over each $\mathbf{w}_{jo_i}$ in $\mathbf{W}_{joint}$, a joint modulation set $\mathbf{S}_{joint} \triangleq [\mathbf{s}_{jo_1}, \mathbf{s}_{jo_2}, \cdots,\mathbf{s}_{jo_N}]^{T}$ is obtained, where $\mathbf{s}_{jo_i}=[s^{(jo_i)}_1 ~s^{(jo_i)}_2 \cdots ~s^{(jo_i)}_u]$ stands for the $i^\mathrm{th}$ combination of $u$ modulated symbols and $s^{(jo_i)}_\ell \in \Omega$ for $\ell = 1, \cdots, u$. The next step is to calculate the NCS $s^{(q)}_{n,\bigtriangleup}$ and its corresponding NCV $\mathbf{x}^{(q)}_{i,n}$ under all $L$ SFS circumstances, mathematically given by
\begin{align}\label{eq:NCScwd}
& ~~~s^{(q)}_{n,\bigtriangleup} = \mathbf{h_v}^{(q)}_{SFS}\mathbf{s}_{jo_n}^T,
~\mathbf{x}^{(q)}_{i,n} = \mathbf{M}_i \otimes \mathbf{w}^{(q)}_{jo_n}, \\
& n=1,\cdots,N, ~q=1,\cdots,L, ~i=1,\cdots, N^2, \notag
\end{align}
where $\mathbf{h_v}^{(q)}_{SFS}$ denotes a channel coefficient vector that causes an SFS. Due to the property of an SFS, the same $s^{(q)}_{n,\bigtriangleup}$ may be obtained for different $\mathbf{s}_{jo_n}$ sets in a clash. In that case, these joint symbol sets should be encoded to the same NCV according to the unambiguous decodability theorem. The next step is to store the mapping matrices which can resolve one SFS and are also likely to form an invertible global mapping matrix when combined with the candidate matrices selected for other SFSs. A detailed description of the Off-line search is given in Algorithms \ref{Alg:offline.p1} and \ref{Alg:offline.p2} in Appendix \ref{ap:offline}.
\subsection{On-Line Search Algorithm}
The proposed Off-line search is implemented before transmission to reduce the number of mapping matrices used in the On-line search. During real-time transmission, the proposed On-line search, which follows the same steps as Algorithm \ref{Alg:offline.p2} but with a much smaller value of $K$, is applied at the CPU to select the optimal mapping matrix for each AP.
When the optimal mapping matrix is selected, the indexes of the selected mapping matrices are sent back to each AP through the backhaul channel, and at each AP an estimator calculates the conditional probability of each possible NCV given the optimal mapping function. The estimator returns the log-likelihood ratio (LLR) of each bit of $\mathbf{x}_j$, which is then applied to a soft decision decoder. Note that the LLR algorithm does not require detection of the individual symbols transmitted from each MT, but only of a linear combination of the binary messages. Finally the NCV at each AP is forwarded to the CPU, and the original data from all MTs can be recovered by multiplying by the inverse of the global binary PNC mapping matrix.
\section{Analysis and Discussion}
\label{sec:PrinSFS}
In this section, we discuss how to apply the proposed BMAS algorithm to a general N-MIMO network with multiple MTs and APs, including the use of a reduced number of SFSs and a discussion of resolving the SFS problem with more than $2$ MTs. Based on a study of the properties of the SFSs, a regulated On-line search algorithm built on a lookup table mechanism, with a small performance degradation, is proposed in this section in order to fulfil the low latency requirement of 5G RANs.
\subsection{Image SFSs and Principal SFSs}
Since an exhaustive search is carried out among all $t \times mu$ binary matrices in the proposed Off-line search algorithm, the computational complexity increases with the number of SFSs as well as with the value of $um$, which grows for higher order modulation schemes and larger numbers of MTs. For example, in the $2$-MT and $2$-AP case, the number of SFSs that need to be resolved at each AP is $L=13$ for $4$QAM and $L=389$ for $16$QAM. Thus for the $4$QAM case, at least $13$ binary matrices, each of size $2 \times 4$, should be stored at each AP for the On-line search. When the $16$QAM scheme is employed at each MT, at least $389$ binary matrices of size $4 \times 8$ need to be stored, which results in a hugely increased number of candidates in real-time computation.
In the proposed BMAS algorithm, we resolve this problem by keeping the number of useful SFSs to a minimum. In our study of the NCV calculation expressed in (\ref{eq:NCScwd}), we found that different SFSs may generate the same clashes, which can then be resolved by the same binary matrices. We define such SFSs as image SFSs (iSFSs) and keep only one of them in the proposed search algorithm.
However, this problem still remains when higher order modulation schemes are employed at the MTs. In addition, due to the larger constellation of a higher order QAM scheme, a few SFSs cannot be resolved by a binary mapping matrix. In order to address these problems, we focus on the occurrence probability of each SFS in higher order modulation schemes and notice that not all SFSs occur frequently, so that we can ignore the \textquotedblleft nonactive\textquotedblright~SFSs with low appearance probabilities to minimise the number of mapping matrices utilised in the proposed On-line search. We define the SFSs with high appearance probabilities as principal SFSs (pSFSs), and a trade-off between the performance degradation and the number of pSFSs used in the Off-line search is illustrated in the next section.
\subsection{Calculation of Singular Fade States}
We first illustrate how to determine singular fading for a QAM modulation scheme in a simple network with $u=2$ MTs, and discuss the SFS calculation issue in networks with more than $2$ MTs later. Following Definition $3$, given a QAM modulation scheme, singular fading occurs when $s_{j,\bigtriangleup}^{(\tau)} = s_{j,\bigtriangleup}^{(\tau^{\prime})}$ for $\tau\neq\tau^{\prime}$ in the constellation. Mathematically, an SFS can then be derived as
\begin{align}
& s_{1,\bigtriangleup}^{(\tau)} = s_{1,\bigtriangleup}^{(\tau^{\prime})},
~\mathbf{h}\mathbf{s}^{(\tau)} = \mathbf{h}\mathbf{s}^{(\tau^\prime)}, ~\text{for}~ \mathbf{s}^{(\tau)} \neq \mathbf{s}^{(\tau^\prime)}, \notag \\
& h_{1,1}s^{(\tau)}_1+h_{1,2}s^{(\tau)}_2 = h_{1,1}s^{(\tau^{\prime})}_1+h_{1,2}s^{(\tau^{\prime})}_2, \notag
\end{align}
where $h_{j,\ell}$ denotes the channel coefficient between the $\ell^{\mathrm{th}}$ MT and the $j^{\mathrm{th}}$ AP, and $\mathbf{s}^{(\tau)}\triangleq[s^{(\tau)}_1 ~s^{(\tau)}_2]$ refers to a joint symbol set containing the modulated symbols at both MTs; $\mathbf{s}^{(\tau)} \neq \mathbf{s}^{(\tau^\prime)}$ means at least one symbol differs between $\mathbf{s}^{(\tau)}$ and $\mathbf{s}^{(\tau^\prime)}$. We then define $\mathbf{v}_{SFS}=[v^{(1)}_{SFS}, v^{(2)}_{SFS}, \cdots , v^{(L)}_{SFS}]$ as the set containing all unique SFS values, calculated by
\begin{align}\label{eq:SFSt}
v^{(q)}_{SFS} = \frac{h_{1,2}}{h_{1,1}} = \frac{s^{(\tau)}_1-s^{(\tau^{\prime})}_1}{s^{(\tau^{\prime})}_2-s^{(\tau)}_2}, ~~\forall s^{(\tau)}_l,s^{(\tau^{\prime})}_l \in \Omega.
\end{align}
By substituting all possible combinations of the QAM modulated symbols into (\ref{eq:SFSt}), we can find all SFS values for this QAM scheme when $u=2$.
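A brute-force Python sketch of this enumeration (our own construction; the constellation is square QAM with unit average energy, and the rounding merges numerically identical ratios) is:
\begin{verbatim}
import numpy as np
from itertools import product

def qam_constellation(m):
    """Square 2^m-QAM with unit average symbol energy (labeling ignored)."""
    side = 2 ** (m // 2)
    pam = np.arange(-(side - 1), side, 2)
    pts = np.array([re + 1j * im for re in pam for im in pam])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def singular_fade_states(m, decimals=9):
    """Distinct ratios h_{1,2}/h_{1,1} from the SFS equation above."""
    omega = qam_constellation(m)
    values = set()
    for s1, s1p, s2, s2p in product(omega, repeat=4):
        if s2p != s2:                     # denominator must be non-zero
            v = (s1 - s1p) / (s2p - s2)
            values.add((round(v.real, decimals), round(v.imag, decimals)))
    return sorted(values)

print(len(singular_fade_states(2)))       # -> 13 for 4QAM, matching L above
\end{verbatim}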
In the multiple-MT ($u>2$) case, (\ref{eq:SFSt}) is no longer suitable for representing the SFS values due to the increased number of MTs in the MAC stage. In this case, the relationship between the SFS values and the channel coefficients can no longer be expressed as a simple ratio between different channel coefficients; e.g. the SFSs form surfaces with infinitely many values in the $3$-MT case. This is still an open issue in the literature on PNC design, and we give a potential solution here. One solution is to utilise clashes instead of calculating the SFS values in the proposed algorithm. In the multiple-MT case, an SFS still causes clashes, and different clashes can always be found according to Definition $5$; an optimal binary matrix is then one that maps the superimposed symbols within a clash to the same NCV while keeping the value of $d_{min}$ maximised, without any SFS calculation.
Another solution to this issue is to divide the whole network into multiple $2$-MT subnetworks. One way to achieve this is by allocating different pairs of MTs to different frequencies or time slots. In this case, the superimposed symbol at an AP always comes from two MTs, and the SFSs can then be calculated by (\ref{eq:SFSt}). The only issue with this approach is that multiple On-line search algorithms for the different MT pairs have to be implemented, which may cause extra computational complexity and latency. An alternative way is to consider two MTs with similar channel strengths and transmit powers as the prime MT pair and treat the other received signals as additional noise. According to our research in \cite{Acs}, when a strong signal with much higher energy compared to the rest of the received signals is received at an AP in the multiple access stage, it is difficult to find an optimal matrix achieving unambiguous recovery due to the high interference from this strong signal \cite{Acs}. Thus the $2$ MTs whose received signals arrive at similar energy levels in the multiple access stage can be paired to form a subnetwork for PNC encoding, and by pairing different MTs and APs, the multiple-MT-multiple-AP case is replaced by multiple $2$-MT-$2$-AP cases.
\subsection{Regulated BMAS Search Algorithm}
In order to fulfil the low latency requirement in some scenarios, we present a regulated BMAS (R-BMAS) approach with a lookup table mechanism in this subsection. According to the definition of a clash, the superimposed symbols in a clash have an intra-cluster distance of `$0$', and by the calculation in (\ref{eq:SFSt}), the clash groups in an SFS are mainly determined by the absolute value and the angle of the ratio of the two channel coefficients. Following the design rules in the proposed algorithm, we have:
\begin{theorem} \label{theorem:clsh}
The mapping matrix which resolves an SFS can always resolve the non-singular fade states with the values close to this SFS.
\end{theorem}
\begin{IEEEproof}
When a non-singular fade state (nSFS) occurs, the different superimposed symbols received at an AP do not coincide, which means no clash is observed. When this nSFS has an absolute value and rotation angle similar to those of an SFS, the superimposed symbols which form a clash in that SFS form a cluster in this nSFS, with intra-cluster distances smaller than the inter-cluster distances to other clusters. In this case, the mapping matrices that are capable of resolving the SFS, by mapping the coincident superimposed symbols in a clash to the same NCV and keeping different NCVs as far apart as possible, achieve the maximum $d_{\mathrm{min}}$ in this nSFS by mapping the superimposed symbols in the cluster to the same NCV.
\end{IEEEproof}
An example of this theorem is given in Appendix \ref{ap:rBMAS}. According to Theorem 4, the proposed On-line search approach can be replaced by a lookup-table-based mechanism for selecting the optimal mapping matrices. A table containing all SFS combinations and their corresponding invertible $mu \times mu$ optimal binary mapping matrices can be established in the Off-line search. During the real-time transmission, when the channel coefficients are estimated at the $j^{\mathrm{th}}$ AP, the value of the fade state $v_{FS_j}$ is calculated by
\begin{align} \label{eq:vFS}
v_{FS_j} = h_{j , 2} / h_{j , 1},
\end{align}
and then the index $q_j$ of the closest SFS to this nSFS is obtained by
\begin{align}\label{eq:dFS}
q_j = \arg\min_{q \in \{1, \cdots, L\}} \left|v_{FS_j} - v^{(q)}_{SFS}\right|^2, \quad j = 1, \cdots, n.
\end{align}
The index $q_j$ is then forwarded to the CPU. By checking the table, the CPU sends the optimal mapping matrix index back to the APs for PNC encoding. The procedure is summarised in Algorithm \ref{Alg:regserch}.
\begin{algorithm}[ht]
\caption{Regulated Binary Matrices Adaptive Selection (R-BMAS) Algorithm}
\label{Alg:regserch}
\begin{algorithmic}[1]
\Statex
\textbf{Off-line Search}
\For {$i=1:L$} \Comment{each SFS}
\State {Apply Algorithm \ref{Alg:offline.p1} and \ref{Alg:offline.p2} for $\mathbf{M}_i$}
\EndFor
\For {$i_1=1:L$} \Comment{all SFS combinations}
\State {\vdots}
\For {$i_n=1:L$} \Comment{all $n$ APs}
\State {$\mathbf{G} = \begin{bmatrix} \mathbf{M}_{i_1} \\
\vdots \\
\mathbf{M}_{i_n}
\end{bmatrix}$}
\State $\delta \leftarrow\mathrm{det}(\mathbf{G})_{|\mathbb{F}_2} $ \Comment{determinant over $\mathbb{F}_2$.}
\If{$\delta = 1$}
\State $\mathcal{G}_l\leftarrow \mathcal{G}_l\cup\mathbf{G}$
\State {Add $l$ as the optimal mapping matrix for SFS combination [$i_1\cdots i_n$]}
\EndIf
\EndFor
\State {\vdots}
\EndFor
\Statex
\textbf{On-line Search}
\For {$j=1:n$} \Comment{each AP}
\State {$v_{FS_j} = h_{j , 2} / h_{j , 1}$} \Comment{fade state}
\For {$i=1:L$}
\State {$\mathbf{d}_{SF_j}(i) = |v_{FS_j} - v^{(i)}_{SFS}|^2$}
\EndFor
\State { $[d^{(k_j)}, k_j] = \min\mathbf{d}_{SF_j}$} \Comment{$k_j$: closest SFS index}
\EndFor
\State {Forward [$k_1\cdots k_n$] to CPU }
\State {Look up the table to obtain the optimal mapping matrix index $l$}
\State {Send the index back to APs}
\end{algorithmic}
\end{algorithm}
As shown in Algorithm \ref{Alg:regserch}, most of the calculations for mapping selection are performed before transmission. At the same time, the latency is reduced by applying the regulated On-line search in Algorithm \ref{Alg:regserch} instead of Algorithm \ref{Alg:offline.p2}. A disadvantage of the R-BMAS algorithm, however, is the performance degradation caused by the sub-optimal global mapping matrices that must be stored for some SFS combinations in order to guarantee unambiguous recovery at the CPU. To overcome this problem, a combination of the two proposed algorithms can be used: during the Off-line search, a request to run the BMAS algorithm can be stored in the R-BMAS table for those SFS combinations that would otherwise be resolved by a sub-optimal global mapping matrix. In this way the real-time computational complexity and latency are reduced while the backhaul traffic remains restricted to the total user data rate.
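As an illustration of the On-line part, a minimal Python sketch (ours; the SFS values and table contents are placeholders) computes the fade state at each AP, finds the closest SFS as in (\ref{eq:dFS}), and uses the resulting index pair as the key into the Off-line table:
\begin{verbatim}
import numpy as np

def closest_sfs(h_j1, h_j2, sfs_values):
    """Index of the SFS closest to the current fade state at one AP."""
    v_fs = h_j2 / h_j1                          # fade state, cf. (eq:vFS)
    return int(np.argmin(np.abs(v_fs - sfs_values) ** 2))

# Assumed 4QAM pSFS values (illustrative only) and a toy Off-line table
# mapping SFS-index pairs to a global matrix index l.
sfs_values = np.array([1.0, -1.0, 1j, -1j, 0.5 + 0.5j])
table = {(a, b): l for l, (a, b) in
         enumerate((a, b) for a in range(5) for b in range(5))}

k1 = closest_sfs(0.9 + 0.1j, 0.8 + 0.9j, sfs_values)   # AP 1
k2 = closest_sfs(0.7 - 0.2j, -0.6 + 0.1j, sfs_values)  # AP 2
l = table[(k1, k2)]   # the CPU returns index l to the APs
\end{verbatim}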
\subsection{Computational Complexity and Backhaul Load}
In this subsection we investigate the computational complexity of ideal CoMP, non-ideal CoMP, and the proposed algorithms in order to illustrate the advantages of the proposed approach.
In ideal CoMP, the bandwidth of the backhaul network is assumed to be unlimited, so the received signal $y_j$ in (\ref{equ:channel}) at each AP is forwarded to the CPU for joint multiuser ML detection, given by
\begin{align}
\mathbf{\hat{s}} = \arg\min\limits_{\mathbf{s}\in \Omega^u} \Vert \mathbf{y}-\mathbf{H}\mathbf{s}\Vert,
\end{align}
where $\mathbf{s}$ denotes the $u \times 1$ symbol vector at the MTs, $\mathbf{\hat{s}}$ is its estimate, and $\mathbf{H}$ stands for the $n \times u$ channel matrix.
In non-ideal CoMP, the backhaul network is bandwidth-limited and each AP employs an LLR-based multiuser detection algorithm to estimate the transmitted symbols from each MT, mathematically given by
\begin{align}\label{eq:llr}
\chi_{i,\ell} = \log\frac{\sum\limits_{w_{i,\ell}=0}P(s_\ell)p(y_j\mid s_\ell)}{\sum\limits_{w_{i,\ell}=1}P(s_\ell)p(y_j\mid s_\ell)}, ~i=1,2,\cdots,m,
\end{align}
where $\chi_{i,\ell}$ stands for the LLR corresponding to the $i^\mathrm{th}$ bit of the binary message vector $\mathbf{w}_\ell$. A scalar quantiser, which quantises $\chi_{i,\ell}$ into binary bits, is employed after the estimation, and the quantised bits are sent to the CPU via the backhaul network. A detailed computation of (\ref{eq:llr}) is given in \cite{MUD}. Consider a simple $5$-node system in which a $2$-bit quantisation scheme is employed at each AP and $4$QAM is used by the $2$ MTs: $4$ LLRs are then calculated by (\ref{eq:llr}) at each AP and a total of $16$ bits are sent via the backhaul network. In order to achieve good error-rate and outage performance, a quantisation scheme with a larger number of quantised bits is required, so there is a trade-off between performance and backhaul load in non-ideal CoMP.
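For concreteness, a minimal single-user Python sketch of a per-bit LLR (our own simplification of (\ref{eq:llr}), assuming unit-energy $4$QAM, equal priors, a bit labelling of our choosing, and AWGN of variance $\sigma^2$) is:
\begin{verbatim}
import numpy as np

QAM4 = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
BITS = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # assumed labelling

def bit_llr(y, h, sigma2, i):
    """LLR of bit i for one MT's symbol observed as y = h*s + noise."""
    lik = np.exp(-np.abs(y - h * QAM4) ** 2 / sigma2)  # p(y | s)
    return np.log(lik[BITS[:, i] == 0].sum() /
                  lik[BITS[:, i] == 1].sum())

# A 2-bit scalar quantiser would then map each LLR to one of 4 levels
# before it is forwarded over the backhaul.
\end{verbatim}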
In the proposed BMAS algorithm, each AP estimates a linear combination of the messages from the MTs based on the ML rule, rather than decoding individual symbols (as in the joint ML detection of ideal CoMP); this is given mathematically by (\ref{equ:PostProb.bg}) - (\ref{equ:PostProb.ed}). In order to minimise the real-time computational complexity, an exhaustive search is carried out before transmission, and this accounts for the majority of the computational cost of the proposed algorithm; the On-line search implemented during transmission uses a reduced number of mapping matrix candidates, e.g. $K=5$ matrices at each AP for the $4$QAM modulation scheme in Algorithm \ref{Alg:offline.p2}.
In the proposed R-BMAS algorithm, the calculations in the mapping selection are replaced by a lookup table mechanism, so the real-time computational complexity reduces to the distance comparisons in (\ref{eq:dFS}). An LLR estimate of each bit in the NCV is then obtained, exactly as in the BMAS algorithm. Following the $5$-node example above, as illustrated in (\ref{eq:NCSt}), a binary mapping matrix with the minimum size of $2 \times 4$ is selected at each AP to encode the $4$ message bits from both MTs into the NCV, which results in a total backhaul load of $4$ bits. Note that an AP could instead employ a mapping matrix with the maximum size of $mu \times mu$ to generate the NCV; in that case the other APs do not participate in the PNC encoding because they fail to receive any useful signals, and the total backhaul load is still equal to $mu$.
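The backhaul loads quoted above follow from simple counting; a short sketch with our own variable names:
\begin{verbatim}
# 5-node example: u MTs, n APs, m bits per symbol (4QAM -> m = 2),
# q quantisation bits per LLR in non-ideal CoMP.
u, n, m, q = 2, 2, 2, 2
comp_backhaul = n * (u * m) * q   # non-ideal CoMP: 16 bits in total
pnc_backhaul  = u * m             # proposed PNC (one NCV): 4 bits
\end{verbatim}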
\section{Numerical Results}
\label{sec:NumRes}
In this section we illustrate the outage probability performance of the proposed BMAS algorithm, the R-BMAS algorithm, and CoMP in a $5$-node system comprising $2$ MTs, $2$ APs, and $1$ CPU. As mentioned in previous sections, the $5$-node network is the smallest network to which the proposed algorithms can be applied, so we use it as a baseline to illustrate their advantages. The proposed algorithms can be adapted to an N-MIMO network with more nodes; we discussed the respective potential issues and solutions in Section V.
In the simulations we assume that the multi-access links are wireless and that the backhaul is wired and carries only binary data. Each node has $1$ antenna for transmission and reception, and the $4$QAM/$16$QAM modulation schemes are employed at both MTs. We employ a convolutional code as an example; more powerful channel codes, such as LDPC \cite{Burr.MC}, can be used to further enhance reliability. In the simulation of ideal CoMP, we assume the backhaul capacity is unlimited and that the channel coefficients are exchanged in order to implement joint ML detection. In the non-ideal CoMP scenario, quantisers with different numbers of quantisation bits ($2$ bits and $4$ bits) are employed at each AP to investigate the outage probability performance.
Fig. \ref{fig:SFS_16Q} illustrates the outage probabilities of the proposed BMAS algorithm for the $4$QAM and $16$QAM schemes. As can be seen from the figure, the outage probability curves achieve the same diversity order. The ideal CoMP achieves the optimal performance in both the $4$QAM and $16$QAM cases owing to its unlimited backhaul capacity and joint detection. When the backhaul is capacity-limited, non-ideal CoMP with $2$-bit and $4$-bit quantisers, which result in total backhaul loads of $8$ and $16$ bits respectively, is implemented in the simulation. Compared to the ideal CoMP, performance degradations of $8$\,dB and $13$\,dB are observed for non-ideal CoMP, whilst the degradation is limited to only approximately $3$\,dB by the proposed algorithm. In the proposed BMAS algorithm, the backhaul load equals the total number of message bits, i.e. $4$ bits for $4$QAM, which is smaller than in both non-ideal CoMP approaches. Instead of obtaining all SFS-resolving mapping matrix candidates for the $16$QAM scheme, we consider only $4$, $12$, and $50$ pSFSs in the Off-line search algorithm to reduce the computational complexity. Note that the iSFSs are removed before these $4$, $12$, and $50$ pSFSs are selected in the simulation. As illustrated in the figure, a degradation of approximately $10$\,dB in outage performance is seen when only $4$ pSFSs are used in the proposed On-line search algorithm. When $12$ and $50$ pSFSs are used, the degradation is reduced to $7$\,dB and $5$\,dB respectively, and the gap shrinks further as more pSFSs are considered in the proposed BMAS algorithm.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{SER_2x2x1_4QAM_16QAM.eps}
\end{minipage}
\caption{\small Outage Probability of the Proposed BMAS Algorithm in $4$QAM and $16$QAM.} \label{fig:SFS_16Q}
\end{figure}
In Fig. \ref{fig:regvsbmas}, outage probability comparisons between the proposed BMAS and R-BMAS algorithms are shown. When $4$QAM is used at both MTs, the outage performance of the BMAS algorithm is $1$\,dB better than that of the R-BMAS algorithm, because sub-optimal mapping matrices are stored in the lookup table used by R-BMAS. In terms of the reduced computational complexity of the R-BMAS algorithm, an index search among $25$ binary $4 \times 4$ mapping matrices is implemented to resolve all possible SFS combinations ($5$ pSFSs at each AP). When $16$QAM is employed at both MTs, the gap between the BMAS and R-BMAS algorithms depends on the number of pSFSs used. When only $4$ pSFSs are used for optimal mapping matrix selection in both algorithms, BMAS achieves about $5$\,dB of gain in outage performance over R-BMAS; this large gap in the $4$-pSFS case is caused by the inefficient mapping matrix candidates available to the R-BMAS algorithm. As the number of pSFSs increases to $50$, more pSFS candidates become available to the R-BMAS algorithm, which improves its outage performance by about $7$\,dB and reduces the gap to only $1$\,dB.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{regvsbmas.eps}
\end{minipage}
\caption{\small Outage Probability of the Proposed BMAS Algorithm vs Regulated On-line Search Algorithm.} \label{fig:regvsbmas}
\end{figure}
We have investigated how estimated channel state information (CSI) affects the network performance; the comparisons are illustrated in Figs. \ref{fig:cha_map_prob} and \ref{fig:SER_chaest}. The values of the FSs are calculated by the first equation in (\ref{eq:SFSt}), so during transmission the accuracy of the CSI on the access link is important because it determines whether the optimal mapping matrix can be selected. In Fig. \ref{fig:cha_map_prob} we illustrate the impact of estimated CSI on the optimal mapping selection. The term \textquotedblleft mis-mapping\textquotedblright~means that the optimal mapping matrix is not selected. As can be seen from the figure, the mis-mapping percentage decreases with increasing $E_b/N_0$ for every pilot length. With a short pilot sequence the mis-mapping percentages are quite high, because inaccurate fade states are calculated from the estimated CSI. The low mis-mapping percentages shown in Fig. \ref{fig:cha_map_prob} lead to better outage probability performance in Fig. \ref{fig:SER_chaest}. For example, at $15$\,dB the outage probability using only $1$ pilot symbol is $10^{-2}$, corresponding to a mis-mapping percentage of $22\%$; with $10$ pilot symbols the outage probability reduces to $3 \times 10^{-3}$ and a mis-mapping percentage of only $8\%$ is achieved. By comparing the outage performances with perfect and estimated CSI in Fig. \ref{fig:SER_chaest}, we conclude that a pilot sequence of length $10$ is sufficient for the proposed BMAS algorithm.
\begin{figure}[t]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{channelestimation_map_prob.eps}
\end{minipage}
\caption{\small Mis-Mapping Probabilities with Different Pilot Lengths.} \label{fig:cha_map_prob}
\end{figure}
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{SER_2x2x1_4QAM_PracSch_channelesti.eps}
\end{minipage}
\caption{\small Outage Probability with Different Pilot Lengths.} \label{fig:SER_chaest}
\end{figure}
\section{Future Work and Conclusion}
In this paper we have presented design guidelines for engineering-applicable physical layer network coding in the uplink of N-MIMO networks. The proposed design criteria guarantee unambiguous recovery of all messages while reducing the backhaul traffic to the level of the total user data rate. Based on these criteria we proposed an optimal mapping matrix selection algorithm; in order to reduce the real-time computational complexity, the algorithm is divided into Off-line and On-line parts. The proposed algorithm is not designed only for a simple $5$-node network but for a general N-MIMO network serving multiple MTs. In addition, a regulated On-line search algorithm based on a lookup table mechanism was presented to further reduce the computational complexity and latency without much performance degradation. With a reduced backhaul load, the proposed algorithms achieve better outage performance than the practical non-ideal CoMP approaches. As future work, an extension of the proposed PNC design to binary systems with full-duplex (FD) APs \cite{FD-D2D}-\cite{FD-ARQ} has been started. Practical PNC design with cross-layer optimisation \cite{CLD-D2D}-\cite{CLD-D2D_2} provides another research direction, and the work in \cite{SE} on spectrum efficiency can be extended to PNC-implemented systems. Moreover, PNC with optimal resource allocation \cite{RA-CDMA}, \cite{MQAM} is critical for serving 5G systems and achieving massive data transmission with high accuracy and low latency.
\begin{appendices}
\section{}\label{ap:offline}
We present the detailed Off-line search algorithm in Algorithms \ref{Alg:offline.p1} and \ref{Alg:offline.p2}. Part I indicates how to calculate the SFSs and how to remove the image SFSs in order to reduce the computational complexity, while Part II focuses on the optimal mapping matrix selection. The steps of the proposed On-line search are the same as those of the Off-line search but with a smaller number of matrix candidates, so we do not repeat them here. $\mathcal{Q}_{\mathrm{d}}$ in Algorithm \ref{Alg:offline.p1} is defined as a vector containing the values of $d_{min}$ between different NCSs for every binary mapping matrix in each SFS.
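Both parts, as well as the R-BMAS table construction, rely on testing the invertibility of a stacked binary matrix via its determinant over $\mathbb{F}_2$ (the step $\delta \leftarrow \mathrm{det}(\cdot)_{|\mathbb{F}_2}$ below). A minimal Python sketch of this test, written by us for illustration, is:
\begin{verbatim}
import numpy as np

def det_gf2(M):
    """Determinant over F_2 by Gaussian elimination: returns 1 iff
    the square binary matrix M is invertible over F_2."""
    A = np.array(M, dtype=np.uint8) % 2
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return 0                      # rank-deficient column
        A[[col, pivot]] = A[[pivot, col]]  # row swap
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]             # eliminate over F_2
    return 1
\end{verbatim}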
\begin{algorithm}[htb]
\caption{SFS Calculation and Image SFS Removal (Off-line Search Algorithm, Part I)}
\label{Alg:offline.p1}
\begin{algorithmic}[1]
\Statex
\For {$i=1:L$} \Comment{each singular fade state}
\State {$h = \mathcal{S}(i)$} \Comment{$h$ is a $1\times m$ vector.}
\For{$j=1:K$} \Comment{each binary matrix}
\State $[\xi, T_{\xi}]= N(\mathcal{M}(j))$
\State $\xi_{\mathrm{f}} \leftarrow \mathscr{F}(T_{\xi},h)$ \Comment{$\mathscr{F}(\cdot)$ produces all faded NCSs.}
\State $d_{\mathrm{min}} \leftarrow \mathscr{D}(\xi_{\mathrm{f}})$ \Comment{$\mathscr{D}(\cdot)$ calculates the minimum distance of all NCSs.}
\State $\mathcal{Q}_{\mathrm{d}}\leftarrow \mathcal{Q}_{\mathrm{d}} \cup d_{\mathrm{min}}$ \Comment{store all $d_{\mathrm{min}}$ in $\mathcal{Q}_{\mathrm{d}}$.}
\EndFor
\State $[\boldsymbol\beta(i),\boldsymbol\alpha(i)] \leftarrow\mathscr{C}(\mathcal{Q}_{\mathrm{d}})$ \Comment{$\mathscr{C}(\cdot)$ sorts $\mathcal{Q}_{\mathrm{d}}$ in descending order stored in $\boldsymbol\beta(i)$ and outputs the rearranged index vector $\boldsymbol\alpha(i)$.}
\EndFor
\State $\mathcal{S}^{\prime} \leftarrow \mathscr{I}(\mathcal{S},\boldsymbol\alpha)$ \Comment{delete all image singular fade states and $\mathcal{S}^{\prime}$ has $L^{\prime}$ singular states, $L^{\prime}<L$.}
\State $\boldsymbol{\alpha}\leftarrow \boldsymbol\alpha \setminus \boldsymbol\alpha(\boldsymbol\beta=0)$ \Comment{delete the index element of $\boldsymbol\beta=0$.}
\State $\boldsymbol\alpha^{\prime} \leftarrow \boldsymbol{\alpha}(i|\mathcal{S}^{\prime})$ \Comment{$\boldsymbol\alpha^{\prime}$ corresponds to only $\mathcal{S}^{\prime}$.}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!h]
\caption{Binary Matrix Candidate Selection for Each AP (Off-line Search Algorithm, Part II)}
\label{Alg:offline.p2}
\begin{algorithmic}[1]
\Statex
\For{$l_{L^{\prime}}=1:L^{\prime}$}
\State $\mathcal{S}^{\dag}_{L^{\prime}-1}\leftarrow\mathcal{S}^{\prime}\setminus \mathcal{S}^{\prime}(l_{L^{\prime}})$
\State $\theta_{L^{\prime}-1}\leftarrow \mathscr{E}(\mathcal{S}^{\dag}_{L^{\prime}-1})$ \Comment{Index set of $\mathcal{S}^{\prime}$ excluding the $l^{\mathrm{th}}$ element.}
\For{$l_{L^{\prime}-1}= \theta_{L^{\prime}-1}$ }
\State $\mathcal{S}^{\dag}_{L^{\prime}-2}\leftarrow\mathcal{S}^{\dag}_{L^{\prime}-1}\setminus \mathcal{S}^{\dag}_{L^{\prime}-1}(l_{L^{\prime}-1})$
\State $\theta_{L^{\prime}-2}\leftarrow \mathscr{E}(\mathcal{S}^{\dag}_{L^{\prime}-2})$
\State $\vdots$
\For{$l_{L^{\prime}-n+1} = \theta_{{L^{\prime}-n+1}}$}
\For{$i_1 = 1:K$}
\State $\vdots$
\For{$i_n = 1:K$}
\State $
\mathbf{M} = \begin{bmatrix} \mathcal{M}[\boldsymbol\alpha(l_{L^{\prime}},i_1)] \\
\vdots \\
\mathcal{M}[\boldsymbol\alpha(l_{L^{\prime}-n+1},i_n)]
\end{bmatrix}
$
\State $\delta \leftarrow\mathrm{det}(\mathbf{M})_{|\mathbb{F}_2} $ \Comment{determinant over $\mathbb{F}_2$.}
\If{$\delta = 1$}
\State $\mathscr{R}\leftarrow \mathscr{R}\cup (l_{L^{\prime}}\cdots l_{L^{\prime}-n+1};i_1\cdots i_n)$
\State $\mathbf{G}\leftarrow \mathbf{G}\cup\mathbf{M}$ \Comment{$\mathbf{M} \leftrightarrow\mathbf{G}_{A(k)}$ in $\mathbf{G}$ has unique address $A(k)=(l_{L^{\prime}}^{(k)}\cdots l_{L^{\prime}-n+1}^{(k)};i_1^{(k)}\cdots i_n^{(k)})$, $k=1,\cdots \frac{L^{\prime}!}{(L^{\prime}-n)!}$.}
\State \Return {(21)}
\EndIf
\EndFor
\EndFor
\EndFor
\EndFor
\EndFor
\State $$[\mathbf{G}_{A(k_1)}\cdots\mathbf{G}_{A(k_n)}]\leftarrow\mathscr{X}(\mathbf{G})$$ \Comment{find $n$ $\mathbf{M}$ from $\mathbf{G}$ satisfying bijection relations $(l_{L^{\prime}}^{(k_e)}\cdots l_{L^{\prime}-n+1}^{(k_e)})\Leftrightarrow\mathcal{S}^{\prime}$ for $k_e = k_1\cdots k_n$.}
\For {$i=1:n$}
\State $\mathcal{Q}_i\leftarrow [\mathbf{G}_{A(k_1)}^{i}\cdots\mathbf{G}_{A(k_n)}^{i}]$ \Comment{$\mathbf{G}_{A(k_i)}^{i}=\mathcal{M}[\boldsymbol\alpha(l_{L^{\prime}-i+1}^{(k_i)},i_i^{(k_i)})]$}
\EndFor
\State \textbf{Output:} $n$ stacks $\mathcal{Q}_i$ with each including $L^{\prime}$ binary matrices.
\end{algorithmic}
\end{algorithm}
\section{}\label{ap:rBMAS}
We illustrate an example of Theorem 4 here. Fig. \ref{fig:QAM4_SFS2} shows the received constellation of an SFS with $v_{SFS_1}=i$; the number of MTs is $u=2$ and $4$QAM modulation is employed. The clashes are clearly visible in the figure and their values can be calculated according to (\ref{eq:SFSt}): e.g. $4$ constellation points are superimposed at $(0,0)$ and $2$ at $(0,2)$. The optimal binary mapping matrix then encodes the superimposed constellation points within a clash to the same NCV while maximising the distance between different NCVs, in accordance with the design criteria.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{QAM4_SFS2.eps}
\end{minipage}
\caption{\small Constellation of the Received Signals at AP, $v_{SFS_1}=i$.} \label{fig:QAM4_SFS2}
\end{figure}
Fig. \ref{fig:QAM4_SFS3} illustrates the received constellation of all possible superimposed symbols of another SFS with $v_{SFS_2}=1/2+1/2i$. In this case, $2$ constellation points are superimposed at $(0,1)$, $(0,-1)$, $(1,0)$ and $(-1,0)$, respectively. The optimal mapping matrices for $SFS_1$ and $SFS_2$ are different due to the differences between the clashed constellation points.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{QAM4_SFS3.eps}
\end{minipage}
\caption{\small Constellation of the Received Signals at AP, $v_{SFS_2}=1/2+1/2i$.} \label{fig:QAM4_SFS3}
\end{figure}
We then consider a received constellation for a non-singular fade state with $v_{nSFS}=7/10+7/10i$, illustrated in Fig. \ref{fig:QAM4_SFS3_5}. Clearly none of the constellation points are superimposed, but the different clusters are easily identified from the distances between constellation points. We also find that the distance, which reflects the absolute value and rotation angle in (\ref{dmint}), between $v_{SFS_1}$ and $v_{nSFS}$ is smaller than that between $v_{SFS_2}$ and $v_{nSFS}$. According to the unambiguous detection theorem, the $4$ points around $(0,0)$ in Fig. \ref{fig:QAM4_SFS3_5} should therefore be mapped to the same NCV to maximise the inter-cluster distance, and the same criterion applies to the $8$ points near $(0,2)$, $(-2,0)$, $(0,-2)$, and $(2,0)$. The initial cluster groups in Figs. \ref{fig:QAM4_SFS2} and \ref{fig:QAM4_SFS3_5} are then identical, so the same optimal mapping matrices can be used in both cases.
\begin{figure}[ht]
\centering
\begin{minipage}[t]{1\linewidth}
\includegraphics[width=1.0\textwidth]{QAM4_SFS3_5.eps}
\end{minipage}
\caption{\small Constellation of the Received Signals at AP, $v_{nSFS}=7/10+7/10i$.} \label{fig:QAM4_SFS3_5}
\end{figure}
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
The Large and Small Magellanic Clouds (LMC and SMC) are the two most massive satellite galaxies of the Milky Way, and the closest example of an interacting pair. They are situated deep inside the gravitational potential of the Galaxy at distances of $\approx 50$ and $60$\ kpc, respectively \citep*{degrijs:14,degrijs:15}. The close proximity and locally-unique properties of the Clouds mean that they constitute exceptional laboratories for studying a huge variety of astrophysical problems, including critical general questions related to stellar evolution, star formation, the behaviour of the interstellar medium, and the evolution of galaxies \citep[see e.g.,][]{nidever:17}.
One of the defining features of the Magellanic system is a vast envelope of neutral hydrogen gas stretching between the LMC and SMC \citep[the {\it Magellanic Bridge}; e.g.,][]{kerr:57,hindman:63} and extending into a filamentary stream spanning more than $200$ degrees across the sky \citep[the {\it Magellanic Stream}; e.g.,][]{mathewson:74,putman:03,bruns:05,nidever:10}. It was long thought that these structures were the result of repeated gravitational interactions between the Clouds and the Milky Way \citep[e.g.,][]{gardiner:96} and/or ram-pressure stripping by a hot gaseous component surrounding the Galaxy \citep[e.g.,][]{mastropietro:05}. However, recent measurements of the proper motions of the LMC and SMC are largely incompatible with these ideas -- the Clouds are moving too fast to be on stable short-period orbits about the Milky Way, and indeed are most likely on their {\it first} close passage \citep{kallivayalil:06a,kallivayalil:06b,kallivayalil:13,besla:07}. In this case the origin of the Magellanic Bridge and Stream cannot be attributed to the influence of the Milky Way; instead these features are generated by gravitational tides from the LMC acting on the SMC during repeated close encounters between the two \citep[e.g.,][]{besla:10,besla:12,diaz:11,diaz:12}.
Such interactions can also explain many of the peculiar characteristics of the stellar component in the Clouds \citep[e.g.,][]{besla:16} such as the striking off-centre bar in the LMC \citep{devau:72,vdm:01}, a remote arc of stars in the extreme outskirts of the LMC disk \citep{mackey:16}, the irregular morphology of the SMC including a ``wing'' feature that extends towards the LMC \citep{shapley:40} and exhibits a substantial line-of-sight depth \citep{nidever:13}, the apparent existence of stripped SMC stars residing in the outskirts of the LMC \citep{olsen:11}, the presence of a substantial population of intermediate-age stars in the inter-Cloud region \citep[e.g.,][]{noel:13,noel:15,skowron:14,deason:17}, tidal tails extending from the SMC \citep{belokurov:17}, and a ``bridge'' of ancient metal-poor stars that stretches from the SMC almost to the LMC and which is strongly misaligned with the peak of the H{\sc i} \citep{belokurov:17}.
It has been known for several decades that, in addition to the components described above, the Magellanic inter-Cloud region hosts a population of very young stars that stretches from the eastern ``wing'' of the SMC to the western periphery of the LMC \citep[e.g.,][]{irwin:85,irwin:90,demers:99}. Although sparsely distributed, particularly on the LMC side of the inter-Cloud region, this young population is observed to be significantly spatially clustered \citep[e.g.,][]{grondin:90,demers:91,battinelli:92,demers:98,bica:15}. Recent wide-field surveys have shown that the young stars closely trace the peak of the gaseous H{\sc i} Bridge between the Clouds, departing by a few degrees only at the easternmost end \citep[e.g.,][]{dinescu:12,skowron:14,noel:15,belokurov:17}. This tight correspondence suggests that these stars have likely formed {\it in situ} out of the H{\sc i} gas in the Magellanic Bridge, rather than having been stripped from the SMC.
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure1.eps}
\end{center}
\caption{Spatial density map of young (upper) main sequence stars in the Magellanic system, constructed using the ``GaGa'' {\it Gaia$+$GALEX} catalogue described by \citet{belokurov:17}. Stars were selected by requiring low foreground extinction $E(B-V) < 0.2$, and that $18 < G_0 < 20$ and $0 < (NUV-G)_0 < 1$ in colour-magnitude space. This map is a gnomonic projection with the centre of the LMC at the origin. The blue long-dashed circles indicate projected LMC-centric radii of $8\degr$, $12\degr$, $16\degr$, and $20\degr$ (or $\approx 7.0,\,10.5,\,14.0,$\ and $17.5$\ kpc), while the magenta short-dashed circles show projected distances of $4\degr$ and $8\degr$ from the SMC ($\approx 4.2$\ and $8.4$\ kpc). The distorted nature of the SMC is clearly visible in the form of its eastern wing; this feature extends into a narrow band of young stars that reaches to the south-western periphery of the LMC. It is also evident that, although diffuse, these populations appear rather spatially clustered. Gaps in the map largely result from incompleteness in the {\it GALEX} AIS catalogue \citep[see][]{belokurov:17}, although regions to the east of the map approach the Galactic plane and some data are consequently excised due to our cut on the colour excess. Sources near the eastern edge of the map are residual foreground contamination.
\label{f:gaga}}
\end{figure}
To highlight the properties of this young population, in Figure \ref{f:gaga} we map the spatial density of upper main sequence stars in the Magellanic system using the ``GaGa'' catalogue described by \citet{belokurov:17}. This consists of sources detected in both the {\it GALEX} ultraviolet space telescope's \citep{martin:05,morrissey:07} all-sky imaging survey \citep[AIS;][]{bianchi:14}, and the {\it Gaia} satellite's \citep{prusti:16} first data release \citep[DR1;][]{brown:16,lindegren:16}. We isolate young main sequence stars at the distance of the Magellanic Clouds by selecting only objects with $18 < G_0 < 20$ and $0 < (NUV-G)_0 < 1$ in colour-magnitude space\footnote{Here we are using measurements in {\it Gaia's} very broad optical $G$-band and {\it GALEX's} near-UV passband.}. To avoid issues due to regions of high extinction we further require that $E(B-V) < 0.2$; we de-reddened the photometry using extinction coefficients of $2.55$ for {\it Gaia} $G$ and $7.24$ for {\it GALEX} $NUV$. In Figure \ref{f:gaga} the distorted eastern wing of the SMC is clearly visible, together with a narrow band of young main sequence stars that extends from this structure across to the southern outskirts of the LMC. The clumpy nature of this young stellar bridge is also evident.
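For reproducibility, this selection amounts to a few simple cuts; a minimal sketch (ours, with assumed column names) is:
\begin{verbatim}
import numpy as np

# g, nuv : Gaia G and GALEX NUV magnitudes; ebv : E(B-V) colour excess.
def select_young_ms(g, nuv, ebv):
    g0 = g - 2.55 * ebv        # de-reddened Gaia G
    nuv0 = nuv - 7.24 * ebv    # de-reddened GALEX NUV
    colour = nuv0 - g0
    return (ebv < 0.2) & (18 < g0) & (g0 < 20) & (0 < colour) & (colour < 1)
\end{verbatim}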
The brightest young stars in the inter-Cloud region are of significant importance as they allow various properties of the interstellar medium in the Magellanic Bridge to be measured that would otherwise be very difficult to access. Studies of absorption features along lines-of-sight to a handful of these hot stars (and a few background quasars) agree with high-resolution spectroscopy of the young stars themselves that the abundances of iron, as well as light elements such as C, N, O, Mg, and Si, are typically depleted in the Bridge by $\sim -1$ dex relative to the Milky Way \citep[e.g.,][]{hambly:94,rolleston:99,lehner:01,lehner:08,dufton:08,misawa:09}. Absorption measurements have further revealed that the interstellar medium in the Bridge region is highly complex, with molecules, neutral gas, and both weakly and highly ionised species all seen along various sightlines and at different velocities \citep[e.g.,][]{lehner:01,lehner:08,lehner:02}.
The observation that the Magellanic Bridge apparently has a lower abundance than either the LMC or SMC at the present day is puzzling; however it is in good agreement with similar measurements from numerous sightlines through the Magellanic Stream \citep[e.g.,][]{fox:10,fox:13}. The literature contains a variety of suggestions as to how this result might be reconciled with the general agreement of numerical models that the Magellanic Bridge consists of gas stripped from the SMC by a close encounter with the LMC -- for example, (i) the stripping may have occurred $\sim 1.5-2$ Gyr ago, when the abundance of the SMC was $\sim 0.1$ solar \citep{fox:13}; or (ii) in the case where the gas was stripped in the most recent interaction only $\sim 200$ Myr ago, there may have been significant dilution by a separate very low metallicity component \citep[e.g.,][]{rolleston:99,lehner:08}. In any case, it is clear that observations of the stars and gas in the Bridge region have the potential to provide important constraints on the interaction history of the LMC and SMC. They also offer insight into the nature of the star formation itself, which is notable because of its isolated location far out in the Milky Way halo, its low density, and its low metallicity.
\begin{figure*}
\begin{center}
\includegraphics[height=85mm]{figure2a.eps}
\hspace{2mm}
\includegraphics[height=85mm]{figure2b.eps}
\end{center}
\caption{{\bf Left:} Colour-magnitude diagram for stars lying at selected locations in the boxed region on the map to the right (see text). A substantial population of young blue stars is present, well separated from the Galactic foreground contamination that sits to the red. The thick solid line is a $30$\ Myr PARSEC isochrone \citep{bressan:12} with $[$M$/$H$] = -1.0$ and $\mu = 18.7$. The thin solid line is the equal-mass binary sequence for this fiducial track. The shaded region illustrates our main sequence selection box for the young population -- bounded by the isochrone to the blue and the binary sequence to the red, and broadened according to the photometric uncertainties at given $r_0$. The {\it upper} main sequence is delineated by the yellow portion of this box, extending down to $r_0 \sim 20$. Mean photometric uncertainties for stars with $(g-r)_0 < 0.0$ are plotted down the left-hand side of the panel. The mean $50\%$ completeness level sits at $r_0 \approx 23.0$. {\bf Right:} Spatial density map for upper main sequence stars across our complete survey footprint. The projection is the same as in Figure \ref{f:gaga}. As before, the blue long-dashed circles indicate LMC-centric radii of $8\degr$, $12\degr$, $16\degr$, and $20\degr$; inside $8\degr$ we display a wide-field optical image of the LMC (credit: Yuri Beletsky) to help place the size of the periphery into perspective. The locations of the SMC and the Carina dwarf are indicated, together with the LMC globular cluster NGC 1841; as before, the magenta short-dashed circles show distances of $4\degr$ and $8\degr$ from the SMC. The only significant concentration of young stellar populations within our survey footprint occurs in the inter-Cloud region.
\label{f:cmdmap}}
\end{figure*}
In this paper we present results from a new contiguous imaging survey that spans the eastern half of the Magellanic inter-Cloud region and reaches several magnitudes deeper than any extant mapping of this location with comparable spatial coverage. These data allow us to examine the spatial distribution of young stars in the Bridge with substantially higher contrast and resolution than previous studies (including Figure \ref{f:gaga}), as well as undertake the first detailed exploration of their relationship with the surrounding gas.
\section{Observations and data reduction}
\label{s:data}
This work utilises the photometric catalogue presented by \citet{koposov:15} and \citet{mackey:16}, combined with new observations conducted by our group. All data were obtained using the {\it Dark Energy Camera} \citep[{\it DECam};][]{flaugher:15} mounted on the 4m Blanco Telescope at the Cerro Tololo Inter-American Observatory in Chile. {\it DECam} is a 520 megapixel imager consisting of 62 individual CCDs arranged into a hexagonal mosaic with a $\sim 3$\ deg$^2$ field of view. The base catalogue is derived from publicly available images taken during the first year of the Dark Energy Survey \citep[DES;][]{abbott:05,abbott:06}, and covers the outskirts of the LMC to the north and north-west of the galaxy's centre. Our new data were obtained as part of program 2016A-0618 (PI: Mackey) over four half-nights on 2016 February 25-28, and span two separate regions adjoining the DES footprint. One covers $\approx 60$\ deg$^2$ and follows the extension of the stellar substructure discovered by \citet{mackey:16} to the north-east in the direction of the Carina dwarf, while the other covers $\approx 220$\ deg$^2$ of the LMC periphery to the west and south. This latter area reaches roughly half-way to the SMC and thus spans approximately $50\%$ of the Magellanic inter-Cloud region. Our full survey footprint is displayed in the right-hand panel of Figure \ref{f:cmdmap} -- in the current work we focus on young stellar populations in the inter-Cloud region, while in an accompanying paper we present the discovery of several new low surface-brightness substructures in the outer LMC (Mackey et al. 2017, in prep.).
The entirety of our February 2016 observing run was clear and close to photometric. At each pointing we observed three $g$-band and three $r$-band frames. Exposure times were set to facilitate images matching the depth of those in the DES area to the north of the LMC; in practice, because of varying sky brightness (the moon, with $\sim 60\%$ illumination, set during each half-night) and seeing (many of our fields sit below $-75\degr$ declination) we employed integrations between $60-120$s per frame in $g$ and $40-80$s per frame in $r$. Across all images the median stellar FWHM is $1.20\arcsec$ in $g$ and $1.05\arcsec$ in $r$, each with rms scatter $0.15\arcsec$. All raw data were processed with the {\it DECam} community pipeline \citep{valdes:14}, and then passed through the photometry procedure described in detail by \citet{koposov:15}. Source detection and measurement was carried out with the {\it SExtractor} and {\it PSFEx} software \citep[e.g.,][]{bertin:96,bertin:11}, and a point-source catalogue was constructed by merging individual detection lists and removing galaxies using the {\sc spread\_model} and {\sc spreaderr\_model} parameters \citep[see][]{desai:12}. Photometry was calibrated to the SDSS scale using DR7 of the APASS survey and an ``\"{u}ber-calibration'' method \citep{pad:08}, and foreground reddening was corrected according to \citet*{schlegel:98} with the extinction coefficients from \citet{schlafly:11}.
Following \citet{mackey:16} we estimated the detection completeness as a function of position by constructing the stellar luminosity function in $0.5\degr \times 0.5\degr$ bins and measuring the level of the turn-over; because we are only interested in the young Magellanic population in this paper, we restricted this process to use only stars with $(g-r)_0 < 0.0$. Across the inter-Cloud region the mean $50\%$ completeness level for such stars sits at $r_0 \approx 23.0$, although we note a general trend for the level to be shallower in the east (where it is typically $r_0 \approx 22.5$) and deepest in the west ($r_0 \approx 23.5$). This is partly a result of increased crowding in eastern fields, which lie closest to the LMC, and partly due to the variation in conditions (in particular the lunar illumination) under which the observations were conducted.
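A schematic version of this completeness estimate (our own simplification, taking the turn-over magnitude of the binned luminosity function in one $0.5\degr \times 0.5\degr$ cell as a proxy for the $50\%$ completeness level) is:
\begin{verbatim}
import numpy as np

def turnover_mag(r0, bright=16.0, faint=24.0, width=0.25):
    """Magnitude at which the binned luminosity function of blue
    stars ((g-r)_0 < 0) in one survey cell turns over."""
    edges = np.arange(bright, faint + width, width)
    counts, _ = np.histogram(r0, bins=edges)
    k = np.argmax(counts)
    return 0.5 * (edges[k] + edges[k + 1])   # centre of the peak bin
\end{verbatim}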
\section{Results}
\label{s:results}
Figure \ref{f:cmdmap} shows a map of our full survey area. Part of the inter-Cloud region is enclosed in a box; plotted alongside the map is a colour-magnitude diagram (CMD) for stars at selected locations in this area (specifically, this CMD constitutes the sum of those for the overdensities identified in Figure \ref{f:bridge}). A substantial young population is clearly present. Up to our saturation limit this is well traced by a PARSEC isochrone \citep{bressan:12} of age 30 Myr and $[$M$/$H$] = -1.0$, shifted to a distance modulus $\mu = 18.7$ (see Section \ref{ss:properties} and Figure \ref{f:youngcmds}). For bright stars, where the main sequence is nearly vertical, this model provides a good fit to the data; however, at fainter magnitudes (where the fiducial isochrone kinks to the red) the width of the main sequence is significantly broader than expected from the photometric uncertainties alone. The fiducial track for unresolved equal-mass binary stars, which sits $\approx0.75$\ mag above the single-star isochrone \citep[e.g.,][]{hurley:98} (as expected for a doubling of the flux: $2.5\log_{10}2 \approx 0.75$), neatly encloses this broadening, suggesting that a substantial population of binaries is likely present. This would be consistent with observations of young Magellanic Cloud clusters, where the binary fraction for mass ratios $q \ga 0.5$ is seen to be as high as $\sim 30-40\%$ \citep[e.g.,][]{elson:98,li:13}. Together the single and binary star fiducial tracks define a CMD selection region, broadened by the observational uncertainties as a function of magnitude, for young Magellanic populations down to $r_0 \sim 22$. Note that the faint bound of this region sits well above the $50\%$ completeness level at all locations in our survey footprint. We use stars inside the upper part of the selection region ($r_0\,\la\,20$) to create the density map in Figure \ref{f:cmdmap}; this confirms that the only significant overdensity of young stellar populations within our survey footprint occurs in the boxed area.\footnote{The old LMC globular cluster NGC 1841 shows up in this map because it possesses an extended blue horizontal branch \citep[e.g.,][]{jeon:14} that overlaps with the young main sequence selection region on the CMD.}
\subsection{Properties of the young inter-Cloud populations}
\label{ss:properties}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figure3.eps}
\end{center}
\caption{Spatial density map in the eastern half of the inter-Cloud region for young stars isolated using the full CMD selection area from Figure \ref{f:cmdmap}. Significant clustering of the young populations is evident; the associations catalogued by \citet{battinelli:92} are labelled. The striking core-shell structure described in the text sits near $(\xi,\,\eta) \approx (-10.5,\,-7.5)$.
\label{f:bridge}}
\end{figure}
Our new observations reach several magnitudes deeper than all previous surveys of the inter-Cloud region that have comparable spatial coverage and, as a consequence, we are able to trace the extent of the young populations with significantly greater contrast. Figure \ref{f:bridge} shows a spatial density map of the boxed inter-Cloud region from Figure \ref{f:cmdmap}, where young stars have been isolated using the full CMD selection region defined above. This reveals a narrow chain of young clusters and/or associations stretching from the western edge of our survey footprint (approximately $13\degr$ from the LMC and $9\degr$ from the SMC) to the outskirts of the LMC disk at a radius $\sim 8\degr$. Most of these features have previously been identified by \citet{battinelli:92}; in Figure \ref{f:bridge} we show the association names from this catalogue. However, the advantages conferred by the depth of our observations are evident. For example, the structure labelled ICA 72 is in fact a much less significant density peak than two uncatalogued neighbouring features that we name ICA 72a and 72b. Also particularly striking is a ring or shell-like structure surrounding a compact core near $(\xi,\,\eta) \approx (-10.5,\,-7.5)$; $(\alpha,\,\delta) \approx (43.5,\,-73.5)$. Although this feature was briefly noted by \citet{irwin:90}, subsequent works have treated it as multiple independent young associations -- \citet{battinelli:92} identify eleven (of which two fall beyond the edge of our survey area). In contrast, our deeper observations leave little doubt that the young populations at this location constitute a single vast, albeit undoubtedly fragmented, structure\footnote{Again, the advantages of our deep imaging are clear -- of the seven associations identified by \citet{battinelli:92} that fall within our survey footprint and lie along the edge of the ring structure, only three match with obvious density peaks in our map. Moreover, the four most significant density peaks along the edge of the ring are uncatalogued.}.
\begin{figure*}
\begin{center}
\includegraphics[height=100mm]{figure4.eps}
\end{center}
\caption{Colour magnitude diagrams for six young associations in the eastern inter-Cloud region. We split the large structure at the western-most edge of our survey area into a ``core'' region (within $20\arcmin$ of its centre) and a ``shell'' region (stars in the range $20\arcmin-35\arcmin$ from its centre). The panel for ICA 72 includes all stars within $12\arcmin$ of the low-density structure identified by \citet{battinelli:92}, as well as stars within $9\arcmin$ of the previously-uncatalogued association we call ICA 72a and within $6\arcmin$ of the new association we call ICA 72b. The panels for ICA 73, 74 and 75 incorporate all stars within $20\arcmin$, $9\arcmin$, and $9\arcmin$ of the centres of these associations, respectively. In each panel we plot a PARSEC isochrone with age $30$\ Myr and $[$M$/$H$] = -1.0$ (thick line) plus the equal-mass binary sequence (thin line), as shown in Figure \ref{f:cmdmap}. These are shifted to a distance modulus $\mu = 18.70$ for the core and shell, and $18.65$ for the other associations. The panel for ICA 75 also shows a $200$\ Myr isochrone with $[$M$/$H$] = -1.0$. In the panels for ICA 74 and ICA 75 we mark the level of the $50\%$ completeness limit with a horizontal dashed line; for the remaining four panels this level sits fainter than $r_0 = 23.0$.
\label{f:youngcmds}}
\end{figure*}
Figure \ref{f:youngcmds} shows CMDs for each of the main associations visible on our inter-Cloud map. The subplot labelled ``core'' refers to the compact cluster at the centre of the shell-like feature; the subplot labelled ``shell'' incorporates all stars in this feature (specifically, within $20\arcmin-35\arcmin$ from the centre of the core); and the subplot for ICA 72 includes all stars in both this structure and the two nearby higher density associations. The sampled areas are loosely scaled according to the spatial sizes of the associations (i.e., the half-light radii derived in Section \ref{ss:structures}) -- for each target we plot all stars within $2r_h$, and extend this boundary to larger radii in cases where the background contamination is not too severe. As in Figure \ref{f:cmdmap}, on each CMD we overlay a PARSEC isochrone of age 30 Myr and $[$M$/$H$] = -1.0$, shifted vertically to fit the observed stellar distribution. Our choice of metallicity is motivated by observations of gas and young stars in the inter-Cloud region, which, as described above, generally agree that the metallicity of the Magellanic Bridge is depleted by $\sim -1$ dex relative to the Milky Way \citep[e.g.,][]{hambly:94,rolleston:99,lehner:01,lehner:08,lehner:02,dufton:08,misawa:09}.
\begin{table*}
\centering
\caption{Positions, ages, and distances for the main young associations in the eastern inter-Cloud region.}
\begin{minipage}{110mm}
\begin{tabular}{@{}lcccccc}
\hline \hline
Name & \multicolumn{2}{c}{Position (J2000.0)} & \multicolumn{2}{c}{Projected Coordinates} & Age & $\mu$ \\
& $\alpha$ & $\delta$ & $\xi$ & $\eta$ & (Myr) & (mag)\vspace{1mm}\\
\hline
Core & $02^{\rm h}54^{\rm m}10\fs9$ & $-73\degr28\arcmin21\arcsec$ & $-10.45$ & $-7.50$ & $\approx 30$ & $18.70\pm0.05$\\
Shell & $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ & $\approx 15$ & $18.70\pm0.05$\\
ICA 72 & $03^{\rm h}08^{\rm m}53\fs0$ & $-73\degr18\arcmin48\arcsec$ & $-9.64$ & $-6.74$ & $\la 30$ & $18.65 \pm 0.05$\\
ICA 72a & $03^{\rm h}06^{\rm m}09\fs5$ & $-72\degr50\arcmin31\arcsec$ & $-10.08$ & $-6.45$ & $\la 30$ & $18.65 \pm 0.05$\\
ICA 72b & $03^{\rm h}10^{\rm m}25\fs2$ & $-73\degr30\arcmin11\arcsec$ & $-9.44$ & $-6.84$ & $\la 30$ & $18.65 \pm 0.05$\\
ICA 73 & $03^{\rm h}30^{\rm m}58\fs4$ & $-72\degr58\arcmin02\arcsec$ & $-8.39$ & $-5.59$ & $\la 30$ & $18.65 \pm 0.05$\\
ICA 74 & $03^{\rm h}53^{\rm m}44\fs6$ & $-73\degr55\arcmin49\arcsec$ & $-6.48$ & $-5.76$ & $\la 200$ & $18.65 \pm 0.05$\\
ICA 75 & $04^{\rm h}01^{\rm m}57\fs7$ & $-74\degr03\arcmin21\arcsec$ & $-5.89$ & $-5.66$ & $\la 200$ & $18.65 \pm 0.05$\\
\hline
\label{t:params}
\end{tabular}
\medskip
\vspace{-4mm}
\\
Notes: (i) Positions include the offsets ($x_0,y_0$) calculated as part of our structural measurements;\\
(ii) We list ICA 72 for completeness, even though it does not comprise a significant density peak;\\
(iii) We also list the two strong nearby density peaks that we label as ICA 72a and 72b (see text).
\end{minipage}
\end{table*}
\begin{table*}
\centering
\caption{Structures, luminosities, and masses for the main young associations in the eastern inter-Cloud region.}
\begin{minipage}{120mm}
\begin{tabular}{@{}lccccccc}
\hline \hline
Name & $r_h$ & $r_h$ & $e$ & $\theta$ & $N_{*}$ & $M_V$ & $M_{*}$ \\
& (arcmin) & (pc) & & (deg) & & & (${\rm M_{\odot}}$)\vspace{1mm}\\
\hline
Core & $5.6^{+0.6}_{-0.5}$ & $89.5^{+9.6}_{-8.0}$ & $0.53\pm0.06$ & $37.6^{+5.4}_{-5.3}$ & $121\pm12$ & $-5.3^{+0.5}_{-1.1}$ & $900^{+110}_{-100}$ \\
Shell & $\approx 30$ & $\approx 480$ & $\ldots$ & $\ldots$ & $159\pm14$ & $-6.0^{+0.5}_{-0.9}$ & $1220^{+130}_{-120}$ \\
ICA 72a & $4.2^{+1.5}_{-1.3}$ & $65.6^{+23.4}_{-20.3}$ & $(0.00)$ & $\ldots$ & $24\pm7$ & $-3.1^{+0.7}_{-0.8}$ & $180\pm60$ \\
ICA 72b & $1.3^{+0.5}_{-0.3}$ & $20.3^{+7.8}_{-4.7}$ & $(0.00)$ & $\ldots$ & $11\pm3$ & $-2.1^{+0.9}_{-1.0}$ & $80\pm30$ \\
ICA 73 & $8.2^{+1.6}_{-1.3}$ & $128.1^{+25.0}_{-20.3}$ & $0.49\pm0.11$ & $51.7^{+9.9}_{-7.1}$ & $74\pm10$ & $-4.5^{+0.5}_{-1.5}$ & $540^{+90}_{-80}$ \\
ICA 74 & $0.8\pm0.2$ & $12.5\pm3.1$ & $(0.00)$ & $\ldots$ & $17\pm4$ & $-2.1^{+0.6}_{-1.1}$ & $130\pm40$ \\
ICA 75 & $1.8^{+0.6}_{-0.5}$ & $28.1^{+9.4}_{-7.8}$ & $(0.00)$ & $\ldots$ & $16\pm4$ & $-2.0^{+0.6}_{-1.1}$ & $120\pm40$ \\
\hline
\label{t:struct}
\end{tabular}
\medskip
\vspace{-4mm}
\\
Notes: (i) The quoted parameters are as follows: $r_h$ is the (elliptical) half-light radius; $e$ and $\theta$ are the ellipticity and position angle, respectively; $N_{*}$ is the number of member stars falling in our CMD selection box; and $M_V$ and $M_{*}$ are the implied total luminosity and mass, respectively.\\
(ii) For the shell structure we list the measured radius in place of $r_h$.\\
(iii) The solutions derived for ICA 72a, 72b, 74, and 75 are constrained to have ellipticity $e=0$.
\end{minipage}
\end{table*}
Our photometry saturates above $r \sim 15.5$, so we are unable to resolve ages younger than $\approx 30$\ Myr using these CMDs; this represents a robust upper age limit for all the young associations plotted in Figure \ref{f:youngcmds} except for ICA 74 and ICA 75. These two objects are poorly populated and lack upper main sequence stars (at least up to our saturation limit) -- this could simply be due to stochastic sampling of the stellar mass function, or it might indicate that these two associations are older than the others. Formally our upper age limit for these two clusters is $\approx 200$\ Myr; an isochrone of this age is plotted on the CMD for ICA 75.
Because our photometry reaches well below the bend in the main sequence between $r_0 \sim 21-22$, it is possible to obtain precise relative distances for the main aggregates plotted in Figure \ref{f:youngcmds}. This has not previously been possible except for targeted deep observations of a handful of associations across the inter-Cloud region \citep[e.g.,][]{demers:98}. We find that the core-shell structure sits at a distance modulus of $\mu = 18.70 \pm 0.05$, while the ICA 72 complex, together with ICA 73, 74, and 75, are all slightly closer with $\mu = 18.65 \pm 0.05$ (here, all the quoted uncertainties are random). These measurements preclude a distance gradient greater than $\Delta\mu \approx 0.15$ mag across the eastern half of the inter-Cloud region. At low significance they are consistent with a mild gradient such that the objects sitting closer to the LMC in projection have shorter line-of-sight distances by $\approx 0.05$ mag than those sitting closer to the SMC in projection \citep[][obtained a similar result]{demers:98}; however the data are also formally consistent with a scenario where all the associations sit at the same distance.
Note that these conclusions assume that all the clusters plotted in Figure \ref{f:youngcmds} have approximately the same metallicity. We find that isochrones of age $30$\ Myr and $[$M$/$H$] = -0.5$ and $-1.5$ fit the data as well as that for our assumed $[$M$/$H$] = -1.0$, but require different vertical offsets. Changes of $\pm 0.5$ dex in metal abundance lead to changes in the measured distance modulus of roughly $\pm 0.2$ mag. However, the fact that we observe strong consistency between the distance estimates across our sample suggests that any cluster-to-cluster metallicity variations are probably much smaller than this. It is worth emphasizing that all the associations we have measured here, even those at the easternmost edge of the inter-Cloud region, sit on the far side of the LMC \citep[for which we assume $\mu = 18.49$;][]{degrijs:14}. For our adopted $[$M$/$H$] = -1.0$ the associations have an average line-of-sight distance $\approx 4$\ kpc greater than that to the LMC centre; however, their distance moduli are larger than $18.49$ for all metallicities in the range $-1.0 \pm 0.5$ (with the metal-poor end leading to distances approximately matching that of the LMC). This is in contrast to the results of \citet{bica:15} who measured young associations in the SMC half of the inter-Cloud region to lie {\it closer} to us than the LMC.
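For reference, converting these moduli to physical distances via $d = 10^{\,\mu/5+1}$\ pc gives
\begin{equation*}
\mu = 18.65 \rightarrow d \approx 53.7\ {\rm kpc}, \qquad \mu = 18.70 \rightarrow d \approx 55.0\ {\rm kpc}, \qquad \mu_{\rm LMC} = 18.49 \rightarrow d \approx 49.9\ {\rm kpc},
\end{equation*}
consistent with the $\approx 4$\ kpc mean offset quoted above.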
Table \ref{t:params} lists coordinates for each of the main associations discussed above, and summarizes our age and distance estimates. Note that the entries for the core-shell structure reflect refined age estimates that we derive in Section \ref{s:discussion}.
\subsection{Structures and luminosities of the main associations}
\label{ss:structures}
It is informative to derive structural parameters, and luminosity and mass estimates, for each of the main inter-Cloud associations in our survey. To achieve this we employ the methodology developed by \citet{martin:08,martin:16}. For a given target we select all $N$ stars in some local region $\mathcal{A}$ that fall inside the CMD selection box defined in Figure \ref{f:cmdmap}, and determine the density model:
\begin{equation}
\Sigma(r) = \frac{1.68^2}{2\pi r_h^2 (1-e)} N_* \exp(-1.68r/r_h) + \Sigma_b
\end{equation}
that maximises the likelihood of the data. This consists of a simple exponential radial decline in the surface density\footnote{While there are a wide variety of models that one might consider fitting to a cluster-like system, the objects studied here are sufficiently poorly populated that none provides any particular advantage over the others. The exponential models we adopt provide a convenient description of the data and utilise one fewer parameter than most other families; moreover, it is commonly observed that young star clusters in the Magellanic Clouds tend to exhibit radial surface density profiles that are not obviously truncated \citep*[e.g.,][]{eff:87,mackey:03a,mackey:03b}.}, plus a background level $\Sigma_b$ that is assumed constant over $\mathcal{A}$. This background is computed in terms of $N$ and the integral of the exponential profile over $\mathcal{A}$ \citep[see equation 6 in][]{martin:16}; we note that $\mathcal{A}$ need not be continuous, which is relevant because several of the target associations fall close to the edge of our survey footprint. The quantity $r$ is the elliptical radius, defined in terms of the projected sky coordinates $(x,y)$, an offset ($x_0,y_0)$ from the initial guess for the association centre \citep[for which we use the catalogue of][]{battinelli:92}, and the ellipticity $e = 1-b/a$ and position angle $\theta$ east of north \citep[see equation 5 in][]{martin:16}. The assumed exponential radial profile has a half-light radius $r_h$, while $N_*$ is the number of stars from the input list of $N$ that belong to the system.
We use the Markov chain Monte Carlo (MCMC) package {\it emcee} \citep{fm:13} to sample the posterior probability distribution functions (PDFs) and infer the most likely set of model parameters $\{x_0,y_0,e,\theta,r_h,N_*\}$ given our observations. We assume flat priors for all parameters and impose the following restrictions to ensure that the solutions remain meaningful: $0.0 \leq e < 1.0$, $0.0 \leq \theta < 180.0$, $r_h > 0.0$, and $0 < N_* \le N$. As an example outcome, we consider the solution for the ``core'' association. Figure \ref{f:coremcmc} shows the one- and two-dimensional marginalized PDFs for the four physically important parameters in our model, $\{e,\theta,r_h,N_*\}$. For this amply-populated system and for ICA 73, the solutions are well defined; however we found that the smaller associations ICA 72, 72a, 72b, 74 and 75 all possess too few stars for the algorithm to properly converge. In order to obtain some measure of the sizes of these systems we attempted to simplify the problem by fixing the ellipticity to be zero (and thus removing two parameters) -- this facilitated convergence in all cases except for ICA 72, which appears not to constitute a sufficiently significant density enhancement relative to the background (see below).
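For concreteness, a stripped-down sketch of this likelihood (our own simplification, assuming the exponential profile is fully contained within $\mathcal{A}$ so that the background is simply $(N-N_*)/\mathcal{A}$, and adopting one common convention for the elliptical radius; all names are ours) is:
\begin{verbatim}
import numpy as np
import emcee

def ln_prob(p, x, y, area):
    """Unbinned Poisson log-likelihood for the exponential model,
    with the flat priors of the text folded in."""
    x0, y0, e, theta, rh, n_star = p
    n = len(x)
    if not (0.0 <= e < 1.0 and 0.0 <= theta < 180.0
            and rh > 0.0 and 0.0 < n_star <= n):
        return -np.inf
    t = np.radians(theta)
    dx, dy = x - x0, y - y0
    xr = dx * np.sin(t) + dy * np.cos(t)    # along the major axis
    yr = dx * np.cos(t) - dy * np.sin(t)    # along the minor axis
    r = np.hypot(xr, yr / (1.0 - e))        # elliptical radius
    sig_b = (n - n_star) / area             # flat background
    sig = (1.68**2 * n_star / (2 * np.pi * rh**2 * (1 - e))
           * np.exp(-1.68 * r / rh) + sig_b)
    return np.sum(np.log(sig))

# sampler = emcee.EnsembleSampler(32, 6, ln_prob, args=(x, y, area))
\end{verbatim}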
Table \ref{t:struct} presents the measured structural parameters for each association; the results are also shown graphically in Figure \ref{f:youngstruct}. The derived positional offsets $(x_0,y_0)$ are not provided explicitly, but are incorporated into the listed coordinates for each target in Table \ref{t:params}, where we have taken the base positions from \citet{battinelli:92}, converted these to J2000.0, and then shifted them by $(x_0,y_0)$. As noted above, we could not obtain a solution for the object catalogued by \citet{battinelli:92} as ICA 72 -- the reason for this can be seen in the relevant panel of Figure \ref{f:youngstruct} where the putative structure is circled in yellow. If there is an association at this location then it is very diffuse with a central surface density barely above the background level.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{figure5.eps}
\end{center}
\caption{Structural solution for the ``core'' association. Marginalized one- and two-dimensional posterior PDFs are shown for the four physically important parameters in our exponential radial density model. These are the ellipticity $e$, the position angle $\theta$ east of north, the half-light radius $r_h$, and the number of member stars $N_*$ (i.e., the number of stars in the region $\mathcal{A}$ lying in the CMD selection box that belong to the system in this solution). The contours show the $1\sigma$, $2\sigma$, and $3\sigma$ confidence levels. Created with {\it corner.py} \citep{fm:16}.
\label{f:coremcmc}}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=80mm]{figure6a.eps}
\hspace{5mm}
\includegraphics[width=80mm]{figure6b.eps} \\
\vspace{1mm}
\includegraphics[width=80mm]{figure6c.eps}
\hspace{5mm}
\includegraphics[width=80mm]{figure6d.eps} \\
\vspace{1mm}
\includegraphics[width=80mm]{figure6e.eps}
\hspace{5mm}
\includegraphics[width=80mm]{figure6f.eps}
\end{center}
\caption{Structural properties of the main young stellar associations in the eastern inter-Cloud region, measured using the maximum likelihood procedure described in the text \citep[see also][]{martin:08,martin:16}. Two panels are presented per target. The left panel shows the spatial distribution of stars falling in the CMD selection box for the region $\mathcal{A}$ surrounding the association. The projections are gnomonic, with the origin corrected for the derived offsets $(x_0,y_0)$. Also marked are ellipses with semi-major axis $r_h$ (solid line) and $2r_h$ (dashed line), ellipticity $e$, and orientation $\theta$ east of north. Shaded areas indicate regions that were not imaged in our survey, and which were excluded from the calculation of the background level $\Sigma_b$. The right panel for each target shows the best-fitting exponential density profile as a function of elliptical radius $r$. The data are marked with solid points (each of which has Poissonian error bars), and the background level is indicated with the dashed horizontal line. We could not obtain a solution for the putative association ICA 72; the location of this structure is circled with a yellow dashed line in the left-hand panel for ICA 72b.
\label{f:youngstruct}}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=83mm]{figure7.eps}
\end{center}
\caption{Spatial density map of young stars in the eastern half of the inter-Cloud region, together with H{\sc i} column density contours from the Parkes Galactic All-Sky Survey \citep[GASS;][]{naomi:09,kalberla:10,kalberla:15} for radial velocities in the range $v_{\rm LSR} = 150-220$\ km$\,$s$^{-1}$. A low column density hole in the H{\sc i} distribution is evident, aligned perfectly with the stellar core-shell structure. The vertical and horizontal dashed lines indicate the locations of the perpendicular slices through the GASS data cube plotted in Figure \ref{f:slices}. The position of the young star DI 1388 is marked with a yellow point in the upper portion of the hole (see the discussion in Section \ref{s:discussion}).
\label{f:gas}}
\end{figure*}
The two most populous targets -- ICA 73 and the core at the centre of the shell-like structure -- have remarkably large half-light radii of order $\sim 100$\ pc. The remaining four associations (ICA 72a, 72b, 74 and 75), for which we obtained constrained solutions with the ellipticity set to zero, also have sizeable half-light radii -- even the most compact of these (ICA 74, with $r_h = 12.5$\ pc) is many times larger than is typically seen for young clusters and associations in the Magellanic Clouds \citep[$\approx 1-2$\ pc,][]{mackey:03a,mackey:03b,mackey:08}. A similar observation was made by \citet{bica:15} for young associations in the western half of the inter-Cloud region. Returning to ICA 73 and the core cluster, it is also notable that these two systems are both rather elliptical with $e\approx0.5$, and have position angles that match to within $\sim 15\degr$. Intriguingly these are also both closely aligned, to better than $\sim 10\degr$, with the position angle of the LMC relative to the SMC (which is $\theta\approx48\degr$).
To estimate an integrated luminosity and total stellar mass for each association, we follow a procedure similar to that outlined by \citet{martin:16}. For a given association we assume a PARSEC isochrone of the appropriate age and shifted to the measured distance modulus, together with a \citet{kroupa:01} initial mass function (IMF) over the mass range defined by the isochrone. Next, we randomly draw a target number of stars $N_*$ from the structural parameter MCMC chain for the association. To build a model cluster we randomly sample the IMF using the isochrone to determine the $g$- and $r$-band flux per star, and add these fluxes to the running integrated luminosity and total mass. After each star is generated, we test whether it falls into the CMD selection box defined in Figure \ref{f:cmdmap}. Once we have flagged $N_*$ stars as falling in the selection box, the model is complete. We use the photometric transformation defined by Lupton on the SDSS web pages\footnote{\href{http://www.sdss.org/dr13/algorithms/sdssUBVRITransform/\#Lupton2005}{www.sdss.org/dr13/algorithms/sdssUBVRITransform/\#Lupton2005}} to convert the integrated $g$- and $r$-band magnitudes to Johnson $V$.
We generate $10^5$ model realisations per association. We define the integrated luminosity $M_V$ and the total mass $M_{*}$ for a given association as the $50^{\rm th}$ percentiles of the distributions of $10^5$ luminosities and masses respectively. The $1\sigma$ uncertainties are given by the $16^{\rm th}$ and $84^{\rm th}$ percentiles. We adopted this methodology because the distribution of luminosities, in particular, can be very asymmetric due to the presence of one or more very bright evolved stars in some models. This is rare for associations where $N_*$ is relatively small, but occurs in a third to a half of all models for the more luminous systems studied here.
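A compact Python sketch of this procedure is given below. The two-segment \citet{kroupa:01} IMF is drawn by inverse-transform sampling, while the isochrone arrays (\texttt{iso\_m}, \texttt{iso\_g}, \texttt{iso\_r}, sorted by increasing mass and in absolute magnitudes) and the selection-box test \texttt{in\_box} are placeholders standing in for the PARSEC table and the box defined in Figure \ref{f:cmdmap}.
\begin{verbatim}
import numpy as np

def kroupa_masses(n, m_lo, m_hi, m_br=0.5, rng=None):
    # Inverse-transform draws from xi(m) ~ m^-1.3 (m < m_br)
    # and m^-2.3 (m > m_br), continuous at the break; assumes
    # m_lo < m_br < m_hi (masses in Msun).
    rng = rng or np.random.default_rng()
    a1, a2 = 1.3, 2.3
    I = lambda a, lo, hi: (lo**(1 - a) - hi**(1 - a))/(a - 1)
    w1 = I(a1, m_lo, m_br)
    w2 = m_br**(a2 - a1)*I(a2, m_br, m_hi)
    u = rng.uniform(0.0, w1 + w2, size=n)
    m = np.empty(n)
    lo = u < w1
    m[lo] = (m_lo**(1 - a1) - (a1 - 1)*u[lo])**(1/(1 - a1))
    m[~lo] = (m_br**(1 - a2) - (a2 - 1)*(u[~lo] - w1)
              / m_br**(a2 - a1))**(1/(1 - a2))
    return m

def realise(nstar, iso_m, iso_g, iso_r, in_box, mu=18.7, rng=None):
    # One model cluster: draw stars until nstar of them fall in
    # the CMD selection box, accumulating flux and mass as we go.
    rng = rng or np.random.default_rng()
    n_in, mass, fg, fr = 0, 0.0, 0.0, 0.0
    while n_in < nstar:
        m = kroupa_masses(1, iso_m.min(), iso_m.max(), rng=rng)[0]
        g = np.interp(m, iso_m, iso_g) + mu
        r = np.interp(m, iso_m, iso_r) + mu
        mass += m
        fg += 10**(-0.4*g)
        fr += 10**(-0.4*r)
        if in_box(g - r, r):
            n_in += 1
    g_t, r_t = -2.5*np.log10(fg), -2.5*np.log10(fr)
    V = g_t - 0.5784*(g_t - r_t) - 0.0038   # Lupton (2005)
    return V - mu, mass                     # (M_V, M_*)
\end{verbatim}
Repeating \texttt{realise} $10^5$ times, with \texttt{nstar} drawn at random from the structural MCMC chain for a given association, yields the luminosity and mass distributions from which the percentiles quoted above are taken.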
To estimate the luminosity and mass of the shell structure, we counted the number of stars in the CMD in Figure \ref{f:youngcmds} that fall in the usual selection box, and calculated the area of sky over which these stars were distributed. We then assumed the background surface density measured from the structural parameter fits for the cluster at the core of the shell, and subtracted this level from the overall star count to leave $N_* = 159 \pm 14$ for the shell (where the uncertainty is entirely due to that in the assumed background density, which in turn comes from the uncertainty in $N_*$ for the core). With this estimate in hand, it was straightforward to generate random models as described above to determine $M_V$ and $M_{*}$.
The results of our measurements are presented in Table \ref{t:struct}. In general the young stellar associations in our survey footprint are very low luminosity systems: ICA 72a, 72b, 74, and 75 have $M_V$ between $-2$ and $-3$ and $M_{*} \sim 100-200\,{\rm M}_\odot$. ICA 73 and the core and shell structures are considerably more substantial, each with $M_V$ between $-4.5$ and $-6$ and $M_{*} \sim 500-1200\,{\rm M}_\odot$ -- although, as previously noted, the shell appears fragmented into at least a dozen significant concentrations of young stars. On average, each of these fragments has a luminosity much more akin to the smaller isolated systems we have studied (ICA 72a, 72b, 74, and 75). Speculatively, this may reflect the characteristic scale of star formation in this part of the inter-Cloud region; if so, it is also possible that the more massive core and ICA 73 associations were formed via the merger of several such fragments (which could also help explain their much larger half-light radii). Nonetheless, it is worth emphasising that, when considered as a single structure, the shell has the highest luminosity of all the associations studied here -- even greater than the cluster located at its centre.
\subsection{Relationship to the Magellanic Bridge of H{\sc i}}
\label{ss:gas}
It is informative to compare the locations of the main concentrations of young stars in the inter-Cloud region to the distribution of H{\sc i} gas. It is well known that the young populations between the LMC and the SMC trace the Magellanic H{\sc i} Bridge quite closely \citep[e.g.,][]{dinescu:12,skowron:14,belokurov:17}; however our new deep photometry facilitates a much more detailed comparison than has previously been possible.
The H{\sc i} gas in the Magellanic Bridge is well separated from the Galactic foreground in velocity space, with a total span of $v_{\rm LSR} \approx 150-300$\ km$\,$s$^{-1}$ in the local-standard-of-rest frame \citep[see e.g.,][]{bruns:05,nidever:08,nidever:10}. By examining different velocity intervals across this range, we discovered clear signatures of interplay between the young stars and gas. In Figure \ref{f:gas} we reproduce our density map of young stars in the eastern inter-Cloud region, but now overplot contours of H{\sc i} column density from the Parkes Galactic All-Sky Survey \citep[GASS;][]{naomi:09,kalberla:10,kalberla:15} for velocities in the interval $v_{\rm LSR} = 150-220$\ km$\,$s$^{-1}$. While the young associations ICA 73, 74, and 75 show no obvious correspondence with the H{\sc i} at these velocities, there is a striking correlation between the gas and young stars in the region occupied by the core-shell feature. Specifically, there is a relatively low column density ``hole'' in the H{\sc i} distribution centred approximately on the cluster at the core of the stellar structure. The hole is bounded on all sides by some of the densest H{\sc i} features in the inter-Cloud region, except in the direction of increasing $\eta$ where the column density slowly declines. The shell of young stars matches precisely the location of the transition between low and high column densities at the edges of the H{\sc i} hole.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{figure8a.eps}\\
\includegraphics[width=0.48\textwidth]{figure8b.eps}\\
\end{center}
\caption{H{\sc i} position-velocity profiles showing perpendicular slices through the GASS data cube at the location of the core association (i.e., along the lines of constant $\xi = -10.45$ and $\eta = -7.50$ as marked in Figure \ref{f:gas}). The vertical black lines show the position of the core cluster -- the solid line marks its centre and the dotted lines at $\pm 0.5$ degrees indicate the approximate angular extent of the evacuated bubble. The black horizontal dashed line sits at $v_{\rm LSR} = 220$\ km$\,$s$^{-1}$; gas with velocity in the range $v_{\rm LSR} = 150-220$\ km$\,$s$^{-1}$ exhibits a strong correlation with the young stellar populations (again as shown in Figure \ref{f:gas}), while that with velocity in the range $v_{\rm LSR} = 220-280$\ km$\,$s$^{-1}$ shows little correlation (as in Figure \ref{f:gas2}). At the location of the bubble the H{\sc i} velocity profile has peaks near $v_{\rm LSR} \approx 180$\ km$\,$s$^{-1}$ and $\approx 205$\ km$\,$s$^{-1}$ -- i.e., a typical width of $\sim 25$\ km$\,$s$^{-1}$.
\label{f:slices}}
\end{figure}
Based on this map, it is reasonable to hypothesize that stellar winds and supernovae from massive stars in the central cluster swept up the surrounding H{\sc i} gas to produce an expanding evacuated ``bubble'' with material piled up on its surface, triggering a shell of new star formation. Other examples of sequential star formation have been observed in the LMC \citep[e.g.,][]{oey:95,oey:98}. It is possible that the bubble has ``blown out'' via the lower column density region in the direction of increasing $\eta$, allowing the hot interior gas to vent.
We investigate this scenario in more detail in Section \ref{ss:feedback}; for now, we show in Figure \ref{f:slices} perpendicular slices through the GASS data cube at the location of the core association (i.e., along the lines of constant $\xi = -10.45$ and $\eta = -7.50$ as marked in Figure \ref{f:gas}). Considering only the H{\sc i} with $v_{\rm LSR} = 150-220$\ km$\,$s$^{-1}$, the velocity profile exhibits a double-peaked structure; this is especially evident in the top panel of Figure \ref{f:slices}. The peaks sit near $v_{\rm LSR} \approx 180$\ km$\,$s$^{-1}$ and $\approx 205$\ km$\,$s$^{-1}$ -- i.e., the typical width across the region occupied by the evacuated bubble is $\sim 25$\ km$\,$s$^{-1}$. Under the assumption that this can be attributed entirely to expansion of the shell, it indicates an expansion velocity of $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$.
\begin{figure*}
\begin{center}
\includegraphics[height=83mm]{figure9.eps}
\end{center}
\caption{Spatial density map of young stars in the eastern half of the inter-Cloud region, together with H{\sc i} column density contours from GASS for radial velocities in the range $v_{\rm LSR} = 220-280$\ km$\,$s$^{-1}$. This higher velocity H{\sc i} is more tenuous and possesses a more northerly distribution than the gas shown in Figure \ref{f:gas}. In addition, there is a possible correlation between the highest density H{\sc i} clump and the location of the young associations ICA 73 and 74.
\label{f:gas2}}
\end{figure*}
The position-velocity slices in Figure \ref{f:slices} also show evidence for a gaseous component with velocity in the range $v_{\rm LSR} \sim 220-280$\ km$\,$s$^{-1}$. To explore the origin of this we show, in Figure \ref{f:gas2}, column density contours for H{\sc i} in this velocity interval overplotted on our density map of young stars. At these velocities the distribution of the gas is quite different from that for the slice plotted in Figure \ref{f:gas}. Whereas the lower velocity H{\sc i} has higher column densities and is concentrated to the south of our survey region, the higher velocity gas is much more tenuous and preferentially located in the north. This matches previous observations by \citet{muller:04}, who suggested that the Magellanic Bridge may be a projection of two kinematically and morphologically distinct gaseous features. The H{\sc i} distribution shown in Figure \ref{f:gas2} shows little apparent correlation with the main young stellar associations, apart from the highest density peak which sits neatly between ICA 73 and 74. Radial velocity measurements for these two associations would be needed to test whether or not this is a chance projection.
\section{Discussion}
\label{s:discussion}
Our new deep imaging of the eastern half of the Magellanic inter-Cloud region has allowed us to study the properties of the young stellar populations in this location with unprecedented clarity. Because these stars have recently formed out of gas in the Magellanic Bridge, they are valuable independent probes for exploring the characteristics and origin of this structure. They are also of considerable intrinsic interest, due to the low density and low metallicity nature of the star formation, and its location in the outer Galactic halo where the tidal force of the Milky Way is relatively benign.
\subsection{Distribution of young stellar populations}
In agreement with many previous studies of this region, we find that the young stars are predominantly concentrated into a number of associations distributed between the LMC and SMC. Our colour-magnitude diagrams for the main young associations in the eastern inter-Cloud region indicate that these stellar systems are generally younger than $\approx 30$\ Myr (although the easternmost objects ICA 74 and 75 could potentially be as old as $\sim 200$\ Myr), and sit at distances intermediate between those of the LMC and SMC. This is consistent with the favoured scenario where the Magellanic Bridge of H{\sc i} gas has been largely stripped from the SMC due to a close interaction $\sim 200$\ Myr ago \citep[e.g.,][]{gardiner:96,besla:12}. Under the assumption that the young associations all share metallicities comparable to that observed for the gas in the Bridge -- i.e., $[$M$/$H$] \approx -1$ -- we find that they sit, on average, $\sim 4$\ kpc further along the line-of-sight than the LMC centre. Our measurements allow for a scenario where all the associations have the same line-of-sight distance, or for one where there is a small line-of-sight depth (of $\approx 1.25$\ kpc) to the overall distribution of young populations. We can exclude the possibility of any much greater variations in the distances to individual associations. Previous studies have typically obtained similar results for young inter-Cloud associations further to the west (i.e., closer to the SMC) than those studied here \citep[e.g.,][]{grondin:92,demers:98}; however, see \citet{bica:15} for an alternative view.
If there {\it is} a non-zero line-of-sight depth to the distribution of young stellar populations in our survey region, it could be the result of a simple distance gradient along the Bridge, with clusters projected closer to the SMC (in our case, the core-shell structure) lying at greater distances than those projected closer to the LMC (in our case the ICA 72 complex, ICA 73, 74 and 75). On the other hand, work by \citet{muller:04} revealed an apparent discontinuity in the velocity of the H{\sc i} gas in the Bridge, with the higher velocity and less turbulent gas sitting to the north of the axis joining the LMC and SMC, and the lower velocity and more turbulent gas lying to the south. These authors concluded that the Magellanic Bridge might be a projection of two kinematically and morphologically distinct filaments, possibly representing two distinct arms of gas emanating from the SMC. Absorption-line studies of the gas in the Magellanic Bridge commonly find different components with different velocities, different abundance patterns, and even different ionization fractions along the same line-of-sight \citep[e.g.,][]{lehner:02,lehner:08,misawa:09}.
From Figures \ref{f:gas} and \ref{f:gas2} we have observed that the lower velocity H{\sc i} gas in the portion of the Bridge that we have imaged, with $v_{\rm LSR} \la 220$\ km$\,$s$^{-1}$, is clearly associated with the core-shell structure seen at the western edge of our survey footprint but almost absent from the region around ICA 73, 74, and 75 to the east. On the other hand, the more tenuous higher velocity gas, with $v_{\rm LSR} \ga 220$\ km$\,$s$^{-1}$, shows no apparent correlation with the core-shell structure, but has its highest density peak projected precisely between ICA 73 and 74. The fact that these stellar associations are much smaller than the scale of the H{\sc i} clump may suggest that this is a chance alignment. If, however, the stars and gas are indeed correlated in this region, it could plausibly indicate that the associations at the eastern end of the Bridge were formed from gas in the higher velocity filament. Radial velocity measurements for stars in each of the young associations we have studied would be valuable for testing this scenario. In addition, abundance measurements \citep[as in, e.g.,][]{dufton:05,hunter:07,trundle:07} would (i) allow us to test whether associations in the putative higher velocity gas filament have systematically different abundance patterns to those in the lower velocity filament; (ii) allow us to check for cluster-to-cluster abundance variations which could provide information on how well mixed the gas in the Magellanic Bridge is; and (iii) help fix the absolute distances to the young associations\footnote{In Section \ref{ss:properties} we found that isochrones up to $\sim 0.5$ dex more metal-rich or metal-poor than $[$M$/$H$] = -1$ fit the {\it shape} of the main sequence equally well but shift the measured distance moduli by up to $\pm 0.2$ mag.}.
\subsection{Stellar feedback in the Magellanic Bridge}
\label{ss:feedback}
The most intriguing young stellar structure in the eastern inter-Cloud region is the core-shell feature revealed at high contrast by our deep observations. Based on the morphology and velocity profile of the coincident H{\sc i}, which shows evidence for an expanding low density evacuated region with edges that are perfectly aligned with the arc of young stars, we hypothesized that stellar winds and supernovae from massive stars in the central cluster swept up and compressed the surrounding gas into a shell, triggering a new burst of star formation. Similar examples of H{\sc i} holes and shells are commonly seen in disk environments, including the Milky Way, other Local Group galaxies, and more distant systems \citep[see e.g.,][and references therein]{naomi:02,boomsma:08}. The present case is unusual, however, in that the central power source is still visible and its properties can be measured quite precisely. This allows us to use different methods from the literature to estimate the energy required to drive the formation of the evacuated bubble, and then check their consistency by comparing the results to the available budget given the inferred number of massive stars present in the central cluster.
Perhaps the simplest way to estimate the kinetic energy $E_k$ necessary to form the bubble is by calculating the mass $M_s$ of H{\sc i} swept into the shell:
\begin{equation}
E_k = 0.5 \, M_s \, v_{\rm exp}^{2} = 4\times10^{43} \pi \, R^2 \, \Sigma \, v_{\rm exp}^{2}\,\,\,\,{\rm erg.}
\label{e:kinetic}
\end{equation}
Here $R$ is the observed radius of the shell in pc, $v_{\rm exp}$ is the observed expansion velocity in km$\,$s$^{-1}$, and $\Sigma$ is the surface mass density of the ambient gas into which the bubble expanded. We have already measured $R \approx 480$\ pc, and $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$, while the column density at the position of the bubble is $\approx 5\times 10^{20}$\ cm$^{-2}$, which corresponds to a surface mass density of $4\,{\rm M}_\odot\,{\rm pc}^{-2}$ (see Figure \ref{f:gas}, although this should probably be considered a lower limit given the substantially higher density in most of the immediately adjacent locations). The resulting shell mass is $M_s \approx 1.1\times 10^7\,{\rm M}_\odot$, while the inferred kinetic energy is $E_k \approx 1.8\times 10^{52}$\ erg. The uncertainty in our estimate for $v_{\rm exp}$ is dominant in this calculation; adopting an uncertainty of $20\%$ (i.e., assuming $v_{\rm exp}$ falls in the range $\approx 10-15$\ km$\,$s$^{-1}$ as per Figure \ref{f:slices}) leads to an uncertainty of $\pm 0.7\times 10^{52}$\ erg in the kinetic energy.
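As a sanity check on this arithmetic, we note that the numerical prefactor in Eq. \ref{e:kinetic} corresponds to a swept-up mass of $M_s = 4\pi R^2\,\Sigma$ (in solar masses, for $R$ in pc and $\Sigma$ in ${\rm M}_\odot\,{\rm pc}^{-2}$), with the conversion to cgs units absorbed into the constant; this reading reproduces both quoted numbers. A few lines of Python suffice to verify:
\begin{verbatim}
import numpy as np

R, v_exp, Sigma = 480.0, 12.5, 4.0    # pc, km/s, Msun/pc^2
M_s = 4.0*np.pi*R**2*Sigma            # ~1.1e7 Msun
E_k = 4e43*np.pi*R**2*Sigma*v_exp**2  # eq. (1): ~1.8e52 erg
# a 20% uncertainty on v_exp maps to ~40% on E_k,
# i.e. ~ +/- 0.7e52 erg as quoted in the text
print(M_s, E_k)
\end{verbatim}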
It is also possible to define an energy $E_E$, which is the equivalent energy that would have to be instantaneously deposited at the centre of the shell to account for the observed radius and expansion velocity \citep[see][]{heiles:84}. The literature contains a variety of methods for estimating this quantity, although we note that the approximation that the energy is instantaneously deposited is likely not appropriate for our situation (where we are considering an evolving OB association rather than a single supernova explosion). Following \citet{chevalier:74}, who conducted hydrodynamical simulations that included radiative cooling, we first try:
\begin{equation}
E_E = 5.3\times10^{43}\, n_0^{1.12}\, R^{3.12}\, v_{\rm exp}^{1.40} \,\,\,\,{\rm erg}
\label{e:chevalier}
\end{equation}
\citep[see also, e.g.,][]{heiles:79,naomi:02}. In this expression, $n_0$ is the initial ambient density of the gas in cm$^{-3}$. To estimate this we start with our adopted column density of $\approx 5\times 10^{20}$\ cm$^{-2}$, and assume that the gas is at least as deep along the line-of-sight as the present radius of the shell (otherwise the hot gas would have vented in this direction and stalled the expansion at a smaller size). This leads to $n_0 \approx 0.17$\ cm$^{-3}$, and an expansion energy of $E_E \approx 5.8\times 10^{52}$ erg. Adopting uncertainties of $10\%$, $5\%$, and $20\%$ in our measurements of $n_0$, $R$, and $v_{\rm exp}$ respectively, yields an overall uncertainty of $\pm 2.0\times 10^{52}$ erg in this calculation.
An alternative estimate of the expansion energy comes from \citet*{cioffi:88}:
\begin{equation}
E_E = 6.8\times10^{43}\, n_0^{1.16} R^{3.16}\, v_{\rm exp}^{1.35}\, \zeta_m^{0.16} \,\,\,\,{\rm erg.}
\label{e:cioffi}
\end{equation}
Again, this expression is derived from hydrodynamical simulations that included radiative cooling, and again it makes the approximation that the energy is deposited instantaneously. The formula has a similar functional form to that of Eq. \ref{e:chevalier} but incorporates an explicit metallicity dependence, $\zeta_m$, which is the abundance of the gas relative to solar. Taking $[$M$/$H$] = -1.0$ for the Magellanic Bridge, such that $\zeta_m \approx 0.1$, we find that for the observed shell radius of $480$\ pc, expansion velocity $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$, and the assumed ambient gas density $n_0 \approx 0.17$\ cm$^{-3}$, the expansion energy $E_E\approx 5.4\times10^{52}$\ erg. Adopting uncertainties of $10\%$, $5\%$, and $20\%$ in our measurements of $n_0$, $R$, and $v_{\rm exp}$ as before, and adding an uncertainty of $\sim 20\%$ in $\zeta_m$, yields an overall uncertainty of $\pm 1.8\times10^{52}$ erg.
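Both expansion-energy estimates, and the quadrature propagation of the adopted measurement uncertainties, can be verified directly with the values quoted above:
\begin{verbatim}
import numpy as np

n0, R, v, zeta = 0.17, 480.0, 12.5, 0.1  # cm^-3, pc, km/s, Z/Zsun
E_chev = 5.3e43 * n0**1.12 * R**3.12 * v**1.40              # eq. (2)
E_ciof = 6.8e43 * n0**1.16 * R**3.16 * v**1.35 * zeta**0.16 # eq. (3)
print(E_chev, E_ciof)      # ~5.8e52 and ~5.4e52 erg
# 10%, 5%, 20% errors on n0, R, v, added in quadrature:
frac = np.sqrt((1.12*0.10)**2 + (3.12*0.05)**2 + (1.40*0.20)**2)
print(frac*E_chev)         # ~2e52 erg, as quoted
\end{verbatim}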
We have obtained three estimates for the energy required to drive the expansion of the observed bubble. The expressions of \citet{chevalier:74} and \citet{cioffi:88} take into account radiative energy losses, and thus in these cases the calculated energy of $\approx 5.5 \times 10^{52}$\ erg should be the total required input. On the other hand the first simple estimate of $\sim 1.8 \times 10^{52}$\ erg instead gives only the kinetic energy, which could be as little as $\sim 10\%$ of the total amount needed as most of the supernova explosion energy will ultimately be radiated \citep[e.g.,][]{thornton:98}. Under the assumption that a single supernova injects $\sim 10^{51}$\ erg into the surrounding ISM, our estimates suggest the total energy requirement is roughly equivalent to the expected input from $\ga 50$ OB stars.
Our simple random realisations of the star cluster at the core of the shell, from which we determined an integrated luminosity $M_V = -5.3^{+0.5}_{-1.1}$, indicate a present-day mass of $900^{+110}_{-100}\,{\rm M}_\odot$ in a system containing $1850 \pm 250$ stars \citep[recall that this assumes the IMF of][]{kroupa:01}. With an age of $\approx 30$\ Myr, the present-day main sequence turn-off mass is $\sim 8\,{\rm M}_\odot$. For a \citet{kroupa:01} IMF, the fraction of stars in a zero-age cluster with masses greater than $8\,{\rm M}_\odot$ is about $0.7\%$. This means that in the central cluster of the core-shell system, we expect that only $\sim 10-15$ stars have evolved off the main sequence and exploded as supernovae since its formation. This is discrepant by a factor of roughly $5$ with the number implied by our energy estimates.
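The IMF bookkeeping behind these numbers is compact enough to verify directly. For a \citet{kroupa:01} IMF with $\xi(m) \propto m^{-1.3}$ for $0.08 \le m/{\rm M}_\odot < 0.5$ and $\propto m^{-2.3}$ above the break, counting stars between the hydrogen-burning limit and a nominal upper cutoff of $100\,{\rm M}_\odot$ (the result is insensitive to the latter choice):
\begin{verbatim}
def seg(a, lo, hi):                   # integral of m^-a dm
    return (lo**(1 - a) - hi**(1 - a))/(a - 1)

n_low  = seg(1.3, 0.08, 0.5)          # 0.08-0.5 Msun (k1 = 1)
n_high = 0.5*seg(2.3, 0.5, 100.0)     # continuity: k2 = 0.5 k1
n_sn   = 0.5*seg(2.3, 8.0, 100.0)     # stars above 8 Msun
frac = n_sn/(n_low + n_high)
print(frac)         # ~0.0063, i.e. ~0.7 per cent
print(frac*1850)    # ~12 supernovae to date, cf. 10-15 above
\end{verbatim}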
This discrepancy could arise because the simple expressions for the expansion energy considered above are not strictly applicable to the system we are investigating. To explore this we consider the theory of superbubble expansion \citep[e.g.,][]{weaver:77,maclow:88}, which may be more appropriate here because it describes a supershell produced around an OB association. In particular, this model accounts for the collective effects of multiple sequential supernovae rather than making the assumption that all the energy is deposited instantaneously. As long as the superbubble has not blown out of the surrounding H{\sc i}, the predicted radius is \citep[e.g.,][]{mccray:87}:
\begin{equation}
R = 97 \, (N_{\rm SN} E_{51})^{0.2} \, n_0^{-0.2} \, t_7^{0.6}\,\,\,\,{\rm pc,}
\label{e:bubble1}
\end{equation}
while the expected velocity of the shell is given by:
\begin{equation}
v_{\rm exp} = 5.7 \, (N_{\rm SN} E_{51})^{0.2} \, n_0^{-0.2} \, t_7^{-0.4}\,\,\,\,{\rm km}\,{\rm s}^{-1}.
\label{e:bubble2}
\end{equation}
In these expressions $N_{\rm SN}$ is the number of stars that will become supernovae over the lifetime of the OB association, $E_{51}$ is the assumed energy of an individual supernova in units of $10^{51}$\ erg, and $t_7$ is the age of the association in units of $10$\ Myr. We assume $E_{51} \sim 1.0$ and that the age of the core cluster is $\approx 30$\ Myr. The number of stars that will become supernovae over the lifetime of the OB association is more difficult to determine. We have shown that $\sim 10-15$ stars have likely already exploded as supernovae in our cluster; how many more will explode in the future is dependent on what we assume the minimum mass of a supernova progenitor to be. Given that the present-day main sequence turn-off mass is $\sim 8\,{\rm M}_\odot$, which is close to the expected minimum mass for a Type II supernova, adopting $N_{\rm SN} = 15 \pm 5$ seems reasonable. In this case, we find the predicted radius and velocity of the supershell to be $R = 460 \pm 60$\ pc and $v_{\rm exp} \approx 9.0 \pm 1.0$\ km$\,$s$^{-1}$, respectively.
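Evaluating Equations \ref{e:bubble1} and \ref{e:bubble2} with these inputs ($N_{\rm SN} = 15$, $E_{51} = 1$, $n_0 = 0.17$\ cm$^{-3}$, $t_7 = 3$) is then a short exercise, which also shows how the adopted uncertainty on $N_{\rm SN}$ translates into the quoted ranges:
\begin{verbatim}
N_SN, E51, n0, t7 = 15.0, 1.0, 0.17, 3.0  # t7 = 30 Myr / 10 Myr
R = 97.0*(N_SN*E51)**0.2 * n0**-0.2 * t7**0.6    # ~460 pc
v = 5.7*(N_SN*E51)**0.2 * n0**-0.2 * t7**-0.4    # ~9.0 km/s
# N_SN = 10-20 gives R ~ 425-490 pc and v ~ 8.3-9.5 km/s,
# consistent with the +/- ranges quoted in the text
print(R, v)
\end{verbatim}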
These estimates are close to the observed values for our shell. Its measured radius of $R\approx 480$\ pc sits well within the uncertainty on the theoretical prediction, while the expansion velocity of $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$ inferred from Figure \ref{f:slices} is only slightly larger than the predicted velocity. Importantly, the required number of supernova explosions is entirely consistent with the observed luminosity of the core association and a standard \citet{kroupa:01} IMF.
An important caveat is that this model assumes the supernova rate is high enough to effectively provide a continuous injection of energy. If this requirement is not satisfied, the collective action of the supernovae can be substantially less efficient at driving the expansion of the bubble \citep[see e.g.,][and references therein]{vasiliev:17}. The fact that we observe good agreement between the predictions of the model and the observed properties of our bubble suggests that the central OB association is in the regime where the assumptions on which Equations \ref{e:bubble1} and \ref{e:bubble2} are based remain valid; however, this might require that the central association was more compact in the past (or at least that the massive stars were all centrally located).
In Section \ref{ss:gas} we noted the possibility that the bubble has vented to the north where the column density of the surrounding H{\sc i} is seen to be lowest. In this context it is interesting that the observed velocity of the shell is similar to the value predicted by Equation \ref{e:bubble2}, because it means that not much deceleration has taken place with respect to the prediction for a freely expanding superbubble. This would suggest that either the superbubble has not yet blown out or that if blow-out has occurred it was not long ago. It is also relevant that the observed expansion speed of $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$ is close to the sound speed in a gas at a temperature of $10^4\,$K ($c_s \sim 10$\ km$\,$s$^{-1}$), because the phase during which the bubble is sweeping out material stops once the expansion velocity drops to the speed of sound in the ambient gas. This could suggest that the expansion of the shell being studied here may nearly be over. One complication that is not accounted for by the theory is the presence of the subsequent generation of stars that has formed in the shell. We have shown that this association is $\sim 35\%$ more massive than that in the central cluster; it is therefore more than likely that this population is also contributing a significant amount of energy to the expansion, which may explain why the observed expansion velocity is slightly larger than that predicted by the theory.
Finally, it is also worth considering results from previous, more direct studies of the gas properties towards the core-shell structure. In particular, the young star DI 1388 is projected inside the evacuated bubble (see Figure \ref{f:gas}) and has been used as a target for absorption-line investigations. These have observed ``multiple gas phases in a complex arrangement'' along this line of sight in the Bridge \citep{lehner:02}. About $70\%$ of the gas towards DI 1388 is ionized; moreover, two clear components are detected -- one at lower velocities ($165 \leq v_{\rm LSR} \leq 193$\ km$\,$s$^{-1}$) that is nearly fully ionized ($\sim 95\%$) and one at higher velocities ($193 \leq v_{\rm LSR} \leq 215$\ km$\,$s$^{-1}$) that is partially ionized ($\sim 50\%$) \citep{lehner:01,lehner:08}. Molecular hydrogen (H$_2$) is also seen along this sight line \citep{lehner:02}, but it is not clear how this may relate to the other components. Regardless of the complexities, these measurements directly indicate the presence of hot ionized gas coincident with the evacuated bubble both spatially (at least in projection) and in radial velocity. It is also relevant that the maps of diffuse H$\alpha$ emission constructed by \citet{barger:13} exhibit a weak enhancement at this location (see their Figure 7, near $l,b \approx 291.8, -40.9$), providing independent evidence for warm ionized gas.
\subsection{Triggered star formation in the core-shell structure}
If the scenario outlined above is correct, then stellar winds and supernova explosions due to massive stars in the central cluster are responsible for sweeping up the surrounding gas, compressing it, and triggering the formation of a new generation of stars in the shell. We might therefore expect the stars in the shell to be noticeably younger than those in the core. Given that our {\it DECam} observations saturate for stars brighter than $r \sim 15.5$, we were only able to place an upper age limit of $\sim 30$\ Myr on both the core and shell structures. However, shallower photometry for this region has recently become available as part of the SkyMapper Early Data Release \citep[EDR,][]{wolf:16}\footnote{See also \href{http://skymapper.anu.edu.au}{skymapper.anu.edu.au}}; in Figure \ref{f:skymapper} we plot CMDs for stars coincident with the central core (left) and shell (right). These extend our range by at least four magnitudes to $r \approx 11.5$. As with our {\it DECam} observations, the SkyMapper data have been calibrated to the SDSS scale via APASS. Since the SkyMapper zero-points are uncertain at the few percent level due to various limitations in the EDR, we matched stars with those present in our {\it DECam} catalogues over the range $16.0 \la r_0 \la 17.5$ ($6$ stars for the core, and $5$ for the shell) to make small corrections to the zero-points. We plot PARSEC isochrones with $[$M$/$H$] = -1$, distance modulus $\mu = 18.7$, and ages of $5$, $10$, $20$, and $30$\ Myr on the CMDs. Although the core and shell structures each possess only a handful of very luminous stars, it is evident that those in the core are consistent with the $30$\ Myr isochrone, while those for the shell sit between the $10$ and $20$\ Myr tracks. This suggests that stars in the shell are likely to be $\sim 10-15$\ Myr younger than those in the core, consistent with the scenario where feedback from the central core has triggered star formation in the shell.
\begin{figure}
\begin{center}
\includegraphics[width=0.40\textwidth]{figure10.eps}
\end{center}
\caption{Colour-magnitude diagrams for stars coincident with the core and shell structures, where the photometry has been taken from the SkyMapper Early Data Release \citep{wolf:16}. SkyMapper provides measurements for sources up to $\sim 4$ mag brighter than our {\it DECam} survey. Stars which have $r_0 < 17.7$ are considered to have ``reliable'' SkyMapper photometry in these fields, as indicated by the {\sc class\_star} parameter. Those that also have colour $(g-r)_0 < -0.25$ constitute the upper main sequence of the two stellar systems and are highlighted with yellow points. On each CMD we plot PARSEC isochrones with $[$M$/$H$] = -1$, distance modulus $\mu = 18.7$, and ages (upper to lower, as labelled) of $5$, $10$, $20$, and $30$\ Myr. These indicate that the shell structure is likely $\sim 10-15$\ Myr younger than the central cluster.
\label{f:skymapper}}
\end{figure}
\subsection{Properties of the young stellar associations}
All of the young stellar associations that we have studied in the eastern inter-Cloud region appear to possess unusually extended structures with half-light radii ranging between $\sim 10-100$ pc. This contrasts strongly with young clusters in the LMC and SMC, which typically have half-light radii of $\sim 1-2$\ pc. A similar pattern has been noted for associations in the western part of the inter-Cloud region by \citet{bica:15}. The low density of the young stellar associations in the inter-Cloud region might reflect the unusual conditions of star formation at this location. For example, \citet{elmegreen:08} has suggested that star formation in turbulent regions with relatively high Mach number and moderately low density can produce low density bound stellar systems (especially if the background tidal forces are low, to ensure survivability). Given that the gas in the Magellanic Bridge is thought to have been largely stripped from the SMC during a close encounter with the LMC $\approx 200$\ Myr ago, it is not difficult to imagine that some parts of the inter-Cloud region might satisfy these criteria. Indeed, we have noted that the position angles of the two most luminous young associations we have studied (ICA 73 and the cluster at the centre of the core-shell structure) are aligned with the axis joining the LMC and the SMC to better than $\approx 10\degr$. Since these systems are too young to have undergone dynamical relaxation, this must reflect the initial conditions and may suggest that star formation was triggered due to compression of the stripped Bridge gas during the close LMC-SMC encounter.
It would be of considerable interest to use numerical $N$-body models to explore whether the young associations in the Magellanic Bridge will remain bound for a significant period of time. This may be possible because they already appear to have largely expelled any residual gas, and moreover they now reside in the outer Milky Way halo where tidal forces are relatively benign. If we take our simple random realisations of the cluster at the centre of the core-shell association and fade the stellar populations to an age of $\sim 12$\ Gyr, the implied luminosity is $M_V \sim -1$. A similar calculation for ICA 73, which is presently not quite a factor of two less massive than the core cluster, yields a comparable result. These luminosities are extremely faint but match those for a number of ancient diffuse star clusters discovered in the outer Milky Way halo in recent years, some of which have half-light radii $r_h \ga 10$\ pc \citep[e.g.,][]{munoz:12,balbinot:13,kim:15}. Under the assumption that the diffuse structures seen for the young stellar associations in the inter-Cloud region are a signature of the particular star formation conditions prevalent in the Magellanic Bridge, and that this also holds for more massive clusters than those observed here, it is reasonable to speculate that the luminous extended globular clusters found in many dwarf galaxies and in the outskirts of larger galaxies \citep[see e.g.,][and references therein]{mackey:04,mackey:06,mackey:13,huxor:05,huxor:14,dacosta:09,hwang:11} could be tracers of gas-rich galaxy mergers and/or interactions that occurred in the early Universe.
\section{Summary}
We have used the {\it Dark Energy Camera} to conduct a deep contiguous imaging survey of the eastern half of the Magellanic inter-Cloud region. Our data allow us to explore the distribution and properties of the young stellar populations in this region with unparalleled clarity and resolution. Our main results are:
\begin{enumerate}
\item{As observed by many previous studies, the young inter-Cloud populations stretch all the way from the SMC ``wing'' to the south-western outskirts of the LMC disk. They are strongly spatially clustered, forming a narrow chain of low-mass stellar associations. The young ages inferred for these associations ($\la 30$\ Myr for the largest in our survey), and the observation that they are projected on top of the densest regions of H{\sc i} seen in the Magellanic Bridge, strongly suggest that the young stars have formed {\it in situ} rather than having been stripped from the LMC or SMC. This conclusion is reinforced by the fact that we see clear evidence for stellar feedback altering the properties of the ambient gas in at least one location (see below).\vspace{1mm}}
\item{Under the assumption that the young associations share a similar metallicity to that typically measured for the gas in the Magellanic Bridge (i.e., $[$M$/$H$]\sim -1$), we find them to have distance moduli $\mu \approx 18.65-18.70$. This is intermediate between that for the LMC ($\mu = 18.49$) and SMC ($\mu = 18.96$). Our measurements do not allow for very large, random, cluster-to-cluster variations in $[$M$/$H$]$, nor do they allow for a line-of-sight depth or distance gradient greater than $\Delta\mu \approx 0.15$ across the eastern half of the inter-Cloud region. At low significance the data are consistent with a mild distance gradient such that the objects that are closer to the LMC in projection have shorter line-of-sight distances by $\approx 0.05$ mag; however we cannot rule out that all the associations sit at the same distance. It is reasonable to assume that the distances we measure for the young associations reflect that for the majority of the gas in this part of the Magellanic Bridge.\vspace{1mm}}
\item{The seven main associations in the surveyed area have masses in the range $\approx 100-1200\,{\rm M}_\odot$. They are also remarkably diffuse -- we find half-light radii as large as $\sim 100$\ pc for two of the most populous clusters. It is possible that the extended structures that we observe for the young stellar associations in the eastern inter-Cloud region may reflect the low-density conditions of star formation at this location. It is also notable that the two most populous clusters are both rather elliptical (with $e\approx 0.5$) and oriented such that their major axes are aligned with each other, and the position angle of the LMC relative to the SMC, to within $\sim 10-15\degr$. Since these systems are too young to have undergone dynamical relaxation, this must reflect the initial conditions and may suggest that star formation was triggered due to compression of gas stripped from one or other of the Clouds during their most recent close encounter.\vspace{1mm}}
\item{We identify a vast shell of young stars surrounding a compact core association that is spatially coincident with a low column density ``bubble'' in the H{\sc i} distribution for velocities in the range $v_{\rm LSR} = 150-220$\ km$\,$s$^{-1}$. This structure has a radius of $R\approx 480$\ pc, and slices through the GASS data cube suggest the gaseous shell is expanding with a velocity of $v_{\rm exp} \approx 12.5$\ km$\,$s$^{-1}$. We argue that stellar winds and supernova explosions from massive stars in the central cluster swept up and compressed the ambient gas, triggering a new burst of star formation. The theory of superbubble expansion developed by, e.g., \citet{weaver:77}, \citet{mccray:87} and \citet{maclow:88} closely predicts the observed properties of the shell given the expected number of massive stars at the core inferred from its observed luminosity. We measure the mass of stars formed in the shell to be $\sim 35\%$ greater than the mass of the central cluster, while CMDs for bright stars at this location suggest that the shell could be $\sim 10-15$\ Myr younger than the association at its core. It is notable that young stars in the shell are not uniformly distributed but instead concentrate into at least a dozen fragments, each with a luminosity more akin to the smaller isolated associations seen nearby. This structure constitutes a superb example of positive stellar feedback operating in a relatively isolated environment.}
\end{enumerate}
\section*{Acknowledgments}
ADM is grateful for support from an Australian Research Council (ARC) Future Fellowship (FT160100206). ADM and GDC also acknowledge support from the ARC through Discovery Projects DP1093431, DP120101237, and DP150103294. NMM-G acknowledges the support of the ARC through grant FT150100024. MF acknowledges the support of a Royal Society - Science Foundation Ireland University Research Fellowship. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 308024. We thank the International Telescopes Support Office (ITSO) at the Australian Astronomical Observatory (AAO) for providing travel funds to support our 2016A {\it DECam} observing run.
This project has used public archival data from the Dark Energy Survey (DES) together with data obtained with the {\it Dark Energy Camera} ({\it DECam}), which was constructed by the DES collaboration.
Funding for the DES Projects has been provided by
the U.S. Department of Energy,
the U.S. National Science Foundation,
the Ministry of Science and Education of Spain,
the Science and Technology Facilities Council of the United Kingdom,
the Higher Education Funding Council for England,
the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign,
the Kavli Institute of Cosmological Physics at the University of Chicago,
the Center for Cosmology and Astro-Particle Physics at the Ohio State University,
the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University,
Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro,
Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o,
the Deutsche Forschungsgemeinschaft,
and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are
Argonne National Laboratory,
the University of California at Santa Cruz,
the University of Cambridge,
Centro de Investigaciones Energ{\'e}ticas, Medioambientales y Tecnol{\'o}gicas-Madrid,
the University of Chicago,
University College London,
the DES-Brazil Consortium,
the University of Edinburgh,
the Eidgen{\"o}ssische Technische Hoch\-schule (ETH) Z{\"u}rich,
Fermi National Accelerator Laboratory,
the University of Illinois at Urbana-Champaign,
the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC),
the Institut de F{\'i}sica d'Altes Energies,
Lawrence Berkeley National Laboratory,
the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe,
the University of Michigan,
the National Optical Astronomy Observatory,
the University of Nottingham,
the Ohio State University,
the University of Pennsylvania,
the University of Portsmouth,
SLAC National Accelerator Laboratory,
Stanford University,
the University of Sussex,
and Texas A\&M University.
This work has made use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund.
This research has used data from the SkyMapper Early Data Release (EDR). The national facility capability for SkyMapper has been funded through ARC LIEF grant LE130100104 from the Australian Research Council, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at ANU. The SkyMapper node of the All-Sky Virtual Observatory is hosted at the National Computational Infrastructure (NCI).
\label{Introduction}
What does time mean in cosmology? Are there any physical conditions which must be satisfied in order to speak about cosmic time? If so, how far back can time be extrapolated while still maintaining it as a well-defined physical concept? We have studied these questions in a series of papers
over the last ten years. The present manuscript is a summary of some main points from our investigations, as well as some further considerations regarding time in cosmology.
It is standard to assume that a number of important events took place in
the first tiny fractions of a second `after' the big bang. For instance,
the universe is thought to have been in a quark-gluon phase between
$\sim 10^{-11} - 10^{-5}$ seconds, whereas the fundamental material
constituents are massless due to the electroweak (Higgs) transition at
times earlier than $\sim 10^{-11}$ seconds. A phase of inflation is
envisaged (in some models) to have taken place around $\sim 10^{-34}$ seconds
after the big bang. A rough summary of the phases of the early universe
is given in the figure:\footnote{For a detailed discussion of (the assumptions behind) this figure and the epochs indicated, see also Rugh and Zinkernagel (2009).}
\begin{figure}[h]
\small
\label{timeline}
\setlength{\unitlength}{.87cm}
\begin{picture}(4.5,4.5)(-1.5,-.3)
\thicklines
\put(12.8,1){\vector(-1,0){13}}
\put(.2,.88){{\bf $|$}}
\put(-.3,1.3){{\shortstack {Planck \\ `time'}}}
\put(-.05,.3){{$10^{-43}$}}
\put(2,.88){{\bf $|$}}
\put(1.45,1.4){{Inflation}}
\put(1.7,.3){{$10^{-34}$}}
\put(2,2.3){{\bf $\downarrow$}}
\put(1.45,2.8){\footnotesize{\shortstack{Quantum \\ problem \\ for time}}}
\put(6.6,.88){{\bf $|$}}
\put(5.9,1.4){{Higgs}}
\put(6.1,.3){{$10^{-11}$}}
\put(6.6,2.3){{\bf $\downarrow$}}
\put(5.9,2.8){\footnotesize{\shortstack{Scale \\ problem \\ for time}}}
\put(7.8,.88){{\bf $|$}}
\put(7.2,1.3){{\shortstack{Quark- \\ gluon}}}
\put(7.5,.3){{$10^{-5}$}}
\put(8.8,.88){{\bf $|$}}
\put(8.7,1.4){{Nuclei}}
\put(8.6,.3){{$10^{0}$}}
\put(11.4,.88){{\bf $|$}}
\put(10.7,1.4){{CMB}}
\put(10.8,.3){{$10^{13}$}}
\put(12,.88){{\bf $|$}}
\put(11.9,1.4){{Now}}
\put(11.8,.3){{$10^{17}$}}
\put(12.9,.3){{seconds}}
\end{picture}
\caption{{\small Contemplated phases of the early universe.
The indicated quantum and scale problems for time are discussed in the text. } }
\end{figure}
What could be wrong (or at least problematic) with this backward extrapolation from now?
A main point is that {\em {physical}} time in relativity theory, in contrast to a purely mathematical parameter with the label $t$, is bound up with the notion of proper time.\footnote{Proper time along a (timelike or lightlike) world line (the path of a particle in 4-dimensional spacetime) can be
thought of as the time measured by a ``standard" clock along that world line.}
For example, Misner, Thorne and Wheeler (1973, p. 813) write:
\begin{quote}
...proper time is the most physically significant, most
physical real time we know. It corresponds to the ticking of physical
clocks and measures the natural rhythms of actual events.
\end{quote}
The connection between physical time and proper time leads to two kinds of problems
for the backward extrapolation.
The first of these follows from the fact that proper time is closely related to physical clocks or processes. The nature (and availability)
of such clocks or processes changes as we go back in time. The problem in this regard was hinted at by Misner, Thorne and Wheeler in connection with a discussion of whether a singularity occurs at a finite past proper time. They note that no actual clock can
adequately time the earliest moments of the universe:
\begin{quote}
Each actual clock has its ``ticks'' discounted by a suitable factor --
$3\times10^{7}$ seconds per orbit from the Earth-sun system, $1.1\times10^{-10}$
seconds per oscillation for the Cesium transition, etc. Since no single
clock (because of its finite size and strength) is conceivable all the
way back to the singularity, a statement about the {\em{proper time}} since the
singularity involves the concept of an infinite sequence of successively
smaller and sturdier clocks with their ticks then discounted and added.
[...] ...finiteness [of the age of the universe] would be judged by
counting the number of discrete ticks on {\em realizable clocks}, not by
accessing the weight of unrealizable mathematical abstractions. [Our emphasis]
\vspace{-.2cm}
\begin{flushright}
Misner, Thorne and Wheeler (1973, p. 814)
\end{flushright}
\end{quote}
The authors' discussion regarding this quote seems to imply that the progressively more extreme physical conditions, as we extrapolate the standard cosmological model backwards, demand a succession of gradually more fine-grained clocks to give meaning to (or provide a physical basis of) the time of each of the
epochs.\footnote{Regarding Misner, Thorne and Wheeler's examples in the quote, it is clear that
one has to distinguish between
{\em how fine-grained} a clock is (its precision) and {\em when} (in which cosmological
epoch) such a clock could in principle be realized. For instance, no stable Cesium atoms -- let alone real functioning Cesium clocks -- can exist before the time of decoupling of radiation and matter, about 380,000 years after the big bang.}
In this spirit, our view is
that a minimal requirement for having a physical notion of time (with a scale) is that it must be possible to find physical processes (what we call `cores of clocks') with a sufficiently fine-grained duration in the physics envisaged in the various epochs of cosmic history. As we shall discuss below, this requirement of linking time to conceivable cores of clocks leads to a {\em {scale problem}} for time, since it becomes progressively more difficult to identify physical processes with a well-defined
(and non-zero) duration in the very early universe.
A second kind of problem with the backward extrapolation
follows since proper time is defined in terms of (possible) particle world lines or trajectories. Within the standard cosmological model, there is a privileged set of such world lines since matter on large scales is assumed to move in a highly ordered manner (allowing for the identification of a
comoving reference frame and a global cosmic time equal to
the proper time of any comoving observer). As we shall discuss, this implies that the notion of cosmic time is closely related to the so-called Weyl principle. Problems with the notion of a global cosmic time
may arise if a privileged set of world lines becomes
difficult to identify, e.g. in the very early universe
above the electroweak (Higgs) phase transition or in a (complicated)
inhomogeneous universe.
A more serious problem for time
(which is a problem
even for a local definition of time) arises if a point is reached in the backward extrapolation where the world lines themselves can no longer be identified.
In particular, this appears to be the case if some point is contemplated, e.g. at the onset of inflation, where all constituents of the universe are of a quantum nature, leading to what can be called the {\em {quantum problem}} of time. Note that this problem arises roughly ten orders of magnitude
`before' (in the backward extrapolation from now) reaching a possible quantum gravity epoch, and so before hitting the usual problem of time in quantum gravity models.
In the following, we first outline the scale problem for time and the close relation between time and clocks. We then address the relation between time and world lines in the set-up of the standard cosmological model. We briefly indicate how this relation may lead to problems for a global (cosmic) time concept in the very early
universe above the electroweak phase transition or in a (complicated)
inhomogeneous universe. We finally discuss the more serious local (quantum) problem for time
in relation to the problem of identifying individual world lines.
\section{Time and clocks}
\label{TimeAndClocks}
The idea that time is dependent on change and/or motion is called relationism. It has been defended by classic thinkers like Aristotle and Leibniz, and in modern times by physicists like Barbour, Smolin and Rovelli. In our version of relationism, we argue in favour of a `time-clock' relation which asserts that time, in order to have a physical basis, must be understood in relation to physical processes which act as `cores' of clocks (Rugh and Zinkernagel 2005, 2009, see also Zinkernagel 2008). In the cosmological context, the time-clock relation implies that a necessary physical condition for {\em interpreting} the $t$ parameter of the standard Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) model as cosmic time in some `epoch' of the universe is the (at least possible) existence of a physical process which can function as a core of a clock in the `epoch' in question. In particular, we have suggested that in order to make the interpretation
$$ t \leftrightarrow \mbox{time}, $$
at a specific cosmological
`epoch', the physical process acting as the core of a clock should 1)
have a well-defined duration which is sufficiently
fine-grained to `time' the epoch in question; and 2) be a process which
could conceivably take place among the material constituents available
in the universe at this epoch.
The time-clock relation is in conformity with how time is employed in cosmology, although cosmologists often express themselves in operationalist terms -- that is, by invoking observers measuring with actual clocks. For instance, Peacock (1999, p. 67) writes concerning the FLRW model:
\begin{quote}
We can define a global time coordinate $t$, which is the time measured
by clocks of these observers -- i.e. $t$ is the proper time measured by
an observer at rest with respect to the local matter distribution.
\end{quote}
\noindent
While this reference to clocks (or `standard' clocks) carried by
comoving observers is widely made in cosmology textbooks, there is
usually no discussion concerning the origin and nature of these clocks.
Part of the motivation for our investigations has been to provide a
discussion of this kind.
The standard definition of the global time coordinate to which Peacock refers -- and, in general, the question of how to make the $t$-time identification -- can be read in at least two different ways: 1) Actual clocks should be available (operationalism); or 2) rudiments (cores)
of clocks with a well-defined duration
should, in principle, be present (time-clock relation). Clearly, the first possibility is not an option in the very early universe where no actual clocks, let alone observers and measurements, are available. As we shall see below, the viability of the second option depends upon the availability of
physical processes with well-defined (and non-zero) duration.
We attempt to develop a position on the time concept which represents a departure
from operationalism in several ways: (i) Time cannot be defined (reductively) in terms of clocks (since clocks and measurements depend on the time concept); (ii) no actual clocks are needed, we allow reference to possible (counterfactual) clocks, compatible with the physics of the epoch in question; (iii) we
attempt to construct the {\em cores} of clocks out of available physics,
but do not require that this core should be associated with a counter
mechanism that could transform it into a real functioning clock; and (iv) we
do not require the existence of observers and actual measurements.
Nevertheless, the above formulated criterion for the $t \leftrightarrow$ time interpretation of being able to identify a process with a well-defined duration may still have an operationalist feel. For, as we shall see below, it means that
there may be limits to time in cases where scales can be found in the physics, but where no physical process (core of a clock) can be identified which could in principle exemplify or realize the time scale in question.
Whereas cosmologists often refer to clocks as sketched above, they also
define cosmic time `implicitly' by the specific cosmological model
employed to describe the universe.\footnote{This is related to a more general discussion of the implicit definition of time via natural laws, see Rugh and Zinkernagel (2009), section 2.1.} This can be done e.g. through the relation between time and the scale factor. If we for instance consider a radiation dominated epoch, the Einstein field equations may yield (see e.g. Rugh and Zinkernagel
(2009), section 4):
$$ R(t) \propto \sqrt{t} $$
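This scaling can be seen from a standard textbook calculation (neglecting spatial curvature and a cosmological constant), using the Friedmann equation with a radiation energy density $\rho \propto R^{-4}$:
$$ \left( \frac{\dot{R}}{R} \right)^2 = \frac{8 \pi G}{3} \rho \; \propto \; R^{-4} \quad \Longrightarrow \quad R \, \dot{R} = \mbox{const.} \quad \Longrightarrow \quad R(t) \propto \sqrt{t} \, . $$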
In some sense, the scale factor here serves as a (core of a) clock. However, for this idea to work, one needs some bound system or a fixed physical length scale which does not expand (or which expands differently than the universe). Otherwise, there is no physical content of $R(t)$ and hence no physical content of ``expansion". Eddington, for example, emphasized the importance of the expansion of the universe being defined relative to some bound systems by turning things upside-down: `The theory of the ``expanding universe" might also be called the theory of the ``shrinking atom"\,' (Eddington 1933, quoted from Whitrow 1980, p. 293).
The viewpoint presented here assumes that a physical foundation of
time is closely related to {\em which} physical constituents are available (or at least possible) in the early universe. Such an assumption can be circumvented if one subscribes to some sort of Platonism (or mathematical foundationalism) according to which a purely mathematical definition of time, extracted e.g. from the formalism of general relativity (or, as simply the $t$ parameter in some model), is sufficient. According to such a view, there would seem to be no problem in contemplating, say, periods like $10^{-100}$ seconds after the big bang.
However, it is widely accepted that the standard cosmological model cannot be extrapolated below Planck scales and, accordingly, that the $t \leftrightarrow \mbox{time}$ interpretation cannot be made for $t$ values below $10^{-43}$
seconds. This illustrates that a physical condition (namely that quantum gravity
effects
may be neglected) can imply a limitation for the $t \leftrightarrow$
time interpretation. But if it is accepted that there is at least {\em
one} physical condition which must be satisfied in order to trust the
backward extrapolation of the FLRW model and its time concept, it
appears reasonable to require that also {\em other} physical conditions
(which are necessary to set up the FLRW model) should be satisfied
during this extrapolation.\footnote{We shall return to this requirement in the final section. We also note that -- except for the
space-time singularity itself -- there are no internal contradictions in
the {\em mathematics} of the FLRW model (or classical general
relativity) which suggests that this model should become invalid at some
point, e.g. at the Planck scale.} Hence, we take it that Platonism is not a satisfactory position regarding time in cosmology. In our view, time has to have some physical basis (i.e. it must be embedded in the available physics) in order to be a well-defined physical concept.
\subsection{The scale problem for time}
\label{TheScaleProblemForTime}
Let us now briefly review how the above considerations may lead to a scale problem for time in the early universe. The scale problem is related to two contemplated phase transitions, at $\sim 10^{-5}$ s and $\sim 10^{-11}$ s in the early universe, where the notion of length and time scales (and their physical underpinning in terms of cores of clocks and rods) becomes progressively weaker and disappears at $\sim 10^{-11}$ s (if we restrict ourselves to well-known physics as given by the standard model of particle physics).
\subsubsection*{$\sim 10^{-5} s$: No bound systems (the quark-hadron phase transition)}
Physically based time and length scales are not independent notions.
Einstein discussed an elementary clock system, `the light-clock', which involves the propagation of a light signal across some physical length scale as when a light signal is being reflected back and forth between the ends of a rigid rod. If we ask whether we in principle may build up such Einstein light-clocks from the constituents of the early universe we note, as we extrapolate backwards in time
(and the temperature rises), that it becomes progressively more difficult to find
any spatially extended physical systems. In the so-called hadron era
there are still bound (hadron) systems such as pions, neutrons and protons. At a transition temperature
of $T \sim 10^{12}$ K ($\sim 10^{-5} s$ after the big bang) it is, however, believed that there
is a quark-hadron phase transition, and above this transition point no bound states are left. The
universe then consists of particles (like quarks, leptons, gluons and photons) which have no
known spatial extension. If a rudiment (or core) of a rod has to be constructed from a bound
physical system, we no longer have such rudiments of rods left in
the universe, and we have to look elsewhere for physics which can set a
physical length scale.
The quarks and leptons still possess the physical property of
mass. Thus, one still has length scales if the Compton wavelength
$\lambda = \lambda_C = \hbar/(m c)$ of these particles can be taken to
set such a scale. However, a rod with
spatial extension equal to the Compton wavelength leads to a
`pair-production of rods' as a quantum effect (in general, the Compton wavelength is the length scale at which `pair
production' of particle-antiparticle pairs occurs).
It is thus difficult to imagine how the Compton wavelength
divided by $c$ corresponds to a physical process which could function as
the core of a clock e.g. in the above mentioned light-clock.
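To give a rough sense of the scales involved (the numbers are merely illustrative), the duration that such a Compton-wavelength light-clock would define is
$$ t_C = \frac{\lambda_C}{c} = \frac{\hbar}{m c^2} \sim \frac{6.6 \times 10^{-22} \; \mbox{MeV s}}{10 \; \mbox{MeV}} \sim 10^{-22} \; \mbox{s} $$
if we take, for illustration, a light quark with a mass of order 10 MeV. This would be amply fine-grained relative to the $\sim 10^{-5}$ s epoch, which shows that the problem is not one of insufficient resolution but of the physical realizability of the process.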
Note that these considerations, and hence our first proposed time limit at $10^{-5}$ seconds, are based on the somewhat operationalist premise that one should in principle be able to identify a core of a clock (physical process) with a well-defined duration. The contemplated process is one in which a light signal travels a well-defined distance (namely a Compton wavelength of a quark or a lepton), but this process seems physically unrealizable insofar as the photon is converted to a particle-antiparticle pair during flight.\footnote{As different sorts of rudiments (cores) of clocks, we may consider the decay processes of unstable, massive particles, such as the decay of the muon $\mu^{-} \rightarrow e^{-} \bar{\nu}_e \nu_{\mu}$, or the decay of the $Z^{0}$ particle, $Z^0 \rightarrow f \bar{f}$ (which can decay into any pair of
fermions). But, as discussed in Rugh and Zinkernagel (2009), also these processes are difficult to conceive as functioning cores of clocks due to their quantum-mechanical and statistical nature.}
\subsubsection*{$\sim 10^{-11} s$: Scale invariance (the electroweak phase transition)}
According to the standard model of particle physics (which embodies a Higgs sector with
a (set of) scalar field(s) $\phi$) there is an electroweak phase transition at a transition
temperature of $T \sim 300$ GeV $\sim 10^{15}$ K when the universe was
$\sim 10^{-11}$ s old. Above this phase transition the Higgs field
expectation value vanishes $< \phi > = 0$. This transition translates into {\em zero rest
masses} of all the fundamental quarks and leptons (and massive force mediators) in
the standard model.
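For orientation, the quoted times and temperatures are related by the standard radiation-era estimate (up to factors involving the effective number of relativistic species $g_{*}$)
$$ t \sim 1 \; \mbox{s} \times \left( \frac{T}{1 \; \mbox{MeV}} \right)^{-2} , $$
so that $T \sim 300$ GeV $= 3 \times 10^{5}$ MeV indeed corresponds to $t \sim 10^{-11}$ s.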
Without any masses in it, the theory exhibits a symmetry known as conformal invariance, and it becomes impossible to find physical processes (among the microphysical constituents) with a well-defined (and non-zero) duration.\footnote{See however Rugh and Zinkernagel (2009, section 5.3) for a brief discussion of some possible rudiments of mass which could remain above the Higgs transition (but which are, in our assessment, insufficient to ground e.g. a physical time scale).}
Thus, not only can no core of a clock be identified; the relevant physics (the electroweak and strong sectors) cannot set any physical scale for time, length or energy. And if there is no scale for length and energy, then there is no scale for temperature $T$ either. Metaphorically speaking, we may say that not only does the property of mass of the particle constituents `melt away' above the electroweak phase transition, but the concept of temperature itself also `melts' (i.e. $T$ loses its physical foundation above this transition point).
In our assessment, therefore,
the time scale assumed (e.g. in cosmology books) above the electroweak phase
transition is purely speculative in the sense that it cannot be founded
upon an extrapolation of well known physics (due to conformal
invariance) above the phase transition point. Thus, the time
scale will have to be founded on the introduction of some new physics
(beyond the standard model of particle physics), and is in this sense as
speculative as the new (speculative) physics on which it is based.
It is of interest that Roger Penrose has recently attempted to turn the scale problem for time into a virtue in the construction of a new kind of cosmological scenario. Penrose (2010, p. 142) cites our study\footnote{Our study first appeared as a distributed manuscript in 2005 (Rugh and Zinkernagel 2005) and was published in a revised version in 2009.} in connection with the following quote:
\begin{quote}
\noindent
...close to the Big Bang, probably down to around $10^{-12}$ seconds after that moment, when temperatures exceed about $10^{16}$ K, the relevant physics is believed to become blind to the scale factor $\Omega$, and {\em conformal } geometry becomes the space-time structure appropriate to the relevant physical processes. Thus, all this physical activity would, at that stage, have been insensitive to local scale changes. [Emphasis in original]
\end{quote}
\noindent
Our mentioning this point does not imply an endorsement of Penrose's proposal of an ``Extraordinary New View of the Universe" (a conformal cyclic cosmology) in which approximate conformal invariance holds at both ends (the beginning and the remote future) of the universe. Nevertheless, there seems to be a certain agreement in philosophical outlook, also when Penrose mentions (2010, p. 93): ``It is important for the {\em physical basis} of general relativity that extremely precise clocks actually exists in Nature, at a fundamental level, since the whole theory depends upon a naturally defined metric {\bf g}" (our emphasis).
\subsubsection*{Why not refer to the Planck scales?}
The combination of the constants $\hbar$ and $c$ from relativistic
quantum mechanics, and $c$ and $G$ from classical general relativity
yields -- as a mathematical combination of physical constants -- the famous Planck scales. As concerns time, the Planck time scale $t_P = (\hbar G/c^5)^{1/2} \sim 10^{-43}$ seconds
is immensely more fine-grained than time scales set by any
physical process which we (in our investigations) have attempted to
utilize as rudiments of clocks at various stages
in cosmic history.
{\em If} the Planck time scale were
considered sufficient to provide a physical basis for the time scale in the early universe, then there would be no scale problem for time anywhere along the extrapolation from now to the Planck times.
However, we see several related reasons to be suspicious that the Planck time scale does indeed provide a sufficient physical basis for the time scale in the early universe. First of all, the Planck scales are supposed to be the physical relevant scales of theories of quantum gravity, and such theories are still highly speculative. The Planck time scale is therefore at least as speculative as any other imagined time scale above the electroweak phase transition. Second,
it is expected that quantum
gravity effects are totally negligible at energy scales around the electroweak
phase transition point (and negligible well into the `desert' above this
phase transition). It appears dubious to ground time scales of
Higgs-physics on quantum gravity effects which are irrelevant at Higgs-physics
scales.\footnote{Note
that today we define a second as 9,192,631,770 periods of the radiation from a well-defined hyperfine transition in Cs-133 (see e.g. 't Hooft and Vandoren 2014).
This is a physical grounding (even operationally) of a timescale (a second)
in terms of physical processes taking place at timescales substantially
(10 orders of magnitude) smaller. It would be of interest if
one could even speculate about some effect on Higgs-scale
physics stemming from the quantum gravity scales some thirty orders of magnitude below.}
Third, even if we bypass the second problem, one may well question how physically reasonable the supposed physical processes would be for grounding the Planck scale (recall that, in our view, a physical basis for a time scale should be related to relevant physical processes). The Planck scales may be arrived at by setting
the Compton wavelength equal to the Schwarzschild radius of a black hole.
It is thus a characteristic scale at which there is a
pair production of black holes as a quantum effect. Consider this in the context of the discussion above on time and length scales in connection with the light-clock: At the Planck length scale there is `pair production of rods' (the rod being the spatial extension of the quantum black hole), and
the corresponding Planck time scale is the time it takes a
light pulse to cross this length scale. This appears
to be more of a mathematical construct than a conceivable physical
process since the crossing of a light pulse is hardly a
well-defined physical process within such violent fluctuations in the
geometry. For instance, one may ask at which of the two pair-produced quantum black holes the light pulse is supposed to end its `crossing'.\footnote{Note that this third reason against using the Planck scale as a physical basis for the time scale in the early universe is, just like the $10^{-5}$ second limit discussed above, based on the somewhat operationalist premise that we should be able to point to a process (a core of a clock) which provides a definite time interval.}
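For reference, the dimensional argument alluded to above, spelled out (up to factors of order unity), runs as follows: setting the Compton wavelength equal to the Schwarzschild radius,
$$ \frac{\hbar}{m c} \sim \frac{2 G m}{c^2} \quad \Longrightarrow \quad m_P \sim \sqrt{\frac{\hbar c}{G}} \, , \qquad l_P \sim \frac{\hbar}{m_P c} \sim \sqrt{\frac{\hbar G}{c^3}} \, , \qquad t_P \sim \frac{l_P}{c} \sim \sqrt{\frac{\hbar G}{c^5}} \sim 10^{-43} \; \mbox{s} \, . $$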
In our assessment, then, if one wants to solve (or dissolve) the scale problem for time at and above the Higgs transition by referring to speculative processes in quantum gravity, such as `quantum pair production of black holes',
then it should at least be admitted that the cosmic time scale constructed in this way
is highly speculative.\footnote{As we shall discuss later,
the {\em quantum} problem of time (section \ref{TheQuantumProblemForTime})
does not depend on whether we have a physically well-founded {\em scale} for time. This problem remains (e.g. at the onset of inflation) even if we base length and time scales (throughout cosmic history) on speculative Planck scale physics.}
\section{Time and world lines}
\label{TimeAndWorldLines}
The above section has focused on the consequences of proper time being related to clocks. As we saw, this relation leads to the idea that time is related to physical processes -- which is a version of what is known as relationism. But there is a more direct route to relationism in cosmology which is independent of the time-clock relation mentioned above (even if in conformity with it). This has to do with the fact that proper time is defined in terms of (possible) particle world lines. In the following we shall discuss how this implies a close relation between time and cosmic matter, both at the global and at the local level.
\subsection{Setting up the FLRW model with a cosmic time}
\label{SettingUpTheFLRWModel}
In Rugh and Zinkernagel (2011) we discuss how the set-up of the FLRW model with a global time is closely linked to the motion, distribution and properties of cosmic matter. We now briefly review some key points of this discussion.
In relativity theory time depends on the choice of reference frame. For the universe, a reference frame cannot be given from the outside, so such a frame has to be ``built up from within", that is, in terms of the (material) constituents within the universe. It is often assumed that the FLRW model
may be derived just from the cosmological principle. This
principle states that the universe is spatially homogeneous and isotropic (on large scales). It is much less well
known that another assumption, called Weyl's principle, is necessary in order to arrive at the FLRW model and, in particular, its cosmic time parameter.\footnote{In some cosmology textbooks -- e.g. by
Bondi, Raychaudhuri and Narlikar -- the importance of Weyl's principle is emphasized, and
explicitly referred to. In other textbooks it appears, in our assessment (see Rugh and Zinkernagel 2011), that the Weyl principle is implicitly assumed in the process of setting up the FLRW model.} Whereas the cosmological principle imposes constraints on the {\em {distribution}} of the matter content of the universe, Weyl's principle imposes constraints on the {\em {motion}} of the matter content. Weyl's principle (from 1923)
asserts
that the matter content is so {\em{well behaved}} that a reference frame can be built up from it:
\begin{quotation}
\noindent
Weyl's principle (in a general form): The world lines of `fundamental
particles' form a spacetime-filling family of non-intersecting geodesics (a congruence of geodesic world lines).
\end{quotation}
The importance of Weyl's principle is that it provides a reference frame which is physically based
on an expanding `substratum' of `fundamental particles' (e.g. galaxies or clusters of galaxies).
In particular, if the (non-crossing)
geodesic world lines are required to be orthogonal to a series of space-like hypersurfaces, a comoving reference frame is defined in which constant spatial coordinates are ``carried by" the fundamental particles (see e.g. figure 3.7 in Narlikar 2002, p. 107).
The time coordinate is a cosmic time which labels the series of hypersurfaces, and which may be taken as the proper time along any of the particle world lines. We note that the congruence of world lines is essential to the standard cosmological model since the symmetry constraints of homogeneity and isotropy are imposed w.r.t. such a congruence (see e.g. Ellis 1999). Thus, Weyl's principle is {\em{a precondition}} for the cosmological principle; the former can be satisfied without the latter being satisfied but not vice versa.
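To make this explicit (a standard textbook construction rather than a new result), when Weyl's principle holds and the world lines are orthogonal to the hypersurfaces, coordinates can be adapted to the congruence so that the metric takes the synchronous form (with signature $(+,-,-,-)$)
$$ ds^2 = c^2 dt^2 - g_{ij}(t, x^k) \, dx^i dx^j \, , $$
where the fundamental particles sit at fixed spatial coordinates $x^i$ and the coordinate $t$ coincides with the proper time along each fundamental world line; in the FLRW case $g_{ij} = R^2(t) \, \gamma_{ij}(x^k)$, with $\gamma_{ij}$ the metric of a homogeneous and isotropic 3-space.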
In the early universe, problems may arise for the Weyl principle and thus for the possibility of identifying a reference frame and a global cosmic time parameter.\footnote{In Rugh and Zinkernagel (2013) we also argue that there
is no approximate fulfillment of a Weyl principle and no well-defined global (multiverse)
cosmic time concept in the eternal
inflationary multiverse model outlined e.g. by Linde and Guth.}
At present and for most of cosmic history, the comoving frame of reference can be identified as the frame in which the cosmic microwave background radiation (CMB) looks isotropic
(see e.g. Peebles 1993, p. 152), and cosmic matter is
(above the homogeneity scale) assumed to be described as dust particles with zero pressure which fulfill Weyl's principle. But before the release of the CMB, the situation is less straightforward. For, as we go backwards in time, it may become
increasingly difficult to satisfy, or {\em even formulate}, the Weyl principle
as a physical principle, since the nature of the physical
constituents is changing from galaxies, to relativistic gas particles,
and to entirely massless particles moving with velocity
$c$.\footnote{In the early radiation phase, matter is highly relativistic (moving with random velocities close to $c$), and the Weyl principle is not satisfied for a typical particle but one may still introduce fictitious averaging volumes in order to create substitutes for `galaxies which are at rest';
see e.g. Narlikar (2002, p. 131).}
Indeed, above the electroweak phase transition (before $10^{-11}$ seconds
`after' the big bang), all constituents are massless and move with
velocity $c$ in any reference frame. There will thus be no constituents which are comoving
(at rest).
One might attempt to construct mathematical points (comoving with a reference frame)
like a center of mass (or, in special relativity, center of energy)
out of the massless, ultrarelativistic gas
particles, but this procedure seems to require that length scales be available in order to
e.g. specify how far the particles are apart (which is needed as input in the
mathematical expression for the center of energy). As discussed earlier, the only option for specifying such length scales (above the
electroweak phase transition) will be to
appeal to speculative physics, and the prospects of satisfying Weyl's principle
(and have a cosmic time) will therefore
also rely on speculations beyond current well-established physics. The problem of building up the FLRW model with matter consisting entirely
of constituents moving with velocity $c$, may also be seen by noting that the set-up of
the FLRW model requires matter (the energy-momentum tensor) to be in the form of
a perfect fluid, as this is the only form compatible with the FLRW symmetries,
see e.g. Weinberg (1972, p. 414). For this, a source consisting of pure radiation is not
sufficient since one cannot effectively simulate a perfect fluid by ``averaging over
pure radiation".\footnote{Krasinski (1997, pp. 5--9) notes
that the energy-momentum tensor in cosmological models may contain many different contributions,
e.g. a perfect fluid, a null-fluid, a scalar field, and an
electromagnetic field. He also emphasizes that a source of a pure null fluid or a pure
electromagnetic field is not compatible with the FLRW geometry, and that solutions with
such energy-momentum sources have no FLRW limit (Krasinski 1997, p. 13).}
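For reference, the perfect-fluid form of the energy-momentum tensor reads (in units with $c = 1$ and signature $(+,-,-,-)$)
$$ T^{\mu \nu} = (\rho + p) \, u^{\mu} u^{\nu} - p \, g^{\mu \nu} \, , $$
which presupposes a well-defined 4-velocity field $u^{\mu}$ -- precisely what becomes questionable when all constituents move with velocity $c$.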
On top of this, the physical basis of the Weyl postulate (e.g.
non-intersecting world lines of `fundamental particles'), and even that of proper time, appears questionable if some period in cosmic history is reached where the
`fundamental particles' are described by wave-functions $\psi (x,t)$
referring to (entangled) quantum constituents. What is a `world line' or
a `particle trajectory' then? (See also the section below on the quantum problem for time).
In the following we shall briefly question what happens to cosmic time if/when we cannot assume the validity of the standard FLRW model (next subsection), before we turn to the question of what happens if/when the cosmic constituents become quantum (subsection \ref{TheQuantumProblemForTime}).
\subsection{Cosmic time in an inhomogeneous universe }
\label{CosmicTimeInAnInhomogeneousUniverse}
The cosmological standard model is highly idealized and it is therefore of interest to inquire about cosmic time when the model's idealizing assumptions are relaxed. In particular, one may ask whether we still have a good cosmic time concept in our actual -- at least up to very large scales -- inhomogeneous universe. It is well known that close to massive objects time runs differently than in more `void-like' segments of spacetime away from any such massive objects. The complexity of constructing a privileged time notion in such situations is illustrated e.g. by the following example due to Feynman.
How old is our earth? Since clocks (time) run
differently in different gravitational potentials (time dilation in a gravitational field), time
will run at a different rate
in the center of the earth than on the surface of the earth. Feynman (1995, p. 69) remarks:
\begin{quotation}
...we might have to be more careful in the future in speaking of the ages of objects
such as the earth, since the center of the earth should be a day or two younger
than the surface!
\end{quotation}
In fact, the situation is slightly worse: integrating the relative time dilation factor $\bigtriangleup \tau/\tau = \bigtriangleup \Phi/c^2$ over a coarse estimate of the (not precisely defined) lifespan of our earth ($\sim 5\times 10^{9}$ years) yields a time difference of a couple of years.\footnote{In an
order of magnitude estimate we may assume that our earth is homogeneous and the potential
difference between the center and the surface of our earth is then integrated up to
$\bigtriangleup \Phi = G M/2 R \;$ which translates into a relative time dilation effect
$\bigtriangleup \tau/\tau = \bigtriangleup \Phi/c^2 = 1/4 \times (R_{Schw}/R) \sim 1/3\times
10^{-9}$ (Here $R_{Schw} = 2GM/c^2$ is the Schwarzschild radius of our earth with
mass $ M $ and radius $ R $). Integrating this relative time dilation over $\sim 5 \times 10^9$ years
yields an order of magnitude estimate of $\sim 2$ years for the age difference (as measured by
counterfactual clocks located in the center and at the surface of our earth
over the lifespan of our earth).}
In a universe with an inhomogeneous distribution of
the material constituents, the situation is less clear than in
the Feynman example of a slightly inhomogeneous gravitational field
throughout our earth. In some mathematically simplified spatially inhomogeneous models, it
may be possible to maintain a Weyl principle and a notion of a global cosmic time
(cf. e.g. Krasinski (1997) and references therein).
However, if our universe exhibited fractal behavior and collisions on all scales
it would be difficult to uphold a Weyl principle (even in an
`average sense' where small scale
collisions and inhomogeneities are averaged out). We may add that such fractal behavior and collisions on all
scales appear to be a characteristic of envisaged multiverse inflationary scenarios like
chaotic inflation, see e.g. discussion and
references in Rugh and Zinkernagel (2013).
Not least due to the observed microwave background isotropy
(and the remarkable isotropy of X-ray counts, radio source counts, and $\gamma$-ray bursts)
it is generally expected (yet debated)
among cosmologists that there will be a transition from small-scale fractal behavior to
large-scale homogeneity.\footnote{Moreover, it has been emphasized, e.g. by Barrow (2005),
that large contrasts in density $\delta \rho/\rho$
are not necessarily mirrored in similar inhomogeneities in the
gravitational potential $\Phi$ since the equation of the
relative perturbation $\delta \Phi /\Phi$
of the gravitational potential has in it a huge suppression factor
($\delta \Phi/\Phi \sim \delta \rho/\rho \times (L/(c/H))^2$) if the size $L$ of the
density irregularity is small relative to the Hubble radius $c/H$.}
A recent study arguing this case is e.g.
Scrimgeour et al. (2012).\footnote{If we want to observationally test the expected homogeneity at large
scales, one should pay attention to the danger of vicious circularities (``catch-22").
Distance measures like redshift-distance measures (at large distances)
should not have built-in the assumptions we want to test (the FLRW model as space-time metric, etc.).
The analysis provided in e.g. Scrimgeour et al. (2012) is very elaborate but it
is of interest that they
note (p. 4): ``To do this, we assume the FRW metric and
$\Lambda$CDM. This is necessary for any homogeneity measurement, since
we must always assume a metric in order to interpret redshifts. Therefore in the
strictest sense this can {\em only} be used as a consistency test of
$\Lambda$CDM. However, if we find the trend towards homogeneity matches the
trend predicted by $\Lambda$CDM, then this is a strong consistency check for
the model and one that an inhomogeneous distribution would find difficult to
mimic" (our emphasis).}
Nevertheless, even if our universe is not fractal at the largest -- but only at intermediate -- distance
scales, it is an interesting question how significantly this may change the cosmic time concept of the resulting cosmological model. Indeed, inhomogeneous models with a fractal matter
distribution at intermediate scales will presumably
exhibit more complicated conceptions of
cosmic time than in the highly symmetric, idealized FLRW model universes.
One way to address what happens in an inhomogeneous universe,
is to attempt to construct a notion of cosmic time associated with an event
(here, now) by looking at the proper time (e.g. MTW (1973), \S 13.4 and \S 27.4)
\begin{equation} \label{propertimeworldline1}
\tau = \tau (\gamma ) = \int_{\gamma} \sqrt{g_{\mu \nu} dx^{\mu} dx^{\nu}} \; \;,
\end{equation}
along (particle) timelike world lines (indicated with subscript $\gamma$ and with
4-velocities $u^{\mu} (v) = dx^{\mu} (v)/d v$), which start at the beginning of space-time and end in the
event (here, now).\footnote{Such a definition is only well-defined, i.e. the proper time is only finite, if there is a beginning of space-time, e.g. in a ``big bang" (see also Lachi\`eze-Rey 2014, section 5.3), or if we choose some arbitrary starting point (assumed to exist) from which we can integrate (Ellis 2012, section 3).}
But along which world lines $\gamma$ should the proper time integral be taken?
Ellis (2012, p. 9-10) proposes to take the proper time integral along a specific set of preferred fundamental world lines, which (for realistic matter) are uniquely geometrically
determined.
This construction does not invalidate the Weyl principle but rather builds on it and develops it (Ellis, private communication).\footnote{We note that Ellis assumes that there is a uniquely defined
vector field of 4-velocities $u^{\mu} (v) = d x^{\mu}(v)/dv $ (if such 4-velocities
are uniquely defined in each spacetime point on the manifold, this is equivalent
to assuming the existence of a congruence of world lines which
are non-crossing). According to Ellis' proposal, these 4-velocities
(in order to be preferred fundamental world lines) should
satisfy that they are timelike eigenlines of the Ricci tensor,
$R_{\mu \nu} u^{\nu} = \lambda u_{\mu}$.} The `present' is in this
construction defined as the surface $\left\{ \tau = \mbox{constant} \right\}$ determined by
taking the proper time integral (\ref{propertimeworldline1})
over the family of fundamental world lines starting at the
``big bang". However, according to Ellis, the equal time hypersurfaces can in generic situations be much more complicated (see discussion in Ellis 2012, p. 10) than the simple equal (cosmic) time hypersurfaces in the FLRW model universes. In particular, Ellis remarks that the equal time hypersurfaces may not even necessarily be spacelike in an inhomogeneous spacetime.
It therefore appears to
be a complicated -- and to our knowledge still open -- question whether the resulting concept of cosmic time exhibits the properties which allow for a `backward' extrapolation into an `early' inhomogeneous universe.
\subsection{The quantum problem for time}
\label{TheQuantumProblemForTime}
We have seen above that it may be difficult to identify a {\em{global}} cosmic time (without the Weyl principle), and in earlier sections also that there may not be a {\em {scale}} for time (before the Higgs transition). Even if this is so, it might still be possible to maintain a local time {\em order}, i.e. to ask about the past of some particular event -- for instance, the past of the onset of inflation. However, as we shall indicate below, it may well be that not even a {\em{local}} (and scale-free) time order is available as time is extrapolated backwards in the very early universe.
The origin of this local (or quantum) problem for time is due to the widely assumed ``quantum fundamentalist" view according to which the material constituents of the universe could be described {\em exclusively} in terms of quantum theory at some early stage of the universe. Such a perspective is natural in quantum cosmology (and quantum gravity), in which spacetime itself is treated
quantum mechanically (see also Hartle 1991). From the point of view of such theories, it has been argued that a quantum problem of time appears already (in the backward extrapolation from now) at the onset of inflation. Thus, Kiefer (e.g. 2003) affirms that:
\begin{quote}
\rightskip=0pt
The Universe was essentially ``quantum" at the onset of inflation. Mainly due to bosonic fields, decoherence set in and led to the emergence of many ``quasi-classical branches" which are dynamically independent of each other. Strictly speaking, the very concept of time makes only sense after decoherence has occurred. In addition to the horizon problem etc., inflation also solves the ``classicality problem". [...]
Looking back from our Universe (our semiclassical branch) to the past, one would notice that at the time of the onset of inflation our component would interfere with other components to form a timeless quantum-gravitational state. The Universe would thus cease to be transparent to earlier times (because there was no time).
\end{quote}
The problem here seems to be that our spacetime (and therefore time) `dissolves' into a superposition of spacetimes at the onset of inflation, and in this sense Kiefer acknowledges a quantum problem of time at this point. The situation, however, might be worse (i.e. the quantum problem may appear earlier in a backward extrapolation from now), since the appeal to decoherence is questionable. To see this, consider what one might call the cosmic measurement problem, which addresses the quantum mechanical measurement problem in a cosmological context:
\begin{quotation}
\noindent
{\em The cosmic measurement problem}:
If the universe, either in its content or in its entirety, was once (and still is)
quantum, how can there be (apparently) classical structures now?
\end{quotation}
While many aspects of the cosmic measurement problem have been addressed in the literature, the perspective which we have tried to add is that the problem is closely related to providing a physical basis for the (classical) FLRW model with a (classical) cosmic time parameter. As illustrated in the Kiefer quote above, an often attempted response to the cosmic measurement problem is to proceed via the idea of decoherence. According to this idea, some degrees of freedom are regarded as irrelevant (they are deemed inaccessible to measurements and are traced out), and they are therefore taken to act as an environment for the relevant variables. The picture is that the environment in a sense `observes' the system in a continuous measurement process and thus suppresses superpositions of the system (see e.g. Kiefer 1989).
However, as is widely known, decoherence cannot by itself solve the measurement problem and explain the emergence of a classical world.\footnote{For a simple explanation of this, and some references to
the relevant literature, see e.g. Zinkernagel (2011, section 2.1).} For, if both environment and system are quantum, the total state of the system (relevant plus irrelevant degrees of freedom) is still a superposition. According to quantum mechanics, no definite (classical) state can therefore be attributed to any of the components. As argued by Sudarsky
(2011, section 4.1), this problem is only aggravated in the cosmological context since one cannot here appeal to the usual pragmatic considerations regarding what classical observers and their measurement apparatus would register.\footnote{From a pragmatic point of view, quantum mechanics may be seen as a theory of expected outcomes of measurements, in which both apparatus and observers are kept outside the quantum description. We have pointed out elsewhere (Rugh and Zinkernagel 2005, Zinkernagel 2016) that Bohr went beyond this pragmatic (or instrumental) interpretation. His view was rather a contextual one according to which any system can be treated quantum mechanically but not all systems can be treated this way at the same time.} In spite of such worries, Kiefer (2003) contemplates that decoherence successively classicalizes different constituents of the universe: At the onset of inflation, the inflaton field itself is classicalized and, at the end of inflation, decoherence converts the quantum fluctuations of the inflaton field into classical density perturbations (seeds of structure).\footnote{In this regard, Anastopoulos (2002) mentions a worry about decoherence closely related to the ones already noted: ``\dots a sufficiently classical behavior for the environment seems to be necessary if it is to act as a decohering agent and we can ask what has brought the environment into such a state ad-infinitum".}
But even if one were to bypass the strong arguments against decoherence as a solution to the cosmic measurement problem, a potentially more serious problem is lurking: If decoherence is to explain the emergence of classical structures, it cannot -- as in environmentally induced decoherence -- be a process in (cosmic) time,
insofar as classical structures (particle world lines) are needed from
the start to define time both locally and globally! There thus seems to be a {\em vicious circularity} if one invokes decoherence to explain the `emergence' of time, which we can formulate in slogan form:
\begin{quote}
{\em{Decoherence takes time and cannot therefore provide time.}}
\end{quote}
\noindent This implies that several of the temporal expressions in the quote by Kiefer given above (``decoherence {{\em sets in}}", ``{\em after} decoherence {\em has occurred}", etc.) are strictly speaking without meaning.
Although the discussion above has focused on decoherence, we note that the quantum problem of time seems to be shared by other ``quantum fundamentalist" views even when these do not rely essentially on decoherence (e.g. the spontaneous collapse model described in Sudarsky 2011). Our point is that any interpretation of quantum mechanics will need a time concept -- which is bound up with
the notion of possible (classical) particle world lines -- in order to address the early universe. The assumption of a quantum nature of the material (or otherwise) constituents of the universe makes it hard (or impossible) to associate these with well-defined particle trajectories. During inflation the only relevant constituent of the universe is taken to be the inflaton field $\varphi$ which -- in the last analysis -- is a quantum field. And just as wave functions in non-relativistic quantum theory do not give rise to physical motion (of a particle or wave) in space and time -- without assumptions solving the measurement problem -- so quantum fields do not describe moving elementary particles in space with well-defined trajectories.
Up to this point we have discussed the quantum problem of time from a quantum fundamentalist point of view based on quantum cosmology or quantum gravity. Let us now proceed from the present (and more cautious) perspective, in which we start from a classical point of view and attempt to extrapolate proper time backwards. More specifically, consider the past of some event by extrapolating backwards the proper time integral along a world line
with
4-velocity $u^{\mu}(v) = dx^{\mu} (v)/d v$
which ends in the event (formula as in
equation (\ref{propertimeworldline1}) in section
\ref{CosmicTimeInAnInhomogeneousUniverse}).
This approach assumes that we know the metric and that there are well-defined 4-velocities. The question then becomes whether such 4-velocities (or, equivalently, world lines) can always be constructed, i.e. physically realized as opposed to merely mathematically defined, from the available constituents (e.g. from a scalar field $\varphi$).\footnote{One idea here would be to equate the energy momentum tensor of the perfect fluid form with the energy momentum tensor for the scalar field. This results in the 4-velocity
$u_{\mu} = A \cdot \partial_{\mu} \varphi$ where $A = (\partial^{\nu} \varphi \; \partial_{\nu} \varphi)^{-1/2}$, see e.g. Krasinski (1997, p. 8) and Hobson et al. (2006, p. 432). }
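As a consistency check (assuming signature $(+,-,-,-)$ and a timelike gradient, $\partial^{\nu} \varphi \; \partial_{\nu} \varphi > 0$), this 4-velocity is correctly normalized:
$$ u^{\mu} u_{\mu} = A^2 \, \partial^{\mu} \varphi \; \partial_{\mu} \varphi = 1 \, . $$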
In the inflationary scenario, the relevant candidate for constructing sensible notions of particle world lines and classical trajectories will have to come from the $\varphi$ field. And even if we were allowed to take this field as effectively classical (described by the lowest order approximation in quantum field theory) during inflation (e.g. during a slow-roll evolution), the quantum problem of time will be faced at the onset of inflation. At this point (supposed to be the `birth' of our bubble universe in a multiverse setting), the inflaton field is strongly quantum: quantum fluctuations with amplitudes (within a factor of ten) of the order of the Planck scale are necessary to reset or lift the scalar field to a value where a new bubble (our universe) is born and becomes dominated by inflation (see e.g. Linde 2004, section 4).
Thus, at the beginning of inflation (or the `birth' of our universe), the $\varphi$ field is nowhere close to being a classical field on top of which we have small quantum fluctuations. Rather, it is entirely dominated by Planck scale quantum fluctuations.
In summary, to use the local time concept for contemplating times before inflation (or, indeed, earlier bubble-universes in the multiverse), it must be possible to identify (or, at least, to speculate) a particle world line along which proper time can be extrapolated backwards.\footnote{From our relationist point of view -- in which time is necessarily related to physical processes -- the time-like curves can only be identified (they only have a physical basis) if the motion of objects or test particles along these curves is at least in principle realizable given the available physics.} But, as we have seen, it is unclear how one would go about constructing any individual classical particle world line from the inflationary scalar field $\varphi$ in a regime where its quantum behaviour is dominant (at the onset of inflation). If such world lines (classical trajectories) cannot be constructed from the underlying physics (the $\varphi$ field), it seems that the very conditions for speaking about the past of an event in general relativity are not fulfilled. Hence, in our assessment, a pure quantum phase in the early universe implies that proper time (and even its order aspect, that is, its ability to distinguish before and after) is no longer a well-defined concept.
\section{Summary and discussion}
\label{SummaryAndDiscussion}
It is common practice to extrapolate the standard cosmological model back to at least the Planck time. In this manuscript, we have argued that this practice is problematic. The underlying philosophical reason is that the extrapolation of
the FLRW model and its time concept requires, in our view, that
the physical basis of time in the model and, more
generally, the physical conditions needed to set up the model,
are not invalidated along this extrapolation.
This situation gives rise to a number of possible limits of time, respectively, at $\sim 10^{-5} s$, $10^{-11} s$, $10^{-34} s$, and $10^{-43} s$ `after' the mathematical point $t = 0$ in the FLRW model.
As briefly hinted in section \ref{TimeAndClocks}, we are aware that we are here making a philosophical choice -- at least concerning the first two limits. For we are assuming that the natural laws need a physical basis at all points along the extrapolation, as opposed to just having a basis at the present epoch (when it is easy to identify not only length and time scales, but also physical processes with well-defined durations). The difference between the first two limits and the Planck time (and possibly the time of onset of inflation) is that the former two (phase transitions) do not mark events where the natural laws are expected to break down. Rather, the two phase transitions are predictions of the natural laws themselves (by contrast, classical gravity is expected to break down at the Planck scale). Hence, in the case of the first two time limits, the problem concerns the {\em interpretation} of the natural laws; i.e. whether we are entitled to interpret the laws as physical laws throughout the backward extrapolation, if the foundation for this interpretation (like the existence of cores of rods and clocks) disappears at some point along the extrapolation.
Given our view of the interpretation of natural laws, the time concept in the early universe becomes speculative before the electroweak phase transition. As we have seen, before this point ($\sim 10^{-11}$ seconds), known physics becomes scale invariant and so one loses any (non-speculative) handle on how close we are to the singularity. We believe, but it should be further examined, that our position is a reasonable compromise between Platonism (mathematical foundationalism) and operationalism (which requires a method for actually measuring cosmic time).
In sections \ref{SettingUpTheFLRWModel} and
\ref{CosmicTimeInAnInhomogeneousUniverse}
we have seen that a {\em{global}} concept of cosmic time (with or without a scale) may become problematic in the early universe if the Weyl principle cannot be satisfied (e.g. if everything moves with the speed of light and no comoving reference frame can be constructed). Moreover, the discussion in section
\ref{TheQuantumProblemForTime}
showed that not even a {\em {local}} concept of time, which could be used to address the past of some local event, may be available as time is extrapolated backwards in the very early universe. In particular, this seems to be the case if one assumes ``quantum fundamentalism"; the idea that everything is quantum, and that even if something looks classical {\em now}, there was an early time, e.g. $10^{-34}$ seconds, when nothing did. Thus, if all constituents are quantum at the onset of inflation ($\sim 10^{-34}$ s), it seems difficult (or impossible) even to construct a physical notion of proper (local) time along individual world lines which could order events in the very early universe.
The upshot of our discussion on these points was that classical systems
appear to be necessary throughout cosmic history (to have a reasonable time concept). It is standard to hold that quantum gravity sets in at $10^{-43}$ s,
i.e. that there is no time concept ``before" this Planck time. But our discussion indicates that {\em{if}} one believes that everything is quantum, then one has a problem with time in general (and not only in quantum gravity)!
Let us finally briefly consider whether the possible limits to time are a misfortune for cosmology. We think not. Limits in science are good for at least two reasons. First, they should not be seen as stumbling blocks for research but rather as invitations to keep asking questions, e.g. as to which theories might describe what lies beyond the present temporal limits (or how the limits
might be circumvented, e.g. by introducing speculative new physics).
Such invitations can be expected to remain open
since for any postulated theory describing earlier times, it will probably always be possible to ask: what lies beyond {\em that} theory?
This leads to the second reason: The fact, if it is a fact, that there will always be something beyond our (current?) scientific understanding may be aesthetically attractive, if not also comforting.\footnote{See Zinkernagel (2014) for some brief remarks on aesthetics in cosmology.} Both of these reasons for endorsing limits are connected to that feeling of wonder which has been an important driving force throughout the history of cosmology.
\section*{Acknowledgements}
We are grateful for discussions on the above topics with many people
over the years (cf. acknowledgements in previous manuscripts). We also
thank the organizers of the Philosophy of Cosmology conference at
Tenerife for the opportunity to present this work. HZ thanks the Spanish
Ministry of Science and Innovation (Project FFI2011-29834-C03-02) for financial support.
\section*{References}
\noindent
-- Anastopoulos, C. 2002. Frequently asked questions about
decoherence. {\em International Journal of Theoretical Physics}, {\bf 41}, 1573--1590.
\noindent
-- Barrow, J.D. 2005. Worlds without end or beginning. In D. Gough (ed)
{\em The Scientific Legacy of Fred Hoyle}, 93--101. Cambridge: Cambridge University Press.
\noindent
-- Bondi, H. 1960. {\em Cosmology} (2nd edition). Cambridge: Cambridge University Press.
\noindent
-- Ellis, G.F.R. 1999. 83 years of general relativity and cosmology: progress and problems. {\em Classical and Quantum Gravity}, {\bf 16}, A37--A75.
\noindent
-- Ellis, G.F.R. 2012. Space time and the passage of time,
arXiv:1208.2611 [gr-qc].
\noindent
-- Feynman, R.P. 1995. {\em Lectures on Gravitation} (edited by B. Hatfield). Addison-Wesley.
\noindent
-- Hartle, J.B. 1991. The Quantum Mechanics of Cosmology. In S. Coleman et al (eds.)
{\em Quantum Cosmology and Baby Universes}. Singapore: World Scientific.
\noindent
-- Hobson, M. P., Efstathiou, G. P. and Lasenby, A. N. 2006.
{\em General Relativity}. Cambridge University Press.
\noindent
-- Kiefer, C. 1989. Continuous measurement of intrinsic time by fermions.
{\em Classical and Quantum Gravity}, {\bf 6}, 561--566.
\noindent
-- Kiefer, C. 2003. Decoherence in Quantum Field Theory and Quantum Gravity.
In E. Joos et al (eds) {\em Decoherence and the Appearance of a Classical World in
Quantum Theory}, 181--225. Berlin: Springer.
\noindent
-- Krasinski, A. 1997. {\em Inhomogeneous Cosmological Models}. Cambridge: Cambridge University Press.
\noindent
-- Lachi\`eze-Rey, M. 2014. In search of relativistic time.
{\em Studies in History and Philosophy of Modern Physics}, {\bf 46}, 38--47.
\noindent
-- Linde, A. 2004. Inflation, quantum cosmology, and the anthropic principle. In J. D. Barrow, P. C. W. Davies and C. L. Harper, (eds) {\em Science and
Ultimate Reality}, 426--458. Cambridge: Cambridge University Press.
\noindent
-- Misner, C. W., Thorne, K., and Wheeler, J. A. 1973. {\em
Gravitation}. New York: W. H. Freeman.
\noindent
-- Narlikar, J. V. 2002. {\em An Introduction to Cosmology}, (third
edition). Cambridge: Cambridge University Press.
\noindent
-- Peacock, J. A. 1999. {\em Cosmological Physics}. Cambridge: Cambridge
University Press.
\noindent
-- Peebles, P. J. 1993. {\em Principles of Physical Cosmology}. Princeton:
Princeton University Press.
\noindent
-- Penrose, R. 2010. {\em Cycles of Time -- An Extraordinary New View of the Universe}. London: Random House.
\noindent
-- Raychaudhuri, A. K. 1979. {\em Theoretical Cosmology}. Oxford: Clarendon Press.
\noindent
-- Rugh, S. E. and Zinkernagel, H. 2005. Cosmology and the Meaning of Time, 76pp.
Distributed manuscript.
\noindent
-- Rugh, S. E. and Zinkernagel, H. 2009. On the physical basis of cosmic time.
{\em Studies in History and Philosophy of Modern Physics}, {\bf 40}, 1--19.
\noindent
-- Rugh, S. E. and Zinkernagel, H. 2011. Weyl's principle, Cosmic Time and Quantum
Fundamentalism. In D. Dieks et al (eds) {\em Explanation, Prediction and Confirmation. The Philosophy of Science in a European Perspective}, 411--424. Berlin: Springer.
\noindent
-- Rugh, S. E. and Zinkernagel, H. 2013. A Critical Note on Time in the Multiverse,
in V. Karakostas and D. Dieks (eds) {\em Recent Progress in Philosophy of Science: Perspectives and Foundational Problems}, 267--279. Berlin: Springer.
\noindent
-- Scrimgeour, M. I. et al. 2012. The WiggleZ Dark Energy Survey: The
transition to large-scale cosmic homogeneity, arXiv: 1205.6812v2.
\noindent
-- Sudarsky, D. 2011. Shortcomings in the Understanding of Why Cosmological Perturbations Look Classical. {\em International Journal of Modern Physics D}, {\bf 20}, 509--552.
\noindent
-- 't Hooft, G. and Vandoren, S. 2014. {\em Time in Powers of Ten}.
World Scientific.
\noindent
-- Weinberg, S. 1972. {\em Gravitation and Cosmology.} New York: Wiley and
Sons.
\noindent
-- Whitrow, G. J. 1980. {\em The natural philosophy of time.} Oxford:
Clarendon Press.
\noindent
-- Zinkernagel, H. 2008. Did Time have a Beginning? {\em International
Studies in the Philosophy of Science}, {\bf 22} (3), 237--258.
\noindent
-- Zinkernagel, H. 2011. Some trends in the philosophy of physics.
{\em Theoria}, {\bf 26} (2), 215--241.
\noindent
-- Zinkernagel, H. 2014. Introduction: Philosophical aspects of modern cosmology.
{\em Studies in History and Philosophy of
Modern Physics}, {\bf 46}, 1--4.
\noindent
-- Zinkernagel, H. 2016. Niels Bohr on the wave function and the
classical/quantum divide. {\em Studies in History and Philosophy of Modern Physics}, {\bf 53}, 9--19.
\end{document}
\section{Introduction}
The best results of modern physics are the two theories that describe the fundamental interactions, i.e. the Standard Model and General Relativity. However, at present no satisfactory way to unify them is known, especially because of the mathematical and physical difficulties in the development of a quantum General Relativity. Even if it is a general hope that the quantization of gravity would lead to the unification of all interactions, there is no indication that this will follow. For example, Loop Quantum Gravity \cite{Ro}, one of the most promising candidates for a quantum General Relativity, does not contain other interactions, so it achieves no unification yet. Moreover, because the only cosmological sources we observe are gravitational ones, on cosmological scales we can treat gauge carrier fields as perturbations. Therefore, the coexistence, in a unified picture, of a classical formulation for gravity and a quantum theory for the other interactions can be interpreted as a low-energy effect.\\
The great achievement of General Relativity is the geometrization of the gravitational field; following this approach we can try to regard other fields too as geometric properties (geometrical unification models) and, in particular, as space-time metric components (Kaluza-Klein theories). This approach, obviously, implies extra geometrical degrees of freedom, which can be introduced by virtue of extra dimensions. To make the multidimensional model consistent with four-dimensional phenomenology, we require the unobservability of these additional coordinates; due to quantum uncertainty, this can be achieved by taking a compactified extra-space. It is also possible to consider non-compact Kaluza-Klein models, as in the braneworld scenario \cite{RS99}, and they have found applications in string theory; however, in those models a strange kind of unification arises: while the gravitational field propagates in the bulk, the other interactions are restricted to the four-dimensional brane.\\
The oldest model with non-gravitational fields in the metric is the original Kaluza-Klein one \cite{1} \cite{2} \cite{3}, which deals with the unification of the gravitational and electromagnetic interactions in a space-time manifold $V^{4}\otimes S^{1}$. To extend this procedure to a generic Yang-Mills theory, a geometrical implementation of the gauge group and of its algebra has to be performed. The latter is easily achieved by the introduction of a homogeneous extra-space (for the definition of a homogeneous space see \cite{La99}) and by considering its Killing vectors algebra.
Instead, the role of group transformations is played by extra-dimensional translations; however, unless the above-mentioned unobservability is taken into account, the right gauge transformations on fields are not reproduced \cite{Io}. In the end, to geometrize a gauge interaction in a Kaluza-Klein approach, an extra-space whose Killing vectors algebra is the same as the gauge group one is required (for a review see \cite{MKKT} \cite{OW97}).\\
In our work we reproduce in this way all the features of the electro-weak model, an issue on which other multidimensional theories have failed. In particular, we not only derive the free gauge boson Lagrangian from the dimensional splitting of the multidimensional curvature, but we also show how the interaction of gauge bosons with spinor fields can be obtained from the splitting of the free Dirac Lagrangian. Therefore, we are able to give a completely geometric interpretation to the bosonic component and we have to introduce just free spinors.\\
Furthermore, the left- and right-handed four-dimensional fields arise, separately, as the lightest modes of an extra-coordinates expansion, while any linear combination of them acquires a mass term of the order of the compactification scale. So the need for a distinct treatment of the two chirality eigenstates becomes a low-energy effect.\\
There are two space-times suitable for our approach:
\begin{itemize}
\item a 7-dimensional manifold $V^{4}\otimes S^{1}\otimes S^{2}$,
\item a 8-dimensional manifold $V^{4}\otimes S^{1}\otimes S^{3}$.
\end{itemize}
In the first case, we deal with eight-component spinors, so we assign a geometric meaning to the isospin doublet.\\
In the second case, the number of dimensions of the gauge group is the same as that of the extra-space; from this condition we get the identification of gauge charges with extra-components of the field momentum and the equality of the number of leptonic families and quark generations.\\ Moreover, as an extension with respect to canonical Kaluza-Klein theories, a dependence of the extra-space metric on some four-dimensional scalar fields ($\alpha^{m}$) is considered.\\ Finally, the multidimensional analogue of the Higgs boson is introduced; in particular, we define it so that its two components have opposite hypercharges. In this way it is possible to realize the spontaneous symmetry breaking to the $U(1)$ electro-magnetic group and to add, in the Lagrangian density, mass terms for all particles.\\
The work starts in section II with a short review of the electro-weak model, while in section III we define the space-time manifold and show how the reduction of the Einstein-Hilbert action leads to the Einstein-Yang-Mills one and to the terms describing the $\alpha$ dynamics; in section IV, for matter fields, we illustrate the form of the extra-coordinate dependence able to give the right gauge transformations and the conservation of gauge charges. In section V we apply the results of the previous analysis to the space-time manifolds $V^{4}\otimes S^{1}\otimes S^{2}$ and $V^{4}\otimes S^{1}\otimes S^{3}$, where we recast the observed four-dimensional fields into eight- and sixteen-component spinors, respectively. Section VI deals with the introduction of the scalar field responsible for the $SU(2)\otimes U(1)$ symmetry breaking, while in section VII brief concluding remarks follow.
\section{Electro-weak model}
The electro-weak model is a $SU(2)\otimes U(1)$ gauge theory; $SU(2)$
transformations act only on left-handed components in the following way
\begin{equation}
\psi'_{L}=(I+\frac{i}{2}g\delta\omega^{i}T_{i})\psi_{L}
\end{equation}
where $\delta\omega^{i}$ are arbitrary infinitesimal functions, $g$ is the group coupling constant, $T_{i}$ are the group generators (Pauli matrices) and $\psi_{L}$ is constituted by the left-handed lepton (quark) fields of the same family (generation)
\begin{equation}
\psi_{lL}=\left(\begin{array}{c} \nu_{lL} \\ l_{L} \end{array}\right)\qquad\psi^{}_{qL}=\left(\begin{array}{c} u_{qL} \\ d_{qL} \end{array}\right).
\end{equation}
The conserved charges associated to the $SU(2)$ invariance are the three components of the weak isospin
\begin{equation}
I_{i}=\int_{E^{3}}d^{3}x \psi^{\dag}_{l}T_{i}\psi_{l}.
\end{equation}
$U(1)$ transformations act also on right-handed states; in particular, the infinitesimal transformation law for matter fields is the following
\begin{equation}
\psi'=(I+ig'y_{\psi}\delta\omega^{0}I)\psi
\end{equation}
where $g'$ is the $U(1)$ coupling constant and the hypercharge $y_{\psi}$ depends on the field and on its chirality state. In fact, we impose that the weak hypercharge, the conserved charge associated with these transformations
\begin{equation}
Y=y\int d^{3}x \psi^{\dag}\psi,
\end{equation}
is related to the electric charge and the third component of the weak isospin by the following relation
\begin{equation}
\frac{Q}{e}=Y+I_{3}\label{q}.
\end{equation}
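As a quick consistency check of relation (\ref{q}) on the first leptonic family (our addition, using the standard assignments): the left-handed doublet carries $y=-\frac{1}{2}$, so
\begin{eqnarray*}
\nu_{eL}:\quad \frac{Q}{e}=-\frac{1}{2}+\frac{1}{2}=0,\qquad
e_{L}:\quad \frac{Q}{e}=-\frac{1}{2}-\frac{1}{2}=-1,
\end{eqnarray*}
while the right-handed singlet $e_{R}$ has $y=-1$ and $I_{3}=0$, hence $Q/e=-1$, as required.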
By substituting ordinary derivatives with the ones containing gauge connections and by introducing free terms for the gauge bosons, the $SU(2)\otimes U(1)$ invariant Lagrangian density is developed from the Dirac one, i.e.
\begin{eqnarray*}
\Lambda=\sum_{l,q=1}^{3}\frac{i\hbar c}{2}\bigg[D^{\dag}_{\mu}\bar{\psi}_{lL}\gamma^{\mu}\psi_{lL}-\bar{\psi}_{lL}\gamma^{\mu}D_{\mu}\psi_{lL}+D^{\dag}_{\mu}\bar{\nu}_{lR}\gamma^{\mu}\nu_{lR}-\bar{\nu}_{lR}\gamma^{\mu}D_{\mu}\nu_{lR}+
D^{\dag}_{\mu}\bar{l}_{R}\gamma^{\mu}l_{R}-\\-\bar{l}_{R}\gamma^{\mu}D_{\mu}l_{R}\bigg]+\sum_{g=1}^{3}\frac{i\hbar
c}{2}\bigg[D^{\dag}_{\mu}\bar{\psi}_{qL}\gamma^{\mu}\psi_{qL}-\bar{\psi}_{qL}\gamma^{\mu}D_{\mu}\psi_{qL}+D^{\dag}_{\mu}\bar{u}_{qR}\gamma^{\mu}u_{qR}-\\-\bar{u}_{qR}\gamma^{\mu}D_{\mu}u_{qR}+
D^{\dag}_{\mu}\bar{d}_{qR}\gamma^{\mu}d_{qR}-\bar{d}_{qR}\gamma^{\mu}D_{\mu}d_{qR}\bigg]
-\frac{1}{4}B_{\mu\nu}B^{\mu\nu}-\frac{1}{4}G_{i\mu\nu}G_{i}^{\mu\nu}
\end{eqnarray*}
where
\begin{eqnarray*}
B_{\mu\nu}=\partial_{\nu}B_{\mu}-\partial_{\mu}B_{\nu} \\
G^{i}_{\mu\nu}=\partial_{\nu}W^{i}_{\mu}-\partial_{\mu}W^{i}_{\nu}+gc^{i}_{jk}W^{j}_{\mu}W^{k}_{\nu}\\
D_{\mu}\psi_{lL}=[\partial_{\mu}+\frac{i}{2}g\tau_{i}W^{i}_{\mu}-\frac{i}{2}g'IB_{\mu}]\psi_{lL}\\
D_{\mu}l_{R}=[\partial_{\mu}-ig'B_{\mu}]l_{R}\\
D_{\mu}\nu_{lR}=\partial_{\mu}\nu_{lR}\\
D_{\mu}\psi_{qL}=[\partial_{\mu}+\frac{i}{2}g\tau_{i}W^{i}_{\mu}+\frac{i}{6}g'IB_{\mu}]\psi_{qL}\\
D_{\mu}u_{qR}=[\partial_{\mu}+i\frac{2}{3}g'B_{\mu}]u_{qR}\\
D_{\mu}d_{qR}=[\partial_{\mu}-i\frac{1}{3}g'B_{\mu}]d_{qR}.\\
\end{eqnarray*}
The bosons we observe are related to the $SU(2)\otimes U(1)$ gauge ones by the transformations
\begin{equation}
\left\{\begin{array}{c}B_{\mu}=-\sin\theta_{w}
Z_{\mu}+\cos\theta_{w} A_{\mu}\\\\
W^{1}_{\mu}=\frac{1}{\sqrt{2}}(W^{+}+W^{-})\\\\
W^{2}_{\mu}=\frac{i}{\sqrt{2}}(W^{+}-W^{-})\\\\
W^{3}_{\mu}=\cos\theta_{w}Z_{\mu}+\sin\theta_{w}A_{\mu}
\end{array}\right..
\end{equation}
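For reference (our addition, standard in the electro-weak model), the weak mixing angle in the rotation above is fixed by the coupling constants, and the electric charge is recovered as
\begin{equation*}
\tan\theta_{w}=\frac{g'}{g},\qquad e=g\sin\theta_{w}=g'\cos\theta_{w},
\end{equation*}
so that the photon field $A_{\mu}$ couples precisely to the combination $Y+I_{3}$ of relation (\ref{q}).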
In order to obtain a spontaneous $SU(2)\otimes U(1)$ symmetry breaking, a Higgs scalar field, which is an isospin doublet with hypercharge $\frac{1}{2}$, is introduced with a suitable Lagrangian density, i.e.
\begin{eqnarray}
\Lambda_{\phi}=\frac{1}{2}[D_{\mu}\phi]^{\dag}[D^{\mu}\phi]-\mu^{2}\phi^{\dag}\phi-\lambda[\phi^{\dag}\phi]^{2}-\sum_{l}g_{l}[\bar{\psi}_{lL}l_{R}\phi+\phi^{\dag}\bar{l}_{R}\psi_{lL}]-g_{\nu_{l}}[\bar{\psi}_{lL}\nu_{lR}\widetilde{\phi}+\widetilde{\phi}^{\dag}\bar{\nu}_{lR}\psi_{lL}]-\nonumber\\-\sum_{q}g_{uq}[\bar{\psi}_{qL}u_{qR}\phi+\phi^{\dag}\bar{u}_{qR}\psi_{qL}]-g_{dq}[\bar{\psi}_{qL}d_{qR}\widetilde{\phi}+\widetilde{\phi}^{\dag}\bar{d}_{qR}\psi_{qL}]\label{mass}\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\end{eqnarray}
with $\mu^{2}<0$, $\lambda >0$ and $\widetilde{\phi}=-i(\phi^{\dag}T_{2})^{T}$. The Higgs field realizes a spontaneous symmetry breaking, which leads to the conservation of the electric charge only, by fixing its vacuum expectation value as
\begin{equation}
\phi_{0}=\left(\begin{array}{c} 0 \\ \frac{v}{\sqrt{2}}
\end{array}\right).
\end{equation}
In this way, a unified picture of the weak and electro-magnetic interactions is achieved; furthermore, a mechanism by which massive particles arise is obtained, even if the field responsible for this process has not been detected yet.
\section{Geometrization of the electro-weak bosonic component}
Let us consider a space-time manifold
\begin{equation}
V^{n}=V^{4}\otimes B^{k}
\end{equation}
where $V^{4}$ is the ordinary 4-dimensional space-time, with coordinates $x^{\mu} (\mu=0,\ldots,3)$, and $B^{k}$ stands for the extra-dimensional one. The space $B^{k}$ is introduced in such a way that its Killing vectors reproduce the Lie algebra of a $SU(2)\otimes U(1)$ group, i.e.
\begin{equation}
\xi^{n}_{M}\partial_{n}\xi_{N}^{m}-\xi^{n}_{N}\partial_{n}\xi_{M}^{m}=C^{P}_{NM}\xi^{m}_{P}
\end{equation}
where $C^{P}_{NM}$ are $SU(2)\otimes U(1)$ structure constants.\\
In particular, we have the following two possible choices for $B^{k}$
\begin{eqnarray}
i)S^{1}\otimes S^{2}\qquad\\
ii)S^{1}\otimes S^{3}\quad;
\end{eqnarray}
we will denote the coordinates on $S^{1}$ and on $S^{2}$ or $S^{3}$ by $y^{0}$ and $y^{i}$ $(i=1,\ldots,k-1)$, respectively, while $y^{m}$ $(m=0,\ldots,k-1)$ will indicate both of them.\\
We develop a theory invariant under general four-dimensional coordinates transformations and under translations along $B^{k}$
\begin{equation}
\left\{\begin{array}{c} x'^{\mu}=x'^{\mu}(x^{\nu})\\
y'^{m}=y^{m}+\omega^{M}(x^{\nu})\xi^{m}_{M}(y^{n});
\end{array}\right.
\end{equation}
for the metric we take the following ansatz
\begin{equation}
j_{AB}=\left(\begin{array}{c|c}g_{\mu\nu}+\gamma_{mn}\xi^{m}_{M}\xi^{n}_{N}A^{M}_{\mu}A^{N}_{\nu}
& \gamma_{mn}\xi^{m}_{M}A^{M}_{\mu} \\\\
\hline\\
\gamma_{mn}\xi^{n}_{N}A^{N}_{\nu}
& \gamma_{mn}\end{array}\right)
\end{equation}
where $A^{M}_{\mu}$ are gauge bosons fields and $g_{\mu\nu}$ the four-dimensional metric \cite{Io}.\\
For the extra-dimensional metric, we assume
\begin{equation}
\gamma_{mn}(x;y)=\bar{\gamma}_{mn}(y)\,\alpha^{m}(x)\alpha^{n}(x)
\end{equation}
(in the latter expression the indices $m$ and $n$ are not summed), with $\bar{\gamma}_{mn}$ the metric of the homogeneous extra-space and $\alpha^{m}=\alpha^{m}(x)$ some scalar fields that account for the dynamics of the extra-dimensional space. In order to forbid anisotropies of the space $S^{2}$ or $S^{3}$, we take $\alpha^{1}=\alpha^{2}(=\alpha^{3})=\alpha$, thus allowing just size changes.\\
We note that in the 8-dimensional case the extra-space is 4-dimensional, so its dimensionality is the same as that of the gauge group; thus, the Killing vectors can be chosen in such a way that they are directly related to the extra-dimensional n-bein vectors by the following relation
\begin{equation}
\xi^{(n)}_{n}=\alpha^{n}e^{(n)}_{n}
\end{equation}
where in the latter the index n is not summed.\\
We can carry out the dimensional reduction of the Einstein-Hilbert action in $4+k$ dimensions,
\begin{equation}
S=-\frac{c^{3}}{16\pi G_{n}}\int_{V^{4}\otimes B^{k}} \sqrt{-j}{}^n\!R d^{4}xd^{k}y
\end{equation}
and we get the geometrization of the bosonic gauge component, i.e. from the multidimensional curvature their Lagrangian density emerges
\begin{eqnarray}
S=-\frac{c^{3}}{16\pi G}\int_{V^{4}}d^{4}x\sqrt{-g}
\bigg[R+R_{N}-2g^{\mu\nu}\sum_{n=0}^{k-1}\frac{\nabla_{\mu}\partial_{\nu}\alpha^{n}}{\alpha^{n}}-\nonumber\\-g^{\mu\nu}\sum_{n\neq
m=0}^{k-1}\frac{\partial_{\mu}\alpha^{m}}{\alpha^{m}}\frac{\partial_{\nu}\alpha^{n}}{\alpha^{n}}-\frac{1}{4}\sum_{M=1}^{3}(\alpha)^{2}G^{M}_{\mu\nu}G^{M\mu\nu}-\frac{1}{4}(\alpha')^{2}B_{\mu\nu}B^{\mu\nu}
\bigg]\label{az}
\end{eqnarray}
where for the Newton coupling constant and for the curvature term $R_{N}$ we have, respectively ($V$ is the volume of the extra-space),
\begin{eqnarray*}
G=\frac{G_{n}}{V}\qquad\qquad\quad\\
R_{N}=\frac{1}{4}\frac{1}{\alpha^{2}}C^{M}_{NT}C^{N}_{TM}.
\end{eqnarray*}
For details, we refer to our previous work \cite{Io}.\\
Therefore, in a Kaluza-Klein approach we carry out the geometrization by identifying the gauge bosons with the mixed metric components. Unless supersymmetry is introduced, this procedure cannot be extended to fermions, so to account for them we have to introduce spinor matter fields.
\section{Matter fields}
Any matter field is a non-geometric and, thus, an undesired term in a geometrical unification model; however, a remarkable result achieved in previous works \cite{Io} \cite{M04} is the possibility to deal simply with free spinor fields, once we assume for them a suitable dependence on the extra-coordinates.\\
In other words, we take the following matter fields
\begin{equation}
\Psi(x;y)=\frac{1}{V}e^{-iT_{M}\lambda^{M}_{N}\Theta^{N}(y)}\psi(x)\label{mfield},
\end{equation}
where $T_{M}$ are the gauge group generators, $\Theta^{N}$ scalar densities of weight $\frac{1}{2}$ on the extra-space and $\psi$ the four-dimensional fields, while we define the constant matrix $\lambda$ by the following relation
\begin{equation}
(\lambda^{-1})^{N}_{M}=\frac{1}{V}\int
\sqrt{-\gamma}\Big(\xi^{m}_{M}\partial_{m}\Theta^{N}\Big)d^{K}y.\label{lam}
\end{equation}
With such a dependence on the extra-coordinates, the transformation of the field $\psi$ under translations along the Killing vectors $\xi_{M}^{m}$, once the role of the unobservability is taken into account \cite{Io2}, is like the action of a gauge group, i.e.
\begin{equation}
\psi'=\psi+i\omega^{M}T_{M}\psi.
\end{equation}
Moreover, under the same assumption we demonstrate the conservation of gauge charges; in particular, in the 8-dimensional case they can be regarded as extra-components of the field momentum \cite{M04}.\\
The most remarkable result that follows from the hypothesis (\ref{mfield}) is the geometrization of the spinor gauge connections. In fact, starting from the free Dirac Lagrangian, the interaction with gauge bosons emerges
\begin{equation}
\int_{B^{k}}\sqrt{-\gamma}\,
\bar{\Psi}\gamma^{(\mu)}D_{(\mu)}\Psi\, d^{k}y=\bar{\psi}\gamma^{(\mu)}e^{\mu}_{(\mu)}(D_{\mu}+iT_{M}A^{M}_{\mu})\psi.
\end{equation}
However, some additional terms, deriving from
\begin{equation}
\label{mt}\int_{B^{k}}\sqrt{-\gamma}\,
\bar{\Psi}\gamma^{(m)}\partial_{(m)}\Psi\, d^{k}y,
\end{equation}
arise; they produce the standard Kaluza-Klein contribution to the spinor mass, which is of the same order as the compactification scale and leads to very heavy four-dimensional particles. These particles cannot be described by a low-energy theory.\\
Now, because the extra-dimensional $\gamma$ matrices are built from the four-dimensional $\gamma_{5}$ one, we find that the expression (\ref{mt}) vanishes for left-handed and right-handed fields.\\
The latter statement explains why, in a multidimensional theory, the four-dimensional chirality eigenstates still have to be considered: they correspond to the lightest modes of an expansion in the extra-coordinates.\\
In conclusion, to account for fermions interacting with gauge bosons, we need just free left-handed and right-handed spinors, where, again, we stress that the chirality we refer to is the four-dimensional one.
\section{Spinors representation}
At this point, the introduction of fermions is based on the analysis of the previous section. In particular, the action of the $U(1)$ group is reproduced by adding a $y^{0}=\varphi$-dependent phase in front of the four-dimensional spinors
\begin{equation}
\phi(x^{\mu})\rightarrow \widetilde{\Psi}=\frac{1}{\sqrt{L}}e^{in\varphi}\phi\qquad \varphi=[0;2\pi)
\end{equation}
where $L$ stands for the $S^{1}$ length and $n$ is related to the hypercharge of the field $\phi$; in fact, under the infinitesimal translation $\varphi\rightarrow\varphi+f(x^{\mu})$, the transformation law for $\phi$ is
\begin{equation}
\phi\rightarrow\phi+i\,n f(x^{\mu})\,\phi=\phi+i\,\frac{n}{6}\,g(x^{\mu})\,\phi.
\end{equation}
From the latter expression, by imposing that it reproduces the action of the hypercharge $U(1)$ group, the value of $n$ for each spinor is easily obtained
\begin{equation}
n_{\psi}=6y_{\psi}.
\end{equation}
We observe that, in order to maintain the fundamental periodicity of $\widetilde{\Psi}(\varphi)$, $n$ must be an integer; this gives a justification for the observed hypercharge spectrum.\\
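Explicitly, with the SM hypercharge assignments used above, the rule $n_{\psi}=6y_{\psi}$ gives integer values for all the fields (our tabulation):
\begin{equation*}
n(\psi_{lL})=-3,\quad n(l_{R})=-6,\quad n(\nu_{lR})=0,\quad
n(\psi_{qL})=1,\quad n(u_{qR})=4,\quad n(d_{qR})=-2,
\end{equation*}
corresponding to $y=-\frac{1}{2},\,-1,\,0,\,\frac{1}{6},\,\frac{2}{3},\,-\frac{1}{3}$, respectively.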
In an analogous but formally more complex way, for the $SU(2)$ group we make use of the Pauli matrices and, according to equation (\ref{mfield}), the form of the dependence on the $S^{2}$ or $S^{3}$ coordinates is the following
\begin{equation}
\Psi=\frac{1}{\sqrt{V}}e^{-iT_{M}\lambda^{M}_{N}\Theta^{N}(y^{i})}\widetilde{\Psi}\quad M,N=1,2,3
\end{equation}
where $T_{M}$ are $SU(2)$ generators, $V$ stands for $S^{2}$ or $S^{3}$ volume and $\lambda$ obeys the relation (\ref{lam}).\\
Now, because the number of spinor components in a $d$-dimensional space-time is $2^{[\frac{d}{2}]}$ \cite{Av05}, we treat the 7-dimensional and the 8-dimensional space-times separately.\\
In the 7-dimensional case, we deal with eight-component spinors, thus explaining the isospin doublet. In particular, we introduce the following four spinors for any leptonic family and quark generation
\begin{eqnarray}
\Psi_{lL}=\frac{1}{\sqrt{V}\sqrt{L}}e^{-i\sigma_{I}\lambda^{I}_{M}\Theta^{M}}
\left(\begin{array}{c}e^{in_{L}\varphi}\nu_{lL}\\
e^{in_{L}\varphi}l_{L}\end{array}\right)\quad
\Psi_{lR}=\frac{1}{\sqrt{V}\sqrt{L}}\left(\begin{array}{c}\nu_{lR}\\
e^{in_{lR}\varphi}l_{R}\end{array}\right)\quad l=1,2,3\\
\Psi_{qL}=\frac{1}{\sqrt{V}\sqrt{L}}e^{-i\sigma_{I}\lambda^{I}_{M}\Theta^{M}}
\left(\begin{array}{c}e^{in_{L}\varphi}\nu_{lL}\\
e^{in_{L}\varphi}l_{L}\end{array}\right)\quad
\Psi_{qR}=\frac{1}{\sqrt{V}\sqrt{L}}\left(\begin{array}{c}e^{in_{uR}\varphi}u_{lR}\\
e^{in_{dR}\varphi}d_{lR}\end{array}\right)\quad q=1,2,3
\label{spinors}
\end{eqnarray}
with obvious notation for the four-dimensional fields. Thus, we include the right-handed fields of leptons (quarks) of the same family (generation) in the same spinor, and this assumption is consistent with their different interaction properties, since we assume for them a different dependence on the coordinate $\varphi$.\\
In the 8-dimensional case spinors have 16 components, and this allows us to reproduce all Standard Model particles from the following six ones
\begin{eqnarray}
\Psi_{lL}=\frac{1}{\sqrt{V}\sqrt{L}}e^{-iT_{I}\lambda^{I}_{M}\Theta^{M}}
\left(\begin{array}{c}e^{in_{L}\varphi}\nu_{lL}\\
e^{in_{L}\varphi}l_{L}\\e^{in_{qL}\varphi}u_{lL}\\
e^{in_{qL}\varphi}d_{lL}\end{array}\right)\quad
\Psi_{lR}=\frac{1}{\sqrt{V}\sqrt{L}}\left(\begin{array}{c}\nu_{lR}\\
e^{in_{lR}\varphi}l_{R}\\e^{in_{uR}\varphi}u_{lR}\\
e^{in_{dR}\varphi}d_{lR}\end{array}\right)\quad l=1,2,3\label{spinors8}
\end{eqnarray}
being
\begin{equation}
T_{I}=\left(\begin{array}{cc} \sigma_{I} & 0
\\ 0 & \sigma_{I} \end{array}\right)
\end{equation}
where we stress that the Pauli matrices $\sigma_{I}$ act on the isospin doublets.\\
Therefore, in a space-time $V^{4}\otimes S^{1}\otimes S^{3}$ we propose a model in which quarks and leptons are components of the same spinor, but the Lorentz symmetries connecting them are broken by the compactification. The relic features of such a statement are: \begin{itemize}\item common properties under the action of (what a four-dimensional observer interprets as) the gauge group, \item the equality between the number of leptonic families and quark generations.\end{itemize}
In both the 7- and the 8-dimensional cases, from the dimensional splitting of the Dirac Lagrangian for such spinors, we also get relations between $\alpha$, $\alpha'$ and the electro-weak coupling constants $g$, $g'$, and hence an estimate of the extra-dimensional lengths, i.e.
\begin{equation}
\alpha^{2}=16\pi G\bigg(\frac{\hbar}{gc}\bigg)^{2}\qquad
\alpha'^{2}=16\pi G\bigg(\frac{\hbar}{g'c}\bigg)^{2}
\end{equation}
\begin{equation}
\alpha=0.18\times 10^{-31}cm\qquad
\alpha'=0.33\times10^{-31}cm.
\end{equation}
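These numbers can be checked directly. A minimal numerical sketch (our addition): writing $\alpha=\sqrt{16\pi G\hbar/c^{3}}/g=\sqrt{16\pi}\,l_{P}/g$ in terms of the Planck length, with the dimensionless SM couplings $g\simeq 0.65$ and $g'\simeq 0.35$ (assumed values), reproduces the quoted lengths:
\begin{verbatim}
import math

l_P = 1.616e-33           # Planck length in cm
g, g_prime = 0.65, 0.35   # assumed dimensionless SU(2) and U(1) couplings

# alpha = sqrt(16 pi G hbar / c^3) / g = sqrt(16 pi) * l_P / g
alpha = math.sqrt(16 * math.pi) * l_P / g
alpha_prime = math.sqrt(16 * math.pi) * l_P / g_prime

print(f"alpha  = {alpha:.2e} cm")        # ~1.8e-32 cm = 0.18e-31 cm
print(f"alpha' = {alpha_prime:.2e} cm")  # ~3.3e-32 cm = 0.33e-31 cm
\end{verbatim}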
These equalities show that if $\alpha$ and $\alpha'$ are not constants, i.e. if the extra-dimensional space changes in space and time (as we expect in a geometrodynamical theory), the coupling constants acquire a dependence on the (four-dimensional) space-time coordinates, as in the Dirac hypothesis on large numbers \cite{Dir} or in the Brans-Dicke theory \cite{BD}.\\
\section{Spontaneous symmetry breaking}
Now, we have to reproduce the spontaneous symmetry breaking mechanism, in order to account for the only (electro-magnetic) symmetry we find in the physical world and to attribute masses to particles.\\
In a straightforward way we can just introduce a two-component complex scalar field, i.e.
\begin{equation}
\Phi=\left(\begin{array}{c} \Phi_{1}
\\ \Phi_{2}
\end{array}\right)=\frac{1}{\sqrt{V}}e^{-i\sigma_{I}\lambda^{I}_{M}\Theta^{M}}\left(\begin{array}{c}e^{-in_{\phi}\varphi}\phi_{1}
\\ e^{in_{\phi}\varphi}\phi_{2}
\end{array}\right)\quad n_{\phi}=3
\end{equation}
and thus the associated four-dimensional field
\begin{equation}
\phi=\left(\begin{array}{c}\phi_{1} \\ \phi_{2}
\end{array}\right)
\end{equation}
is a weak isospin doublet whose components $\phi_{1}$ and $\phi_{2}$ carry hypercharges $-\frac{1}{2}$ and $\frac{1}{2}$, respectively.\\
We take a Lagrangian density with a Higgs potential
\begin{equation}
\Lambda_{\Phi}=\frac{1}{2}\eta^{(A)(B)}\partial_{(A)}\Phi^{\dag}\partial_{(B)}\Phi
-\mu^{2}\Phi^{\dag}\Phi-\lambda(\Phi^{\dag}\Phi)^{2}
\end{equation}
and, after the dimensional splitting, we get the action for the Higgs boson
\begin{eqnarray}
S_{\Phi}=\frac{1}{c}\int_{V^{4}} d^{4}x \sqrt{-g}
\bigg[\frac{1}{2}g^{\mu\nu}(D_{\mu}\phi)^{\dag}D_{\nu}\phi-
(\mu^{2}-G)\phi^{\dag}\phi-\lambda(\phi^{\dag}\phi)^{2}\bigg]
\end{eqnarray}
up to an additional mass term $G\simeq\frac{1}{\alpha^{2}}\simeq 10^{19}\,GeV$, which would imply an extremely accurate fine-tuning of the potential parameters $\mu$ and $\lambda$ to get a mass $m_{H}\sim 200\,GeV$.\\
Now, we have to fix a suitable vacuum expectation value to get the spontaneous symmetry breaking; but because for $\phi$ the hypercharge generator is opposite to that of the third component of the weak isospin, from (\ref{q}) we have that any vacuum state is invariant under the electric charge symmetry.\\
Therefore, in the unitary gauge, we rewrite $\phi$ around the vacuum as
\begin{equation}
\phi=\left(\begin{array}{c} v_{1}+\sigma(x) \\ v_{2} \end{array}\right) \quad v_{1}=v\cos\theta\quad v_{2}=v\sin\theta.
\end{equation}
In order to obtain massive fermions, a distinct treatment of the 7-dimensional and the 8-dimensional scenarios is required.\\
In the first case, the fermion mass terms in the Lagrangian take the following form
\begin{eqnarray}
\Lambda_{\Psi\phi}=\sum_{l=1}^{3}\big[g_{l}(\bar{\Psi}_{lL}\Phi\Psi_{lR}+\bar{\Psi}_{lR}\Phi^{\dag}\Psi_{lL})\big]
+\sum_{q=1}^{3}\big[g_{u_{q}}(\bar{\Psi}_{qL}\Phi\Psi_{qR}+\bar{\Psi}_{qR}\Phi^{\dag}\Psi_{qL})\big]
\end{eqnarray}
where we define the invariant product of two spinors and the field $\Phi$ as
\begin{equation}
\bar{\Psi}_{1}\Phi\Psi_{2}=\sum_{r=1}^{4}[(\bar{\Psi}_{1})_{r}\Phi_{1}(\Psi_{2})_{r}+(\bar{\Psi}_{1})_{r+4}\Phi_{2}(\Psi_{2})_{r+4}].
\end{equation}
In the second case, we deal with only one constant g, i.e.
\begin{equation}
\Lambda_{\Psi\phi}=g\sum_{l=1}^{3}[\bar{\Psi}_{lL}\Phi\Psi_{lR}+\bar{\Psi}_{lR}\Phi^{\dag}\Psi_{lL}]
\end{equation}
being
\begin{eqnarray*}
\bar{\Psi}_{1}\Phi\Psi_{2}=\sum_{r=1}^{4}[(\bar{\Psi}_{1})_{r}\Phi_{1}(\Psi_{1})_{r}+(\bar{\Psi}_{1})_{4+r}\Phi_{2}(\Psi_{2})_{4+r}+(\bar{\Psi}_{1})_{8+r}\Phi_{1}(\Psi_{lR})_{8+r}+(\bar{\Psi}_{1})_{12+r}\Phi_{2}(\Psi_{2})_{12+r}].
\end{eqnarray*}
However, in both cases we have to redefine the four-dimensional fields by a constant phase (which, obviously, does not modify the previous results)
\begin{equation}
\psi\rightarrow c_{\psi}\psi \qquad c^{*}_{\psi}c_{\psi}=1
\end{equation}
so that the interference between the phases of the right-handed and left-handed fields determines the fermion masses
\begin{equation}
m_{\psi}=gv_{1/2}(c^{*}_{\psi R}c_{\psi L}+c^{*}_{\psi L}c_{\psi R}).
\end{equation}
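Writing the phases explicitly, $c_{\psi L}=e^{i\theta_{L}}$ and $c_{\psi R}=e^{i\theta_{R}}$ (our parametrization), the mass formula becomes
\begin{equation*}
m_{\psi}=2\,g\,v_{1/2}\cos(\theta_{L}-\theta_{R}),
\end{equation*}
so any mass between $0$ and $2gv_{1/2}$ can be realized by a suitable choice of the relative phase.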
At this point, because of the arbitrariness of the coefficients $c$, the mass spectrum of all Standard Model particles can be reproduced by setting $g$ greater than the biggest observed mass (the top mass $m_{t}\simeq 180\,GeV$).\\
Therefore, in this scheme masses are produced by the interference between phases of massless (right-handed and left-handed) fields and not only by the vacuum expectation value of the Higgs field.\\
We note that in the seven-dimensional case, an alternative way to obtain massive fermions is simply to rewrite in our formalism the mass terms of the Lagrangian density (\ref{mass}). The expression needed for this task is the following one \begin{equation}
\Lambda_{\Psi\phi}=\sum_{l=1}^{3}\big[g_{l}(\bar{\Psi}_{lL}\Phi\Psi_{lR}+\bar{\Psi}_{lR}\Phi^{\dag}\Psi_{lL})+g_{\nu_{l}}(\bar{\Psi}_{lL}\widetilde{\Phi}\Psi_{lR}+\bar{\Psi}_{lR}\widetilde{\Phi}^{\dag}\Psi_{lL})\big].
\end{equation}
Finally, we stress again the result of this section: in our scheme it is possible to introduce a scalar field which plays the same role as the Higgs boson in the Standard Model. The difference between them lies in the fact that in our case the scalar field has two components with opposite hypercharges; however, after the spontaneous symmetry breaking, the remaining field Lagrangian density is exactly the same as in the Standard Model.
\section{Concluding remarks}
Therefore, we have developed a model in which both gravity and the electro-weak interactions are geometric; in particular, the geometrization of a $SU(2)\otimes U(1)$ gauge theory requires the introduction of an extra-dimensional space and the identification of gauge transformations with space-time isometries. In this way, gauge bosons arise as metric components.\\
We introduce spinors such that their interactions with gauge bosons are automatically contained in the Lagrangian density and their gauge charges are conserved.\\In the last section we also reproduce the spontaneous symmetry breaking mechanism.\\
Our final action is thus the sum of three terms
\begin{equation}
S=S_{H-E}+S_{\Psi}+S_{\Phi}
\end{equation}
where \begin{itemize}\item the first is the Einstein-Hilbert action, from which the four-dimensional Einstein-Yang-Mills action comes out; \item the second is the Dirac action, which gives the four-dimensional theory for spinors interacting with gauge bosons; \item the third reproduces the spontaneous symmetry breaking mechanism and predicts a Higgs field constituted by two hypercharge singlets.\end{itemize}
Even if the model is developed in a multidimensional scenario, in the low-energy limit the four-dimensional chirality eigenstates arise. In fact, they are the lightest modes, since their mass does not receive any contribution from the extra-dimensional dependence.\\
We have shown that there are two space-times suitable for our approach: $V^{4}\otimes S^{1}\otimes S^{2}$ and $V^{4}\otimes S^{1}\otimes S^{3}$.\\
In the first case, the geometric properties of the space-time force us to introduce 8-component spinors; thus, the necessity to deal with isospin doublets becomes natural in this scheme. This result suggests that we recast even the right-handed fields into the same geometrical object.\\
Instead, the choice of an 8-dimensional space-time manifold leads to 16-component spinor fields, so we propose to put quarks and leptons into the same spinor. As a consequence of this approach, the equality of the number of leptonic families and quark generations arises.\\
Furthermore, in our model we assume that the extra-dimensional spinor connections vanish. This assumption rests on the breaking of general covariance in the extra-space; in fact, a spinor does not change under translations, which are the only symmetries of $B^{k}$.\\
However, the breaking of general covariance itself is an open question; a possible answer could come from the Spontaneous Compactification Mechanism \cite{CS76} \cite{CS77}. A role in this sense might also be played by the $\alpha$ fields, whose dynamics is under investigation.\\
Finally, the prospects of our work concern the inclusion of the strong interactions; as a starting point, we can say that we cannot find a space with Killing vectors able to reproduce the $SU(3)$ algebra. Furthermore, in the 8-dimensional case, the gauge group acts on only some of the spinor components. For these reasons, the next task is the application of this approach to Grand Unification Theories (GUT).
\section{Introduction}
The successful operation of the CERN Large Hadron Collider (LHC) and
the ATLAS and CMS experiments have
led to the discovery of the Higgs boson, the final piece of the
standard model (SM)~\cite{Chatrchyan:2012ufa,Aad:2012tfa} of particle physics.
Future high precision experimental investigations on the couplings of the Higgs boson
are required for a refined understanding of the nature of electroweak symmetry
breaking and for searches for possible new physics beyond the SM.
Higgs boson couplings can be measured to percent level precision
at future lepton colliders, e.g., the International Linear Collider~\cite{1310.8361}
and the Circular Electron-Positron Collider (CEPC)~\cite{CEPC-SPPCStudyGroup:2015csa}, or
with less precision at the high luminosity run of the LHC (HL-LHC)~\cite{1310.8361}.
In addition to high precision, $e^+e^-$ colliders provide direct access to
all possible decay channels of the Higgs boson, including invisible decays,
in a clean environment. They can also measure the total width of the Higgs
boson in a model-independent way.
An important prediction for the SM Higgs boson is that its couplings to other
SM particles are proportional to their masses. It will be essential to test
this relation experimentally. In the SM the Yukawa couplings of the
Higgs boson to light quarks $q$ ($u$, $d$, or $s$) are negligibly small due to the
smallness of their masses. There have been, however, theoretical models that predict enhanced
light-quark Yukawa couplings~\cite{0804.1753,1504.04022}. Experimentally,
if such an enhanced-coupling scenario is observed, it will necessarily
indicate the presence of new physics; the quarks must then also receive masses from
sources other than the Higgs boson in order to remain light.
However, a direct measurement of light-quark Yukawa couplings is impossible
at hadron colliders due to the huge QCD backgrounds for hadronic decays of
the Higgs boson. Indirect constraints can be obtained based on different
kinematic distributions induced by gluon and quark production
mechanisms~\cite{1308.5453,1606.09253,1606.09621} or
through rare decays of the Higgs
boson~\cite{Bodwin:2013gca,Kagan:2014ila,Zhou:2015wra,Koenig:2015pha,Perez:2015lra,Chisholm:2016fzg}.
At lepton colliders, the main measurement difficulty is the separation of the
$q\bar q$ decay channel from the loop-induced gluon channel, both of which generate similar
final states of two untagged jets ($jj$).
In this work, we propose a novel idea: using hadronic event-shape observables of the Higgs
boson decay products to separate the $q\bar q$ from the $gg$ channel and to measure the light-quark
Yukawa couplings at lepton colliders.
Another possibility at lepton colliders involves utilizing the discrimination of quark
jets and gluon jets~\cite{1605.04692}; we leave this for future investigations. Our
idea is motivated by the measurement of the QCD coupling constant at LEP from
hadronic event-shape distributions.
\footnote{Event shapes have been employed to study the spin and $CP$ property of the Higgs
boson at the LHC~\cite{Englert:2012ct,Englert:2013opa}.}
Intuitively, in that case the next-to-leading-order
QCD corrections, $\sim {\mathcal O}(\alpha_s)$, generate the distribution in
the three-jet region. A change of $\alpha_s$ induces changes in the event-shape
distributions, e.g., in the position and height of the peak. Similarly, in the case of
the Higgs boson decay, the real radiation is of ${\mathcal O}(C_X\alpha_s)$,
where $C_X$ is the QCD color factor, i.e. $C_A=3$ for the decay to gluons and $C_F=4/3$ for
the decay to quarks. Thus, a measurement of event-shape distributions can reveal
the average color factor and thereby the ratio of the decay branching ratios (BR) of
the gluon and the quark channels.
In the remaining paragraphs we demonstrate theoretically how the distributions
differ for the quark and gluon channels; we then consider a scenario at the CEPC
and demonstrate that a precision of $<1\%$ can be achieved on the
measurement of the decay BR to light quarks. \\
\section{Event shapes}
Six major observables of hadronic event shapes have been measured at LEP and
used for the extraction of $\alpha_s(M_Z)$: the thrust $T$ (or $\tau=1-T$), the
heavy hemisphere mass $M_H$, the $C$ parameter, the total hemisphere
broadening $B_T$, the wide hemisphere broadening $B_W$, and the
Durham 2-to-3-jet transition parameter $y^D_{23}$~\cite{Abbiendi:2004qz,Heister:2003aj}.
For example, the thrust is defined as
\begin{equation}
T= \max_{\vec n}\left(\frac{\sum_i|p_i\cdot \vec{n}|}{\sum_i|p_i|}\right),
\end{equation}
where $p_i$ is the three-momentum of particle $i$ and the summation runs over all
measured particles. One advantage of the global event-shape observables is that
their distributions can be calculated systematically in perturbative
QCD~\cite{Banfi:2014sua,Banfi:2016zlc}.
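As an illustration of the definition above, the following minimal sketch (our addition) computes the thrust exactly for a small set of particle momenta, using the fact that the optimal axis is parallel to one of the signed momentum sums $\sum_i s_i \vec{p}_i$, $s_i=\pm 1$:
\begin{verbatim}
import itertools
import numpy as np

def thrust(momenta):
    # Exact thrust for a small set of 3-momenta: the optimal axis is
    # parallel to one of the 2^(N-1) signed sums of the momenta, so a
    # brute-force enumeration is exact (and feasible) for small N.
    p = np.asarray(momenta, dtype=float)
    norm = np.linalg.norm(p, axis=1).sum()
    best = 0.0
    for signs in itertools.product([1.0, -1.0], repeat=len(p) - 1):
        axis = p[0] + np.dot(np.array(signs), p[1:])
        if np.linalg.norm(axis) == 0.0:
            continue
        n = axis / np.linalg.norm(axis)
        best = max(best, np.abs(p @ n).sum() / norm)
    return best

# Symmetric three-jet ("Mercedes") event: T = 2/3, i.e. tau = 1/3.
ev = [[1.0, 0.0, 0.0],
      [-0.5,  np.sqrt(3) / 2, 0.0],
      [-0.5, -np.sqrt(3) / 2, 0.0]]
print(thrust(ev))  # ~0.6667
\end{verbatim}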
In the case of a two-body hadronic decay, at leading order (LO),
the thrust distribution is a $\delta$ function at $\tau=0$. Finite thrust values
are generated through higher-order QCD radiation. Soft and collinear emissions
introduce large logarithmic contributions $\sim \alpha_s^n \ln \tau^{2n-1}/\tau$
at small $\tau$, the deep two-jet region. They must be resummed to all orders
in QCD to obtain reliable predictions, e.g., the state-of-the-art next-to-next-to-next-to-leading
logarithmic ($\rm N^3LL$) resummation~\cite{0803.0342,Abbate:2010xh,1411.6633} for
$Z/\gamma^*\rightarrow q\bar q$ in the extraction of $\alpha_s(M_Z)$.
Meanwhile, in the three-jet region the resummed results can be further
matched with fixed-order results, e.g., the next-to-next-to-leading
order (NNLO) calculation for $Z/\gamma^*\rightarrow 3\ jets$ production~\cite{0707.1285,1606.03453}.
Usually, for calculations done at parton level, a correction factor due to
hadronization effects needs to be applied when comparing to experimental data,
which can be estimated through various event generators~\cite{Lonnblad:1992tz,0710.3820,0803.0883,0811.4622}.
To the best of our knowledge, no predictions at comparable precision exist
for hadronic decays of the Higgs boson, although most of the ingredients
are already available. Predictions at $\rm N^3LL+$NNLO level for the Higgs boson
are expected in the near future. In this study, we calculate the
event-shape distributions using the MC event generator Sherpa 2.2~\cite{0811.4622}
with the effective-coupling approach for the Higgs boson. We
use the CKKW scheme~\cite{hep-ph/0109231}, matching parton showers with tree-level
matrix elements with up to three jets, which is effectively of partial next-to-leading-logarithmic
and leading-order accuracy. The hadronization corrections are included
automatically in the Sherpa simulation through hadronization models and decays of hadrons.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth]{full_ev1_shape.eps}
\end{center}
\vspace{-2ex}
\caption{\label{fig:thrust}
Normalized distributions of the thrust in hadronic decays of the Higgs boson,
in $e^+e^-\rightarrow q\bar q$ with a CMS
energy of 125 GeV, and in $e^+e^-\rightarrow Z q\bar q$ with a CMS energy of 250 GeV.
In the latter two cases the thrust is calculated in the hadronic CMS frame.
The lower panel shows the relative
theoretical uncertainties of the normalized distribution for $H\rightarrow gg$,
due to the variations of the renormalization and matching scales.}
\end{figure}
Fig.~\ref{fig:thrust} shows the normalized distribution of the
thrust for several hadronic decay channels of the Higgs boson, including
$gg$, $q\bar q$, $b\bar b$, and $W(q\bar q)W^*(q\bar q)$. For comparison, we also plot the
distribution in the hadronic center-of-mass (CMS) frame
for $e^+e^-\rightarrow Z^*/\gamma^* \rightarrow q\bar q$ with a CMS energy of
125 GeV and for $e^+e^-\rightarrow Zq\bar q$ with a CMS energy of 250 GeV,
requiring a recoil mass of 125 GeV against the $Z$ boson. The
distribution peaks at $\tau\sim 0.02$ for the light-quark decay channel. The
peak shifts to $\tau\sim 0.05$ for the gluon channel, corresponding to
a scaling of roughly $C_A/C_F$. The distribution is much broader in the
gluon case due to the stronger QCD radiation. The distribution for the $b\bar b$ channel is very close
to the $q\bar q$ one, except at very small $\tau$, where the mass and hadronization
effects become important. For the $WW^*$ channel there are already four
quarks at LO and the distribution is concentrated in the large-$\tau$ region.
The distribution for $q\bar q$ from $Z^*/\gamma^*$ differs from
that for the Higgs boson in the
three-jet region because of the different spin. The distribution for $q\bar q$
in $Zq\bar q$ production has a slightly higher peak than the $Z^*/\gamma^*$ case,
mostly due to the different hard-radiation patterns and different
tunings of the parton shower.
In the lower panel of Fig.~\ref{fig:thrust}, we plot the estimated theoretical
uncertainties of the normalized thrust distribution for the decay to
gluons.
Theoretical uncertainties due to the truncation
of the perturbation series are conventionally estimated through QCD scale
variations.
These include variations due to the change of the renormalization
scale and the matching scale~\cite{Jones:2003yv}.
The latter variation mostly affects the distribution in
the large-$\tau$ region.
As one includes higher-order resummation and
fixed-order matching contributions, the scale variations will decrease.
We assume that a $\rm N^3LL+$NNLO calculation for the Higgs boson decay to gluons will
be available and estimate the scale variations based on the
calculation for $Z/\gamma^*$~\cite{0906.3436,0803.0342}, using a scaling factor
of $C_A/C_F$.
Since the distribution is normalized, the uncertainties are
small in the peak region.
The uncertainty due to the $\alpha_s(M_Z)$ input is negligible if the
world average~\cite{Beringer:1900zz} is used.
There are also uncertainties due to the hadronization model used.
Sherpa uses by default a cluster fragmentation model implemented in AHADIC++~\cite{Winter:2003tt},
with which the results in Fig.~\ref{fig:thrust} are simulated.
In Fig.~\ref{fig:hadro} the left plot shows the size of the hadronization corrections,
obtained by taking the ratio of the normalized distributions with and without
the hadronization module in Sherpa.
We can see roughly three patterns of hadronization corrections in Fig.~\ref{fig:hadro}.
All distributions initiated from $q\bar q$ and $b\bar b$ final states receive
similar corrections: the distributions are enhanced by more than 30\% around the peak region and
are correspondingly depleted as the thrust approaches one.
This can be understood since the hadronization effects distribute energy
away from the jet axis.
The hadronization corrections for the $H(gg)$ distribution are much broader in shape and shifted
to the right compared to the $q\bar q$ cases.
Lastly, the $H(WW)$ distribution is further suppressed in the small-$\tau$ region
by the hadronization corrections.
To estimate the uncertainties due to the hadronization corrections, we recalculate
all the distributions with the alternative hadronization model in Sherpa, obtained by
linking to the Lund string fragmentation in PYTHIA 6.4~\cite{Sjostrand:2006za}.
We plot the ratios of the predictions from the two hadronization models in the right
plot of Fig.~\ref{fig:hadro}.
The differences can be large for thrust greater than 0.9, about +10(-5)\% for
$H(b\bar b)$ ($H(gg)$) at thrust $\sim 0.95$, and become even larger when entering
the fully non-perturbative region.
We take the above differences as the size of the hadronization
uncertainties, which are summarized in Table~\ref{tab:hadro} for two representative
bins of $\tau$.
All the $q\bar q$ cases have small uncertainties in the peak region.
The relative signs in Table~\ref{tab:hadro} indicate whether the uncertainties in
different bins are fully correlated or anti-correlated.
Though the hadronization uncertainties of all the channels discussed are derived
from the same models, we conservatively decorrelate the uncertainties of the different
channels, describing them by individual nuisance parameters.
Below, we will discuss the possibility of measuring the distributions discussed above
at a lepton collider and the sensitivity of these measurements to
the light-quark Yukawa couplings. \\
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{cepcerr_1.eps}
\hspace{0.4in}
\includegraphics[width=0.4\textwidth]{cepcnerr_1.eps}
\end{center}
\vspace{-2ex}
\caption{\label{fig:hadro}
%
Ratios of normalized thrust distributions.
Left: predictions with the AHADIC++ hadronization model over those without hadronization corrections;
right: predictions with the Lund hadronization model over those with the AHADIC++ hadronization model.}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{c|cccccc}
\hline
had. unc. (\%) & $H(gg)$ & $H(WW)$ & $H(b\bar b)$ & $H(q\bar q)$ & $Z(q\bar q)$ & $ZZ(q\bar q)$\tabularnewline
\hline
\hline
[0.02, 0.03] & $-5$ & $22$& $-3$ & $-0.3$ & $-0.4$ & $-0.4$\tabularnewline
\hline
[0.05, 0.07] & $-2$ & $-9$& $3$ & $3$ & $4$ & $4$\tabularnewline
\hline
\end{tabular}
\caption{
%
Estimated hadronization uncertainties of normalized distributions of thrust
for two representative bins of $\tau$.
\label{tab:hadro}}
\end{table}
\section{CEPC}
A circular electron-positron collider has recently been proposed
with a center-of-mass energy of 250 GeV and a total integrated luminosity
of 5 ${\rm ab^{-1}}$~\cite{CEPC-SPPCStudyGroup:2015csa}. It can serve as a Higgs
factory, with the dominant production channel being the associated production with a
$Z$ boson, with a total cross section of about 212 fb~\cite{Chen:2016zpw}. One great advantage of the
$e^+e^-$ collider is that the Higgs boson events can be selected by measuring
the recoil mass $m_{\rm recoil}$; e.g., for $ZH$ production with the $Z$ boson decaying into a pair
of visible fermions $f\bar f$,
\begin{equation}
m_{\rm recoil}^2=s-2E_{f\bar f}\sqrt s+m_{f\bar f}^2,
\end{equation}
where $E_{f\bar f}$ and $m_{f\bar f}$ are the total energy and the invariant mass of
the fermion pair. The recoil mass spectrum presents a sharp peak at the
Higgs boson mass. The Higgs boson events can thus be selected with a high signal-to-background
ratio independently of the decay modes of the Higgs boson. Using the kinematic
information of the recoil system, we can boost all decay products back to the rest
frame of the Higgs boson and measure the event-shape distributions in that frame.
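As a minimal numerical sketch of the recoil-mass reconstruction (our addition, with illustrative muon four-momenta in GeV, ordered as $(E,p_x,p_y,p_z)$, from a $Z$ boson recoiling against a 125 GeV Higgs boson at $\sqrt s=250$ GeV):
\begin{verbatim}
import numpy as np

def recoil_mass(p_ff, sqrt_s=250.0):
    # Recoil mass against a visible fermion pair, Eq. (2);
    # four-vectors ordered as (E, px, py, pz).
    E, px, py, pz = np.asarray(p_ff, dtype=float).sum(axis=0)
    m2_ff = E**2 - px**2 - py**2 - pz**2
    m2_rec = sqrt_s**2 - 2.0 * E * sqrt_s + m2_ff
    return np.sqrt(max(m2_rec, 0.0))

# Illustrative Z -> mu+ mu- decay along the Z flight direction.
mu_plus  = (86.29, 0.0, 0.0,  86.29)
mu_minus = (24.09, 0.0, 0.0, -24.09)
print(recoil_mass([mu_plus, mu_minus]))  # ~125 GeV
\end{verbatim}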
Table~\ref{tab:tot} summarizes the decay BRs of the hadronic decays of the SM Higgs
boson and the expected numbers of events at the CEPC through $ZH$ production, with
the $Z$ boson decaying into electron or muon pairs. As one can see, the $q\bar q$ (light quarks)
channel is negligible in the case of the SM Higgs boson. All the hadronic channels
in Table~\ref{tab:tot} contribute to the distribution of the event shapes.
We must carefully select the channel that we are interested in, which is the $jj$
($gg$+$q\bar q$) channel. To suppress the heavy-quark contributions, one
can use flavor tagging of the heavy quarks,
$b$ and $c$, a technique which is well established at hadron and lepton
colliders~\cite{1512.01094}. It has been shown that, assuming an efficiency of 97.2\% for the identification
of gluons or light quarks $j$, the misclassification rates of a $b$ or $c$ quark as $j$ at the CEPC could reach
8.9\% and 40.7\%, respectively~\cite{CEPC-SPPCStudyGroup:2015csa,CEPCtalk}. Since there
are two quarks/gluons from the decay, by requiring both of them to be untagged one can remove 99(84)\%
of the $b\bar b$ ($c \bar c$) background while only reducing the signal $jj$ by 6\%.
There are also backgrounds from other SM processes, especially from SM $Zq\bar q$ production,
which have a flat distribution in the recoil mass. After applying further selection cuts,
e.g., on the recoil mass, the dilepton invariant mass, and the polar angle of the Higgs
boson, we estimate a total signal ($jj$) efficiency of
50\%~\cite{CEPC-SPPCStudyGroup:2015csa,Chen:2016zpw}. We assume a total $q\bar q$-like
background of 30\% of the signal rate, from Higgs boson decays to $b\bar b$ and $c\bar c$ and from
SM $Zq\bar q$ production, of which about 10\% is from $b\bar b$ and $c\bar c$, as can
be calculated from the misidentification rates and the various decay BRs. The normalization
of the $Zq\bar q$ background is estimated according to Fig.~7 of Ref.~\cite{Chen:2016zpw}.
A second category of backgrounds comes from decays to $WW^*$, $ZZ^*$ and further to four
quarks. Since they lie away from the peak region of our signal, as shown in Fig.~\ref{fig:thrust},
they do not have
a large impact on the measurement of the light-quark couplings. We estimate
a total rate of 60\% of the signal for these four-quark backgrounds after all selection cuts.
They can be further suppressed if additional cuts on the dijet invariant masses are used.
Note that we do not impose any selection cuts directly in our calculations of the signal and
backgrounds, but rather estimate their effects on the signal and background normalizations.
\begin{table}[h!]
\centering
\begin{tabular}{c|cccccc}
\hline
$Z(l^+l^-)H(X)$ & $gg$ & $b\bar b$ & $c\bar c$ & $WW^*(4h)$ & $ZZ^*(4h)$ & $q\bar q$\tabularnewline
\hline
\hline
$BR$ [\%] & $8.6$ & $57.7$ & $2.9$ & $9.5$ & $1.3$ & $\sim 0.02$\tabularnewline
\hline
$N_{event} $ & $6140$ & $41170$ & $2070$ & $6780$ & $930$ & $14$\tabularnewline
\hline
\end{tabular}
\caption{The decay branching ratios of the SM Higgs boson with a mass of
$125$ GeV to different hadronic channels~\cite{Heinemeyer:2013tqa} and the corresponding expected
numbers of events in $ZH$ production, with subsequent decays at a $e^+e^-$
collider with $\sqrt s={\rm 250\ GeV}$ and an integrated luminosity of
5 ${\rm ab^{-1}}$. Only decays of the associated $Z$ boson
to electrons and muons are included. $h$ represents any of the quarks
except the top quark and $q$ are light quarks.
\label{tab:tot}}
\end{table}
Including both the signal and backgrounds, the event shape distributions
at hadron level can be expressed as
\begin{align}\label{eq:eve}
\frac{d N}{d O}= & N_S(rf_{H(q\bar q)}(O)+(1-r)f_{H(gg)}(O))
+ N_{B,1}f_{H(b\bar b)}(O) \nonumber \\
&+ N_{B,2}f_{ZZ(q\bar q)}(O) + N_{B,3}f_{H(WW)}(O),
\end{align}
where $N_S$, $N_{B,1}$, $N_{B,2}$, and $N_{B,3}$ are the expected number of events
for the signal, the $q\bar q$-like backgrounds from heavy quarks in Higgs decay and from
$Zq\bar q$ production, and the four-quark background,
respectively.
The interference effects between the Higgs gluonic and fermionic
couplings from higher orders in QCD are suppressed by an additional factor of the quark mass over the Higgs boson mass,
due to chirality violation, and are negligible here.
We normalize the signal rate to the SM result,
$N_S=\lambda N_{S,SM}$
with $\lambda=\sigma(HZ){\rm BR}(jj)/\sigma(HZ){\rm BR}(jj)_{SM}$.
From previous discussions, we have $N_{S,SM}=3070$ and
$N_{B,1(2,3)}=0.1(0.2,0.6)N_{S,SM}$. In addition, $r={\rm BR}(q\bar q)/{\rm BR(jj)}$
is the fraction of the Higgs boson BR to light quarks which we
would like to measure. Both $r$ and $\lambda$ allow possible
deviations from the SM which has $r=0$ and $\lambda=1$.
Note that we assume the Higgs boson couplings to be SM-like when calculating
the various backgrounds, except for the couplings to gluons and light quarks.
Thus a modification of the gluon coupling can only be due to the top quark
or new colored particles in the loop.
In Eq.~(\ref{eq:eve}),
$f_{H(q\bar q)/(b\bar b)/(gg)/(WW)}$ are the normalized distributions of the Higgs boson decay
to light quarks, bottom quarks, gluons, or four quarks through a $W$ boson pair, as shown in
Fig.~\ref{fig:thrust}, and $f_{ZZ(q\bar q)}$ is the normalized distribution for $Zq\bar q$ production.
We simply assume the shape of $f_{H(b\bar b)}$ for the heavy-quark components
of the backgrounds; the impact of using the actual mixture of bottom- and charm-quark distributions
is small.
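The following minimal sketch (our addition) assembles the binned expected spectrum of Eq.~(\ref{eq:eve}) from normalized templates, using the normalizations quoted above; the template shapes themselves are hypothetical inputs, e.g. read off Fig.~\ref{fig:thrust}:
\begin{verbatim}
import numpy as np

N_S_SM = 3070.0  # expected SM signal yield quoted above

def expected_spectrum(r, lam, f_qq, f_gg, f_bb, f_Zqq, f_WW):
    # Binned version of Eq. (3): each f_X is a normalized template
    # (one entry per thrust bin, summing to one); shapes are inputs.
    N_S = lam * N_S_SM
    return (N_S * (r * f_qq + (1.0 - r) * f_gg)
            + 0.1 * N_S_SM * f_bb    # heavy-quark leakage, N_B1
            + 0.2 * N_S_SM * f_Zqq   # SM Z qqbar background, N_B2
            + 0.6 * N_S_SM * f_WW)   # four-quark background, N_B3
\end{verbatim}
Feeding in the SM hypothesis ($r=0$, $\lambda=1$) yields the pseudo-data used in the limit setting below.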
We take into account 11 independent systematic uncertainties for the thrust distribution.
Two of them are the perturbative uncertainties of the normalized distribution
$f_{H(gg)}$, as shown in Fig.~\ref{fig:thrust}; each of them is (anti-)correlated
among all bins. We include five systematic errors for the various normalized shapes
in Eq.~(\ref{eq:eve}) due to the hadronization uncertainties, as discussed earlier.
The other four are for the normalizations
of the signal $N_S$ and of the backgrounds $N_{B,1}$, $N_{B,2}$, and $N_{B,3}$
in Eq.~(\ref{eq:eve}).
We do not assume any correlations among them. The normalization uncertainty on each of
the backgrounds is set to 4\%. The normalization of the signal can be measured separately
using hadronic decays of the $Z$ boson in $ZH$ production with the Higgs boson decaying to $jj$,
and the uncertainty is estimated to be 3\%~\cite{CEPC-SPPCStudyGroup:2015csa}.
We have not included any perturbative uncertainties for the normalized shapes of
the $q\bar q$ signal and the various backgrounds; with future high-precision calculations,
we estimate their effects to be comparable to or smaller than those of the hadronization
uncertainties.
We study the expected exclusion limit on
$r$, as a function of $\lambda$, assuming the decay to $q\bar q$ vanishes.
We generate a large ensemble of pseudo-data according to Eq.~(\ref{eq:eve}) under the hypothesis $r=0$.
Systematic uncertainties are treated using nuisance parameters.
Statistical fluctuations are included according to Gaussian distributions based on the
expected event rates in each bin.
For each pseudo-dataset we determine the exclusion limit on $r$ by using
the profiled log-likelihood ratio $q_{\mu}$ as our
test statistic~\cite{1007.1727}, together with the ${\rm CL_s}$ method~\cite{Read:2000ru}.
Fig.~\ref{fig:ecl1} shows the expected 95\% ${\rm CL_s}$ exclusion limit on
$r$ (dashed line) from the thrust distribution. The colored bands indicate
the $1\sigma$ and $2\sigma$ fluctuations of the expected exclusion limit. In case
the true theory is the SM, the expected exclusion limit on $r$ reaches 0.056,
which is the intersection of the curve with the vertical line.
This corresponds to a decay BR of $0.48$\% to $q\bar q$. In terms of the Yukawa
coupling strength, it implies $y_q < 0.091 y_b$ for any of $q=u,d,s$,
with $y_b$ being the Yukawa coupling of the bottom quark in the SM.
\begin{table}[h!]
\centering
\begin{tabular}{l|cccc}
\hline
& no sys.&+pert.&+nor.&+had.\tabularnewline
\hline
limit on $r$
& $0.036$ & $0.040$ & $0.045$ & $0.056$ \tabularnewline
limit on $r$ (lumi.$\times 10^3$)
& $0.0012$ & $0.0014$ & $0.018$ & $0.019$ \tabularnewline
\hline
\end{tabular}
\caption{
%
Impact of various systematic uncertainties on the expected 95\% CL$_s$ exclusion limit
of $r$ with $\lambda=1$ and a luminosity of 5 ${\rm ab}^{-1}$ or 5000 ${\rm ab}^{-1}$.
Numbers correspond to the exclusion limit without any systematic errors, and adding
various systematic errors in succession.
\label{tab:sys}}
\end{table}
The sensitivity to $r$
can be understood as follows. There
are two main sources of discrimination power when testing a finite $r$ against the SM case:
one is the $q\bar q$-peak region and the other is the $gg$-peak region.
Neglecting statistical errors, in the $q\bar q$-peak
region a finite $r$ (an enhancement) can only be mimicked by a systematic shift
of $N_{B,1(2)}$. Thus the 95\% ${\rm CL_s}$ limit approximately corresponds
to $r\approx 0.3*0.04*1.64\approx 0.02$. On the other hand, in the $gg$-peak region,
a finite $r$ (a deficit) can only be compensated by a systematic shift
of $N_S$; the limit is about $r=0.03*1.64\approx0.05$. When combining
both, the limit is better than 0.02.
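A one-line numerical check of the two estimates (our addition; 1.64 is the one-sided 95\% Gaussian quantile):
\begin{verbatim}
z95 = 1.64
print(0.3 * 0.04 * z95)  # ~0.020: q-qbar peak, background-norm limited
print(0.03 * z95)        # ~0.049: gg peak, signal-norm limited
\end{verbatim}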
After considering the statistical fluctuations and the other systematic errors, the
limit increases to 0.056, as shown in Fig.~\ref{fig:ecl1}. We further illustrate
the impact of the various systematic uncertainties on the exclusion limit
of $r$ in Table~\ref{tab:sys}.
The numbers correspond to the exclusion limit without any systematic errors,
and with the various systematic errors added in succession.
We can see that the uncertainty is dominated by the
statistical error; the hadronization uncertainties have a moderate
impact. For comparison, we also list the results for a data sample 1000 times
larger.
We can also include invisible decays of the
associated $Z$ boson in the analysis. They have a total
rate 3 times larger than the electron and muon channels, but suffer from a relatively larger
$Zq\bar q$ background due to a degradation of the signal-background separation power of
the recoil mass. Thus, we simply assume that, once the $\nu\bar\nu$ channels are included, both the
signal and the backgrounds double. The expected limit is again plotted in Fig.~\ref{fig:ecl1};
it reaches 0.047 under the SM assumption.
In principle, several of the backgrounds, e.g. $f_{H(b\bar b)}$ and $f_{ZZ(q\bar q)}$,
can be measured directly in control regions from independent data samples.
We briefly comment on these possibilities below.
\begin{itemize}
\item
Heavy-quark components: in this case one can require the quark/gluon to be flavor
tagged rather than untagged. With a typical $b$-tagging efficiency of 60\%, the
misidentification rate for light flavors is negligible~\cite{1512.01094}, not to mention the further
suppression from the Higgs boson decay branching ratio. Thus we can arrive at
a pure sample of $b\bar b$ of around $2\times 10^4$ events at the CEPC. That corresponds to an
uncertainty of 1.5\% from statistical fluctuations for the bin $[0.02,\,0.03]$ of $\tau$,
comparable to the number in
Table~\ref{tab:hadro} (see the numerical check after this list). One question that needs to
be addressed is how the various flavor-tagging algorithms may change the distributions of the
event-shape observables.
\item
$Zq\bar q$ component: we can require the recoil mass of the lepton pair to be slightly
off the Higgs boson mass to remove all events from Higgs boson decays. For instance, we
can select two recoil-mass windows of $[110,\,120]$ GeV and $[130,\,140]$ GeV and take
the average of the two measured distributions as $f_{ZZ(q\bar q)}$. That corresponds to about
$4\times 10^3$ events in each window and gives an uncertainty of 2.1\% for the
bin $[0.02,\,0.03]$ of $\tau$, which is much larger than the number in Table~\ref{tab:hadro}.
\end{itemize}
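A quick Poisson-statistics check of the two control samples above (our addition; the per-bin fractions are assumptions read off the peak region of Fig.~\ref{fig:thrust}):
\begin{verbatim}
import math

def bin_rel_unc(n_events, bin_fraction):
    # Relative statistical uncertainty of a template in one bin.
    return 1.0 / math.sqrt(n_events * bin_fraction)

print(bin_rel_unc(2e4, 0.22))      # b-bbar sample: ~1.5%
print(bin_rel_unc(2 * 4e3, 0.28))  # two Zqq sidebands combined: ~2.1%
\end{verbatim}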
Similar exclusion limits can be set based on other event-shape observables, which are
summarized in Fig.~\ref{fig:ecl2} for $\lambda=1$. Definitions of the event-shape observables
shown in Fig.~\ref{fig:ecl2} can be found in Refs.~\cite{Abbiendi:2004qz,Heister:2003aj}.
Here, only the statistical error and the
systematic uncertainty on the signal and background normalizations are included in the analysis;
thus the limits shown are optimistic with respect to the various theoretical uncertainties.
As already seen in Table~\ref{tab:sys}, the various theoretical uncertainties contribute
at a level comparable to the statistical uncertainty for the thrust distribution.
The binnings used in the analysis for all the other
distributions are chosen to be the same as in Ref.~\cite{hep-ph/0503051}. All distributions
show a similar sensitivity to the light-quark Yukawa couplings.\\
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth]{full_ev1_cepc_excl.eps}
\end{center}
\vspace{-2ex}
\caption{\label{fig:ecl1}
Expected 95\% $\rm CL_s$ exclusion limit on $r$ and the $1\sigma$ and $2\sigma$
fluctuations as a function of the total cross section of the Higgs boson decay
to $jj$ normalized to the SM value. The dot-dashed line is the expected exclusion limit
when invisible decays of the $Z$ boson are also included in the analysis.}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.45\textwidth]{sum_obs_withw.eps}
\end{center}
\vspace{-2ex}
\caption{\label{fig:ecl2}
Expected 95\% $\rm CL_s$ exclusion limit on $r$ and the $1\sigma$ and $2\sigma$
fluctuations based on measurements of different event shape observables and assuming
a theory of the SM. Theoretical uncertainties on the event shape distributions are not included.}
\end{figure}
\section{Discussion and summary}
It is interesting to compare our sensitivity to the light-quark
Yukawa couplings with the projections for
the LHC and HL-LHC. Ref.~\cite{1606.09621} claims an expected 95\% CL limit on
the Yukawa couplings of $y_{u,d}<0.4y_b$ for the 13 TeV LHC run with a total luminosity
of 300 ${\rm fb^{-1}}$, based on an analysis of the $p_T$ distribution of the Higgs
boson. Ref.~\cite{1606.09253} reports a sensitivity of $y_s\sim 0.52 y_b$
for the strange quark at the HL-LHC.
Compared with these results, our method provides a much stronger
sensitivity of $y_{u,d,s}<0.091y_b$ (95\% $\rm CL_s$).
The major limitation on
probing the light-quark Yukawa couplings at the LHC/HL-LHC is that the $gg$ parton
luminosity is much larger than the $q\bar q$ one for a Higgs boson mass
of 125 GeV. Thus, a small downward shift of the $gg$-induced cross sections relative
to the experimental data, due to either experimental or theoretical uncertainties, can accommodate a
much larger light-quark Yukawa coupling.
We also comment on the comparison of our proposal with the possibility of using
gluon/quark jet discriminators. On the theory side, the event shape distributions can
be calculated systematically in perturbative QCD, and the theoretical uncertainties
are under control. Experimentally, the hadronic event-shape observables have
been studied extensively at LEP, and the experimental systematics are well understood.
By comparing with the experimental results on the $\alpha_s(M_Z)$ measurement~\cite{hep-ph/0503051,1101.1470},
we find that the sensitivity obtained in this study is realistic. Even
after all the experimental systematics are included, the expected exclusion limit
should not change greatly.
In summary, we have proposed a novel idea for measuring the light-quark Yukawa
couplings using hadronic event shape distributions in addition
to the conventional measurement of Higgs couplings at lepton colliders. We
show that for a $e^+e^-$ collider with a center-of-mass energy of 250 $\rm GeV$
and an integrated luminosity of 5 $\rm ab^{-1}$ one can expect to exclude a decay BR
of 0.48\% for the Higgs boson decay to $q\bar q$ at 95\% $\rm CL_s$, with $q$
being any of the $u,d,s$ quarks, assuming an SM-like theory with modifications
only to the Higgs boson couplings to the gluon and the light quarks. That corresponds
to an exclusion limit on the light-quark Yukawa couplings of about 9\% of the strength
of the bottom quark coupling in the SM.
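As a rough consistency check of this conversion, assume that the partial width scales as $\Gamma(H\to q\bar q)\propto y_q^2$ and take the approximate SM value ${\rm BR}(H\to b\bar b)\approx 58\%$ (both are assumptions made only for this estimate); then
\begin{equation*}
\frac{y_q}{y_b^{\rm SM}}
\approx \sqrt{\frac{{\rm BR}(H\to q\bar q)}{{\rm BR}(H\to b\bar b)_{\rm SM}}}
= \sqrt{\frac{0.0048}{0.58}}
\approx 0.091,
\end{equation*}
consistent with the quoted limit of about 9\%.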
\begin{acknowledgments}
JG would like to thank Manqi Ruan, Hua Xing Zhu, C. Wagner and E. Berger
for useful conversations, and
J. Huston for proofreading of the paper.
The work of JG is sponsored by Shanghai Pujiang Program.
\end{acknowledgments}
\section{Introduction}
Linear Schr\"odinger operators $\calH$ of the form $\calH:=-\triangle\,+\,V$ composed of the $d$-dimen\-sio\-nal Laplacian $\triangle$ and a non-negative potential $V$ are an important building block for the mathematical modeling of quantum-physical processes related to ultracold bosonic or photonic gases -- so-called Bose-Einstein condensates \cite{Gro61,Pit61,LSY01,Alaeian_2017}. The cases where the potential $V$ exhibits disorder \cite{NBP13} or represents quantum arrays in the context of Josephson oscillations \cite{WWW98,ZSL98} have raised particular attention. Surprising phenomena such as Anderson localization of eigenfunctions \cite{And58} are connected to such oscillatory and disordered potentials. Anderson localization refers to exponentially localized low-energy stationary states which are caused exactly by the interplay of disorder (randomness) and high amplitude (contrast) of the potential trap. For matter waves, this has been experimentally observed, cf.~\cite{2008Natur.453..891B,2008Natur.453..895R}. While localization is understood qualitatively in an asymptotic sense or can be justified a posteriori in the mathematical model, the a priori prediction of the localization effect in terms of geometric parameters that characterize the potential and its degree of disorder remained open and this paper aims to close this gap.
\smallskip
We consider a bounded domain $D\subset\R^d$ and prototypical disorder potentials that vary randomly between two values $\alpha$ and $\beta$, where $\alpha \ll \tfrac{1}{\eps^2} \le \beta$ on a small scale $\eps$. The main result of the paper states that sufficiently disordered potentials imply the existence of $K$ points $z_1,\ldots,z_K\in D$ and some rate $c>0$, which only depends logarithmically on~$\eps$, such that the ground state $u_1$ satisfies
\begin{equation*}
\Vvert u_1\Vvert_{D \setminus \bigcup_{j=1}^K B(z_j;\,\eps k^2)}
\lesssim e^{-ck}\, \Vvert u_1\Vvert_D
\end{equation*}
for all $k=1,2,\ldots$.
Here $\Vvert v\Vvert_\omega^2:=\Vert \nabla v\Vert_\omega^2 + \Vert V^{1/2} v\Vert_\omega^2$ denotes the energy norm and $B(z;\, r)$ the ball of radius~$r$ centered at~$z$ in the sup norm. This is illustrated in Figure~\ref{fig:locstates} for a prototypical disorder potential. Similar results hold true for eigenstates in a certain range of energies at the bottom of the spectrum.
The proof of existence of exponentially localized states consists of three main steps. The first step is the quantification of the exponential decay of the Green's function associated with $\calH$ in terms of the oscillation length and the amplitude of the potential in Section~\ref{sec:precond}. Disorder is not essential at this point. This novel result is inspired by numerical homogenization for arbitrarily rough diffusion tensors \cite{MalP14,HenP13} which in turn is closely connected to the exponential decay of the corrector Green's function \cite{Pet16}. An elegant new proof of the latter decay property was later given in \cite{KorY16, KorPY18}. This pioneering approach employs classical results from domain decomposition and the convergence theory of iterative solvers for linear systems arising from the discretization of partial differential equations.
\begin{figure}
\centering
\includegraphics[width=0.29\linewidth]{pics/state3dpot1_checker_eps_6_beta_14}\hspace{2ex}
\includegraphics[width=0.32\linewidth]{pics/state3dpot2_checker_eps_6_beta_14}\hspace{2ex}
\includegraphics[width=0.32\linewidth]{pics/state3dpot3_checker_eps_6_beta_14}\\[1ex]
\includegraphics[width=0.32\linewidth]{pics/absstate1_checker_eps_6_beta_14}\hspace{1ex}
\includegraphics[width=0.32\linewidth]{pics/absstate2_checker_eps_6_beta_14}\hspace{1ex}
\includegraphics[width=0.32\linewidth]{pics/absstate3_checker_eps_6_beta_14}
\caption{Schr\"odinger eigenstates (homogenous Dirichlet Boundary condition) associated to the three smallest eigenvalues (from left to right) in a disorder potential. Top row: Graphs of eigenstates. Bottom row: Isolines of moduli representing exponential decay in scales of $\varepsilon$. The disorder potential is a random checkerboard on a Cartesian mesh of the unit square of width $\epsilon=2^{-6}$ taking values $\beta = 4/\varepsilon^2$ (black) and $\alpha = 1$ (white).
\label{fig:locstates}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{pics/spectrum_per_1_2_3_4}\hspace{2ex}
\includegraphics[width=0.4\linewidth]{pics/spectrum_random_1_2_3_4}\\[1ex]
\includegraphics[width=0.4\linewidth]{pics/spectrum_per_random}\hspace{2ex}
\includegraphics[width=0.4\linewidth]{pics/spectrum_per_random_zoom}
\caption{Schr\"odinger eigenstates in periodic and disorder potential on the unit interval. Top left: Eigenstates associated to the four smallest eigenvalues in a discontinuous periodic potential oscillating on a partition of width $\varepsilon=2^{-8}$ with values $\alpha =1$ and $\beta = 8/\varepsilon^2$, i.e.~$V(x)=\beta$ if $\lfloor{x/\eps}\rfloor$ is even and~$V(x)=\alpha$, otherwise. Top right: Eigenstates for smallest eigenvalues in a realization of a discontinuous random potential on a partition of width $\varepsilon=2^{-8}$ oscillating randomly between i.i.d.~values~$\alpha =1$ and $\beta = 8/\varepsilon^2$. Bottom row: Spectra for periodic ({\color{blue}$\circ$}) and random (${\color{red}\times}$) potential.}\label{fig:spectra}
\end{figure}
In the second step, the decay property of the Green's function is transferred to the decay of eigenstates in the following sense. There exists a subspace of dimension $K$ that contains the lowermost eigenstates up to an arbitrary accuracy \quotes{$\tol$}. More precisely, the subspace is spanned by functions with support in local sub-domains with a diameter of order $\calO(\eps\log(1/\eps)^p\log(1/\tol))$ for some exponent~$p$
and, hence, the eigenfunctions are well approximated by functions supported in the union of $K$ small sub-domains. This is shown in Section~\ref{sect_localization} by designing a preconditioned block inverse iteration for the solution of the eigenvalue problem. The size of the subspace has to be chosen sufficiently large so that the rate of convergence of the block inverse iteration is only weakly dependent on $\varepsilon$ (through a factor of order $\mathcal{O}(\log(1/\eps))$).
The final step then regards the estimation of the parameter $K$ which determines whether or not the localization phenomenon can be observed in the bounded domain $D$. E.g., for perfectly periodic potentials, eigenvalues are clustered in a staircase fashion with large clusters so that $K$ is of order $\eps^{-d}$ and the local sub-domains essentially aggregate to the whole domain (see Figure~\ref{fig:spectra} for an illustration and Section~\ref{sect:gaps:periodic} for the details). Here, the presence of disorder changes the picture.
In the one-dimensional model problem of Figure~\ref{fig:spectra} significant spectral gaps are observed after a few modes so that a moderate $K$ is possible. To prove that $K$ is indeed smaller in the disordered case, we study two model scenarios in Sections~\ref{sect:gaps:random} and~\ref{sect:gaps:domino}. In the first model the potential has a tensor product structure and in the second model the potential consists of randomly structured \quotes{domino blocks}.
We show that in both cases $K$ is of moderate size (with high probability) and, hence, the eigenstates are essentially localized in $K$ balls of radius $\eps$ each (up to logarithmic factors).
\smallskip
This is not the first attempt to understand this localization phenomenon mathematically. In the remaining part of the introduction, we shall give a brief survey of what is known about the localization of eigenfunctions of the (continuous) Schr\"odinger operator.
A classical localization result for non-negative, real-valued, smooth potentials on $\mathbb{R}^d$ with sufficiently high amplitude states that eigenstates below a certain energy level are exponentially localized towards infinity, cf.~\cite[Th.~3.4]{HiS96} and \cite{Agm82}. The speed of the exponential decay can be measured in an Agmon metric and depends on $V$ and the energy $E$. As for our result, localization may also be triggered by disorder. For certain types of randomly perturbed periodic potentials $V$ in full space (excluding perfectly periodic potentials), the lowermost eigenvalues of~$\calH$ are proved to have finite multiplicity and the corresponding eigenfunctions are exponentially localized towards infinity \cite[Cor.~1.4]{GeK13}. In contrast to our result, this is an asymptotic result where the rate of decay at infinity is qualitatively described in an abstract manner.
An even earlier and one of the first results in the context of localization was obtained in a one-dimensional setting. The result states that for a certain class of random potentials in $1d$ that are generated by regular Markov diffusion processes, {\it all} eigenfunctions in the spectrum decay exponentially \cite{GoM76,GMP77}. This classical result, however, cannot be generalized to higher dimensions, even for potentials with large amplitude \cite[Sect.~4.7.5]{ChS14}. Further literature on Anderson localization for random Schr\"odinger operators in full space includes \cite{CKM87,GMR15,KLS90,KMP86,KlM06}, \cite[Sect.~7.2]{GreN13}, and the references therein. There also exists a vast literature for lattice Schr\"odinger operators (also known as discrete Schr\"odinger operators), which in particular includes Anderson's original tight binding model. Here, upper bounds for the energy are typically not necessary to prove localization, provided that the disorder is sufficiently strong.
Here we refer to the early works~\cite{FrS83,FMS85} and~\cite{AiM93,Aiz94} as well as the monograph~\cite{ChS14} for a more recent overview on localization results for discrete Schr\"odinger operators.
A recent observation in the direction of quantitative results beyond asymptotics at infinity links the localization of the ground state $u_1$ with $\| u_1 \|_{L^{\infty}(D)}=1$ on bounded domains $D\subset \mathbb{R}^d$ to the solution $\psi \in H^1_0(D)$ of the homogeneous elliptic equation $\calH\psi=1$, cf.~\cite{FilM12}.
This so-called landscape function $\psi$ majorizes the modulus $|u_1|$ pointwise up to the multiplicative factor $E$, the energy of $u_1$. This bound implies that in regions where $\psi$ is small compared to $E$, $u_1$ needs to be small as well. Follow-up work \cite{ArnDJMF16,ArnDFJM19b} demonstrates that the original problem can be reformulated as an eigenvalue problem with an effective confining potential of the form $1/\psi$. In this setting, it is possible to apply the techniques of~\cite{Agm82,HiS96} to establish an exponential decay of the eigenfunction of the form $|u_1(x)| \le \operatorname{exp}(-\rho(x_0,x))$, where $x_0$ is a center of localization of the eigenfunction and $\rho(x_0,x)$ is the distance of $x$ and $x_0$ in an Agmon metric, i.e., $\rho$ minimizes the path energy $\int_{\gamma}\sqrt{( 1/\psi(\gamma(s)) - E )_+} \,\text{d}s$ amongst all paths $\gamma$ from $x_0$ to $x$. For smooth potentials, this observation shows that the eigenfunction $u_1$ needs to change at least by the factor $2$ in some ball around $x_0$ where the (a priori unknown) difference between potential and eigenvalue becomes sufficiently large~\cite{Steinerberger2017}. Since it is not known where the landscape function $\psi$ is strictly smaller than $1/E$, the above estimate may degenerate to $|u_1(x)|\le 1$ and hence allows no rigorous a priori prediction of the localization of~$u_1$.
Nevertheless, the landscape function was successfully applied to obtain empirically accurate predictions of localization regions~\cite{LuS18} and its local maxima provide rough approximations of the lower-most eigenvalues~\cite{ArnDFJM19}.
In contrast to the landscape techniques, which are purely a posteriori, the new quantitative results of this paper allow one to rigorously predict, a priori, the emergence of exponentially localized states depending on the degree of disorder.
\section{Schr\"odinger Eigenvalue Problem}\label{sec:evp}
This section introduces the Schr\"odinger eigenvalue problem and discusses, for a representative class of oscillatory potentials described by suitable geometric parameters, some elementary properties such as a lower bound for the minimal energy.
Throughout this paper, we use the notation $a \lesssim b$ for the existence of a generic constant $c>0$, independent of the parameters that characterize the admissible class of the potentials~$V$, such that $a \le cb$. Moreover, we use the notation $\plog$ for polynomials in the logarithm.
\subsection{Model problem}\label{sec:evp:model}
We consider the eigenvalue problem of Schr\"odinger type with a highly oscillatory potential, which may reflect disorder. The following class of potentials is representative for the localization effects to be studied in this paper while its characterization by a small number of geometric and statistical parameters simplifies the presentation significantly.
Let $\calT$ denote a partition that divides $\mathbb{R}^d$ into closed cubes with side length $\eps>0$, where $\eps^{-1} \in \mathbb{N}$ and $\eps \mathbb{Z}^d$ is the set of vertices. The partition induces a mesh on the unit cube $D := (0,1)^d$ through the quotient space $\calT \slash_{\sim}$
with the equivalence relation for $q_1,q_2 \in \calT$ given by
\begin{align*}
q_1 \sim q_2 \qquad \Leftrightarrow \qquad q_1 = \mathbf{k} + q_2\ \mbox{ for some } \mathbf{k} \in \mathbb{Z}^d.
\end{align*}
Observe that the partition $\calT \slash_{\sim}$ consists of equivalence classes $[q]_{\sim}$, each with exactly one representative in $D$. This definition reflects that we can consider our problem on the unit cube, extended by periodicity to the whole of $\R^d$.
Defining the space of $D$-periodic $H^1$-functions by $\V := \Hper$, the corresponding variational formulation of the eigenvalue problem reads as follows: Given a non-negative potential $0\leq V\in L^\infty(D)$, find non-trivial eigenpairs $(u, \E) \in \V\times\R$ such that
\begin{align}
\label{eq:EVP}
a(u, v)
:= \int_{D} \nabla u(x) \cdot \nabla v(x) + V(x)\, u(x) v(x) \,\text{d}x
= \E\, (u, v)
\end{align}
for all test functions $v \in \V$. Here, $(\,\cdot\,,\cdot\,)$ denotes the $L^2$-inner product on $D$. The periodic boundary conditions encoded in this variational problem are not essential and may be replaced by homogeneous Dirichlet boundary conditions. The potential~$V$ is assumed to be piecewise constant with respect to the mesh $\calT \slash_{\sim}$, cf.~Figure~\ref{fig_potentials}. This prototype of large-amplitude and highly oscillatory potentials is defined through
\[
V(x) = \begin{cases}
\alpha, & x\in \Oa, \\
\beta, & x\in \Ob.
\end{cases}
\]
This means that $\Oa$ and $\Ob$ are the sub-domains of $D$ on which $V$ equals $\alpha$ and $\beta$, respectively. Further, we assume $\overline{D} = \overline{\Oa} \cup \overline{\Ob}$. The corresponding subpartitions are denoted by ${\calTa\slash_{\sim}}$ and ${\calTb\slash_{\sim}}$.
\begin{figure}
\input{fig_potential2}
\caption{Illustrations of the potential $V$ in two space dimensions, in which the gray parts represent $\Ob$ (where $V(x)=\beta$) and the white parts $\Oa$ (where $V(x)=\alpha$). Periodic (upper left), realization of a random tensor product (upper right), domino block (lower left), and a fully random potential (lower right).}
\label{fig_potentials}
\end{figure}
We are interested in the particular regime of $\beta\gg 1$, moderate $\alpha\ge0$, and small $\eps$. Furthermore, we assume that $\beta$ is not smaller than $\eps^{-2}$, which we will make more precise later on. We have in mind potentials where the distribution of $\Oa$ and $\Ob$ follows statistical laws. However, the actual statistics will become relevant only in Section~\ref{sect:gaps} in connection with the identification of spectral gaps.
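For concreteness, the following minimal sketch (in Python) generates one realization of such a fully random potential; the parameter values mirror Figure~\ref{fig:locstates}, and the Bernoulli probability $1/2$ as well as the seed are ad-hoc choices made only for this illustration.
\begin{verbatim}
import numpy as np

# Sketch: one realization of a random checkerboard potential on the
# eps-mesh (d = 2); eps, alpha, beta as in the figure above, the
# Bernoulli probability 1/2 is an assumption.
rng = np.random.default_rng(0)
eps = 2.0**-6
n = int(1.0 / eps)                        # eps^-1 cells per direction
alpha, beta = 1.0, 4.0 / eps**2

is_beta = rng.random((n, n)) < 0.5        # True marks an Omega_beta cell
V_cells = np.where(is_beta, beta, alpha)  # piecewise constant on the mesh

def V(x, y):
    """Evaluate the potential at a point of the unit square (periodically)."""
    return V_cells[int(x / eps) % n, int(y / eps) % n]
\end{verbatim}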
In Section~\ref{sect_localization} we will exploit the operator formulation of \eqref{eq:EVP}. For this, we introduce the operators $\calA\colon \V\to\V^*$ and $\calI\colon \V\to\V^*$, defined by
\[
\langle \calA u, v \rangle_{\V^*,\V} := a(u, v), \qquad
\langle \calI u, v \rangle_{\V^*,\V} := (u, v)
\]
for functions $u,v\in\V$. Note that $\calA$ denotes the weak form of the Schr\"odinger operator $\calH$ and that the eigenvalue problem \eqref{eq:EVP} is equivalent to $\calA u = E\, \calI u$.
To shorten notation, we simply write $\Vert\cdot\Vert:=\sqrt{(\cdot,\cdot)}$ for the canonical $L^2$-norm on $D$. We also introduce the $V$-weighted $L^2$-norm,
\[
\Vert v \Vert_V^2
:= (Vv,v)
= \int_D V(x)\, |v(x)|^2\,\text{d}x
\]
as well as the energy norm,
\[
\Vvert v \Vvert^2
:= a(v, v)
= \Vert \nabla v\Vert^2 + \Vert v\Vert_V^2.
\]
Furthermore, we denote the norm on a sub-domain $\Oa$ or $\Ob$ by an additional subscript.
\subsection{Geometry and cut-off function}\label{sec:evp:cutoff}
Before we can estimate the Schr\"odinger states from below, we need to introduce additional notation on the geometry of the potential. First, we define the set of cubes within the partition $\calT \slash_{\sim}$, namely
\[
\calQ
:= \left\{ \hspace{2pt} \bigcup_{i=1}^{m} \hspace{2pt} [q_i]_{\sim} \ \Big|\ Q = \bigcup_{i=1}^{m} q_i \subseteq \R^d \text{ is a (closed) cube and union of $m$ elements } q_i\in \calT \right\}.
\]
Note that this implies that all cubes in $\calQ$ have side length $\eps k$ for some natural number $1 \le k\le \eps^{-1}$. Second, we define the set of {\em maximal cubes} in $\Oa$ and $\Ob$, respectively, by
\[
\calQnu
:= \big\{ Q\in \calQ\ |\ Q \subseteq \overline{\Onu} \text{ and there is no } Q' \in \calQ \text{ with } Q\subset Q'\subseteq \overline{\Onu} \big\},
\]
for $\nu=\alpha,\beta$. Note that $\bigcup_{Q\in\calQnu}Q = \overline{\Onu}$. Since $\calT \slash_{\sim}$ is a quotient space, we can interpret $\calQ$ and $\calQnu$ as containing \quotes{cubes} that are extended over the periodicity interface. Such cubes are connected in $\R^d$, but can be disconnected as subsets of the unit cube $D$. Whenever one of the following arguments requires an element of $\calQ$ or $\calQnu$ to be connected, we shall interpret it as a subset of $\R^d$, where values outside of $D$ are obtained through periodicity. This will be done without further mention. For brevity, we shall from now on abuse notation and simply write $\calT$ instead of $\calT \slash_{\sim}$. The same is done for $\calTa$ and $\calTb$.
The cubes in $\calQa$ characterize the potential valleys, i.e., regions where the potential takes the value $\alpha$. Finally, we define the {\em maximal width of a potential valley} in $\calT$ by
\[
L := \max_{Q\in \calQa} \frac{h_Q}{\eps} \in \N,
\]
where $h_Q$ denotes the side length of a cube $Q$.
In the trivial setting $V\equiv \beta$, where $\calQa$ is empty, we set $L:=1$.
With this characteristic value, we are able to bound the maximum number of overlapping maximal cubes in $\calQa$, namely
\[
\kappa_\calT
:= \max_{q\in \calTa} \big| \{ Q\in\calQa\ |\ q\subseteq Q \} \big|
\le L^d.
\]
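For intuition, the following minimal sketch computes $L$ for a one-dimensional realization, where maximal cubes are simply maximal runs of consecutive $\alpha$-cells and $\kappa_\calT=1$; the Bernoulli probability and the seed are ad-hoc choices, and the periodic wrap-around is ignored for brevity.
\begin{verbatim}
import numpy as np

# Sketch: maximal valley width L in one dimension, i.e., the longest run
# of consecutive alpha-cells (periodic wrap-around ignored for brevity).
def max_valley_width(is_alpha):
    """L = length (in cells) of the longest run of alpha-cells, at least 1."""
    best = run = 0
    for cell in is_alpha:
        run = run + 1 if cell else 0
        best = max(best, run)
    return max(best, 1)

rng = np.random.default_rng(0)
cells = rng.random(2**8) < 0.5     # True marks an alpha-cell
print(max_valley_width(cells))     # typically O(log(1/eps)) for a fair coin
\end{verbatim}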
\begin{remark}
In the periodic setup as in Figure~\ref{fig_potentials} (upper left) we have $L=1$ and $\kappa_\calT=1$. Note that the value of $L$ remains unchanged if $\alpha$ and $\beta$ are swapped in this example.
\end{remark}
\begin{remark}
In the one-dimensional setting there are no overlapping maximal cubes, i.e., we have $\kappa_\calT = 1$ for $d=1$.
\end{remark}
The exponential localization of the Green's function and eigenstates in oscillatory potentials requires sufficiently high amplitudes of the potential. This is quantified in the subsequent assumption depending on the oscillation length $\eps$. Loosely speaking, we shall assume that the strength of the potential peaks, characterized by the parameter $\beta$, is large compared to the inverse of the square of the oscillation length $\eps$. This assumption resembles the scaling in a typical physical setup \cite{NatureGreinerEtAl}. If we consider, for instance, an optical lattice potential $V$ that is generated by a laser diode operating at a wavelength $\eps=\lambda$, then the potential oscillates at a frequency of order $\eps^{-1}$. On the other hand, the maximum potential depth $\beta$ is measured in units of the recoil energy, which itself is proportional to $\lambda^{-2}=\eps^{-2}$. This is precisely the relation that we shall assume for $\eps$ and $\beta$.
Assumptions on the strength of the potential valleys, characterized by the parameter $\alpha$, are not needed for the exponential decay of the Green's function.
Thus, until Section~\ref{sect:gaps} we only assume $0\le\alpha\le\beta$, which includes the particular case of the constant potential $V\equiv \beta$.
\begin{assumption}
\label{ass_epsBeta}
The coefficient $\beta$ is large in the sense that it satisfies the estimate $\beta \gtrsim \eps^{-2}$ and we assume that $\calTb\neq\emptyset$.
\end{assumption}
To derive energy estimates, we introduce a cut-off function $\eta\colon D \to [0,1]$. This function is assumed to be smooth, equal to $1$ in $\Oa$, and to vanish in each cube of side length $\eps/2$ centered in an element of $\calTb$, cf.~Figure~\ref{fig_cutoff}. In other words, $\eta$ vanishes in each $\beta$-peak of the potential $V$. Further, we assume that $\Vert \nabla\eta\Vert_{L^\infty(D)} \lesssim \eps^{-1}$. We emphasize that this implies a Friedrichs-type inequality.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.0]
\fill[gray, opacity=0.2] (0,0) -- (4,0) -- (4,4) -- (0,4) -- cycle;
\draw[thin, pattern=north west lines, pattern color=myBlue, opacity=0.5] (0,0) -- (4,0) -- (4,4) -- (0,4) -- cycle;
\fill[myBlue] (1,1) -- (3,1) -- (3,3) -- (1,3) -- cycle;
\node at (3.4, 0.5) {$\Ob$};
\node at (5.0, 3.5) {$\Oa$};
\draw[myRed,decorate,decoration={brace}] (-0.1, 0) -- (-0.1, 4);
\node[myRed] at (-0.4, 2) {$\eps$};
\draw[myRed,decorate,decoration={brace}] (0.9, 1) -- (0.9, 3);
\node[myRed] at (0.5, 2) {$\eps / 2$};
\node at (5.0, 0.5) {\small $\eta = 1$};
\node at (1.1, 3.5) {\small $\eta \in (0,1)$};
\node at (2.0, 2.0) {\small $\eta = 0$};
\end{tikzpicture}
\end{center}
\caption{Illustration of the cut-off function $\eta$, which is constant $1$ in~$\Oa$ and vanishes in the interior of each element of $\calTb$. }
\label{fig_cutoff}
\end{figure}
\begin{lemma}
\label{lem_eta_Friedrich}
Assume that $\calTb\neq\emptyset$.
Consider a function $v\in \V$ and the cut-off function $\eta$ introduced above. The product $\eta v$ then satisfies the estimate
$$
\| \eta v \|_{L^2(D)} \leq
\cl \hspace{1pt} \eps L \hspace{2pt} \|\nabla (\eta v) \|_{L^2(D)}
$$
with some generic constant $\cl\lesssim \kappa_\calT L^{d}\le L^{2d}$.
\end{lemma}
The proof of the lemma is given in Appendix \ref{appendix-a}, where we also present a refined version of this Friedrichs-type inequality.
\begin{example}
In the extreme case of only a single $\beta$-element in $\calT$, we have $L\approx \eps^{-1}$ and the result degenerates in the sense that we lose the factor~$\eps$ in the estimate.
On the other hand, if the potential satisfies $V\equiv \beta$, then we get the classical Friedrichs inequality~$\| \eta v \|_{L^2(D)} \le \eps\, \|\nabla (\eta v) \|_{L^2(D)}$.
In the case of a random potential with correlation length of the order $\varepsilon$, one obtains with high probability maximal valleys of size $L\approx\log(1/\eps)$. Thus, we get a Friedrichs-type inequality, which contains the factor~$\eps$ but also logarithmic terms.
\end{example}
\subsection{Lower bound on the energy}\label{sec:evp:bound}
With the estimate of Lemma~\ref{lem_eta_Friedrich}, we are able to give a lower bound for the spectrum of $\calH$.
For this, we will bound the scaled energy of a function $v\in\V\setminus\{ 0\}$,
\begin{align}
\label{def-Ev}
\E(v)
:= \frac{a(v,v)}{\Vert v\Vert^2}
= \frac{\Vvert v\Vvert^2}{\Vert v\Vert^2}
\end{align}
from below. In the assumed regime $\beta\gtrsim\varepsilon^{-2}$ this lower bound is in the order of~$\eps^{-2}$.
In the following, we tacitly use the convention that $\E(v)$ is only considered for $v\neq 0$.
\begin{lemma}
Under Assumption~\ref{ass_epsBeta} we have
\[
E^1
:= \min_{v\in \V} \E(v)
\gtrsim \frac{1}{\cl^2 (\eps L)^2}.
\]
\end{lemma}
\begin{proof}
For an arbitrary function $v\in \V$ we obtain
\begin{align*}
\Vert v \Vert^2
&= \Vert v \Vert_\Oa^2 + \frac{1}{\beta}\, \Vert v \Vert_{V,\Ob}^2 \\
&\le \Vert \eta v \Vert^2 + \frac{1}{\beta}\, \Vert v \Vert_{V,\Ob}^2 \\
&\lesssim
\cl^2 \hspace{1pt} (\eps L)^2\, \Vert \nabla(\eta v) \Vert^2 + \frac{1}{\beta}\, \Vert v \Vert_{V, \Ob}^2 \\
&\lesssim \cl^2 \hspace{1pt} (\eps L)^2\,
\Big( \frac{1}{\eps^2\beta}\, \Vert v \Vert_{V,\Ob}^2 + \Vert \nabla v \Vert^2 \Big) + \frac{1}{\beta}\, \Vert v \Vert_{V, \Ob}^2 \\
&= \cl^2 \hspace{1pt} (\eps L)^2\, \Vert \nabla v \Vert^2 +
\big( 1 + \cl^2 L^{2} \big) \frac{1}{\beta}\, \Vert v \Vert_{V, \Ob}^2.
\end{align*}
Thus, Assumption~\ref{ass_epsBeta} yields the estimate
\begin{align}
\label{estimate_L2norm}
\Vert v \Vert^2
\lesssim \cl^2 \hspace{1pt} (\eps L)^2\, \Vvert v \Vvert^2.
\end{align}
This shows that the energy is bounded from below by
\begin{align*}
\E(v)
= \frac{\Vvert v\Vvert^2}{\Vert v\Vert^2}
\gtrsim \frac{1}{\cl^2 (\eps L)^2}.
\end{align*}
Using the characterization of eigenvalues by the Rayleigh quotient, we directly obtain the stated lower bound for the ground state of the Schr\"odinger equation.
\end{proof}
\begin{remark}
\label{rem_sharp}
Estimate~\eqref{estimate_L2norm} is sharp with respect to the maximum width of potential valleys $\eps L$, i.e., there exists a non-trivial function $u\in \V$ with $\Vert u \Vert^2 \gtrsim (\eps L)^2 \, \Vvert u \Vvert^2$. To see this we consider the first eigenfunction $u\in H^1_0(Q_L)$ of the shifted Laplace eigenvalue problem
\[
\int_{Q_L} \nabla u(x) \cdot \nabla v(x) \,\text{d}x
= \big(\lambda-\alpha\big) \int_{Q_L} u(x)\, v(x) \,\text{d}x
\]
with test functions $v\in H^1_0(Q_L)$ on a maximal $\alpha$-cube $Q_L \in \calQa$ with side length $\eps L$. According to \cite[Ch.~10.4]{Str08}, the first eigenvalue satisfies $\lambda-\alpha = d \hspace{2pt} \pi^2 / (\eps L)^2$ such that~$u$, extended by zero to a function in $\V$, satisfies
\[
\Vert u\Vert^2
= \Vert u\Vert_{Q_L}^2
= \frac{1}{\lambda}\, \Vvert u\Vvert_{Q_L}^2
= \frac{1}{\lambda}\, \Vvert u\Vvert^2
\gtrsim \frac{(\eps L)^2}{d\pi^2} \Vvert u\Vvert^2,
\]
provided that~$\alpha$ is of moderate size.
Similar arguments will be used in Section~\ref{sect:gaps} where we prove the existence of spectral gaps.
\end{remark}
\section{Exponential Decay of the Green's Function}\label{sec:precond}
This section shows that the Green's function associated with the Schr\"odinger operator decays exponentially relative to the parameter $\varepsilon$ that reflects the characteristic length of oscillation of the potential. The proof is strongly inspired by a recent innovative proof of the exponential decay of the corrector Green's function in the context of numerical homogenization for arbitrarily rough diffusion coefficients by Kornhuber and Yserentant \cite{KorY16}, see also \cite{KorPY18} and earlier work \cite{MalP14, HenP13, Pet16}. The idea is to show that the Schr\"odinger operator can be preconditioned by an operator that is local with respect to a decomposition of the domain into cubic sub-domains with diameter~$2\eps$.
The spectrum of the arising preconditioned operator is proved to be clustered around~$1$ so that simple iterative solvers approximate the action of the inverse Schr\"odinger operator applied to some compactly supported function (or the point evaluation functional) up to an accuracy $\tol$ in only~$\mathcal{O}(\plog(1/\eps) \log(1/\tol))$ steps.
The locality of the preconditioned operator ensures that the diameter of the support of the approximation is of the same order. This means that the Green's function associated with $\calH$ decays exponentially in units of~$\eps L$. The result is independent of the degree of disorder of the potential and remains valid in the perfectly periodic case.
\subsection{Overlapping domain decomposition on the $\varepsilon$-scale}
We introduce an overlapping decomposition of $D$, which we will later use to define the local preconditioner.
For this, we consider the nodes corresponding to the mesh $\calT$, which we denote by~$\calN$.
For each node $z\in \calN$ let $\lambda_z$ be the standard $Q_1$-basis function \cite[Sect.~3.5]{BreS08}, i.e., $\lambda_z$ is a piecewise polynomial of partial degree one with $\lambda_z(z)=1$ and $\lambda_z(w)=0$ for any other node $w\in\calN\setminus\{z\}$. Again one has to take care of the assumed periodicity of the domain and the resulting identification of the boundary nodes. Altogether, this gives a set of functions which forms a partition of unity on $D$, i.e.,
\begin{align}
\label{eqn_partunity}
\sum\nolimits_{z\in \calN} \lambda_z \equiv 1.
\end{align}
The patches of the $Q_1$-hat functions define small subdomains
\[
D_z := \supp \lambda_z
\]
for each $z\in\calN$.
By definition, the patches $D_z$ are cubes of side length~$2\eps$, and each $T\in\calT$ is contained in $2^d$ of these subdomains.
For an illustration of such a patch we refer to Figure~\ref{fig_localPatch}.
\begin{figure}
\input{fig_patch2}
\caption{Illustration of the local (overlapping) patches $D_z$ for three exemplary nodes. Gray squares are regions with $V(x)= \beta$.
}
\label{fig_localPatch}
\end{figure}
Based on the patches $D_z$, we define local $H^1$-spaces by
\[
\V_z
:= H^1_0(D_z)
= \big\{ v\in H^1(D_z)\ |\ v=0 \text{ on } \partial D_z \big\}.
\]
Elements of $\V_z$ are considered to be extended by zero outside of $D_z$. Moreover, recall that we interpret $D_z$ as a subset of $\R^d$. This implies that a function $v\in H^1_0(D_z)$ respects the periodicity over opposite edges (or faces for $d=3$) of $\partial D$ and does not necessarily fulfill $v=0$ on $\partial D$.
We define the corresponding $a$-orthogonal projection $\Pz\colon \V\to\V_z$ by the variational problem
\[
a(\Pz u, v) = a(u,v)
\]
for test functions $v\in \V_z$. Note that the trivial embedding $\V_z\ensuremath{\hookrightarrow}\V$ allows us to consider $\Pz$ as a mapping from $\V$ to $\V$. We may also define $\tPz\colon \V^*\to \V_z$ by
\[
a(\tPz F, v) = \langle F, v\rangle_{\V^*,\V}
\]
for test functions $v\in \V_z$. Letting $\calA\colon \V\to\V^*$ denote the operator representation of $a(\cdot\,,\cdot)$, we have the relation $\Pz = \tPz \calA$.
\subsection{Optimal $\varepsilon$-local preconditioner}
Combining all local projections, we obtain the operator
\begin{align}
\label{def_calP}
\calP
:= \sum\nolimits_{z\in\calN} \Pz.
\end{align}
This defines a mapping $\calP\colon \V \to\V$, where the canonical embeddings $\V_z\ensuremath{\hookrightarrow} \V$ are implicitly used. It is easy to see that this operator is continuous. Accordingly, we define $\tilde \calP\colon \V^*\to\V$ by~$\tilde \calP:= \sum\nolimits_{z\in\calN} \tPz = \calP \calA^{-1}$.
We emphasize that the operator $\calP$ is quasi-local with respect to the $\eps$-mesh $\calT$, since \quotes{information} can only propagate distances of order $\eps$ each time that $\calP$ is applied.
The remaining part of this section aims to show that $\tilde\calP$ defines a good approximation of $\calA^{-1}$ and thus, serves well as a preconditioner within iterative solvers for linear equations and the Schr\"odinger eigenvalue problem. Following the abstract theory for additive subspace correction
or additive Schwarz methods for operator equations \cite{KorY16} (see also \cite{Xu92,yserentant_1993} for the matrix case), we need to verify that the energy norm of a function $u\in \V$ can be bounded in terms of the sum of local contributions from the spaces $\V_z$.
\begin{lemma}
\label{lem_K2}
For every decomposition $u = \sum_{z\in\calN} u_z$ with $u_z\in \V_z$ we have
\[
\Vvert u \Vvert^2
\le K_2\, \sum_{z\in\calN} \Vvert u_z \Vvert^2
\]
with $K_2 = 2^d$.
\end{lemma}
\begin{proof}
We use the local supports of $u_z$ and the fact that for $T\in\calT$ there are at most $2^d$ functions $u_z$ with support on $T$. Thus, we can estimate on a single element,
\[
\Vvert u \Vvert_{T}^2
= \bigVvert \sum\nolimits_{z\in\calN} u_z \bigVvert_{T}^2
\le 2^d \sum\nolimits_{z\in\calN} \Vvert u_z \Vvert_T^2.
\]
A summation over all $T$ yields the assertion.
\end{proof}
We now need the reverse estimate for one specific decomposition of $u\in\V$ in the local spaces~$\V_z$.
Therefore, we define the local functions $u_z := \lamz u$ for all $z\in\calN$.
From~\eqref{eqn_partunity} we know that $\sum_{z\in\calN} u_z = u$.
For this particular decomposition we can prove the following lemma.
\begin{lemma}
\label{lem_K1}
Given Assumption~\ref{ass_epsBeta} and the decomposition of $u\in\V$ as above, it holds that
\[
\sum_{z\in\calN} \Vvert u_z \Vvert^2
\lesssim K_1\, \Vvert u \Vvert^2
\]
with constant $K_1 := 2^{d+1} (1 + \cl^2 L^2 \big)$.
\end{lemma}
\begin{proof}
With the estimate~\eqref{estimate_L2norm} we directly obtain
\begin{align*}
\sum_{z\in\calN} \Vvert u_z \Vvert^2
&= \sum_{z\in\calN}\Big(\Vert \nabla(\lamz u) \Vert^2 + \Vert \lamz u \Vert_V^2 \Big)\\
&\le \sum_{z\in\calN} \Big( 2\, \Vert \nabla u \Vert_{D_z}^2 + 2\eps^{-2}\Vert u \Vert_{D_z}^2 + \Vert u \Vert_{V,D_z}^2 \Big)\\
&\le 2^{d+1} \Vvert u \Vvert^2 + 2^{d+1} \eps^{-2} \Vert u \Vert^2\\
&\lesssim 2^{d+1} (1 + \cl^2 L^2) \Vvert u \Vvert^2.
\end{align*}
Note that we have again used the fact that the maximal number of overlapping patches is $2^d$.
\end{proof}
\begin{remark}
In the periodic setting with $L=1$ one can show that $K_1 \lesssim 2^{d+2}$, i.e., $K_1$ is independent of $\eps$.
\end{remark}
Note that we have used Assumption~\ref{ass_epsBeta} in the previous lemma. We emphasize that such a condition is needed, since otherwise one would obtain $K_1 \approx \eps^{-2}$ in the worst case.
The subsequent result is a direct consequence of the previous Lemmata~\ref{lem_K2} and \ref{lem_K1}, cf.~\cite[Lem.~3.1 and Th.~3.2]{KorY16}.
\begin{corollary}
\label{cor_K1K2}
Given Assumption~\ref{ass_epsBeta}, we obtain the norm equivalence
\begin{align}
\label{estimates_P}
K_1^{-1} a(v,v)
\le a(\calP v, v)
\le K_2\, a(v,v)
\end{align}
for all $v\in\V$ with the constants $K_1$ and $K_2$ from Lemmata~\ref{lem_K1} and~\ref{lem_K2}.
\end{corollary}
Recall that $\calP\colon\V\to\V$ from \eqref{def_calP} has led to the definition of $\tilde\calP\colon\V^*\to\V$ by $\tilde{\calP}\calA = \calP$. With this operator, the estimate \eqref{estimates_P} can be rewritten in the form
\[
{K_1}^{-1}\, a(v,v)
\le \langle \calA \tilde\calP \calA v, v \rangle
\le K_2\, a(v,v).
\]
We summarize a number of properties of the operator $\calA \tilde\calP \calA$.
\begin{lemma}
\label{lem_APA}
The operator $\calA \tilde\calP \calA\colon\V \to\V^*$ is symmetric, coercive, and continuous. As a consequence, the operator is also invertible.
\end{lemma}
\begin{proof}
The symmetry follows from the definition of $\calP$, namely
\begin{align*}
\langle \calA \tilde\calP \calA u, v \rangle
= a(\calP u, v)
&= \sum\nolimits_{z\in\calN} a(\Pz u, v) \\
&= \sum\nolimits_{z\in\calN} a( u, \Pz v)
= a(u, \calP v)
= \langle u, \calA \tilde\calP \calA v \rangle .
\end{align*}
The coercivity follows directly from the coercivity of $a(\cdot\,,\cdot)$ and \eqref{estimates_P}. Finally, the continuity follows from the boundedness of $a(\cdot\,,\cdot)$ and $\calP$.
\end{proof}
Estimates of the form \eqref{estimates_P} are well known in the preconditioning community for the computation of eigenvalues of a symmetric and positive definite matrix. The following approximation result, together with the local computability, then results in a well-designed preconditioner.
\begin{theorem}
\label{thm_tildeP_gamma}
Under Assumption~\ref{ass_epsBeta} and for the scaling factor $\vartheta := 1/(K_2+K_1^{-1})$, with $K_1$ and $K_2$ from Lemmata~\ref{lem_K1} and~\ref{lem_K2}, there exists a positive constant $\gamma_\calP < 1$ such that
\[
\Vvert \id - \vartheta {\calP}\Vvert
= \Vvert \id - \vartheta\tilde{\calP}\calA\Vvert
:= \sup_{v\in\V} \frac{\Vvert v - \vartheta\tilde{\calP}\calA v\Vvert}{\Vvert v\Vvert}
\le \gamma_\calP < 1.
\]
\end{theorem}
\begin{proof}
By the spectral equivalence \eqref{estimates_P} we conclude that for any $v\in\V$ it holds
\[
(1 - \vartheta K_2)\, \Vvert v \Vvert^2
\le a(v-\vartheta {\calP}v, v)
= a(v,v) - \vartheta a(\calP v, v)
\le (1- \vartheta K_1^{-1})\, \Vvert v \Vvert^2.
\]
Furthermore, for any linear operator $\calQ\colon \V\to\V$ we have
\[
\Vvert \calQ\Vvert^2
:= \sup_{v\in\V, \Vvert v\Vvert = 1} a(\calQ v, \calQ v)
\le \sup_{v\in\V, \Vvert v\Vvert = 1} \sup_{w\in\V, \Vvert w\Vvert = 1} a(\calQ v, w)\, \Vvert \calQ\Vvert
\le \Vvert \calQ\Vvert^2.
\]
Thus, all estimates are in fact equalities.
If $a(\calQ\, \cdot\,, \cdot)$ defines in addition a scalar product in $\V$, then we get by the polarization identity
\[
\Vvert \calQ\Vvert
= \sup_{v\in\V, \Vvert v\Vvert = 1} \sup_{w\in\V, \Vvert w\Vvert = 1} a(\calQ v, w)
= \sup_{v\in\V, \Vvert v\Vvert = 1} a(\calQ v, v).
\]
By the mentioned spectral equivalence we know that $\calQ := \id -\vartheta {\calP}$ defines a scalar product for $\vartheta < 1/K_2$ such that the choice $\vartheta := 1/(K_2+K_1^{-1})$ gives
\[
\Vvert \id - \vartheta {\calP}\Vvert
= \sup_{v\in\V, \Vvert v\Vvert = 1} a((\id - \vartheta {\calP})v, v)
\le 1- \vartheta K_1^{-1}
= \frac{K_2}{K_1^{-1} + K_2}. \qedhere
\]
\end{proof}
The proof of Theorem~\ref{thm_tildeP_gamma} provides an explicit formula of the upper bound, namely $\gamma_\calP \le K_2 / (K_1^{-1} + K_2)$.
This shows that, given Assumption~\ref{ass_epsBeta}, $\gamma_\calP$ only depends on the geometry of the potential, which is encoded in the constants $K_1$ and $K_2$, but not on the actual values $\alpha$ and $\beta$ and thus, not on their contrast.
Note, however, that $\gamma_\calP$ depends on $L$, which itself may depend on $\eps$.
\subsection{Exponential decay}
\label{sect_decay_Greens_func}
In the last part of this section we want to relate the previous results to the exponential decay of the Green's function associated with the differential operator $\calH$. For this, let $f\in L^2(D)$ be a given function with local support and consider the problem of finding $u \in \V$ with $\calA u=F$, where $F:=(f,\,\cdot\,)\in \V^*$. To approximate $u$, we define the iteration
\begin{align}
\label{sec3-preliminiary-iterations}
u^{(k)}
:= u^{(k-1)} + \vartheta \big( \tilde \calP F - \calP u^{(k-1)} \big)
= u^{(k-1)} + \vartheta \calP \big( u - u^{(k-1)} \big)
\end{align}
for $k\ge 1$ and trivial starting value $ u^{(0)}=0$.
First, we observe that $u^{(1)}=\vartheta \tilde \calP F = \vartheta \calP u$ is a local function, because its computation only includes the solution of local problems. To see this, note that
\[
u^{(1)}
= \vartheta\tilde\calP F
= \sum_{z \in\calN} \vartheta\tilde\calP_z F
=: \sum_{z \in\calN} u_z^{(1)} ,
\]
where all $u_z^{(1)}$ are fully defined by local test functions $v_z\in\V_z$ through
\[
a(u_z^{(1)} , v_z)
= \vartheta\, a(\calA^{-1} F, v_z)
= \vartheta\, \langle F, v_z \rangle_{\V^*, \V}
= (\vartheta f, v_z).
\]
This implies that the application of $\tilde \calP$ maintains locality in the sense that the support of $u^{(1)}=\vartheta\tilde\calP F$ is at most one~$\eps$-layer larger than the support of $f$. Inductively, we immediately see that the support of
$$
u^{(k)} = u^{(1)} + (\id - \vartheta \calP )\, u^{(k-1)}
$$
is at most $k$ $\eps$-layers larger than the support of $f$.
Next, we observe from the definition of $u^{(k)} $ in~\eqref{sec3-preliminiary-iterations} that
\begin{align*}
u - u^{(k)}
= (\id - \vartheta \calP) \big( u - u^{(k-1)} \big)
= (\id - \vartheta \calP)^k \big( u - u^{(0)} \big)
= (\id - \vartheta \calP)^k u.
\end{align*}
Applying Theorem \ref{thm_tildeP_gamma}, we obtain
\begin{align*}
\Vvert u - u^{(k)} \Vvert
\le \gamma_\calP^k\, \Vvert u \Vvert,
\end{align*}
i.e., we have that $u^{(k)}$ converges exponentially fast to $u$ with rate~$\gamma_\calP$.
Recall the earlier observation that the support of $u^{(k)}$ is at most $k$ $\eps$-layers bigger than the support of the local function $f$, i.e., $\supp(u^{(k)})\subseteq B^\infty_{k\eps }(\supp f)$, where $B^\infty_{r}$ denotes the ball of radius $r$ in the sup norm.
This then shows that the Green's function associated with the Schr\"odinger operator must have an exponential decay as summarized in the following corollary.
\begin{corollary}[Exponential decay of the Green's function]
Consider Assumption~\ref{ass_epsBeta} and let $f\in L^2(D)$ have local support.
Then the solution $u\in \V$ of the variational problem $\calA u = F$ with $F:=(f,\,\cdot\,)$ decays exponentially fast in the sense of
\[
\Vvert u\Vvert_{D\setminus B^\infty_{k\eps}(\supp f)}
\lesssim \gamma_\calP^k\, \Vvert u\Vvert.
\]
Here, $0<\gamma_\calP<1$ denotes the constant from Theorem~\ref{thm_tildeP_gamma}.
\end{corollary}
\begin{example}
We again consider the extreme cases and analyze the number of necessary steps to achieve $\gamma_\calP^k \le \tol$.
In the worst case~$L\approx \eps^{-1}$, i.e., $K_1\approx 2^{d+1}\eps^{-2}$, we need $k = \mathcal{O}(\log(1/\tol)\, \eps^{-2})$ steps.
Thus, the solution~$u$ may not be localized.
On the other hand, the periodic setting with $L=1$ yields $K_1 = 2^{d+2}$ and thus, $k = \mathcal{O}(\log(1/\tol))$. This means that the number of needed steps to reach the error tolerance is independent of $\eps$.
In the case of interest with $L\approx\log(1/\eps)$ we have~$k = \mathcal{O}(\log(1/\tol)\log(1/\eps)^p)$ for a certain polynomial degree $p\le 4d+2$.
This means that the number of steps only depends logarithmically on $\eps$ and that $u$ is of local nature.
\end{example}
The obtained decay result is in agreement with the well-known exponential decay of the Green's function associated with $\calH$ for positive $V$ on a sufficiently large sub-domain. In the case of constant potentials $V\equiv \beta$, this is for instance shown in \cite[Lem.~3.2]{Glo11}.
We shall revisit the discussion of this paragraph later in Section~\ref{sect_localization_oneStep} as part of the localization proof for the eigenfunctions of $\calH$.
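For illustration, the following minimal sketch implements the iteration~\eqref{sec3-preliminiary-iterations} for a one-dimensional finite-difference surrogate of $\calH$ with a random checkerboard potential. The discretization, the seed, the damping parameter, and the iteration count are ad-hoc choices made only for this illustration.
\begin{verbatim}
import numpy as np

# Sketch: preconditioned Richardson iteration for a 1D surrogate of
# H = -d^2/dx^2 + V with a random eps-checkerboard potential.
eps = 2.0**-5
m = 8                                   # fine grid points per eps-cell
n_cells = int(1 / eps)
N = n_cells * m                         # periodic fine grid of width h
h = 1.0 / N

rng = np.random.default_rng(1)
beta_cells = rng.random(n_cells) < 0.5
V = np.where(np.repeat(beta_cells, m), 8.0 / eps**2, 1.0)

A = np.zeros((N, N))                    # A ~ -Laplacian + V (periodic FD)
for i in range(N):
    A[i, i] = 2.0 / h**2 + V[i]
    A[i, (i - 1) % N] -= 1.0 / h**2
    A[i, (i + 1) % N] -= 1.0 / h**2

# overlapping patches D_z of width 2*eps around each eps-node z
patches = [np.arange(z * m - m, z * m + m) % N for z in range(n_cells)]

def apply_Ptilde(r):
    """Additive Schwarz: sum of local solves with zero patch boundary values."""
    u = np.zeros(N)
    for idx in patches:
        u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return u

f = np.zeros(N); f[N // 2] = 1.0 / h    # local source (discrete delta)
u_exact = np.linalg.solve(A, f)

u = np.zeros(N)
theta = 0.5                             # ~ 1/(K_2 + K_1^{-1}), K_2 = 2 for d = 1
for k in range(1, 201):
    u += theta * apply_Ptilde(f - A @ u)
    if k % 50 == 0:                     # error decays geometrically; the support
        err = np.linalg.norm(u - u_exact) / np.linalg.norm(u_exact)
        print(k, err)                   # of u grows by one eps-layer per step
\end{verbatim}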
\begin{remark}
\label{rem_smallGamma}
For later arguments, it is important to achieve an error reduction factor (per step) below some prescribed value~$\gap<1$.
This is easily achieved by considering multiple steps of the iteration~\eqref{sec3-preliminiary-iterations} with preconditioner $\vartheta\calP$.
More precisely, we introduce the operator~$\calPP\colon \V\to\V$, which includes~$k$ steps with $k$ large enough such that
\[
\Vvert \id - \calPP \Vvert
\le \gamma:= \gamma^k_\calP
\le \frac{1-\gap}{2} < 1.
\]
This then implies $\gamma+\gap \le (1+\gap)/2 < 1$.
It goes without saying that this enlarges the support of the solution, i.e., the operator~$\calPP$ spreads information over $k$ $\eps$-layers.
The order of $k$ can be estimated by
\[
k \approx \frac{\log(1-\gap) - \log 2}{\log \gamma_\calP}.
\]
In the discussed case $L\approx\log(1/\eps)$, where $\gamma_\calP$ depends polylogarithmically on~$\eps$, we have $k = \calO(\log(1/(1-\gap))\, \plog(1/\eps))$.
\end{remark}
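The following one-line sketch evaluates this formula; the values of $\gamma_\calP$ and $\gap$ are sample values, not quantities computed from a potential.
\begin{verbatim}
import math

# Sketch: number k of elementary preconditioner steps such that
# gamma_P**k <= (1 - gap)/2, following the formula in the remark above.
def steps_needed(gamma_P, gap):
    return math.ceil((math.log(1.0 - gap) - math.log(2.0)) / math.log(gamma_P))

print(steps_needed(gamma_P=0.9, gap=0.5))   # 14 for these sample values
\end{verbatim}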
\section{Quantitative Localization of Eigenfunctions}\label{sect_localization}
In the following we want to transfer the localization arguments from Section \ref{sect_decay_Greens_func} to the spectrum of $\calH$.
Assuming a sufficiently large spectral gap between the smallest and the ($K\hspace{-0.5ex}+\hspace{-0.5ex}1$)-st eigenvalue for some moderate $K$, we show that the ground state is indeed quasi-local. The assumption will later turn out to be valid for potentials with a high level of disorder. Nevertheless, the convergence results of this section are independent of the actual value of $K$ in the sense that the ground state is always in the span of $K$ exponentially localized functions. This shows localization of the eigenfunction itself if $K$ is sufficiently small compared to $\eps^{-d}$.
\subsection{Inverse power iteration}\label{sect_localization_power}
As mentioned in Section~\ref{sec:evp:model}, the Schr\"odinger eigenvalue problem \eqref{eq:EVP} can be written as an operator equation in the dual space of $\V$, namely
\[
\calA u = E\, \calI u.
\]
Recall that $\calA\colon\V\to\V^*$ is the operator corresponding to $a(\cdot\,,\cdot)$ whereas $\calI\colon\V\to\V^*$ denotes the extension of the inner product in $L^2(D)$. We consider the inverse power method for PDE eigenvalue problems and illustrate the method first for the case of a spectral gap after the first eigenvalue $E^1$. Due to the ellipticity of $\calA$, we may assume that there exists a Hilbert basis of $\V$ composed of the eigenfunctions $u_1, u_2, \dots$ normalized in the $L^2(D)$-norm. Further, we assume that a given starting function $v^{(0)}\in \V$ satisfies $(u_1, v^{(0)})\neq 0$, i.e., we may express $v^{(0)}$ in the form $v^{(0)} = \sum_{i=1}^\infty \alpha_i u_i$ with $\alpha_1\neq 0$.
The inverse power method, known from numerical linear algebra \cite[Ch.~10.3]{AllK08}, also converges in the Hilbert space setting, cf.~\cite{EriSL95, AltF18ppt}. The iteration, including a normalization by $E^1$, has the form
\begin{align}
\label{eqn_powIteration}
v^{(k)}
= {E^1} \calA^{-1} \calI v^{(k-1)}
=: \calB v^{(k-1)}
= \calB^k v^{(0)}
\end{align}
with $\calB := E^1\, \calA^{-1} \calI\colon \V\to\V$.
With $\gap:= E^1/E^2 < 1$ the iteration leads to
\[
v^{(k)}
= \alpha_1 u_1 + \sum_{i=2}^\infty \alpha_i\, \Big( \frac{E^1}{E^i}\Big)^k u_i.
\]
Note that~$(u_1, v^{(k)}) = \alpha_1$ remains unchanged due to the scaling factor $E^1$.
Measuring the distance of $v^{(k)}$ to the eigenspace of $u_1$ in the energy norm by
\[
\err^{(k)}
:= \min_{c\in \R} \Vvert v^{(k)} - c\,u_1 \Vvert
= \Vvert v^{(k)} - \alpha_1 u_1 \Vvert
= \Big( \sum_{i=2}^\infty |\alpha_i|^2 \Big( \frac{E^1}{E^i}\Big)^{2k} E^i \Big)^{1/2} ,
\]
we obtain due to the orthogonality of the eigenfunctions,
\[
\err^{(k)}
= \Vvert v^{(k)} - \alpha_1 u_1 \Vvert
\le \gap^k \Vvert v^{(0)} - \alpha_1 u_1\Vvert
= \gap^k \err^{(0)}.
\]
This shows the exponential convergence of the inverse power iteration with a rate depending on the gap between the first two eigenvalues.
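The following minimal sketch reproduces this error decay in the eigenbasis; the surrogate spectrum and the seed are ad-hoc choices.
\begin{verbatim}
import numpy as np

# Sketch: inverse power iteration in the eigenbasis; the error in the
# energy norm decays with the rate gap = E1/E2, as derived above.
E = np.array([1.0, 4.0, 9.0, 16.0, 25.0])  # surrogate spectrum, gap = 1/4
rng = np.random.default_rng(2)
alpha = rng.standard_normal(E.size)        # v^(0) = sum_i alpha_i u_i

for k in range(6):
    coeff = alpha * (E[0] / E) ** k        # coefficients of v^(k) = B^k v^(0)
    err = np.sqrt(np.sum(coeff[1:] ** 2 * E[1:]))  # distance to span(u_1)
    print(k, err)                          # successive ratios approach 1/4
\end{verbatim}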
\subsection{Preconditioned iteration step}\label{sect_localization_oneStep}
For the proof of localization of the Schr\"odinger states we need to replace the inverse iteration \eqref{eqn_powIteration} by an inexact iteration including the preconditioner from Section~\ref{sec:precond}.
More precisely, we would like to replace one inverse power step by a fixed number of preconditioned steps, which we indicate by the operator~$\calPP$, cf.~Remark~\ref{rem_smallGamma}.
Recall that this includes an error reduction by a factor~$\gamma$ with $\gamma+\gap < 1$ and that $\calPP$ enlarges the support by only a few~$\eps$-layers.
One step of the inverse power iteration $v^{(k)} = \calB v^{(k-1)}$ is replaced by
\begin{align}
\label{ideal_PINVIT}
\tilde v^{(k)}
:= \tilde v^{(k-1)} + \calPP \big( E^1 \calA^{-1}\calI \tilde v^{(k-1)} - \tilde v^{(k-1)} \big)
= \tilde v^{(k-1)} + \calPP \big( \calB \tilde v^{(k-1)} - \tilde v^{(k-1)} \big).
\end{align}
In the numerical linear algebra community, this iteration is called {\em preconditioned inverse iteration} (PINVIT) \cite{DyaO80,BraKP95,Kny98} if the (unknown) factor $E^1$ is replaced by an approximation of the energy, e.g., by the Rayleigh quotient.
Here, however, we consider again the exact scaling by the first eigenvalue.
\begin{remark}
For practical computations it is also of interest that $\tilde v^{(k)}$ is cheap to compute. This is indeed the case, since its computation only involves the solution of local problems.
\end{remark}
The locality of the iterates was already discussed in Section~\ref{sect_decay_Greens_func}, i.e., up to logarithmic terms in $1/(1-\gap)$ and $1/\eps$ the support of $\tilde v^{(k)}$ is at most $k$ $\eps$-layers larger than the support of the initial function $v^{(0)}$.
It remains to show that the exponential convergence of the iteration scheme is maintained despite the inclusion of the preconditioner. For this, we show that the error reduces by a fixed factor in every iteration step.
As before, we assume $v^{(0)} = \sum_{i=1}^\infty \alpha_i u_i$ with $\alpha_1\neq 0$.
For the (exact) inverse iteration we have seen that $(u_1, v^{(0)})$ remains unchanged during the iteration process.
This changes once the preconditioner is included.
However, assuming $\tilde v^{(k-1)} = \sum_{i=1}^\infty \hat \alpha_i u_i$ we can estimate
\begin{align*}
\err^{(k)}
= \min_{c\in \R} \Vvert \tilde v^{(k)} - c\,u_1 \Vvert
&\le \Vvert \tilde v^{(k)} - \hat\alpha_1 u_1\Vvert \\
&\le \Vvert \calB \tilde v^{(k-1)} - \hat\alpha_1 u_1 \Vvert
+ \Vvert (\id- \calPP) ( \calB \tilde v^{(k-1)} - \tilde v^{(k-1)} )\Vvert \\
&\le \Vvert \sum_{i=2}^\infty \tfrac{E^1}{E^i} \hat \alpha_i u_i \Vvert
+ \gamma\, \Vvert \sum_{i=2}^\infty (\tfrac{E^1}{E^i}-1) \hat\alpha_i u_i \Vvert \\
&\le \gap\, \Vvert \tilde v^{(k-1)} - \hat\alpha_1 u_1\Vvert
+ \gamma\, \Vvert \tilde v^{(k-1)} - \hat\alpha_1 u_1\Vvert \\
&= (\gap + \gamma)\, \err^{(k-1)}.
\end{align*}
Thus, we have an error reduction by a factor $(\gap + \gamma)$ in each step.
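The following minimal sketch reproduces this error reduction in the eigenbasis. The preconditioner $\calPP$ is modeled by the toy choice $(1-\gamma)\hspace{1pt}\id$, which satisfies $\Vvert \id - \calPP\Vvert = \gamma$ exactly; the surrogate spectrum and all sizes are ad-hoc choices.
\begin{verbatim}
import numpy as np

# Sketch: inexact (preconditioned) inverse iteration in the eigenbasis
# with the toy preconditioner PP = (1 - gamma) * Id.
n = 50
E = np.linspace(1.0, 50.0, n)        # surrogate spectrum: E1 = 1, E2 = 2
A = np.diag(E)                       # operator in the eigenbasis (mass = Id)
E1, gap, gamma = E[0], E[0] / E[1], 0.1
PP = (1.0 - gamma) * np.eye(n)

v = np.ones(n)                       # alpha_1 = 1 != 0
for k in range(1, 9):
    Bv = E1 * np.linalg.solve(A, v)  # B = E1 * A^{-1}
    v = v + PP @ (Bv - v)
    err = np.sqrt(np.sum(E[1:] * v[1:] ** 2))  # energy distance to span(u_1)
    print(k, err)                    # shrinks at least by gap + gamma = 0.6
\end{verbatim}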
\subsection{Block iteration}\label{sect_localization_block}
Since we cannot guarantee a spectral gap after the first eigenvalue, we need a block iteration.
For the general case we assume that there is a spectral gap within the first $K+1$ eigenvalues, which leads to the definition $\gap:= E^1/E^{K+1} < 1$. We aim to perform a block version of the inverse power iteration \eqref{eqn_powIteration}. For this we need a starting subspace to initiate the power iteration.
Let $V^{(0)}$ denote a basis of such a $K$-dimensional subspace,
\[
V^{(0)}
= \begin{bmatrix}
v^{(0)}_1, & v^{(0)}_2, & \dots, & v^{(0)}_K
\end{bmatrix}
\in \V^K.
\]
As before, we can express these functions in terms of the eigenfunctions of the Schr\"odinger operator, i.e., for $j=1, \dots, K$,
\begin{align*}
v_j^{(0)} = \sum_{i=1}^\infty \alpha_{i,j}\, u_i
\qquad\text{and}\qquad
\calC:= [\alpha_{i,j}]_{i,j= 1, \dots, K} \in \R^{K,K}.
\end{align*}
The matrix $\calC$ contains the coefficients of the initial functions $v_j^{(0)}$ in terms of $u_1, \dots, u_{K}$. As a generalization of the condition $\alpha_1\neq 0$ in Section~\ref{sect_localization_power}, we assume here that~$\calC$ is invertible.
The block inverse iteration (or simultaneous inverse iteration) including normalization then reads
\begin{align}
\label{eqn_blockIteration}
V^{(k)}
= {E^1} \calA^{-1} \calI V^{(k-1)}
= \calB^k V^{(0)}.
\end{align}
For a single function $v_j^{(0)}$ this means
\[
v_j^{(k)}
= \calB^k v_j^{(0)}
= {(E^1)^k} \sum_{i=1}^\infty \alpha_{i,j} (\calA^{-1}\calI)^k u_i
= \sum_{i=1}^\infty \alpha_{i,j} \Big( \frac{E^1}{E^i}\Big)^k u_i.
\]
With $x := \calC^{-1} e_1 \in \R^{K}$ the linear combination $V^{(k)}x \in \V$ satisfies
\[
V^{(k)}x
= \sum_{i=1}^\infty\, [\alpha_{i,1}, \dots, \alpha_{i,K}]\, x\, \Big( \frac{E^1}{E^i}\Big)^k u_i
= u_1 + \sum_{i=K+1}^\infty \overline\alpha_{i} \Big( \frac{E^1}{E^i}\Big)^k u_i,
\]
where $\overline\alpha_i := [\alpha_{i,1}, \dots, \alpha_{i,K}]\, x$.
Similarly as in Section~\ref{sect_localization_power} we show that~$V^{(k)}x$ converges exponentially to the span of~$u_1$ with rate~$\gap$,
\[
\err^{(k)}
= \min_{c\in\R} \Vvert V^{(k)}x - c\, u_1 \Vvert
= \Vvert V^{(k)}x - u_1 \Vvert
\le \gap^k \Vvert V^{(0)}x - u_1 \Vvert
= \gap^k \err^{(0)}.
\]
Note that the initial error $\err^{(0)}$ is bounded in terms of $\calC^{-1}$ and the energy of the starting functions $v^{(0)}_1, v^{(0)}_2, \dots, v^{(0)}_K$.
Thus, for the convergence of the block iteration it remains to find a suitable starting block $V^{(0)}$. Its precise construction depends on the considered potential $V$, see e.g.~Section~\ref{sect:gaps:random:starting} for a potential with a tensor product structure or Section~\ref{sect:gaps:domino:starting} for a potential consisting of domino blocks of different size.
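The following minimal sketch illustrates, in the eigenbasis, why the invertibility of $\calC$ matters: the combination $V^{(k)}x$ with $x=\calC^{-1}e_1$ removes the components of $u_2,\dots,u_K$ exactly, and the remaining tail shrinks like $\gap^k$. The surrogate spectrum (with a cluster of $K=3$ low eigenvalues) and the random starting block are ad-hoc choices.
\begin{verbatim}
import numpy as np

# Sketch: block inverse iteration in the eigenbasis; w = V^(k) x with
# x = C^{-1} e_1 equals u_1 plus a tail that shrinks like gap^k.
E = np.array([1.0, 1.1, 1.3, 5.0, 7.0, 9.0])  # gap = E1/E4 = 1/5
K = 3
rng = np.random.default_rng(4)
V0 = rng.standard_normal((E.size, K))         # columns: starting functions
C = V0[:K, :]                                 # coefficients w.r.t. u_1..u_K
x = np.linalg.solve(C, np.eye(K)[:, 0])       # x = C^{-1} e_1

for k in range(1, 6):
    Vk = (E[0] / E)[:, None] ** k * V0        # columns of B^k V^(0)
    w = Vk @ x                                # components 1..K equal e_1
    print(k, np.linalg.norm(w[K:]))           # tail ~ gap^k = 0.2^k
\end{verbatim}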
To prove the quasi-locality of the Schr\"odinger ground state, we need to include the preconditioner from Section~\ref{sec:precond}.
\subsection{Inexact block iteration}\label{sect_localization_inexactIter}
We install the preconditioner $\calPP$ into the block iteration, which yields a block version of the preconditioned inverse iteration introduced in Section~\ref{sect_localization_oneStep}.
Given $V^{(0)}$ we locally compute a sequence $\tilde V^{(k)}$ by a simultaneous application of the preconditioned iteration.
In the main result of this paper we show that the first eigenfunction $u_1$ is essentially an element of $\sspan \tilde V^{(k)}$, i.e., a $K$-dimensional function space that is spanned by basis functions that are exponentially decaying in distances of $\eps$.
If $V^{(0)}$ only contains local functions and $K$ is of moderate size, i.e., if there is a significant spectral gap after the first few eigenvalues, then $u_1$ itself is exponentially localized as the linear combination of $K$ exponentially localized functions.
\begin{theorem}[Convergence of inexact block iteration]
\label{theorem-convergence-inexact-block}
Given Assumption~\ref{ass_epsBeta}, $\gap = E^1/E^{K+1}$, and a prescribed tolerance $\tol$, we consider a starting subspace~$V^{(0)}$ with invertible coefficient matrix~$\calC$.
Assume that the preconditioner~$\calPP$ from Remark~\ref{rem_smallGamma} satisfies $\gamma \lesssim \gap^k$ with $k \approx \log(1/\tol) / \log(1/\gap)$.
Then, $k$ steps of the preconditioned block iteration yield an approximation $\tilde v \in \sspan \tilde V^{(k)}$ with
\[
\Vvert \tilde v - u_1 \Vvert
\lesssim \tol\, \err^{(0)}
= \tol\, \Vvert V^{(0)} \calC^{-1} e_1 - u_1\Vvert.
\]
Moreover, the support of $\tilde v$ is only $k^2$ $\eps$-layers larger than the union of the supports of the starting functions in $V^{(0)}$.
\end{theorem}
\begin{proof}
First note that, due to the scaling by $E^1$, we have $\Vvert \calB\Vvert \le 1$.
We prove that the inexact iteration yields a good approximation of the inverse power method.
For this, we compare the space obtained by the inexact block iteration $\tilde V^{(k)}$ with $V^{(k)}$.
Since we consider a simultaneous iteration, it is sufficient to consider a single vector of~$V^{(0)}$, which we denote by~$v^{(0)} \in \V$.
For the error $e^{(k)} := v^{(k)} - \tilde v^{(k)}$ we have
\begin{align*}
e^{(k)}
= v^{(k)} - \tilde v^{(k)}
&= \calB v^{(k-1)} - \tilde v^{(k-1)} - \calPP \big(\calB \tilde v^{(k-1)} - \tilde v^{(k-1)} \big) \\
&= (\id - \calPP) \big( \calB v^{(k-1)} - \tilde v^{(k-1)} \big) + \calPP \calB e^{(k-1)} \\
&= (\id - \calPP) (\calB - \id) v^{(k-1)} + (\id - \calPP) e^{(k-1)} + \calPP \calB e^{(k-1)}.
\end{align*}
Thus, with $v^{(k-1)} = \calB^{k-1} v^{(0)}$, $\Vvert\calB\Vvert\le 1$, and $\Vvert\calB-\id\Vvert\le 2$, we obtain
\begin{align*}
\Vvert e^{(k)}\Vvert
\le 2 \gamma\, \Vvert v^{(k-1)} \Vvert + (1+2\gamma)\, \Vvert e^{(k-1)} \Vvert
\le 2 \gamma\, \Vvert v^{(0)} \Vvert + (1+2\gamma)\, \Vvert e^{(k-1)} \Vvert.
\end{align*}
Since $e^{(0)} = v^{(0)} - v^{(0)} = 0$, we conclude
\[
\Vvert e^{(k)} \Vvert
\le 2\gamma\, \Vvert v^{(0)} \Vvert\, \sum_{\nu=0}^{k-1} (1+2\gamma)^\nu
= \big( (1+2\gamma)^{k} - 1 \big)\, \Vvert v^{(0)} \Vvert
\lesssim \gamma k\, (1+2\gamma)^{k}\, \Vvert v^{(0)} \Vvert.
\]
From Section~\ref{sect_localization_block} we know that for~$k \approx \log(1/\tol) / \log(1/\gap)$ steps of the inverse power method we have~$\Vvert V^{(k)}x - u_1 \Vvert \lesssim \gap^k \err^{(0)}$.
Thus, with~$\gamma \lesssim \gap^k \approx \tol$ we conclude by the triangle inequality
\[
\Vvert \tilde V^{(k)} x - u_1 \Vvert
\lesssim \Vvert e^{(k)} \Vvert + \Vvert V^{(k)}x - u_1 \Vvert
\lesssim \tol \err^{(0)}.
\qedhere
\]
\end{proof}
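The update rule used in this proof can again be summarized in a short, purely illustrative Python sketch; \texttt{apply\_B} and \texttt{apply\_P} are placeholders for the scaled solution operator $\calB$ and the local preconditioner $\calPP$, respectively.
\begin{verbatim}
def inexact_block_iteration(apply_B, apply_P, V0, k):
    # preconditioned block iteration:
    #   V~^(j) = V~^(j-1) + P( B V~^(j-1) - V~^(j-1) ),
    # so that the error recursion of the proof holds with e^(0) = 0
    V = V0.copy()
    for _ in range(k):
        V = V + apply_P(apply_B(V) - V)
    return V
\end{verbatim}
Since each application of \texttt{apply\_P} enlarges the support by a number of $\eps$-layers proportional to $k$ (cf.~Remark~\ref{rem_smallGamma}), the $k$ steps explain the $k^2$ growth of the support stated in the theorem.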
Theorem~\ref{theorem-convergence-inexact-block} shows that the choice of $V^{(0)}$ is crucial for the localization result.
First, its locality bounds the support of the constructed approximation of $u_1$.
Second, the quality of the starting subspace enters the estimate through~$\err^{(0)}$, which can be bounded in terms of the matrix $\calC^{-1}$ and the energies of $V^{(0)}$, namely by
\[
\err^{(0)}
= \Vvert V^{(0)} x - u_1 \Vvert
= \Vvert V^{(0)} \calC^{-1} e_1 - u_1 \Vvert
\lesssim \|\calC^{-1}\|_1 \max_{j=1,\dots,K} \Vvert v_j^{(0)} \Vvert.
\]
The verification of the locality of $V^{(0)}$ and the boundedness of $\|\calC^{-1}\|_1$ depends on the considered potential and will be the focus of Section~\ref{sect:gaps}.
\begin{remark}
Note that the localization result of Theorem~\ref{theorem-convergence-inexact-block} is not optimal in the sense that the support of $\tilde V^{(k)}$ grows quadratically in $k$.
To improve this, we need to show that a fixed number of preconditioner steps is sufficient as outlined in Remark~\ref{rem_smallGamma}.
In a similar fashion to Section~\ref{sect_localization_oneStep} we can show that for $\tilde V^{(k-1)}$ with corresponding coefficient matrix $\hat \calC\in \R^{K,K}$, $\hat x:= \hat \calC^{-1}e_1$, we have
%
\begin{align*}
\min_{y\in\R^K} \Vvert \tilde V^{(k)}y - u_1 \Vvert
\le \Vvert \tilde V^{(k)}\hat x - u_1 \Vvert
&\le (\gap + \gamma)\, \Vvert \tilde V^{(k-1)} \hat x - u_1 \Vvert.
\end{align*}
Thus, we have a similar error reduction as in the non-block case if we can prove that~$\tilde V^{(k-1)} \hat x$ is at least a quasi-optimal approximation of $u_1$ in the span of $\tilde V^{(k-1)}$.
This, in turn, would imply that the ground state $u_1\in \calV$ decays exponentially fast in the sense of
\[
\Vvert u_1\Vvert_{D\setminus B^\infty_{k\eps}(\operatorname{supp} V^{(0)})}
\lesssim (\gap + \gamma)^{k}\, \err^{(0)}.
\]
\end{remark}
\begin{remark}
\label{remark-on-theorem-convergence-inexact-block}
The result of Theorem~\ref{theorem-convergence-inexact-block} generalizes to the first $r$ eigenfunctions provided that the gap condition $E^r/E^{K+1} + \gamma<1$ holds true. For this, one needs to consider $\calB := E^r \calA^{-1} \calI$ and $x := \calC^{-1} e_r$.
\end{remark}
\section{Application to Prototypical Potentials}\label{sect:gaps}
This section aims to validate the assumption on the spectral gap in Theorem~\ref{theorem-convergence-inexact-block} in three model scenarios.
This will turn out to fail in the periodic case: the lower part of the spectrum indeed decomposes into well-separated eigenvalue clusters, but the clusters are too large. The introduction of disorder changes the picture. We first state two general results, which are needed in this context.
\begin{lemma}
\label{lem_RayleighBound}
Recall the definition of the energy $E(v)$ from \eqref{def-Ev} and let $u_1, \dots, u_N \in \V$ be orthogonal w.r.t.~$(\cdot,\cdot)$ as well as $a(\cdot,\cdot)$. Furthermore, assume the uniform bounds
\[
c_1 \le \E(u_i) \le c_2
\]
for all $i=1,\dots, N$. Then, $c_1 \le \E(u) \le c_2$ for all $u\in\sspan \{u_1, \dots, u_N \}$.
\end{lemma}
\begin{proof}
The assumption on the Rayleigh quotient of $u_i$ can be translated into
\[
c_1 \Vert u_i \Vert^2
\le a(u_i, u_i)
\le c_2 \Vert u_i \Vert^2
\]
for all $i=1,\dots, N$. For a given linear combination $u:= \sum_{i=1}^N \alpha_iu_i \in \sspan \{u_1, \dots, u_N \}$ we get due to the assumed orthogonality
\[
\Vert u \Vert^2
= \big\Vert \sum\nolimits_{i=1}^N \alpha_i u_i \big\Vert^2
= \sum\nolimits_{i=1}^N \alpha_i^2\, \Vert u_i\Vert^2
\le \frac{1}{c_1} \sum\nolimits_{i=1}^N \alpha_i^2\, a(u_i, u_i)
= \frac{1}{c_1}\, a(u,u)
\]
and thus, $c_1\le\E(u)$. Analogously, one shows that $\E(u)\le c_2$.
\end{proof}
We emphasize that orthogonality with respect to both inner products holds in particular for functions with disjoint support.
\begin{lemma}
\label{lem_evBound}
Assume that $u_1, \dots, u_N\in \V$ are pairwise orthogonal w.r.t.~$(\cdot,\cdot)$ and $a(\cdot,\cdot)$. If we have the property $\E(u_i)\le c$ for all $i=1,\dots, N$ then the eigenvalue problem
\[
a(u,v) = E\, (u,v)
\]
for test functions $v\in\V$ has at least $N$ eigenvalues with $E \le c$.
\end{lemma}
\begin{proof}
The Courant min--max principle in Hilbert spaces states that the $\ell$-th eigenvalue satisfies
\[
E^\ell
= \min_{\dim \V^{(\ell)} = \ell}
\hspace{4pt}
\max_{\ v\in \V^{(\ell)}}
\hspace{3pt} \E(v).
\]
Thus, for $\ell \le N$, the choice $\V^{(\ell)} := \sspan \{u_1, \dots, u_{\ell} \}$ yields together with the previous lemma
\[
E^\ell
\le \max\nolimits_{v\in \sspan \{u_1, \dots, u_{\ell} \}} \E(v)
\le \max\nolimits_{v\in \sspan \{u_1, \dots, u_N \}} \E(v)
\le c. \qedhere
\]
\end{proof}
In the following we derive bounds for the spectral gaps of $\calH$, where we first investigate the case of periodic potentials. As we will see, in sufficiently disordered media, significant spectral gaps are expected to appear much earlier than in the periodic case, cf.~Figure~\ref{fig:spectra}.
Note that, up to now, we have only assumed $\beta$ to be large (cf.~Assumption~\ref{ass_epsBeta}). For the proof of spectral gaps we also need $\alpha$ to be small.
\begin{assumption}
\label{ass_epsAlpha}
The coefficient $\alpha$ is of moderate size in the sense that $\alpha \lesssim (\eps L)^{-2}$, i.e., the contrast satisfies $\beta/\alpha\gtrsim L^2$.
\end{assumption}
\subsection{Periodic potential}\label{sect:gaps:periodic}
We aim to derive lower and upper eigenvalue bounds for the periodic case shown in Figure~\ref{fig_potentials} (upper left). This includes $N :=(2\eps)^{-d}$ cubes of side length~$\eps$, on which $V$ takes the value $\alpha$. Recall that we have $L=1$ in this case.
\subsubsection{Upper eigenvalue bounds}
We provide an upper bound on the first $\ell N$ eigenvalues by the construction of particular functions and Lemma~\ref{lem_evBound}. Restricted to a single element $q\in\calT$, on which the potential $V$ equals $\alpha$, we consider the standard Laplace eigenvalue problem with homogeneous Dirichlet boundary conditions and shift $\alpha$. For this problem, eigenfunctions and eigenvalues are well known \cite[Ch.~10.4]{Str08}. On $q$ the first $\ell$ eigenfunctions $\hat u_1, \dots, \hat u_\ell$ (extended by zero on $D$) satisfy the bound
\[
E(\hat u_j)
\le \alpha + \frac{\pi^2}{\eps^2} \big( \ell^2 + d-1 \big)
\lesssim \frac{\ell^2}{\eps^2},
\]
since $\alpha \lesssim (\eps L)^{-2}$ by Assumption~\ref{ass_epsAlpha}. As this holds true for each of the $N$ cubes in $\calQa$, Lemma~\ref{lem_evBound} yields
\begin{align}
\label{eqn_periodic_upperBound}
E^{N\ell} \lesssim \frac{\ell^2}{\eps^2}.
\end{align}
Note that we exploit here the orthogonality of the eigenfunctions on a single element $q$ of~$\calT$ and the orthogonality due to the disjoint support of the cubes.
\subsubsection{Lower eigenvalue bounds}
In order to prove gaps in the spectrum, we also need lower bounds on the eigenvalues. For this, we use the reformulation of the min-max principle, namely the max-min principle. This is then combined with quasi-interpolation results from the theory of finite elements.
\begin{lemma}
\label{lem_periodic_lowBounds}
Assume the periodic setting with $L=1$ and $N=(2\eps)^{-d}$ cubes on which the potential $V$ equals $\alpha$, as before. If $\beta \gtrsim (\ell-1)^2 \eps^{-2}$, then the following estimate holds:
\begin{align}
\label{eqn_periodic_lowerBound}
E^{N \ell^d+1}
\ \gtrsim\ \frac{(\ell-1)^{2}}{\eps^{2}}.
\end{align}
\end{lemma}
\begin{proof}
We apply the max-min eigenvalue characterization of the form
\begin{align}
\label{eqn_lambda_Nell}
E^k
\ =\ \adjustlimits \max_{\dim\V^{(k-1)}=k-1\ \ } \min_{v \in [\V^{(k-1)}]^\text{c}}\ E(v),
\end{align}
where $[\V^{(k-1)}]^\text{c} \subset \V$ is any complementary space to the $(k-1)$-dimensional space $\V^{(k-1)}$.
The strategy is to construct a subspace $\W \subseteq \V$ of dimension $N \ell^d$. For this, we consider a uniform triangulation $\calT_h$ of $D$ into cubes with local mesh size $h:=\eps/(\ell-1)$. The corresponding set of nodes is denoted by $\mathcal{N}_h$. The finite element space $\W$ is then defined by the span of all $Q_1$-basis functions corresponding to the nodes in $\mathcal{N}_h \cap \overline{\Oa}$. Note that the dimension of $\W$ equals $N \ell^d$ and that functions in $\W$ may extend slightly into $\Ob$, namely into one layer of width $h$ surrounding the $\alpha$-valleys.
To characterize a complementary space we construct a local projection operator
\[
\Pi\colon \V = \Hper \to \W
\]
and define $\W^\text{c} := \ker \Pi$. The operator is based on the Scott-Zhang quasi-interpolation operator $I^\text{sz}\colon H^1(\Oa)\rightarrow \W\vert_{\Oa}$, cf.~\cite{ScoZ90, HeuS07}. The operator $\Pi$ is then defined by the property
$$(\Pi u)\vert_{\Oa} = I^\text{sz} (u\vert_{\Oa})$$
in a unique way. Since $I^\text{sz}$ does not depend on the behavior of functions in $\Ob$, $\Pi$ has the important property that information is not spread from $\Ob$ to $\Oa$. By the properties of the Scott-Zhang interpolation, this implies the $a$-stability of $\Pi$ with a constant depending only on $\beta h^2$. Due to $\V= \ker \Pi \oplus \range \Pi$, $\W^\text{c}$ is indeed a closed complementary space. Moreover, by construction we have the error estimate
\[
\Vert u - \Pi u\Vert_{\Oa}=\Vert u - I^\text{sz} u\Vert_{\Oa}
\lesssim h\, \Vert \nabla u \Vert_{\Oa}.
\]
For $u \in\W^\text{c}$ we thus have by the assumption $\beta \gtrsim (\ell-1)^2 \eps^{-2}=h^{-2}$ that
\[
\Vert u\Vert^2
= \Vert u - \Pi u\Vert^2_{\Oa} + \Vert u\Vert^2_\Ob
\lesssim h^2 \Vert \nabla u \Vert^2_{\Oa} + \frac 1\beta \Vert u\Vert^2_{V, \Ob}
\lesssim \frac{\eps^2}{(\ell-1)^2}\, \Vvert u \Vvert^2.
\]
This directly results in the estimate
\[
E^{N\ell^d+1}
= \adjustlimits \max_{\dim\V^{(N\ell^d)}=N\ell^d\ } \min_{v \in [\V^{(N\ell^d)}]^\text{c}}\, E(v)
\ \ge\ \min_{v \in \W^\text{c}}\, E(v)
= \min_{v \in \W^\text{c}} \frac{\Vvert v \Vvert^2}{\Vert v \Vert^2}
\ \gtrsim\ \frac{(\ell-1)^{2}}{\eps^{2}}.
\qedhere
\]
\end{proof}
\subsubsection{Spectral gaps}
The combination of the above estimates shows that there will be a spectral gap of order $\calO(1)$ after the first $\calO(N)$ eigenvalues with $N = (2\eps)^{-d}$. We formulate this result in the form of a corollary.
\begin{corollary}
Assume $\beta \gtrsim (\ell-1)^2 \eps^{-2}$ for a natural number $\ell>1$ and Assumption~\ref{ass_epsAlpha}. Then, the periodic setting leads to a spectral gap of the form
\[
\frac{E^{N k}}{E^{N \ell^d+1}}
\lesssim \frac{k^2}{(\ell-1)^2}
\]
for natural numbers $\ell>k\ge 1$. In particular, we have $E^1 / E^{N \ell^d+1} \lesssim (\ell-1)^{-2}$.
\end{corollary}
This result shows that the absence of disorder leads to large eigenvalue blocks and thus, no locality of eigenfunctions can be shown.
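To illustrate the corollary with a purely hypothetical choice of parameters: for $d=2$, $k=1$, and $\ell = 4$ we obtain
\[
\frac{E^{N}}{E^{16N+1}}
\lesssim \frac{1}{9},
\]
i.e., a gap of order one opens only after the first $16N = 16\,(2\eps)^{-2}$ eigenvalues. A block iteration would thus have to track $\calO(\eps^{-2})$ functions, which is exactly the obstruction described above.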
\subsection{Tensor product potential}\label{sect:gaps:random}
We consider a potential $V$, which generalizes the periodic setup from the previous subsection and is a special case of the general random potential studied in Sections~\ref{sec:evp}--\ref{sect_localization}. This includes different valley formations, namely cuboids of varying side lengths, cf.~Figure~\ref{fig_potentials} (upper right). We follow the ideas of the periodic setting but need to adjust the construction of the finite element space $\W$ in order to prove a spectral gap.
\subsubsection{Description of the potential}\label{sect:gaps:random:pot}
We consider a potential $V$, composed of one-dimensional potentials. For this, define $V_1, \dots, V_d \in L^\infty(0,1)$, each based on an $\eps$-partition of $(0,1)$ with values $0$ and $1$ only. Then, $V \in L^\infty(D)$ is given by
\[
V(x) := \beta + (\alpha-\beta) \big[V_1(x_1)\cdot\dots\cdot V_d(x_d) \big].
\]
Note that this construction provides non-overlapping $\alpha$-valleys in the form of cuboids, each surrounded by at least one $\eps$-layer of $\beta$ values. We characterize the cuboids by their {\em shortest} side length and $N_j$ denotes the number of such valleys with minimal side length~$\eps j$. We define $D_{\alpha,j}$ as the union of these $N_j$ valleys with width~$\eps j$. The maximal (shortest) side length of an $\alpha$-valley is $\eps L$ and thus, $N_L\ge 1$. Furthermore, we need a bound for the possible anisotropy of $\alpha$-valleys of a certain size. Let $\rho_{\tilde \ell}$ denote the maximal quotient of maximal and minimal side length among all $\alpha$-valleys with minimal side length greater than or equal to~$\eps\tilde{\ell}$.
\begin{remark}
With the presented construction of $V$ we obtain the periodic potential of Section~\ref{sect:gaps:periodic} by the choice $V_j = [\, 1,\, 0,\, 1,\,0,\, \dots,1,\, 0\, ]$ for all $j=1,\dots, d$.
\end{remark}
\begin{remark}
\label{rem_expectation_Nell}
In the range $\ell \approx \log(1/\eps)$ the expectation of $N_\ell$ is of order $(2^{-\ell}/\eps)^d$ and thus, of order $\calO(1)$. In this setting, we also have $\rho_{\tilde \ell} = \calO(1)$ with high probability.
\end{remark}
Although the given setting is more involved than the periodic case, we will show that there is -- with high probability -- a spectral gap already after $\calO(\eps^{-p})$ eigenvalues with $p<d$, instead of after $\calO(\eps^{-d})$ eigenvalues. To prove this, we need to exploit the disorder of the potential.
The overall strategy is to prove upper bounds for the first $N_L$ eigenvalues and lower bounds for higher energy levels. For this, we apply once more the max-min principle and construct a specific finite-dimensional subspace based on the valley formations of the potential.
\subsubsection{Quasi-interpolation operator}\label{sect:gaps:random:quasiInt}
To obtain lower bounds on the eigenvalues we construct a finite element subspace and a corresponding quasi-interpolation operator similar to Section~\ref{sect:gaps:periodic}. For this, we fix a level $1<\tilde\ell<L$ and consider partitions of all sets $D_{\alpha,j}$ with $j>\tilde{\ell}$, using a mesh size $h \approx \eps\tilde\ell$, cf.~the illustration in Figure~\ref{fig_lowerBound_hat}.
More precisely, on an $\alpha$-valley $Q$ with dimensions $\eps j_1\le\dots\le \eps j_d$ and $\tilde\ell<j_1$ we consider a Cartesian mesh defined by $\prod_{k=1}^d \bigl(\lfloor j_k / \tilde\ell \rfloor +2\bigr)$ equally distributed nodes. We extend this mesh by one more layer of elements of width $\eps/2$ into the surrounding $\beta$ region, cf.~Figure~\ref{fig_lowerBound_hat}. A local projection operator $\Pi_{\tilde{\ell}}$ onto $Q_1$-basis functions is defined by prescribing values at nodes in $\overline{Q}$ for each $\alpha$-valley with minimal side length $j>\tilde{\ell}$ via
$$(\Pi_{\tilde \ell} u)\vert_Q=I^\text{sz} (u\vert_Q)$$ and enforcing vanishing traces at the boundary of each local mesh to ensure $\V$-conformity.
\begin{figure}
\input{fig_hats}
\caption{Basis functions of $\W_{\tilde{\ell}}$ on large $\alpha$-valleys with mesh size $h \approx \eps \tilde{\ell}$ in the one-dimensional setting. The basis functions at the boundary are slightly deformed. }
\label{fig_lowerBound_hat}
\end{figure}
The image of $\Pi_{\tilde \ell}$ defines the finite element space $\W_{\tilde{\ell}}$ with a dimension bounded by
\begin{align}
\label{def_K}
\dim \W_{\tilde{\ell}}
\, =:\, K
\, \lesssim\, \rho_{\tilde \ell}^{d-1}\sum\nolimits_{j=\tilde \ell+1}^L N_j\, \big\lfloor j /{\tilde\ell} \big\rfloor^{d}.
\end{align}
We emphasize that $\rho_{\tilde \ell}$ enters the estimate as we characterized the $\alpha$-valleys by means of their shortest side length.
As in the periodic case, the kernel of the projection operator $\Pi_{\tilde{\ell}}$ defines an appropriate complement space $\W_{\tilde{\ell}}^\text{c}$ to be used in connection with eigenvalue estimates via the max-min characterization.
Finally, we derive for $u\in \V$ an approximation estimate of the form
\begin{equation}
\label{eq_qi}
\Vert u - \Pi_{\tilde{\ell}} u\Vert
\, \lesssim\, \eps\tilde \ell\, \Vvert u \Vvert.
\end{equation}
To see this, we split the left-hand side into
\begin{align*}
\Vert u - \Pi_{\tilde{\ell}} u \Vert^2
\le \sum\nolimits_{j\le \tilde\ell}\, \Vert u \Vert_{D_{\alpha,j}}^2
+ \sum\nolimits_{j> \tilde\ell}\, \Vert u - I^\text{sz} u \Vert_{D_{\alpha,j}}^2
+ 2\, \Vert u \Vert^2_{\Ob} + 2\, \Vert \Pi_{\tilde{\ell}} u \Vert^2_{\Ob}.
\end{align*}
For small $\alpha$-valleys with width $j\le \tilde{\ell}$ we apply the Poincar\'e-Friedrichs inequality to~$\eta u$, where $\eta$ denotes a cut-off function as in Section~\ref{sec:evp:cutoff}. For this we note that $\eta u \in H^1_0(\hspace{1pt}N(D_{\alpha,j})\hspace{1pt})$, where $N(D_{\alpha,j})$ equals $D_{\alpha,j}$ extended by a surrounding $\tfrac \eps 2$-layer. With this, the Poincar\'e-Friedrichs inequality yields
\begin{equation}
\label{eq_smallValleys}
\Vert u \Vert_{D_{\alpha,j}}
\le \Vert \eta u \Vert_{N(D_{\alpha,j})}
\lesssim \eps\tilde\ell\ \Vert \nabla (\eta u) \Vert_{N(D_{\alpha,j})}
\lesssim\, \eps\tilde\ell\ \Vvert u \Vvert_{N(D_{\alpha,j})}.
\end{equation}
In the last step we have used the same arguments as in the proof of estimate \eqref{estimate_L2norm}.
On $\alpha$-valleys with width $j> \tilde{\ell}$ we can directly apply the approximation property of the Scott-Zhang operator \cite{HeuS07}.
Finally, for $T$ being an element of the $\beta$-region of width~$\tfrac\eps 2$, we show that $\Vert \Pi_{\tilde \ell}\, u \Vert_T \lesssim \eps\tilde\ell\, \Vvert u \Vvert_{N(T)}$. In this case, $N(T)\subset \Ob$ denotes the union of $T$ and all its neighbors within the $\tfrac \eps 2$-layer in $\Ob$, surrounding an $\alpha$-valley. Note that, due to the construction of the operator $\Pi_{\tilde \ell}$, we need to estimate $u$ along edges $E$. Using the trace identity of \cite{CarGR12}, we get for an edge $E$ of $T$,
\[
\frac {1}{|E|} \int_E u \,\text{d}x
\le \frac{1}{|T|} \int_T |u| \,\text{d}x + \frac{\eps\tilde\ell}{|T|} \int_T |\nabla u| \,\text{d}x
\le \frac{1}{|T|^{1/2}} \Vert u\Vert_{T} + \frac{\eps\tilde\ell}{|T|^{1/2}} \Vert \nabla u \Vert_T.
\]
Considering appropriate edges, such estimates lead to
\[
\Vert \Pi_{\tilde \ell}\, u \Vert_T^2
\lesssim |T|\, \sum_{\tilde T \in N(T)} \frac{1}{|\tilde T|} \Big( \Vert u\Vert^2_{\tilde T} + (\eps\tilde\ell)^2 \Vert \nabla u \Vert^2_{\tilde T} \Big)
\le \sum_{\tilde T \in N(T)} (\eps\tilde\ell)^2\, \Vvert u \Vvert^2_{\tilde T}
\le (\eps\tilde\ell)^2\, \Vvert u \Vvert^2_{N(T)}.
\]
Here we used again that $N(T) \subset \Ob$ and that $\beta^{-1} \lesssim \eps^2$.
\subsubsection{Eigenvalue bounds}\label{sect:gaps:random:bounds}
As in the periodic setting, we consider eigenfunctions of the shifted Laplace eigenvalue problem with homogeneous Dirichlet boundary conditions on the $N_L$ $\alpha$-valleys with width $\eps L$. On such a valley we know that the first eigenvalue equals $\alpha + d\pi^2/(\eps L)^2$. If we extend the corresponding eigenfunctions by zero, we obtain $N_L$ functions $\hat u_1, \dots, \hat u_{N_L} \in\V$ for which we have $E(\hat u_j)= \alpha + d\pi^2/(\eps L)^2$. With $\alpha \lesssim (\eps L)^{-2}$ from Assumption~\ref{ass_epsAlpha}, we obtain by Lemma~\ref{lem_evBound} the eigenvalue bound
\[
E^{N_L} \lesssim \frac{1}{(\eps L)^2}.
\]
To ensure a spectral gap, we also need lower bounds on the eigenvalues. Using \eqref{eq_smallValleys} and the approximation property of $I^\text{sz}$, we estimate for $u \in \W_{\tilde{\ell}}^\text{c}$, i.e., $\Pi_{\tilde\ell} u = 0$,
\begin{align}
\label{lowerbound-E-complementary-space}
\Vert u \Vert^2
&= \sum\nolimits_{j\le \tilde\ell}\, \Vert u \Vert_{D_{\alpha,j}}^2
+ \sum\nolimits_{j> \tilde\ell}\, \Vert u - I^\text{sz} u \Vert_{D_{\alpha,j}}^2
+ \Vert u \Vert^2_{\Ob} \\
\nonumber&\lesssim (\eps\tilde\ell)^2\, \sum\nolimits_{j\le \tilde\ell}\, \Vvert u \Vvert^2_{N(D_{\alpha,j})}
+ (\eps\tilde\ell)^2\, \sum\nolimits_{j> \tilde\ell}\, \Vert \nabla u \Vert_{D_{\alpha,j}}^2 + \frac{1}{\beta} \Vert u \Vert^2_{V, D_\beta}
\lesssim (\eps\tilde\ell)^2\, \Vvert u \Vvert^2.
\end{align}
Note that we have used here once more Assumption~\ref{ass_epsBeta}. The application of the max-min eigenvalue characterization in \eqref{eqn_lambda_Nell} then proves
\[
E^{K+1}
= \adjustlimits \max_{\dim\V^{(K)}=K\ } \min_{v \in [\V^{(K)}]^\text{c}}\, E(v)
\ \ge\ \min_{v \in \W_{\tilde{\ell}}^\text{c}}\, E(v)
\ \gtrsim\ \frac{1}{(\eps\tilde\ell)^{2}}.
\]
A combination of the lower and upper bounds of the eigenvalues yields a guaranteed spectral gap of order $\calO(1)$ within the first $K+1$ eigenvalues, if $\tilde\ell$ is sufficiently small compared to $L$.
\begin{corollary}
\label{cor_generalCase}
In the considered setting including Assumptions~\ref{ass_epsBeta} and~\ref{ass_epsAlpha} we have an estimate of the form
\begin{equation}\label{eq_ellleqL}
\frac{E^{N_L}}{E^{K+1}}
\ \le\ c_1 \frac{\eps^2\tilde\ell^2}{\eps^2 L^2}
\ =\ c_1 \frac{\tilde\ell^2}{L^2} =: q,
\end{equation}
for some generic constant $c_1$. Thus, for sufficiently small $\tilde \ell$, i.e. $\tilde{\ell}^2 \le q \hspace{2pt}c_1^{-1} L^2 $ with $0<q<1$, we obtain a spectral gap of size $q<1$.
\end{corollary}
\begin{remark}
\label{rem_Kissmall}
Corollary~\ref{cor_generalCase} shows that any choice $\tilde{\ell}^2 < L^2 / c_1$ ensures a reasonable spectral gap~$q<1$.
Here, $c_1>0$ is the multiplicative constant in \eqref{eq_ellleqL}. On the other hand, we also need to ensure that $K=\dim\W_{\tilde \ell}$ is sufficiently small.
Since the probability of an $\alpha$-valley to be of size $\eps L$ is of the order $(2^{-L}/\eps)^d$, $L$ is with high probability larger than $\log_2(c_2/\eps)$, where $c_2>0$ is a random constant with expected value $1/4$.
Next, we choose $\tilde{\ell}= \lceil L/\sqrt{q^{-1} c_1}\rceil$, leading to $K = \calO(\eps^{-p})$ for $p<d$ (with high probability).
To detail this claim, recall that $N_{\tilde{\ell}} \lesssim (2^{-\tilde \ell}/\eps)^d$ and $\rho_{\tilde \ell} \lesssim 1$ with high probability, cf.~Remark~\ref{rem_expectation_Nell}. Hence, with $c_q:= \sqrt{q^{-1} c_1}$ we have
\[
K
\lesssim \sum\nolimits_{j=\tilde \ell+1}^L N_j
\lesssim \eps^{-d} 2^{-\tilde \ell d}
\le \eps^{-d} 2^{-Ld / c_q }
= c_2^{-d / c_q } \eps^{-d} \eps^{d / c_q }
= c_2^{-d / c_q } \eps^{-d \frac{c_q-1}{c_q}} = \mathcal{O}(\eps^{-p}),
\]
where $p := d \frac{c_q-1}{c_q} < d$.
Note that in this setting, $D_{\alpha,j}$ is the union of $\calO(\eps^{-p})$ cuboids of side length $\eps j$ for $j>\tilde{\ell}$. As a result, $\W_{\tilde \ell}$ is a local space that covers a domain that has a volume of order $\calO(\eps^{d-p})$ up to logarithmic terms, where $d-p>0$. Hence, the support of functions in $\W_{\tilde \ell}$ is asymptotically vanishing for $\eps \rightarrow 0$.
\end{remark}
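To give a feeling for the resulting exponent (with purely illustrative constants): for $q = 1/2$ and $c_1 = 2$ we get $c_q = \sqrt{q^{-1}c_1} = 2$ and hence
\[
p \,=\, d\, \frac{c_q-1}{c_q} \,=\, \frac d2,
\qquad
K \,=\, \calO(\eps^{-d/2}),
\]
so that the number of tracked functions is drastically smaller than the $\calO(\eps^{-d})$ eigenvalues below a comparable gap in the periodic case.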
\begin{remark}
Numerical examples indicate that -- in the presence of disorder -- $K$ is even $\calO(1)$.
In the one-dimensional example presented in Figure~\ref{fig:spectra}, we have $\gap\le\frac 12$ with $K=2$.
\end{remark}
For the construction of a suitable set of starting functions $V^{(0)}$ in the next subsection, we also need an upper bound on $E^K$. In an $\alpha$-valley $Q \subset D_{\alpha, j}$, $j>\tilde{\ell}$, we consider the first $r$ eigenvalues of the shifted Laplacian on $Q$, where $r$ equals the number of degrees of freedom in $\W_{\tilde{\ell}}$, associated with the $\alpha$-valley $Q$. Note that $r$ depends on the anisotropy of the given valley characterized by $\rho_{\tilde \ell}$. Then, $Q$ having the dimensions $\eps j_1 \le \dots \le \eps j_d$ with $j=j_1>\tilde \ell$ implies $j_d/j_1 \le \rho_{\tilde \ell}$ and
\[
r
\, \approx\, \frac{j_1 \cdot \dots \cdot j_d}{\tilde \ell^d}
\, \le\, \rho_{\tilde \ell}^{d-1} \frac{j^d}{\tilde \ell^d}.
\]
Extending these $r$ eigenfunctions by zero, we obtain (considering all $\alpha$-valleys) in total~$K$ orthogonal functions with an energy bounded by
\[
\alpha + \frac{\pi^2}{(\eps j)^2} r^{2/d}
\, \lesssim\, \frac{1}{(\eps j)^2} \frac{j^2}{\tilde \ell^2} \rho_{\tilde \ell}^{2(d-1)/d}
\, \le\, \rho_{\tilde \ell}^{4/3} \frac{1}{(\eps \tilde \ell)^2}
\]
and thus, by Lemma~\ref{lem_evBound},
\begin{equation}
\label{rem:upperbound_K}
E^K \lesssim \rho_{\tilde \ell}^{4/3} (\eps \tilde{\ell})^{-2}.
\end{equation}
\subsubsection{Starting subspace}\label{sect:gaps:random:starting}
To apply the convergence result of Theorem~\ref{theorem-convergence-inexact-block} in the present setting, it remains to construct a suitable set of starting functions $V^{(0)}$ with an invertible matrix $\calC$, cf.~Section~\ref{sect_localization_block}.
Here the idea is to extend the space $\W_{\tilde \ell}$ from Section~\ref{sect:gaps:random:quasiInt} by a few levels of $\alpha$-valleys such that it remains local and then project the first eigenfunctions into this space.
We introduce the finite element space $\W_{\tilde m}$, which is defined as $\W_{\tilde \ell}$ but considers all $\alpha$-cuboids with (minimal) side length $\eps j$, $j> \tilde m$, for some parameter $\tilde m < \tilde \ell$.
The local mesh size then equals $h\approx \eps\tilde m$.
Thus, the extension enlarges the number of $\alpha$-valleys but also refines the previous mesh. The corresponding quasi-interpolation operator reads $\Pi_{\tilde m}\colon \V \to \W_{\tilde m}$ and the dimension of $\W_{\tilde m}$ is bounded by
\[
\dim \W_{\tilde \ell}
\, \le\, \dim \W_{\tilde m}
\, =:\, K_{\tilde m}
\, \lesssim\, \rho_{\tilde m}^{d-1}\sum\nolimits_{j=\tilde m+1}^L N_j\, \big\lfloor j /{\tilde m} \big\rfloor^{d}.
\]
Recall that $N_j$ denotes the number of $\alpha$-cuboids with minimal side length $\eps j$.
We show that this construction leads to an invertible matrix~$\calC$.
\begin{lemma}
\label{lem:startingV}
Consider a tensor product potential as described in Section~\ref{sect:gaps:random:pot} together with Assumptions~\ref{ass_epsBeta} and~\ref{ass_epsAlpha}.
Further, we define the starting space $V^{(0)}$ as the $L^2$-projection $P_{\tilde{m}}$ of the first $K$ eigenfunctions into $\W_{\tilde m}$, i.e., we set
$$
v_j^{(0)} := P_{\tilde{m}} u_j,
$$
where $u_j$ denote the first (normalized) eigenfunctions, $j=1,\dots, K$. This choice leads to an invertible matrix $\calC$ with~$\| \calC^{-1} \|_1 \lesssim \eps^{-r}$ for some power $r>0$, which is harmless for the localization.
\end{lemma}
\begin{proof}
The entries of the matrix $\calC$ are given by
$$
\alpha_{i,j} = (u_i , v_j^{(0)} ) = (P_{\tilde{m}} u_i , P_{\tilde{m}} u_j ) = ( v_i^{(0)} , v_j^{(0)} ).
$$
Hence, $\calC$ is a symmetric mass matrix and invertible if the functions $v_j^{(0)}$ are linearly independent. The linear independence follows from the injectivity of $P_{\tilde{m}}$ on the span of the first $K$ eigenfunctions, which is proved by contradiction. Assume that there exists $\mathbf{a} \in \R^K \setminus \{ 0\}$ and $u := \sum_{j=1}^K \mathbf{a}_j u_j$ such that $P_{\tilde{m}}u=0$.
Then $u$ is in the $L^2$-orthogonal complement of $\W_{\tilde m}$ and we have $\| u \| = \| u - P_{\tilde{m}} u \| \le \| u - \Pi_{\tilde{m}} u \| $. This and \eqref{eq_qi} show that
\begin{equation*}
E(u) \gtrsim (\eps \tilde{m})^{-2}.
\end{equation*}
From Lemma~\ref{lem_RayleighBound} and \eqref{rem:upperbound_K} we also have the upper bound
\begin{equation*}
E(u) \lesssim (\eps \tilde{\ell})^{-2}.
\end{equation*}
If $\tilde m$ is sufficiently small compared to $\tilde \ell$, the previous lower and upper energy bounds are contradictory. Hence, $P_{\tilde{m}}$ is injective and the functions $v_j^{(0)}$ are linearly independent, which in turn implies the invertibility of the matrix $\calC$.
Since $\| \calC \|_2^{-1} = \| \calC^{-1} \|_2$, we can bound the $1$-norm by
\begin{align*}
\| \calC^{-1} \|_1 \le c_K \| \calC \|_1^{-1} \le c_K \Big( \max_{1\le j \le K} |\alpha_{j,j}|\ \Big)^{-1}
\end{align*}
with some constant $c_K$ that depends at most polynomially on $K$ through norm equivalence. Using \eqref{eq_qi} and $\Vert u_j - P_{\tilde{m}} u_j \Vert \le \Vert u_j - \Pi_{\tilde{m}} u_j \Vert $ we have
\begin{align*}
\alpha_{j,j}
= 1 - (u_j, u_j- P_{\tilde{m}} u_j)
\ge 1 - \Vert u_j - \Pi_{\tilde{m}} u_j \Vert
\ge 1 - c\, \eps \tilde{m}\, \Vvert u_j \Vvert
= 1 - c\, \eps\tilde{m}\, \sqrt{E^j}.
\end{align*}
The upper bound $E^K \lesssim \rho_{\tilde \ell}^{4/3} (\eps \tilde{\ell})^{-2}$ from \eqref{rem:upperbound_K} with anisotropy constant $\rho_{\tilde \ell}$ then implies
\[
\alpha_{j,j}
\ge 1 - c\, \eps\tilde{m}\, \sqrt{E^K}
\ge 1 - c' \tilde{m}/\tilde \ell.
\]
Hence, we can bound $\| \calC^{-1} \|_1$ by a constant that depends at most polynomially on $K$, and hence at most polynomially on $\eps^{-1}$.
Recall once more that the space $\W_{\tilde m}$ is with high probability local, cf.~Remark~\ref{rem_Kissmall} where the argument is elaborated.
This implies that $V^{(0)}$ is a suitable starting subspace.
\end{proof}
With this, all assumptions of Theorem \ref{theorem-convergence-inexact-block} (and Remark \ref{remark-on-theorem-convergence-inexact-block}) are verified. Starting from an initial space that is spanned by the few localized basis functions in $V^{(0)}$, we can approximate the first $N_L$ eigenfunctions of $\calH$ in $\calO(\log(1/\tol) /\log(1/\gap))$ steps with an accuracy of order $\tol$ times the energy of the functions in $V^{(0)}$.
Since the support of the starting space only increases slowly during the iteration process, we conclude that the eigenfunctions are well approximated in a domain that has a volume of order $\calO(\eps^{d-p})$, up to logarithmic terms, with $d-p>0$.
Hence, the exponential localization of the eigenfunctions is shown asymptotically for vanishing $\eps \rightarrow 0$.
\begin{remark}
In the range $\beta \approx \eps^{-2}$ we have the stability estimate $\Vvert \Pi_{\tilde{m}} u\Vvert \lesssim \tilde m^2 \Vvert u\Vvert$ and thus, the functions in $V^{(0)}$ satisfy $\Vvert v_j^{(0)}\Vvert = \Vvert \Pi_{\tilde{m}} u_j\Vvert \lesssim \log^2(1/\eps) \Vvert u_j \Vvert$.
\end{remark}
\subsection{Domino block potential}\label{sect:gaps:domino}
As a third example we consider a potential that is formed by a disordered domino block structure. This example aims to demonstrate how our technique can be applied if the $\alpha$-valleys are not surrounded by $\beta$-layers, i.e., a setting that cannot be reduced to a quasi-one-dimensional case. In order to prove the existence of relevant spectral gaps we again follow the ideas from the previous two examples.
\subsubsection{Description of the potential}
To make the setting precise we call $B$ a $j$-domino block (or simply $j$-block) if it is a closed cuboid in $\R^d$ consisting of elements in $\calT$, which is composed of an $\alpha$-cube with side length $\eps j$ and a $\beta$-cube with the same side length. Hence, such a cuboid has the dimension $2\eps j$ in one space direction and $\eps j$ in all other directions. Such a domino block has no preferred orientation. We shall now assume that the potential is formed by a non-overlapping union of such $j$-domino blocks where $j\in\N$ can take any value between $1$ and $L$. An example for such a potential is given in Figure~\ref{fig_potentials} (lower left). The set of all these blocks, which then defines the potential $V$, is denoted by $\mathcal{B}$ and the set of all $j$-domino blocks by $\mathcal{B}_j$. We observe that
$$
\overline{D}
\, =\, \bigcup\nolimits_{B\in \mathcal{B}} B
\, =\, \bigcup\nolimits_{j=1}^L \bigcup\nolimits_{B\in \mathcal{B}_j} B.
$$
We further assume that the probability of finding a small domino block is much higher than that of finding a large domino block. More precisely, we assume that when selecting a domino block from $\mathcal{B}$ the probability that it is a $j$-block is approximately $2^{-j}$. Note that the parameter $L$ is, except for unlikely situations, the same as in Section~\ref{sec:evp:cutoff}. For a $j$-domino block $B \in \mathcal{B}_j$ we denote the cube where $V$ is equal to $\alpha$ by $B_{\alpha}\subset B$.
\begin{remark}
Since the total number of domino blocks in $D$ can be at most of order $\eps^{-d}$, we conclude that the expected number of $j$-blocks is of order $2^{-j}\eps^{-d}$. Hence, for any $j\ge d\, \log(1/\eps)$ we expect the number of $j$-blocks to satisfy
$$
N_j := |\mathcal{B}_j| = \calO(1).
$$
\end{remark}
As for the tensorized potential in Section~\ref{sect:gaps:random:bounds}, we will show the existence of a relevant spectral gap after the first $K$ eigenvalues.
For this, we again prove upper and lower eigenvalue bounds and construct certain finite element spaces in the largest valleys.
\subsubsection{Quasi-interpolation operator}\label{sect:gaps:domino:quasiInt}
Once more we need to introduce a suitable local finite element space on a subset of $D$ and to exploit the approximation properties of the Scott-Zhang quasi-interpolation operator.
For that purpose, let us fix a level $1<\tilde\ell<L$ satisfying $\tilde\ell\gtrsim \log(1/\eps)$. With this, we consider all sets $\mathcal{B}_j$ of $j$-blocks at level $j>\tilde{\ell}$. Next, we restrict our attention to the $\alpha$-parts of these domino blocks and define the set
$$
D_{\alpha,\, >{\tilde\ell}} \, :=\,
\bigcup\nolimits_{j=\tilde \ell + 1}^{L} D_{\alpha, j},
\qquad \mbox{where }\ D_{\alpha, j}:= \bigcup\nolimits_{B \in \mathcal{B}_j } B_{\alpha}.
$$
On $D_{\alpha,\, >{\tilde\ell}}$ we introduce a uniform Cartesian mesh with mesh size $h\approx \eps\tilde\ell$ and extend it by one more layer of elements with width $\eps/2$, cf.~Section~\ref{sect:gaps:random:quasiInt}. Observe that the extended mesh will intersect the $\beta$-parts of all contributing domino blocks, but it will also intersect other domino blocks on possibly lower levels.
The space occupied by the extended mesh is denoted by $\tilde{D}_{\alpha,\, >{\tilde\ell}}$. On $\tilde{D}_{\alpha,\, >{\tilde\ell}}$ we consider the arising $Q_1$-finite element space based on the extended mesh and with homogeneous Dirichlet boundary conditions, as before. The resulting space is denoted by $\W_{\tilde{\ell}}$ and we have
\begin{align*}
\dim \W_{\tilde{\ell}}
\, =:\, K
\, \lesssim\, \sum\nolimits_{j=\tilde \ell+1}^L N_j\, \big\lfloor j /{\tilde\ell} \big\rfloor^{d},
\end{align*}
which is expected to be of order $\calO(\eps^{-p})$ for some $p<d$, cf.~Remark~\ref{rem_Kissmall}.
Note that this also implies that $\W_{\tilde{\ell}}$ is a local space, since its basis functions are only supported on few subsets with maximum diameter~$\eps L$.
Based on the Scott-Zhang operator $I^\text{sz}$, restricted to the mesh on $D_{\alpha,\, >{\tilde\ell}}$, we uniquely define the local projection operator
$$
\Pi_{\tilde{\ell}} : \V \rightarrow \W_{\tilde{\ell}}
$$
by means of the property $\Pi_{\tilde \ell} u = I^\text{sz} u$ on $D_{\alpha,\, >{\tilde\ell}}$. As in the previous example, $\Pi_{\tilde{\ell}}$ prevents that information spreads from $\beta$-regions into $D_{\alpha,\, >{\tilde\ell}}$. At the same time it also prevents that information from smaller domino blocks spreads into $D_{\alpha,\, >{\tilde\ell}}$. The key to the analysis is again an interpolation error estimate.
\begin{lemma}\label{lemma:interpol-est:domino}
Consider the domino block potential from above under Assumption \ref{ass_epsBeta}. Then, for the local projection operator $\Pi_{\tilde{\ell}}$ and all $u \in \V$ it holds that
\begin{align*}
\Vert u - \Pi_{\tilde \ell}\, u \Vert
\, \lesssim\, \eps\tilde\ell\, \Vvert u \Vvert.
\end{align*}
\end{lemma}
\begin{proof}
We split the domain into three parts
$$
D = D_{\alpha,\, >{\tilde\ell}} \hspace{3pt}
\cup \hspace{3pt} \big(D \setminus \tilde{D}_{\alpha,\, >{\tilde\ell}} \big) \hspace{3pt} \cup \hspace{3pt} \big(\tilde{D}_{\alpha,\, >{\tilde\ell}} \setminus D_{\alpha,\, >{\tilde\ell}} \big)
$$
and estimate $u - \Pi_{\tilde \ell} u$ on these sub-domains individually. On $D_{\alpha,\, >{\tilde\ell}}$, the estimate follows immediately with the properties of the Scott-Zhang operator, see Section~\ref{sect:gaps:periodic}.
On $D \setminus \tilde{D}_{\alpha,\, >{\tilde\ell}}$ we have that $\Pi_{\tilde \ell} u=0$. Consequently, the claimed estimate reduces to an estimate of $u$. For this, we derive once more a Friedrichs-type inequality as in Lemma~\ref{lem_eta_Friedrich}. Considering averaged Taylor polynomials as in Appendix~\ref{appendix-a} and introducing a modified cut-off function that exploits the particular structure of the potential (a large $\alpha$-block is always adjacent to a $\beta$-block of the same size), we obtain
\begin{align*}
\| u\|_{D \setminus D_{\alpha,\, >{\tilde\ell}}} \lesssim \eps \tilde\ell \hspace{2pt} \Vvert u\Vvert_{D \setminus D_{\alpha,\, >{\tilde\ell}}}.
\end{align*}
We emphasize that this estimate is free from pollution constants $\kappa_\calT$ and ${\tilde \ell}^d$ that occur in the general case.
Finally, for the estimate in $\tilde{D}_{\alpha,\, >{\tilde\ell}} \setminus D_{\alpha,\, >{\tilde\ell}}$ we can proceed analogously as in Section~\ref{sect:gaps:random:quasiInt}. In particular, for any element $T$ of the extended mesh that lies in the layer $\tilde{D}_{\alpha,\, >{\tilde\ell}} \setminus D_{\alpha,\, >{\tilde\ell}}$ we have
\begin{align*}
\Vert \Pi_{\tilde \ell}\, u \Vert_T^2
\lesssim |T|\, \sum_{\tilde T \in N(T)} \frac{1}{|\tilde T|} \Big( \Vert u\Vert^2_{\tilde T} + (\eps\tilde\ell)^2 \Vert \nabla u \Vert^2_{\tilde T} \Big)
\lesssim \Vert u \Vert^2_{N(T)} + (\eps\tilde\ell)^2\, \Vert \nabla u \Vert^2_{N(T)},
\end{align*}
where $N(T) \subset \tilde{D}_{\alpha,\, >{\tilde\ell}} \setminus D_{\alpha,\, >{\tilde\ell}}$ is the union of $T$ and all its neighbors in $\tilde{D}_{\alpha,\, >{\tilde\ell}} \setminus D_{\alpha,\, >{\tilde\ell}}$. The combination of all these estimates finishes the proof.
\end{proof}
With the quasi-interpolation operator in hand we are ready to establish eigenvalue bounds based on the max-min eigenvalue characterization.
\subsubsection{Eigenvalue bounds}\label{sect:gaps:domino:bounds}
As for the tensor potential we directly obtain by Assumption~\ref{ass_epsAlpha} the upper eigenvalue bound $E^{N_L} \lesssim (\eps L)^{-2}$. By Lemma \ref{lemma:interpol-est:domino} and the max-min principle we also have
\[
E^{K+1}
= \adjustlimits \max_{\dim\V^{(K)}=K\ } \min_{v \in [\V^{(K)}]^\text{c}}\, E(v)
\ \ge\ \min_{v \in \W_{\tilde{\ell}}^\text{c}}\, E(v)
\ \gtrsim\ \frac{1}{(\eps\tilde\ell)^{2}}.
\]
Consequently, Corollary \ref{cor_generalCase} remains valid and for sufficiently small $\tilde \ell$ we obtain a spectral gap of order $\calO(1)$ as we have
\begin{equation*}
\frac{E^{N_L}}{E^{K+1}}
\ \lesssim\ \frac{\tilde\ell^2}{L^2}.
\end{equation*}
For further discussions on this estimate, we refer to Remark \ref{rem_Kissmall}. Furthermore, note that the arguments from Section \ref{sect:gaps:random:bounds} also allow us to derive the upper eigenvalue bound
\begin{equation*}
E^K \lesssim (\eps \tilde{\ell})^{-2}.
\end{equation*}
\subsubsection{Starting subspace}\label{sect:gaps:domino:starting}
Finally, it remains to show the existence of a suitable set of starting functions $V^{(0)}$ to apply Theorem~\ref{theorem-convergence-inexact-block}. This can be done analogously to the case of a tensorized potential as elaborated in Section~\ref{sect:gaps:random:starting}, without modifying the arguments, thanks to the availability of the interpolation error estimate from Lemma~\ref{lemma:interpol-est:domino}. In this case, we define $V^{(0)}$ as the projection of the first $K$ eigenfunctions of the Schr\"odinger operator $\calH$ into a refined version of the local finite element space $\W_{\tilde \ell}$.
As before, this choice also ensures the invertibility of the matrix~$\calC$, cf.~Lemma~\ref{lem:startingV}.
In conclusion, Theorem \ref{theorem-convergence-inexact-block} and Remark \ref{remark-on-theorem-convergence-inexact-block} imply that the first $N_L$ eigenfunctions can be expressed in terms of $K$ functions that show an exponential decay in units of $\eps$. Since $K$ is (with high probability) sufficiently small compared to $\eps^{-d}$, we obtain that the first $N_L$ eigenfunctions are exponentially localized, where the localization centers are the $j$-domino blocks with the highest $j$-levels.
\section{Conclusion}
This paper provided quantitative estimates for the lowermost part of the spectrum of random Schr\"odinger operators in the PDE setting that cannot be extracted from existing theoretical studies. These findings provide a theoretical basis for the recent trend of computational studies of Anderson localization in the linear Schr\"odinger eigenvalue problem, e.g., \cite{ArnDJMF16,Steinerberger2017,LuS18,AltPV18,ArnDFJM19,2018arXiv180600565X}. Moreover, the constructive proofs inspire novel localized computational approaches for the approximation of localized states.
The block inverse iteration used in the proof can be turned into a fast algorithm for the computation of eigenstates and spectra as presented, e.g., in~\cite{AltP19}.
In addition to the particular results on the Schr\"odinger eigenvalue problem, the paper illuminates the large potential of classical tools of numerical analysis such as domain decomposition and the theory of iterative solvers for the mathematical analysis of multiscale partial differential equations and the corresponding eigenvalue problems. Since Anderson localization is an almost universal wave phenomenon known also for sound \cite{2008NatPh...4..945H} and electromagnetic waves (in particular light) \cite{1997Natur.390..671W} we believe that the techniques presented here will be useful in many other physical contexts.
\section{Introduction}
In two-dimensional systems, applying a perpendicular magnetic
field strongly modifies the wave function of electrons,
leading to many interesting phenomena at low temperatures.
The fractional quantum Hall effects\cite{Laughlin} are a typical example,
where incompressible ground states are realized
only at some fractional fillings of Landau levels.\cite{Tsui,FQHEex}
Since fractional quantum Hall effects are observed only
in high quality samples, the Coulomb interaction between
the electrons is thought to be essential, rather than
random potentials from impurities.
This contrasts with the integer quantum Hall effect,
where random potentials are essential.
The importance of the Coulomb interaction in high magnetic
fields follows from
the increase in the energy scale of the Coulomb interaction.
The wave function is scaled by the magnetic length
$\ell = \sqrt{\hbar/eB}$,
which is equivalent to the classical cyclotron
radius $r_c$ in the lowest Landau level.
The increase in the magnetic field decreases
the magnetic length and enhances the energy scale of the
Coulomb interaction between the electrons, $e^2/\varepsilon \ell$.
At a typical magnetic field of 10T, $\ell$ is about 8nm,
which is still much larger than the atomic length scale of 0.1nm.
Since the conduction electrons feel the positive
background charge from the ions averaged over the length scale $\ell$,
the positive charge may be approximated as uniform.
Then the system is equivalent to the electron gas
in a magnetic field, and $\ell$ becomes the unique length
scale of the system.
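As a purely illustrative numerical check of these scales (the dielectric
constant below is an assumption, roughly the GaAs value), a few lines of
Python reproduce the numbers quoted above:
\begin{verbatim}
import math

hbar  = 1.054571817e-34   # J s
e     = 1.602176634e-19   # C
eps0  = 8.8541878128e-12  # F/m
eps_r = 13.0              # assumed dielectric constant (GaAs-like)

B   = 10.0                          # tesla
ell = math.sqrt(hbar / (e * B))     # magnetic length
E_C = e**2 / (4*math.pi*eps0*eps_r*ell)

print(ell * 1e9)        # ~8.1 (nm)
print(E_C / e * 1000)   # ~14  (meV), the scale e^2/(eps ell)
\end{verbatim}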
In the magnetic field, the kinetic energy is scaled by the
cyclotron frequency $\omega_c=eB/m$, which is
determined by the magnetic field $B$.
The quantization of the wave function
discretizes the classical cyclotron radius $r_c$,
which also discretizes the kinetic energy
and yields the Landau levels $E=\hbar \omega_c (n+1/2)$.
This means that a macroscopic number of electrons have
the same energy in each Landau level, and
a macroscopic degeneracy appears in the ground state.
This macroscopic degeneracy is lifted by
the Coulomb interaction between the electrons, and
various types of liquid states\cite{RezHal,Grei,Moor,Pan,Eis}
and CDW states\cite{Kou1,Lill,Du,Coop} are realized
depending on the filling of the Landau levels.
Since the ground state has macroscopic
degeneracy in the limit of weak Coulomb interaction,
standard perturbation theories are not useful.
Thus numerical diagonalizations of the many-body
Hamiltonian have been used to study this system.
Since the numerical representation of the Hamiltonian
needs a complete set of many-body basis states,
we divide the system into unit cells with
a finite number of electrons in each cell.
The properties of the infinite system are obtained by
finite size scalings.
However, the number of many-body basis states
increases exponentially with the number of electrons.
For example, when we study the ground state at $\nu=1/3$ with
18 electrons, each unit cell has 54 degenerate orbitals.
The number of many-body basis states is given by the combination of
occupied and unoccupied orbitals, $_{54}C_{18} \sim 10^{14}$,
which is practically impossible to manage by using standard
numerical methods such as exact diagonalization.
To study systems with typically more than 10 electrons,
we need to reduce the number of many-body basis states.
S. White in 1992.\cite{White1,White2}
This method is a kind of variational method combined with
a real space renormalization group method, which enables
us to obtain the ground-state wave function of large-size
systems with controlled
high accuracy within a restricted number of many body basis states.
The DMRG method has excellent features
compared with other standard numerical methods.
In contrast to the quantum Monte Carlo method,
the DMRG method is free from statistical errors and
the negative sign problem, which inhibit convergence
of physical quantities at low temperatures.
Compared with the exact diagonalization method, the DMRG method
has no limitation on the system size.
The error in the DMRG calculation
comes from the restriction of the number of basis states,
which is systematically controlled by the density
matrix calculated from the ground-state wave function,
and the obtained results are easily improved by
increasing the number of basis states retained in the system.
The application of the DMRG method to two-dimensional quantum
systems is a challenging subject and many algorithms have been
proposed. Most of them use mappings onto effective
one-dimensional models with long-range interactions.
However, the mapping from two-dimensional systems to
one-dimensional effective models is not unique, and a proper
mapping is necessary to keep high accuracy.
In two-dimensional systems under a perpendicular magnetic field,
all the one-particle wave functions $\Psi_{N X}(x,y)$
are identified by the Landau level index $N$ and the
x-component of the guiding center, $X$, in the Landau gauge.
The guiding center is essentially the center coordinate of the
cyclotron motion of the electron, and it is natural to use $X$ as a
one-dimensional index of the effective model.
More importantly, $X$ is discretized in a finite unit cell
of $L_x\times L_y$ through the relation to the $y$-momentum,
$X=k_y \ell^2 $,
which is discretized under the periodic boundary condition,
$k_y=2\pi n/L_y$ with $n$ being an integer.
Therefore, two-dimensional continuous systems in a magnetic field
are naturally mapped onto effective one-dimensional
lattice models, and we can apply the standard DMRG method.\cite{Shibata1}
This method was first applied to interacting
electron systems in a high Landau level, and
the ground-state phase diagram, which consists of various
CDW states called stripe, bubble and Wigner crystal,
has been determined.\cite{Shibata2,Yoshioka}
The ground state and low energy excitations in the lowest
and the second lowest Landau levels have also been
studied by the DMRG, and the existence of
various quantum liquid states such as the Laughlin state and of
charge-ordered states called Wigner crystals has been confirmed;
a new stripe state has also been proposed.\cite{Shibata3,Shibata6}
In the following, we first explain the effective one-dimensional
Hamiltonian used in the above studies and then show the results
obtained for the spin-polarized single-layer system.
We next review a recent study on the spin transition and domain
formation at $\nu=2/3$,\cite{Shibata4} and
finally explain the results on bilayer quantum Hall systems
at $\nu=1$,\cite{Shibata5}
where a crossover from a Fermi liquid state to an excitonic
incompressible state occurs.
\section{DMRG method}
Here we briefly describe how the effective 1D Hamiltonian is
obtained from 2D quantum Hall systems.\cite{Shibata1}
To describe the many-body Hamiltonian for an interacting system,
we first need to define one-particle basis states.
Here, we use the eigenstates of free electrons
in a magnetic field as one-particle basis states and
represent the wave function $\Psi_{N X}(x,y)$ in the Landau gauge:
\begin{equation}
\label{BWF}
\Psi_{N X}(x,y) = C_{N} \exp{\left[i {k_y y} -\frac{(x-X)^2}
{2\ell^2}\right]} H_N\left[\frac{x-X}{\ell}\right],
\end{equation}
where $H_N$ are Hermite
polynomials and $C_{N}$ is the normalization constant.
Then all the eigenstates $\Psi_{N X}(x,y)$ are specified
using two independent parameters $N$ and $X$;
$N$ is the Landau level index and $X$ is the $x$-component
of the guiding center coordinates of the electron.
Since the guiding center $X$ is related to the momentum
$k_y$ as $X=k_y\ell^2$, and $k_y$ is discretized under
the periodic boundary conditions,
the guiding center $X$ takes only discrete values
\begin{equation}
X_n=2\pi\ell^2 n/ L_y,
\end{equation}
where $L_y$ is the length of the unit cell in the $y$-direction.
If we fix the Landau level index $N$,
all the one-particle states are
specified by the one-dimensional discrete parameter $X_n$.
Since many-body basis states are product states of
one-particle states, they are also described by the
combinations of $X_n$ of the electrons in the system.
Thus the system can be mapped onto an effective
one-dimensional lattice model.
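A minimal sketch of this mapping (all parameter values illustrative, with
$\ell = 1$) reads:
\begin{verbatim}
import numpy as np

def guiding_centers(M, L_y, ell=1.0):
    # lattice sites X_n = 2*pi*ell^2*n/L_y, n = 1..M, of the
    # effective one-dimensional model
    n = np.arange(1, M + 1)
    return 2 * np.pi * ell**2 * n / L_y

M   = 54                        # orbitals in the unit cell
L_y = np.sqrt(2 * np.pi * M)    # square cell: 2*pi*M*ell^2 = L_x*L_y
X   = guiding_centers(M, L_y)   # equally spaced guiding centers
\end{verbatim}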
The macroscopic degeneracy in the ground state of
free electrons in a partially filled Landau level
is lifted by the Coulomb interaction
\begin{equation}
V(r)= \frac{e^2}{\epsilon r}.
\end{equation}
The Coulomb interaction induces correlations between
the electrons and stabilizes various types of ground states
depending on the filling $\nu$ of the Landau levels.
When the magnetic field is strong enough that
the Landau level splitting is sufficiently
large compared with the typical Coulomb interaction
$e^2/(\epsilon \ell)$,
the electrons in fully occupied Landau
levels are inert and the ground state is
determined only by the electrons in the topmost
partially filled Landau level.
The Hamiltonian is then written as
\begin{equation}
\label{2DH}
H= S \sum_n c_n^\dagger c_n +
\frac{1}{2}\sum_{n_1} \sum_{n_2} \sum_{n_3} \sum_{n_4}
A_{n_1 n_2 n_3 n_4} c_{n_1}^\dagger c_{n_2}^\dagger c_{n_3} c_{n_4},
\end{equation}
where we have imposed periodic boundary conditions in both
$x$- and $y$-directions, and
$S$ is the classical Coulomb energy of the Wigner crystal
with a rectangular unit cell of $L_x \times L_y$\cite{QHHS}.
$c_n^\dagger$ is the creation operator of the electron represented
by the wave function defined in equation (\ref{BWF}) with $X=X_n$.
$A_{n_1 n_2 n_3 n_4}$ are the matrix elements of the Coulomb
interaction defined by
\begin{eqnarray}
A_{n_1 n_2 n_3 n_4}&=&\delta'_{n_1+n_2,n_3+n_4}\frac{1}{L_xL_y}
\sum_{\mib q} \delta'_{n_1-n_4,q_yL_y/2\pi}\frac{2\pi e^2}{\epsilon q}
\nonumber\\
&& {\mbox{\hspace{0.5cm}}}\times\left[L_N(q^2\ell^2/2)\right]^2
\exp{ \left[-\frac{q^2 \ell^2}{2}-i(n_1-n_3)\frac{q_xL_x}{M} \right] } ,
\end{eqnarray}
where $L_N(x)$ are Laguerre polynomials with $N$ being the Landau level
index\cite{Yoshioka}.
$\delta_{n_1,n_2}' = 1$ when $n_1=n_2 (\mbox{mod}\ M)$ with
$M$ being the number of one-particle states in the unit cell,
which is given by the area of the unit cell
$2\pi M \ell^2=L_xL_y$.
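As an illustration of how such matrix elements can be evaluated, the
following Python sketch sums the rapidly converging, Gaussian-damped
$q$-series for $A_{n_1 n_2 n_3 n_4}$ in units of $e^2/\epsilon = 1$. The
cutoff \texttt{qmax} and all parameter names are our own choices, and the
complex phase is replaced by its cosine, which is justified because the
$q_x$-sum is symmetric under $s\to -s$.
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre

def coulomb_element(n1, n2, n3, n4, M, L_x, L_y, N=0, ell=1.0, qmax=20):
    if (n1 + n2 - n3 - n4) % M != 0:      # momentum conservation
        return 0.0
    A = 0.0
    for s in range(-qmax, qmax + 1):      # q_x = 2*pi*s/L_x
        for t in range(-qmax, qmax + 1):  # q_y = 2*pi*t/L_y
            if s == 0 and t == 0:
                continue                  # q = 0: cancelled by background
            if (n1 - n4 - t) % M != 0:    # delta'_{n1-n4, q_y L_y/(2 pi)}
                continue
            q2 = (2*np.pi*s/L_x)**2 + (2*np.pi*t/L_y)**2
            A += (2*np.pi/np.sqrt(q2)
                  * eval_laguerre(N, q2*ell**2/2)**2
                  * np.exp(-q2*ell**2/2)
                  * np.cos((n1 - n3) * 2*np.pi*s/M))
    return A / (L_x * L_y)
\end{verbatim}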
\begin{figure}[t]
\begin{center}
\epsfxsize=110mm \epsffile{Fig1.eps}
\caption{\label{DMRG}
Schematic diagrams for (a) infinite system algorithm
and (b) finite system algorithm of the DMRG method.
$\bullet$ represents a one-particle orbital
in a given Landau level. B$_L$ and B$_R$ are left and right blocks,
respectively.}
\end{center}
\end{figure}
In order to obtain the ground-state wave function
we apply the DMRG method.\cite{Shibata1}
As shown in Fig.~\ref{DMRG} (a), we start from a small-size
system consisting of
only four one-particle orbitals whose indices $n$
are 1, 2, $M-1$, and $M$, and we
calculate the ground-state wave function. We then construct
the left block containing one-particle orbitals of $n=1$ and 2, and the
right block containing $n=M-1$ and $M$ by using eigenvectors
of the density matrices which are calculated from the
ground-state wave function.
We then add two one-particle orbitals $n=3$ and $M-2$ between the
two blocks and repeat the above procedure until
$M$ one-particle orbitals are included in the system.
We then apply the finite system algorithm of the DMRG
shown in Fig.~\ref{DMRG} (b)
to refine the ground-state wave function.
After convergence is obtained,
we calculate correlation functions to identify the ground state.
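The growth procedure described above can be summarized schematically;
every function in the following Python sketch is a placeholder for a
standard DMRG building block rather than an actual library call.
\begin{verbatim}
def infinite_system_dmrg(M, m_keep):
    # start from the four orbitals n = 1, 2, M-1, M
    left, right = block_of([1, 2]), block_of([M - 1, M])
    for i in range(3, M // 2 + 1):
        # add orbitals n = i and n = M+1-i between the two blocks
        psi0  = lanczos_ground_state(superblock(left, i, M + 1 - i, right))
        rho_L = reduced_density_matrix(psi0, part="left")
        rho_R = reduced_density_matrix(psi0, part="right")
        # keep the m_keep eigenvectors with the largest eigenvalues w_alpha
        left  = renormalize(left, i, top_eigenvectors(rho_L, m_keep))
        right = renormalize(right, M + 1 - i, top_eigenvectors(rho_R, m_keep))
    return psi0   # refined afterwards by finite-system sweeps
\end{verbatim}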
The ground-state pair correlation function $g({\mib r})$ in
guiding center coordinates is defined by
\begin{equation}
g({\mib r})=\frac{L_xL_y}{N_e(N_e-1)}
\langle \Psi | \sum_{i\ne j} \delta({\mib r-\mib R}_i+{\mib R}_j) | \Psi \rangle ,
\end{equation}
where ${\mib R}_i$ is the guiding center coordinate of the $i$th
electron, and it is calculated from the following equation
\begin{eqnarray}
g({\mib r})&=&
\frac{1}{N_e(N_e-1)}\sum_{\mib q}\sum_{n_1,n_2,n_3,n_4} \exp
\left[ i{\mib q \cdot\mib r}-\frac{q^2\ell^2}{2}-i(n_1-n_3)\frac{q_xL_x}{M}
\right] \times \nonumber\\
&& \mbox{\hspace{3.5cm}} \delta'_{n_1-n_4,q_yL_y/2\pi} \langle
\Psi | c_{n_1}^\dagger c_{n_2}^\dagger c_{n_3} c_{n_4} | \Psi \rangle,
\end{eqnarray}
where $\Psi$ is the ground state and $N_e$ is the total number of
electrons.
\begin{figure}[t]
\begin{center}
\epsfxsize=75mm \epsffile{Fig2.eps}
\caption{\label{Fig_2DDM}
Eigenvalues $w_\alpha$ of the density matrix for
a two-dimensional system of 54 orbitals with 18 electrons.
The sum of the $w_\alpha$ equals the norm of the
ground-state wave function
and is normalized to unity.}
\end{center}
\end{figure}
The accuracy of the results depends on the
distribution of the eigenvalues of the density matrix.
A typical example of the eigenvalues of the
density matrix for a system of $M=54$ with $18$ electrons
is shown in Fig.~\ref{Fig_2DDM},
which shows an exponential decrease of the eigenvalues $w_\alpha$.
In this case an accuracy of $10^{-4}$ is obtained
by keeping more than one hundred states in each block.
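A minimal sketch of the truncation step behind Fig.~\ref{Fig_2DDM}
(illustrative code, not taken from any particular implementation):
\begin{verbatim}
import numpy as np

def truncation_error(psi, dim_block, m_keep):
    # psi: normalized ground state, reshaped so that rows label the
    # block states and columns the states of the rest of the system
    psi = psi.reshape(dim_block, -1)
    rho = psi @ psi.conj().T                 # reduced density matrix
    w = np.sort(np.linalg.eigvalsh(rho))[::-1]
    return 1.0 - w[:m_keep].sum()            # discarded weight
\end{verbatim}
The discarded weight directly controls the accuracy quoted above, since
the retained eigenvectors with the largest $w_\alpha$ carry almost all of
the norm of the ground state.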
\section{Single layer system}
\begin{figure}[t]
\begin{center}
\epsfxsize=75mm \epsffile{Fig3.eps}
\caption{\label{Fig_Gap}
The lowest excitation gap at various $\nu$ in
the lowest Landau level.
Relatively large excitation gap is obtained at
fractional fillings $\nu=n/(2n+1)$.
The excitation gap is in units of $e^2/(\epsilon \ell)$.
}
\end{center}
\end{figure}
Here we present the diverse ground states obtained by the
DMRG method applied to single-layer quantum Hall systems.
In the limit of strong magnetic field, the electrons
occupy only the lowest Landau level $N=0$. In this limit,
the fractional quantum Hall effect (FQHE) has been
observed at various fractional fillings\cite{FQHEex}.
The FQHE state is characterized by an incompressible liquid
with a finite excitation gap\cite{Laughlin}.
These FQHE states are confirmed by the DMRG calculations,
where relatively large excitation gaps are obtained at
various fillings between $\nu=1/2$ and 3/10\cite{Shibata3}
as shown in Fig.~\ref{Fig_Gap}.
We clearly find large excitation gaps at fractional
fillings $\nu=1/3,2/5,3/7,4/9$ and $5/11$,
which correspond to primary series of the FQHE at
$\nu=n/(2n+1)$.
The pair correlation function at $\nu=1/3$ is
presented in Fig.~\ref{Fig_Lau}, which shows a
circularly symmetric correlation consistent with
Laughlin's wave function.\cite{Laughlin}
In the limit of low filling $\nu\rightarrow 0$, the mean
separation between the electrons becomes much longer than the
typical length scale of the one-particle wave function.
In this limit the quantum fluctuations are not important
and electrons behave as classical point charges.
The ground state is then expected to be the Wigner crystal.
The formation of the Wigner crystal is also confirmed by
the DMRG calculations at low fillings
as shown in Fig.~\ref{Fig_CDW} (a).
The $\nu$-dependence of the low energy spectrum shows
that the first-order transition to the Wigner crystal occurs
at $\nu\sim 1/7$.\cite{Shibata3}
With decreasing magnetic field, electrons occupy higher
Landau levels.
In high Landau levels, the one-particle wave function
extends over space, leading to effectively long-range
exchange interactions between the electrons.
The long-range interaction
stabilizes charge-density-wave (CDW) ground states, and
various types of CDW states called stripes and bubbles are
predicted by Hartree-Fock theory.\cite{Kou1}
These CDW states are confirmed by the DMRG calculations
as shown in Figs.~\ref{Fig_CDW} (b) and (c),
where two-electron bubble state and stripe state are
obtained at $\nu=8/27$ and $3/7$, respectively,
in the $N=2$ Landau level.
Although the CDW structures are similar to those
obtained in the Hartree-Fock calculations, the ground
state energy and
the phase diagram are significantly different\cite{Shibata2}.
The DMRG results are consistent with recent
experiments\cite{Lill};
the discrepancy from the Hartree-Fock results is attributed to the quantum
fluctuations neglected in those calculations.
The ground-state phase diagram obtained by the DMRG
is shown in Fig.~\ref{Phase}.
In the lowest Landau level,
we find many liquid states at fractional
fillings and around $\nu=1/2$.
In higher Landau levels, by contrast,
CDW states dominate over the whole range of filling.
This difference in the ground state phase diagram
comes from different effective interactions between the electrons.
In the lowest Landau level, the one-particle
wave function is localized within the magnetic length
$\ell$, which yields a strong short-range repulsion between the
electrons. Since quantum liquid states such as the Laughlin
state are stabilized by the strong short-range repulsion,
liquid states are realized in the lowest Landau level.
In higher Landau levels, however, the wave function extends over
space with the increase in the classical cyclotron radius $R_c$.
Thus the short-range repulsion is reduced and
liquid states become unstable.
As shown in Fig.~\ref{effective} (a),
the real-space effective interaction between the electrons
in higher Landau levels
has a shoulder structure around a distance of twice the classical
cyclotron radius.
This structure of the effective interaction produces a
minimum in the Coulomb potential near the guiding
center of each electron, as shown in Fig.~\ref{effective} (b),
and stabilizes the clustering of electrons.
This is the reason why stripe and bubble states are
realized in higher Landau levels.\cite{Shibata6}
\begin{figure}
\begin{center}
\epsfxsize=75mm \epsffile{Fig4.eps}
\caption{\label{Fig_Lau}
Pair correlation function $g({\bf r})$ at $\nu=1/3$ in the lowest
Landau level. The length is in units of $\ell$.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=95mm \epsffile{Fig5.eps}
\caption{\label{Fig_CDW}
Pair correlation functions $g({\bf r})$ in guiding center coordinates.
(a) Wigner crystal realized in an excited state at $\nu=1/6$ in
the lowest Landau level. The number of electrons in the unit cell $N_e$ is 12.
(b) Two-electron bubble state at $\nu=8/27$ in $N=2$ Landau level.
$N_e=16$.
(c) Stripe state at $\nu=3/7$ in $N=2$ Landau level. $N_e=18$.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=75mm \epsffile{Fig6.eps}
\caption{\label{Phase}
The ground state phase diagram obtained by the DMRG method.
$N$ is the Landau level index and $\nu_N$ is the filling
factor of the $N$th Landau level.
}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\epsfxsize=120mm \epsffile{Fig7.eps}
\caption{\label{effective}
(a) Effective interaction between the electrons in the
$N$th Landau level. $R_c$ is the classical cyclotron radius.
(b) Coulomb potential made by two electrons separated by $\Delta x$.
}
\end{center}
\end{figure}
\section{Spin transitions}
In two dimensional systems, a strong perpendicular magnetic field
completely quenches the kinetic energy of electrons.
Since the kinetic energy is independent of the spin polarization,
the exchange Coulomb interaction easily aligns the electron spin.
The ferromagnetic ground state at
$\nu=1/q$ ($q$ odd) is thus realized even in the absence of the
Zeeman splitting\cite{qhe-review}.
At the filling $\nu=2/3$ and $2/5$, however, the
paramagnetic ground states compete with the
ferromagnetic state, and the Zeeman splitting $\Delta_z=g\mu_BB$
induces a spin transition\cite{chacraborty}.
Such a spin transition in fractional quantum
Hall states has been naively explained by the
composite fermion theory.\cite{jain}
The composite fermions are electrons coupled with an even number
of flux quanta. These fluxes effectively reduce the external
magnetic field, and the $\nu=p/(2p\pm 1)$ fractional
quantum Hall effect (FQHE)
state is mapped onto the $\nu'=p$ integer QHE state of
composite fermions.
The spin transitions at $\nu=2/3$ and
2/5\cite{jain2} correspond to the spin transition at $\nu=2$,
where the Zeeman splitting corresponds to
the effective Landau level separation, and
the energy levels of the minority spin state in the
lowest Landau level and the majority spin state
in the second lowest Landau level coincide.
Extensive experimental\cite{exp1,exp1.1,exp2.1,exp2,exp3,exp4,
exp5,exp7,exp8}
and theoretical \cite{thry1,thry2,thry3,karel}
studies have been made on this transition.
Nevertheless, there is no clear theoretical consensus on this
issue. This is due to the difficulty of
studying this system: a number of states
compete closely in energy, and large enough systems are needed to
resolve non-uniform structures in the partially polarized states.
Here we use the DMRG method\cite{Shibata1}
to study the spin transition and the spin structures in large systems,
clarifying the nature of the spin transition at $\nu=2/3$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\textwidth]{Fig8.eps}
\caption{\label{spin_e}
Lowest energies for fixed polarization ratio $P$ as a function
of magnetic field $B$ at filling factor $\nu=2/3$ in units of
$e^2/(\epsilon \ell)$.
The total number of electrons is 20. The aspect ratio is fixed at 2.0.
The $g$-factor is 0.44.
}
\label{figure1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig9.eps}
\caption{\label{spin_gap}
Charge gap of $\nu=2/3$ spin polarized states ($\Box$), unpolarized
states ($\bullet$), and partially polarized states ($\circ$)
for various $N_e$ and aspect ratios $L_x/L_y$. $\Delta_c$ is
in units of $e^2/(\epsilon \ell)$.
}
\label{figure2}
\end{center}
\end{figure}
We first calculate the energy at various polarizations
$P$ as a function of the Zeeman splitting $\Delta_z=g\mu_B B$.
The obtained results are shown in Fig.~\ref{spin_e}.
In the absence of the Zeeman splitting,
the unpolarized state ($P=0$) is the lowest.
The energy of the polarized state ($P>0$) monotonically increases
as $P$ increases.
With the increase in the Zeeman splitting $\Delta_z$, however,
the energy of the polarized state decreases and
the fully polarized state ($P=1$) becomes the lowest.
Figure \ref{spin_e} shows that the transition from the unpolarized state
to the fully polarized state occurs at $B\simeq 6$~T, which is roughly
consistent with the earlier work done in a spherical geometry.
\cite{chacraborty}
In the present calculation on a torus,
all partially polarized
states ($0< P < 1$) are higher in energy than the ground states ($P=0$
or 1).
This feature is independent of the system size and
the aspect ratio $L_x/L_y$, and
indicates phase separation into $P=0$ and
$P=1$ regions in partially polarized states.
The unpolarized state of $P=0$ and the fully polarized
state of $P=1$ are both quantum Hall states
with finite charge excitation gap, which is
defined by
\begin{equation}
\Delta_c(P)=E(N_{\phi}+1,P)+E(N_{\phi}-1,P)-2E(N_{\phi},P),
\end{equation}
where $N_{\phi}$ is the number of one-particle states
in the lowest Landau level.
The filling factor $\nu$ is then given by $N_e/N_{\phi}$.
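
In practice, the gap is assembled from three independent ground-state energies; a minimal sketch, assuming \texttt{E} is a hypothetical callable returning the DMRG ground-state energy at given $N_\phi$ and $P$:
\begin{verbatim}
def charge_gap(E, Nphi, P):
    # Delta_c(P) = E(Nphi+1, P) + E(Nphi-1, P) - 2 E(Nphi, P)
    return E(Nphi + 1, P) + E(Nphi - 1, P) - 2*E(Nphi, P)
\end{verbatim}
Adding or removing one flux quantum at fixed $N_e$ probes the charged excitations, so $\Delta_c$ measures the charge excitation energy.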
The charge gap $\Delta_c$ for various $N_{\phi}$
and aspect ratios of the unit cell is presented in Fig.~\ref{spin_gap}.
In this figure, the gap $\Delta_c$ seems to vanish
for the partially polarized state $P\sim 1/2$
in the limit $N_e\rightarrow\infty$.
This result clearly indicates that the partially polarized
state with
$P\sim 1/2$ is a compressible state, in contrast to the
incompressible states at $P=0$ and $1$, where
$\Delta_c$ remains finite in the limit
$N_e\rightarrow\infty$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\textwidth]{Fig10.eps}
\caption{\label{spin_dom}
Pair correlation functions for minority spins
$g_{\downarrow\downarrow}$ at $\nu=2/3$
for several polarization ratios
(a) $P=0.8$, (b) $P=0.6$, (c) $P=0.5$, and (d) $P=0.4$.
}
\label{figure3}
\end{center}
\end{figure}
To study the spin structure in the partially polarized states,
we next calculate the pair-correlation function defined by
\begin{equation}
g_{\sigma\sigma}({\mib r})=\frac{L_xL_y}{N_{\sigma}(N_{\sigma}-1)}
\langle \Psi |\sum_{n\ne m}\delta({\mib r}+{\mib R}_{\sigma,n}-{\mib R}_{\sigma,m})
|\Psi\rangle ,
\end{equation}
where $\sigma=\pm1/2$ is the spin index and $N_{\sigma}$ is the number
of electrons with spin $\sigma$.
The spin structures in partially spin polarized states
are clearly shown in
the pair correlation function between minority spins.
Namely, if unpolarized regions are formed
in the partially polarized states, then
electrons with minority spins
are concentrated in the unpolarized regions.
This concentration of the minority spins is
seen in Fig.~\ref{spin_dom}, which displays
$g_{\downarrow\downarrow}(x,y)$ for partially polarized states
at (a) $P=0.8$, (b) $P=0.6$, (c) $P=0.5$, and (d) $P=0.4$.
When $P$ is close to $1$, for example $P=0.8$ shown
in Fig.~\ref{spin_dom}(a), a pair of minority spins is found
only near the origin.
As the polarization ratio $P$ decreases, the minority
spins form a domain around the origin, and two domain walls
along the $y$-direction appear.
These domain walls move along the $x$-direction,
and the minority-spin domain finally
covers the entire unit cell in the limit $P\rightarrow 0$.
This change in the size of the domain is consistent with the expectation
that the domain in Fig.~\ref{spin_dom}
corresponds to the unpolarized spin-singlet region,
where the densities of up-spin
and down-spin electrons are the same.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\textwidth]{Fig11.eps}
\caption{\label{domain_4}
Local densities of up-spin and down-spin electrons
for various polarization ratios $P$ at $\nu=2/3$.
The number of electrons is 20.
}
\label{figure4}
\end{center}
\end{figure}
To confirm the separation of the unpolarized and polarized spin regions,
we next consider the local electron density
of up-spin electrons $\nu_{\uparrow}(x)$ and down-spin electrons
$\nu_{\downarrow}(x)$.
Figure \ref{domain_4} shows $\nu_{\uparrow}(x)$ and $\nu_{\downarrow}(x)$
for partially polarized states with $P=0.2,\ 0.4,\ 0.6$ and $0.7$.
Here $\nu_{\uparrow}(x)$ and $\nu_{\downarrow}(x)$ are
scaled to be the local filling factors of the
lowest LL. Thus, the total local electron density
$\nu_{\uparrow}(x)+\nu_{\downarrow}(x)$ is almost $2/3$.
In this figure the separation into two regions is clearly seen:
the unpolarized spin region around $L_x/2$, where
both $\nu_{\uparrow}$ and $\nu_{\downarrow}$ are close to 1/3,
and the fully polarized spin region around $x\sim 0$, or equivalently
$x\sim L_x$, where $\nu_{\uparrow}$ is almost 2/3 while
$\nu_{\downarrow}$ is close to 0.
These results confirm the separation of the unpolarized and
polarized spin regions, as expected from the
pair correlation functions shown in Fig.~\ref{spin_dom}.
The polarized and unpolarized spin regions are separated
by domain walls whose width is about $4\ell$. This means that the
phase separation is realized only for systems whose
unit cell dimensions $L_x$ and $L_y$ are larger than twice the width of
the domain wall, i.e., $L_x, L_y > 8\ell$.
Indeed, exact diagonalization studies up to $N_e=8$ electrons have
never found the phase separation at $\nu=2/3$.\cite{karel}
We have found the phase separation only for large systems with
$N_e > 12$.
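
This geometric criterion is easy to check; the following minimal Python sketch computes the unit-cell dimensions from the flux-counting relation $L_xL_y=2\pi N_\phi\ell^2$ with $N_\phi=N_e/\nu$ (the same relation, $N_\phi=(e/h)L_xL_yB$, used in the Conclusions).
\begin{verbatim}
import numpy as np

def cell_dimensions(Ne, nu, aspect, ell=1.0):
    # unit-cell sides from Lx*Ly = 2*pi*Nphi*ell^2, aspect = Lx/Ly
    Nphi = Ne/nu
    Ly = np.sqrt(2*np.pi*Nphi/aspect)*ell
    return aspect*Ly, Ly

# phase separation requires both sides to exceed 8*ell
Lx, Ly = cell_dimensions(Ne=20, nu=2/3, aspect=2.0)
\end{verbatim}
For a square cell, this gives $L\simeq 8.7\ell$ at $N_e=8$, only marginally above the $8\ell$ threshold, but $L\simeq 10.6\ell$ at $N_e=12$, consistent with the system sizes quoted above.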
We note that the above behavior is generic with respect to the aspect ratio.
In an ideal system, the two states separate into two regions even when the
system size is infinitely large. In experimental situations, however,
multi-domain structures are realized due to the inhomogeneity and
coupling with randomly distributed nuclear spins.
The DMRG study on the ground state energy for various polarization $P$
shows that the ground state at $\nu=2/3$
evolves discontinuously from the unpolarized $P=0$
state to the fully polarized $P=1$ state as the Zeeman splitting increases.
In partially polarized states $0<P<1$, the electronic system separates
spontaneously into two states: the $P=0$ and the $P=1$ states.
These two states are separated by domain walls of width $4\ell$.
Since the energy of the domain wall is positive,
the partially polarized states always have higher energy than
the $P=1$ or $P=0$ states.
We think this is the reason for the direct first-order transition
from the $P=0$ state to the $P=1$ state in the ground state.
It is useful to compare our result with the spin transition
at $\nu=2$, which occurs when the minority spin states in the lowest LL
and the majority spin states in the second lowest LL cross
as the ratio of the Zeeman and Coulomb energies is varied.
The ground state at $\nu=2$ is thus a fully polarized state
or a spin singlet state.
In analogy with the case of $\nu=2/3$, the transition between them is
first order\cite{tomas},
and spin domain walls have been found in high-energy states.\cite{nomura}
This analogy can be expected, because the $\nu=2$ states and the $\nu=2/3$
states are connected in the composite fermion theory\cite{jain,jain2},
although the effective interaction between composite fermions is different
from that for electrons.
\section{Bilayer system}
The properties of quantum Hall systems sensitively depend on the
magnetic field, and various types of ground states including
incompressible liquids\cite{Laughlin}, compressible
liquids\cite{Jain,Halp}, spin singlet liquid,
CDW states called stripes, bubbles, and
Wigner crystal are realized depending on the filling $\nu$
of Landau levels.
In bilayer quantum Hall systems, the additional length scale of
the layer
distance $d$ and the layer degrees of freedom make the
ground state much more diverse and interesting.\cite{EM}
The excitonic phase, namely Halperin's $\Psi_{1,1,1}$ state,
is one of the ground states realized in
bilayer quantum Hall systems at total filling $\nu=1$ and small
layer separation $d$, where electrons and holes in different layers
are bound to each other due to the strong
interlayer Coulomb interaction.
This excitonic state has recently attracted much attention
because a dramatic enhancement of the zero-bias tunneling conductance
between the two layers\cite{ZTNC} and the vanishing of the
Hall counterflow resistance have been observed\cite{CFH1,CFH2}.
As the layer separation $d$ is increased, the excitonic phase vanishes,
and at large enough separation a composite-fermion Fermi-liquid
state is realized in each layer.
Several scenarios have been proposed for the transition
of the ground state as the layer separation
increases\cite{HFT,PCF,Mac,Kim,SH,NY,SRM}.
However, how the excitonic state develops into
independent Fermi-liquid states has not been fully understood.
In this section we investigate the ground state of $\nu=1$ bilayer
quantum Hall systems by using the DMRG method\cite{Shibata1}.
We calculate the energy gap, the two-particle correlation function $g(r)$,
and the excitonic correlation function for
various values of the layer separation $d$, and show the
evolution of the ground state with increasing $d$.
The Hamiltonian of the bilayer quantum Hall system is written as
\begin{eqnarray}
H &=& \sum_{i<j} \sum_{\mib q} V(q)\ {\rm e}^{-q^2\ell^2/2}
{\rm e}^{{\rm i}{\mib q} \cdot ({\mib R}_{1,i}-{\mib R}_{1,j})} \nonumber \\
&&+ \sum_{i<j} \sum_{\mib q} V(q)\ {\rm e}^{-q^2\ell^2/2}
{\rm e}^{{\rm i}{\mib q} \cdot ({\mib R}_{2,i}-{\mib R}_{2,j})} \nonumber \\
&&+ \sum_{i,j} \sum_{\mib q} V(q)\ {\rm e}^{-qd}e^{-q^2\ell^2/2}
{\rm e}^{{\rm i}{\mib q} \cdot ({\mib R}_{1,i}-{\mib R}_{2,j})},
\label{Coulomb}
\end{eqnarray}
where ${\mib R}_{1,i}$ are the two-dimensional guiding center
coordinates of the $i$th electron in layer-1 and
${\mib R}_{2,i}$ are those in
layer-2. The guiding center coordinates
satisfy the commutation relation,
$[{R}_{j}^x,{R}_{k}^y]={\rm i}\ell^2\delta_{jk}$.
$V(q) =2\pi e^2/(\epsilon q)$ is the Fourier transform of the
Coulomb interaction and
the wave function is projected on to the lowest Landau level.
We consider a uniform positive background charge to cancel the
component at $q=0$.
We assume zero interlayer tunneling and a fully spin polarized
ground state.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig12.eps}
\caption{\label{bi_cor}
The exciton correlation of bilayer quantum Hall systems at
$\nu=1$. The solid line represents $g_{\rm ex}(M/2)$.
The dashed line represents $g_{\rm ex}(M/2-1)$.
}
\end{center}
\end{figure}
In the limit of $d=0$, electrons in different layers
cannot occupy the same position
because of the strong interlayer Coulomb repulsion.
The strong interlayer repulsion creates electron-hole
pairs, called excitons, whose
degrees of freedom are represented by interlayer dipoles or
pseudo-spins at total filling $\nu=1$.
The Coulomb exchange interaction aligns the interlayer
dipoles (the pseudo-spins), leading to the macroscopic
coherence of the excitons, and Halperin's $\Psi_{1,1,1}$
state is realized.
To confirm the coherence of the excitons,
we calculate the exciton correlation defined by
\begin{equation}
g_{\rm ex}(n) \equiv
\frac{2M-1}{N_1N_2}
\langle \Psi | c^\dagger_{1,n} c_{2,n}
c^\dagger_{2,0} c_{1,0} |\Psi \rangle,
\end{equation}
where $|\Psi\rangle$ is the ground state and $c^\dagger_{1,n}$
($c^\dagger_{2,n}$) is the creation operator of
the electrons in the $n$th one-particle state defined by
\begin{equation}
\phi_n({\mib r})=\frac{1}{\sqrt{L_y\pi^{1/2}\ell}}
\exp\left\{i k_y y - \frac{(x-X_n)^2}{2\ell^2}\right\}
\end{equation}
in the layer-1 (layer-2).
$X_n=nL_x/M=k_y\ell^2$ is the $x$-component of the guiding
center coordinates and
$L_x$ is the length of the unit cell in the $x$ direction.
$M$ is the number of one-particle states in each layer.
$N_1$ and $N_2$ are the number of
electrons in the layer-1 and layer-2, respectively, and
we impose the symmetric condition of $N_1=N_2$.
Since $g_{\rm ex}(n)$ represents the correlation between the
two excitons at
$X=0$ and $X=X_n$, $\lim_{n\rightarrow \infty} g_{\rm ex}(n)\ne 0$ indicates
the existence of macroscopic coherence of excitons.
As shown in Fig.~\ref{bi_cor}, $g_{\rm ex}(n)$ tends to 1 as $d \to 0$,
which confirms the macroscopic coherence of excitons at $d=0$.
Indeed, Halperin's $\Psi_{1,1,1}$ state has the macroscopic
coherence of excitons, with $g_{\rm ex}(n)=1$ independently of $n$.
In this figure we have shown $g_{\rm ex}(M/2)$ instead of
$\lim_{n\rightarrow \infty} g_{\rm ex}(n)$,
because the largest distance between the two excitons is
$L_x/2$ in the finite unit cell of $L_x\times L_y$ under the
periodic boundary conditions.
In order to check the size effect, we also plot $g_{\rm ex}(M/2-1)$
with the dashed line. Since the difference between
$g_{\rm ex}(M/2)$ and $g_{\rm ex}(M/2-1)$ is small, we expect that
$g_{\rm ex}(M/2)$ represents well the macroscopic coherence
in the limit $N\rightarrow \infty$.
With increasing $d/\ell$, the excitonic correlation decreases monotonically
and finally falls to a negligible value at $d/\ell \sim 1.6$.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig13.eps}
\caption{\label{bi_sgap}
The pseudo-spin excitation gap $\Delta_{ps}$
of bilayer quantum Hall systems at the total filling
factor $\nu=1$. The dashed lines are guides for the eye.
}
\end{center}
\end{figure}
The presence of the macroscopic coherence of excitons shown in
Fig.~\ref{bi_cor}
means the existence of ferromagnetic order of the
interlayer dipoles (the pseudo-spins). Since the interaction between
the interlayer dipoles has SU(2) symmetry at $d=0$,
collective gapless excitations called pseudo-spin waves
are expected. Even in the case of finite layer distance $d$,
the Hamiltonian has a continuous XY symmetry, and
gapless pseudo-spin wave excitations are still expected.
This is confirmed by the size dependence of the pseudo-spin
excitation gap shown in Fig.~\ref{bi_sgap}, where
the pseudo-spin excitation gap $\Delta_{ps}=E(N_1+1,N_2-1,M)-E(N_1,N_2,M)$
of the finite system decreases as a function of $1/L_x$.
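
The extrapolation underlying this statement is a simple linear fit in $1/L_x$; a minimal sketch with purely illustrative (hypothetical) numbers:
\begin{verbatim}
import numpy as np

# hypothetical finite-size data, for illustration only
inv_Lx = np.array([0.10, 0.08, 0.06, 0.05])      # 1/Lx (units of 1/ell)
gap_ps = np.array([0.020, 0.016, 0.012, 0.010])  # e^2/(eps*ell)

slope, intercept = np.polyfit(inv_Lx, gap_ps, 1)
print(intercept)   # ~0: the pseudo-spin excitation is gapless
\end{verbatim}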
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig14.eps}
\caption{\label{bi_cgap}
The charge excitation gap $\Delta_{c}$
of bilayer quantum Hall systems at the total filling
factor $\nu=1$. The dashed lines are guides for the eye.
}
\end{center}
\end{figure}
In contrast to the pseudo-spin excitation gap $\Delta_{ps}$,
the charge excitation gap $\Delta_c$ defined by
$\Delta_{c}=E(N_1,N_2,M-1)+E(N_1,N_2,M+1)-2E(N_1,N_2,M)$
seems to be finite even in the limit of $L_x \rightarrow \infty$
for small $d$ as shown in Fig.~\ref{bi_cgap}.
The charge excitation breaks at least one electron-hole pair
and thus requires an energy of order $V_0^{(1,2)}$, which is the
pseudopotential between electrons in different layers
whose relative angular momentum is $0$.
This pseudopotential decreases with the increase in the layer
distance $d$, and thus the charge gap decreases with
the increase in $d$.
We next examine the lowest excitation gap for a fixed system size.
Figure~\ref{bi_gap} shows the result for $N_e=24$.
The aspect ratio $L_x/L_y=1.8$ is chosen from the minimum of
the ground-state energy with respect to $L_x/L_y$ around
$d/\ell=1.8$, where a minimum structure appears in the ground-state
energy.
We can see a clear excitation gap of the finite system for $d/\ell<1.2$,
where the excitonic ground state
is expected both theoretically and experimentally
\cite{ZTNC,CFH1,CFH2,HFT,PCF,Mac,Kim,SH,NY,SRM}.
The excitation gap rapidly decreases with increasing $d/\ell$
from $1.2$, and it becomes very small for $d/\ell>1.6$.
This behavior is consistent with experiments.\cite{Wies}
Although the excitation gap for $d/\ell> 1.7$ is not presented
in the figure for $N_e=24$, because of the difficulty of calculating
excited states in large systems, we do not find any sign
of a level crossing in
the ground state up to $d/\ell\sim 4$, where the two layers are
almost independent.
These results suggest that the excitonic state at small $d/\ell$
continuously crosses over to the compressible state at large $d/\ell$,
which is consistent with the behavior of the exciton correlation
$g_{\rm ex}(M/2)$ in Fig.~\ref{bi_cor}, which
continuously approaches zero around $d/\ell \sim 1.6$.
In the present calculation it is difficult to conclude whether
the gap closes at finite $d/\ell\sim 1.6$ in the thermodynamic limit
or the excitonic state survives with an exponentially small but finite gap
even for large $d/\ell>1.6$.
We have calculated the excitation gap
for different system sizes and aspect ratios,
and obtained similar results, as shown in the inset of Fig.~\ref{bi_gap}.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig15.eps}
\caption{\label{bi_gap}
The lowest excitation gap $\Delta$
of bilayer quantum Hall systems at the total filling
factor $\nu=1$. $N_e=24$ and $L_x/L_y=1.6$.
The inset shows the result for $N_e=18$ and $L_x/L_y=1.0$.
}
\end{center}
\end{figure}
Concerning the first excited state, however, Fig.~\ref{bi_gap}
shows a level crossing at $d/\ell\sim 1.2$,
where we can see a sudden decrease in the excitation gap.
We expect that the lowest excitation at $d/\ell < 1.2$ is
the pseudo-spin excitation, whose energy gap decreases with the increase
in the system size and tends to zero
in the limit of a large system.
On the other hand, the lowest excitation at $d/\ell > 1.2$ shown
in Fig.~\ref{bi_gap}
is expected to be the excitation to the roton minimum,
which corresponds to the bound state of
quasiparticle and quasihole excitations, whose energy
increases with the decrease in $d/\ell$.
This change in the character of the low-energy excitations
at $d/\ell \sim 1.2$ is confirmed
by a clear change in the correlation functions of the excited
state, as shown later. We note that the position of the
level crossing in the first excited state itself depends on
the system size, because the pseudo-spin excitation gap
decreases with increasing system size. However,
the change in the character of the low-energy excitations
of finite systems is expected to remain even in the limit
of large systems, since the spectral weight of the pseudo-spin
waves transfers to high energy with the increase in $d$.
We next calculate pair correlation functions of the electrons to
see the detailed evolution of the ground-state wave function.
The interlayer pair correlation functions are defined by
\begin{eqnarray}
g_{12}({\mib r}) &\equiv& \frac{L_x L_y}{N_1N_2}\langle
\Psi | \sum_{n\ m} \delta({\mib r}+{\mib R}_{1,n}
-{\mib R}_{2,m})|\Psi
\rangle,
\end{eqnarray}
where $|\Psi\rangle$ is the ground state.
We present $\Delta g_{12}(r)$ in Fig.~\ref{bi_inter}, which is defined by
\begin{eqnarray}
\Delta g_{12}(r) &=& \int (g_{12}({\mib r'})-1)\delta(|{\mib r'}|-r)
\ {\rm d}{\mib r'},
\end{eqnarray}
where ${\mib r'}$ is the two-dimensional position vector in each layer.
$\Delta g_{12}(r)$ represents the difference from the uniform correlation
of independent electrons.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig16.eps}
\caption{\label{bi_inter}
The inter-layer pair correlation function of electrons
in the ground state of bilayer quantum Hall systems at $\nu=1$.
$N_e=24$ and $L_y/L_x=1.6$.
}
\end{center}
\end{figure}
At $d/\ell=0$ we find a clear negative $\Delta g_{12}(r)$ around
$r/\ell =1$, which
is a characteristic feature of the excitonic state formed by
the binding of electrons and holes between the two layers.
The binding of one hole means the exclusion of one electron
caused by the strong interlayer Coulomb repulsion.
The increase in the layer separation weakens the
Coulomb repulsion between the two layers and
reduces $|\Delta g_{12}(r)|$ around $r/\ell =1$.
The decrease in the interlayer correlation $|\Delta g_{12}(r)|$
opens space to enlarge the correlation hole in the same layer
and to reduce the Coulomb energy between the electrons within the layer.
This is shown in Fig.~\ref{bi_intra}, which shows the pair correlation
functions of the electrons in the same layer defined by
\begin{eqnarray}
g_{11}({\mib r}) &\equiv& \frac{L_x L_y}{N_1(N_1-1)}\langle
\Psi | \sum_{n\ne m} \delta({\mib r}+{\mib R}_{1,n}-{\mib R}_{1,m})|\Psi
\rangle,
\end{eqnarray}
\begin{eqnarray}
\Delta g_{11}(r) &=& \int
(g_{11}({\mib r'})-1)\delta(|{\mib r'}|-r) \ {\rm d}{\mib r'} .
\end{eqnarray}
The obtained results indeed show
that the correlation hole in the same layer
around $r/\ell \sim 1$ is enhanced with the increase in
$d/\ell$, in contrast to the shrinking of the
interlayer correlation hole in Fig.~\ref{bi_inter}.
The correlation hole in the same layer monotonically increases in size
up to $d/\ell=1.8$, and then it becomes almost constant.
The correlation function $g_{11}(r)$ for $d/\ell>1.8$
is almost the same as that of the $\nu=1/2$ monolayer quantum Hall
system realized in the limit $d/\ell=\infty$.
This is consistent with the almost vanishing
excitation gap and exciton correlation at $d/\ell>1.8$
shown in Figs.~\ref{bi_gap} and \ref{bi_cor}.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig17.eps}
\caption{\label{bi_intra}
The intra-layer pair correlation function of electrons
in the ground state of bilayer quantum Hall systems at $\nu=1$.
$N_e=24$ and $L_y/L_x=1.6$.
}
\end{center}
\end{figure}
Figure \ref{bi_intra} also shows that the growth of
the correlation hole around $r/\ell\sim 1.5$ is
accompanied by an increase in
$\Delta g_{11}(r)$ around $r/\ell\sim 4$.
The distance $4\ell$ is comparable to the approximate
mean distance between the electrons, $3.54\ell$,
estimated from $(L_xL_y/N_1)^{1/2}= (2\pi M/N_1)^{1/2} \ell=\sqrt{4\pi}\,\ell$
for $N_1=M/2$.
This means that the electrons in the same layer tend to
keep a distance of about $4\ell$ from the other electrons,
with the large correlation hole around $r/\ell\sim 1.5$,
for $d / \ell \stackrel{>}{_\sim} 1$.
This is consistent with the formation of composite fermions
at $d=\infty$,
where two magnetic flux quanta are attached to each electron,
which is equivalent to enhancing the correlation hole
around each electron in the same layer so as to keep distance
from the other electrons.
The large correlation hole in $g_{11}(r)$
attracts electrons in the other layer as shown in Fig.~\ref{bi_inter},
where we find a clear peak in $\Delta g_{12}(r)$ at $r/\ell \sim 3$.
This peak at $r/\ell \sim 3$ is comparable to the neighboring
correlation hole at $r/\ell \sim 1$, which suggests that the
electrons excluded from the origin by strong interlayer Coulomb
repulsion are trapped by the correlation hole in $g_{11}(r)$
within $r/\ell\sim 4$.
Since the intra-layer correlation $g_{11}$ for $d/\ell>1.6$
is almost the same as that of the composite-fermion liquid state,
$\Delta g_{12}(r)$ represents the correlation of composite
fermions between the layers.
The nearly equal amplitudes of $\Delta g_{12}(r)$ at
$r/\ell \sim 1$ and $3$ for $d/\ell>1.6$ actually show that
the electrons in the other layer bind holes to
form composite fermions.
With decreasing $d/\ell$ from infinity, the
correlations of composite fermions in
different layers monotonically increase down to
$d/\ell\sim 1.2$, as shown by the enhancement of
$|\Delta g_{12}(r)|$ at $r/\ell \sim 1$ and $3$.
A further decrease in $d/\ell$, however, broadens
the peak at $r/\ell \sim 3$ in $\Delta g_{12}(r)$,
together with the shrinking of the correlation hole in $\Delta g_{11}(r)$,
and the peak at $r/\ell \sim 3$ in $\Delta g_{12}(r)$
finally disappears.
This change in the correlation function shows how the
composite-fermion liquid state evolves into the excitonic state:
the large correlation hole in the same layer, which is a
characteristic feature of the composite fermions,
is transferred to the other layer to form the excitonic state.
The correlation functions in Figs.~\ref{bi_inter} and \ref{bi_intra} are
continuously modified with the decrease in $d/\ell$ from
$\infty$ to $0$, which supports a continuous
transition from the compressible liquid state to the
excitonic state.
Figure~\ref{bi_inter} also shows that the peak in $\Delta g_{12}(r)$ at
$r/\ell \sim 3$, made by the binding of an electron to
the hole around the origin, gradually disappears with
decreasing $d/\ell$ from 1.2.
This means the gradual breakdown of the concept of
composite fermions.
\begin{figure}[t]
\begin{center}
\epsfxsize=80mm \epsffile{Fig18.eps}
\caption{\label{bi_exc}
The change in correlation function through the excitation
from the ground state to the first excited state.
$\nu=1$ and $N_e=18$ with $L_y/L_x=1.0$.
}
\end{center}
\end{figure}
The breakdown of the composite fermions around $d/\ell \sim 1.2$
affects the character of the lowest excitations,
which is clearly shown by the
level crossing of the excited state at $d/\ell\sim 1.2$.
The change in the character of excitation
is confirmed by the correlation functions in the
excited state. Figure \ref{bi_exc} shows the difference
in the pair correlation
functions $g_{ij}(r)$ between the ground state
and first excited state defined by
\begin{eqnarray}
\delta g_{ij}(r) &=& \int
(g_{ij}^{E}({\mib r'})-g_{ij}^{G}({\mib r'}))
\delta(|{\mib r'}|-r) \ {\rm d}{\mib r'} ,
\end{eqnarray}
where $g_{ij}^{G}({\mib r})$ and $g_{ij}^{E}({\mib r})$ are
the pair correlation functions
in the ground state and the first excited state, respectively.
The $\delta g_{ij}(r)$ in Fig.~\ref{bi_exc} show that there is a discontinuous
transition between $d/\ell=1.2$ and $1.3$, which supports
the level crossing in the first excited state.
Below $d/\ell\sim 1.2$, $\delta g(r)$ has large amplitude
at $r/\ell\sim 2$ and $6$, which shows that electrons are
transferred between the inside and the outside of $r/\ell\sim 4$.
The small singularity at $r/\ell \sim 5.5$ is due to finite-size
effects of the square unit cell.
Above $d/\ell\sim 1.2$, only $\delta g_{12}(r)$ has
large amplitude at $r/\ell\sim 1$ and 2, which shows that
the electrons within $r/\ell\sim 4$ in different layers
are responsible for the lowest excitation.
This result suggests that the low energy excitations are
made by composite fermions in different layers for $d/\ell > 1.2$.
\section{Conclusions}
In this paper we have reviewed the ground state and
low energy excitations of the quantum Hall systems studied
by the DMRG method.
We have applied the DMRG method to two-dimensional quantum
systems in a magnetic field by using a mapping onto an
effective one-dimensional lattice model.
Since the Coulomb interaction between the electrons is
long-range, all the electrons in the system interact
with each other. This fact might be expected to severely reduce
the accuracy of the DMRG calculations. However, in a
magnetic field, the one-particle wave functions are
localized within the
magnetic length $\ell$, and the overlap of the one-particle
wave functions decreases exponentially
with increasing distance between the two electrons.
This means that the quantum fluctuations are restricted to
short range and the effective Hamiltonian is well suited for the
DMRG scheme. This is the reason why a relatively small
number of kept states is sufficient for quantum Hall systems
compared with usual two-dimensional systems.
In quantum Hall systems, the filling $\nu$ of the Landau levels
is determined by $\nu=N_e/N_\phi$, where $N_\phi$ is the number
of flux quanta, related to the magnetic field
by $N_\phi=(e/h)L_xL_y B$.
Thus, many types of ground states
are realized simply by changing the uniform magnetic field $B$.
Since the ground state of free electrons in a
partially filled Landau level has a macroscopic degeneracy,
the Coulomb interaction drastically changes the
wave function. The character of the ground state
is sensitive to the Landau level index $N$ and the filling $\nu$,
which modify the effective interaction and the mean distance
between the electrons.
This is the source of the many interesting low-temperature
properties of quantum Hall systems and of their inherent difficulties.
\section*{Acknowledgments}
The author would like to thank Prof. Yoshioka Daijiro
and Dr. Kentaro Nomura for valuable discussions.
This work is supported by
Grant-in-Aid No. 18684012 from MEXT, Japan.
\section{\label{sec1}Introduction}
Supersymmetric localization \cite{Banerjee:2009af,Dabholkar:2010uh} has led to the possibility of evaluating exactly the $\text{AdS}_2$ path integral that computes the quantum entropy \cite{Sen:2008vm} of BPS black holes. This technique has been particularly successful for computing perturbative quantum corrections to the Bekenstein-Hawking entropy in toroidal compactifications, where an almost exact matching with the microscopic theory was obtained \cite{Dabholkar:2011ec}.
The goal of this work is to address instead non-perturbative corrections to black hole entropy related to polar states in the microscopic theory. We want to understand the origin of these effects, perhaps as new saddle points of the path integral, and if so how to compute quantum corrections around each saddle. In toroidal compactifications, such non-perturbative effects are not present, which in a way is what explains the simplicity of the microscopic formulas. Nevertheless, for $\mathcal{N}=4$ and $\mathcal{N}=2$ compactifications, these non-perturbative effects are crucial to understand black hole entropy in the limit of very large central charge, which is where the four dimensional semiclassical description holds. Though exponentially subleading, these non-perturbative effects can become relevant when the number of polar states grows exponentially, which is the case at large central charge.
Recent attempts to compute exactly the quantum entropy rely on the four dimensional effective action, which includes instanton effects in the prepotential of supergravity. In order to address the problem of non-perturbative corrections within the context of supersymmetric localization, we need to first understand the UV dynamics that are responsible for those effects. With this in mind, the following question arises: can we rely on effective field theory such as supergravity, or do we need the full string theory?
A further issue is of concern. Since localization is an off-shell computation, and as such does not depend on the values of the coupling constants, it is valid at both strong and weak coupling. Translating to supersymmetric black holes and their quantum entropy, this means that the localization computation should hold for any value of the charges. The reason is that, in string theory, the scalar fields, which play the role of the coupling constants, become functions solely of the charges at the black hole attractor. However, in view of the AdS/CFT correspondence and the fact that we are working in the microcanonical ensemble, this raises many issues related to the validity of effective field theory. For example, the characteristic length scale of the geometry is a function of the charges, and so by scaling these it is possible that a particular dimensional point of view is more appropriate than another. Conversely, we may ask: which microscopic Lagrangian are we localizing?
To better understand these issues, we take a pedestrian approach. We start by recalling the original localization computation of \cite{Dabholkar:2010uh} in four dimensional supergravity and discuss its validity using effective field theory. Then in section \S\ref{sec 1.1} we consider the five dimensional point of view. We argue this is more appropriate to describe the physics of the Rademacher expansion. In section \S\ref{sec 1.2}, we discuss the connection between non-perturbative effects in the black hole entropy and the counting of polar states. Along the way, we present the main lines of our solution, which, in essence, is a reformulation of the OSV formula. We require this formula to be compatible with the Rademacher expansion and to reproduce at the same time the counting of polar states.
\subsection{The four dimensional point of view and the OSV formula}
In \cite{Dabholkar:2010uh}, it is shown that the path integral of $\mathcal{N}=2$ supergravity \footnote{The Lagrangian is based on the off-shell superconformal formalism and it is thus related to the Wilsonian rather than the 1PI effective action of string theory.} on $AdS_2\times S^2$, which computes the entropy of a BPS black hole, reduces to a finite dimensional integral by means of supersymmetric localization. The answer for the black hole degeneracy $d(q,p)$ is schematically of the form
\begin{equation}\label{OSV}
d(q,p)\sim \int d\phi\, e^{-\pi q\phi+4\pi\text{Im}F(\phi+ip)},
\end{equation}
where $F(X)$ is the four dimensional holomorphic prepotential that encodes the couplings of the vectors to the Weyl multiplet, and $q,p$ are respectively the electric and magnetic charges; the integration variables $\phi$ correspond to normalizable modes that are left unfixed by localization.
The result (\ref{OSV}) is a reincarnation of the conjectured OSV formula \cite{Ooguri:2004zv}, which relates the black hole quantum entropy to the topological string partition function. In the original formulation \cite{Ooguri:2004zv}, the reason this happens is that the supergravity prepotential $F(X)$ in (\ref{OSV}) is directly related to the perturbative free energy $F(g_{top},t)$ of the topological string on the Calabi-Yau compactification manifold. To be more precise, the topological string computes four dimensional higher derivative terms, also called F-type terms, of the form $F_{g}(t)R_{-}^2T_{-}^{2g-2}+\text{h.c.}$ ($g>0$), with $R_{-}$ the anti-self-dual curvature two-form and $T_{-}$ the anti-self-dual graviphoton field strength \cite{Gopakumar:1998ii,Gopakumar:1998jq}.
The $F_g(t)$ \footnote{In the present discussion, we keep the free energies $F_g(t)$ holomorphic, which is appropriate for the Wilsonian point of view of the entropy function and hence also of the localization computation. We will comment on possible non-holomorphic corrections later in section \S \ref{sec effective action}. } are defined in a perturbative expansion in powers of the topological string coupling constant $g_{\text{top}}$, that is,
\begin{equation}\label{top free energy}
F(g_{top},t)=\sum_{g=0}^{\infty}g_{\text{top}}^{2g-2}F_{g}(t).
\end{equation}
For $t\gg 1$, the tree level free energy can be approximated by $F_0(t)\simeq D_{abc}t^at^bt^c$, contributing $F_0(t)/g_{top}^2$ to (\ref{top free energy}),
with $D_{abc}$ the intersection matrix of the Calabi-Yau threefold, while the function $F_1(t)$ is the one-loop correction, which approximates to $F_1(t)\simeq \frac{c_{2a}t^a}{24}+\mathcal{O}(e^{-t})$. The corrections of order $e^{-t}$ are due to worldsheet instantons, while the parameter $c_{2a}$ can be identified with the second Chern class $c_2(X)$ (of the tangent bundle) of the Calabi-Yau. The map between the topological string variables and the supergravity fields is the following: the complexified K\"{a}hler parameter is $t^a=X^a/X^0$ and $g_{top}=1/X^0$, where $X^0$ is the dilaton that sits in the supergravity multiplet and $X^a$ are the vectormultiplet complex scalar fields \footnote{We are omitting minor details regarding the map between the complex scalar fields $X^a$ and the topological string variables. These details will become clear later on.}.
For a general Calabi-Yau, the function (\ref{top free energy}) is known only as an asymptotic expansion in $g_{top}$. If one tries to use localization at the level of the four dimensional effective Lagrangian as in \cite{Dabholkar:2010uh}, we run into serious problems: not only do we have to constrain the scalar $X^0\sim 1/g_{top}$ to very large values, but we also need to include an arbitrary number of higher derivative corrections. The best we can do is to compute order by order in the inverse of the charges using perturbative methods.
Nonetheless, in compactifications that preserve more supersymmetry, like $\mathcal{N}=8$ and $\mathcal{N}=4$, the prepotential (\ref{top free energy}) is one-loop exact, that is, all $F_{g>1}(t)$ vanish. In this case, the tree level free energy is given exactly by $F_0(t)=D_{abc}t^at^bt^c$ and the one-loop contribution by $F_1(t)=\ln g(t)$, with $g(t)$ a worldsheet instanton partition function. Since the prepotential is now one-loop exact, one might expect to be able to use supersymmetric localization at the level of the four dimensional effective action. As a matter of fact, previous work shows that in the $\mathcal{N}=8$ theory it is possible to reproduce exactly all the perturbative corrections to the area formula \cite{Dabholkar:2011ec}, including non-perturbative corrections, related to orbifold geometries, that depend on intricate Kloosterman sums \cite{Dabholkar:2014ema}. In the $\mathcal{N}=4$ case, however, the situation is not so satisfactory, because the localization integral (\ref{OSV}) is not able to reproduce all the features predicted by the microscopic theory. In particular, it fails to reproduce the measure that is known from microscopics, even after taking into account the one-loop determinants \cite{Murthy:2015yfa,Gupta:2015gga}. In a way, this is partially justified by the microscopic studies \cite{Shih:2005he,Gomes:2015xcf,Murthy:2015zzy}. These studies predict a measure $\mathcal{M}(\phi,p)$ for (\ref{OSV}) that depends strongly on the worldsheet instantons. The precise dependence is of the form
\begin{equation}\label{worldsheet corrections}
\mathcal{M}(\phi,p)\sim \pi p^2-\frac{\partial}{\partial \text{Im}\tau} \ln |g(\tau)|^2,\;\;\tau=\frac{\phi^1+ip^1}{\phi^0},
\end{equation}
where $g(\tau)$ can be identified with a worldsheet instanton partition function \cite{Harvey:1996ir} that appears in the topological string free energy, and $p^2$ is a quadratic magnetic charge invariant. Since the instanton corrections carry non-trivial information about the Calabi-Yau manifold, related to Gromov-Witten invariants, it would be puzzling if the four dimensional localization computation, including the one-loop determinants, could explain those corrections. In general the determinants are given in terms of equivariant indices of the four dimensional background, with no connection to the Calabi-Yau invariants. Instead, one needs to understand the dynamics that are responsible for the quantum corrections that one observes at the level of the microscopic measure.
The near-horizon geometry can help clarify some of these issues by drawing a clear contrast between four and five dimensional physics. Let us consider the attractor geometry of the $\D0-\D4$ black hole in IIA and uplift to M-theory. The near-horizon geometry \cite{Beasley:2006us} is spherically symmetric and contains a local $AdS_3$ factor, which consists of the M-theory circle fibered over $AdS_2$, that is,
\begin{equation}\label{AdS3}
ds^2=\vartheta(p)\left[ds^2_{AdS_2}+\frac{1}{(\phi^0)^2}(du+A)^2+ds^2_{S^2}\right]+ds^2_{CY}.
\end{equation}
Here $u$ parametrizes the circle, $A$ is the Kaluza-Klein gauge field, $\vartheta(p)$ is the physical size, which can be taken to be large, and $ds^2_{CY}$ is the Calabi-Yau metric. Both $AdS_2$ and $S^2$ factors inside the brackets have unit size in string units. When reducing to four dimensional IIA string theory, the radius of the circle becomes the scalar $1/X^0$ in (\ref{top free energy}). Given the map between the supergravity variables and the topological string, finite radius means finite topological string coupling constant. So for finite radius, the Kaluza-Klein modes that one obtains after compactification on the circle have masses comparable to the inverse $AdS_2$ size, and thus the solution is best described in five dimensional supergravity. In contrast to four dimensions, the part of the five dimensional $\mathcal{N}=2$ Lagrangian that contains the couplings of the vectors to the Weyl multiplet is completely determined by the parameters $D_{abc}$ and $c_{2a}$, which appear in the topological string free energy. Therefore, at the level of the Lagrangian that one obtains after dimensional reduction on the Calabi-Yau, there seems to be no information about the worldsheet instantons.
\subsection{The five dimensional point of view and the Rademacher expansion}\label{sec 1.1}
The four dimensional problem just described holds in the regime for which the radius of the circle is parametrically smaller than the size of $AdS_2$, or equivalently in the regime of weak topological string coupling constant. However, if supersymmetric localization should hold for any value of the charges \footnote{In the black hole problem, charges play the role of the coupling constants in quantum field theory. When computing perturbative corrections to black hole entropy, we expand in inverse powers of the charges.}, the regime of $g_{top}\sim 1$ \footnote{The scalar fields are functions of the charges in the attractor geometry and so is $g_{top}$ too.} should be equally valid, but this corresponds to taking the five dimensional point of view. In the following, we shall argue that the microscopic Rademacher expansion is more appropriate for describing the five dimensional physics, and we build our solution on this idea.
As we explain later in more detail, localization at the level of the five dimensional theory, initiated in \cite{Gomes:2015xcf,Gomes:2013cca}, gives instead the finite dimensional integral
\begin{equation}\label{5d localization int}
d(q,p)=\int \prod_{a=0}^{n_V}d\phi^a\ \frac{\vartheta(p)}{\phi^0} e^{-\pi q_b\phi^b+4\pi\text{Im}F_{cl}(\phi+ip)},
\end{equation}
with $F_{cl}(X)$ the "classical" prepotential that we define as
\begin{equation}\label{classical prepotential}
F_{cl}(X)=\frac{1}{6g_{\text{top}}^2}D_{abc}t^at^bt^c+\frac{c_{2a}t^a}{24}.
\end{equation}
This is the one-loop approximation of the prepotential (\ref{top free energy}) without the worldsheet instanton corrections. It is thus clear that the dependence on the worldsheet corrections (\ref{worldsheet corrections}) must arise from other effects. In contrast, the localization integral captures only perturbative quantum corrections around the attractor background (\ref{AdS3}), as an expansion in the area.
We can check that the integral (\ref{5d localization int}) matches the expectations from the microscopic theory. Performing the various integrals, it is found that the final result matches the microscopic degeneracy, a Bessel function valid precisely for large $g_{top}$ \cite{Gomes:2015xcf}, including the measure factor in $\mathcal{N}=8$ as well as in all $\mathcal{N}=4$ CHL examples in both $K3\times T^2$ and $T^4\times T^2$ compactifications. For this reason, we consider the five dimensional point of view to be a step closer to understanding the quantum measure and the role of the worldsheet instantons.
Besides the leading Bessel, the microscopic $\mathcal{N}=4$ answer contains a series of subleading Bessel contributions. Schematically, they arise after expanding the functions $g(\tau)$ (\ref{worldsheet corrections}) as instanton sums and then performing appropriate integrals \cite{Gomes:2015xcf,Murthy:2015zzy}. As a result, non-perturbative corrections to black hole entropy are generated. Remarkably, this series of Bessel contributions can be matched, to a certain extent, to the polar state contributions in a mock Jacobi Rademacher expansion \cite{Murthy:2015zzy}.
The main goal of this work is to clarify the origin of the subleading Bessel functions, in a way consistent with the quantum entropy functional. Even though they are non-perturbative for large $g_{\text{top}}$, they can become relevant in the opposite regime of $g_{\text{top}}\ll 1$, which occurs for large central charge, because the number of Bessel functions can increase exponentially. According to the supergravity/topological string map, that regime corresponds to the four dimensional point of view, which is why it is crucial to understand the origin of the subleading Bessels. Preliminary steps in this direction were already taken in \cite{Gomes:2015xcf}, where it was suggested that the subleading Bessel contributions could arise from additional configurations of the full string theory path integral. To better understand the claim, let us look in more detail at the Rademacher expansion \cite{Rademacher:1964ra}, which is an exact formula for the Fourier coefficients of modular forms, i.e., the black hole microscopic degeneracy \footnote{To be more precise one needs to consider also Mock modular forms \cite{Dabholkar:2012nd}. The Rademacher expansion suffers some modifications but these are not relevant for the present discussion. }. Schematically one has
\begin{equation}\label{Rademacher intro}
d(\Delta)=\sum_{\Delta_{\text{polar}}>0}^{\text{Max}}\Omega(\Delta_{\text{polar}})\sum_{c=1}^{\infty}\frac{1}{c}Kl(\Delta,\Delta_{\text{polar}},c)\int_{\epsilon-i\infty}^{\epsilon+i\infty}\frac{dt}{t^{\nu+1}}\exp{\left(\frac{\Delta}{4t c}+\frac{\Delta_{\text{polar}}t}{c}\right)},
\end{equation}
where $d(\Delta)$ is the black hole degeneracy, which is a function of the charge combination $\Delta(q,p)$, $\Omega(\Delta_{\text{polar}})$ is the degeneracy associated with the polar terms, and $Kl(\Delta,\Delta_{\text{polar}},c)$ are Kloosterman sums; each of the integrals is a modified Bessel function of the first kind. The microscopic $\mathcal{N}=4$ answer derived in \cite{Murthy:2015zzy} has precisely this form after neglecting the $c>1$ terms. Also, in this work we will only be considering the terms with $c=1$, usually referred to as polar Bessels. For $\Delta\gg 1$ with $\Delta_{\text{polar}}$ fixed, the Bessel functions have saddle points at
\begin{equation}\label{saddles intro}
t\sim \sqrt{\frac{\Delta}{\Delta_{\text{polar}}}}\gg 1,
\end{equation}
and growth
\begin{equation}
\exp{\left(\sqrt{\Delta\Delta_{\text{polar}}}\right)}\gg 1.
\end{equation}
The leading contribution in (\ref{Rademacher intro}) therefore comes from the term with maximal $\Delta_{\text{polar}}$, with $\Delta_{\text{polar}}$ the polarity. In terms of the bulk physics, this is precisely the leading Bessel function that one obtains by evaluating the localization integral (\ref{5d localization int}), with $\text{Max}(\Delta_{\text{polar}})$ given by the charge combination $D_{abc}p^ap^bp^c+c_{2a}p^a$. The terms with $\Delta_{\text{polar}}<\text{Max}$ generate exponentially suppressed corrections and hence are non-perturbative. Furthermore, the value of $t$ can be identified with the topological string coupling constant $1/\phi^0$, and so we see that the saddles (\ref{saddles intro}) lie at large values of $1/\phi^0$, where the five dimensional point of view makes sense. In this regime of charges it is thus pertinent to ask what the non-perturbative saddles correspond to from the five dimensional point of view. This is puzzling because, given the localization computation (\ref{5d localization int}), there seems to be no room for any additional contribution to the path integral. It is possible, though, that these saddles arise from other configurations in the full M-theory path integral. From which configurations, and how do they contribute? These are some of the questions that we want to address.
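
As a consistency check, extremizing the exponent of a $c=1$ Bessel integrand reproduces both (\ref{saddles intro}) and the growth quoted above:
\begin{equation}
\frac{d}{dt}\left(\frac{\Delta}{4t}+\Delta_{\text{polar}}\,t\right)=0\;\;\Rightarrow\;\; t_*=\frac{1}{2}\sqrt{\frac{\Delta}{\Delta_{\text{polar}}}},\qquad \left.\left(\frac{\Delta}{4t}+\Delta_{\text{polar}}\,t\right)\right|_{t_*}=\sqrt{\Delta\,\Delta_{\text{polar}}}.
\end{equation}
After rescaling $t$, each contour integral in (\ref{Rademacher intro}) is proportional to a modified Bessel function $I_\nu(\sqrt{\Delta\,\Delta_{\text{polar}}}/c)$. The following minimal numerical sketch (Python with SciPy; the charge values are illustrative, not tied to any specific compactification) compares the exact Bessel with the saddle-point growth:
\begin{verbatim}
import numpy as np
from scipy.special import ive   # scaled Bessel: ive = iv * exp(-|z|)

nu, Delta, Delta_polar = 2.0, 1.0e4, 25.0   # illustrative values
z = np.sqrt(Delta*Delta_polar)              # Bessel argument at c = 1

log_exact = np.log(ive(nu, z)) + z          # log I_nu(z), overflow-safe
print(log_exact, z)  # agree up to the usual log-prefactor corrections
\end{verbatim}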
Our approach is mainly heuristic. In essence, we propose that the full path integral of M-theory receives the contribution of a new family of configurations, which are euclidean geometries of the type $AdS_2\times S^1\times S^2$. The $AdS_2\times S^1$ factor is a local $AdS_3$ geometry, such as (\ref{AdS3}), with euclidean time contractible, and guarantees that after reduction on the circle one obtains the four dimensional $AdS_2\times S^2$ attractor background. This also follows from the fact that the four dimensional localization equations fix the geometry to be exactly $AdS_2\times S^2$ \cite{Gupta:2012cy}. Therefore we see that from a five dimensional point of view there is not much room in the space of allowed geometries, except that the geometry must contain a circle fibered over the attractor background.
To be consistent with the path integral and the localization computation, we argue that the new configurations are exact solutions of different five dimensional Lagrangians that we see as effective descriptions. The difference between Lagrangians is a finite renormalization of the parameters that define the theory such as $c_{2a}$ (\ref{classical prepotential}), which is the gauge-gravitational Chern-Simons coupling in five dimensions. The supersymmetric localization computation at the level of the five dimensional theory reproduces the tail of polar Bessel functions observed in the microscopic answers, including the exact spectrum of $\Delta_{\text{polar}}$. That is, for each euclidean $AdS_2\times S^1\times S^2$ geometry we find a Bessel function with index and argument given as in (\ref{Rademacher intro}), thus explaining the origin of the non-perturbative effects from a five dimensional point of view.
The renormalization of $c_{2a}$ has an additional effect. It corrects the physical size of the $AdS_2\times S^1\times S^2$ geometry in such a way that it can become zero, thus imposing a physical condition on the number of geometries. We find that this bound is in perfect agreement with the stringy exclusion principle \cite{Maldacena:1998bw}. The bound on the number of possible geometries is essentially the reason why there is only a finite number of polar Bessel functions in the Rademacher expansion. In the semiclassical limit, that is, when the central charge is very large, the number of geometries close to maximal polarity is dense, which allows for a saddle point approximation. The result of this can be identified with the dilute gas approximation of the $AdS_3$ path integral as in \cite{Gaiotto:2006ns}, and the non-perturbative corrections around that saddle correspond to excitations of the fields dual to the chiral primary states.
It is also instructive to compare the above proposal with the effective field theory computation on $\mathbb{R}^4\times S^1$, which is the setup considered in \cite{Gopakumar:1998ii,Gopakumar:1998jq} and revisited in \cite{Dedushenko:2014nya}, for deriving the Gopakumar-Vafa formula. They consider a one-loop computation for the Kaluza-Klein modes of vectors and hypermultiplets on the circle $S^1$, in the background of a self-dual graviphoton field. The result of this computation is the four dimensional higher derivative terms proportional to the topological string free energies $F_g(t)$ (\ref{top free energy}). At the on-shell level we do not expect the $\mathbb{R}^4$ and $AdS_2\times S^2$ computations to differ much when the curvatures are very small. So by computing the instanton contributions in $F_g(t)$ to the on-shell entropy function, we can generate non-perturbative corrections to the entropy area formula \cite{Murthy:2015zzy}. Nevertheless, at the quantum level, placing the theory on the $AdS_2\times S^1\times S^2$ background (\ref{AdS3}) leads to problems related to the stringy exclusion principle \cite{Maldacena:1998bw}. The path integral of the reduced theory\footnote{Via AdS/CFT we keep track of all the Kaluza-Klein modes.} on $AdS_2\times S^1$\footnote{To be more precise, on thermal $AdS_3$, which, in a way, is a modular transformed version of $AdS_2\times S^1$.} contains fluctuations that are not unitary and hence are expected to backreact on the background solution \cite{Castro:2011ui}. The role of the exclusion principle is to artificially truncate the perturbative spectrum of fluctuations in agreement with the dual field theory. The exclusion principle is more relevant for small central charge, which makes it a non-perturbative effect. The way we circumvent this problem is by considering the full M-theory path integral, instead of using the effective five dimensional Lagrangian with the massive hypermultiplets that are needed to obtain the Gopakumar-Vafa formula.
In fact, we show that in the limit of charges for which the circle becomes parametrically smaller than the size of $AdS_2$, while keeping the curvature small, we recover the perturbative partition function, in an expansion in the charges, as determined by the four dimensional effective action. This regime of charges is obtained by scaling $\text{Max}(\Delta_{\text{polar}})$ faster than $\Delta$ such that we have $1/\phi^0\ll 1$ at the saddle point. In this regime, we shall recover the Gopakumar-Vafa corrections to black hole entropy. We explain, in addition, how the on-shell logarithmic corrections computed in \cite{Banerjee:2010qc,Banerjee:2011jp} arise from our formalism.
To put it more explicitly, we provide a non-perturbative formula for black hole entropy that correctly interpolates between the five and four dimensional physics. For small central charge $c$, one has only a small number of geometries and thus also a small number of Bessels. Schematically we have the gravitational answer
\begin{equation}
d_{\text{grav}}(\Delta)\simeq \int \frac{dt}{t^{\nu+1}}\exp{\left[\frac{\Delta}{4t}+c\,t\right]},\qquad c\sim 1
\end{equation}
which is a Bessel function, in agreement with the Rademacher expansion. For large central charge, on the other hand, the high density of geometries, and hence of Bessels, allows for a saddle point approximation. For the $\mathcal{N}=8$ and $\mathcal{N}=4$ models, we recover partially the OSV formula, that is, the holomorphic part, with corrections that we can systematically compute,
\begin{equation}
d_{\text{grav}}(\Delta)\sim \int d\phi\, e^{\Delta \phi}|Z_{\text{top}}(\phi,p)|^2+\ldots,\qquad c\propto p^3\gg 1
\end{equation}
Here $Z_{\text{top}}(\phi,p)$, which encodes the holomorphic free energies, can be seen as a canonical partition function for the non-perturbative effects. These effects can then be related to the Gromov-Witten worldsheet instantons.
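For orientation, the first of these schematic integrals is, up to normalization conventions, a standard contour representation of the modified Bessel function. Assuming the contour runs over $]\epsilon-i\infty,\epsilon+i\infty[$, as in the Rademacher expansion below, one has
\begin{equation}
\int_{\epsilon-i\infty}^{\epsilon+i\infty}\frac{dt}{t^{\nu+1}}\exp{\left[\frac{\Delta}{4t}+c\,t\right]}=2\pi i\left(\frac{4c}{\Delta}\right)^{\nu/2}I_{\nu}\left(\sqrt{c\,\Delta}\right)\sim e^{\sqrt{c\,\Delta}},\qquad \Delta\gg 1,
\end{equation}
so that for $c\sim 1$ a single Bessel function controls the growth of $d_{\text{grav}}(\Delta)$, while for $c\propto p^3\gg 1$ many such Bessels must be resummed, leading to the saddle point form above.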
\subsection{The polar state side of the story}\label{sec 1.2}
So far we have described the problem from the black hole point of view. However, there is another side to this story, which is not directly connected to black holes. This is the context of polar states and their relation to chiral primary states. We will study these states, which can be seen as $\D6-\aD6$ bound states, and we shall argue that the proposed $AdS_2\times S^1\times S^2$ geometries are the bulk duals of these microscopic configurations, after a modular transformation.
Polar states are characterized by having negative charge discriminant $-\Delta_{\text{polar}}<0$ in the R sector of the CFT. Since black holes necessarily have positive charge discriminant, polar states must correspond in the bulk to configurations that do not form single center black holes. However, the reason why the information about polar states enters the black hole counting formula (\ref{Rademacher intro}) is the modular properties of the CFT partition function. In fact, knowing the spectrum of polar states completely determines the spectrum of non-polar states, and so using modularity we can relate one to the other.
There is an extensive literature on the problem of determining the spectrum of polar states and then using modularity to study corrections to black hole entropy \cite{Gaiotto:2006ns,Gaiotto:2006wm,Denef:2007vg}. One of the most complete of such studies is the work of Denef and Moore. Succinctly, they perform an extensive study of polar multi-center black hole solutions in four dimensional $\mathcal{N}=2$ supergravity, with the goal of determining their contribution to the spacetime index. The main ingredients used are attractor flow trees \cite{Denef:2000nb} and the wall-crossing phenomenon. They find that at large topological string coupling, the main contribution to the index comes from two center black hole solutions, corresponding to a configuration of a single $\D6$ and a single anti-$\D6$ ($\aD6$) with worldvolume fluxes, located at different positions in $\mathbb{R}^3$. The fluxes considered contain, besides the smooth part, a singular component, which is represented by ideal sheaves. Their contribution to the index gives rise to a refined version of the OSV answer, which includes a measure of the sort described by (\ref{worldsheet corrections}).
The multi-center black hole solutions studied by Denef and Moore admit a decoupling limit after an uplift to five dimensions \cite{Denef:2007yt,deBoer:2008fk}. In particular, for the two center solution, the region near the core, where the $\D6$ and $\aD6$ sit, is zoomed in, and the decoupled geometry becomes asymptotically $AdS_3\times S^2$ with no black holes inside \cite{deBoer:2008fk}. It can be shown that these solutions carry Virasoro charges consistent with the values expected for $\Delta_{\text{polar}}$. Nevertheless, this result holds only for $\Delta_{\text{polar}}$ very close to its maximal value. We revisit this construction and establish a parallel with our solutions.
In all the works on black hole entropy through polar state counting, one uses the $\text{CFT}_2$ as an intermediate step. First, we build a partition function for the
polar states $Z_{\text{polar}}(\tau,z^i)$, with $\tau$ the complex structure of
the torus where the $\text{CFT}_2$
lives, and $z^i$ are chemical potentials. Then, we use modularity to
construct the black hole partition function \cite{deBoer:2006vg} as
\begin{equation}\label{Zpolar}
Z_{\text{BH}}\simeq Z_{\text{polar}}(-1/\tau,z^i/\tau).
\end{equation}
This is only an approximate equality because we are not including the contribution due to other elements in the $SL(2,\mathbb{Z})$ modular group. Nevertheless, for the purpose of studying non-perturbative effects due to the polar contributions, it is enough to consider only the modular transformation $\tau\rightarrow -1/\tau$.
From the $\text{CFT}$ point of view, $Z_{\text{polar}}$ naturally receives the contribution from only a finite number of states, those with negative discriminant. Nevertheless, from the bulk, one has to truncate artificially the perturbative
spectrum of Kaluza-Klein fields on $AdS_3$, which are the fields dual to chiral primary states (polar states in the R sector). The truncation is known as the stringy exclusion principle \cite{Maldacena:1998bw} and asserts that quantum gravity in $AdS_3$ is inherently non-perturbative.
The solution that we propose in this work is greatly inspired by the polar geometries studied in \cite{deBoer:2008fk}. The asymptotically $AdS_3\times S^2$ polar configurations have a complicated geometry, but for large central charge we can write the metric as a background global $AdS_3\times S^2$ geometry plus corrections proportional to the singular fluxes, which are of the order of the inverse of the central charge. A modular transformation makes the euclidean time circle contractible, giving rise to asymptotically black-hole-like $AdS_2\times S^1\times S^2$ geometries \cite{Maldacena:1998bw,Dijkgraaf:2000fq,Murthy:2009dq}. However, in view of the localization computation that we want to perform, these solutions are not satisfactory because they do not have an exact $AdS_2\times S^2$ factor \cite{Gupta:2012cy,Gomes:2013cca}. In a sense, which we would like to understand in more detail, our solutions are the non-perturbative analog, when taking into account the full string theory, of these modular transformed polar configurations. Conversely, we expect the fully quantum corrected polar configuration to have an exact global $AdS_3\times S^2$ factor.
To build intuition about the quantum corrected polar configurations we proceed as follows. The approach described in \cite{Denef:2007vg,deBoer:2008fk} considers first the backreaction of a two center $\D0-\D2-\D4-\D6$ configuration in four dimensions and then its uplift to M-theory. Equivalently, we can think of the same bound state as a $\D6-\aD6$ brane configuration with worldvolume fluxes $F$. It is well known that such a configuration uplifts in M-theory to a Taub-Nut/anti-Taub-Nut geometry with fluxes $G\sim \omega\wedge F$, with $G$ the field strength of the M-theory three-form and $\omega$ a normalizable two-form of the Taub-Nut geometry. Therefore, fluxes on the $\D6$ branes map to fluxes in M-theory. If the worldvolume fluxes are ideal sheaves \cite{Denef:2007vg} we can generate arbitrary $\D0-\D2$ charges while keeping fixed the $\D4$ charge. We argue that the presence of such fluxes on the Calabi-Yau can induce corrections on the five dimensional Lagrangian after reduction. Then, solving the full five dimensional equations of motion, we find instead the black hole $AdS_2\times S^1\times S^2$ geometry without corrections, but with the physical size $\vartheta$ (\ref{AdS3}) and the attractor values of the scalar fields depending explicitly on the fluxes. Localization on these backgrounds reproduces the finite tail of polar Bessel functions in the Rademacher expansion, thus setting the stage for a possible derivation of the solutions that we propose. The presence of singular M-theory fluxes can be understood as quantum fluctuations of the K\"{a}hler class of the Calabi-Yau, which allows us to make a connection with the quantum foam picture of topological strings studied in \cite{Iqbal:2003ds}.
To guide the construction of our solution, we will revisit the counting of chiral primary states on $AdS_3$ and its relation to $Z_{\text{polar}}$ (\ref{Zpolar}) following \cite{Gaiotto:2006ns,Gaiotto:2006wm}. Since our goal is to interpret the quantum black hole entropy as a partition function of M-theory, we will want to reproduce the counting of chiral primaries purely in terms of the eleven dimensional M-theory fields, and this will lead us inevitably to the polar $\D6-\aD6$ configurations with singular fluxes. The counting consists essentially of building multi-particle states on top of the vacuum $AdS_3$ by acting with the quanta that we obtain from the fields dual to the chiral primary states \cite{deBoer:1998ip,deBoer:1998us}. To do that we need to analyze the Kaluza-Klein tower of fields on $AdS_3$ coming from the supergravity fields and from the $\M2$ and $\aM2$ branes wrapping cycles on the Calabi-Yau. Contrary to \cite{Gaiotto:2006ns}, which works in the dilute gas approximation, we will reconsider the same counting but at finite central charge. Imposing the stringy exclusion principle and spectral flow symmetry will enable us to reproduce the non-perturbative corrections induced by the polar Bessels in the Rademacher expansion, including the polar coefficients $\Omega(\Delta_{\text{polar}})$.
\subsection{Outline}
The plan of the paper is as follows. In section \S \ref{sec quantum saddles}, we start by describing in more detail
the Rademacher expansion and connect it to previous work on black hole entropy and localization in supergravity. Then we review the microscopic formula for the entropy of one-quarter BPS black holes, which includes the effect of the subleading Bessel contributions. We use this formula as a check of our proposal and later make comments on $\mathcal{N}=2$ black hole entropy. Before moving to the discussion about the $\D6-\aD6$ configurations, in section \S \ref{sec polar states} we review the problem of counting chiral primaries on $AdS_3$, which includes M2 and anti-M2-branes wrapping holomorphic cycles of the Calabi-Yau \cite{Maldacena:1998bw,Gaiotto:2006ns}. Taking into account the stringy exclusion principle and spectral flow symmetry, we obtain a formula that agrees precisely with the microscopic $\mathcal{N}=4$ answer at finite charges; this formula will serve as a guide for the solution that we propose. Then in section \S \ref{sec bh bound states}, we review the $\D6-\aD6$ configurations with worldvolume fluxes and their decoupling limit. We argue for the existence of a family of $AdS_2\times S^1\times S^2$ configurations, and then in section \S \ref{sec Loc} we compute the partition function using localization. The result of this is a finite sum over Bessel functions whose spectrum agrees with the spectrum of polar states of a Jacobi form. Finally, in section \S \ref{sec Foam} we discuss a connection between our solutions and the quantum foam picture of the non-perturbative topological string.
\section{Quantum saddle points from Rademacher expansion}\label{sec quantum saddles}
The Fourier coefficients of Jacobi forms of non-positive weight admit an exact expansion in terms of an infinite sum of Bessel functions. This expansion is known as the Rademacher expansion \cite{Dijkgraaf:2000fq} and provides a simple way to address the asymptotic behaviour of the integer Fourier coefficients. We review this expansion and connect it to previous work on black hole entropy corrections.
Consider a Jacobi form $J_{k,\omega}$ of level $k$ and negative weight $\omega$, with
Fourier expansion
\begin{equation}
J_{k,\omega}(\tau,z)=\sum_{n\geq 0,l}c(n,l)q^ny^l,\quad q=e^{2\pi
i\tau},\,y=e^{2\pi i z}.
\end{equation}
The coefficients $c(n,l)$ with non-negative discriminant $\Delta\equiv n-l^2/(4k)\geq 0$, which are known as non-polar coefficients, admit an exact expansion in terms of an infinite sum of Bessel functions, the Rademacher expansion \cite{10.2307/2371313,Dijkgraaf:2000fq}, which takes the form
\begin{equation}\label{rademacher}
c(n,l)=\sum_{(m,s)\in\text{polar}}c(m,s)\sum_{c=1}^{\infty}\frac{1}{c}Kl(n,l;m,s;c)\,\int_{\epsilon-i\infty}^{\epsilon+i\infty}\frac{du}{u^{5/2-\omega}}\exp{\left[2\pi
\frac{\Delta}{cu}-2\pi\frac{\Delta_p u}{c}\right]}.
\end{equation}
The coefficients $c(m,s)$ have negative discriminant, or polarity, $\Delta_p\equiv m-s^2/(4k)<0$, and are thus the polar coefficients, and $Kl(n,l;m,s;c)$ are Kloosterman sums \cite{Kloos}. The structure in (\ref{rademacher}) is completely fixed by modularity except for the knowledge of the polar coefficients.
One of the great advantages of the expansion (\ref{rademacher}) is that it is well suited to the study of asymptotics. For $\Delta\gg1$ with finite $\Delta_p$, each of the Bessel functions has a saddle point at
\begin{equation}\label{saddles up}
u_{p}=\sqrt{\frac{\Delta}{|\Delta_p|}}\gg 1.
\end{equation}
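As a quick check of (\ref{saddles up}), take the $c=1$ term in (\ref{rademacher}) and write the exponent as $f(u)=2\pi\Delta/u+2\pi|\Delta_p|u$, using $\Delta_p<0$. Extremizing gives
\begin{equation}
f'(u)=-2\pi\frac{\Delta}{u^2}+2\pi|\Delta_p|=0\;\Rightarrow\; u_p=\sqrt{\frac{\Delta}{|\Delta_p|}},\qquad f(u_p)=4\pi\sqrt{\Delta|\Delta_p|},
\end{equation}
which is the exponential growth appearing in each term of the expansion below.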
Around each saddle we can expand perturbatively in inverse powers of $\sqrt{\Delta|\Delta_p|}\gg 1$, such that
\begin{eqnarray}\label{saddle expansion}
c(n,l)\simeq&& \sum _{\Delta_{\text{min}}\leq\Delta_{p}\leq\Delta_{\text{max}}} e^{4\pi \sqrt{\Delta|\Delta_{p}|}}\left(1+\ldots\right)+\sum _{\Delta_{\text{min}}\leq\Delta_{p}\leq\Delta_{\text{max}}} \sum_{c>1}e^{4\pi \sqrt{\Delta|\Delta_p|}/c}\left(1+\ldots\right),
\end{eqnarray}
where the $\ldots$ denote perturbative corrections in powers of $1/\Delta$ around each of the saddles $u_p$; we are ignoring an overall normalization factor for each of the perturbative series. Therefore we see that the sum of polar contributions results in a tail of terms that are exponentially suppressed relative to the term of maximal polarity.
For holomorphic Jacobi forms\footnote{The discussion for nearly-holomorphic Jacobi forms is very similar.}, the leading term in the expansion (\ref{saddle expansion}) comes from the most polar term, which has $\Delta_p=-k/4$. From the bulk physics point of view, we can identify the leading exponential growth with the black hole entropy area formula, since we have $4\pi \sqrt{\Delta|\Delta_p|} =A/4$, where $A$ is the area of the black hole horizon. Similarly, in the near-horizon attractor geometry (\ref{AdS3}) the saddle value of $u_p$ is identified with $1/\phi^0$.
In addition, we can compute quantum perturbative corrections to the leading saddle using localization and a connection to Chern-Simons theory \cite{Gomes:2015xcf}. The result is the finite dimensional integral (\ref{5d localization int}), which we review in section \S\ref{sec 5.1} using localization at the level of five dimensional supergravity. The idea of \cite{Gomes:2015xcf} is roughly the following. We start with the four dimensional localization integral (\ref{OSV}) and approximate the prepotential $F(X)$ according to the regime $\phi^0\ll 1$, where the leading saddle lives. Indeed, the on-shell complexified K\"{a}hler class becomes large, that is, $t\gg 1$, and the one-loop topological string free energy approximates to $F_1(t)\simeq c_{2a}t^a/24$, leading to the classical prepotential (\ref{classical prepotential}). The quantum measure, on the other hand, is fixed by a zero mode argument using the Chern-Simons formulation. Though this formulation is well justified in the regime $\phi^0\ll 1$, it is argued in \cite{Gomes:2015xcf} that the zero mode argument can be extrapolated also to the regime $\phi^0\gg 1$, which allows one to define a quantum measure.
The subleading saddle points in (\ref{saddle expansion}), corresponding to the polar terms with $|\Delta_p|<k/4$, lead to exponentially suppressed corrections of the form
\begin{equation}\label{subleading saddles}
c(n,l)\sim e^{\frac{A}{4}}+\sum_{\Delta_p<\Delta_{\text{max}}}e^{4\pi \sqrt{\Delta|\Delta_p|}}+\ldots,
\end{equation}
with $4\pi \sqrt{\Delta|\Delta_p|}<A/4$. Given what we know already for the leading Bessel function in terms of bulk physics, it becomes pertinent to understand the origin of the subleading saddles from the quantum entropy functional. In fact, there is partial understanding of the leading saddles in the $c>1$ tails (\ref{saddle expansion}),
\begin{equation}\label{orbifold saddles}
c(n,l)\sim e^{\frac{A}{4}}+\ldots \sum_{c>1} e^{\frac{A}{4 c}}+\ldots,
\end{equation}
at the level of the quantum entropy path integral \cite{Sen:2009vz,Dabholkar:2014ema}. In this case, the subleading terms that grow as $\exp(A/4c)$ arise after including in the path integral $\mathbb{Z}_c$ orbifolds of locally $AdS_3$ geometries \cite{Murthy:2009dq}; the orbifold reduces the horizon area by a factor of $1/c$, which explains the exponential growth $\sim \exp(A/4c)$ that characterizes these contributions.
There is something particular to the subleading polar terms when compared with the orbifold saddles, which is partially the reason why their bulk interpretation is more difficult. While for the orbifold saddles the values of $u_p$ are consistent with the attractor background, for the subleading polar saddles (\ref{subleading saddles}) the values of $u_p$ (\ref{saddles up}) are quite distinct from the on-shell attractor background values, which can be determined from the leading Bessel; they differ by finite renormalizations. If these saddles indeed correspond to bulk saddle configurations, then they cannot be solutions of the five dimensional supergravity that one obtains after compactification on the Calabi-Yau.
\subsection{Degeneracy from Siegel Modular Forms}\label{deg 1/4 dyons}
In the following we present a study of the microscopic $\mathcal{N}=4$ degeneracy for dyons in $K3\times T^2$ and $T^4\times T^2$ CHL orbifold compactifications \cite{Gomes:2015xcf,Murthy:2015zzy}; we describe in detail the role of the polar contributions. Though our considerations are also valid for $\mathcal{N}=2$ compactifications, we will use the $\mathcal{N}=4$ answer as a check of our proposal.
The index $d(m,n,l)$ of $1/4$-BPS dyons in four dimensional $\mathcal{N}=4$ compactifications has as its generating function the reciprocal of a Siegel modular form $\Phi_{k}$, that is,
\begin{equation}\label{Siegel modular form}
\frac{1}{\Phi_{k}(\rho,\tau,z)}=\sum_{m,n,l}d(m,n,l) e^{2\pi i m\rho}e^{2\pi i n\tau }e^{2\pi i lz}.
\end{equation}
Here $k$ is the weight of the modular form under a congruence subgroup of $Sp(4,\mathbb{Z})$, and depends on the orbifold compactification. The integers $m,n,l$ label respectively the T-duality invariants $P^2/2$, $Q^2/2$ and $Q\cdot P$, with electric charges $Q$ and magnetic charges $P$ (in a particular heterotic frame). For further details we point the reader to \cite{Sen:2007qy}.
Conversely, we can extract the integers $d(m,n,l)$, the black hole degeneracies, by performing an inverse Fourier transform. The function $1/\Phi_{k}$ contains poles, and thus by deforming the contour the integral picks up the residues at those poles. It turns out that the dominant contribution to the black hole degeneracy is
\begin{eqnarray}\label{Siegel deg}
d(m,n,l)\simeq(-1)^{l+1}\int_{\mathcal{C}}\frac{d^2 u}{u_2^{k+3}}\left(2\pi m-\partial_{u_2}\Omega(u,\bar{u})\right)\,\exp{\left(\pi\frac{n+|u|^2 m-u_1 l}{u_2}-\Omega(u,\bar{u})\right)},
\end{eqnarray}
with
\begin{equation}\label{Omega instanton}
\Omega(u,\bar{u})=\ln g(u)+\ln g(-\bar{u}),\;u=u_1+iu_2.
\end{equation}
The functions $g(u)$ are modular forms of weight $k+2$, with Fourier expansion
\begin{equation}
g(u)=\sum_{n} d(n)e^{2\pi i u n}.
\end{equation}
Choosing the contour $\mathcal{C}$ in (\ref{Siegel deg}) appropriately \cite{Gomes:2015xcf}, we can rewrite the degeneracy (\ref{Siegel deg}) as a finite sum of integrals of Bessel type, that is,
\begin{eqnarray}
d(m,n,l)\simeq &&(-1)^{l+1}2\pi i\sum_{r=0}^{m+2n_p-1}\Big(m+2n_p-r\Big)\,\nonumber\\
&&\times\sum_{\substack{s\geq 0\\ |r-2s|<m\\ c_q(m,r,s)>0}}^{r}d(r-s)d(s)e^{\pi i (r-2s)\frac{l}{m}}\,\nonumber\\
&&\times\int_{\epsilon-i\infty}^{\epsilon+i\infty}du_2\int_{-i\infty}^{i\infty}du_1 \frac{1}{u_2^{k+3}}\exp{W_{r,s}(u,m,n,l)},\nonumber\\
{}\label{finite sum Bessels}
\end{eqnarray}
with
\begin{eqnarray}\label{W potential}
W_{r,s}(u,m,n,l)=&&\pi\frac{n+|u|^2 m-u_1 l}{u_2}+2\pi (2n_p-r)u_2+2\pi i(r-2s)u_1,
\end{eqnarray}
and
\begin{equation}\label{quantum central charge}
\frac{c_q(m,r,s)}{24}=n_p-s+\frac{(m-r+2s)^2}{4m},
\end{equation}
with $n_p=1,0$ for the $K3$ and $T^4$ CHL models respectively. Integrating over $u_1$ we obtain a sum over Bessel functions, with a series resembling the Rademacher expansion (\ref{rademacher}). This has led the authors of \cite{Murthy:2015zzy} to test this possibility against an exact mock-Jacobi Rademacher expansion \cite{Dabholkar:2012nd}.
For $r=s=0$, extremization of (\ref{W potential}) gives the Cardy formula
\begin{equation}
d(m,n,l)\sim e^{2\pi\sqrt{c_L \Delta/6}},\;\Delta=n-\frac{l^2}{4m}\gg 1,
\end{equation}
where $c_L=6m+24n_p$ can be identified with the left central charge (of the non-supersymmetric side of the $(0,4)$ $\text{CFT}_2$). The values of $u_1$ and $u_2$ at the saddle point are
\begin{equation}\label{leading saddles n=4}
u_1=\frac{l}{2m},\;u_2=\sqrt{\frac{\Delta}{m+4n_p}}.
\end{equation}
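As a consistency check, we can reproduce these values directly. Setting $\partial_{u_1}W_{0,0}=0$ gives $u_1=l/2m$, after which
\begin{equation}
W_{0,0}\Big|_{u_1=l/2m}=\pi\frac{\Delta}{u_2}+\pi(m+4n_p)u_2,\qquad \Delta=n-\frac{l^2}{4m},
\end{equation}
which is extremized at $u_2=\sqrt{\Delta/(m+4n_p)}$, with on-shell value $2\pi\sqrt{\Delta(m+4n_p)}=2\pi\sqrt{c_L\Delta/6}$ upon using $c_L=6m+24n_p$.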
From the bulk physics, $u_1$ and $u_2$ are mapped respectively to the values of the scalar fields $X^1/X^0$ and $1/X^0$ of the four dimensional supergravity.
For $(r,s)\neq (0,0)$ we can proceed similarly. Each term has exponential growth
\begin{equation}
\exp{W_{r,s}(u,m,n,l)}\sim e^{2\pi\sqrt{c_q \Delta/6}},\;\Delta\gg 1,
\end{equation}
and the values of the saddles are
\begin{equation}
u_1=\frac{l}{2m}-i\frac{(r-2s)}{2m}u_2,\;u_2=\sqrt{\frac{6\Delta}{c_q}}.
\end{equation}
From here we see that these saddles differ from (\ref{leading saddles n=4}) by finite renormalizations parametrized by $r,s$.
\section{Polar states from M2 and anti-M2 branes}\label{sec polar states}
In this section, we review the problem of counting chiral primary states from the bulk theory on $AdS_3\times S^2\times X_{CY}$, with $X_{CY}$ a Calabi-Yau manifold, and how it connects to the study of black hole entropy. This section is essentially a review of \cite{Gaiotto:2006ns} and companion works \cite{Gaiotto:2006wm,Simons:2004nm}. We build on their formulas for black hole entropy and provide corrections, which follow mainly from the stringy exclusion principle. The final result for the degeneracy of one-quarter BPS dyons in $\mathcal{N}=4$ compactifications can be shown to agree with the microscopic formula (\ref{finite sum Bessels}).
The idea of \cite{Gaiotto:2006ns} is to compute the contribution to the elliptic genus coming from the fields on $AdS_3$ which are dual to the chiral primary states of the CFT. The elliptic genus is nevertheless formulated in the R sector, and thus to count primary states we must first perform a spectral flow transformation to the NS sector. This map consists of the identification
\begin{eqnarray}\label{spectral flow map}
&&L_0|_{NS}=L_0|_{R},\\
&&\bar{L}_0|_{NS}=\bar{L}_0|_{R}+J^3_{0}|_{R}+\frac{c_R}{24},\\
&&J^3_0|_{NS}=J^3_{0}|_{R}+\frac{c_R}{12},
\end{eqnarray}
where the $|_{R,NS}$ subscripts denote the R and NS sectors, $L_0,\bar{L}_0$ and $J^3_{0}$ are the Virasoro and R-symmetry generators respectively, and $c_L,c_R$ are the left and right central charges. Under this transformation, polar states essentially map to chiral primary states, which we can count from the spectrum of Kaluza-Klein fields on $AdS_3$ \cite{Maldacena:1998bw,deBoer:1998ip,deBoer:1998us}. These include not only the contribution coming from the supergravity fields but also the contribution of M2 and anti-M2 branes wrapping holomorphic two-cycles on the Calabi-Yau. The black hole partition function is obtained by performing a modular transformation, after flowing to the R sector.
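As a simple illustration of the map (\ref{spectral flow map}), the NS vacuum, with $L_0|_{NS}=\bar{L}_0|_{NS}=J^3_0|_{NS}=0$, corresponds in the R sector to
\begin{equation}
L_0|_R=0,\qquad \bar{L}_0|_R=\frac{c_R}{24},\qquad J^3_0|_R=-\frac{c_R}{12},
\end{equation}
which, for $c_R\simeq p^3$, are precisely the quantum numbers of the R vacuum dual to global $AdS_3\times S^2$ that we encounter in section \S \ref{sec bh bound states}.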
The main result of \cite{Gaiotto:2006ns} is the factorization of the partition function (index), over the chiral primary states, as the square of the topological string partition function $Z_{\text{top}}$. Essentially the result is
\begin{equation}\label{Ztop^2}
\text{Tr}_{\text{ch.p.}}(-1)^Fe^{2\pi i \tau L_0}e^{2\pi i z^aJ_a}=Z_{\text{top}}(\tau,z^a)\times Z_{\text{top}}(\tau,-z^a),
\end{equation}
where the trace runs only over the chiral primary states (ch.p.); for simplicity we have omitted a $-c_L/24$ shift in $L_0$. In the NS sector, chiral primary states obey the condition $\overline{L}_0=J^3_0$, which maps to the condition $\bar{L}_0-c_R/24=0$ in the R sector (\ref{spectral flow map}). In addition, chiral primary states can carry charges under $U(1)$ currents $J_a$. In the dilute gas approximation of \cite{Gaiotto:2006ns}, the trace in the bulk is a trace over BPS multi-particle states \cite{deBoer:1998us,deBoer:1998ip,Maldacena:1998bw} with arbitrary spin and occupation numbers. As a consequence, the result of the trace is a formal infinite product over all the quantum numbers, which can be related to $Z_{\text{top}}$ using the Gopakumar-Vafa invariants \cite{Gopakumar:1998ii,Gopakumar:1998jq}. That is,
\begin{equation}\label{Ztop}
Z_{\text{top}}(\tau,z^a)=\prod_{m_a,n}(1-e^{2\pi i \tau (\frac{1}{2}m_ap^a+n)}e^{2\pi i m_az^a})^{N_{m_a,n}}.
\end{equation}
This is the key step that allows the authors of \cite{Gaiotto:2006ns} to make a connection with the OSV conjecture \cite{Ooguri:2004zv}. Here $p^a$ is the magnetic flux on the sphere $S^2$, induced by the M5-brane wrapping a divisor four-cycle in the homology class $P$, Poincar\'e dual to $[P]=p^a\Sigma_a$ with $\Sigma_a\in H^2(X_{CY},\mathbb{Z})$.
The factors $Z_{\text{top}}(\tau,z^a)$ and $Z_{\text{top}}(\tau,-z^a)$ arise essentially from the contributions of, respectively, M2-branes and anti-M2-branes wrapping holomorphic cycles in the Calabi-Yau; there is also a contribution coming from the supergravity fields, but it will not be relevant for the discussion of $\mathcal{N}=4$ compactifications, which is our main interest in this section. What allows the sum over arbitrary M2 and anti-M2 charges is the fact that in AdS space a brane and its anti-brane can preserve mutual supersymmetries. Indeed, an M2-brane wrapping a holomorphic cycle $Q\in H_2(X_{CY})$, sitting at the origin of $AdS_3$ and at the north pole of $S^2$, preserves the same set of supersymmetries as an anti-M2-brane wrapping the same cycle $Q$, sitting at the origin of $AdS_3$ but now at the south pole of $S^2$ \cite{Simons:2004nm,Gaiotto:2006ns}. The fact that these are supersymmetric configurations on $AdS_3\times S^2\times X_{CY}$ will play a very important role in the remainder of the paper.
If the theory has $\mathcal{N}=4$ supersymmetry, for example when $X_{CY}=K3\times T^2$, the partition function (\ref{Ztop}) simplifies considerably, that is,
\begin{equation}\label{dedekind function}
Z_{\text{top}}(\tau,z^a)=\prod_{m_1>0}\left(1-e^{2\pi i m_1(\tau\frac{p^1}{2}+z^1)}\right)^{-24}.
\end{equation}
Here $p^1$ parametrizes the class of $K3$ in $H_4(X_{CY},\mathbb{Z})$, which is Poincar\'e dual to $H^2(T^2)$. The coefficient $24$ in the product is the Euler character of $K3$, which allows for a generalization to other $\mathcal{N}=4$ compactifications. In the case of $\mathcal{N}=8$ supersymmetry this partition function is trivially one.
Formula (\ref{Ztop^2}) is valid only in the limit of very large central charge and for low density of chiral primaries, and so it is not the complete answer. The reason is that it does not take into account the stringy exclusion principle, which puts a bound on the total spin of the multi-particle states, that is,
\begin{equation}\label{exclusion principle bound}
J^3_0|_{NS}\leq \frac{c_R}{12}.
\end{equation}
It makes sense as a grand canonical partition function valid for infinite central charge, in which case the stringy exclusion principle constraint can be relaxed. The exclusion principle cannot be seen in perturbation theory on $AdS_3$, because from the bulk point of view the multi-particle states are free bosonic excitations with no limit on their particle number. Instead, for finite central charge the contribution coming from the perturbative spectrum of Kaluza-Klein fields on $AdS_3$ must be truncated due to the stringy exclusion principle. Since $\bar{L}_0=J^3_0$ by supersymmetry, the bound on $J^3_0$ imposes a bound on $\bar{L}_0$. Moreover, we have $L_0=\bar{L}_0$ for a static solution, and so $L_0$ is also bounded. Therefore, only a finite number of states contribute to (\ref{Ztop^2}).
Physically, adding $q_a$ M2-branes sitting at the north pole adds non-zero angular momentum
\begin{equation}\label{M2 ang momentum}
J^3_0|_{NS}=\frac{1}{2}q_ap^a,
\end{equation}
much like an electron in a background magnetic field, while the $\bar{q}_a$ anti-M2-branes, because they sit at the south pole, contribute angular momentum of the same sign, that is, $J^3_0=\bar{q}_ap^a/2$. Therefore, flowing to the R sector, we find that the state carries R-charge
\begin{equation}
J^3_0|_R=-\frac{c_R}{12}+\frac{1}{2}(q_a+\bar{q}_a)p^a.
\end{equation}
We then see that the exclusion principle gives a bound on the number of M2 and anti-M2 branes.
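Explicitly, combining (\ref{exclusion principle bound}) with (\ref{M2 ang momentum}) and its anti-M2 analog, the bound reads
\begin{equation}
\frac{1}{2}(q_a+\bar{q}_a)p^a\leq \frac{c_R}{12}\quad\Leftrightarrow\quad (q_a+\bar{q}_a)p^a\leq \frac{c_R}{6},
\end{equation}
which is essentially the bound (\ref{stringy bound}) that we recover below.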
The trace in the R sector must contain only states that do not form black holes, up to a spectral flow transformation. In terms of the Virasoro charges this implies
\begin{equation}\label{polarity chiral}
L_0-\frac{c_L}{24}+\frac{1}{2}D^{ab}j_aj_b<0,
\end{equation}
in the R sector, where we have reincorporated a $-c_L/24$ factor. Here $j_a=q_a-\bar{q}_a$ is the total M2-brane charge where $q_a$ and $\bar{q}_a$ are respectively the M2 and anti-M2 charges, and $D_{ab}=D_{abc}p^c$, with $D_{abc}$ the intersection matrix of the Calabi-Yau. Since $j_a\in \mathbb{Z}$ and $D_{ab}$ is not unimodular, $j^a$ lives on the lattice $\Lambda^*/\Lambda$ with $\Lambda$ the lattice $k^a\in \mathbb{Z}$ and $\Lambda^*$ its dual under the metric $D_{ab}$; the quotient removes spectral flow charges \cite{deBoer:2006vg}. Since the configuration is static, that is, $L_0=\bar{L}_0$ in the NS sector, the condition becomes
\begin{eqnarray}\label{M2 polarity constraint}
L_0-\frac{c_L}{24}+\frac{1}{2}D^{ab}j_aj_b<0\Leftrightarrow \frac{p^3}{24}+\frac{c_2\cdot p}{24}-\frac{1}{2}(q_a+\bar{q}_a)p^a-\frac{1}{2}D^{ab}(q_a-\bar{q}_a)(q_b-\bar{q}_b)>0\nonumber, \\
{}
\end{eqnarray}
where we used the fact that $c_L=p^3+c_2\cdot p$, with $p^3=D_{abc}p^ap^bp^c$ and $c_2\cdot p=c_{2a}p^a$ \cite{Maldacena:1997de}. We can show that (\ref{M2 polarity constraint}) is spectral flow invariant. In particular, for $K3\times T^2$ or $T^4\times T^2$ CHL orbifolds this condition becomes
\begin{equation}\label{c_L corrected bulk}
n_p-\bar{q}_1+\frac{\left(P^2/2-q_1+\bar{q}_1\right)^2}{2P^2}>0,
\end{equation}
with $n_p=0,1$ for the $T^4,K3$ respectively. Here we have used the fact that the only non-vanishing components of $D_{abc}$ are $D_{1ab}\equiv C_{ab}$ and permutations, together with $c_{2a}\equiv 24 n_p\delta_{a,1}$ and $P^2\equiv C_{ab}p^ap^b$. Setting $P^2=2m$, $q_1=(r-s)$ and $\bar{q}_1=s$ we obtain precisely the effective central charges (\ref{quantum central charge}).
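Explicitly, with these substitutions the left hand side of (\ref{c_L corrected bulk}) becomes
\begin{equation}
n_p-s+\frac{\left(m-(r-s)+s\right)^2}{4m}=n_p-s+\frac{(m-r+2s)^2}{4m}=\frac{c_q(m,r,s)}{24},
\end{equation}
so the polarity condition is precisely the positivity condition $c_q(m,r,s)>0$ appearing in (\ref{finite sum Bessels}).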
The formula (\ref{Ztop^2}) also misses important degeneracy factors when taking the trace over the chiral primaries. In the limit when the M2-brane charges $q_a,\bar{q}_a$ are parametrically smaller than $p$, which we are taking to be large, these degeneracy factors are irrelevant for the purpose of arriving at (\ref{Ztop^2}). This is the dilute gas approximation of \cite{Gaiotto:2006ns}. However, since our main interest is in finite central charge, we need to take into account those degeneracy factors. Essentially we follow the discussion in \cite{Gaiotto:2006wm}. Under spectral flow from the NS to the R sector, the chiral primaries, which are annihilated by $J^{-}_{1}$, flow to lowest $SU(2)_R$ weight states because $J^{-}_{1}$ flows to $J^{-}_{0}$. For example, the $AdS_3$ R vacuum corresponds to a lowest weight state with $J^3_0=-c_R/12=-k/2$, with $k$ the $SU(2)_R$ level. Therefore, acting with $J^{+}_{0}$ we generate the full multiplet, which leads to a degeneracy of $2|J|+1$ states. In addition, these states have to be tensored with the zero modes $\psi^{+\pm}$ of the center of mass multiplet\footnote{The $(0,4)$ MSW $\text{CFT}_2$ superconformal algebra \cite{Maldacena:1997de} contains the center of mass multiplet, besides the small $\mathcal{N}=4$ algebra.}, which carry spin $1/2$. The total angular momentum after including the contribution of the M2-branes is
\begin{equation}\label{total angular momentum}
J^3_0=\frac{c_R}{12}-\frac{1}{2}(q_a+\bar{q}_a)p^a-\frac{1}{2},
\end{equation}
which leads to a degeneracy
\begin{equation}
2J^3+1=\frac{c_R}{6}-(q_a+\bar{q}_a)p^a.
\end{equation}
Substituting in this expression the values of $p^a$ and $q_a,\bar{q}_a$ for the $K3\times T^2$ and $T^4\times T^2$ CHL examples, that is, $q_1=(r-s)$ and $\bar{q}_1=s$, one obtains
precisely
\begin{equation}
\frac{c_R}{6}-(q_a+\bar{q}_a)p^a=p^1(m+2n_p-r),
\end{equation}
which we identify with the measure factor in the first line of expression (\ref{finite sum Bessels}). Since the degeneracy is always positive, we must have
\begin{equation}\label{stringy bound}
m+2n_p-r>0,
\end{equation}
which is the bound implied by the stringy exclusion principle \cite{Maldacena:1998bw}. In the limit $p^3\propto m\rightarrow \infty$ this bound can be relaxed, which is why one obtains the infinite products (\ref{Ztop}).
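For completeness, let us sketch the arithmetic behind this identification, assuming the MSW value of the right central charge, $c_R=p^3+\tfrac{1}{2}c_2\cdot p$ \cite{Maldacena:1997de}. For the CHL models the only non-vanishing triple intersection numbers involve $D_{1ab}=C_{ab}$, so that
\begin{equation}
p^3=3p^1C_{ab}p^ap^b=6mp^1,\qquad c_2\cdot p=24n_pp^1\;\Rightarrow\;\frac{c_R}{6}=p^1(m+2n_p),
\end{equation}
and with $(q_a+\bar{q}_a)p^a=rp^1$ one indeed recovers the measure factor $p^1(m+2n_p-r)$.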
In addition to the $SU(2)_R$ degeneracy, we need to tensor with the states associated with the quantization of fluctuations of the M2 and anti-M2-branes wrapping holomorphic cycles in the Calabi-Yau. For $K3\times T^2$ they can be identified with the degeneracies of $r-s$ M2-branes and $s$ anti-M2-branes wrapping $T^2$, which are given by the Dedekind-type function (\ref{dedekind function}). This explains the factors $d(s)d(r-s)$ in the second line of (\ref{finite sum Bessels}).
Assembling all the factors, we construct, in the R sector, the polar partition function
\begin{equation}
Z_{\text{polar}}(\tau,z^a)=\sum_{L_0,j_a\in \text{ polar}} e^{2\pi i\tau (L_0-\frac{c_L}{24})}e^{2\pi iz^aj_a},
\end{equation}
where the sum is over the states obeying the condition (\ref{c_L corrected bulk}) and (\ref{stringy bound}). The black hole partition function is obtained after a modular transformation \cite{Gaiotto:2006ns}, that is,
\begin{equation}\label{modular transf polar states}
Z_{\text{BH}}(\tau,z^a)\simeq\tau^{-\omega} e^{\pi i\frac{D_{ab} z^az^b}{\tau}}Z_{\text{polar}}(-1/\tau,z^a/\tau).
\end{equation}
There are further corrections to this formula coming from other elements of $SL(2,\mathbb{Z})$; they give contributions of the orbifold type (\ref{orbifold saddles}). The parameter $\omega$ is the weight of the elliptic genus under modular transformations and can be determined as follows. We decompose the elliptic genus into spectral flow sectors as $\chi(\tau,z)=\sum_{\mu}h_{\mu}(\tau)\theta_{\mu}(\tau,z^a)$, with $\theta_{\mu}(\tau,z^a)$ a multidimensional theta function\footnote{The part of the elliptic genus that contains the black hole entropy is the vector-valued modular form $h_{\mu}(\tau)$. Its Fourier coefficients are the quantities that are invariant under U-duality.}. The function $h_{\mu}(\tau)$ contains the information about the black hole degeneracy, while the theta functions contain the states related by spectral flow symmetry.
On one hand, from the Siegel modular form (\ref{Siegel modular form}), of weight $k$, one finds Jacobi forms of weight $-k$ and a single chemical potential $z$. This implies that the part of the Jacobi form that contains the information about the black hole degeneracy, which is a vector valued modular form, must have weight $-k-1/2$, and hence so must $h_{\mu}(\tau)$. On the other hand, since $\theta_{\mu}(\tau,z^a)$ has weight $b_2/2$, with $b_2$ the second Betti number of the Calabi-Yau, we find that the weight of the elliptic genus is $\omega=-k-1/2+b_2/2$. For the $K3\times T^2$ compactification we have $b_2=23$ and thus $\omega=1$. Similarly, for the other CHL compactifications we have $b_2=2k+3$ \cite{Sen:2007qy}, which also gives $\omega=1$.
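The arithmetic is simple. For $K3\times T^2$ the relevant Siegel form is $\Phi_{10}$, so taking $k=10$ and $b_2=23$,
\begin{equation}
\omega=-k-\frac{1}{2}+\frac{b_2}{2}=-10-\frac{1}{2}+\frac{23}{2}=1,
\end{equation}
while for the CHL models $b_2=2k+3$ gives $\omega=-k-\tfrac{1}{2}+k+\tfrac{3}{2}=1$ identically.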
The black hole degeneracy is computed by an inverse Fourier transform, that is,
\begin{eqnarray}
&&d_{\text{BH}}(n,l_a)\sim\int \prod_{a=1}^{b_2} dz^a d\tau\, Z_{\text{BH}}(\tau,z^a)e^{-2\pi i\tau n-2\pi i z^al_a}\\
&=&\sum_{\substack{q_a,\bar{q}_a\\ \frac{c_R}{6}-(q_a+\bar{q}_a)p^a>0}}d(q_a)d(\bar{q}_a)\left(\frac{c_R}{6}-(q_a+\bar{q}_a)p^a\right)\times \nonumber\\
&&\int \prod_{a=1}^{b_2} dz^a d\tau \tau^{-\omega} e^{\pi i\frac{D_{ab} z^az^b}{\tau}-\frac{2\pi i}{\tau}z^a(q_a-\bar{q}_a)}e^{\frac{2\pi i}{\tau}\left(\frac{p^3}{24}+\frac{c_2\cdot p}{24}-\frac{1}{2}(q_a+\bar{q}_a)p^a\right)}e^{-2\pi i\tau n-2\pi i z^al_a}\nonumber,\\
{}
\end{eqnarray}
with the additional constraint that the sum obeys (\ref{M2 polarity constraint}). Specializing the various parameters to the $\mathcal{N}=4$ examples and performing the Gaussian integrals in $z$, we obtain almost precisely the one-quarter BPS degeneracy described in section \S\ref{deg 1/4 dyons}. The only difference is the contour. While in the formula above we take $\tau$ over the Fourier contour $]0,1]$, in the Rademacher expansion one has $1/\tau$ running over $]\epsilon -i\infty,\epsilon+i\infty[$. It looks puzzling how to go from one contour to the other without picking up additional contributions. Nevertheless, for the purpose of computing saddle point corrections, both integrals are equally valid. As we explain later, one of the great advantages of using localization is that it naturally picks the Rademacher contour, which then acquires a physical interpretation.
\section{Black hole bound states and horizonless geometries}\label{sec bh bound states}
In this section, instead of thinking in terms of M2 and anti-M2 branes wrapping cycles on the Calabi-Yau, we consider an equivalent description in type IIA string theory consisting of a $\D6$ and a $\aD6$ wrapping the Calabi-Yau and carrying $U(1)$ fluxes $F$ in their worldvolumes. This section is essentially a review of the polar configurations of Denef and Moore \cite{Denef:2007vg} and their decoupling limit \cite{deBoer:2008fk}. The main goal is to find a microscopic description for the family of saddle geometries that we propose. Under certain assumptions, we argue that the quantum entropy path integral should be seen as an M-theory path integral with eleven dimensional instanton solutions. Then we propose an effective five dimensional description, which is amenable to localization.
For the charge configuration of interest, the total $\D6$ charge is zero, but the presence of fluxes induces lower dimensional charges due to the couplings of the worldvolume fields to the Ramond-Ramond gauge fields $A_3,A_1$, such as
\begin{equation}\label{Ramond couplings}
\int_{\D6}F\wedge F\wedge A_3,\;\int_{\D6}F\wedge F\wedge F\wedge A_1 ,
\end{equation}
which generate $\D2$ and $\D0$ charges respectively. Uplifting to M-theory, the pair $\D6-\aD6$ becomes a Taub-Nut and anti-Taub-Nut configuration, while the $\D2$ and $\aD2$ charges lift to $\M2$ and $\aM2$ charges and the $\D0$ charges become momentum along the M-theory circle.
From the M-theory point of view, the fluxes on the $\D6$-brane lift to four-form fluxes $G=dC_3$ \cite{Sen:1997js} in M-theory, with $C_3$ the three-form that couples to M2-branes, that is,
\begin{equation}
G\propto \omega_{TN}\wedge F.
\end{equation}
Here $\omega_{TN}$ is the self-dual normalizable two-form of the Taub-Nut geometry and $F$ is the total flux on the $\D6$; similarly for the $\aD6$ brane. Therefore fluxes on the $\D6$ branes map to fluxes in the bulk M-theory.
To be consistent with the $\M2$ brane picture of the previous section, we want to turn on fluxes that generate arbitrary $\M2\sim \D2$ charges as well as $\D0$ charges, but keep fixed the $\D4$ charge, which lifts to the $\M5$ brane, parametrized by the magnetic charges $p^a$. In order to do that, we parametrize the flux in the form $F=p+\mathcal{F}$, where $p\in H_2(X)$. To keep the $\D4$ charge equal to $p$, we need to impose that the first Chern-class of $\mathcal{F}$ is zero, whereas to generate arbitrary $\D0,\D2$ charges we must keep its higher Chern-classes arbitrary. In other words, this amounts to
\begin{equation}\label{singular fluxes}
\int_{C^a}\mathcal{F}=0,\;\int \mathcal{F}\wedge \mathcal{F}\neq 0,\;\int \mathcal{F}\wedge \mathcal{F}\wedge \mathcal{F}\neq 0,
\end{equation}
for any two-cycle $C^a$ in the Calabi-Yau. Such conditions on the fluxes are only possible if the flux has singularities: if the flux were smooth, then the vanishing of the first Chern-class would imply the vanishing of the higher Chern-classes. The way to regularize the singularities is to drop the notion of line bundle and use ideal sheaves, which are torsion free sheaves with vanishing first Chern-class that generalize the notion of line bundle \cite{Denef:2007vg}. If the Calabi-Yau is an algebraic variety, the singularities can usually be blown up, leading to a new space $\hat{X}$ where the torsion free sheaves become line bundles. It was argued in \cite{Iqbal:2003ds} that we should include such configurations in the quantum gravity path integral. We follow a similar approach and consider the path integral of M-theory in the presence of such configurations.
The induced four dimensional charges have the form \cite{Denef:2007vg}
\begin{eqnarray}
&&\Gamma_6=e^{p/2}(1-\beta+n\omega)\left(1+\frac{c_2(X)}{24}\right)\nonumber\\
&&=\left(1,\frac{p^a}{2},\frac{p_a}{8}+\frac{c_{2a}}{24}-\beta_{a},\frac{p^3}{48}+\frac{c_{2}\cdot p}{48}-\frac{1}{2}\beta\cdot p+n\right)\label{Gamma6},
\end{eqnarray}
for the first $\D6$ center, where we have defined $p_a=D_{abc}p^cp^b$, while for the second center $\aD6$, we have
\begin{eqnarray}
&&\Gamma_{\bar{6}}=-e^{-p/2}(1-\bar{\beta}+\bar{n}\omega)\left(1+\frac{c_2(X)}{24}\right)\nonumber\\
&&=\left(-1,\frac{p^a}{2},-\frac{p_a}{8}-\frac{c_{2a}}{24}+\bar{\beta}_{a},\frac{p^3}{48}+\frac{c_{2}\cdot p}{48}-\frac{1}{2}\bar{\beta}\cdot p+\bar{n}\right)\label{Gamma6bar}.
\end{eqnarray}
We have denoted $\beta$ and $n$ respectively the second and third Chern-classes of the ideal sheaves, and similarly for $\bar{\beta}$ and $\bar{n}$. We have used the notation in \cite{Denef:2007vg} which assigns charges (D6,D4,D2,D0) as $(p^0,p^a,q_a,q_0)$. We see that the total charges are as follows: the total $\D6$ charge is zero, and the total $\D4$ charge is $p^a$ as required; on the other hand the total $\D2\sim \M2$ charge is $\beta_a-\bar{\beta}_a$ and the $\D0$ charge is $p^3/24+c_2\cdot p/24-(\beta+\bar{\beta})\cdot p/2+n+\bar{n}$.
Once we consider backreaction in four dimensional supergravity, this charge configuration gives rise to a two center supersymmetric black hole solution. In fact, we can obtain multi-center configurations by considering additional mutually non-local charges \cite{Denef:2000nb}. For a generic charge configuration there can exist both single and multi-center solutions. However, certain configurations have the property that, for the same total charge, only multi-center solutions exist; these are the polar configurations. A special feature of multi-center configurations is that for certain values of the asymptotic moduli the distance between two centers goes to infinity and the bound state leaves the spectrum, leading to the phenomenon of wall-crossing.
The two center black hole solution is a complicated geometry. The details about the metric can be found in \cite{Denef:2000nb,Denef:2007vg}. Nevertheless, the essential feature that we need is that the metric is determined in terms of $\mathbb{R}^3$ harmonic functions $H(x)$,
\begin{eqnarray}\label{harmonic functions}
&&H^0=\frac{p^0}{R^{1/2}|x-x_{6}|}-\frac{p^0}{R^{1/2}|x-x_{\bar{6}}|}+h^0,\;H^a=\frac{p^a}{2R^{1/2}}\left(\frac{1}{|x-x_{6}|}+\frac{1}{|x-x_{\bar{6}}|}\right)+h^a\\
&&H_a=\frac{1}{R^{1/2}}\left(\frac{q_a}{|x-x_{6}|}-\frac{\tilde{q}_a}{|x-x_{\bar{6}}|}\right)+h_a,\;H_0=\frac{1}{R^{1/2}}\left(\frac{q_0}{|x-x_{6}|}+\frac{\tilde{q}_0}{|x-x_{\bar{6}}|}\right)+h_0\nonumber,
{}
\end{eqnarray}
where $q_a$ and $\tilde{q}_a$ are the $\D2$ charges induced respectively by the $\D6$ and $\aD6$, and similarly for $q_0,\tilde{q}_0$. The free parameter $R$ is the asymptotic radius of the M-theory circle. In addition, supersymmetry imposes a certain integrability condition on the harmonic functions, which forces the centers to stay at a predetermined distance. This equilibrium distance is a function of the charges and also of the asymptotic moduli. The wall-crossing phenomenon happens when we tune the moduli such that this distance goes to infinity, in which case the bound state splits into its constituents. Moreover, the two center solution is not a static geometry and carries angular momentum \cite{Denef:2000nb}
\begin{equation}\label{total J}
\vec{J}=\frac{1}{2}\langle\Gamma_6,\Gamma_{\bar{6}}\rangle\frac{\vec{x}_{6\bar{6}}}{r_{6\bar{6}}},
\end{equation}
where $\langle\Gamma_1,\Gamma_{2}\rangle$ is the symplectic charge inner product defined as $\langle\Gamma_1,\Gamma_{2}\rangle \equiv -p^0_1q^2_0+p^a_1q^2_a-q^1_ap^a_2+q^1_0p^0_2$.
The region near the core of the centers admits a decoupling limit \cite{deBoer:2008fk} after an uplift to five dimensions. Before moving to a general discussion of the decoupled two center configurations, we first describe the simplest $\D6-\aD6$ configuration, which corresponds to setting the singular fluxes to zero. Without losing generality, we consider $c_2(X)=0$ for the moment. The configuration then consists of a $\D6$ at a position $x_6$ and a $\aD6$ at a position $x_{\bar{6}}$, carrying $U(1)$ fluxes $F=p^a\omega_a/2$, with $\omega_a$ a basis of $H^2(X)$, and $\bar{F}=-p^a\omega_a/2$ for the $\aD6$. The charge vectors are therefore
\begin{equation}\label{D6 charges}
\Gamma_6=e^{\frac{p}{2}},\;\Gamma_{\bar{6}}=-e^{-\frac{p}{2}}.
\end{equation}
Each center has non-zero entropy, but from asymptotic infinity one finds that a black hole with the same total charges cannot exist, because the discriminant function $\hat{q}_0=q_0-\frac{1}{2}D^{ab}q_aq_b$ is positive, which renders the entropy formula $\sim \sqrt{-\hat{q}_0}$ imaginary. From the M-theory point of view this configuration lifts to a Taub-Nut and anti-Taub-Nut configuration with M-theory three-form flux $G\propto \omega_{TN}\wedge p$, with $\omega_{TN}$ the normalizable self-dual two-form of the Taub-Nut geometry. This means that from the M-theory point of view the solution is completely smooth with no horizon.
The decoupling limit consists effectively in taking the constants of the harmonic functions to zero with the exception of the component $h_0$ which becomes $-R^{3/2}/4$. This renders a configuration where the centers sit at a fixed distance completely determined by their charges, that is,
\begin{equation}\label{r66}
r_{6\bar{6}}=\frac{4\langle \Gamma_6,\Gamma_{\bar{6}}\rangle}{p^0_6 R^2},
\end{equation}
where $\langle \Gamma_6,\Gamma_{\bar{6}}\rangle$ is the symplectic charge inner product introduced below (\ref{total J}). For the charge configuration (\ref{D6 charges}) we find $\langle \Gamma_6,\Gamma_{\bar{6}}\rangle =p^3/6$.
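As a quick check, up to an overall sign that depends on the orientation convention for the pairing, one can evaluate the inner product directly from the components $\Gamma_6=(1,p^a/2,p_a/8,p^3/48)$ and $\Gamma_{\bar{6}}=(-1,p^a/2,-p_a/8,p^3/48)$. Using $p^ap_a=p^3$, the four terms in the symplectic product contribute
\begin{equation}
\left|\langle \Gamma_6,\Gamma_{\bar{6}}\rangle\right|=\frac{p^3}{48}+\frac{p^3}{16}+\frac{p^3}{16}+\frac{p^3}{48}=\frac{p^3}{6},
\end{equation}
and only this magnitude enters the distance (\ref{r66}) and the angular momentum (\ref{total J}).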
The decoupled geometry based on the harmonic functions is still a complicated solution. Nevertheless, we can follow the observation made in \cite{Denef:2007yt}, and use oblate-spheroidal coordinates defined as
\begin{eqnarray}\label{oblate coords}
|x-x_{6}|=\frac{r_{6\bar{6}}}{2}(\cosh(2\eta)+\cos(\theta)),\\
|x-x_{\bar{6}}|=\frac{r_{6\bar{6}}}{2}(\cosh(2\eta)-\cos(\theta)),
\end{eqnarray}
to simplify the problem considerably. The five dimensional metric in the new coordinates becomes precisely the global $AdS_3\times S^2$ geometry
\begin{eqnarray}\label{purely fluxed ads3}
ds^2=4U^{2/3}\left(-\cosh^2\eta d\tau^2+d\eta^2+\sinh^2\eta d\sigma^2\right)+U^{2/3}(d\theta^2+\sin^2\theta (d\phi+A)^2),
\end{eqnarray}
where $U=p^3/6$. Furthermore, the attractor equations fix the five dimensional vector-multiplet scalars $M$ to the constant values $M^a=U^{-1/3}p^a$, while the gauge fields have constant flux on the sphere, $F^a= p^a e_2$, with $e_2$ the volume form of the $S^2$. Moreover, the sphere is twisted by the gauge field
\begin{equation}\label{R gauge field}
A=d\sigma -d\tau,
\end{equation}
which is flat everywhere except at the origin, where it has a delta function singularity; this follows from the fact that the four dimensional solution carries angular momentum. From the CFT point of view, the Wilson line $\oint A$ around the boundary circle $\sigma$, which is contractible in the full geometry, is necessary to make the fermions periodic, as expected for the R vacuum. In other words, the Wilson line generates a spectral flow transformation which takes the NS vacuum with $L_0=\bar{L}_0=0$ and $J^3_0=0$ to the R vacuum with $L_0=0$, $\bar{L}_0=p^3/24$ and maximal $J^3_0=-p^3/12$ \cite{Kraus:2006nb,deBoer:2008fk}. Finally, note that according to the change of coordinates (\ref{oblate coords}), the positions of the $\D6$ and $\aD6$ map to the center of $AdS_3$ and, respectively, to the north and south poles of $S^2$, which is consistent with the M2-brane description.
The geometry described above corresponds to a particular case of the ambi-polar Eguchi-Hanson metric with Gibbons-Hawking (GH) charges $q$ and $-q$, here with $q=1$ \cite{Bena:2010gg}. For general $q$ this is a $\mathbb{Z}_q$ quotient of global $AdS_3\times S^2$. We review this construction following \cite{Bena:2010gg}. We use cylindrical polar coordinates $(z,\rho,\phi)$ on $\mathbb{R}^3$, consider GH charges located on the $z$-axis at $z=\pm a$, and define
\begin{equation}
r_{\pm}=\sqrt{\rho^2+(z\mp a)^2}.
\end{equation}
The five dimensional geometry is entirely determined by the harmonic functions
\begin{eqnarray}\label{Eguchi-Hanson}
&&V^0=q\left(\frac{1}{r_+}-\frac{1}{r_-}\right),\; V^1=k\left(\frac{1}{r_+}+\frac{1}{r_-}\right),\nonumber\\
&&V_{1}=-\frac{k}{q}\left(\frac{1}{r_+}-\frac{1}{r_-}\right),\; V_0=-\frac{2k^2}{a q^2}+\frac{k^2}{2q^2}\left(\frac{1}{r_+}+\frac{1}{r_-}\right).\nonumber
\end{eqnarray}
Using the oblate spheroidal coordinates we can map the GH space to global $AdS_3\times S^2$ (\ref{purely fluxed ads3}) with size
\begin{equation}
L^2=(k^2)^{2/3}.
\end{equation}
So to agree with (\ref{purely fluxed ads3}) we need $k^2=U$, that is, $k=(p^3/6)^{1/2}$.
Let us now consider the case of a general two center charge configuration and its decoupling limit \cite{deBoer:2008fk}. In this case, the geometry is only asymptotically $AdS_3\times S^2$,
\begin{eqnarray}\label{approximate AdS3}
ds^2\simeq U^{2/3}\left(d\eta^2+e^{2\eta}(-d\tau^2+d\sigma^2)\right)+U^{2/3}(d\theta^2+\sin^2\theta (d\phi+\tilde{A})^2),\;\eta\gg 1
\end{eqnarray}
with $U=p^3/6$ and
\begin{equation}\label{twisting gauge field}
\tilde{A}=\frac{J}{J_{\text{max}}}(d\tau-d\sigma),\; J_{\text{max}}=\frac{p^3}{12}.
\end{equation}
Here $J=\langle\Gamma_6,\Gamma_{\bar{6}}\rangle /2$ is the total angular momentum of the two center configuration (\ref{total J}). The remaining field configuration consists of five dimensional gauge fields $A^{a}$ and scalars $M$, which have the attractor solution
\begin{eqnarray}\label{attractor fields}
&&A^{a}_{5D}\simeq -p^a \cos\theta (d\phi+\tilde{A})+2D^{ab}q_b (d\sigma +d\tau),\\
&&M^a\simeq U^{-1/3} p^a,
\end{eqnarray}
for $\eta\gg 1$, with $q_a$ the total $\D2$ charge.
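Note that for the purely fluxed configuration (\ref{D6 charges}) one has $J=\langle\Gamma_6,\Gamma_{\bar{6}}\rangle/2=p^3/12=J_{\text{max}}$, so that, up to the orientation of the circle, the twisting gauge field reduces to the flat connection (\ref{R gauge field}),
\begin{equation}
\tilde{A}\Big|_{J=J_{\text{max}}}=d\tau-d\sigma,
\end{equation}
and one recovers the global $AdS_3\times S^2$ solution (\ref{purely fluxed ads3}).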
We can use the expansion of the metric and the gauge fields at infinity to compute the Virasoro charges. After removing the contribution of the $U(1)$ and $SU(2)_R$ currents to the stress tensor \cite{deBoer:2008fk}, we find
\begin{eqnarray}
\bar{L}_0-\frac{c_R}{24}&=&0,\\
L_0-\frac{c_L}{24}&=&-(q_0-\frac{1}{2}D^{ab}q_aq_b),
\end{eqnarray}
with $q_0,q_a$ the total charges. These represent the contributions to the stress tensor coming purely from the gravitational sector. For the charge configuration (\ref{Gamma6}) and (\ref{Gamma6bar}) this gives
\begin{equation}
q_0-\frac{1}{2}D^{ab}q_aq_b=\frac{p^3}{24}+\frac{c_2\cdot p}{24}-\frac{p\cdot(\beta+\bar{\beta})}{2}+n+\bar{n}-\frac{1}{2}D^{ab}\Delta\beta_a\Delta\beta_b,
\end{equation}
with $\Delta\beta_a=\beta_a-\bar{\beta}_a$. Since $q_0-\frac{1}{2}D^{ab}q_aq_b>0$ is the polarity, we see that these configurations indeed map to the polar states in the $\text{CFT}_2$.
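This follows from summing the components of (\ref{Gamma6}) and (\ref{Gamma6bar}). The total charges are
\begin{equation}
q_a=\pm(\beta_a-\bar{\beta}_a),\qquad q_0=\frac{p^3}{24}+\frac{c_2\cdot p}{24}-\frac{1}{2}(\beta+\bar{\beta})\cdot p+n+\bar{n},
\end{equation}
where the sign of the total $\D2$ charge is convention dependent, but it enters only quadratically through $\frac{1}{2}D^{ab}q_aq_b=\frac{1}{2}D^{ab}\Delta\beta_a\Delta\beta_b$.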
Let us now consider the limit of charges $p^a\rightarrow \lambda p^a$ with large $\lambda$, while keeping fixed the fluxes $\beta,\bar{\beta}$ and $n,\bar{n}$ in (\ref{Gamma6}) and (\ref{Gamma6bar}). In particular, this ensures that the individual center black hole charges are kept large, which is necessary for the supergravity solution to be valid. The harmonic functions (\ref{harmonic functions}) split into a term proportional to the smooth fluxes $p$ and a term coming from the contribution of the ideal sheaf fluxes $\beta,\bar{\beta}$ and $n,\bar{n}$, which are parametrically smaller, of order $1/\lambda^2$ and $1/\lambda^3$ respectively. Since in the absence of the singular fluxes the geometry is global $AdS_3\times S^2$, we can write the full geometry as global $AdS_3\times S^2$ plus corrections, that is,
\begin{equation}
ds^2=ds^2_{AdS_3\times S^2}+\delta g_{\mu\nu},
\end{equation}
where $\delta g_{\mu\nu}$ is a function of the singular fluxes $\beta,\bar{\beta}$ and $n,\bar{n}$ and thus of order $ \mathcal{O}(1/\lambda^3,1/\lambda^2)$. We can proceed similarly for the gauge fields and scalars
\begin{eqnarray}
&&A^{a}_{5D}= -p^a \cos\theta (d\phi+\tilde{A})+\delta A^a,\\
&&M^a= U^{-1/3} p^a+\delta M^a,
\end{eqnarray}
with $\delta A^a$ and $\delta M^a$ of order $\mathcal{O}(1/\lambda,1/\lambda^2)$. Moreover, near the boundary $r=e^{\eta}\sim \infty$ the perturbations $\delta g_{\mu\nu}$ and $\delta A^a$ are of order $\mathcal{O}(r^0)$ while $\delta M^a$ is of order $\mathcal{O}(1/r)$ and thus they are normalizable. We can identify these perturbations as coming from the backreaction of the fields dual to the chiral primary states described in section \S\ref{sec polar states}.
\subsection{Purely fluxed solutions from Gauge-Gravitational Chern-Simons}
In this section, we set the singular fluxes to zero and consider the case of charges induced by mixed gauge-gravitational Chern-Simons terms, which are proportional to $c_2$, the second Chern-class of the Calabi-Yau. We saw previously that the decoupling of the two-center solution, after uplift to five dimensions, gave a geometry that was asymptotically $AdS_3\times S^2$. In this section we try a different approach. Instead of solving for the backreacted four dimensional solution and then uplifting, we consider the problem directly in five dimensional supergravity in the presence of higher-derivative terms, given by the supersymmetrization of gauge-gravitational Chern-Simons terms. We find that the theory admits global $AdS_3\times S^2$ solutions with the same quantum numbers as the asymptotically AdS solutions.
The Ramond couplings (\ref{Ramond couplings}) must be supplemented with the terms $\int R\wedge R\wedge A_3$ and $\int R\wedge R\wedge F\wedge A_1 $, where $R$ is the curvature two-form. The two center charge configuration in the absence of the singular fluxes is
\begin{eqnarray}
\Gamma_6&=&e^{p/2}(1+c_2(X)/24)\nonumber\\
&=&\left(1,\frac{p^a}{2},\frac{1}{8}D_{abc}p^bp^c+\frac{c_{2a}}{24}, \frac{p^3}{48}+\frac{c_{2}\cdot p}{48}\right),\\
{}\nonumber\\
\Gamma_{\bar{6}}&=&-e^{-p/2}(1+c_2(X)/24)\nonumber\\
&=&\left(-1,\frac{p^a}{2},-\frac{1}{8}D_{abc}p^bp^c-\frac{c_{2a}}{24}, \frac{p^3}{48}+\frac{c_{2}\cdot p}{48}\right),
\end{eqnarray}
with $c_2(X)$ the second Chern-class of the tangent bundle of the Calabi-Yau $X$. As explained in the previous section, the solution that we find in two derivative supergravity after the decoupling limit is a geometry which is asymptotically $AdS_3\times S^2$, and carries total angular momentum
\begin{equation}\label{total ang mom higher derivative}
J=\frac{1}{2}\langle \Gamma_6,\Gamma_{\bar{6}}\rangle=\frac{p^3}{12}+\frac{c_{2}\cdot p}{24}.
\end{equation}
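As a quick cross-check of (\ref{total ang mom higher derivative}), the following \texttt{sympy} sketch (illustrative only, for a one-modulus model with intersection number $D$, so that $p^3$ stands for $Dp^3$) evaluates $\frac{1}{2}\langle\Gamma_6,\Gamma_{\bar{6}}\rangle$; the sign convention of the symplectic pairing used in the code is an assumption of ours, chosen so that the product comes out positive:
\begin{verbatim}
import sympy as sp

p, c2, D = sp.symbols('p c2 D', positive=True)  # one modulus: p^3 := D p^3

def pairing(G1, G2):
    # <G,G'> = p0 q0' - q0 p0' + q1 p1' - p1 q1'  (sign convention assumed)
    p0, p1, q1, q0 = G1
    P0, P1, Q1, Q0 = G2
    return p0*Q0 - q0*P0 + q1*P1 - p1*Q1

G6    = (1,  p/2,  D*p**2/8 + c2/24,  D*p**3/48 + c2*p/48)
G6bar = (-1, p/2, -D*p**2/8 - c2/24,  D*p**3/48 + c2*p/48)

print(sp.simplify(pairing(G6, G6bar)/2))  # D*p**3/12 + c2*p/24
\end{verbatim}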
The decoupled geometry is not a solution of the full equations of motion because the four dimensional multi-center geometries, described in \cite{deBoer:2008fk,Denef:2000nb}, are solutions of two derivative supergravity and thus higher-derivative corrections are not taken into account. To correctly describe the exact solution we need to consider the problem in the presence of the higher-derivative terms, which arise from the reduction of the eight-derivative $C\wedge I_8(R)$ term in M-theory. After compactification on the Calabi-Yau this gives rise to gauge-gravitational Chern-Simons terms plus their supersymmetric completion. This includes terms such as
\begin{equation}
c_{2a}\int A^a\wedge R\wedge R ,\;c_{2a}\int M^a R\wedge\star R,
\end{equation}
with $A^a$ the five dimensional gauge field and $M^a$ the real scalar field that sits in the vector multiplet.
This problem was studied in \cite{Castro:2008ne,Castro:2007hc} by considering five dimensional off-shell supergravity with a mixed gauge-gravitational Chern-Simons term. The solution found is in fact the near-horizon geometry of a black ring, which has the form $AdS_2\times S^1\times S^2$. But after a simple analytic continuation, that we describe in further detail in section \S \ref{sec subleading Bessels}, we can bring the metric to global $AdS_3\times S^2$. A few properties of the solution are the following. The physical size $L^2$ is given by
\begin{equation}\label{U with c_2 correction}
L^2=\left(\frac{p^3}{6}+\frac{c_2\cdot p}{12}\right)^{2/3},
\end{equation}
in contrast with the two derivative result $L^2=(p^3/6)^{2/3}$ (\ref{approximate AdS3}). The difference is nevertheless negligible in the limit $p\gg 1$, which is when the two derivative solution is justified. Note that in this case $L^2$ agrees precisely with the two center distance $r_{6\bar{6}}$ (\ref{r66}) in the decoupling limit; this fact will become relevant later. Furthermore, the attractor equations for the $U(1)$ gauge fields and the scalar fields are exactly
\begin{eqnarray}
&&A^{a}_{5D}= -p^a \cos\theta d\phi,\\
&&M^a= U^{-1/3} p^a,
\end{eqnarray}
which contrasts with the approximate solutions in the two derivative theory.
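As a numerical illustration of how negligible the $c_2$ correction to the size (\ref{U with c_2 correction}) is at large $p$ (not part of the argument), the sketch below uses toy one-modulus values, with unit intersection number and a hypothetical $c_2=10$:
\begin{verbatim}
import numpy as np

c2 = 10.0                                   # illustrative value only
for p in [5, 20, 100]:
    L2_2der = (p**3/6)**(2/3)               # two derivative result
    L2_hd   = (p**3/6 + c2*p/12)**(2/3)     # with the c2 correction
    print(p, (L2_hd - L2_2der)/L2_2der)     # ~ c2/(3 p^2): negligible at large p
\end{verbatim}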
So far the sphere is not twisted and thus the gravitini are antiperiodic along the spatial circle, which makes this a solution in the NS sector. Since our interest is in the R sector we need to turn on a non-trivial connection on the sphere, such as (\ref{R gauge field}), so that its holonomy around the contractible cycle effectively changes the gravitini periodicities. This has an effect on the total angular momentum carried by the solution. This is easier to see from a holographic point of view. To do that, we reduce the theory on the sphere keeping its isometries gauged, which gives rise to three dimensional $SU(2)_R$ Chern-Simons terms \cite{Hansen:2006wu} with level $k_R$ \cite{Kraus:2005vz}
\begin{equation}
k_R=\frac{p^3}{6}+\frac{c_2\cdot p}{12}.
\end{equation}
From the three dimensional point of view the twisting connection $A=d\sigma-d\tau=d\bar{z}$ on the sphere, with $z$ a right-moving coordinate, induces a current $J_{R\bar{z}}=ik_R A_{\bar{z}}/2$ \cite{Hansen:2006wu}. The angular momentum is the $R$-charge of the solution, that is,
\begin{equation}
J_{R}^0=-\oint \frac{d\bar{z}}{2\pi i}J_{R\bar{z}}=-\frac{k_R}{2},
\end{equation}
in precise agreement with (\ref{total J}). Note also that the solution corresponds to a lowest $SU(2)_R$ weight state. In this case the ratio $J/J_{\text{max}}=1$ in (\ref{twisting gauge field}). The key difference is in the $SU(2)_R$ level, which receives a correction due to the mixed Chern-Simons terms.
\subsection{Purely fluxed solutions from Ideal Sheaves}\label{sec Ideal sheaves}
In this section, we want to consider the purely fluxed $\D6-\aD6$ configurations directly from the M-theory point of view, without resorting to the four dimensional charge configuration and its supergravity solution. In particular we want to follow the intuition from the previous section and search for exact solutions to the full equations of motion that carry the same charges. Our goal in this section is to reproduce the sum over $\M2/\aM2$-branes on $\text{AdS}_3$ of section \S \ref{sec polar states}, but now in terms of the eleven dimensional M-theory fields, such that we can interpret the degeneracy as an M-theory partition function. Since our main interest is the $\mathcal{N}=4$ theory, we will consider only ideal sheaves with non-trivial second Chern class.
In section \S\ref{sec polar states} we considered configurations of M2 and anti-M2 branes wrapping holomorphic cycles on the Calabi-Yau and sitting at the origin of $AdS_3$ and at the north and south poles of $S^2$ respectively. The configuration carries M-theory flux $G\propto e_2\wedge p$ where $p$ is the flux along the Calabi-Yau and $e_2$ is the volume form of the sphere. The map between the fluxes $\beta,\bar{\beta}$ and the number of M2 and anti-M2 branes living on $AdS_3\times S^2\times X_{CY}$ is
\begin{equation}
\beta_{a}=q_a,\;\bar{\beta}_{a}=\bar{q}_a.
\end{equation}
It follows that the positions of the M2 and anti-M2 branes on $AdS_3\times S^2$ are consistent with the positions of the $\D6$ and $\aD6$ branes (\ref{oblate coords}). Indeed, in the absence of M2 branes the geometry corresponds to the decoupling limit of a $\D6$ and $\aD6$ configuration with worldvolume flux $p$ after uplift to M-theory.
We have explained that turning on fluxes on the $\D6$ brane is equivalent to turning on fluxes in M-theory. The question we want to answer is how the singular M-theory fluxes affect the full eleven dimensional geometry. At this point we should proceed carefully because we do not really know how to deal with singular gauge field configurations in the path integral. Therefore, our approach is mainly heuristic. A crucial aspect of the M2 brane construction was that the configuration on global $AdS_3\times S^2$ preserved the same set of supersymmetries independently of the number of M2 branes. Inspired by this result we consider an ansatz for the exact geometry which we take to be global $AdS_3\times S^2$.
We start from the $AdS_3\times S^2$ ansatz and write the metric using the ambi-polar Eguchi-Hanson coordinates defined in (\ref{Eguchi-Hanson}), which depend on the parameters $q,a,k$. In this work we consider only smooth geometries and so we set $q=1$; $q$ maps to the number of $\D6$ branes, which is consistent with our problem. The parameter $a$ is the physical distance between the centers while $k$ parametrizes the harmonic functions. From the decoupling limit of the multi-center geometry, the distance between the centers (\ref{r66}) is proportional to the symplectic charge inner product, which has the value
\begin{equation}\label{quantum size}
r_{6\bar{6}}= \frac{4}{R^2}\left(\frac{p^3}{6}+\frac{c_2\cdot p}{12}-(\beta+\bar{\beta})\cdot p\right).
\end{equation}
This is the charge product $\langle\Gamma_{6},\Gamma_{\bar{6}}\rangle$ for the charges (\ref{Gamma6}) and (\ref{Gamma6bar}).
Since we are turning on singular fluxes in M-theory, we can expect corrections to the harmonic functions (\ref{harmonic functions}) and also to the distance formula (\ref{r66}). Nevertheless the charge combination $\langle \Gamma_6,\Gamma_{\bar{6}}\rangle$ is a topological invariant and hence it is integer quantized. In view of this, it seems natural to assume that the distance between the centers remains unchanged. Moreover, we expect the geometry to asymptote to the perturbative geometry (\ref{purely fluxed ads3}) when we take the fluxes $\beta,\bar{\beta}$ to be parametrically smaller than $p$. This means that the constant term in the harmonic function $V_0$ (\ref{Eguchi-Hanson}) must equal the corresponding term $h_0=R^{3/2}/4$ in the formula (\ref{harmonic functions}), so we have
\begin{equation}
\frac{2 k^2}{a}=\frac{R^{3/2}}{4}.
\end{equation}
Hence, using that $a=r_{6\bar{6}}$, we find
\begin{equation}\label{quantum corrected size}
k^2=\frac{1}{2 R^{1/2}}\langle \Gamma_6,\Gamma_{\bar{6}}\rangle.
\end{equation}
We thus see that both the parameters $a$ and $k$ that parametrize the full solution depend only on the combination
\begin{equation}\label{c2 renormalization}
\hat{c}_2=c_2-12(\beta+\bar{\beta}),
\end{equation}
that appears in (\ref{quantum size}). This suggests that the effect of the singular fluxes is to renormalize the second Chern-class $c_2$ by a shift $-12(\beta+\bar{\beta})$. Assuming this renormalization we can easily determine other parameters of the theory such as the central charges and angular momentum. What we need to do is to reconsider the problem studied in the previous section but with the renormalized second Chern-class $\hat{c}_2$.
For example the effective three dimensional Chern-Simons theory that we obtain after reduction on the sphere contains $SL(2,\mathbb{R})_L\times SL(2,\mathbb{R})_R\times SU(2)_R$ and abelian Chern-Simons terms. The levels for each gauge group are respectively $\tilde{k}_L,\tilde{k}_R$ and $k_R$. Furthermore, by supersymmetry we must have $\tilde{k}_R=k_R$. Since we have $c_L=6\tilde{k}_L$ and $c_R=6\tilde{k}_R$, using the values of the central charges with the renormalized $c_2$, we find
\begin{equation}\label{quantum kr level}
\tilde{k}_R=k_R=\frac{p^3}{6}+\frac{\hat{c}_2\cdot p}{12},
\end{equation}
and
\begin{equation}
\tilde{k}_L=\frac{p^3}{6}+\frac{\hat{c}_2\cdot p}{6}.
\end{equation}
As explained previously, to describe the R sector of the theory the sphere must be twisted by the Wilson line (\ref{R gauge field}), to impose the correct boundary conditions on the gravitini. Following the derivation presented in the previous section, the geometry acquires angular momentum
\begin{equation}
J=\frac{k_R}{2}=\frac{p^3}{12}+\frac{\hat{c}_2\cdot p}{24}.
\end{equation}
This is in perfect agreement with the angular momentum formula (\ref{total J}) for the two center bound state, and it also agrees with the total angular momentum contribution due to the M2-branes (\ref{total angular momentum}) as described in section \S\ref{sec polar states}.
Given that the size (\ref{quantum corrected size}) must be positive for the geometry to make sense, we must have
\begin{equation}\label{condition on the size}
\frac{p^3}{6}+\frac{c_2\cdot p}{12}-(\beta+\bar{\beta})\cdot p >0.
\end{equation}
As we discuss later, restricting the fluxes to $\beta,\bar{\beta}\geq 0$ guarantees that the partition function agrees with the Cardy limit of the CFT. If this were not the case, there would be contributions to the path integral overwhelming the area formula predicted from microscopics (\ref{subleading saddles}). Therefore for $\beta,\bar{\beta}>0$ we obtain an upper bound on the possible amount of flux, which in turn leads to a \emph{finite number of geometries}. This bound on the spectrum was also observed in a similar context \cite{deBoer:2009un}.
The bound on the number of geometries is precisely the bound imposed by the stringy exclusion principle \cite{Maldacena:1998bw}. The principle was introduced in order for the number of chiral primaries in the $\text{CFT}_2$ to match the spectrum of Kaluza-Klein fields on $AdS_3$. The reason is that while in the $\text{CFT}$ the number of chiral primaries follows from Fermi statistics and is therefore finite, from the bulk point of view the Kaluza-Klein fields have free bosonic excitations and thus no limit on their particle number. The exclusion principle gives a unitarity condition that is non-perturbative in nature. In terms of quantum numbers, the exclusion principle translates into a bound on the R-charge carried by the field excitations on top of $AdS_3$. For example, in the $\M2$ brane picture it implies that $\frac{1}{2}(q+\bar{q})\cdot p<\frac{p^3}{12}+\frac{c_2\cdot p}{24}$ where $q,\bar{q}$ are the numbers of $\M2$ and $\aM2$ branes respectively, in agreement with the bound (\ref{condition on the size}).
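To illustrate the finiteness concretely, the following sketch counts the flux pairs allowed by (\ref{condition on the size}) in a toy one-modulus model; the values $p=6$ and $c_2=4$ are purely illustrative:
\begin{verbatim}
# Count fluxes (b, bb) >= 0 obeying p^3/6 + c2.p/12 - (b+bb).p > 0
p, c2 = 6, 4                     # hypothetical one-modulus values
bound = p**3/6 + c2*p/12
allowed = [(b, bb) for b in range(100) for bb in range(100)
           if (b + bb)*p < bound]
print(len(allowed))              # 28: a finite number of geometries
\end{verbatim}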
At this point the renormalization (\ref{c2 renormalization}) is only a conjecture, which is very difficult to prove given the nature of the solution. Nevertheless, we can already provide preliminary evidence by giving an alternative derivation of the $SU(2)_R$ level (\ref{quantum kr level}), and then of $\tilde{k}_R$ by supersymmetry. The idea is to determine the coefficient of the $SU(2)_R$ Chern-Simons terms in three dimensions starting directly from the eleven dimensional action. We follow closely \cite{Hansen:2006wu,Freed:1998tg}. We write the M-theory four form flux as
\begin{equation}
G=e_2(A)\wedge F,
\end{equation}
where $e_2$ is the volume form of the sphere and contains the effect of gauging the isometries, that is, it depends explicitly on the $SU(2)_R$ connections $A$, which have legs on the $AdS_3$ directions. We decompose the flux $F$ into the smooth component $p=p^a\omega_a$, with $\omega_a\in H^2(X,\mathbb{Z})$, and the singular term $\mathcal{F}$, that is, $F=p+\mathcal{F}$. The ideal sheaf flux $\mathcal{F}$ has zero first Chern-class and
\begin{eqnarray}\label{Chern classes}
c_2(\mathcal{F})=\frac{1}{2}\int_{\alpha^a} \mathcal{F}\wedge \mathcal{F}=-(\beta_a+\bar{\beta}_a),\;\;c_3(\mathcal{F})=\frac{1}{6}\int_X \mathcal{F}\wedge \mathcal{F}\wedge \mathcal{F}=n+\bar{n},
\end{eqnarray}
where $\alpha^a\in H_4(X,\mathbb{Z})$ and $\beta_a,\bar{\beta}_a,n,\bar{n}\in \mathbb{Z}$. The contributions $\beta,n$ and $\bar{\beta},\bar{n}$ are due to the $\D6$ and $\aD6$ respectively. The expressions (\ref{Chern classes}) have to be taken with care since the fluxes $\mathcal{F}$ are singular and require an appropriate regularization. For our purpose the Chern-classes $c_2(\mathcal{F})$ and $c_3(\mathcal{F})$ are well defined and given by the values (\ref{Chern classes}). The $SU(2)_R$ Chern-Simons coupling is implicitly related to a lack of gauge invariance of the action and thus we can focus only on the Chern-Simons terms in M-theory. We compute
\begin{equation}
\frac{1}{6}\int C\wedge G\wedge G=\frac{1}{6}\int_{AdS_3\times S^2} e^{(0)}_1\wedge e_2\wedge e_2\int_X F\wedge F\wedge F,
\end{equation}
with $e^{(0)}_1$ defined locally by the equation $de^{(0)}_1 =e_2$. The first integral on the RHS gives the ``descent'' of the Pontryagin class of the sphere bundle, that is,
\begin{equation}
\int_{S^2} e^{(0)}_1\wedge e_2\wedge e_2=-\frac{1}{2(2\pi)^2} \text{Tr}\left(AdA+\frac{2}{3}A^3\right),
\end{equation}
with $A$ the $SU(2)_R$ connection. In addition one has
\begin{eqnarray}
\frac{1}{6}\int_X F\wedge F\wedge F&=&\frac{p^3}{6}+p^a\int \omega_a\wedge c_2(\mathcal{F})+\int c_3(\mathcal{F})\\
&=&\frac{p^3}{6}-p\cdot (\beta +\bar{\beta})+n+\bar{n}.
\end{eqnarray}
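The bookkeeping behind these two lines can be summarized in the following \texttt{sympy} sketch, where the intersection integrals are treated as formal symbols, the cross term $\int p\wedge p\wedge \mathcal{F}$ is dropped because $c_1(\mathcal{F})=0$, and the Chern-class data (\ref{Chern classes}) is substituted at the end (symbol names are ours):
\begin{verbatim}
import sympy as sp

p3, pFF, FFF = sp.symbols('p3 pFF FFF')   # int p^3, int p F F, int F^3
beta_p, n = sp.symbols('beta_p n')        # (beta+betabar).p  and  n+nbar

expr = sp.Rational(1,6)*p3 + sp.Rational(1,2)*pFF + sp.Rational(1,6)*FFF
# (1/2) int p F F = p^a c_2(F)_a = -(beta+betabar).p ; (1/6) int F^3 = n+nbar
print(sp.expand(expr.subs({pFF: -2*beta_p, FFF: 6*n})))  # p3/6 - beta_p + n
\end{verbatim}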
Furthermore we have the contribution from the eight-derivative term $ C\wedge I_8(R)$ in M-theory. This term is easier to compute. Since it depends linearly on $C$, only the first Chern-class of $\mathcal{F}$ could contribute, but that vanishes by definition. The term does, however, contribute a piece proportional to $c_2\cdot p$ \cite{Kraus:2005vz}. The final contribution is therefore
\begin{eqnarray}
&&\frac{1}{6}\int C\wedge G\wedge G+\int C\wedge I_8(R)\propto\frac{1}{4\pi}\left(\frac{p^3}{6}+\frac{c_2\cdot p}{12}-p\cdot (\beta+ \bar{\beta})+n+\bar{n}\right) \int \text{Tr}(AdA+\frac{2}{3}A^3)\nonumber.
\end{eqnarray}
The overall normalization is fixed by setting the fluxes to zero. The coefficient of the Chern-Simons term agrees precisely with the level $k_R$ (\ref{quantum kr level}). We have kept the dependence on the fluxes $n,\bar{n}$ arbitrary to note that the Chern-Simons coefficient is proportional to the symplectic charge product $\langle \Gamma_6,\Gamma_{\bar{6}}\rangle$. This is in agreement with our expectations since, as we have shown, the R-charge, and thus the angular momentum, is proportional to the Chern-Simons level.
We now return to the backreacted geometry. A comment is in order regarding the definition of the five and four dimensional Newton's constants, which will become relevant later.
So far we have been following the conventions used in \cite{deBoer:2008fk} which fix the five dimensional Einstein-Hilbert (EH) term as
\begin{equation}\label{EH 5d}
\int d^5x\sqrt{g_5} R^{(5)},
\end{equation}
where $g_5$ and $R^{(5)}$ denote the determinant of the five dimensional metric and the five dimensional Ricci scalar respectively.
In the five dimensional off-shell theory, the EH term contains, in contrast to the on-shell version (\ref{EH 5d}), a conformal coupling to vector-multiplet scalars $M$ as
\begin{equation}\label{conformal coupling}
\int d^5x\sqrt{g_5} D_{abc}M^aM^bM^c R^{(5)}.
\end{equation}
The attractor equations impose $LM^a=p^a$, with $L^2$ the conformal factor of the metric. After reducing on the circle with radius $L/\phi^0$, we obtain the four dimensional EH term $\int d^4x \sqrt{g_4}\frac{L^{-2}p^3}{\phi^0}R$. Instead we want to have
\begin{equation}\label{EH choice}
\int d^4x \sqrt{g_4}\frac{1}{\phi^0}R,
\end{equation}
in four dimensions. We keep the factor $1/\phi^0$ such that the conformal factor of the four dimensional metric remains constant in the problem; thus we need $L^2\propto p^3$. If we include higher derivatives then we must impose $L^2\propto p^3/6+c_2\cdot p/12$, which is the result found in \cite{Castro:2008ne}.
In order to establish a map between the on-shell and the off-shell theory, we write the five dimensional EH in terms of the unit size metric $g^{(0)}_5$, which gives $\sqrt{g^{(0)}_5} \langle\Gamma_{6},\Gamma_{\bar{6}}\rangle R^{(0)}$. Following the same logic outlined above, we find by dimensional reduction that the size $L^2$ is precisely $\langle\Gamma_{6},\Gamma_{\bar{6}}\rangle$.
We can also show that this is consistent with the four dimensional off-shell superconformal gravity and the renormalization of $c_2$. Following \cite{Mohaupt:2000mj}, the EH term is given by
\begin{equation}
\int d^4x \sqrt{g_4}i(X^I\bar{F}_I-\bar{X}^IF_I)R,
\end{equation}
where the combination $i(X^I\bar{F}_I-\bar{X}^IF_I)$ is the K\"{a}hler potential $e^{-K}$. Assuming the renormalization induced by the fluxes, the prepotential $F(X)$ is
\begin{equation}
F(X)=-\frac{1}{6}D_{abc}\frac{X^aX^bX^c}{X^0}+\frac{\hat{c}_{2a}}{24}\frac{X^a}{X^0},
\end{equation}
with $\hat{c}_2$ given in (\ref{c2 renormalization}). Using the on-shell attractor solutions $LX^0=\phi^0$ and $LX^a=\phi^a+ip^a$, the K\"{a}hler potential becomes
\begin{equation}
e^{-K}=\frac{L^{-2}}{\phi^0}\left(\frac{p^3}{6}+\frac{\hat{c}_2\cdot p}{12}\right).
\end{equation}
Therefore to obey the choice (\ref{EH choice}) we must have
\begin{equation}
L^2=\frac{p^3}{6}+\frac{\hat{c}_2\cdot p}{12}.
\end{equation}
\section{Localization and Non-perturbative Corrections}\label{sec Loc}
In this section, we consider supersymmetric localization at the level of the five dimensional theory. We follow closely the four dimensional solution studied in \cite{Dabholkar:2010uh,Gupta:2012cy}. Different aspects of this computation such as boundary conditions or the choice of localization supercharge were discussed previously in the works \cite{Gomes:2013cca,Dabholkar:2014ema}, which we review along the way.
The relation between the quantum entropy functional on $AdS_2$ and the partition function on $AdS_3\simeq AdS_2\times S^1$ was discussed in \cite{Gupta:2008ki}. The essential observation is that the ground states of the conformal quantum mechanics dual to the theory on $AdS_2$ map to a chiral half of the $\text{CFT}_2$, which is dual to the theory on $AdS_3$. This gives a simple way to relate the microscopic index computations to the black hole degeneracy \cite{Sen:2009vz, Dabholkar:2010rm}.
According to our proposal, the full partition function will be the sum of different contributions, each coming from a solution parametrized by $\beta,\bar{\beta}$, that is
\begin{equation}
Z_{AdS_2\times S^1\times S^2}=\sum_{\beta,\bar{\beta}}\int D[\Phi]e^{-S_{\text{E}}[\Phi,\beta,\bar{\beta}]},
\end{equation}
where $D[\Phi]$ denotes a measure for all the fields in five dimensional supergravity, and $S_{\text{E}}[\Phi,\beta,\bar{\beta}]$ is the euclidean action whose Lagrangian depends explicitly on the values of the fluxes $\beta,\bar{\beta}$. For each of the geometries parametrized by $\beta,\bar{\beta}$, we perform localization.
As we explain shortly, one also needs to consider the contribution of $U(1)$ connections that have a delta function singularity at the origin of $AdS_2\times S^1$. This was also pointed out in \cite{Dijkgraaf:2000fq}. The field is pure gauge but it is not well defined everywhere. Their inclusion is motivated by the fact that in the decoupled geometry, the M2 brane total charge gives rise to a gauge transformation $\sim D^{ab}(\beta-\bar{\beta})_b dy$ in the five dimensional gauge field (\ref{attractor fields}), with $y$ parameterizing the contractible spatial circle. From a physical point of view, this gauge transformation generates a spectral flow transformation for the $U(1)$ currents in the CFT. In the path integral we use a regularization scheme to avoid the singularity at the origin, and show that it is consistent with the localization procedure. The computation ends up depending only on the abelian Chern-Simons terms, as expected for the contribution of a large gauge transformation.
\subsection{5D Supersymmetric Localization}\label{sec 5.1}
We start by reviewing the localization computation of $\mathcal{N}=2$ supergravity on $AdS_2\times S^2$ \cite{Dabholkar:2010uh}. The four dimensional Lagrangian is constructed using the off-shell superconformal formalism, for which a good review is \cite{Mohaupt:2000mj}. The part of the Lagrangian that is most relevant for the computation is based on the holomorphic prepotential
\begin{equation}\label{N2 prepotential}
F(X,\hat{A})=\frac{1}{6}D_{abc}\frac{X^aX^bX^c}{X^0}+g\left(\frac{X}{X^0}\right)\hat{A},
\end{equation}
where $X$ are the complex scalar fields in the vector-multiplets and $\hat{A}=(T^{-})^2$ is the bottom component of the chiral multiplet $\mathbf{W}^2$, with $\mathbf{W}$ the Weyl superfield; in the on-shell theory $T^{-}$ becomes the graviphoton field. Besides the usual Einstein-Hilbert and Maxwell terms, the Lagrangian contains in addition higher derivative terms parametrized by the function $g(X/X^0)$. It determines the coupling of the vector-multiplets to the square of the Weyl tensor as
\begin{equation}
\sim g\left(\frac{X}{X^0}\right)C_{abcd}C^{abcd}+\text{h.c.},
\end{equation}
with $C_{abcd}$ the Weyl tensor. Therefore at the on-shell level, we have two derivative supergravity with Weyl square higher derivative corrections. Later we show how to introduce Gauss-Bonnet type of corrections, which are known to contribute to black hole entropy \cite{Sen:2005iz}.
The original localization solutions of \cite{Dabholkar:2010uh} solve the BPS equation $Q\Psi=0$, with $Q$ a real supercharge that squares to a self-dual\footnote{On $AdS_2\times S^2$ we have isometries $L$ and $J$, respectively rotations on $AdS_2$ and $S^2$. The supercharge $Q$ obeys $Q^2=L-J$. } $U(1)$ isometry of $AdS_2\times S^2$, and $\Psi$ are the vector-multiplet fermions. The remaining equations for the other fields, including the Weyl multiplet, were solved in \cite{Gupta:2012cy}\footnote{There is an important caveat in this computation (which is no fault of the authors). The reason is that in supergravity we cannot really define a supercharge as we usually do in supersymmetric field theory. So the construction of the localization deformation of the sort $QV$, which includes all the supergravity fields and is invariant under local supersymmetry, is still unknown. Part of this work provides steps in that direction. }. In particular the localization equations for the Weyl multiplet imply that the four dimensional metric must be of the form $AdS_2\times S^2$, with equal sizes for $AdS_2$ and the sphere. Since the theory is off-shell the solutions are universal and thus independent of the particular details of the Lagrangian. The crucial result of the localization computation is that the vector-multiplet scalars have non-trivial radial profiles, in contrast to the constant on-shell values. In fact, the solution does not obey the equations of motion. This happens because the localization action contains flat directions, which allows the scalar fields $X$ to go off-shell at the expense of turning on the auxiliary fields $Y$. More precisely, the solution is
\begin{equation}\label{localization solutions 4D}
X=X^*+\frac{C}{r},\;Y=\frac{C}{r^2},
\end{equation}
where $r\in [1,\infty[$ is the radial coordinate of $AdS_2$ and $X^*$ is the attractor value of the scalar, which is constant. All the other fields remain fixed to their attractor background. Moreover, the solutions are parametrized by constants $C^0,C^a$, with $a=1\ldots n_V$, where $n_V$ is the number of vector multiplets\footnote{The Weyl multiplet does not contain the graviphoton gauge field, which requires adding the compensator vector-multiplet, which contains the scalar $X^0$.}.
Given the solutions to the localization equations we need to determine their contribution to the physical action. After removing IR divergences \footnote{Integrating on $AdS_2$ leads to infinite volume divergences, which requires introducing a cutoff at finite radius.}, following the prescription in \cite{Sen:2008vm}, we obtain the renormalized action
\begin{equation}\label{ren action 5D}
\text{Ren}(S)=-\pi q_I\phi^I+4\pi \text{Im}F(\phi+ip),
\end{equation}
where $\phi+ip$ is the value of the scalar $X$ at the origin of $AdS_2$, with $\phi\sim \text{Re}(X^*)+C$, which is free to fluctuate. The function $F$ is the holomorphic prepotential (\ref{N2 prepotential}). To arrive at the expression (\ref{ren action 5D}) one needs to use, at an intermediate step, the on-shell equations of motion
\begin{equation}\label{attractor equations}
q_I=4\text{Im}\left(\frac{\partial F}{\partial X^I}\right)|_{X^*}.
\end{equation}
It ensures that the saddle point equations that we obtain from varying the renormalized action against $\phi$ are consistent with the attractor equations of motion. The localization integral is thus given by
\begin{equation}\label{4D loc integral}
Z_{AdS_2\times S^2}\sim \int \prod_{I=0}^{n_V} d\phi^I\exp{\left[-\pi q_I\phi^I+4\pi \text{Im}F(\phi+ip)\right]}.
\end{equation}
The symbol $\sim$ means that we are ignoring a measure factor. A measure was proposed in \cite{Murthy:2015yfa-1,Gupta:2015gga}, which includes the contribution of the localization one-loop determinants.
As an example, let us evaluate the integral (\ref{4D loc integral}) for the theory on $K3\times T^2$. The prepotential is
\begin{equation}
F(X,\hat{A})=\frac{X^1}{X^0}C_{ab}X^aX^b+\ln \eta^{24}\left(\frac{X^1}{X^0}\right)\hat{A},
\end{equation}
with $\eta^{24}(t)$ the worldsheet instanton partition function. We have used the fact that $D_{1ab}=C_{ab}$ with the other components zero. The integral (\ref{4D loc integral}) becomes
\begin{equation}
Z_{AdS_2\times S^2}\sim \int d\tau_1d\tau_2 \exp{\left[\pi \frac{|Q+\tau P|^2}{\tau_2}-\ln |\eta^{24}(\tau_1+i\tau_2)|^2\right]},\;\tau_1+i\tau_2=\frac{\phi^1+ip^1}{\phi^0},
\end{equation}
after evaluating the $\phi^a$ integrals, with $a=2\ldots n_V$, which are gaussian. We have defined $|Q+\tau P|^2\equiv Q^2+2\tau_1 Q.P+ |\tau|^2P^2$, with $Q^2,P^2,Q.P$ the T-duality invariants, which are quadratic in the charges $q,p$. Comparing with the microscopic answer (\ref{Siegel deg}) described in section \S\ref{deg 1/4 dyons}, we find agreement up to the measure factor that contains a derivative of $\ln |\eta^{24}|^2$.
We now turn to the localization computation in five dimensions. In contrast to four dimensions, the couplings between the Weyl multiplet and the vector-multiplets are determined completely by the constants $D_{abc}$ and $c_{2a}$. The first parametrizes the different couplings in the two derivative Lagrangian, while the second parametrizes the supersymmetrization of the gauge-gravitational Chern-Simons term
\begin{equation}
c_{2a}\int A^a\wedge R\wedge R,
\end{equation}
where $A^a$ is the gauge field in the vector-multiplet, and $R$ is the curvature two form.
The Lagrangian contains the following terms
\begin{equation}\label{5D lagrangian}
\mathcal{L}_{5D}=\mathcal{L}_{VVV}+\mathcal{L}_{\text{hyper}}+\mathcal{L}_{VWW}.
\end{equation}
$\mathcal{L}_{VVV}$ is the two derivative Lagrangian, which is cubic in the vector-multiplets. It contains the coupling of the vectors to the Weyl-multiplet, in particular to the Einstein-Hilbert term, and abelian Chern-Simons terms of the form $\int D_{abc}A^a\wedge F^b\wedge F^c$. $\mathcal{L}_{\text{hyper}}$ is the hypermultiplet Lagrangian and $\mathcal{L}_{VWW}$ contains the supersymmetrization of the gauge-gravitational Chern-Simons term, which is linear in the vector-multiplet fields and quadratic in the Weyl multiplet fields.
Supersymmetric localization of the five dimensional theory on $AdS_2\times S^1\times S^2$ goes along the lines described in \cite{Gomes:2013cca}. The off-shell reduction described in \cite{Banerjee:2011ts} plays a very important role and we use it extensively here. We can show that the five dimensional localization equations \cite{Gomes:2013cca} do not allow the fields to depend on the circle coordinate. Therefore, we can write the five dimensional equations in terms of the four dimensional ones, using the off-shell reduction of \cite{Banerjee:2011ts}. As a consequence, the five dimensional solutions are an uplift of the four dimensional ones.
The uplift goes as follows. Since the four dimensional scalar $X^0$ maps to the radius of the circle and the four dimensional metric is fixed by the localization equations to be of the form $AdS_2\times S^2$, the five dimensional metric becomes
\begin{equation}\label{metric localization}
ds^2=\vartheta\left[(r^2-1)d\tau^2+\frac{dr^2}{r^2-1}+\frac{1}{((\phi^0)^*+C^0/r)^2}(du+i(\phi^0)^*(r-1)d\tau)^2\right] +\vartheta ds^2_{S^2},
\end{equation}
where $(\phi^0)^*=\vartheta^{1/2}\text{Re}(X^0)^*$ is the on-shell value of $\phi^0$. The factor $\vartheta$ is a constant free parameter, since the theory is Weyl invariant by construction. Physically we need to use a gauge fixing condition, which makes $\vartheta$ a function of the charges. This was explained at the end of section \S\ref{sec Ideal sheaves}. Similarly, the five dimensional gauge fields are not fixed by the localization equations. The scalars $X^a$ map to the Wilson lines of the five dimensional gauge field, and so we obtain
\begin{equation}\label{localization 5D gauge field}
A_{5D}^a=-2\frac{(\phi^a)^*+\frac{C^a}{r}}{(\phi^0)^*+\frac{C^0}{r}}\left(du+i(\phi^0)^*(r-1)d\tau\right)+(A^a)^*_{4D},
\end{equation}
where $(\phi^a)^*=\vartheta^{1/2}\text{Re}(X^a)^*$ are the on-shell values of $\phi^a$ and $(A^a)^*_{4D}$ is the on-shell four dimensional gauge field. In addition, there is a map between the four and five dimensional auxiliary fields; we refer the reader to \cite{Banerjee:2011ts,Gomes:2013cca} for more details. For $C^0=C^a=0$ the metric becomes the locally $AdS_3$ metric (\ref{AdS3}) and the five dimensional gauge fields become flat, in agreement with the five dimensional attractor equations \cite{deWit:2009de}.
Before moving to the computation of the renormalized action, we discuss the boundary terms. These are necessary to ensure a well defined variational problem. At the boundary, the gauge fields have the form
\begin{equation}
A_{5D}^a\simeq A_f^a+p^aA_{\text{Dirac}},\;r\sim \infty,
\end{equation}
with $A_f$ a flat connection on $AdS_2\times S^1$ and $A_{\text{Dirac}}$ is the Dirac monopole gauge field, which is defined locally. In order to compute the boundary terms it is enough to consider the abelian Chern-Simons terms in $\mathcal{L}_{VVV}$. In contrast, the Maxwell terms give a contribution proportional to $dA_f$ that vanishes at the boundary. The boundary terms are thus of Chern-Simons type as discussed in \cite{Elitzur:1989nr}. Moreover, we need to include a Wilson line for the Kaluza-Klein gauge field, since we keep fixed the electric fields in the microcanonical ensemble \cite{Sen:2008vm}. Details about their computation can be found in \cite{Dabholkar:2014ema}. We will denote these boundary terms generically by $S_{\text{Bnd}}$.
The renormalized action in five dimensions consists of the following terms
\begin{equation}
\text{Ren }(S_{5D})=S_{\text{Bnd}}\left(A_{5D}^a(C^a,C^0)\right)+S_{\text{bulk}}\left(g_{\mu\nu}(C^0),A_{5D}^a(C^a,C^0)\right)+S_{\text{ct}}.
\end{equation}
$S_{\text{bulk}}$ is the bulk action based on the Lagrangian (\ref{5D lagrangian}) and $S_{\text{ct}}$ are boundary counter terms necessary to remove IR divergences. Computing the action above is a very complicated task, because the various fields, including the metric, have non-trivial radial profiles. Instead of performing the five dimensional computation directly, we can simplify the problem by reducing the different terms to four dimensions and then using the results of \cite{Dabholkar:2010uh} described at the beginning of this section. The reduction is possible because the fields do not carry any dependence on the circle coordinate. Nevertheless, this is still a complicated task because the five dimensional Lagrangian contains a large number of terms. Fortunately, this problem has been the object of study in recent years \cite{Butter:2013lta,Butter:2014iwa,Banerjee:2011ts}. The main conclusion is that under the reduction, the different four dimensional terms can be assembled into four dimensional supersymmetric invariants.
Succinctly, the reduction works as follows. The two derivative Lagrangian $\mathcal{L}_{VVV}$ gives rise to the four dimensional Lagrangian based on the holomorphic prepotential
\begin{equation}
F(X)=\frac{1}{6}D_{abc}\frac{X^aX^bX^c}{X^0}.
\end{equation}
On the other hand, the reduction of the higher derivative $\mathcal{L}_{VWW}$, gives rise to two different supersymmetric invariants. The first is the supersymmetrization of the Weyl squared tensor term, which together with the reduction of $\mathcal{L}_{VVV}$, can both be written in terms of a single supersymmetric invariant based on the holomorphic prepotential
\begin{equation}\label{classical prepotential reduction}
F(X,\hat{A})=D_{abc}\frac{X^aX^bX^c}{X^0}+c_{2a}\frac{X^a}{X^0}\hat{A}.
\end{equation}
This is the one-loop $\mathcal{N}=2$ prepotential (\ref{top free energy}), after neglecting the contribution of the world-sheet instantons. Since it depends only on the geometry we call it the classical prepotential. The second set of terms can be written in terms of a chiral superspace integral based on the non-linear Kinetic multiplet $\mathbb{T}(\ln \mathbf{X}^0)$ \cite{Butter:2013lta}. In superspace it has the form
\begin{equation}
ic_{2a}\int d^4x d^4\theta \frac{\mathbf{X}^a}{\mathbf{X}^0}\mathbb{T}(\ln \bar{\mathbf{X}}^0)\;+\text{h.c.},
\end{equation}
which is a particular case of the more general type of supersymmetric invariants
\begin{equation}\label{NL Kinetic multiplet}
\int d^4x d^4\theta \Phi'\mathbb{T}(\ln\bar{\Phi}_{\omega}),
\end{equation}
with $\Phi'$ a chiral superfield of Weyl weight zero and $\Phi_{\omega}$ a chiral superfield of Weyl weight $\omega$. The Kinetic chiral multiplet $\mathbb{T}(\ln\bar{\Phi})$ has non-linear supersymmetry transformations due to the anomalous transformation of $\ln\bar{\Phi}_{\omega}$ under Weyl transformations. For $\omega=0$, $\ln \bar{\Phi}_{\omega=0}$ is a chiral superfield with well defined Weyl transformations. In this case the invariant (\ref{NL Kinetic multiplet}) falls under the category of supersymmetric invariants studied in \cite{deWit:2010za}.
Following \cite{Butter:2013lta,Butter:2014iwa}, we can develop in components the Lagrangian density $\mathcal{L}_{NL}$ of the invariant (\ref{NL Kinetic multiplet}) as
\begin{align}
\label{NL action Components}
e^{-1} \mathcal{L}_{NL} =&\,
4\,\mathcal{D}^2 A'\,\mathcal{D}^2 \hat{\bar A}
+ 8\,\mathcal{D}^a A'\, \big[\mathcal{R}_{ab}
-\frac{1}{3} \mathcal R \,\eta_{ab}\big]\mathcal{D}^b \hat{\bar A}
+ C'\,\hat{\bar C}
\nonumber \\[.1ex]
&\,
- \mathcal{D}^\mu B'_{ij} \,\mathcal{D}_\mu \hat B^{ij}
+ (\frac{1}{6} \mathcal{R} +2\,D) \, B'_{ij} \hat B^{ij}
\nonumber\\[.1ex]
&\,
- \big[\varepsilon^{ik}\,B'_{ij} \,\hat F^{+\mu\nu} \,
R(\mathcal{V})_{\mu\nu}{}^{j}{}_{k}
+\varepsilon_{ik}\,\hat B^{ij}\,F'^{-\mu\nu} R(\mathcal{V})_{\mu\nu j}{}^k \big]
\nonumber\\[.1ex]
&\,
-8\, D\, \mathcal{D}^\mu A'\, \mathcal{D}_\mu \hat{\bar A}
+ \big(8\, \mathrm{i}\, R(A)_{\mu\nu}
+ 2\, T_\mu{}^{cij}\, T_{\nu cij}\big)
\mathcal{D}^\mu A' \,\mathcal{D}^\nu \hat{\bar A} \nonumber\\[.1ex]
&\,
-\big[\varepsilon^{ij} \mathcal{D}^\mu T_{bc ij}
\mathcal{D}_\mu A' \,\hat F^{+bc}
+ \varepsilon_{ij} \mathcal{D}^\mu T_{bc}{}^{ij}
\mathcal{D}_\mu \hat{\bar A}\,F'^{-bc}\big] \nonumber\\[.1ex]
&\,
-4\big[\varepsilon^{ij} T^{\mu b}{}_{ij}\,\mathcal{D}_\mu A'
\,\mathcal{D}^c \hat F^{+}_{cb}
+ \varepsilon_{ij} T^{\mu bij}\,\mathcal{D}_\mu \hat{\bar A}
\,\mathcal{D}^c F'^{-}_{cb}\big]
\nonumber\\[.1ex]
&\,
+ 8\, \mathcal{D}_a F'^{-ab}\, \mathcal{D}^c \hat F^+_{cb}
+ 4\,F'^{-ac}\, \hat F^+_{bc}\, \mathcal R_a{}^b
+\tfrac1{4} T_{ab}{}^{ij} \,T_{cdij} F'^{-ab} \hat F^{+cd}
\nonumber\\[.1ex]
&\,
+\omega\,\Big\{ - \frac{2}{3} \mathcal{D}^a A' \,\mathcal{D}_a
\mathcal{R} + 4 \mathcal{D}^a A'\, \mathcal{D}_a D
- T^{acij} T_{bc ij}\, \mathcal{D}^b \mathcal{D}_a A'
\nonumber\\[.1ex]
&\quad \qquad
- 2 \mathcal{D}^a F_{ab}'^- \,\mathcal{D}_c T^{cb}{}^{ij} \varepsilon_{ij}
+ \mathrm{i}\, F'^{-ab} R(A)_{ad}^- \,T_b{}^{dij} \varepsilon_{ij}
+ F_{ab}^- T^{ab ij} \varepsilon_{ij} (\frac{1}{12} \mathcal{R} - \frac{1}{2} D)
\nonumber\\[.1ex]
&\quad\qquad
+ A' \,\big[\tfrac{2}{3} \mathcal{R}^2 - 2\, \mathcal{R}^{ab} \mathcal{R}_{ab} - 6\, D^2
+ 2 \, R(A)^{ab} R(A)_{ab} - R(\mathcal{V})^{+ab}{}^i{}_j\, R(\mathcal{V})^+_{ab}{}^j{}_i
\nonumber \\
& \quad \qquad\qquad
+ \frac{1}{128} T^{ab ij} T_{ab}{}^{kl} T^{cd}{}_{ij} T_{cd kl}
+ T^{ac ij} D_a D^b T_{bc ij}\big]\Big\} \,~ +\text{total derivatives},
\end{align}
where we used the notation $A,\Psi_i,B_{ij},F_{ab}^{-},\Lambda_i,C$ for the components of a chiral superfield $\Phi$. The primed variables correspond to the components of the chiral multiplet $\Phi'$ and the hatted variables correspond to the components of $\ln\bar{\Phi}_{\omega}$; $e$ is the volume element. The remaining fields $D,T_{ab ij}, A_a,\mathcal{V}^i_{\mu j}$ are respectively the auxiliary scalar, the auxiliary antisymmetric tensor, and the $U(1)$ and $SU(2)$ R-symmetry gauge fields, while $\mathcal{R}_{ab}$ is the Ricci tensor. These fields sit in the Weyl multiplet.
For our problem we have $A'=A|_{X^a/X^0}$ and $\hat{\bar A}=A|_{\ln \bar{X}^0}$ and similarly for all the other components of the chiral multiplets. Developing the components of the chiral multiplets in terms of the vector-multiplet fields, we obtain a density $\mathcal{L}_{NL}$ that has a very complicated dependence on the fields. To compute its contribution on the localization solution we proceed as follows. First we note that in the four dimensional attractor background we have \cite{Mohaupt:2000mj,Dabholkar:2010uh}
\begin{eqnarray}
D=0,\;f_{\mu}^{a}=0,\;b_{\mu}=0,\;A_a=0,\;\mathcal{V}_{\mu i}^{ j}=0\\
\mathcal{R}_a^{b}=\frac{1}{16}T^{-}_{ac}T^{+cb},\;\mathcal{R}_a^a=0,\;\mathcal{D}_cT_{ab}^{-}=\mathcal{D}_cT_{ab}^{+}=0,
\end{eqnarray}
with all the other fermionic fields in the Weyl multiplet set to zero. This implies that the covariant derivative $\mathcal{D}_a$ and the superconformal invariant derivative $D_a$ become the usual covariant derivative with no dependence on the weight $\omega$. This is enough to show that the last two lines of (\ref{NL action Components}) vanish identically as noticed in \cite{Butter:2013lta}. Furthermore, the two lines
\begin{eqnarray}
&&\omega \Big\{- \frac{2}{3} \mathcal{D}^a A' \,\mathcal{D}_a
\mathcal{R} + 4 \mathcal{D}^a A'\, \mathcal{D}_a D
- T^{acij} T_{bc ij}\, \mathcal{D}^b \mathcal{D}_a A'
\nonumber\\[.1ex]
&&\quad \qquad
- 2 \mathcal{D}^a F_{ab}'^- \,\mathcal{D}_c T^{cb}{}^{ij} \varepsilon_{ij}
+ \mathrm{i}\, F'^{-ab} R(A)_{ad}^- \,T_b{}^{dij} \varepsilon_{ij}
+ F_{ab}^- T^{ab ij} \varepsilon_{ij} (\frac{1}{12} \mathcal{R} - \frac{1}{2} D)\Big\},
\nonumber
\end{eqnarray}
also vanish, except for the term $T^{acij} T_{bc ij}\, \mathcal{D}^b \mathcal{D}_a A'$ since $A'=X^a/X^0$ is not constant in the localization background. Nevertheless, that term can be replaced by $\mathcal{D}^b(T^{acij} T_{bc ij})\, \mathcal{D}_a A'$ after an integration by parts, which vanishes on the solution. Note that we are not taking into account the total derivatives in (\ref{NL action Components}), and so in practice that term is ambiguous \footnote{Total derivatives can nevertheless contribute to the renormalized action. However, such contributions must be consistent with the non-renormalization theorems of \cite{deWit:2010za,Butter:2014iwa}, which state that both the on-shell value of the BPS black hole entropy and the definition of the electric charges remain unaffected by adding the supersymmetric invariants based on the Kinetic multiplet. }. On the other hand, the remaining lines in (\ref{NL action Components}), those which do not come multiplied by $\omega$, give rise to terms that fall in the category of the D-type terms studied in \cite{deWit:2010za}. A few characteristic terms are \cite{Banerjee:2011ts}
\begin{align}
\label{D-term Kahler potential}
&\,
\frac{1}{4}\,\mathcal{H}_{IJ\bar K \bar L}
\big( F_{ab}^-{}^I\, F^{-ab\,J}
-\frac{1}{2} Y_{ij}{}^I\, Y^{ijJ} \big)
\big(F_{ab}^+{}^K \, F^{+ab\,L} -\frac{1}{2} Y^{ijK}\,
Y_{ij}{}^L \big)
\nonumber\\[.5ex]
&\,-\Big\{ \mathcal{H}_{IJ\bar K}\big(
F^{-ab\,I}\, F_{ab}^{-\,J} -\frac{1}{2} Y^I_{ij}\, Y^{Jij})
\big( \Box_\mathrm{c} X^K
+ \frac{1}{8} F^{-\,K}_{ab}\, T^{ab kl} \varepsilon_{kl}\big)
+\mathrm{h.c.}\Big\} \displaybreak[0] \nonumber\\[.5ex]
&\,+\mathcal{H}_{I\bar J}\Big[ 4\big( \Box_\mathrm{c} \bar X^I + \frac{1}{8}
F_{ab}^{+\,I}\, T^{ab}{}_{ij} \varepsilon^{ij}\big)
\big( \Box_\mathrm{c} X^J + \frac{1}{8} F_{ab}^{-\,J}\, T^{abij}
\varepsilon_{ij}\big) \nonumber\\
& \qquad\qquad +8\,\mathcal{D}_{a}F^{-\,abI\,}\,
\mathcal{D}_cF^{+c}{}_{b}{}^J - \mathcal{D}_a Y_{ij}{}^I\,
\mathcal{D}^a Y^{ij\,J}
\nonumber\\
&\qquad\qquad + 8\,\mathcal{R}^{\mu\nu}\, \mathcal{D}_\mu X^I
\,\mathcal{D}_\nu \bar X^J \nonumber\\
&\qquad\qquad -\big[\varepsilon^{ik}\, Y_{ij}{}^I\,(F^{+ab\,J}
-\frac{1}{4} X^J T^{ab}{}_{lm}\varepsilon^{lm} )\,
R(\mathcal{V})_{ab}{}^j{}_k +[\mathrm{h.c.}; I\leftrightarrow J]
\big] \Big]\nonumber \\
&\, +\cdots \,,
\end{align}
with
\begin{equation}
\mathcal{H}(X,\bar{X})=\frac{i}{384}c_{2a}\left(\frac{X^a}{X^0}\ln \bar{X}^0-\frac{\bar{X}^a}{\bar{X}^0}\ln X^0\right),
\end{equation}
the K\"{a}hler potential and $\mathcal{H}_{IJ\ldots}$ its derivatives. Plugging in the localization solution we still obtain a very complicated expression. Fortunately, this is precisely the problem studied in \cite{Murthy:2013xpa}, where it is shown that D-type terms of the form (\ref{D-term Kahler potential}) vanish identically on the localization solution.
In conclusion, only the reduction of $\mathcal{L}_{VVV}$ and the part of $\mathcal{L}_{VWW}$ that contains a coupling to a Weyl squared term survive once evaluated on the localization solution. The resulting Lagrangian density is based on the holomorphic prepotential (\ref{classical prepotential reduction}). Therefore, the renormalized action on $AdS_2\times S^1\times S^2$ reduces exactly to the four dimensional one based on the classical prepotential. That is,
\begin{equation}
\text{Ren}(S_{5D})=-\pi q_I\phi^I+4\pi\text{Im}F_{cl}(\phi+ip),
\end{equation}
with $F_{cl}(X)$ the classical prepotential (\ref{classical prepotential reduction}). Developing this expression further we obtain
\begin{equation}\label{Ren 5D}
\text{Ren}(S_{5D})=-\pi \hat{q}_0\phi^0 +\frac{\pi}{6}\frac{p^3+c_2\cdot p}{\phi^0}-\frac{\pi}{2\phi^0}D_{ab}(\phi^a+q^a\phi^0)(\phi^b+q^b\phi^0),
\end{equation}
with $D_{ab}=D_{abc}p^c$, $c_2\cdot p= c_{2a}p^a$, and $\hat{q}_0=q_0-D_{ab}q^aq^b/2$. The parameter $p^3+c_2\cdot p$ can be identified with the central charge of the dual CFT \cite{Maldacena:1997de}, which is also proportional to the $SL(2,\mathbb{R})_L$ Chern-Simons level of the three dimensional effective theory; similarly, $D_{ab}$ parametrizes the Chern-Simons couplings of the abelian gauge fields \cite{Dabholkar:2014ema,Gomes:2015xcf}. This gives further support to the observations made in \cite{Gomes:2015xcf}, which relate the quantum black hole entropy (\ref{Ren 5D}) to the Chern-Simons path integral. In particular, $-\pi \hat{q}_0\phi^0 +\frac{\pi}{6}(p^3+c_2\cdot p)/\phi^0$
can be identified with the Chern-Simons action of a flat $SL(2,\mathbb{R})_L$ connection. The quadratic term proportional to $D_{ab}$ can be identified with the contribution of zero modes of the abelian Chern-Simons terms.
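As a small consistency check of (\ref{Ren 5D}), we can extremize the renormalized action on the slice $\phi^a=-q^a\phi^0$, where the quadratic term vanishes and only the first two terms survive. Writing $\hat{q}_0=-|\hat{q}_0|$ and $c_L=p^3+c_2\cdot p$, the \texttt{sympy} sketch below (our symbol names) recovers the expected Cardy growth $2\pi\sqrt{c_L|\hat{q}_0|/6}$:
\begin{verbatim}
import sympy as sp

phi0, qhat, cL = sp.symbols('phi0 qhat cL', positive=True)  # qhat = |q0hat|
S = sp.pi*qhat*phi0 + sp.pi*cL/(6*phi0)   # Ren(S) on the slice phi^a = -q^a phi^0
phi0_star = sp.solve(sp.diff(S, phi0), phi0)[0]
print(phi0_star)                             # equivalent to sqrt(cL/(6*qhat))
print(sp.simplify(S.subs(phi0, phi0_star)))  # 2*pi*sqrt(cL*qhat/6): Cardy growth
\end{verbatim}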
To compute the one-loop determinants we follow essentially the discussion in \cite{Gomes:2015xcf}, which provides a simple derivation of the measure using a connection to supersymmetric Chern-Simons theory on $AdS_2\times S^1$. The derivation of the measure relies on the fact that the structure of perturbative corrections in the Cardy limit, which is given by the most polar Bessel function, is in essence determined by the modular properties of the dual $\text{CFT}_2$ partition function. This can be used to determine the measure and hence the one-loop determinants. Here we give a more refined version of that derivation and argue that the localization one-loop determinants are consistent with the Chern-Simons computation.
In the four dimensional problem, the localization deformation was quadratic in the fields. As a consequence the one-loop determinants could not carry any dependence on the off-shell solution \footnote{In \cite{Murthy:2015yfa,Gupta:2015gga} the one-loop determinants depend explicitly on the localization manifold because the conformal factor of the metric is chosen to be field dependent. Nevertheless, such an approach requires the inclusion of the Weyl multiplet in the localization determinants together with an understanding of the conformal gauge fixing procedure, features that are not clarified in those works. }. In contrast, in five dimensions, the metric is fluctuating and so, on general grounds, we expect the one-loop determinant to be a function of the localization modes $\phi^0$, $\phi^a$ and the physical size $\vartheta(p)$. Furthermore, given the off-shell nature of the localization computation, the one-loop determinants cannot depend on the couplings of the theory. In our problem these are effectively determined by the Chern-Simons couplings $\tilde{k}_L\propto p^3+c_2\cdot p$, $\tilde{k}_R\propto p^3+c_2\cdot p/2$ and $D_{ab}\sim p_a$, as we can see from the renormalized off-shell action. Another aspect to take into account is the fact that the localization deformation is Weyl invariant \footnote{The localization deformation $t QV$, with $V$ fermionic, is of the form $t\int \sqrt{g}\mathcal{L}_{\text{loc}}$. Since the path integral is $t$ invariant we must have that $\sqrt{g}\mathcal{L}_{\text{loc}}$ is Weyl invariant; otherwise, we could absorb the $t$ dependence in a Weyl rescaling of the metric. } and in odd dimensions Weyl invariance is preserved also at the quantum level. This means that the one-loop determinants, that is, the determinants over the non-zero modes, cannot carry any dependence on $\vartheta(p)$, but only on the fields $\phi^0,\phi^a$ that are scale invariant. A dependence on $\vartheta(p)$ can arise due to zero modes, which we know are present in the $AdS_2$ path integral \cite{Sen:2009vz,Gomes:2015xcf,Banerjee:2009af}. Therefore, we can conclude that the one-loop contribution must have the form $Z^{\text{Loc}}_{\text{1-loop}}=\vartheta^{\alpha}f(\phi^0,\phi^a)$ with $\alpha$ determined by a zero mode counting. At this point we assume that $f(\phi^0,\phi^a)$ can depend on $\phi^0,\phi^a$ only polynomially. If this were not the case, the presence of exponential terms would correct the various terms in the renormalized action (\ref{Ren 5D}) and change the saddle point equations. As a consequence, we would find that the on-shell values of $\phi^0,\phi^a$ were no longer the ones determined by the physical attractor geometry. It would be important, nevertheless, to check by explicit computation that integration over the Kaluza-Klein modes on the circle does not give rise to such exponential terms in the one-loop determinants. Later we use the chiral primary picture of section \S\ref{sec polar states} to argue that this is the case.
To determine $\vartheta^{\alpha}f(\phi^0,\phi^a)$ we can compare with the one-loop computation in the supersymmetric Chern-Simons theory \cite{Gomes:2015xcf}. For a Chern-Simons theory based on the gauge group $SL(2,\mathbb{R})_L\times SU(1,1|2)_R\times U(1)^{b_2}$ we compute the one-loop correction to the partition function as $Z=e^{\text{CS}(A)}Z^{\text{CS}}_{\text{1-loop}}$, where $\text{CS}(A)$ is the Chern-Simons action of a flat connection. The classical part, given by the action of the flat connection $\text{CS}(A)$, can be shown to match the renormalized entropy function (\ref{Ren 5D}) \cite{Dabholkar:2014ema,Gomes:2015xcf}. This gives
\begin{equation}\label{Z_CS one-loop}
Z^{\text{CS}}_{\text{1-loop}}=\vartheta\;\frac{(\phi^0)^{b_2/2+1/2}}{\sqrt{\tilde{k}_L\text{det}(D_{ab})}},
\end{equation}
where $\tilde{k}_L$ is the $SL(2,\mathbb{R})_L$ level and $D_{ab}$ parametrize the abelian $U(1)$ levels. On the other hand, from the localization computation we obtain, after extremization of the finite dimensional integral, the following one-loop correction
\begin{equation}\label{Z saddle one-loop}
Z_{\text{1-loop}}=\vartheta^{\alpha}f(\phi^0,\phi^a)\frac{(\phi^0)^{b_2/2+3/2}}{\sqrt{\tilde{k}_L\text{det}(D_{ab})}}.
\end{equation}
The term $(\phi^0)^{b_2/2+3/2}/\sqrt{\tilde{k}_L\text{det}(D_{ab})}$ comes from evaluating the gaussian integrals at the saddle of the renormalized action (\ref{Ren 5D}). The saddle point approximation only requires that $|\hat{q}_0 p^3|\gg 1$, since we have $-\pi \hat{q}_0\phi^0 +\frac{\pi}{6}\frac{p^3}{\phi^0}\sim \sqrt{|\hat{q}_0 p^3|}(x+1/x)$, with $x\sim \phi^0 \sqrt{|\hat{q}_0|/p^3}$. At the extremum, we have $x\sim 1$ and thus the saddle value of $\phi^0$ can range from small values, for $|\hat{q}_0|\gg p^3$, to large values for $|\hat{q}_0|\ll p^3$, while keeping $|\hat{q}_0 p^3|\gg 1$. Since the Chern-Simons computation is valid for any value of $\phi^0$ \footnote{In the Chern-Simons computation the value of $\phi^0$ corresponds to a choice of metric, and hence it is equivalent to a choice of gauge.}, comparing the expressions (\ref{Z_CS one-loop}) and (\ref{Z saddle one-loop}), we must have that the localization one-loop determinant is given by
\begin{equation}\label{1-loop localization}
Z^{\text{Loc}}_{\text{1-loop}}=\frac{\vartheta}{\phi^0}.
\end{equation}
We can check that the chiral primary computation of section \S\ref{sec polar states} reproduces the result (\ref{1-loop localization}). To do that, note that we have already included the effect of the massive hypermultiplets by means of the shift in the parameter $c_2$ of the five dimensional Lagrangian. Therefore, we need only consider the supergravity modes, which include the effect of the graviton multiplet, $n_V$ vector multiplets and $n_H$ massless hypermultiplets. The idea is to repeat the computation of section \S\ref{sec polar states} for these modes. Since we have $n_V=n_H$ for the $\mathcal{N}=4$ theory, we can show that the supergravity modes cancel exactly for any value of $L_0>0$ in the trace (\ref{Ztop^2}). This is so because the canonical partition function for these modes is proportional to the MacMahon function raised to the power $\chi=-2(n_V-n_H)$ \cite{Gaiotto:2006ns}, which is also the Euler character of the Calabi-Yau. Therefore, for this supergravity theory we have only one polar term, that is, the trace over the chiral primaries is trivial. From this point of view, we also see that the five dimensional supergravity theory does not lead to problems related to the stringy exclusion principle. To obtain the black hole entropy we must perform a modular transformation as in (\ref{modular transf polar states}). The degeneracy becomes
\begin{eqnarray}
d_{\text{BH}}(n,l_a)|_{\text{Sugra}}&=&\frac{c_R}{6}\int \prod_{a=1}^{b_2} dz^a\, d\tau\, \tau^{-\omega} \exp{\left(\pi i\frac{D_{ab} z^az^b}{\tau}+\frac{\pi i}{12}\frac{c_L}{\tau}-2\pi i\tau n-2\pi i z^al_a\right)},
\end{eqnarray}
where $c_R$ and $c_L$ are respectively $c_R=p^3+\hat{c}_2\cdot p/2$ and $c_L=p^3+\hat{c}_2\cdot p$. As explained before, the $c_R$ factor in the measure comes from counting states in the angular momentum multiplet. Using the map
\begin{equation}
\tau=\frac{i}{2}\phi^0,\;z^a=\frac{\phi^a}{2i},\;n=-q_0,\,l_a=q_a,
\end{equation}
we can show that the integrand above is the exponential of the entropy function computed using localization. Moreover, from the analysis in section \S\ref{sec polar states} we have found $\omega=1$, and so we conclude that the one-loop determinant is $Z_{1-\text{loop}}^{\text{Loc}}=c_R/\phi^0$. From the analysis at the end of section \S \ref{sec Ideal sheaves}, we have found that the physical size $\vartheta$ of the geometry is proportional to $c_R$, which allows us to reproduce the result (\ref{1-loop localization}), as we wanted to show.
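The statement that the integrand maps to the exponential of the entropy function can also be verified symbolically. The following one-modulus \texttt{sympy} sketch (our notation; for simplicity we set $q^a=0$, so that $\hat{q}_0=q_0$) checks that the map above turns the CFT exponent into the exponent appearing in (\ref{Z_5D localization}):
\begin{verbatim}
import sympy as sp

phi0, phi1, D, cL, n, l = sp.symbols('phi0 phi1 D cL n l')
tau = sp.I*phi0/2                    # tau = i phi^0 / 2
z   = phi1/(2*sp.I)                  # z^a = phi^a / (2i)

cft = (sp.I*sp.pi*D*z**2/tau + sp.I*sp.pi*cL/(12*tau)
       - 2*sp.pi*sp.I*tau*n - 2*sp.pi*sp.I*z*l)
loc = (sp.pi*phi0*n + sp.pi*cL/(6*phi0)
       - sp.pi*D*phi1**2/(2*phi0) - sp.pi*l*phi1)  # entropy function, n = -q_0
print(sp.simplify(cft - loc))        # 0
\end{verbatim}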
The full partition function, including the one-loop determinants, is therefore
\begin{equation}\label{Z_5D localization}
\int_{\mathcal{C}} \prod^{b_2}_{I=0}d\phi^I\,\frac{\vartheta}{\phi^0}\, \exp{\left[-\pi \hat{q}_0\phi^0 +\frac{\pi}{6}\frac{p^3+c_2\cdot p}{\phi^0}-\frac{\pi}{2\phi^0}D_{ab}(\phi^a+q^a\phi^0)(\phi^b+q^b\phi^0)\right]}.
\end{equation}
To determine the integration contour $\mathcal{C}$ we proceed as follows. In the path integral, the measure includes an integration over the five dimensional metric and the gauge fields. Therefore, in the finite dimensional integral (\ref{Z_5D localization}) the appropriate variables of integration are the radius\footnote{To be more precise we are integrating over the vielbein $dR\sim de_u$. } $R\sim 1/\phi^0$ and the Wilson lines $A_{u}^a\sim \phi^a$, and so the integration measure must be proportional to $dR\,d\phi^a$. Furthermore, in the Euclidean four dimensional supergravity theory one has an $SO(1,1)$ R-symmetry instead of the usual $U(1)$ in the Minkowski theory. From a five dimensional point of view this effectively amounts to reducing the theory on a time-like circle \cite{Cortes:2003zd} instead of the euclidean circle. Looking at the geometry (\ref{metric localization}) we see that we must integrate $R\sim 1/\phi^0$ over the imaginary axis, which determines the contour. To avoid the singularity at $R=0$, we take $R$ along the contour $\mathcal{C}_R=]-i\infty,-i\epsilon]\cup C_{\epsilon}\cup [i\epsilon,+i\infty]$ with $C_{\epsilon}$ a semicircle of radius $\epsilon\ll 1$ going around the origin in the anti-clockwise direction, as depicted in figure (\ref{fig:contour}) (the integral would be zero if it circled the origin in the clockwise direction). Moreover, the matrix $D_{ab}$ is not positive definite \footnote{The Hodge theorem ensures that for an $SU(3)$ Calabi-Yau the matrix $D_{ab}$ has exactly one negative eigenvalue \cite{Maldacena:1997de}, while for other Calabi-Yau manifolds there can be more than one.}, which renders the gaussian integral in (\ref{Z_5D localization}) ill-defined when $\text{Re}(R)>0$, or equivalently for $R\in C_{\epsilon}$. In a diagonal basis for $D_{ab}$, with $D_{ab}\phi^a\phi^b=\lambda_{+}\tilde{\phi}_{+}^2-\lambda_{-}\tilde{\phi}_{-}^2$ and $\lambda_{+},\lambda_{-}>0$, the solution is to analytically continue $\tilde{\phi}_{-}$ to imaginary values, making the gaussian integral convergent. Deforming the contour as described in figure (\ref{fig:contour}), the final integral is a modified Bessel function of the first kind, that is,
\begin{equation}
Z_{AdS_2\times S^1\times S^2}=\frac{\vartheta}{\sqrt{\text{det}(D)}}\int_{\epsilon'-i\infty}^{\epsilon'+i\infty} \frac{dR}{R^{1+b_2/2}}\exp{\left[-\pi \frac{\hat{q}_0}{R} +\frac{\pi}{6}(p^3+c_2\cdot p)R\right]}.
\end{equation}
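As a quick numerical cross-check of this contour representation, one can verify the identity $\frac{1}{2\pi i}\int_{\epsilon'-i\infty}^{\epsilon'+i\infty} dR\, R^{-\nu-1}\,e^{aR+b/R}=(a/b)^{\nu/2} I_{\nu}(2\sqrt{ab})$ for $a,b>0$; in the case at hand $a=\frac{\pi}{6}(p^3+c_2\cdot p)$, $b=-\pi\hat{q}_0$ and $\nu=b_2/2$. The sketch below (in Python, with placeholder values for $a$, $b$ and $\nu$ rather than the charges of any particular compactification) does this by direct quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

# Illustrative placeholders: a ~ pi*(p^3 + c2.p)/6, b ~ -pi*q0_hat, nu = b2/2
a, b, nu, eps = 2.0, 3.0, 1.5, 1.0

def integrand(t):
    R = eps + 1j*t                        # vertical contour Re(R) = eps
    return R**(-nu - 1.0)*np.exp(a*R + b/R)

re, _ = quad(lambda t: integrand(t).real, -np.inf, np.inf, limit=500)
im, _ = quad(lambda t: integrand(t).imag, -np.inf, np.inf, limit=500)
contour = (re + 1j*im)/(2.0*np.pi)        # absorbs dR = i dt and 1/(2 pi i)
bessel = (a/b)**(nu/2.0)*iv(nu, 2.0*np.sqrt(a*b))
print(contour.real, bessel)               # the two values agree
\end{verbatim}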
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,-2) -- (0,2) ;
\draw (-2,0) -- (2,0) ;
\node at (0,0) {$\times$};
\draw[thick,red,xshift=2pt,
decoration={ markings,
mark=at position 0.2 with {\arrow{latex}},
mark=at position 0.98 with {\arrow{latex}}
},
postaction={decorate}]
(0,-2) -- (0,-0.2) arc (-90:90:.2) -- (0,2);
\draw[red,thick,dashed,
decoration={ markings,
mark=at position 0.1 with {\arrow{latex}},
mark=at position 0.5 with {\arrow{latex}},
mark=at position 0.93 with {\arrow{latex}}
},
postaction={decorate}]
(0.05,-2)--(1,-2) -- (1,2) -- (0.05,2);
\node at (0.5,0.2) {$C_{\varepsilon}$};
\node at (0.5,2.3) {$C_3$};
\node at (0.5,-2.3) {$C_1$};
\node at (1.3,1) {$C_2$};
\node at (0.4,1) {$\mathcal{C}_R$};
\end{tikzpicture}
\caption{Contour deformation for the integral $\int dz\, \frac{e^{az+b/z}}{z^c}$ with $a,b>0$ and $c>1$. $C_{\varepsilon}$ denotes the semicircular contour of radius $\varepsilon$ and $\mathcal{C}_R$ is the original contour. It is easy to show that the integrals along the contours $C_1$ and $C_3$ vanish when we take $\text{Im}\,z\rightarrow +\infty$ along $C_3$ and $\text{Im}\,z\rightarrow -\infty$ along $C_1$. Since there is no pole inside the contour, the integral along $\mathcal{C}_R$ must equal the integral along $C_2$. In turn, the integral along $C_2$ is precisely the modified Bessel function of the first kind.} \label{fig:contour}
\end{figure}
\subsection{Subleading Bessel contributions}\label{sec subleading Bessels}
The global $AdS_3\times S^2$ solutions described in section \S\ref{sec Ideal sheaves} can be used to generate black hole $AdS_2\times S^1\times S^2$ geometries. For example, start with the $AdS_3$ global metric
\begin{eqnarray}
ds^2_{AdS_3}=4\left(-\cosh^2\eta d\tau^2+d\eta^2+\sinh^2\eta d\sigma^2\right)\nonumber,
\end{eqnarray}
and write $y=\tau-\sigma$ and $\rho=2\eta$ such that, after some algebra,
\begin{equation}
ds^2_{AdS_3}=-4\left(d\tau +\frac{1}{2}(\cosh\rho-1)dy\right)^2+d\rho^2+\sinh^2\rho dy^2.
\end{equation}
For the purpose of computing the Euclidean path integral we can declare $y$ to be the Euclidean time and $\tau$ the spatial circle instead. Then the $AdS_2$ factor $d\rho^2+\sinh^2\rho\, dy^2$, together with the sphere, becomes the Euclidean near-horizon geometry of the four dimensional black hole \cite{Sen:2008vm}. The spatial circle has, nevertheless, a time-like signature, and so we have to consider its Wick rotation. In particular, we consider the identification $i\tau\sim i\tau+ 2\pi/\phi^0$; similar details concerning the analytic continuation were considered in the context of string theory on $\mathbb{R}^4\times S^1$ in \cite{Dedushenko:2014nya}. The metric becomes
\begin{equation}
ds^2_{AdS_2\times S^1}=\frac{4}{(\phi^0)^2}\left(du +\frac{i}{2}\phi^0(\cosh\rho-1)dy\right)^2+ds^2_{AdS_2},
\end{equation}
where we have defined $u=i\tau \phi^0$ with periodicity $u\sim u+2\pi$. This metric can also be seen as the extremal limit of a BTZ black hole in $AdS_3$. Physical considerations for this spacetime in the context of the entropy function were made in \cite{Gupta:2008ki,Murthy:2009dq}.
Furthermore, on top of the geometry we need to consider the effect of the singular gauge field configurations, which generate spectral flow transformations. This is motivated by the gauge field configuration (\ref{attractor fields}) that one obtains in the decoupling limit of the polar configurations. Asymptotically we have
\begin{equation}\label{5d connection after flux}
A^{a}_{5D}\simeq -p^a \cos\theta (d\phi+\tilde{A})-i2D^{ab}\frac{\Delta\beta_b}{\phi^0} du -2D^{ab}\Delta\beta_b dy,
\end{equation}
after analytically continuing the time-like circle, as described above. We see that the total M2 charge contribution $\Delta\beta=\beta-\bar{\beta}$ induces the gauge transformation
\begin{equation}\label{M2 singular connection}
-2D^{ab}\Delta\beta_a dy.
\end{equation}
Nevertheless, this gauge transformation is not well defined because the circle parametrized by $y$ is contractible at the origin $\rho=0$, which gives rise to a delta function singularity in its field strength. Similar gauge field configurations were discussed in \cite{Dijkgraaf:2000fq}.
Note that $\Delta\beta^a\equiv D^{ab}\Delta\beta_a$ lives in $\Lambda^*/\Lambda$, where $\Lambda$ corresponds to the lattice $k_a \in \mathbb{Z}$, while $\Lambda^*$ is the dual lattice. As $D_{ab}$ is not unimodular (its determinant is not one), $\Delta\beta^a$ is not necessarily an integer, and so the holonomies $\exp{\oint A_{5D}}$ around the contractible cycle can be non-trivial.
The connection (\ref{M2 singular connection}) is thus a large gauge transformation, and we want to understand its contribution to the localization computation. We will argue that its contribution to the renormalized action comes solely from the abelian Chern-Simons terms, since these are the only terms in the Lagrangian that are not gauge invariant. To deal with the delta function singularity we remove a disk of radius $\epsilon$ around the origin\footnote{The space has topology $D\times S^1$ where $D$ is a disk.} and at the end of the computation we take the limit in which $\epsilon$ goes to zero. This way we ensure that the localization equations are left unchanged, as they depend only on the field strengths. There is a subtlety in this procedure. It turns out that we also need to supplement the Chern-Simons integral with a boundary term at $r=\epsilon$, the boundary of the excised disk. This term is necessary to correctly take into account the delta function singularity. To exemplify this, consider the integral on the disk $D$
\begin{equation}
I=\int_{D} f(r) F,
\end{equation}
where $F=d(d\theta)$, with $\theta$ the angle on the disk $D$, and $f(r)$ is a test function. Since we have $F=\delta(r)dr\wedge d\theta$, the integral gives $I=2\pi f(0)$. In contrast, if we put a regulator at $r=\epsilon$ we would find zero, since in that case we have $F=0$ everywhere outside the origin. Instead, the regulated integral should have the form
\begin{equation}\label{reg at the origin}
I_{\epsilon}=\int_{D_{\epsilon}} f(r) F+\int_{\partial D_{\epsilon} }f(\epsilon)A,
\end{equation}
with $D_{\epsilon}$ the regulated disk and $\partial D_{\epsilon}$ being the boundary of the inner disk. In the limit when $\epsilon\rightarrow 0$ we recover the result $I=2\pi f(0)$. This example can be easily adapted to the Chern-Simons form $A\wedge F$, in which case the inner boundary term becomes $\int A_{\theta}A_{u}$, with $A_{u}$ the component along the circle $S^1$.
We now discuss the contribution of the large gauge transformation to the renormalized action, coming from the Chern-Simons action. To simplify the problem, we first integrate over the sphere. The five dimensional Chern-Simons terms give rise to three dimensional abelian Chern-Simons terms
\begin{equation}\label{Chern-Simons terms}
\frac{i}{192 \pi^2}D_{abc}\int A^a\wedge F^b\wedge F^c\rightarrow \frac{i}{16\pi}D_{abc}p^c\int A^a\wedge F^b.
\end{equation}
By the localization equations, the five dimensional gauge field has the form (\ref{localization 5D gauge field})
\begin{eqnarray}\label{localization sol}
A^{a}_{5D}= -2\frac{\phi^a(\rho)}{\phi^0(\rho)}(du +A^{*0}) +A^{*a}_{4D}-2D^{ab}\Delta\beta_b dy,
\end{eqnarray}
where $A^{*0}$ and $A^{*a}_{4D}$ have the attractor values of the unperturbed solution. In contrast the fields $\phi^a$ and $\phi^0$ have non-trivial radial profiles. The boundary conditions are such that at infinity one has
\begin{equation}\label{boundary phi^a}
\lim_{\rho\rightarrow\infty}\frac{\phi^a}{\phi^0}=-q^a,
\end{equation}
which follows from the equations of motion. Note that due to the Chern-Simons terms only the component of the gauge field along $du$, which is proportional to $q^a$ at infinity, is kept fixed, while the component along $dy$ is allowed to fluctuate \cite{Dabholkar:2014ema}. To simplify the discussion we consider the case of only one gauge field. Due to the boundary conditions, the Chern-Simons action has to be supplemented with a boundary action, that is,
\begin{equation}\label{simple CS action}
\int_{D\times S^1} A\wedge F+\int_{T^2}A_y A_u,
\end{equation}
with $A_u$ fixed while $A_y$ is allowed to fluctuate. We write the gauge field as a non-singular part $A_{\text{n.s.}}$ plus a singular part, which is the gauge transformation $d\Lambda=d\Lambda_y\, dy$. Then the action (\ref{simple CS action}) splits into a term that depends only on $A_{\text{n.s.}}$ and a term linear in $d\Lambda$. The first joins the remaining terms in the Lagrangian to give the renormalized action (\ref{Ren 5D}). On the other hand, the term linear in the gauge transformation is
\begin{equation}\label{large gauge transf action}
2\int_{D_{\epsilon}\times S^1} d\Lambda\wedge F_{\text{n.s}} +\int_{D_{\epsilon}\times S^1} d(d\Lambda\wedge A_{\text{n.s}})+ \int_{T^2} (d\Lambda)_y A^{\text{n.s }}_u.
\end{equation}
The term $\int d(d\Lambda\wedge A_{\text{n.s}})$ gives rise to the integral of a delta function, and so we need the regularization term at the inner boundary of $D_{\epsilon}$, as in (\ref{reg at the origin}). Being a total derivative, it gives rise to two contributions. The first is a boundary term at infinity that cancels the third term. The second is a term at the inner boundary $\partial D_{\epsilon}$, but this cancels against a similar term in the regularization (\ref{reg at the origin}). In the present problem the components of $F$ which are relevant are those coming from the term $-2\partial_{\rho}(\phi^a/\phi^0)d\rho\wedge du$. Then the term $d\Lambda\wedge F$ gives rise to a total derivative that we can calculate. Specializing to the Chern-Simons terms (\ref{Chern-Simons terms}), the expression (\ref{large gauge transf action}) gives
\begin{equation}
2\pi i \left.\frac{\phi^a}{\phi^0}\right|^{\rho=\infty}_{\rho=0}\Delta\beta_a=-2\pi iq^a\Delta\beta_a-2\pi i\left.\frac{\phi^a}{\phi^0}\right|_{\rho=0}\Delta\beta_a\ ,
\end{equation}
where $\left.\frac{\phi^a}{\phi^0}\right|_{\rho=0}$ is the field computed at the origin, which coincides with the variables in the renormalized action. The final result for the renormalized action including the effect of the large gauge transformation is, therefore,
\begin{eqnarray}\label{full renormalized S}
\text{Ren}(S_{5D})|_{ A+d\Lambda}=-\pi q_I\phi^I+\frac{\pi}{6}\frac{p^3+\hat{c}_{2}\cdot p}{\phi^0}-\frac{\pi}{2}\frac{D_{ab}\phi^a\phi^b}{\phi^0}-2\pi i\frac{\phi^a}{\phi^0}\Delta\beta_a-2\pi iq^a\Delta\beta_a,
\end{eqnarray}
where we have included the effect of the shift $c_2\rightarrow c_2-12(\beta+\bar{\beta})$ in $\hat{c}_2$. After some algebra this can be written as
\begin{eqnarray}\label{renorm action w large gauge transf}
-\pi \hat{q}_0\phi^0 +\frac{\pi}{6}\frac{p^3+\hat{c}_{2}\cdot p-12D_{ab}\Delta\beta^a\Delta\beta^b}{\phi^0}
-\frac{\pi}{2\phi^0}D_{ab}(\phi^a+q^a\phi^0+2i\Delta\beta^a)(\phi^b+q^b\phi^0+2i\Delta\beta^b)\nonumber,
\end{eqnarray}
with $\hat{q}_0=q_0-D_{ab}q^aq^b/2$.
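Explicitly, the algebra is a completion of the square (using $q_a=D_{ab}q^b$ and $\Delta\beta^a=D^{ab}\Delta\beta_b$): expanding the quadratic form,
\begin{eqnarray*}
-\frac{\pi}{2\phi^0}D_{ab}(\phi^a+q^a\phi^0+2i\Delta\beta^a)(\phi^b+q^b\phi^0+2i\Delta\beta^b) &=&
-\frac{\pi}{2}\frac{D_{ab}\phi^a\phi^b}{\phi^0}-\pi q_a\phi^a-\frac{\pi}{2}D_{ab}q^aq^b\,\phi^0 \\
&& -\,2\pi i\frac{\phi^a}{\phi^0}\Delta\beta_a-2\pi i q^a\Delta\beta_a+\frac{2\pi}{\phi^0}D_{ab}\Delta\beta^a\Delta\beta^b\,,
\end{eqnarray*}
the last term cancels against the $-12D_{ab}\Delta\beta^a\Delta\beta^b$ shift in the numerator of (\ref{renorm action w large gauge transf}), while $-\frac{\pi}{2}D_{ab}q^aq^b\phi^0$ combines with $-\pi q_0\phi^0$ into $-\pi\hat{q}_0\phi^0$, reproducing (\ref{full renormalized S}).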
Note that by extremizing $\text{Ren}(S_{5D})$, the saddle values of $\phi^0,\phi^a$ differ from the attractor values (\ref{attractor equations}). The difference is due to the terms proportional to $\Delta\beta$, which do not appear in the classical prepotential used to compute the on-shell values. At first sight, this could look puzzling. The point is that the fields $\phi^0,\phi^a$ in the renormalized action (\ref{full renormalized S}) are the values of the fields computed at the origin, which can differ from the values at the boundary. The mismatch is due to the singular gauge transformation, whose contribution to the action behaves more like a non-local term. As a side comment, note that if (\ref{renorm action w large gauge transf}) were the result of a local contribution, then by the equations of motion we would find the attractor value $\phi^a=-2i\Delta\beta^a$ for $q^a=0$. We would then recover precisely the asymptotic value of the five dimensional gauge field given in (\ref{5d connection after flux}).
The localization one-loop determinant works much the same way as discussed in the previous section. The only difference is the singular gauge transformation. Nevertheless, since the localization action depends only on the field strengths, given the regularization procedure, the determinant is not expected to depend on the effect of the large gauge transformation. Therefore, for the purpose of computing the one-loop determinant we can set $\Delta\beta_a=0$. Hence, using (\ref{1-loop localization}) we obtain
\begin{equation}
Z^{\text{Loc}}_{1-\text{loop}}=\frac{\vartheta(p,\beta,\bar{\beta})}{\phi^0}=\frac{1}{\phi^0}\left(\frac{p^3}{6}+\frac{\hat{c}_2\cdot p}{12}\right),
\end{equation}
where $\vartheta$ is the physical size of the geometry (\ref{condition on the size}), which contains the backreaction of the fluxes. Note that this expression is precisely the K\"{a}hler potential of the four dimensional theory, computed with the classical prepotential (\ref{classical prepotential reduction}).
We are ready to assemble the $\mathcal{N}=4$ answer. In both the $K3\times T^2$ and $T^4\times T^2$ CHL compactifications we have $\beta_a=\beta\delta_{1a}$ and $\bar{\beta}_a=\bar{\beta}\delta_{1a}$, following the description in terms of M2 and anti-M2 branes of section \S \ref{sec polar states}. For the $\mathcal{N}=8$ compactification one is effectively turning off the fluxes, since the index vanishes otherwise. Furthermore, the intersection matrix is $D_{ab1}=D_{a1b}=D_{1ab}=C_{ab}$ with the other components zero, and the second Chern-class is $c_{2a}=24 n_p\delta_{1a}$ with $n_p=1,0$ for $K3$ and $T^4$ respectively. We compute
\begin{eqnarray}
&&\frac{p^3}{6}=\frac{p^1}{2}P^2,\;\;\frac{\hat{c}_2\cdot p}{12}=p^1\left(2n_p-(\beta+\bar{\beta})\right),\;\;D_{ab}\Delta\beta^a\Delta\beta^b=\Delta\beta^1\Delta\beta_1=-\frac{p^1}{P^2}(\beta-\bar{\beta})^2\nonumber,
\end{eqnarray}
where we defined $P^2=C_{ab}p^ap^b$. This gives
\begin{eqnarray}
\frac{1}{6}\left(p^3+\hat{c}_{2}\cdot p-12D_{ab}\Delta\beta^a\Delta\beta^b\right)=4p^1\left[\frac{(P^2/2-(\beta-\bar{\beta}))^2}{2P^2}-\bar{\beta}+n_p\right],
\end{eqnarray}
and
\begin{equation}
\vartheta(p,\beta,\bar{\beta})=p^1(P^2/2+2n_p-(\beta+\bar{\beta})).
\end{equation}
The full $\mathcal{N}=4$ answer is therefore the sum of the five dimensional partition function for each of the saddles parametrized by $\beta,\bar{\beta}$, that is,
\begin{equation}
Z_{\mathcal{N}=4}(q_I,p^I)=\sum_{\substack{\beta,\bar{\beta}\,\geq 0\\ \vartheta(p,\beta,\bar{\beta})>0}} \mathcal{D}(\beta)\mathcal{D}(\bar{\beta})Z_{5D}(q_I,p^I,\beta,\bar{\beta}),
\end{equation}
where $\mathcal{D}(\beta),\mathcal{D}(\bar{\beta})$ represent a measure, or better an Euler characteristic, for the ideal sheaf contributions; in the M2 brane picture this measure is given by an index. It would be important to understand more clearly how these Euler characteristics are computed from the full M-theory path integral. Nevertheless, if we use the M2-brane counting we find that $\mathcal{D}(\beta)=\oint \frac{dq}{q^{\beta+1}}\frac{1}{g(q)}$\footnote{It is possible that the ideal sheaf counting differs from the M2-brane index. The reason is that while the index is computed in flat spacetime transverse to the Calabi-Yau, the ideal sheaf moduli space might carry an $AdS_2\times S^2$ factor instead of $\mathbb{R}^4$. The instanton computation on $AdS_2\times S^2$ of \cite{Beasley:2006us} using string worldsheet methods seems to point in that direction. It is found that a string wrapping a two-cycle in the Calabi-Yau has both bosonic and fermionic collective coordinates on $AdS_2\times S^2$.}, where $g(q)$ is the worldsheet instanton partition function. For example, for $K3$ one has $g(q)=\eta^{24}(q)$. Assembling all the pieces in the formula above we obtain
\begin{eqnarray}\label{non-perturbative entropy}
d(q_I,p^I)=&&\sum_{\substack{\beta,\bar{\beta}\,\geq 0\\ \vartheta(p,\beta,\bar{\beta})>0}}p^1\left(\frac{P^2}{2}+2n_p-(\beta+\bar{\beta})\right)\times \mathcal{D}(\beta) \mathcal{D}(\bar{\beta}) \nonumber\\
&&\times \frac{1}{\sqrt{\text{det}(D_{ab})}}\int_{\epsilon'-i\infty}^{\epsilon'+i\infty} \frac{dR}{R^{1+b_2/2}} \exp{\left[-\pi \frac{\hat{q}_0}{R} +4\pi p^1\left(\frac{(P^2/2-(\beta-\bar{\beta}))^2}{2P^2}-\bar{\beta}+n_p\right) R\right]}.\nonumber\\
{}
\end{eqnarray}
Furthermore, the Bessel contour forces the condition
\begin{equation}
\frac{(P^2/2-(\beta-\bar{\beta}))^2}{2P^2}-\bar{\beta}+n_p>0,
\end{equation}
otherwise the integral vanishes. That is, if the condition is non-positive we can close the contour on the right hand side by an infinite semicircle, because the integral along this arc vanishes. Since there is no pole inside the contour, the Bessel integral must vanish too.
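To make the structure of (\ref{non-perturbative entropy}) and of this positivity condition concrete, the following Python sketch tabulates the finite flux sum with the contour integral replaced by its Bessel-function value; the charges and the measures $\mathcal{D}(\beta)$ below are placeholders (not actual $1/\eta^{24}$ coefficients), and the $1/\sqrt{\det D_{ab}}$ prefactor is dropped:
\begin{verbatim}
import numpy as np
from scipy.special import iv

def bessel_int(nu, a, b):
    # (1/2 pi i) x the vertical-contour integral of R^(-nu-1) e^(aR + b/R)
    return (a/b)**(nu/2.0)*iv(nu, 2.0*np.sqrt(a*b))

P2, p1, n_p, b2, q0hat = 6, 1, 1, 23, -2.0   # illustrative charges only
D = {0: 1.0, 1: 24.0}                        # placeholder measures D(beta)

total = 0.0
for beta, Db in D.items():
    for bbar, Dbb in D.items():
        size = p1*(P2/2.0 + 2*n_p - (beta + bbar))        # ~ vartheta
        arg = (P2/2.0 - (beta - bbar))**2/(2.0*P2) - bbar + n_p
        if size <= 0 or arg <= 0:
            continue                          # term drops out of the sum
        total += size*Db*Dbb*bessel_int(b2/2.0,
                                        4.0*np.pi*p1*arg, -np.pi*q0hat)
print(total)
\end{verbatim}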
Putting all factors together, we find that the macroscopic answer (\ref{non-perturbative entropy}) is in perfect agreement with the microscopic answer (\ref{finite sum Bessels}), up to a phase involving the charges and an overall $p^1$ dependence. The phase $\exp{[\pi i (r-2s)l/m]}$ is a Kloosterman sum. In related work \cite{Dabholkar:2014ema}, Kloosterman sums were shown to be related to a sum over flat connections in Chern-Simons theory on a Dehn-filled solid torus. It would be important to check whether the missing phase arises in the same way \cite{Gomes17-2}. The correct $p^1$ dependence can be incorporated by taking into account a redefinition of the Chern-Simons couplings \cite{Gomes:2015xcf}.
We end this section by making a comment about the mock-modular nature of the $\mathcal{N}=4$ microscopic answer \cite{Dabholkar:2012nd}. The microscopic degeneracy of one-quarter BPS dyons still admits a Rademacher expansion, but with some important differences, which result essentially from the meromorphicity of the Jacobi form. As expected from the usual Rademacher expansion, one finds a sum over Bessel functions of index $-\omega+3/2$, with $\omega$ the weight of the meromorphic Jacobi form \cite{Murthy:2015zzy}; but due to its mock-modular nature, the degeneracy formula contains in addition Bessel functions of index $-\omega+2$ and an integral over a Bessel function of index $-\omega+5/2$ \cite{Ferrari:2017msn} (there, $\omega=-10$ was used). The argument of the unusual Bessel functions lies precisely at the lower boundary of the polarity.
We can see that our construction breaks down precisely when such deviations from the usual Rademacher expansion are expected. This happens when the geometry attains its minimal possible size, that is, when
\begin{equation}
\beta+\bar{\beta}=\frac{P^2}{2}
\end{equation}
and the spectral flow invariant combination that appears in the argument of the Bessel function becomes zero, that is,
\begin{equation}
\frac{(P^2/2-(\beta-\bar{\beta}))^2}{2P^2}-\bar{\beta}=0.
\end{equation}
This last condition signals a breakdown of the Rademacher expansion for holomorphic Jacobi forms. The solution to both equations is
\begin{equation}
\beta=\frac{P^2}{2},\;\bar{\beta}=0 \quad\text{or}\quad \beta=0,\;\bar{\beta}=\frac{P^2}{2},
\end{equation}
and the contribution to the full partition function is
\begin{equation}\label{mock Bessel}
4p^1 \mathcal{D}(0) \mathcal{D}(P^2/2) \frac{1}{\sqrt{\text{det}(D_{ab})}}\int_{\epsilon'-i\infty}^{\epsilon'+i\infty} \frac{dR}{R^{1+b_2/2}} \exp{\left[-\pi \frac{\hat{q}_0}{R} +4\pi p^1 n_p R\right]}\ .
\end{equation}
Even though the index of this Bessel function does not match the prediction coming from the mixed Rademacher expansion \cite{Ferrari:2017msn}, we can easily check that the argument of the Bessel function (\ref{mock Bessel}) is in perfect agreement (compare with equation 3.42 of \cite{Ferrari:2017msn}).
The origin of the mock-modular nature may be related to the fact that in the canonical ensemble there is a configuration whose growth competes with that of (\ref{mock Bessel}). This happens when $P^2=0$ and $\beta=\bar{\beta}=0$. In this case $\text{det}(D_{ab})$ in (\ref{mock Bessel}) vanishes, and so we need to reconsider the computation of the one-loop determinants. Following the Chern-Simons formulation, we see that the $U(1)_L$ factor has level $k_L=P^2/2=0$. This is consistent with the fact that at the level of the renormalized action the coefficient of $(\phi^1)^2$ is zero. We therefore have to truncate the integration over $\phi^1$ to a finite interval. Since the combination $\tau=(\phi^1+ip^1)/\phi^0$ transforms under electric-magnetic duality as $\tau\rightarrow \tau+1$, it is natural to truncate $\phi^1$ to the interval $[0,\phi^0]$. Integration over $\phi^1$ then gives a factor of $\phi^0$ in the measure, which corresponds to an additional power of $(\phi^0)^{1/2}=R^{-1/2}$ relative to the Bessel function (\ref{mock Bessel}). For the $K3\times T^2$ compactification we have $b_2=23$, and so we find a Bessel function with the same argument as (\ref{mock Bessel}) but with index $12$.
\subsection{4D quantum effective action and the topological string}\label{sec effective action}
In this section, we make a comparison with four dimensional string theory. To do so, we consider the limit in which the M-theory circle radius becomes parametrically smaller than the size of $AdS_2\times S^2$. This amounts to taking $1/\phi^0\ll 1$, which is equivalent to the regime of weak topological string coupling. We show that in this limit the number of Bessel functions grows exponentially, allowing us to perform the sum by a saddle-point approximation. As a consequence, we can expand the entropy $\ln d(q,p)$ perturbatively in powers of $g_{\text{top}}=1/\phi^0$. We can then identify some of the terms in the expansion with the (holomorphic part of the) topological string free energies, as expected from the OSV proposal \cite{Ooguri:2004zv}. Nevertheless, we will encounter other terms, such as logarithmic corrections, which signal a departure from the Wilsonian action point of view. This may be regarded as a way to derive the non-holomorphic corrections proposed in \cite{LopesCardoso:2006bg,Cardoso:2008fr,Cardoso:2010gc,Cardoso:2012nh,Cardoso:2014kwa}.
In the regime of charges
\begin{equation}\label{scale charges}
q_I\rightarrow \lambda q_I,\;p^I\rightarrow \lambda p^I,\;\lambda\gg 1,
\end{equation}
the attractor values scale as
\begin{equation}\label{scale phi}
\phi^0\rightarrow \lambda \phi^0,\;\phi^a\rightarrow \lambda\phi^a,
\end{equation}
and thus the M-theory circle, which is proportional to $1/\phi^0$, becomes very small for $\lambda\gg 1$. Moreover, in this limit we keep fixed the K\"{a}hler class of the Calabi-Yau, which is proportional to $p/\phi^0$ \cite{Beasley:2006us}, while taking the size $L^2\propto p^3$ of $AdS_2\times S^2$ to large values.
However, the scaling limits (\ref{scale charges}) and (\ref{scale phi}) are valid only for the solution without fluxes. For $p^I\gg 1$ the number of fluxes increases because of the condition $p^3/6+c_2\cdot p/12-(\beta+\bar{\beta})\cdot p>0$. At order $\beta,\bar{\beta}\sim \lambda^2$ we can expect a breakdown of the scaling limits, because the factors $\mathcal{D}(\beta\sim \lambda^2)$ in (\ref{non-perturbative entropy}) have exponential growth. In the case of $K3$, for example, we have
\begin{equation}
\mathcal{D}(\beta)\simeq e^{4\pi\sqrt{\beta}},\;\beta\gg 1.
\end{equation}
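This growth can be checked directly on the counting function: the coefficients of $\prod_{n\geq 1}(1-q^n)^{-24}$ (the $1/\eta^{24}$ series with the overall $q^{-1}$ prefactor stripped off) have a local growth rate approaching $\frac{d}{d\beta}\,4\pi\sqrt{\beta}=2\pi/\sqrt{\beta}$. A minimal Python sketch:
\begin{verbatim}
import numpy as np

M = 200
c = np.zeros(M + 1); c[0] = 1.0
for n in range(1, M + 1):            # multiply the series by (1-q^n)^(-24)
    for _ in range(24):
        for k in range(n, M + 1):    # in-place division by (1 - q^n)
            c[k] += c[k - n]
# local growth rate vs. the Cardy-like estimate 2*pi/sqrt(M)
print(np.log(c[M]/c[M - 1]), 2.0*np.pi/np.sqrt(M))
\end{verbatim}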
We can use this to obtain an approximate formula for the degeneracy (\ref{non-perturbative entropy}) in the limit (\ref{scale charges}). Using a saddle point approximation for each of the Bessel functions, we obtain
\begin{equation}
d(q_I,p^I)\sim \sum_{\beta,\bar{\beta}\gg 1}^{\lambda^2} \exp{\left[4\pi\sqrt{\beta}+4\pi\sqrt{\bar{\beta}}+4\pi\sqrt{p^1|\hat{q}_0|\left(\frac{(P^2/2-(\beta-\bar{\beta}))^2}{2P^2}-\bar{\beta}+n_p\right)}\right]},\;p\gg 1.
\end{equation}
Next, we approximate the sum over fluxes by a continuum, which allows us to make a new saddle-point approximation with respect to $\beta,\bar{\beta}$. After some algebra, we find that the saddle is at $\beta=\bar{\beta}$, with $\beta$ finite and of order $\sim P^2/(p^1\hat{q}_0)\sim \mathcal{O}(\lambda^{0})$. Therefore, the saddle-point approximation is not consistent with $\beta,\bar{\beta}\gg 1$, and the leading contribution must instead come from small values of $\beta,\bar{\beta}$. In this case the dominant contribution comes from the term with $\beta=\bar{\beta}=0$, which is the Bessel function of maximal polarity.
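The location of the saddle can also be checked numerically: maximizing the exponent above over $(\beta,\bar{\beta})$ for increasingly large charges (with a schematic scaling chosen only for illustration), the maximizer stays of order one:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def neg_exponent(x, P2, p1, q0hat, n_p=1):
    b, bb = x
    if b < 0 or bb < 0:
        return np.inf
    arg = (P2/2.0 - (b - bb))**2/(2.0*P2) - bb + n_p
    if arg <= 0:
        return np.inf
    return -4.0*np.pi*(np.sqrt(b) + np.sqrt(bb)
                       + np.sqrt(p1*abs(q0hat)*arg))

for lam in (1, 4, 16):
    P2, p1, q0hat = 6*lam**2, lam, -2.0*lam   # schematic scaling with lambda
    res = minimize(neg_exponent, x0=[1.0, 1.0], args=(P2, p1, q0hat),
                   method='Nelder-Mead')
    print(lam, res.x)    # the saddle (beta, beta_bar) remains O(1)
\end{verbatim}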
Therefore, for large $\lambda$ we approximate
\begin{eqnarray}
&&d(q_I,p^I)\sim\nonumber\\ &&\sim \sum_{\beta,\bar{\beta}\lesssim \lambda^2}\mathcal{D}(\beta)\mathcal{D}(\bar{\beta})e^{\left[-\pi \hat{q}_0\phi^{*0}+\frac{\pi p^1 P^2}{2\phi^{*0}}+\frac{\pi P^2}{2p^1\phi^{*0}}(\phi^{*1}+q^1\phi^{*0})^2-2\pi \frac{p^1}{\phi^{*0}}(\beta+\bar{\beta})+2\pi i(\beta-\bar{\beta})\frac{\phi^{*1}}{\phi^{*0}}\right]}\nonumber\\
&&=e^{\left[-\pi \hat{q}_0\phi^{*0}+\frac{\pi p^1 P^2}{2\phi^{*0}}+\frac{\pi P^2}{2p^1\phi^{*0}}(\phi^{*1}+q^1\phi^{*0})^2-\ln\left(\frac{p^1}{\phi^{*0}}\right)^{12}-\ln\Big|\eta\left(\frac{\phi^{*1}+ip^1}{\phi^{*0}}\right)\Big|^{48}\right]}+\mathcal{O}(1/\lambda^2),\nonumber\\
{}\label{on-shell entropy function}
\end{eqnarray}
where $\phi^{*0}$ and $\phi^{*1}$ are the on-shell values determined from the Bessel function of maximal polarity. To arrive at the formula above we approximated the subleading Bessel contributions by their saddle-point values, and then expanded their growth formula for $\beta,\bar{\beta}\ll p^3$. The measure factor $\sim P^2-4(\beta+\bar{\beta})$ in (\ref{non-perturbative entropy}) is offset by the saddle-point Gaussian integrals, which give an overall factor of $1/P^2$; hence in the limit $P^2\sim \lambda^2$ the two are of the same order. In the second line, we have extended the range of $\beta$ to infinity, which is justified for $\lambda\gg 1$. This allowed us to resum the contributions of $\mathcal{D}(\beta)$ into their generating function $1/\eta^{24}$. Similarly, we could have repeated the same exercise for CHL models, with $\eta^{24}$ replaced by $g$, the worldsheet instanton partition function.
From the four dimensional point of view, the exponential in (\ref{on-shell entropy function}) can be interpreted as the renormalized 1PI effective action computed on the near-horizon geometry $AdS_2\times S^2$ \cite{Sen:2008vm,Sen:2007qy}. In particular, the expression
\begin{equation}\label{2der leading}
-\pi \hat{q}_0\phi^{*0}+\frac{\pi p^1 P^2}{2\phi^{*0}}+\frac{\pi P^2}{2p^1\phi^{*0}}(\phi^{*1}+q^1\phi^{*0})^2,
\end{equation}
can be identified with the two derivative Lagrangian contribution, which gives the dominant contribution to the entropy since it scales as $\lambda^2$. On the other hand, the logarithmic part
\begin{equation}\label{lambda^zero}
-\ln\left(\frac{p^1}{\phi^{*0}}\right)^{12}-\ln\Big|\eta\left(\frac{\phi^{*1}+ip^1}{\phi^{*0}}\right)\Big|^{48},
\end{equation}
grows as $\lambda^{0}$. It can be computed by evaluating the contribution of the Gauss-Bonnet $R^2$ corrections on $AdS_2\times S^2$ \cite{Sen:2007qy}. Furthermore, it is modular invariant in the variable $\tau=\frac{\phi^{*1}+ip^1}{\phi^{*0}}$, as expected from the four dimensional electric-magnetic duality of string theory on $K3\times T^2$. Finally, there is no term of order $\ln\lambda$, which is in agreement with the logarithmic correction computed in the four dimensional $\mathcal{N}=4$ supergravity theory \cite{Banerjee:2010qc,Banerjee:2011jp}.
From the topological string point of view we can identify the different corrections as an expansion in $g_{\text{top}}\sim 1/\lambda\ll 1$. For example, the leading contribution (\ref{2der leading}) corresponds to the real part of the tree level free energy $F_0(t)$ multiplied by $1/g_{\text{top}}^2$, while the term of order $\lambda^0$ (\ref{lambda^zero}) can be identified with the one-loop contribution, with the complexified K\"{a}hler class $t$ being $\tau$. Such an expansion is precisely the conjectured OSV formula \cite{Ooguri:2004zv}. However, we must stress that further corrections can carry the imprint of the $AdS_2\times S^2$ physics and deviate from the topological string free energies that we obtain from the $\mathbb{R}^4$ computation.
\section{Quantum Foam and Non-perturbative topological string}\label{sec Foam}
In this work, we have argued that the path integral of M-theory should include the contribution of singular gauge field configurations. Their effect was to produce a finite renormalization of the parameter $c_a$ of the five dimensional Lagrangian that parametrizes the mixed gauge-gravitational Chern-Simons terms. If this idea is correct, this would imply that one is effectively summing over different topologies of the internal manifold, since the parameter $c_a$ descends from the second Chern-class $c_2(X)$ of the Calabi-Yau $X$. In a way, this is reminiscent of the idea of quantum foam and melting crystals discussed in \cite{Iqbal:2003ds}, which we briefly review now.
The goal of \cite{Iqbal:2003ds} was to provide a non-perturbative definition of the topological string. The example under discussion was the A-model. From the target space perspective, the A-model can be described by a theory known as K\"{a}hler gravity \cite{Bershadsky:1994sr}, and the classical solutions of this theory are given by K\"{a}hler forms $k$, with action proportional to the volume form
\begin{equation}
S=\frac{1}{g^2\,3!}\int_{X} k\wedge k\wedge k,
\end{equation}
with $g$ the topological string coupling constant. We can consider higher derivatives in this action by adding the term $\frac{1}{24}\int k\wedge c_2(X)$.
In the quantum problem we consider fluctuations of the macroscopic solution $k_0$ as
\begin{equation}
k=k_0+g\,F,
\end{equation}
with $F$ the fluctuation. Since it obeys $dF=0$ due to the K\"{a}hler condition, $F$ can be seen as the field strength of a gauge field. In addition, we want to preserve the macroscopic K\"{a}hler form, that is, we need $\int_{\alpha} F=0$
for any two-cycle $\alpha\in H_2(X,\mathbb{Z})$. As explained before, if we require $F$ to have non-trivial higher Chern-classes, then it must be the field strength of a singular gauge connection or, in the appropriate bundle generalization, an ideal sheaf. It is argued in \cite{Iqbal:2003ds} that these singular fluctuations lead to a foamy description of quantum gravity, characterized by wild changes of the geometry and the topology. Instead of dealing directly with the quantum gravity picture, which may lead to puzzles related to black hole formation, they propose that the same physics should be described in terms of the topologically twisted maximally supersymmetric $U(1)$ theory, that is, the $\D6$ brane worldvolume theory.
After some algebra, the quantum action for $k=k_0+gF$ becomes
\begin{equation}\label{Kahler grav action}
S_{\text{Quantum}}=\frac{1}{g^2\,3!}\int_{X} k_0\wedge k_0\wedge k_0+\int k_0\wedge c_2(X)+\frac{1}{2}\int k_0\wedge F\wedge F+\frac{g}{3!}\int F\wedge F\wedge F.
\end{equation}
We have included the effect of higher derivative corrections proportional to $c_2\cdot k_0$. Using localization, they show that the partition function of the $\D6$ theory on $S^1\times X$, with periodic boundary conditions on $S^1$ for the fermions, reproduces the gravity path integral.
Our problem is slightly different because in this case we have a $\D6-\aD6$ configuration, but we can easily mimic many of the quantum foam features. We thus expect fluctuations $F$ and $\overline{F}$ of the K\"{a}hler form, coming from each of the two centers. The total quantum action receives contributions from both D-branes, that is, $S_{\text{Quantum}}=S_{\D6}+S_{\overline{\D} 6}$ with
\begin{eqnarray}
&&S_{\D6}=\frac{1}{g^2\,3!}k_0^3+c_2\cdot k_0-\beta\cdot k_0-g n,\\
&&S_{\overline{\D} 6}=\frac{1}{g^2\,3!}k_0^3+c_2\cdot k_0-\bar{\beta}\cdot k_0-g \bar{n},
\end{eqnarray}
where we have defined the second and third Chern-classes of the bundles $F,\,\overline{F}$ by $-\beta,-\bar{\beta}$ and $-n,-\bar{n}$ respectively. We have $g=1/\phi^0$ and $k_0^a=g p^a/2$, so that the total K\"{a}hler class is $2k_0=p/\phi^0$, in agreement with the attractor geometry. Note that the quantum K\"{a}hler class $k_0+gF$ is equivalent to the flux $p/2+F$ that we turn on in M-theory, as discussed in section \S\ref{sec Ideal sheaves}. From the K\"{a}hler gravity point of view it becomes clear that the effect of the M-theory fluctuations parametrized by $\beta,\bar{\beta}$ is to renormalize the second Chern-class $c_2(X)$, as we see from (\ref{Kahler grav action}). The final result is
\begin{equation}\label{quantum action Sq}
S_{\text{Quantum}}=\frac{p^3+c_2\cdot p}{24\phi^0}-\frac{(\beta+\bar{\beta})\cdot p}{2\phi^0}-\frac{n+\bar{n}}{\phi^0}.
\end{equation}
For $n,\bar{n}=0$, we recognize the action $2\pi S_{\text{Quantum}}$ as the on-shell renormalized physical action on the near-horizon geometry (\ref{Ren 5D}), before including the electric charges and the effect of the large gauge transformation.
We thus see that there are striking similarities between our problem and the quantum foam description, with the topological string playing a special role. It would be important to make more precise the connection between quantum black hole entropy, K\"{a}hler gravity and the $\text{D} 6$ brane theory.
\section{Discussion and Conclusion}
In this work we have discussed a proposal for explaining the non-perturbative corrections to black hole entropy, related to the polar subleading Bessel contributions in the Rademacher expansion. In summary, the main results are:
\begin{itemize}
\item \emph{New family of saddle geometries and the stringy exclusion principle}: we have argued that the path integral of M-theory on the near-horizon geometry of the black hole receives the contribution of a new family of saddle geometries, whose contribution is related to the polar terms in the Rademacher expansion. We discussed the possibility that these saddles arise after turning on ideal sheaf fluxes on the Calabi-Yau, which in turn induce corrections to the parameters that define the effective five dimensional Lagrangian. This picture has an alternative description in terms of M2 and anti-M2 branes wrapping holomorphic cycles on the Calabi-Yau. However, in previous works such as \cite{Simons:2004nm,Gaiotto:2006ns} the backreaction of the M2 branes on the geometry is not taken into account. Our proposal, instead, takes backreaction into account, which allows us to solve many puzzles. In fact, one can show that effective field theory on a fixed background is a good approximation at very large central charge, as expected. But at finite central charge, which is the main focus of this work, backreaction becomes important and we find that only a finite number of geometries contributes. The bound on the number of geometries is precisely the bound imposed by the stringy exclusion principle \cite{Maldacena:1998bw}.
\item \emph{5D Supersymmetric localization}: as an intermediate problem, we have considered supersymmetric localization at the level of five dimensional supergravity on the $AdS_2\times S^1\times S^2$ background, generalizing the results found in \cite{Dabholkar:2010uh}. Two main results stand out in this computation. The first is that the solutions of the localization equations are an uplift of the four dimensional solutions found in \cite{Dabholkar:2010uh}. The second, and most important, is that the five dimensional supergravity action computed on the localization locus, which includes the contribution from the supersymmetrization of the gauge-gravitational Chern-Simons term, gives precisely the renormalized four dimensional result using the classical holomorphic prepotential (the one that does not contain the contribution from instantons). This result elevates the non-renormalization theorem, pointed out recently in \cite{Butter:2014iwa}, to the quantum level. The finite dimensional integral obtained using localization is exactly the modified Bessel function that appears in the Rademacher expansion, including the exact spectrum of the polar terms.
\item \emph{Quantum effective action and the topological string}: from our formulas, it becomes clear that the effect of backreaction is relevant at small central charge, because it allows only a small number of Bessel contributions, whereas at large central charge the number of Bessel functions grows exponentially, which allows for a saddle-point approximation of the sum over Bessels. This limit is equivalent to taking the M-theory circle parametrically much smaller than the size of $AdS_2$. We find that the final result for the degeneracy, after integrating out the contribution of the fluxes, matches the four dimensional quantum effective action computed on the near-horizon geometry of the black hole, in agreement with previous results. From this effective action we can read off the topological string contributions as an expansion in powers of $g_{\text{top}}=1/\phi^0 \ll 1$, in agreement with the OSV proposal \cite{Ooguri:2004zv}.
\end{itemize}
There are many features in our construction that are similar to the work of Denef and Moore \cite{Denef:2007vg}. Many of those have been discussed in previous sections. The differences, though, are essential for deriving an exact formula for the entropy of four dimensional $\mathcal{N}=2$ black holes. In the following we discuss some of them. First of all, we consider a path integral formulation directly in the near-horizon geometry, without relying on the properties of the dual CFT partition function. Furthermore, our construction avoids the enigmatic multi-center configurations discussed in \cite{Denef:2007vg}. The reason is that localization only allows for geometries of the type $AdS_2\times S^1\times S^2$, whereas the decoupling limit of the enigmatic configuration, discussed in \cite{deBoer:2008fk}, contains a black hole localized on the sphere, which is thus physically very different. Without entering into the details of a derivation of $\mathcal{N}=2$ black hole entropy, which we leave for future work \cite{Gomes17-1}, we can already see a few advantages of our construction in comparison with \cite{Denef:2007vg}. First, our degeneracy formula is always finite, without the need to include any cutoff. This solves many issues related to the original OSV proposal \cite{Ooguri:2004zv}: the topological string partition function is defined formally in the form of the Gopakumar-Vafa infinite products, which in general do not converge. In our work, finiteness of the degeneracy follows essentially from the microcanonical ensemble and the bound on the number of geometries. Another key aspect of our construction is that it can be extended to the regime of weak topological string coupling, in contrast with Denef and Moore's work. This is possible because the localization computation provides a result that is valid for any value of the charges, which obviously includes the regime $g_{\text{top}}=1/\phi^0\ll 1$, and moreover is finite.
Finally, we comment on three ongoing projects. These apply the ideas proposed in this work and provide further support for our claims by testing our proposal against microscopic and macroscopic computations. These projects cover four dimensional black holes in Calabi-Yau compactifications, including a study of the logarithmic corrections following \cite{Belin:2016knb}, and the study of number theoretic properties of the black hole degeneracy related to Kloosterman sums and U-duality invariants. Succinctly, these projects can be summarized as:
\begin{itemize}
\item \emph{$\mathcal{N}=2$ black hole entropy }\cite{Gomes17-1}: we will extend the present results to black holes in $\mathcal{N}=2$ compactifications \cite{Maldacena:1997de}. In particular, the goal is to derive an exact formula for the entropy. There are two important requirements. First, the entropy formula must agree with Denef and Moore's formula \cite{Denef:2007vg} in the regime of strong topological string coupling. Second, it should reproduce the logarithmic corrections computed using the quantum entropy formalism in $\mathcal{N}=2$ supergravity \cite{Keeler:2014bra,Sen:2011ba}, which probe the regime of weak topological string coupling.
\item \emph{Generalized Kloosterman Sums from M2-branes }\cite{Gomes17-2} : in this project we will consider the contribution of smooth $AdS_2\times S^1\times S^2/\mathbb{Z}_c$ orbifolds to the path integral in $\mathcal{N}=4$ compactifications, following previous work \cite{Dabholkar:2014ema}. The goal is to reproduce the structure of the Rademacher expansion, and in particular, to derive expressions for the Kloosterman sums that can be compared with arbitrary level (mock) Jacobi forms \cite{Ferrari:2017msn}. These orbifolds result from an $SL(2,\mathbb{Z})$ Dehn filling of the bulk solid torus and are thus topology changing. We will follow \cite{Dabholkar:2014ema} and compute the contribution of flat connections to the Chern-Simons path integral.
\item \emph{Quantum entropy, Kloosterman sums and U-duality invariance }\cite{Gomes17-3}: we will consider the contribution of the Eguchi-Hanson space with GH charges $q$ and $-q$ (\ref{Eguchi-Hanson}), which are $AdS_2\times S^1\times S^2$ geometries with $\mathbb{Z}_q$ orbifold singularities. We argue that these are the geometries corresponding to a configuration of $q\,\D6$ and $q\,\aD6$ branes. We will study the dependence of the exact entropy on arithmetic invariants and compare with microscopic formulas for both the $\mathcal{N}=8$ \cite{Sen:2008sp} and $\mathcal{N}=4$ \cite{Banerjee:2008pu,Dabholkar:2008zy} examples. In a sense, we want to extend the results of \cite{Sen:2009vz,Sen:2009gy} to the quantum theory.
\end{itemize}
\subsection*{Acknowledgments}
We would like to thank Jan de Boer, Frederik Denef, Alejandra Castro, Bernard de Wit and Atish Dabholkar for discussions on related topics and for comments on the draft. This work is part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
\bibliographystyle{JHEP}
|
1,941,325,220,439 | arxiv | \section{Introduction}
A thermodynamic system contains a huge number $N$ of
interacting particles, with $N$ typically
of the order of $10^{23}$ or larger. The microscopic configurations of such a
system change with time in a complicated and stochastic manner under the
joint action of internal forces and perturbations from the environment.
At the macroscopic level the
collective properties of the system are, on the other hand, essentially
time-invariant and can be described
by only a few phenomenological parameters such as the mean energy density and
the specific heat.
Nevertheless, at certain values of the temperature $T$ or other environmental control
parameters, the macroscopic behavior of the system may also
change abruptly and qualitatively.
Such phase-transition phenomena, being a major research branch of
statistical mechanics for many years,
are deeply connected with the breakdown of the ergodicity property of
the system \cite{Huang-1987,Ruelle-1989}.
For a large class of complex systems with quenched disorder (heterogeneity) and
frustrations in the interactions among particles as best represented by
spin-glasses \cite{Binder-Young-1986}, when ergodicity breaks down, exponentially
many thermodynamic states will form, each of which corresponds to one ergodic
sub-space of the whole configurational space of the system \cite{Mezard-etal-1987}.
For these systems, it is widely believed (see, e.g.,
Refs.~\cite{Mezard-etal-1987,Rivoire-etal-2004,Castellani-Cavagna-2005,Parisi-2006}) that
the {\em equilibrium} properties of the system are determined by the {\em ground}
thermodynamic states, which have the globally minimal free-energy density
$f_{\rm min}$, and that the distribution of equilibrium
free-energies follows an exponential
law. The {\em excited} thermodynamic states
of free-energy densities $f > f_{\rm min}$
are regarded as irrelevant as far as equilibrium properties are
concerned, although they dominate the out-of-equilibrium dynamics of
the system (see, e.g.,
\cite{Monasson-1995,Franz-etal-2001,Montanari-RicciTersenghi-2004,Horner-2007}).
For example, a disordered $p$-spin interaction Ising model
($p\geq 3$) \cite{Gross-Mezard-1984,Gardner-1985} is known to have an
ergodic--non-ergodic transition at a temperature $T_{\rm d}$ (the so-called
dynamic transition temperature), but it is expected that the equilibrium
spin-glass phase transition will occur only at a lower temperature $T_{\rm s}$
(the static transition temperature). For $T_{\rm d} > T > T_{\rm s}$,
although there are exponentially many thermodynamic states,
all the relevant configurations for the equilibrium properties
are still assumed to reside in the same ergodic
sub-space of the whole configuration space.
In this paper, however, we argue that these
statements may not necessarily be
correct. Through a general theoretical analysis, we show that the
equilibrium free-energy densities of an ergodicity-broken system may
actually follow
a Gaussian distribution with a mean value larger than $f_{\rm min}$.
Then the equilibrium behavior of the system will be determined by a group of
excited thermodynamic states rather than by the ground thermodynamic states.
Our statement is further supported by analytical and simulation results
on an exactly solvable model system.
This work clarifies that the excited thermodynamic states of a
system of broken ergodicity are important not only to
the dynamical (non-equilibrium) properties of the system
but also to its equilibrium properties.
The theoretical analysis of this paper
may help us to understand more deeply the equilibrium (static) properties
of spin-glasses and
other complex systems.
When the equilibrium free-energies of an ergodicity-broken
system follow a Gaussian distribution, the ground thermodynamic states of the
system may not be reached by any dynamical process, no matter how long one waits
or which specific cooling schedule is used. In other words, equilibrium studies
based on the Gibbs measure will give a dynamics-independent lower bound on the
reachable free-energy density. We hope this work will shed light on further
studies of various
fascinating dynamic behaviors of complex systems
\cite{Monasson-1995,Franz-etal-2001,Montanari-RicciTersenghi-2004,Horner-2007,Lunkenheimer-etal-2000}.
\section{General theoretical analysis}
The configuration of a general classical system of
$N$ particles can be
denoted by $\vec{{\bf \sigma}}\equiv\{ \sigma_1, \sigma_2, \ldots, \sigma_N \}$, where the configurational
variable $\sigma_i$ of a particle need not be discrete or
scalar. Each configuration has an energy ${\cal H}(\vec{\bf \sigma})$.
Starting from an initial configuration, the system evolves with
time and forms a stochastic trajectory in the
whole configurational space $\Gamma$
of the system. At sufficiently high temperatures the system is
ergodic and its trajectory will visit all the (relevant)
configurations in $\Gamma$ if waited
long enough. More precisely we say a system is ergodic if
two trajectories evolved
from a pair of randomly chosen initial configurations will, with
probability unity, intersect with each other. In this ergodic situation
the total partition function of the system is expressed as
\begin{equation}
\label{eq:Z-ergodic}
Z(\beta)= \sum\limits_{\vec{\bf \sigma} \in \Gamma} \exp\bigl(-\beta {\cal H}(\vec{\bf \sigma}) \bigr) \ ,
\end{equation}
where $\beta\equiv 1/T$ is the inverse temperature. When the system reaches equilibrium, its
free energy is minimized, but its total internal energy still fluctuates
with time. If many measurements are performed on
the internal energy, one will realize that the measured energy values
follow a Gaussian distribution \cite{Huang-1987,Ruelle-1989}
\begin{equation}
\label{eq:energy-gaussian}
\rho(E)= \sqrt{\frac{ \beta^2 }{2 \pi C_E(\beta)}}
\exp\Biggl( -\frac{ \beta^2 }{2 C_E(\beta)} (E- \langle E \rangle )^2 \Biggr) \ ,
\end{equation}
where $\langle E \rangle$ and $C_E(\beta)$ are, respectively, the
mean total energy and the specific heat of the system. Both $\langle E \rangle$ and
$C_E(\beta)$ are proportional to $N$.
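As a toy illustration of Eq.~(\ref{eq:energy-gaussian}), consider $N$ non-interacting spins in a field $h$ (a stand-in for a generic ergodic system), for which equilibrium energies can be sampled exactly. The following Python sketch compares the sample mean and variance with $\langle E\rangle=-Nh\tanh(\beta h)$ and $C_E(\beta)=N(\beta h)^2/\cosh^2(\beta h)$:
\begin{verbatim}
import numpy as np

N, beta, h = 1000, 1.0, 1.0
p_up = np.exp(beta*h)/(2.0*np.cosh(beta*h))    # P(sigma_i = +1)
k = np.random.binomial(N, p_up, size=200_000)  # number of up spins
E = -h*(2*k - N)                               # sampled total energies

print(E.mean(), -N*h*np.tanh(beta*h))                   # <E>
print(beta**2*E.var(), N*(beta*h/np.cosh(beta*h))**2)   # C_E(beta)
\end{verbatim}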
At low temperatures, however, ergodicity may no longer hold.
As the environmental perturbations become weak,
the system may be unable
to overcome the large free energy barriers between different
regions of the configurational space $\Gamma$; it is then trapped in
one of many ergodic sub-spaces $\Gamma_\alpha$ of $\Gamma$.
In this ergodicity-broken case, a sub-space $\Gamma_\alpha$
is referred to as a thermodynamic state of the system, which has an
equilibrium free energy $F_\alpha$ as given by
\begin{equation}
\label{eq:F-alpha}
F_\alpha(\beta) = - \beta^{-1}
\log\Bigl( \sum{_{\vec{\bf \sigma} \in \Gamma_\alpha}}
e^{-\beta {\cal H}({\bf \sigma})} \Bigr)\ .
\end{equation}
The energy distribution Eq.~(\ref{eq:energy-gaussian}) still holds in each
thermodynamic state $\alpha$, but now both $\langle E \rangle$ and $C_E(\beta)$
depend on the thermodynamic state $\alpha$.
When there are more than one thermodynamic state, the total partition function
Eq.~(\ref{eq:Z-ergodic}) can be re-expressed as a summation over all the thermodynamic
states,
\begin{equation}
\label{eq:Z-nonergodic}
Z(\beta)= \sum{_{\alpha}} \exp\bigl(-\beta F_\alpha(\beta) \bigr) \ ,
\end{equation}
with each thermodynamic state $\alpha$ contributing a term $\exp(-\beta F_\alpha)$.
Equation (\ref{eq:Z-nonergodic}) contains all the information about the
equilibrium properties of an
ergodicity-broken system.
It has the same form as Eq.~(\ref{eq:Z-ergodic}), but with the
configurations $\vec{\bf \sigma}$ being replaced by the thermodynamic
states $\alpha$.
This equation indicates that the contribution of a
thermodynamic state $\alpha$ to the equilibrium properties of the system is
proportional to $\exp(-\beta F_\alpha(\beta))$. Although such a
Gibbs measure arguably does not hold in an out-of-equilibrium
dynamics, it is commonly used in equilibrium studies. In this work we
also stick to this Gibbs measure.
To further understand this Gibbs measure, in this paragraph we try to
give an interpretation based on a gedanken dynamical process of
heating and annealing (but we emphasize that the results of this paper are
independent of this interpretation).
For the
system to escape a thermodynamic state $\alpha$, a large external perturbation
has to be applied. This might be achieved by first heating the system
and then cooling it \cite{Zhou-2007a,Zhou-2007b}.
As the system is heated to a high temperature, it becomes ergodic and memory
about its prior history is lost. After the system is cooled down
slowly to its original low temperature, it may reach a different thermodynamic
state $\alpha^\prime$ at the end of this process. (During the annealing
process of this gedanken experiment, the system
may be driven by a global and parallel dynamical rule.)
All the thermodynamic states of the system at a low temperature $T$
will therefore be explored if one repeats extremely many times
this heating-annealing experiment. With this external assistance, the system
again becomes ergodic at the level of thermodynamic states.
Since the prior history of the system is completely destroyed in
the heating-annealing experiment, the frequency of the system
reaching a thermodynamic state
$\alpha$ is supposed to be
given by the Gibbs measure $e^{-\beta F_\alpha}/ Z(\beta)$.
Let us denote by $\Omega_{{\rm s}}(F)$ the total number of thermodynamic states in the
system with free energy $F$. Then the equilibrium free energy distribution
is governed by
\begin{equation}
\label{eq:F-profile}
P(F) \propto \Omega_{\rm s}(F) e^{-\beta F} =
\exp\bigl( - \beta F + S_{\rm s}(F) \bigr) \ ,
\end{equation}
where $S_{\rm s}(F)= \log \Omega_{\rm s}(F)$ is the entropy at the level of
thermodynamic states. $S_{\rm s}(F)$ is a concave and increasing function of $F$.
We are interested in systems with exponentially many thermodynamic states,
i.e., systems with $S_{\rm s}(F)$ being proportional to the size $N$ in leading
order.
If at the minimal free energy $F_{\rm min}(\beta)$, the first derivative of
$S_{\rm s}(F)$
is greater than $\beta$, i.e., $S^\prime_{\rm s}(F_{\rm min}) > \beta$,
there exists a free energy
value $F=\overline{F} > F_{\min}(\beta)$ such that
$S^\prime_{\rm s}(\overline{F}) = \beta$.
At the vicinity of $\overline{F}$, the entropy $S_{\rm s}(F)$ is expressed as
\begin{equation}
\label{eq:S-s-expand}
S_{\rm s}(F)= S_{\rm s}(\overline{F}) + \beta (F- \overline{F} )
- \frac{ \beta^2 }{2 C_{F}(\beta)} (F- \overline{F} )^2 \ .
\end{equation}
After inserting Eq.~(\ref{eq:S-s-expand}) into Eq.~(\ref{eq:F-profile}) we find that,
at equilibrium, the probability of being in a state of free energy $F$ is governed
by the following Gaussian distribution
\begin{equation}
\label{eq:r-F-1}
P(F) = \sqrt{\frac{ \beta^2 }{2 \pi C_F(\beta)}}
\exp\Biggl( -\frac{ \beta^2 }{2 C_F(\beta)} (F- \overline{F})^2 \Biggr) \ .
\end{equation}
From Eq.~(\ref{eq:r-F-1}) it is clear that
$\overline{F}$ is the mean free energy value of the equilibrium thermodynamic
states, and $C_F(\beta)\propto N$ characterizes the fluctuation of the
equilibrium free energies.
Since $\overline{F}(\beta) > F_{\rm min}(\beta)$, we conclude that the
equilibrium properties of the system at inverse temperature $\beta$ are
determined by those excited thermodynamic states whose free energy density
$f(\beta)=\overline{F}/N$ is larger than the minimal free energy density
$f_{\rm min}(\beta)=F_{\rm min}/N$. The ground thermodynamic states of
free energy density $f_{\rm min}(\beta)$ actually do not
contribute to the equilibrium properties of the system.
On the other hand, if the entropy $S_{\rm s}(F)$ has the property that at
$F=F_{\rm min}(\beta)$ its first derivative is less than $\beta$, i.e.,
\begin{equation}
S^\prime (F_{\rm min}) = x \beta
\end{equation}
with $0 \leq x < 1$,
then Eq.~(\ref{eq:F-profile}) suggests that
the equilibrium free energies will follow an exponential law:
\begin{equation}
\label{eq:r-F-2}
P(F) \propto e^{- \beta(1-x) (F-F_{\rm min}(\beta) )} \ , \;\;\; F \geq F_{\rm min}(\beta) \ .
\end{equation}
Consequently, the equilibrium properties of the system will be
contributed by the ground thermodynamic states of free
energy density $f_{\rm min}(\beta)$, and the fluctuation of the observed free
energies is only of order unity.
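Both regimes can be visualized with a toy model of the complexity. In the Python sketch below (the two model entropy functions are chosen only for illustration, with $f_{\rm min}=0$), the case $S^\prime_{\rm s}(F_{\rm min})>\beta$ yields a Gaussian of width $\propto\sqrt{N}$ centered at $\overline{F}>F_{\rm min}$, while $S^\prime_{\rm s}(F_{\rm min})<\beta$ concentrates $P(F)$ within an $O(1)$ window above $F_{\rm min}$:
\begin{verbatim}
import numpy as np

beta, N = 1.0, 1000
f = np.linspace(1e-9, 3.0, 200_001)       # grid of free-energy densities

def stats(Sigma):                         # Sigma is S_s(F)/N on the grid
    logP = N*(Sigma - beta*f)             # log of Omega_s(F) e^(-beta F)
    w = np.exp(logP - logP.max()); w /= w.sum()
    mean_f = (w*f).sum()
    return N*mean_f, N*np.sqrt((w*(f - mean_f)**2).sum())  # mean, std of F

print(stats(2.0*np.sqrt(f)))   # slope > beta at fmin: Gaussian, width ~ N^0.5
print(stats(0.5*beta*f))       # slope < beta at fmin: exponential, width O(1)
\end{verbatim}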
\section{Grand Free Energy}
To treat the two free-energy distributions
of the preceding section
within the same mathematical framework, we need to define a grand
free energy for the system. Following the work of
M{\'{e}}zard, Parisi, and Zecchina
\cite{Mezard-etal-2002,Mezard-Parisi-2003} on the mean-field theory of
$T=0$ spin-glasses, we can decouple microscopic configurations and
macroscopic states by introducing an artificial
inverse temperature $y$ at the level of thermodynamic states.
The system's grand free energy $G(\beta; y)$ \cite{Zhou-2007b} is defined by
\begin{eqnarray}
G(\beta; y) &\equiv& - y^{-1}
\log\Bigl( \sum{_{\alpha}} e^{-y F_\alpha( \beta ) } \Bigr)
\label{eq:grand-free-energy} \\
&= &- y^{-1} \log\Bigl[ \int {\rm d} f e^{N
\bigl( \Sigma(f)- y f \bigr)} \Bigr] \ .
\label{eq:grand-free-energy-2}
\end{eqnarray}
In the thermodynamic limit of $N \to \infty$, the grand
free energy density is
\begin{equation}
\label{eq:gfe-density}
g(\beta;y)\equiv \lim\limits_{N\to\infty} \frac{G(\beta;y)}{N} \ .
\end{equation}
In Eq.~(\ref{eq:grand-free-energy-2}), $\Sigma(f)\equiv S_{\rm s}(N f)/ N$ measures the
entropy density at the level of thermodynamic states; it is called
the complexity of the system at free energy density $f$
\cite{Mezard-Parisi-2003}.
The adjustable
parameter $y$ controls which thermodynamic states will contribute to the
grand free energy $G(\beta;y)$.
Equation~(\ref{eq:grand-free-energy-2}) indicates that,
when the re-weighting parameter $y$ is
not too large, the grand free energy is contributed by the excited
thermodynamic states of
free energy density satisfying $\Sigma^\prime(f)= y$. The relevant free energy
density and complexity are
related to the grand free energy density by
\begin{eqnarray}
f(\beta; y) &=& \frac{ \partial y g(\beta; y) }{\partial y} \ , \label{eq:fe} \\
\Sigma(\beta;y) &=& y^2 \frac{\partial g(\beta;y)}{\partial y} > 0\ . \label{eq:cpl}
\end{eqnarray}
On the other hand,
when $y > y^*(\beta) \equiv \Sigma^\prime\big(f_{\rm min}(\beta) \bigr)$, the grand free energy
is contributed by the ground thermodynamic states of the system, therefore
\begin{eqnarray}
f\bigl(\beta; y > y^*(\beta) \bigr) &=& f_{\rm min}(\beta) \ , \label{eq:fe2} \\
\Sigma\bigl(\beta; y > y^*(\beta) \bigr) &=& 0 \ . \label{eq:cpl2}
\end{eqnarray}
From Eqs.~(\ref{eq:cpl}) and (\ref{eq:cpl2}) we know that (1) the minimal free energy
density $f_{\rm min}(\beta)$ corresponds to $y=y^*(\beta)$, where the complexity
$\Sigma(\beta;y)$ drops to zero;
(2) if $\Sigma(\beta;\beta) > 0$, then
$f(\beta; \beta)> f_{\rm min}(\beta)$ is the mean free energy density of the
thermodynamic states which dominate the equilibrium properties of the system.
\section{Results on the $p$-spin interaction Ising spin-glass model}
Let us complement the above-described general
analysis with a concrete example, namely the
$p$-spin interaction Ising model on a complete
graph \cite{Gardner-1985}. The Hamiltonian of the model is
\begin{equation}
\label{eq:P-spin-model}
{\cal H}({\bf \sigma}) =
- \sum\limits_{1\leq i_1 < \ldots < i_p \leq N} J_{i_1 i_2 \ldots i_p} \sigma_{i_1}
\sigma_{i_2} \ldots \sigma_{i_p} \ ,
\end{equation}
where the spin variables $\sigma_{i}=\pm 1$
and the quenched (time-independent) coupling
constant $J_{i_1 \ldots i_p}$ is independently and identically distributed
according to
\begin{equation}
\label{eq:J-form}
\omega(J_{i_1 i_2 \ldots i_p} ) = \sqrt{\frac{N^{p-1}}{ \pi p! J^2 }} \exp\Biggl(
-\frac{N^{p-1}}{p! J^2} J_{i_1 i_2 \ldots i_p}^2
\Biggr)
\end{equation}
with $J$ being a constant parameter (the energy unit of the system).
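For concreteness, a minimal Python sketch (the function names are ours and purely illustrative) samples the quenched couplings according to Eq.~(\ref{eq:J-form}) and evaluates the Hamiltonian Eq.~(\ref{eq:P-spin-model}) for a small system:
\begin{verbatim}
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

def sample_couplings(N, p=3, J=1.0):
    # Each J_{i1...ip} is an independent Gaussian of zero mean and
    # variance p! J^2 / (2 N^{p-1}), matching omega(J_{i1...ip}) above.
    sigma = math.sqrt(math.factorial(p) / 2.0) * J / N ** ((p - 1) / 2.0)
    return {idx: rng.normal(0.0, sigma)
            for idx in itertools.combinations(range(N), p)}

def energy(spins, couplings):
    # H(sigma) = - sum_{i1<...<ip} J_{i1...ip} sigma_{i1} ... sigma_{ip}
    return -sum(J_c * np.prod(spins[list(idx)])
                for idx, J_c in couplings.items())

N = 16
couplings = sample_couplings(N)
spins = rng.choice([-1, 1], size=N)
print(energy(spins, couplings))
\end{verbatim}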
For $p=2$, Eq.~(\ref{eq:P-spin-model}) is
the celebrated Sherrington-Kirkpatrick model
\cite{Sherrington-Kirkpatrick-1975,Mezard-etal-1987}. For $p\geq 3$, earlier efforts
\cite{Gardner-1985,Rivoire-etal-2004} have found that
the system has two transitions, a dynamic transition followed
by a lower-temperature static transition. The dynamic transition is related to
the onset of ergodicity-breaking and is important for out-of-equilibrium
processes, but it was not regarded as a real equilibrium phase-transition.
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{figure01.eps}
\caption{\label{fig:free-energy}
The mean equilibrium free energy density and the
minimal free energy density of the $3$-spin interaction
Ising model Eq.~(\ref{eq:P-spin-model}) on a complete graph of $N=\infty$.
Inset shows the
complexity $\Sigma(\beta;\beta)$ as a function of $\beta$.
For $\beta \in (1.468,1.5352)$ the
equilibrium properties of the system are determined by excited thermodynamic
states.}
\end{figure}
If we assume that all the thermodynamic states of the model system
Eq.~(\ref{eq:P-spin-model}) are evenly distributed in the whole configurational space
$\Gamma$, i.e., there is no further clustering of the thermodynamic states, the grand
free-energy density of the system as defined by Eq.~(\ref{eq:gfe-density})
can be obtained through the cavity method \cite{Mezard-etal-1987} (see
also \cite{Zhou-2007b}). The final expression for $g(\beta; y)$ is
\begin{eqnarray}
& & g(\beta; y) = -\frac{1}{\beta} \log 2
- \frac{p-1}{4} J^2 (y q_0^p + (\beta-y) q_1^p) \nonumber \\
& & \;\;\; - \frac{1}{4} \beta J^2 (1- p q_1^{p-1})
- \frac{1}{y} \int \frac{ {\rm d} z_0}{ \sqrt{\pi}} e^{-z_0^2}
\log\Bigl[ \int \frac{ {\rm d} z_1}{\sqrt{ \pi}} e^{-z_1^2} \nonumber \\
& & \;\;\; \times \cosh^{y/\beta} (
\beta J \lambda_0 z_0 + \beta J \lambda_1 z_1 ) \Bigr]
\ ,
\label{eq:gfe-density2}
\end{eqnarray}
where $\lambda_0=\sqrt{p} q_0^{(p-1)/2}$,
$\lambda_1= \sqrt{p} ( q_1^{p-1}-q_0^{p-1})^{1/2}$,
$q_0 = \langle m \rangle^2$, and $q_1= \langle m^2 \rangle$, with $m$
being the magnetization of a
vertex in one thermodynamic state, and $\langle \cdots \rangle$ means
averaging over all
the thermodynamics states $\alpha$ of the system (each of them is
weighted with
the factor $e^{-y F_\alpha(\beta)}$).
$q_0$ and $q_1$ satisfy $\partial g/ \partial q_0 = \partial g / \partial q_1 = 0$.
Equation (\ref{eq:gfe-density2}) was first derived in \cite{Gross-Mezard-1984}
using the replica trick, and was regarded as the free-energy density of
the system \cite{Gross-Mezard-1984,Gardner-1985}. But we see that actually $g(\beta;y)$ is
the grand free-energy density, which combines
both the free energy effect and the entropy effect (at the level of thermodynamic
states) of the system.
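For illustration, $g(\beta;y)$ in Eq.~(\ref{eq:gfe-density2}) can be evaluated numerically by Gauss-Hermite quadrature. The following Python sketch (the function names are ours; $q_0$ and $q_1$ must still be fixed by the stationarity conditions) also extracts $f$ and $\Sigma$ from Eqs.~(\ref{eq:fe}) and (\ref{eq:cpl}) by central finite differences, using the fact that at the stationary point the implicit $y$-dependence of $q_0$ and $q_1$ drops out:
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermgauss

z, w = hermgauss(60)
w = w / np.sqrt(np.pi)   # weights for  int dz/sqrt(pi) exp(-z^2) (...)

def g(beta, y, q0, q1, p=3, J=1.0):
    # grand free-energy density at given q0, q1
    lam0 = np.sqrt(p) * q0 ** ((p - 1) / 2.0)
    lam1 = np.sqrt(p * max(q1 ** (p - 1) - q0 ** (p - 1), 0.0))
    inner = np.array([np.sum(w * np.cosh(beta * J * (lam0 * z0 + lam1 * z))
                             ** (y / beta)) for z0 in z])
    return (-np.log(2.0) / beta
            - (p - 1) / 4.0 * J ** 2 * (y * q0 ** p + (beta - y) * q1 ** p)
            - beta * J ** 2 / 4.0 * (1.0 - p * q1 ** (p - 1))
            - np.sum(w * np.log(inner)) / y)

def f_and_complexity(beta, y, q0, q1, eps=1e-4):
    # f = d(y g)/dy and Sigma = y^2 dg/dy via central differences
    dg_dy = (g(beta, y + eps, q0, q1)
             - g(beta, y - eps, q0, q1)) / (2.0 * eps)
    return g(beta, y, q0, q1) + y * dg_dy, y ** 2 * dg_dy
\end{verbatim}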
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{figure02.eps}
\caption{
\label{fig:overlap}
Overlap histograms for
$3$-spin interaction Ising systems of $N=200$ vertices (the main figure)
and $N=100$ vertices (the inset).
Different curves correspond to different inverse temperatures.
}
\end{figure}
For an infinite system with $p=3$, the mean values of the equilibrium and the minimal
free energy density are shown in Fig.~\ref{fig:free-energy}
as a function of the inverse temperature $\beta$. Ergodicity of
the system breaks down
at $\beta_{1}\simeq 1.468$, where the whole configuration space splits into
exponentially
many ergodic sub-spaces. The equilibrium and the minimal free energy densities of
the system have a
jump at $\beta_1$, but the energy and grand free-energy densities
are both continuous at this point. For $\beta_1 < \beta < \beta_2\simeq 1.5352$,
the mean equilibrium free-energy density is higher
than the minimal free-energy density (which is obtained by setting
$y > \beta$), and the complexity of the system decreases continuously with
$\beta$ and drops to zero at
$\beta_2$. For $\beta > \beta_2$, the mean equilibrium
free-energy density is identical to the minimal
free energy density of the system. The above-mentioned results also hold when
one considers the possibility of further clustering of the thermodynamic states
or splitting of each thermodynamic state into sub-states
\cite{Gardner-1985,Montanari-RicciTersenghi-2003}.
For a system of small size $N$, ergodicity will be preserved even at low
temperatures; but the relevant configurations of the system may show some degree of
clustering. To detect this organization, we can
calculate the overlaps between the sampled independent configurations of the
system. The overlap of two configurations $\vec{\bf \sigma}^1$ and
$\vec{\bf \sigma}^2$ is defined as \cite{Mezard-etal-1987}
\begin{equation}
\Lambda_{12}= \frac{1}{N} \sum\limits_{j=1}^{N} \sigma_j^1 \sigma_j^2 \ .
\end{equation}
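Such overlaps between independently sampled configurations can be estimated with a minimal single-spin-flip Metropolis sketch in Python (reusing the coupling sampler of the earlier sketch; feasible only for small $N$ and illustrative rather than optimized):
\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(1)

def metropolis(spins, couplings, beta, n_sweeps=1000):
    # single-spin-flip Metropolis dynamics for the p-spin Hamiltonian
    N = len(spins)
    touching = {i: [] for i in range(N)}  # interactions containing spin i
    for idx, J_c in couplings.items():
        for i in idx:
            touching[i].append((idx, J_c))
    for _ in range(n_sweeps):
        for i in range(N):
            # flipping spin i reverses every term that contains it
            dE = 2.0 * sum(J_c * np.prod(spins[list(idx)])
                           for idx, J_c in touching[i])
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
    return spins

def overlap(s1, s2):
    # Lambda_12 = (1/N) sum_j sigma_j^1 sigma_j^2
    return float(np.mean(s1 * s2))

# two independently equilibrated replicas with the same quenched disorder:
#   s1 = metropolis(rng.choice([-1, 1], size=N), couplings, beta=1.5)
#   s2 = metropolis(rng.choice([-1, 1], size=N), couplings, beta=1.5)
#   print(overlap(s1, s2))  # repeat to accumulate the histogram of Lambda
\end{verbatim}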
The overlap histograms for two finite systems of sizes $N=100$ and
$N=200$
are shown in Fig.~\ref{fig:overlap}. Two peaks appear in the histograms when
$\beta$ approaches the theoretically predicted value $\beta_1$.
The peak at $\Lambda \simeq 0$
is due to pairs of configurations from different domains of the
configurational
space, and the other peak at $\Lambda \simeq 0.8$ (for $N=100$) or
$\Lambda \simeq 0.6-0.8$ (for $N=200$)
corresponds to the overlaps between
configurations from the same domain of the configurational space.
Figure~\ref{fig:overlap} also demonstrates that, as the system size $N$ increases,
the organization of the configurational phase space becomes more complex.
\section{Conclusion and discussion}
In this paper we studied the equilibrium
properties of a thermodynamic system with broken ergodicity such as a spin-glass.
If the number of thermodynamic states increases exponentially fast with
the system size $N$ at low temperatures, we show that the equilibrium
free-energy distribution of the system may be Gaussian, and consequently the
equilibrium static properties of the system are determined by excited
thermodynamic states
of the system, whose free-energy densities are higher than the minimal free-energy
density of the system. A grand free energy function (with an adjustable
parameter $y$) was defined in this paper following
the earlier work of Refs.~\cite{Mezard-Parisi-2003,Mezard-etal-2002}
to calculate the mean value of the
equilibrium free-energy density and the complexity of the system.
The mean-field theory of spin-glasses by Parisi and colleagues
\cite{Mezard-etal-1987,Parisi-2006} was based on the assumption that the
equilibrium free-energies of the system obey an exponential distribution.
Under that theory, only the thermodynamic states of the ground
free-energy density are {\em allowed}
to contribute to the equilibrium properties of the
system. As we now know, for disordered systems with two-body interactions
\cite{Sherrington-Kirkpatrick-1975} this assumption of exponential-distribution is
valid. But for a system with many-body interactions, there may exist a temperature
window within which the free-energy distribution is Gaussian. In this latter case,
Fig.~\ref{fig:free-energy} demonstrates that the mean value of
the equilibrium free-energy densities
{\em decreases} with temperature. This would apparently cause an
entropy crisis; actually, however, the entropy of a thermodynamic state is positive.
Notice that when the free-energy distribution is Gaussian, different groups of
thermodynamic states are taking the dominant role as the temperature changes.
The predictions of the present work can be further
checked by Monte Carlo simulations
on a large finite-connectivity complex system with many-body interactions.
In this work, we focused on the equilibrium static properties of an
ergodicity-broken system and assumed that the significance of
each thermodynamic state $\alpha$ is proportional to $\exp(-\beta F_\alpha)$, with
$F_\alpha$ being its free energy. This assumption may not be valid for
out-of-equilibrium dynamical processes. For these latter non-equilibrium processes,
it has been suggested that
the system will typically be
trapped to a free energy level which corresponds to the maximal
complexity. When the system is cooled down slowly from a high temperature,
the reachable thermodynamic states depend strongly
on the specific dynamical rules
used \cite{Montanari-RicciTersenghi-2004,Horner-2007}. The mean equilibrium
free energy density discussed in this paper, although it may not be achievable
in a dynamical experiment, sets a lower bound on the dynamically reachable
free energy density. As demonstrated by Fig.~\ref{fig:free-energy}, in an intermediate
temperature range,
this lower bound may be well above the minimal free energy density of the system.
\acknowledgments
{H.Z.} acknowledges the hospitality of
Pik-Yin Lai and other colleagues at the Physics Department of the
National Central University, where this work was finished. The simulation
of Fig.~\ref{fig:overlap} was performed in the PC clusters of the State Key
Laboratory for Scientific and Engineering Computing (Beijing).
\section{Introduction} \label{sec:intro}
One of the most remarkable results in theoretical physics arises from random matrix models \cite{DiFrancesco:1993cyw},
whose critical limit, organized via 't Hooft's topological expansion \cite{tHooft:1973alw}, provides a universal random geometry known as the Brownian map \cite{legall}, which is proven \cite{miller2015} equivalent to Liouville continuum gravity (quantum gravity with a dilaton field in two dimensions) \cite{liouville}.
Upon introduction of nontrivial dynamics, matrix models
can be shown to be mathematically rich.
The theories based on Kontsevich-type matrix models \cite{kontsevich} can be reformulated as a non-commutative quantum field theory \cite{grossewulk}, namely, the Grosse-Wulkenhaar model.
The Grosse-Wulkenhaar model
is an appealing quantum field theory with mathematical rigor,
and exhibits properties like constructive renormalizability,
asymptotic safety \cite{gwasympsafe}, integrability \cite{gwintegrable}, and Osterwalder-Schrader positivity \cite{gwos}.
Tensor models are higher rank analogues of such random matrix models, and therefore lend themselves well as candidates to produce even more remarkable results for higher dimensional random geometry and quantum gravity \cite{tensortrack, Gurau:2016cjo, Guraubook}.
Colored tensor models \cite{Gurau:2011xp} in particular, are shown to represent fluctuating piecewise-linear (PL) pseudomanifolds
via their perturbative expansion in Feynman graphs encoding topological spaces \cite{Bandieri:1982}. Colored tensor models admit a $1/N$ expansion of the partition function
\cite{GuraulargeN} with a resummable leading order, given by melonic graphs \cite{Bonzom:2011zz}, exhibiting critical behavior and a continuum limit \cite{Bonzom:2011zz}.
Melonic graph amplitudes satisfy a Lie algebra encoded in the large $N$ limit of the Schwinger-Dyson equations for tensor models \cite{Gurau:2011tj}.
Nonperturbative aspects such as Borel summability \cite{borel} and topological recursion \cite{tr} are also studied.
Tensor models also provide a very interesting platform to explore new types of quantum/statistical field theory, owing to their non-local interactions and their vast combinatorics.
As with matrix models, the combinatorial nature of tensor models can be enriched by introducing differential operators such that the resulting theory contains nontrivial dynamics.
Consequently, the statistical model acquires a notion of scale and its $1/N$ expansion can be translated into a renormalization group flow of the theory.
A series of analyses and results to understand the renormalization group flow can be found in the works \cite{bengeloun, Carrozza:2012uv, Carrozza:2014rba, Carrozza:2016vsq}. Different methods have been developed to accommodate the non-local nature of tensor models coming from combinatorics, such as dimensional regularization \cite{BenGeloun:2014qat} and $4-\epsilon$ expansion \cite {Carrozza:2014rya}.
Having a formulation of renormalization group flow, one can then search for non-trivial fixed points, e.g., via functional renormalization group \cite{frg} and check their stability via Ward-Takahashi identities \cite{ward}.
Other nonperturbative studies include Polchinski equations \cite{Krajewski:2015clk}.
Moreover, in recent years, tensor models have found a new avenue of research in holography via the
large $N$ melonic limit, which is
shared with the Sachdev-Ye-Kitaev model \cite{witten}.
Indeed, tensor models are a conceptually and computationally powerful tool not only to address random geometric problems but also problems in holography \cite{syk}, non-local quantum and statistical field theories, artificial intelligence \cite{Lahoche:2020txd}, turbulence \cite{Dartois:2018kfy}, linguistics \cite{Ramgoolam:2019}, and condensed matter \cite{pspin},
and serve as a very rich playground for theoretical physicists and mathematicians alike.
In this present work, we focus on studying the topological information encoded in the graphs generated by rank-$4$ colored tensor models.
Understanding and revealing the topological information and structure of the PL-manifolds generated by tensor models is an important task in the context of random geometry and quantum gravity.
Of course, this present work is not the first one nor the only one to address the topological properties encoded in the PL-pseudomanifolds that colored tensor models represent.
In fact, there are precedent works examining topological spaces of tensor models \cite{Gurau:2009tw, Gurau:2010nd, Gurau:2011xp, Guraubook} e.g., homology and homotopy of the graphs have been presented.
In three dimensions, and therefore correspondingly for rank-$3$ colored tensor models, Heegaard splittings have been identified in \cite{Ryan:2011qm}.
However, this particular work of ours focuses on a novel concept, trisections in four-dimensional topology, which were recently introduced by Gay and Kirby in 2012 \cite{GayKirby}.
Trisections are a novel tool to describe $4$-manifolds by revealing the nested structure of lower-dimensional submanifolds. In particular, the trisection genus of a $4$-manifold is a topological invariant. In the context of discrete manifolds, the trisection of all standard simply connected PL 4-manifolds has been studied for example in \cite{bell2017},
and trisections in so-called crystallization graphs have been investigated in \cite{Casali:2019gem}.
In the former work \cite{bell2017}, the authors rely on Pachner moves to ensure that these submanifolds are handlebodies.
However, in colored tensor models, we do not have the privilege of performing Pachner moves, since they are not compatible with colors in rank-$4$ tensor models.
In the latter work \cite{Casali:2019gem}, the study focused on crystallization graphs, which are very special graphs that ensure the connectivity of each of the submanifolds.
However, in tensor models we also generate graphs which are not crystallizations; furthermore, in the continuum limit of tensor models, where we are interested in large volumes and refined triangulations, we will not find crystallization graphs dominating. Hence, crystallizations have a limited applicability in tensor models.
In this work, we therefore address and formulate trisections in the colored tensor model setting.
We organize our paper as follows.
In sec.~{\ref{sec:tensormodels}}, we review some key points related to colored tensor models, which our work is based on.
In particular, in sec.~\ref{sec:coloredtensormodels}, we review the construction of tensor models and the definition of their partition function.
In sec.~\ref{sec:topcolgraph}, we recall how Feynman graphs of colored tensor models can encode manifolds and what kind of topological information they store.
\\
In sec.~\ref{sec:heegaardsplitting}, we illustrate a few key concepts of three-dimensional topology necessary to our work.
In sec.~\ref{sec:attachinghandles}, we explain how to describe manifolds via their handle decomposition and recall how, in the case of $3$-manifolds, it encodes their Heegaard splitting.
Section~\ref{sec:connected-sum} analyzes the behavior of Heegaard splittings under connected sum, which will be of great importance in the later part of the paper, while in sec.~\ref{sec:jackets-heeg} and \ref{sec:more-heeg-split}, we review two constructions of Heegaard surfaces that are known in the literature and are based on combinatorial methods.
\\
Sec.~\ref{sec:trisections}, finally, is dedicated to the construction of trisections.
After introducing the concept of trisection for smooth $4$-manifolds, in sec.~\ref{sec:stab} we review a particular kind of move, known as stabilization, and highlight some features that stabilization shares with connected sum of trisections.
In sec.~\ref{sec:cutting-simplices}, we focus on how to partition the vertices of a $4$-simplex in three sets, which is the starting point of our construction of trisections.
In sec.~\ref{sec:split4bubbles}, we study the structure obtained in a PL-manifold via our combinatorial construction and point out what kind of problems are encountered for a generic graph of a four-dimensional manifold. From this point onward, our work departs from previous results studying trisections via triangulations of $4$-manifolds.
In sec.~\ref{sec:connect4bubbles}, finally, we show how the information about the trisection can be extracted from the colored graph of rank-$4$ colored tensor models.
Sec.~\ref{sec:4dhandlebodies} and sec.~\ref{sec:central-surface} elaborate on the analysis of the result. In particular, we prove that we indeed split the manifold under investigation into three four-dimensional handlebodies, and we analyze the trisection diagram generated with our procedure.
Sec.~\ref{sec:pseudo-mfd} addresses relaxing the hypothesis that graphs are dual to manifolds and illustrates that, in some cases, it is possible to draw a few topological conclusions for a wider class of graphs.
\\
Finally, in sec.~\ref{sec:conclusions}, we summarize our results and point out a few possible future directions which may benefit from our present work.
\section{Tensor models}
\label{sec:tensormodels}
\subsection{$(d+1)$-colored tensor models}
\label{sec:coloredtensormodels}
In this section, we introduce tensor models, and in particular colored tensor models and some of their relevant objects which will be used later in order to construct trisections.
Tensor models are statistical theories of random tensors and can be thought of as zero-dimensional field theories. Due to their low dimensionality, tensor models mostly encode combinatorial information, and many of their properties can be directly imported to their higher dimensional counterpart: tensor field theories. Colors are introduced via an extra index labeling the tensors themselves, and we require the covariance of the theory to be diagonal with respect to the color indices. This last requirement will allow us to have much greater control on the combinatorics encoded in the theory.
Besides the field content of the theory (e.g., the rank of the tensors and the number of colors considered), a colored tensor model is defined in perturbation theory upon specifying a free covariance and an array of interactions (deformations around the free theory). In this paper, we restrict to a simplicial model (the meaning of this name will become clear soon). We therefore consider the cyclic group of integers modulo $N$, $\mathbb{Z}_N$, and let $I$ be $\mathbb{Z}_N^{\times d}$ with elements ${\bf {n}} \in I$, ${\bf{n}}=\{n_1, \dots , n_d\}$, and $F(I)$ the space of complex functions on $I$. We give the following definition:
\begin{definition}
\label{def:simplicial-tensor-model}
A $(d+1)$-colored tensor model of rank $d$ tensors is defined via a measure $d \nu$
\begin{equation}
d \nu = \prod_{i=0}^d d \mu_{C^i} (\phi^i, \bar{\phi}^i) e^{-S}, \quad S = \lambda \sum_{{\bf{n}}_i \in I} {\cal K}_{{\bf{n}}_0\cdots {\bf{n}}_d} \prod_{i=0}^{d} \phi^i_{{\bf{n}}_i} +{\bar \lambda} \sum_{{\bar {\bf{n}}}_i \in I} {\bar {\cal K}}_{{\bar {\bf{n}}}_0\cdots {\bar {\bf{n}}}_d} \prod_{i=0}^{d} \bar{\phi}^i_{{\bar {\bf{n}}}_i}
\end{equation}
where
\begin{itemize}
\item $\phi^i : I \rightarrow {\mathbb C}$ are $d+1$ complex random fields;
\item $C^i$ : $F(I) \rightarrow F(I)$ are $d+1$ covariances;
\item $ {\cal K}$, $\xbar{ {\cal K}}$ : $I^{\times (d+1)} \rightarrow \mathbb{C}$ are two vertex kernels.
\end{itemize}
\end{definition}
If $\cal K$ and $\bar{ {\cal K}}$ are such that every tensor has exactly one index ($n_i$) contracted with another tensor in the interaction, we call the model a \textit{simplicial} colored tensor model. Note that in the interaction term, every color index appears on the same footing, while the free measure factorizes in the product of single color measures. Thanks to this structure, the Feynman diagrams of a simplicial colored tensor model can be represented as \textit{colored graphs}, i.e., connected bipartite regular graphs such that each line has a color in $\{0, 1, \dots, d\}$ and each node is incident to exactly one line of each color\footnote{In the following we will often have to go back and forth between graphs and triangulations. Therefore, in order to avoid confusion, we will adopt the terms \textit{node} and \textit{line} for, respectively, zero-dimensional and one-dimensional objects in a graph, while we will call \textit{vertex} and \textit{edge} a zero-dimensional and a one-dimensional object in the triangulation. When referring to edges on the boundary of two-dimensional polygons we might use the term \textit{sides}.}.
\begin{definition}
A closed $(d+1)$-colored graph is a graph $\cal G = (\cal V, \cal E)$ with node set $\cal V$ and line set $\cal E$ such that:
\begin{itemize}
\item $\cal V$ is bipartite; there is a partition of the node set ${\cal V} = V \cup \xbar V$, such that for any element $l \in \cal E$, $l= \{v, \bar v\}$ where $v\in V$ and $\bar v \in \xbar V$. The cardinalities satisfy $\vert{ \cal V} \vert = 2 \vert V \vert = 2 \vert {\xbar V} \vert$.
\item The line set is partitioned into $d+1$ subsets ${\cal E} = \cup_{i = 0}^d {\cal E}^i$, where ${\cal E}^i$ is the subset of lines with color $i$.
\item It is $(d+1)$-regular (i.e., all nodes are $(d+1)$-valent) with all lines incident to a given node having distinct colors.
\end{itemize}
\end{definition}
To distinguish them, we call the elements $v \in V$ (${\bar v}\in {\xbar V}$) positive (negative) nodes and draw them with the colors turning clockwise (anti-clockwise). We often draw positive (negative) nodes in black (white) in graphs.
The bipartition also induces an orientation on the lines, say from $v$ to ${\bar v}$.
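As an illustration of this definition, a minimal Python sketch could read as follows (the encoding is ours: one permutation per color, which enforces bipartiteness, proper coloring and $(d+1)$-regularity by construction):
\begin{verbatim}
class ColoredGraph:
    # A closed bipartite (d+1)-colored graph with n positive and n
    # negative nodes.  lines[c][v] = w means that the positive node v is
    # joined to the negative node w by a line of color c.
    def __init__(self, lines):
        self.lines = [list(p) for p in lines]
        self.d = len(self.lines) - 1
        self.n = len(self.lines[0])
        # each color must define a permutation of the negative nodes
        assert all(sorted(p) == list(range(self.n)) for p in self.lines)

    def bicolored_cycles(self, i, j):
        # number of {i,j}-bicolored cycles (2-bubbles): orbits of the
        # permutation (lines[j])^{-1} o lines[i] on the positive nodes
        inv_j = {w: v for v, w in enumerate(self.lines[j])}
        seen, ncyc = set(), 0
        for start in range(self.n):
            if start in seen:
                continue
            ncyc, v = ncyc + 1, start
            while v not in seen:
                seen.add(v)
                v = inv_j[self.lines[i][v]]
        return ncyc

# the elementary melon of the rank-3 model: two nodes joined by four lines
melon = ColoredGraph([[0], [0], [0], [0]])
assert melon.bicolored_cycles(0, 1) == 1
\end{verbatim}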
We notice that $(d+1)$-colored graphs are dual to (colored) simplicial triangulations of piecewise linear (PL) orientable $d$-dimensional pseudomanifolds \cite{Bandieri:1982, cristSurvey}. In particular, every node in the graph corresponds to a top dimensional simplex, every line is dual to a $(d-1)$-dimensional face, and two nodes joined by a line of color $i$ represent a pair of $d$-simplices sharing the same $(d-1)$-face (i.e., an orientation reversing homeomorphism between the two boundary faces is implied). In fact, given a simplicial colored triangulation ${\cal T}$ of a PL pseudo-manifold $M$, one can consider the dual cellular decomposition ${\cal T}^*$ and notice that a colored graph is nothing but the $1$-skeleton of ${\cal T}^*$. Therefore, colored graphs are often referred to as \textit{graph encoding manifolds} (GEM), and play a fundamental role in the study of PL topological invariants from a combinatorial point of view, especially within the framework of crystallizations \cite{cristSurvey}. We remark that not every triangulation can be colored, though a refinement compatible with a coloring can always be found by means of barycentric subdivision.
We postpone a more detailed explanation of the topological description of colored graphs to the following sections. Nevertheless, it is useful to recall here how to embed a colored graph in its dual triangulation. Consider a triangulation ${\cal T}$ of a $4$-manifold $M$, and a colored graph $\cal G$ dual to ${\cal T}$, so that $K(\cal G) = {\cal T}$ and $\vert K(\cal G)\vert =M$. The most natural prescription is to embed the graph such that every component of the graph intersects its dual simplex transversally and at the barycenter. Since the graph is the $1$-skeleton of the dual cellular decomposition of $M$, it is only made of nodes and lines. Therefore, we will only have to embed nodes in the barycenters of $d$-simplices and have $i$-colored lines intersecting $i$-colored $(d-1)$-faces transversally. Examples are shown in fig.~\ref{fig:dual}. For example, in four dimensions we will have nodes at the center of $4$-simplices and $i$-colored lines intersecting $i$-colored tetrahedra transversally. Though very simple, this embedding represents a very powerful tool to understand many topological properties of PL manifolds using colored graphs.
\begin{figure}[h]
\begin{minipage}[t]{0.9\textwidth}
\centering
\def0.75\columnwidth{0.6\columnwidth}
\centering
\includegraphics[scale=.18]{dual.png}
\caption{We show $d$-simplices in $d=2, 3, 4$ dimensions, with the dual $(d+1)$-colored graphs embedded. From left to right, $d=2, 3, 4$; on the top row, the embedded tensor model graphs are shown in the stranded representation, and on the bottom row, in the colored representation.
We show in red the $0$-colored faces (one-dimensional for rank $2$, two-dimensional for rank $3$, and three-dimensional for rank $4$).}
\label{fig:dual}
\end{minipage}
\end{figure}
As a final remark, we point out that bipartiteness of a colored graph $\cal G$, which from a tensor model point of view stems from employing complex tensors and a real free covariance, implies orientability of $K({\cal G})$ \cite{Bandieri:1982}. Both in the GEM formalism and in tensor models, this condition can be relaxed if nonorientable (pseudo-)manifolds are to be considered; nevertheless, in this paper we restrict ourselves to the orientable case.
\subsection{Topology of colored graphs}
\label{sec:topcolgraph}
As advertised, colored graphs are extensively studied in topology, especially in the form of crystallizations \cite{Casali:2017tfh, Ferri:1982,Lins:1995}.
One can say that the colors are therefore responsible for encoding enough topological information to construct a $d$-dimensional cellular complex, rather than the a priori naive $1$-complex of a graph. Most of the topological information is encoded within different kinds of embedded sub-complexes of $K({\cal G})$ and their combinatorial description in terms of colored graphs.
\paragraph{Bubbles.}
The first structure we present is that of \textit{bubbles}\footnote{Sometimes referred to as \textit{residues} in the literature}. Starting from a colored graph $\cal G$ dual to a colored triangulation ${\cal T}=K({\cal G})$, a $n$-bubble ${\cal B}^{i_1, \dots , i_n}_a$ is the $a$-th connected component of the subgraph spanned by the colors $i_1, \dots, i_n\in\{0,\dots, d\}$.
In order to lighten the notation, we will indicate $d$-bubbles by their only lacking color, and sometimes we will refer to them as ${\widehat{i}}$-bubbles; for example, in four dimensions we might consider the $\widehat{0}$-bubble ${\cal B}_a^{\widehat{0}}={\cal B}_a^{1, 2, 3, 4}$.
Each bubble identifies a single simplex in ${\cal T}$, in particular given a $n$-bubble ${\cal B}^{i_1, \dots , i_n}_a$, its dual $K({\cal B}^{i_1, \dots , i_n}_a)$ is PL-homeomorphic to the link of a $(d-n)$-simplex $\sigma_a$ in the first barycentric subdivision of ${\cal T}$. Upon the embedding procedure described above, we can think about $K({\cal B}^{i_1, \dots , i_n}_a)$ as the boundary of a $n$-dimensional submanifold of ${\cal T}$, intersecting $\sigma_a$ transversally. The most important bubbles for our work are $d$-bubbles and $2$-bubbles. $d$-bubbles represent the link of vertices ($0$-simplices) in ${\cal T}$. A standard result states that $K({\cal G})$ is a manifold if and only if all $d$-bubbles are topological spheres. $2$-bubbles will be referred to as bicolored cycles\footnote{In the tensor models literature, we often refer to bicolored cycles as faces, however, in this paper, we will keep the word faces for general simplices.}, they identify $(d-2)$-simplices (triangles in four dimensions) and are often depicted in tensor models when employing the ``stranded'' notation for Feynman graphs. From a tensor model perspective, while nodes of $\cal G$ correspond to interaction vertices and lines to free propagators of the theory, bicolored cycles come from the contraction patterns of tensor indices.
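Building on the \texttt{ColoredGraph} sketch above, bubbles can be extracted as connected components of the color-restricted subgraph, e.g., with a simple union-find; the following Python fragment is a minimal illustration:
\begin{verbatim}
def bubbles(G, colors):
    # connected components of the subgraph of G spanned by the given
    # colors; nodes are tagged '+'/'-' to keep the bipartition explicit
    nodes = [('+', v) for v in range(G.n)] + [('-', w) for w in range(G.n)]
    parent = {x: x for x in nodes}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for c in colors:
        for v, w in enumerate(G.lines[c]):
            ra, rb = find(('+', v)), find(('-', w))
            if ra != rb:
                parent[ra] = rb

    comps = {}
    for x in nodes:
        comps.setdefault(find(x), []).append(x)
    return list(comps.values())

# e.g. the d-bubbles lacking color 0 (the 0-hat bubbles) of the melon:
print(bubbles(melon, [1, 2, 3]))
\end{verbatim}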
\paragraph{Jackets.}
Let $\cal G$ be a $(d+1)$-colored graph. For any cyclic permutation $\eta = \{\eta_0, \dots, \eta_d\}$ of the color set, up to inverse, there exists a regular cellular embedding of $\cal G$ into an orientable surface $\Sigma_\eta$, such that regions of $\Sigma_\eta$ are bounded by bicolored cycles labeled by $\{\eta_i, \eta_{i+1}\}$ \cite{BenGeloun:2010wbk, GuraulargeN}. Then, we define a jacket ${\cal J}_{\eta}$ as the colored graph having the same nodes and lines as $\cal G$, but only the bicolored cycles $\{\eta_i, \eta_{i+1}\}$:
\begin{definition}
A colored {\textit jacket} ${\cal J}_{\eta}$ is a $2$-subcomplex of $\cal G$, labeled by a permutation $\eta$ of the set $\{0, \dots, d\}$, such that
\begin{itemize}
\item $\cal J$ and $\cal G$ have identical node sets, ${\cal V}_{\cal J} = {\cal V}_{\cal G}$;
\item $\cal J$ and $\cal G$ have identical line sets, ${\cal E}_{\cal J} = {\cal E}_{\cal G}$;
\item the bicolored cycle set of ${\cal J}_{\eta}$ is a subset of the bicolor set of $\cal G$: ${\cal F}_{\cal J} = \{ f \in {\cal F}_{\cal G} \vert f = \{\eta_i, \eta_{i+1}\}, i\in \mathbb Z_{d+1}\}$.
\end{itemize}
\end{definition}
From a tensor model perspective, jackets are merely ribbon graphs (comprising only nodes, lines and bicolored cycles), like the ones generated by matrix models. Therefore, jackets represent embedded surfaces in the cellular complex represented by colored tensor model graphs. Let us clarify this last point. The regular embedding of $\cal G$ into $\Sigma_{\eta}$ defines a cellular decomposition of $\Sigma_{\eta}$ with polygonal $2$-cells having $(d+1)$ sides. Each $2$-cell is dual (in $\Sigma_{\eta}$) to a node of $\cal G$ and each side is dual to a line (furthermore, every vertex is dual to a bicolored cycle $\{\eta_i, \eta_{i+1}\}$). Therefore, sides inherit the colors carried by the lines of $\cal G$. One may notice that the transversal intersection of a surface with a codimension-$1$ $i$-simplex is a one-dimensional edge homeomorphic to such an $i$-colored side. Therefore, we can think about $K({\cal J}_{\eta})$ as an embedding of $\Sigma_{\eta}$ in $K({\cal G})$, such that it intersects transversally all the $(d-1)$-faces. If $d>3$, the dimensionality of $\Sigma_{\eta}$ is too low to define two different regions within the top dimensional simplices. If $d = 3$, though, $\Sigma_{\eta}$ splits every top dimensional simplex and has been shown to represent Heegaard surfaces of the three-dimensional PL-manifold $K(\cal G)$ \cite{Ryan:2011qm}; we will discuss this further in section~\ref{sec:jackets-heeg}.
It is evident that $\cal J$ and $\cal G$ have the same connectivity. We note here that the number of independent jackets is $d!/2$.
We define the {\textit {Euler characteristic}} of the jackets as $\chi ({\cal J}) = 2- 2 g_{\cal J} = \vert {\cal V}_{\cal J}\vert - \vert {\cal E}_{\cal J}\vert + \vert {\cal F}_{\cal J}\vert$, where $g_{\cal J}$ is the genus of the jacket and corresponds to the genus of $\Sigma_{\eta}$. Note that we only define jackets for the closed colored graphs here.
We also remark that jackets are also bipartite reflecting the definition above, and therefore represent orientable surfaces.
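Again building on the sketches above, the genus of a jacket follows directly from counting its bicolored cycles; a minimal Python illustration reads:
\begin{verbatim}
def jacket_genus(G, eta):
    # genus of the jacket J_eta: keep all nodes and lines of G but only
    # the bicolored cycles {eta_i, eta_{i+1}}; then 2 - 2g = V - E + F
    k = len(eta)                          # k = d + 1
    n_faces = sum(G.bicolored_cycles(eta[i], eta[(i + 1) % k])
                  for i in range(k))
    chi = 2 * G.n - k * G.n + n_faces     # V - E + F
    return (2 - chi) // 2

# every jacket of the elementary melon is a sphere:
assert jacket_genus(melon, (0, 1, 2, 3)) == 0
\end{verbatim}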
\begin{figure}[h]
\begin{minipage}[t]{0.9\textwidth}
\centering
\def0.75\columnwidth{0.9\columnwidth}
\centering
\includegraphics[scale=.2]{bubblesjackets.png}
\caption{
We show in the top row the colored representations of the elementary melon ${\cal G}$ of the rank-$3$ bipartite colored tensor model and its bubbles ${\cal B}^{\hat{0}}$, ${\cal B}^{\hat{1}}$, ${\cal B}^{\hat{2}}$, and ${\cal B}^{\hat{3}}$, from left to right.
In the middle row, we show the stranded representations of the same objects as in the top row.
In the bottom row, we show, in the stranded representation, the jackets of the elementary melon of the rank-$3$ bipartite colored tensor model.
}
\label{fig:bubblesjackets}
\end{minipage}
\end{figure}
\paragraph{Gurau degree.}
From a tensor model perspective, jackets play a crucial role in the large $N$ expansion of colored tensor models, as they define the so-called Gurau degree, which is the parameter that governs the large $N$ expansion. For completeness, we introduce the Gurau degree of a graph $\cal G$ as follows:
\begin{definition}
Given a colored graph $\cal G$ and the set of its jackets, we define a combinatorial invariant, called the {\textit {Gurau degree}}, as the sum of the genera of all jackets of $\cal G$:
\begin{equation}
\omega({\cal G}) = \sum_{\cal J} g_{\cal J}.
\end{equation}
\end{definition}
It is easy to see that $\omega$ is a non-negative integer.
A remarkable feature of the Gurau degree is that if $\omega =0$, then $K({\cal G})$ is a topological sphere, although the converse is not always true. While in $d=2$ the degree equals the genus of the triangulation dual to $\cal G$, it is not a topological invariant for $d > 2$. However, it is an important quantity in tensor models, as the classification of graphs organized by the Gurau degree allows for a $1/N$ expansion, where $N$ is the size of the tensors, just like the $1/N$ expansion of matrix models according to the genus.
We defer a more detailed discussion on the large $N$ expansion of colored tensor models to other literature \cite{GuraulargeN}.
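A minimal Python sketch of this quantity, reusing \texttt{jacket\_genus} above and normalizing cyclic permutations by fixing $\eta_0=0$ and discarding reversals, reads:
\begin{verbatim}
import itertools

def gurau_degree(G):
    # sum of jacket genera over the d!/2 independent jackets; a cyclic
    # permutation and its reversal label the same jacket
    omega = 0
    for perm in itertools.permutations(range(1, G.d + 1)):
        if perm > perm[::-1]:
            continue  # skip the reversed duplicate
        omega += jacket_genus(G, (0,) + perm)
    return omega

print(gurau_degree(melon))  # 0: the melon triangulates the sphere S^3
\end{verbatim}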
\section{Heegaard splittings of $3$-manifolds} \label{sec:heegaardsplitting}
In this section we introduce some of the concepts that are pedagogical to understanding trisections and to which we will refer often in later sections of the paper, namely handle decompositions and Heegaard splittings. We will begin by defining such constructions for objects in the TOP category (specifically for three-dimensional topological manifolds in the case of Heegaard splittings), and we will restrict later to the PL category, which is the main focus of this work.
\subsection{Attaching handles}
\label{sec:attachinghandles}
A handle decomposition of a closed and connected topological $d$-manifold $M$ is a prescription for the construction of $M$ by subsequently attaching handles of higher index.
We can define an $i$-handle in $d$ dimensions as a topological $d$-ball ${{\rm{D}}}^d$, parametrized as ${\rm{D}}^i\times{\rm{D}}^{d-i}$, which is glued to a manifold $K$ along ${\rm{S}}^{i-1}\times{\rm{D}}^{d-i}$, i.e., there exists an orientation reversing homeomorphism from ${\rm{S}}^{i-1}\times{\rm{D}}^{d-i}$ to a subset of $\partial K$. An $i$-handle can therefore be viewed as the thickening of an $i$-dimensional ball (which we call its \textit{spine}); we will refer to the boundary of this ball as the \textit{attaching sphere} of the handle.
A $(d-i)$-ball intersecting the spine transversally will be called a compression disc, and its intersection with the boundary of the handle will be referred to as the \textit{belt sphere}. Note that, unless the handle decomposition of a manifold includes at least one top dimensional handle, the result will always have a boundary.
\begin{figure}[h]
\begin{minipage}[t]{0.9\textwidth}
\centering
\def0.75\columnwidth{0.9\columnwidth}
\centering
\includegraphics[scale=.24]{handleanotomy.png}
\caption{Handles in three dimensions.
The figure shows (from left to right) a three-dimensional $0$-handle (${\rm{D}}^3$), a $1$-handle (${\rm{D}}^1 \times {\rm{D}}^2$ glued along ${\rm{S}}^0 \times {\rm{D}}^2$), a $2$-handle (${\rm{D}}^2 \times {\rm{D}}^1$ glued along ${\rm{S}}^1 \times {\rm{D}}^1$), and a $3$-handle (${\rm{D}}^3$ glued along ${\rm{S}}^2$).
The gluing surfaces are colored in brown.
For the $0$-handle the spine is a point, for the $1$-handle the spine is a line, and for the $2$-handle the spine is a disc, all colored in solid black.
We also show in red the belt spheres of the $1$-handle and the $2$-handle.
Lastly, we also illustrate how $1$-handles and $2$-handles attach to a $0$-handle at the gluing regions, which are colored in brown.
}
\label{fig:handleanotomy}
\end{minipage}
\end{figure}
\begin{definition}\label{def:handlebody}
A \textbf{handlebody} $H$ (sometimes referred to as \textbf{1-handlebody}) is a manifold whose handle decomposition contains only a $0$-handle and $1$-handles. The genus $g$ of $H$ can be defined as the number of $1$-handles in its decomposition.
\end{definition}
Note that, if $H$ is three-dimensional, then $g$ equals the genus of $\partial H$. Moreover, a manifold is a handlebody iff it collapses to a one-dimensional spine.
\begin{definition}\label{def:heegSplitt}
Let $H_1$ and $H_2$ be two three-dimensional handlebodies of genus $g$ and let $f$ be an orientation reversing homeomorphism from $\partial H_1$ to $\partial H_2$. We call $(H_1, H_2, f)$ a \textbf{Heegaard splitting} of the 3-manifold $M$ if
\begin{equation}\label{eq:heegSplit}
M = H_1 \cup_f H_2\,.
\end{equation}
The common boundary $\Sigma = \partial H_1 = \partial H_2$ is then called a \textbf{Heegaard surface}.
\end{definition}
From now on, with a slight abuse of notation and for the sake of clarity, we will represent a Heegaard splitting by the triple $M=(H_1, H_2, \Sigma)$, understanding that the identification $\Sigma = \partial H_1 = \partial H_2$ is provided by the homeomorphism $f$.
A Heegaard splitting allows us to represent a closed and compact 3-manifold\footnote{
In the present manuscript we focus on closed and orientable manifolds, nevertheless the definition of Heegaard splitting applies to a wider class of manifolds. In particular, we point out that in the case of non-orientable $3$-manifold, the Heegaard surface is non-orientable as well \cite{Rubinstein:1978}. Moreover, the definition of Heegaard splitting can be extended to manifold with boundary making use of compression bodies instead of handlebodies \cite{Meier:2016}. } $M$ via a surface and two sets of closed lines on the surface representing the homotopically inequivalent belt spheres of each handlebody. These curves, namely $\alpha$- and $\beta$-curves, encode the information on how $H_1$ and $H_2$ are glued to their boundaries. We refer to $\alpha$- and $\beta$- curves collectively as
\textit{attaching curves}.
The representation we just described is called a \textit{Heegaard diagram} for $M$.
It is important to point out that cutting $\Sigma$ along the $\alpha$-curves or along the $\beta$-curves never leads to a disconnected surface; instead, we obtain a $2$-sphere from which an even number of discs (two per each curve) have been removed. See fig.~\ref{fig:cutting}.
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.65\columnwidth}
\centering
\includegraphics[scale=0.4]{heegaarddiagramS3.pdf}
\caption{Heegaard diagrams for ${\rm{S}}^3$.
The picture shows two Heegaard diagrams (out of infinitely many with arbitrary genus $g$) for the sphere ${\rm{S}}^3$: for minimum genus $g=0$ (on the left), and for $g=1$ (on the right).
The diagram with a Heegaard surface of $g=0$ (${\rm{S}}^2$) does not have any attaching curves.
The $\alpha$- and $\beta$-curves on the Heegaard surface of $g=1$ (${\rm{S}}^1 \times {\rm{S}}^1$) are shown in red and blue.
The toric Heegaard surface in the latter is the common boundary of two solid tori (${\rm{D}}^2 \times {\rm{S}}^1$): we can view them such that inside this toric Heegaard surface, there is one solid torus, and there is another one outside.
In particular, we can view the diagram as embedded in $\mathbb{R}^3$ plus a point at infinity (therefore in a space homeomorphic to ${\rm{S}}^3$).
The outside solid torus is specified by the blue $\beta$-curve which is the boundary of a horizontally lying compression disc.
Its spine would circle around the torus intersecting this compression disc transversally.
Note that if one views the blue curve as the attaching sphere of a $2$-handle, the resulting manifold would be a topological ball ${\rm{D}}^3$, which can be easily capped-off to generate ${\rm{S}}^3$.}
\label{fig:heegaardsiagramS3}
\includegraphics[scale=0.4]{cutting.pdf}
\caption{Cutting along attaching curves. An example of viable attaching curves (top) and a non-viable one (bottom). Note that cutting along the latter separates the would-be Heegaard surface into two connected components.
}
\label{fig:cutting}
\end{minipage}
\end{figure}
We should point out the symmetry between $i$-handles and $(d-i)$-handles in $d$ dimensions. Since $\partial({\rm{D}}^i\times{\rm{D}}^{d-i}) = ({\rm{S}}^{i-1}\times{\rm{D}}^{d-i})\cup({\rm{D}}^{i}\times{\rm{S}}^{d-i-1})$, the difference between the two types of handles is which portion of the handle's boundary will be glued onto a manifold and which part will remain for other handles to be glued on. In particular, the $1$-handles and $0$-handles of $H_2$ in \eqref{eq:heegSplit} glue onto $H_1$ as $2$-handles and $3$-handles respectively.
Finally, we point out that a Heegaard splitting of a 3-manifold is not unique, nevertheless two splittings of the same manifold (and the respective Heegaard diagrams), are always connected by a finite sequence of moves, called \textit{Heegaard moves}, consisting in:
\begin{itemize}
\item handle slides,
\item insertion/removal of topologically trivial couples of $1$-handle and $2$-handle (i.e. glued in such a way that together they form a $3$-ball ${\rm{D}}^3$).
\end{itemize}
\begin{definition}
\label{def:heeg-genus}
Given a 3-manifold $M$, the minimal genus over all the possible Heegaard surfaces is a topological invariant. We call this number \textbf{Heegaard genus}.
\end{definition}
\subsection{Connected sum and Heegaard splittings}
\label{sec:connected-sum}
The \textit{connected sum} $M\,\sharp\, N$ of two $d$-manifolds $M$ and $ N$ is constructed by removing a topological $d$-ball $D^d$ from their interior and gluing $M$ and $N$ by identifying their boundaries (homeomorphic to $S^{d-1}$). If $M$ and $N$ are both oriented, there is a unique connected sum constructed through an orientation reversing map between the boundaries after the removal of the $d$-balls, and the resulting manifold is unique up to homeomorphisms.
We define the \textit{boundary-connected sum} of two $d$-manifolds with boundaries, $M$ and $N$, as the manifold $ M\,\natural\,N$ obtained by identifying a $(d-1)$-ball on $\partial M$ with one on $\partial N$; its boundary is then the connected sum $\partial M\,\sharp\,\partial N$.
Note that the boundary connected sum of handlebodies $H_1$ and $H_2$ is a handlebody itself.
The spine of $H_1\,\natural\,H_2$ can be represented by joining the two spines through a line or a point\footnote{The line connecting the two spines does not represent any handle; rather, it is the identification of two discs on the boundaries of the two handlebodies and, therefore, can be contracted to a point. Nevertheless, it is useful for the moment to consider it as a specification of the way the boundary-connected sum is performed.}.
A question that naturally arises is: given two $3$-manifolds $M$ and $N$, is there a way to represent a Heegaard splitting of $M\,\sharp\, N$ in terms of Heegaard splittings $M=\{H_1, H_2, \Sigma_{M}\}$ and $N=\{K_1, K_2, \Sigma_{N}\}$?
To answer this question, we consider
a $3$-ball ${\rm{D}}_M$ (resp. ${\rm{D}}_N$) intersecting $\Sigma_M$ ($\Sigma_N$) transversally
in one $2$-ball.
Since the result is unique up to homeomorphism, we can choose the ball to be removed as better suits us.
Since the intersection of the $3$-ball with each element of the splittings is a ball of the appropriate dimension, the connected sum of $M\,\sharp\,N$
performed removing ${\rm{D}}_M$ and ${\rm{D}}_N$
will naturally give rise to a Heegaard splitting of the form $\{H_1\,\natural\,K_1,H_2\,\natural\,K_2,\Sigma_M\,\sharp\,\Sigma_N\}$.
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.5\columnwidth}
\centering
\includegraphics[scale=.15]{connectsum.png}
\caption{Connected sum. We represent here the connected sum of $3$-manifolds via their Heegaard diagrams (Lens space $l(1, 1)$ on the left, ${\rm{S}}^1\times{\rm{S}}^2$ on the right). The picture shows the balls to be removed from the manifolds and how they intersect the Heegaard surfaces. Note that this corresponds to the boundary-connected sum of the handlebodies. We show the spine of the handlebodies as a thick solid black line, and the attaching curves in red and in green. The arrows along the circles on the Heegaard surface show the reversed orientation.
}
\label{fig:connectsum}
\end{minipage}
\end{figure}
A few comments are in order.
Firstly, we remark that the Heegaard splitting of closed manifolds is symmetric with respect to the two handlebodies. By this we mean that we can differentiate $H_1$ and $H_2$ through labels induced by the construction of the splitting, but ultimately their role (and therefore the role of $\alpha$-curves and $\beta$-curves) can be interchanged. For example, if we have in mind a handle decomposition of $M$ we can say that $H_1$ is given by the set of handles of index $i\leq 1$ while $H_2$ is given by the set of handles with $i\geq 2$ but, as we explained above, these characterizations can be easily switched for three-dimensional manifolds upon inverting the gluing order of the handles. If we induce the Heegaard splitting via a self-indexing Morse function $f$ via $f^{-1}(3/2)$, the role of the handlebodies can be switched upon sending $f$ to $-f+3$. In agreement with this feature of Heegaard splittings, we notice that $H_1$ and $H_2$ induce, as submanifolds of $M$, an opposite orientation of $\Sigma_M$. This might create an ambiguity in performing the connected sum $M\,\sharp\,N$ through the Heegaard splittings of $M$ and $N$ since reversing the orientation of one of the two Heegaard surfaces corresponds to a different boundary-connected sum of the handlebodies involved in the construction.
This ambiguity reflects the fact that the connected sum is unique only after specifying the orientation of the manifolds involved\footnote{An example of connected sum between three-dimensional manifolds in which reversing the orientation of one of the manifolds involved changes the result is $l(3,1)\,\sharp\,l(3,1)$, which is not homeomorphic to $l(3,1)\,\sharp\,\overline{l(3,1)}$, where $\overline{l(3,1)}$ represents $l(3,1)$ with the opposite orientation. A similar feature happens in four dimensions with the two possible connected sums of $\mathbb{CP}^2$ with itself.}.
Ultimately, a choice of $\alpha$- and $\beta$-curves for the two diagrams corresponds to a choice of relative orientation for the two manifolds and specifies a connected sum constructed such that the set of $\alpha$-curves in $M\,\sharp\,N$ will be the union of the sets of $\alpha$-curves in $M$ and $\alpha$-curves in $N$ and similarly for the $\beta$-curves.
Secondly, we point out that the choice of a disc to be removed from each Heegaard surface during the connected sum operation is irrelevant, provided it does not intersect any attaching curve.
To convince oneself, it is sufficient to remember that cutting along all the $\alpha$-curves we obtain a pinched sphere on which any discs are equivalent, and similarly for the $\beta$-curves.
\subsection{Jackets as Heegaard surfaces}
\label{sec:jackets-heeg}
Turning our attention to objects in the PL category, in particular to PL $3$-manifolds encoded in colored graphs, one might wonder whether there exists a natural formulation of Heegaard splittings in terms of combinatorial objects.
In \cite{Ryan:2011qm}, it is shown that the Riemann surfaces corresponding to the jackets of a rank-$3$ colored tensor model are Heegaard surfaces, and that if the corresponding triangulation is a manifold, then the triple $(K({\cal J}^{(ij, {\widehat{ij}})}), H^{(ij)}, H^{({\widehat{ij}})})$ is a Heegaard splitting of the triangulation. Although the complex structure of the Riemann surfaces studied in \cite{Ryan:2011qm} was merely a consequence of the field content of the model examined, the Heegaard structure is purely combinatorial. In fact, this identification was already known in the crystallization theory literature, and led to the formulation of the concept of regular genus \cite{Gagliardi81}. Here, we review such a construction, which will be of great importance in the following.
\begin{figure}[h]
\begin{minipage}[t]{0.7\textwidth}
\centering
\def0.75\columnwidth{0.5\columnwidth}
\includegraphics[scale=.12]{tetrahedronmorese.png}
\caption{Mapping of a $3$-simplex $\Delta^{(3)}$. $f$ is a map from a $d$-simplex
$\Delta^{(d)}$ to a $(d-2)$-simplex
$\Delta^{(d-2)}$; here, it is shown for $d=3$. As we will explain later, the $0$-skeleton of the first barycentric
subdivision of
$\Delta^{(d-2)}$ minus the $0$-skeleton of
$\Delta^{(d-2)}$ itself defines the splitting, and its preimage represents, in this case, a square in the Heegaard surface. The $0$-skeleton of
$\Delta^{(d-2)}$ defines the spine of the handlebodies. By linearly extending this identification we can reconstruct the entire tetrahedron.}
\label{fig:tetrahedronMorse}
\end{minipage}
\end{figure}
Let us consider a three-dimensional connected orientable closed manifold $M$ dual to a rank-$3$ colored tensor model graph $\cal G$, as introduced in section~\ref{sec:tensormodels}: $M=K({\cal G})$.
For every $3$-simplex $\Delta^{(3)} \in {\cal T}$, we consider a function $f$ mapping $\Delta^{(3)}$ onto $[0, 1]\subset\mathbb{R}$ as in fig.~\ref{fig:tetrahedronMorse}.
We recall that in every $\Delta^{(3)}$, each edge is uniquely defined by a pair of colors $\{i, j\}$, where $i, j \in \{0,1,2,3\}$.
We can construct $f$ such that the preimage of the points $\{\{0\} , \{1\}\}$ under $f$ identifies everywhere in ${\cal T}$
two non-intersecting edges of given colors $f^{-1}(0)=\{i, j\}$, $f^{-1}(1)=\{k, l\}$, $i, j, k, l\in\{0, 1, 2, 3\}$,
$i \ne j \ne k \ne l$, while the preimage of any point in $(0, 1)$ gives us a square cross section of each $\Delta^{(3)}$.
We can glue these squares via their boundaries according to the colors, obtaining a surface embedded in $M$.
The surface $\Sigma$ constructed in this way is a realization of a quadrangulation represented by one of the jackets ${\cal J}_{\{i, j\}\{k, l\}}$ of $\cal G$ and is dual to the corresponding matrix model graph obtained by removing the strands $\{i, j\}$ and $\{k, l\}$
\footnote{ Here we employ a slightly different notation for jackets with respect to the one introduced in section~\ref{sec:topcolgraph}. Notice that, if $d=3$, the set of bicolored cycles in the jacket is lacking only two elements from the set of bicolored cycles of $\cal G$. Thus, by writing ${\cal J}_{\{i, j\}\{k,l\}}$ we mean that $\{i, j\}\notin\{\{\eta_i, \eta_{i+1}\}\forall i\in \mathbb{Z}_4\}$ and similarly for $\{k,l\}$.
This notation is especially convenient in order to understand jackets in terms of Heegaard splittings.
}.
Since the graph $\cal G$ is closed, bipartite and connected, so is $\cal J$.
The surface $\Sigma$ therefore splits $M$ into two manifolds $H_0$ and $H_1$ with their common boundary being the surface $\Sigma$ itself. It is easy to notice that the spine of each $H_i$ is one-dimensional. In fact, it is given by the set of edges $f^{-1}(i)$ for $i\in\{0, 1\}$. Thus, $H_0$ and $H_1$ are handlebodies and a jacket $\cal J$ identifies a Heegaard surface $\Sigma$.
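In the combinatorial encoding sketched in section~\ref{sec:tensormodels}, this construction amounts to selecting the jacket lacking the two bicolored cycles labeled by the spine color pairs; a minimal Python illustration, reusing the sketches above, reads:
\begin{verbatim}
def heegaard_genus_from_split(G, spine0, spine1):
    # genus of the Heegaard surface dual to the jacket J_{spine0,spine1}
    # of a rank-3 ColoredGraph: it keeps exactly the bicolored cycles
    # NOT labeled by the two spine color pairs
    (i, j), (k, l) = spine0, spine1
    return jacket_genus(G, (i, k, j, l))

# splitting of S^3 = K(melon) along the spherical jacket J_{{0,1},{2,3}}:
print(heegaard_genus_from_split(melon, (0, 1), (2, 3)))  # 0
\end{verbatim}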
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.4\columnwidth}
\centering
\includegraphics[scale=.12]{steeringWheelTriangulation.png}
\caption{A compression disc, an attaching curve and a spine of a three-dimensional handlebody. The central green line is a single edge in the triangulation, shared by six $3$-simplices, which is to be identified as part of the spine of a three-dimensional handlebody. The rectangular faces (one of them colored in yellow) of the hexagonal prism are part of the Heegaard surface $\Sigma$. A compression disc is depicted in pink (a hexagon) and its intersection with the Heegaard surface is an attaching curve. As illustrated above, a segment of an attaching curve can be viewed as a projection on $\Sigma$ of the edge opposite to the spine ($e$) in each $3$-simplex.}
\label{fig:triangulated-compression}
\end{minipage}
\end{figure}
Once we identified a Heegaard splitting of $M$ in terms of combinatorial objects (i.e., via jackets) as described above, it is natural to wonder how the attaching curves arise.
As we can see from fig.~\ref{fig:triangulated-compression}, for every edge $e$ in the spine of $H_i$ we can construct a compression disc in the shape of a polygon. The intersection of the compression disc with each of the tetrahedra sharing $e$ is a triangle (fig.~\ref{fig:triangulated-compression})
and the disc is therefore a polygon whose sides are as many as the number of the $3$-simplices sharing $e$.
Importantly, we see that the perimeter of the polygon is the projection of the edges opposite to the spine on the Heegaard surface.
This implies that, given the quadrangulation of $\Sigma$ defined by a jacket $\cal J$, we can draw the attaching curves by connecting the opposite edges of each square.
A remark is in order.
The construction of attaching curves drawn on a Heegaard surface we described so far is, in a way, overcomplete since it provides us with a redundant description; we will end up having many copies of the same curve (i.e. homotopically equivalent ones) and furthermore, curves that are homotopic to a point (which therefore should not be considered since they describe the attaching of a sphere).
It is sufficient to consider only one representative of each equivalence class\footnote{We stress that an $\alpha$-curve and a $\beta$-curve can be homotopically equivalent and that the operation of modding out the equivalence class should be performed in either set independently.}, nevertheless, when constructing a trisection later on, a bit of care will be needed to convince ourselves
that such freedom does not imply any ambiguity in the construction.
For completeness, we compute here the genus of the Heegaard surface obtained with the procedure described above. Since $\Sigma = K({\cal J})$, we have that the genus $g_\Sigma$ is given by:
\begin{equation}
g_\Sigma = \frac{2-\chi_{\cal J}}{2} = \frac{1}{2}\left(2-V_{\cal J}+E_{\cal J}-F_{\cal J}\right)\,,
\end{equation}
where $\chi_{\cal J}$ is the Euler characteristic of $K({\cal J})$ and $V_{\cal J}$, $E_{\cal J}$ and $F_{\cal J}$ are, respectively, the numbers of nodes, lines and bicolored cycles in $\cal J$. Since the nodes and the lines in the jacket are the same as those in $\cal G$, and they satisfy $2 E_{\cal G} = 4 V_{\cal G}$, we can further write:
\begin{equation}
\label{eq:jacket-genus-reduced}
g_\Sigma = 1 + \frac{1}{2}V_{\cal G} - \frac{1}{2}F_{\cal J} \geq 0\,.
\end{equation}
\subsection{More Heegaard splittings in triangulable manifolds}
\label{sec:more-heeg-split}
\begin{figure}[h]
\begin{minipage}[t]{0.7\textwidth}
\centering
\def0.75\columnwidth{0.5\columnwidth}
\centering
\includegraphics[scale=0.5]{anotherheegaardsplit.pdf}
\caption{A schematic representation of the two handlebodies obtained for a single tetrahedron using the $1$-skeletons of $\cal T$ and $\cal T^*$.
}
\label{fig:1-skeleton-handlebodies}
\end{minipage}
\end{figure}
For later convenience, we now illustrate a different construction of Heegaard splittings, whose technique we will borrow later on.
Consider a triangulation $\cal T$ of a PL manifold $M$ and its dual cellular decomposition $\cal T^*$.
The $1$-skeletons of $\cal T$ and $\cal T^*$ are perfect candidates to be identified as spines of $H_1$ and $H_2$.
In fact, $H_1$ and $H_2$ are nothing but tubular neighborhoods of these two $1$-skeletons and, providing an orientation-reversing homeomorphism between their boundaries, we can identify the Heegaard surface (see fig.~\ref{fig:1-skeleton-handlebodies}).
Note that if $\cal T$ is the triangulation associated with a colored graph $\cal G$, the 1-skeleton of $\cal T^*$ is the graph itself.
The Heegaard genus $g$ is then given by
\begin{equation}
\label{eq:trisec-genus-dual-graphs}
g = E_{{\cal T}} - V_{{\cal T}} +1 = E_{{\cal T}^*} - V_{{\cal T}^*} +1
= V_{{\cal T}^*}+1
\,,
\end{equation}
where $E_{{\cal T}}$ ($E_{{\cal T}^*}$) and $V_{{\cal T}}$ ($V_{{\cal T}^*}$) are the number of edges and vertices in the $1$-skeletons of ${\cal T}$ (${\cal T}^*$), and in the last equality we used $E_{{\cal T}^*} = 2V_{{\cal T}^*}$, which holds since every node of the dual $1$-skeleton is $4$-valent and every line is shared by two nodes.
The genus, then, corresponds to the number of independent loops of each graph, i.e., the dimension of the first homology groups of the $1$-skeletons.
Note that, by definition, $V_{{\cal T}^*}$ corresponds to the number of tetrahedra in ${\cal T}$, which we denote by $F_{\Delta^{(3)}_{{\cal T}}}$, while $E_{{\cal T}^*}$ is the number of triangles in ${\cal T}$, which we denote by $F_{\Delta^{(2)}_{{\cal T}}}$.
Therefore, equating the two expressions for $g$ in eq.~\eqref{eq:trisec-genus-dual-graphs} gives $E_{{\cal T}} - V_{{\cal T}} = F_{\Delta^{(2)}_{{\cal T}}} - F_{\Delta^{(3)}_{{\cal T}}}$, i.e., the following identity for the Euler characteristic of $M$:
\begin{equation}
\chi(M) = V_{{\cal T}} - E_{{\cal T}} + F_{\Delta^{(2)}_{{\cal T}}} - F_{\Delta^{(3)}_{{\cal T}}}= 0\,,
\end{equation}
which is always true for odd-dimensional manifolds due to Poincar\'e duality \cite{Nakahara}.
Finally, if we compare the present construction with the one obtained in sec.~\ref{sec:jackets-heeg} we can find from eq.~\eqref{eq:jacket-genus-reduced} (and using the fact that $\cal G$ is the 1-skeleton of ${\cal T}^*$):
\begin{equation}
\begin{split}
g_\Sigma - g &= -\frac{1}{2}\left(V_{{\cal T}^*}+F_{\cal J}\right) < 0\,.
\end{split}
\end{equation}
Therefore $g_\Sigma < g$, which implies that this way of constructing a Heegaard splitting is less advantageous, since the relevant topological invariant is the minimum genus over all Heegaard surfaces.
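To make this concrete (our own numerical check, continuing the melon example above): for the elementary three-dimensional melon one has $V_{{\cal T}^*}=2$ and $F_{\cal J}=4$, so that
\begin{equation}
g_\Sigma - g = -\frac{1}{2}\left(2+4\right) = -3\,,
\end{equation}
matching $g_\Sigma = 0$ from the jacket construction and $g = E_{{\cal T}^*} - V_{{\cal T}^*} + 1 = 4 - 2 + 1 = 3$ from the $1$-skeletons.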
\section{Trisections}
\label{sec:trisections}
A construction analogous to a Heegaard splitting can be performed in four dimensions; it is called a \textit{trisection} \cite{GayKirby}.
Note that one can also perform trisections of non-orientable manifolds \cite{SpreerTillmann:2015}; in this paper, however, we restrict ourselves to orientable manifolds.
Again, we start by working within the TOP category. We will restrict to objects in the PL category later in the paper.
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.7\columnwidth}
\begin{subfigure}{0.4\textwidth}
\begin{tikzpicture}
\draw (0, 0) circle (2.5cm);
\filldraw (0, 0) circle (2pt);
\draw (0:0) -- (90:2.5);
\draw (0:0) -- (-30:2.5);
\draw (0:0) -- (210:2.5);
\node at (150:1.25) {${X}_1$};
\node at (30:1.25) {${X}_2$};
\node at (-90:1.25) {${X}_3$};
\node[right] at (90:1.25) {${H}_{12}$};
\node[below] at (-30:1.25) {${H}_{23}$};
\node[left] at (210:1.25) {${H}_{13}$};
\node[below] at (0,0) {$\Sigma$};
\end{tikzpicture}
\caption{}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\begin{minipage}[t]{1\textwidth}
\centering
\def0.75\columnwidth{0.7\columnwidth}
\centering
\includegraphics[scale=.15]{trisectionlowerdim.png}
\end{minipage}
\caption{}
\label{fig:trisec-3d-diagram}
\end{subfigure}
\caption{(a) Schematic representation of the trisection of a $4$-manifold $M$.
$X_1$, $X_2$, and $X_3$ are four-dimensional submanifolds whose boundaries are $H_{12} \cup H_{13}$, $H_{12} \cup H_{23}$, and $H_{13} \cup H_{23}$ respectively.
$H_{12}$, $H_{13}$, and $H_{23}$ are three-dimensional handlebodies and $\Sigma$ is a Heegaard surface for the pairs $\{H_{12}, H_{13}\}$, $\{H_{12}, H_{23}\}$, and $\{H_{13}, H_{23}\}$.
$\Sigma$ is called the central surface of a trisection.
(b) Lower-dimensional (three-dimensional) representation of the trisected manifold. $H_{12}$ is represented by the red half of ${\rm{S}}^2$, $H_{13}$ by the green half, and $H_{23}$ by the blue surface. The inside of ${\rm{S}}^2$, namely the ${\rm{D}}^3$ bounded by $H_{12}$ and $H_{13}$, represents $X_1$, whereas the outside space above (below) $H_{12}$ ($H_{13}$) and $H_{23}$ represents $X_2$ ($X_3$). The yellow circle represents $\Sigma$. This representation of a trisection is very crude and, strictly speaking, wrong (e.g., keep in mind that $M$ ought to be closed); all the submanifolds and the given manifold itself are in principle general, but in this representation they are depicted in a very special way.
}
\label{fig:trisection}
\end{minipage}
\end{figure}
\begin{definition}\label{def:trisection}
Let $M$ be a closed, orientable, connected 4-manifold. A trisection of $M$ is a collection of three submanifolds $X_1, X_2, X_3 \subset M$ such that:
\begin{itemize}
\item each $X_i$ is a four-dimensional handlebody of genus $g_i$,
\item the handlebodies have pairwise disjoint interiors, with $\partial X_i\supset (X_i\cap X_j)\subset\partial X_j$, and $M= \cup_i X_i$,
\item the intersection of any two handlebodies $X_i\cap X_j = H_{ij}$ is a three-dimensional handlebody,
\item the intersection of all the four-dimensional handlebodies $ X_1 \cap X_2 \cap X_3$ is a closed connected surface $\Sigma$ called \textbf{central surface},
\end{itemize}
for $i, j\in\{1, 2, 3\}$.
\end{definition}
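The simplest example (standard, and recalled here only for orientation) is the genus-zero trisection of the $4$-sphere,
\begin{equation}
{\rm{S}}^4 = X_1\cup X_2\cup X_3\,,\qquad X_i\cong {\rm{D}}^4\,,\quad H_{ij}\cong {\rm{D}}^3\,,\quad \Sigma\cong{\rm{S}}^2\,,
\end{equation}
where each pair $H_{ij}\cup H_{jk}$ reproduces the genus-zero Heegaard splitting of $\partial X_j \cong {\rm{S}}^3$ into two balls along the equatorial $\Sigma\cong{\rm{S}}^2$.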
Note that any two of the three-dimensional handlebodies, together with the central surface, $\{H_{ij}, H_{jk}, \Sigma\}$, form a Heegaard splitting of $\partial X_j$.
In four dimensions, we have the following \textit{extending theorem} \cite{montesinos}:
\begin{theorem}\label{th:extending-th}
Given a four-dimensional handlebody $H$ of genus $g$ and a homeomorphism $\phi:\partial H\rightarrow\partial H$, there exists a unique homeomorphism $\Phi:H\rightarrow H$ which extends $\phi$ to the interior of $H$.
\end{theorem}
It implies that closed $4$-manifolds are determined by their handles of index $i\leq 2$ and that there is a unique cap-off determining the remaining $3$- and $4$-handles (recall the symmetric roles of $i$-handles and $(4-i)$-handles in four dimensions). However, in the context of trisections, the extending theorem plays an even bigger role, for it can be applied to each handlebody $X_i$ in definition \ref{def:trisection}. Consequently, a trisection of $M$ is fully determined by the three three-dimensional handlebodies $H_{ij}$ which, in turn, can be represented by means of Heegaard diagrams.
Hence, similarly to the three-dimensional case of Heegaard splittings,
a trisection can be represented with a diagram consisting of the central surface\footnote{From now on we may adopt the term ``central surface'' for both the case of trisections and Heegaard splittings
when a feature is clearly common to the central surface of a trisection and the Heegaard surface of a Heegaard splitting.}
$\Sigma$ and three sets of curves: $\alpha$-curves, $\beta$-curves and $\gamma$-curves (collectively, attaching curves).
These curves are constructed, as before, by means of compression discs and represent the belt spheres of the $1$-handle of each of the three-dimensional handlebodies $H_{ij}$.
A trisection diagram therefore combines the three Heegaard diagrams for $\partial X_i$ into a single diagram. One can thus say that the construction of a trisection, together with the extending theorem, allows us to study four-dimensional topology within a two-dimensional framework.
Again, infinitely many possible trisection diagrams are viable for a given manifold and they are connected by a finite sequence of moves generalizing Heegaard moves.
We therefore have the following:
\begin{definition}
\label{def:trisection-genus}
Given a 4-manifold $M$, the minimal genus over
all the possible central surfaces trisecting $M$ is a topological invariant. We call this number \textbf{trisection genus}.
\end{definition}
We remark that the connected sum of two $4$-manifolds $M=\{H_{12}, H_{23}, H_{13}, \Sigma_M\}$ (defining implicitly the handlebodies $X_1$, $X_2$ and $X_3$) and $N=\{K_{12}, K_{23}, K_{13}, \Sigma_N\}$ (defining $Y_1$, $Y_2$ and $Y_3$) can be constructed in analogy to the three-dimensional case by removing $4$-balls which intersect all the elements of each trisection in balls of the appropriate dimension. The resulting manifold will
support
a trisection of the form $M\,\sharp\,N = \{H_{12}\,\natural\,K_{12}, H_{23}\,\natural\,K_{23}, H_{13}\,\natural\,K_{13}, \Sigma_M\,\sharp\,\Sigma_N\}$ implicitly defining the handlebodies $X_1\,\natural\,Y_1$, $X_2\,\natural\,Y_2$ and $X_3\,\natural\,Y_3$.
\subsection{Stabilization}
\label{sec:stab}
\begin{figure}[h]
\begin{minipage}[t]{0.9\textwidth}
\centering
\def0.75\columnwidth{0.9\columnwidth}
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[scale=.4]{stabilization.pdf}
\caption{}
\label{fig:stabilization-3d-scheme}
\end{subfigure}
\begin{subfigure}{0.55\textwidth}
\begin{minipage}{.2\textwidth}
\flushleft
\includegraphics[scale=.25, left]{trisectedSph1.pdf}
\end{minipage}\hspace{2cm}
\begin{minipage}{.2\textwidth}
\flushright
\includegraphics[scale=.25, right]{trisectedSph2.pdf}
\end{minipage}
\par\medskip
\centering
{\includegraphics[scale=.25]{trisectedSph3.pdf}}
\caption{}
\label{fig:genus-one-trisection-sphere}
\end{subfigure}
\caption{Lower-dimensional (three-dimensional) representation of stabilization in four dimensions. Fig.~\ref{fig:stabilization-3d-scheme} represents the process of adding a four-dimensional $1$-handle to one of the handlebodies preserving the trisection structure. The figure follows the conventions of fig.~\ref{fig:trisec-3d-diagram}. We also show the resulting spine of the four-dimensional handlebody as well as the compression disc for the new handle. Fig.~\ref{fig:genus-one-trisection-sphere} shows the three genus-$1$ trisection diagrams for ${\rm{S}}^4$. Stabilization can be represented at the level of trisection diagrams as the connected sum with one of these three diagrams.}
\label{fig:stabilization}
\end{minipage}
\end{figure}
Both in the context of Heegaard splittings and of trisections there is a move that increases the genus of the central surface by one. It is instructive to illustrate how this can be achieved and to point out small differences between the four-dimensional and three-dimensional cases.
We consider a three dimensional manifold $M^{(3)}$ with a Heegaard diagram of genus $g$, $\Sigma_M$, and the genus $1$ Heegaard diagram of ${\rm{S}}^3$, which we call $T_{\rm{S}}$ (see fig.~\ref{fig:heegaardsiagramS3}).
Since ${\rm{S}}^3$ has trivial topology we have the following identity:
\begin{equation}
M^{(3)}\,\sharp\,{\rm{S}}^3 = M^{(3)}\,.
\end{equation}
As explained in sec.~\ref{sec:connected-sum}, this operation can be represented with the diagram $\Sigma_M\,\sharp\,T_{\rm{S}}$ which has genus $g'=g+1$. We can understand this operation in terms of carving a handle out of one of the two handlebodies in $M^{(3)}$ and adding it to the other one.
For a given $d$-manifold $N$ with boundary, the operation of drilling out a tubular neighborhood of a properly embedded ball ${\rm{D}}^{d-k-1}\subset N$ is equivalent to adding a $k$-handle whose attaching sphere bounds a ball ${\rm{D}}^k$ in $\partial N$. The properly embedded $(d-k-1)$-ball is bounded by the belt sphere of a $(k+1)$-handle which we may add in order to cancel the $k$-handle and to recover $N$. This describes how to increment the genus of the central surface of a Heegaard diagram if we consider the case\footnote{ Note that for $k=1$ we are identifying two discs on the boundary of a handlebody and represent their identification through the spine of the resulting $1$-handle. From this point of view, we can treat the operation of increasing the genus of a handlebody and the connected sum of two handlebodies (see fig.~\ref{fig:connectsum}) on the same footing, with the only difference being whether the considered discs lie on the boundary of the same handlebody or not. Note that in both cases it is sufficient to specify the spine of the new handle in order to recover the full topological information.} $d=3$ and $k=1$; note that a $2$-handle for one handlebody plays the role of a $1$-handle for the other handlebody.
In this way it is clear how we are actually not changing anything in the overall manifold but rather rearranging its handle decomposition.
In four dimensions there exists a similar operation, which takes the name of \textit{stabilization}. The genus-$1$ trisection diagrams of ${\rm{S}}^4$ are shown in fig.~\ref{fig:genus-one-trisection-sphere} and each represents a trisection where two handlebodies $X_i$ and $X_j$ are $4$-balls while the third, $X_k$, has genus $1$. Note that the boundary $\partial X_k$ has the topology of ${\rm{S}}^1\times{\rm{S}}^2$, as can be seen from each diagram by removing the curve circulating around the toroidal direction.
If we consider a $4$-manifold $M^{(4)}$, we can clearly increment the genus of its central surface by considering the connected sum of its trisection diagram with one of the three in fig.~\ref{fig:genus-one-trisection-sphere}. Although this is not within the investigation scope of the present work, we should mention that the stabilization operation allows us to always obtain a trisection where all the four-dimensional handlebodies have the same genus. This type of trisection is referred to as \textit{balanced}. In fact, it is worth noticing that the stabilization operation, although affecting the topology of all the three-dimensional handlebodies $H_{ij}$, only affects one of the four-dimensional ones, while leaving the other two unmodified.
Stabilization too can be understood as a specific carving operation. As before, we identify a ${\rm{D}}^1$ that will constitute the spine of the carved $1$-handle. Since we are going to increase the genus of, say, $X_1$, the $1$-ball will need to be properly embedded in the complement $M^{(4)}\setminus X_1$ (we will carve the handle out of the complement and add it to $X_1$). The central surface simultaneously represents the boundary of all the three-dimensional handlebodies which, therefore, need to have their genus increased as well. Since we are only specifying one $1$-handle, and with simple symmetry considerations, it is easy to guess that the spine shall be a disc ${\rm{D}}^1$ embedded in $H_{23}$, with endpoints on the central surface. Fig.~\ref{fig:stabilization} shows a schematic representation of this procedure following the same conventions of fig.~\ref{fig:trisec-3d-diagram}.
Under such a move, the topology of $X_2$ and $X_3$ remains unaffected. To understand this, it is sufficient to notice that $X_2$ in fig.~\ref{fig:trisec-3d-diagram} (respectively $X_3$) intersects only half of the $1$-handle and the intersection is a $4$-disc intersecting $\partial X_2$ (respectively $\partial X_3$) in ${\rm{D}}^3$ (see fig.~\ref{fig:stabilization}). In other words, carving the $1$-handle leads to two manifolds $X'_2$ and $X'_3$ satisfying:
\begin{equation}
\begin{split}
X'_2\,\natural\,{\rm{D}}^4 = X_2\,,\\
X'_3\,\natural\,{\rm{D}}^4 = X_3\,.
\end{split}
\end{equation}
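To spell out the elementary step behind this conclusion (an addition of ours): for any compact connected $4$-manifold $N$ with non-empty boundary, the boundary-connected sum with a ball is trivial, $N\,\natural\,{\rm{D}}^4\cong N$; hence the relations above immediately give $X'_2\cong X_2$ and $X'_3\cong X_3$.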
Note that the portion of the boundary of the four-dimensional $1$-handle that does not constitute the attaching sphere has the topology of ${\rm{D}}^1\times{\rm{S}}^2$. Upon the following decomposition
\begin{equation}
{\rm{D}}^1\times{\rm{S}}^2 = ({\rm{D}}^1\times {\rm{D}}^2)\cup_{{\rm{D}}^1\times{\rm{S}}^1}({\rm{D}}^1\times{\rm{D}}^2)\,,
\end{equation}
we can understand it as a pair of three-dimensional 1-handles with common boundaries and ``parallel'' spines. Therefore a regular neighborhood of a one-dimensional disc properly embedded in one of the three-dimensional handlebodies intersects all the elements of a trisection without spoiling the construction, but rather defining an alternative trisection for the same manifold.
\subsection{Subdividing $4$-simplices}
\label{sec:cutting-simplices}
We would like to understand trisections from {\it colored triangulations}, i.e., triangulations dual to colored graphs which can be generated by colored tensor models.
This amounts to formulating trisections in purely combinatorial terms.
We will do so by generalizing the three-dimensional Heegaard splittings formulated in colored tensor models \cite{Ryan:2011qm}.
From now on, we therefore restrict to the PL category.
\begin{figure}[h]
\begin{minipage}[t]{0.85\textwidth}
\def0.75\columnwidth{0.85\columnwidth}
\includegraphics[scale=.15]{4simplexmap.png}
\caption{
Illustration of the linear map from $\Delta^{(4)}$ to $\Delta^{(2)}$. The sets $P_i$ partitioning the vertices of the $4$-simplex $\Delta^{(4)}$, as well as their images in the $2$-simplex $\Delta^{(2)}$, are shown. Removing the $0$-skeleton of $\Delta^{(2)}$ from the $0$-skeleton of its first barycentric subdivision provides us with the cubical decomposition. The preimage of $\Delta^{(2)}$ under $f$ splits the $4$-simplex into three four-dimensional pieces, whose boundaries are the three-dimensional blocks ${\cal D}_{\Delta^{(4)}}$, ${\cal Q}_{\Delta^{(4)}}$, and ${\cal R}_{\Delta^{(4)}}$. The latter three three-dimensional blocks meet at one two-dimensional square $s$.
${\cal D}_{\Delta^{(4)}}$ is in blue, ${\cal Q}_{\Delta^{(4)}}$ in red, ${\cal R}_{\Delta^{(4)}}$ in green, and the common surface $s$ in yellow.
$f({\cal D}_{\Delta^{(4)}})=c_{23}$, $f({\cal Q}_{\Delta^{(4)}})=c_{12}$, $f({\cal R}_{\Delta^{(4)}})=c_{13}$, and $f(s)=c_{123}$.
}
\label{fig:4simplexmap}
\end{minipage}
\end{figure}
Following \cite{bell2017}, let us consider a $4$-simplex $\Delta^{(4)}$ and define a partition of its vertices into three sets $P_0$, $P_1$ and $P_2$, such that one vertex belongs to one of the sets and the rest are divided into two pairs. For example, labeling each vertex with the color of its opposite $3$-face, we might have that the vertex $v^{\widehat{0}}$ is assigned to $P_0$, the vertices $v^{\widehat 1}$ and $v^{\widehat 2}$ are assigned to $P_1$ and the vertices $v^{\widehat 3}$ and $v^{\widehat 4}$ to $P_2$.
Given such a partition, any pair of sets $(P_i, P_j)$ is identified with an $n$-dimensional subsimplex on the boundary of $\Delta^{(4)}$, while the third set $P_k$ is identified with the opposite $(3-n)$-dimensional subsimplex, where $i, j, k \in \{0,1,2\}$ are pairwise distinct.
For example, $(P_1 = \{v^{\widehat 1}, v^{\widehat 2}\}, P_2 = \{v^{\widehat 3}, v^{\widehat 4}\})$ and $P_0 = \{v^{\widehat 0}\}$ give a 3-simplex spanned by $v^{\widehat 1}, \; v^{\widehat 2}, \; v^{\widehat 3},\; v^{\widehat 4}$ and a 0-simplex $v^{\widehat 0}$, whereas $(P_0 = \{v^{\widehat 0}\}, P_1 = \{v^{\widehat 1}, v^{\widehat 2}\})$ and $P_2 = \{v^{\widehat 3}, v^{\widehat 4}\}$ give a 2-simplex spanned by $v^{\widehat 0}, \; v^{\widehat 1}, \; v^{\widehat 2}$ and a 1-simplex with endpoints $v^{\widehat 3}$ and $ v^{\widehat 4}$.
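In passing, a simple count (ours, added for concreteness): the number of such partitions is
\begin{equation}
5 \times 3 = 15\,,
\end{equation}
five choices for the isolated vertex times the three ways of grouping the remaining four vertices into two unordered pairs (swapping $P_1$ and $P_2$ merely exchanges the roles of the resulting pieces); this matches the counting of possible trisections given at the end of sec.~\ref{sec:central-surface}.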
Then, we can define a map $f$ from $\Delta^{(4)}$ to $\Delta^{(2)}$ such that each set $P_i$ is sent to one of the three vertices of $\Delta^{(2)}$, and extend it linearly to the interiors of $\Delta^{(4)}$ and $\Delta^{(2)}$.
We proceed by considering the subcomplex spanned by the $0$-skeleton of the first barycentric subdivision of $\Delta^{(2)}$
minus the $0$-skeleton of $\Delta^{(2)}$.
The resulting cubical decomposition of $\Delta^{(2)}$ is shown in fig.~\ref{fig:4simplexmap}.
$\Delta^{(2)}$ is decomposed into three $2$-cubes $c_i$, with $i \in \{1,2,3\}$, whose pairwise intersections result in $1$-cubes $c_{ij}$, all sharing a central $0$-cube $c_{123}$.
The preimage of this construction under $f$ gives us the splitting of $\Delta^{(4)}$ we are looking for.
Notice that the boundary faces of $\Delta^{(2)}$ (each spanned by two vertices) are subdivided into two $1$-cubes. The preimage under $f$ therefore induces splittings of the subsimplices on the boundary of $\Delta^{(4)}$ identified with the pairs $(P_i, P_j)$.
Focusing on the $3$-simplex $\Delta^{(3)}$ spanned by $\{ v^{\widehat 1}, v^{\widehat 2}, v^{\widehat 3}, v^{\widehat 4}\}$,
which sits opposite to $v^{\widehat 0}$,
and considering the partition $P_1 = \{v^{\widehat 1}, v^{\widehat 2}\}$, $P_2 = \{v^{\widehat 3}, v^{\widehat 4}\}$ and $P_0 = \{v^{\widehat 0}\}$,
$\Delta^{(3)}$ is mapped via $f$ to a $1$-simplex of $\Delta^{(2)}$ in precisely the same manner as in
fig.~\ref{fig:tetrahedronMorse}.
The coning of the splitting surface of $\Delta^{(3)}$
with respect to $v^{\widehat 0}$
generates a square prism, which we call ${\cal D}_{\Delta^{(4)}}$, whose image under $f$ is the $1$-cube $c_{23}$.
Similarly, in the two $2$-subsimplices of $\Delta^{(4)}$ defined by
$\{P_0, P_1\}$ and $\{P_0, P_2\}$,
we identify a one-dimensional cross section, which will then be coned toward $P_2$ and $P_1$ respectively.
These conings will generate triangular prisms ${\cal Q}_{\Delta^{(4)}}$ and ${\cal R}_{\Delta^{(4)}}$, whose images are $c_{12}$ and $c_{13}$ in $\Delta^{(2)}$.
The intersection ${\cal Q}_{\Delta^{(4)}} \cap {\cal R}_{\Delta^{(4)}} \cap {\cal D}_{\Delta^{(4)}}$
is a two-dimensional cube\footnote{The bidimensionality of the central square is ensured by the fact that all the pairwise intersections of the three-dimensional blocks are transverse.}.
Fig.~\ref{fig:4simplexmap}
shows such coning operations.
\subsection{Splitting $4$-bubbles}
\label{sec:split4bubbles}
At this point, one would like to induce the above subdivision in every simplex $\sigma$ of a triangulation ${\cal T}$ of a given manifold $M$ and prove the emerging structure of a trisection, namely see that each of the sets ${\cal D}=\bigcup_\sigma {\cal D}_\sigma$, ${\cal Q}=\bigcup_\sigma {\cal Q}_\sigma$, ${\cal R}=\bigcup_\sigma {\cal R}_\sigma$ is connected and homeomorphic to a handlebody. In order to achieve this{\footnote{
Indeed we will have to define a new structure related to $\cal D$, which improves its topological properties in order to obtain a handlebody.
}} we will have to perform a few manipulations. For later reference, we will refer to the attaching curves determined by manipulations of $\cal Q$ (respectively $\cal R$, $\cal D$) as $\alpha$-curves (respectively $\beta$-curves, $\gamma$-curves).
Let us therefore consider a colored triangulation ${\cal T}$ of a $4$-manifold $M$, and a colored graph $\cal G$ dual to ${\cal T}$, i.e., $K({\cal G}) = {\cal T}$ and $\vert K({\cal G})\vert = M$.
If we take seriously a partition of the vertices induced by colors, we soon notice that
the main immediate obstacle is achieving the connectedness of ${\cal Q}$ and ${\cal R}$.
Evidently, the union ${\cal Q} \cup {\cal R}$ consists of disconnected three-dimensional polytopes surrounding those vertices of ${\cal T}$ which belong to the isolated partition set, i.e., the set containing a single vertex per $4$-simplex.
Let us elaborate on the structure of ${\cal Q}$ and ${\cal R}$.
In the triangulation ${\cal T}$, a $4$-bubble ${\cal B}_a^{\widehat{i}}$ identifies a three-dimensional subcomplex which surrounds a vertex $v_a^{\widehat{i}}$.
In particular, $v_a^{\widehat{i}}$ sits opposite to a $3$-face of color $i$ in every $4$-simplex containing it and the triangulation dual to ${\cal B}_a^{\widehat{i}}$, $K({\cal B}_a^{\widehat{i}})$, is PL-homeomorphic to the union of such 3-faces.
Moreover, we point out that such a triangulation, $K({\cal B}_a^{\widehat{i}})$, is also homeomorphic to the link of $v_a^{\widehat{i}}$ which, for the case of $M$
being a manifold, turns out to be a topological $3$-sphere\footnote{Nevertheless, colored tensor models and colored graphs generate in general pseudo-manifolds and, therefore, the topology of $K({\cal B}_a^{\widehat i})$ might turn out to be very different.
We comment on this case in section \ref{sec:pseudo-mfd}.}.
Given the combination of colors defining the $4$-bubble, though, a possibly more accurate way to regard the corresponding triangulation is not as the union of the $3$-faces sitting opposite to $v_a^{\widehat{i}}$, but rather as the union of a set of three-dimensional cross sections parallel to such $3$-faces, which cut the $4$-simplices midway between $v_a^{\widehat{i}}$ and its opposite $3$-faces, namely ${\cal Q}_{\sigma_a} \cup {\cal R}_{\sigma_a}$ in fig.~\ref{fig:doublepentachoron}.
See fig.~\ref{fig:bubble-scheme} for a lower dimensional representation of $K({\cal B}^{\widehat{0}})$.
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.6\columnwidth}
\centering
\includegraphics[scale=.12]{bubbleScheme.png}
\caption{Representation of two components of $K({\cal B}^{\widehat{0}})$ in a three-dimensional complex. In three dimensions, $K({\cal B}^{\widehat{0}})$ is a two-dimensional complex whose edges are shown in grey. In the picture we present two components, $K({\cal B}^{\widehat{0}}_a)$ and $K({\cal B}^{\widehat{0}}_b)$, surrounding $v_a^{\widehat{0}}$ and $v_b^{\widehat{0}}$ respectively.
The same $0$-colored face (shown in red and shared by two $3$-simplices) gives rise to two different building blocks (shown in orange) in $K({\cal B}^{\widehat{0}})$.}
\label{fig:bubble-scheme}
\end{minipage}
\end{figure}
Consequently, given the set $\Delta_a =\{\sigma\; 4\text{-simplex s.t. } v_a^{\widehat{i}}\in \sigma\}$, the $4$-bubble identifies the union
\begin{equation}
K({\cal B}_a^{\widehat{i}}) = \bigcup_{\sigma\in\Delta_a}{\cal Q}_\sigma\cup{\cal R}_\sigma\,.
\end{equation}
For later reference, we call the four-dimensional neighborhood{\footnote{Note that we choose to call this $X_1^a$ as it will be part of one of the trisection four-dimensional handlebodies defined earlier in definition~\ref{def:trisection}.}} of $v_a^{\widehat{0}}$ bounded by
$K({\cal B}_a^{\widehat{0}})$,
$X^a_{1}$
and we define the following unions: ${\cal Q}_a = \bigcup_{\sigma\in\Delta_a}{\cal Q}_\sigma$, ${\cal R}_a = \bigcup_{\sigma\in\Delta_a}{\cal R}_\sigma$.
We pick $0$ as a special color and define a specific partition\footnote{Here we picked the color $0$ to identify the vertex that in every $4$-simplex is ``isolated'' by the partition, nevertheless, we stress that at this level any permutation of the colors would be an equivalent choice.}, i.e.,
$P_0 =\{v^{\widehat{0}}\}$, $P_1 =\{v^{\widehat{1}}\}\cup\{v^{\widehat{2}}\}$ and $P_2 =\{v^{\widehat{3}}\}\cup\{v^{\widehat{4}}\}$.
Then we consider the $4$-bubbles ${\cal B}_a^{\widehat{0}}$ and, in each such 4-bubble, the jacket ${\cal J}_{P_1, P_2}={\cal J}_{\{1, 2\}, \{3, 4\}}$. Combining the constructions described in sec.~\ref{sec:jackets-heeg} and sec.~\ref{sec:cutting-simplices}, we readily obtain the sets ${\cal Q}$ and ${\cal R}$. Nevertheless, each of these sets is disconnected, with as many connected components as there are vertices $v^{\widehat{0}}$ in the triangulation ${\cal T}$.
Recalling how jackets identify Heegaard surfaces for the realizations of $4$-colored graphs, it is easy to see that ${\cal Q}_a$ and ${\cal R}_a$ are the two handlebodies in a Heegaard splitting of a given ${\cal B}_a^{\widehat{0}}$. Looking at the Heegaard splittings ${\cal B}_a^{\widehat{0}} = (H_{1, a}, H_{2, a}, K({\cal J}({\cal B}_a^{\widehat{0}})))$, we have that:
\begin{equation}\label{eq:interm-bubble-decomposition}
\begin{split}
{\cal Q}_a = H_{1, a}\,,\\
{\cal R}_a = H_{2, a}\,,\\
{\cal Q} = \bigsqcup_a H_{1, a}\,,\\
{\cal R} = \bigsqcup_a H_{2, a}\,,
\end{split}
\end{equation}
with $\sqcup$ representing the disjoint union of sets.
It is now clear that naively partitioning the vertices of the triangulation according to colors is not enough to identify a trisection.
Moreover, the information on $\cal D$, although formally present, remains implicit and hidden in the construction.
As we briefly mentioned, these problems have been tackled in two different ways in previous works. In \cite{bell2017}, the authors perform Pachner moves on the triangulation. The specific type of Pachner move employed (the $2\to 4$ Pachner move) increases the number of $4$-simplices in ${\cal T}$ without affecting the topology (it replaces a $4$-ball with another $4$-ball having the same triangulation on the boundary). This allows one to connect the spines of the four-dimensional handlebodies at will, as well as to clearly infer the structure of compression discs for all the three-dimensional handlebodies. Nevertheless, Pachner moves are not compatible with the colors in the present case, since the complete graph with six vertices cannot be consistently $5$-colored. In \cite{Casali:2019gem}, on the other hand, the authors considered a special class of colored graphs encoding crystallizations. By definition, all $K({\cal B}^{\widehat{i}})$
are connected in crystallization theory.
Such a requirement limits the number of nodes in the graph encoding a manifold $M$, which results in a very powerful tool to study the topology of PL-manifolds\footnote{As we will explain later, the authors of \cite{Casali:2019gem} actually consider a wider class of graphs. Nevertheless they still base their construction on the connectedness of some chosen $\widehat{i}$-bubble.}. However, crystallization graphs
only cover a small fraction of the cases of interest to the tensor model community. In the following section we present an alternative approach which allows us to generalize the construction of trisections to a wider class of graphs.
\subsection{Connecting $4$-bubbles}
\label{sec:connect4bubbles}
\begin{figure}[h]
\centering
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.7\columnwidth}
\includegraphics[scale=.18]{doublepentachoron.png}
\caption{
The polytope $\pi_{ab}$, obtained as the union of the two $4$-simplices $\sigma_a$ and $\sigma_b$ sharing the $0$-colored $3$-face $\tau_{ab}$ (see sec.~\ref{sec:connect4bubbles}).
}
\label{fig:doublepentachoron}
\end{minipage}
\end{figure}
In order to overcome the issues coming from having disconnected realizations of $4$-bubbles, we follow a construction similar to that of the Heegaard splittings discussed in section~\ref{sec:more-heeg-split}.
Let us start by embedding the colored graph dual to ${\cal T}$ in ${\cal T}$ itself via the prescription described in section~\ref{sec:coloredtensormodels}.
We consider four-dimensional regular neighborhoods of the $0$-colored lines embedded in ${\cal T}$.
Topologically, each such four-dimensional neighborhood $n$ is ${\rm{D}}^3\times {\rm{D}}^1$, and its boundary is $({\rm{S}}^2\times {\rm{D}}^1) \cup ({\rm{D}}^3\times{\rm{S}}^0)$.
This boundary intersects ${\cal D}$ (three-dimensional) transversally and, therefore, the longitudinal component ${\rm{S}}^2\times {\rm{D}}^1$ is split by $\cal D$ into two parts: $\partial_+ n$ and $\partial_- n$, each of topology ${\rm{D}}^2\times {\rm{D}}^1$. As a convention we fix $\partial_+ n$ to be between $\cal D$ and $\{v^{\widehat{1}}, v^{\widehat{2}}\}$ and $\partial_- n$ between $\cal D$ and $\{v^{\widehat{3}}, v^{\widehat{4}}\}$.
\begin{construction}
\label{const:aboutD}
Given a colored triangulation ${\cal T}$ of a manifold $M$, dual to a colored graph $\cal G$, and a choice of a jacket for its $\widehat{0}$-bubbles, ${\cal J}({\cal B}^{\widehat{0}}_a)$, there exist three 3-submanifolds of ${\cal T}$: $\cal Q'$, $\cal R'$ and $\cal D'$, such that they share the same boundary
\begin{equation}
\Sigma = \partial{\cal Q}' = \partial{\cal R}' = \partial{\cal D}'\,,
\end{equation}
and which are constructed carving regular neighborhoods of the embedded $0$-colored lines of $\cal G$ as:
\begin{equation}
\begin{split}
{\cal Q}' &=[ {\cal Q}\setminus (\bigcup_l n)]\cup [\bigcup_l \partial_+ n]\,,\\
{\cal R}' &=[ {\cal R}\setminus (\bigcup_l n)]\cup [\bigcup_l \partial_- n]\,,\\
{\cal D}' &= {\cal D}\setminus [\bigcup_l \dot{n}]\,,
\end{split}
\end{equation}
where $l$ runs over the set of $0$-colored lines and $\dot{n}$ indicates the interior of $n$.
\end{construction}
In order to understand construction~\ref{const:aboutD}, let us consider two vertices $v_a^{\widehat{0}}$ and $v_b^{\widehat{0}}$ sitting opposite to the same $0$-colored $3$-face, $\tau_{ab}$ and call $n_{ab}$, the regular neighborhood of the $0$-colored line $\ell_{ab}$ dual to $\tau_{ab}$.
We call the $4$-simplex spanned by $v_a^{\widehat 0} \cup \tau_{ab}$, $\sigma_{a}$, and similarly for $b$.
One can view the $3$-ball ${\rm{D}}^3$ in this four-dimensional regular neighborhood of a $0$-colored line, $n_{ab}$, as a retraction of the
tetrahedron ${\cal Q}_{\sigma_a} \cup {\cal R}_{\sigma_a} = K({\cal B}^{\widehat{0}}_a) \cap \sigma_a$ (or for $b$)
inside each 4-simplex, $\sigma_{a}$ (or $\sigma_{b}$), where ${\cal Q}_{\sigma_a}$ is ${\cal Q} \cap \sigma_a$, etc.
Using $n_{ab}$, we perform a connected sum of the $3$-submanifolds defined by the $4$-bubbles $K({\cal B}^{\widehat{0}})$'s and, at the same time, a boundary-connected sum of the four-dimensional neighborhoods of vertices in the triangulation.
The union $\sigma_a\cup\sigma_b$ via their shared face $ \tau_{ab} $ defines a polytope{\footnote{Such polytopes are called double pentachora in \cite{bell2017}.}} $\pi_{ab}$, spanned by $v_a^{\widehat 0} \cup \tau_{ab} \cup v_b^{\widehat 0}$.
In each $\pi_{ab}$, there are two central squares $s_a$ and $s_b$ which are the intersections of $\pi_{ab}$ with the realization of jackets of $\widehat{0}$-bubbles $K ({\cal J} ({\cal B}_a^{\widehat{0}}))$ and $K( {\cal J} ({\cal B}_b^{\widehat{0}}))$ respectively.
We note that
a neighborhood of the barycenter of $s_a$ (resp. $s_b$) intersects $K({\cal B}^{\widehat{0}}_a)$ (resp. $K({\cal B}^{\widehat{0}}_b)$) in a $3$-ball satisfying the requirements presented in sec.~\ref{sec:connected-sum}.
Therefore, by removing such neighborhoods and identifying their boundaries, we can easily construct the connected sum $K({\cal B}^{\widehat{0}}_a)\,\sharp\,K({\cal B}^{\widehat{0}}_b)$ preserving the Heegaard splitting defined by the chosen jackets, i.e., connecting the components of $\cal Q$ (resp. $\cal R$) surrounding $v_a^{\widehat{0}}$ and $v_b^{\widehat{0}}$.
For later convenience we require the neighborhood of the barycenter of $s_a$ (resp. $s_b$) to be small enough not to intersect $\partial s_a$ (resp. $\partial s_b$).
Note that, by construction, this also yields $ X_1^a\,\natural\,X_1^b$.
As we discussed in sec.~\ref{sec:connected-sum}, we can represent the boundary-connected sum of handlebodies through a line connecting the boundaries.
This is precisely the role of $\ell_{ab}$; $X_1^{ab}=X_1^a\cup n_{ab}\cup X_1^b$ is homeomorphic to $ X_1^a\,\natural\,X_1^b$.
The intersections $q_a=s_a\cap n_{ab}$ and $q_b=s_b\cap n_{ab}$ identify smaller squares splitting each ${\rm{D}}^3$ in $({\rm{D}}^3\times{\rm{S}}^0)\subset\partial n_{ab}$.
The interiors of $q_a$ and $q_b$ now belong to the interior of $X_1^{ab}$ while their boundaries define a surface
\begin{equation}
\Sigma_{ab}=\partial q_a \times \ell_{ab} = \partial q_b \times \ell_{ab} \,.
\end{equation}
It is now straightforward to see that we just constructed the connected sum of the surfaces dual to ${\cal J}({\cal B}_a^{\widehat{0}})$ and ${\cal J}({\cal B}_b^{\widehat{0}})$ by simply considering the following union:
\begin{equation}
[K({\cal J}({\cal B}_a^{\widehat{0}}))\setminus q_a] \cup \Sigma_{ab} \cup [K({\cal J}({\cal B}_b^{\widehat{0}}))\setminus q_b]\,.
\end{equation}
With a similar construction and following the arguments of sec.~\ref{sec:connected-sum}, it is not hard to see that we also constructed the boundary-connected sums ${\cal Q}_{\sigma_a}\,\natural\,{\cal Q}_{\sigma_b}$ and ${\cal R}_{\sigma_a}\,\natural\,{\cal R}_{\sigma_b}$.
Notice that the boundary-connected sum of the three-dimensional handlebodies is made preserving the combinatorics defined by the chosen jacket, and this clarifies any ambiguity due to a choice of orientation.
Since $\ell_{ab}$ is transversal to $\tau_{ab}$, it is easy to see that it lies inside ${\cal D}_{\sigma_a}\cup{\cal D}_{\sigma_b}$. The intersections $(n_{ab}\cap{\cal D}_{\sigma_a})\cup(n_{ab}\cap{\cal D}_{\sigma_b})$ identify what shall be carved out of ${\cal D}_{\sigma_a}$ and ${\cal D}_{\sigma_b}$. Here, we require $(n_{ab}\cap\partial{\cal D}_{\sigma_a}) = \emptyset$ (and similarly for $\partial{\cal D}_{\sigma_b}$) in order to avoid singularities. The operation is, thus, very similar to a stabilization up to the fact that we are identifying balls on the boundaries of two disconnected handlebodies. The boundary of the (three-dimensional) carved region in ${\cal D}_{\sigma_a}\cup{\cal D}_{\sigma_b}$ is, again, $\Sigma_{ab}$. Hence, $\Sigma_{ab}$ is identified as the central surface obtained through such a carving operation.
In general, there is more than one $0$-colored $3$-face sitting opposite to the same pair $v_a^{\widehat{0}}$ and $v_b^{\widehat{0}}$; we denote this number by $E_{ab}$.
It means that there are $E_{ab}$-many embedded $0$-colored lines connecting the two realizations of the bubbles $K({\cal B}_a^{\widehat{0}})$ and $K({\cal B}_b^{\widehat{0}})$.
Repeating the above procedure for all $E_{ab}$ lines not only defines the boundary-connected sums ${\cal Q}_a\natural{\cal Q}_b$ and ${\cal R}_a\natural{\cal R}_b$, but also adds to each of them $E_{ab}-1$ extra $1$-handles via stabilization.
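In terms of genus bookkeeping (a simple check we add, using the additivity of the genus under (boundary-)connected sums and the fact that each stabilization adds one handle), connecting $K({\cal B}^{\widehat{0}}_a)$ and $K({\cal B}^{\widehat{0}}_b)$ along all the $E_{ab}$ lines produces a Heegaard surface of genus
\begin{equation}
g_{{\cal J}({\cal B}_a^{\widehat{0}})} + g_{{\cal J}({\cal B}_b^{\widehat{0}})} + (E_{ab}-1)\,,
\end{equation}
in agreement with the loop counting $L$ entering eq.~\eqref{eq:central-genus} below.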
We are left to clarify how ${\cal D}=\bigcup_\sigma {\cal D}_\sigma$ behaves under the iterated carving operation.
Let us first notice that each ${\cal D}_{\sigma}$ is bounded by six rectangular faces.
One, as we defined earlier, is $s$ and is determined by the intersection of $\sigma$ with the realization of a jacket of a $\widehat{0}$-bubble. $s$ is the only face of ${\cal D}_{\sigma}$ whose interior lies in the interior of $\sigma$.
The interior of the other five faces lies inside the interior of one of the five boundary faces of $\sigma$.
Hence, each boundary face of ${\cal D}_{\sigma}$ naturally carries a single color from the colored graph $\cal G$.
The face carrying the color $0$ is the one sitting opposite to $s$ and we call it $o$.
For every ${\cal D}_{\sigma}$ in $M$ there is one and only one ${\cal D}_{\sigma'}$ sharing $o$ with ${\cal D}_{\sigma}$.
The union ${\cal D}_{\sigma}\cup{\cal D}_{\sigma'}$ can be thought of as the effective building blocks of $\cal D$, and they are in one-to-one correspondence with the $0$-colored lines of $\cal G$.
These building blocks are bounded by ten faces; in $\pi_{ab}$, we have: $s_a$, $s_b$, four lateral faces carrying colors $i\neq 0$ coming from ${\cal D}_{\sigma_a}$ and four lateral faces carrying colors $i\neq 0$ coming from ${\cal D}_{\sigma_b}$.
Note that faces of the same color coming from ${\cal D}_{\sigma_a}$ and ${\cal D}_{\sigma_b}$ are glued to each other via a boundary edge.
When we compose such blocks to build $\cal D$, each block glues to another sharing a lateral face according to the colors.
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.9\columnwidth}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=.4]{carved3dHb.pdf}
\caption{}
\label{fig:carved-D}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[scale=.35]{tripleCarvedHb.pdf}
\caption{}
\label{fig:carved-triple-D}
\end{subfigure}
\caption{Structure of $\cal D'$. Fig.~\ref{fig:carved-D} shows an effective building block of $\cal D'$, namely ${\cal D}_{ab}^{(\ell)}$.
There are eight blue $2$-faces which are to be glued to other effective building blocks of $\cal D'$.
The yellow surface is going to be part of the central surface of the trisection and, therefore, will constitute the boundary of $\cal D'$, i.e., to be glued onto $\cal Q$ and $\cal R$.
Four pieces of the $\gamma$-curves describing $\cal D'$ are pictured in blue lines.
The brown rectangle with one of the $\gamma$-curves as boundary represents part of a compression disc.
The spine of the effective block of $\cal D'$ is shown as a thick solid black loop piercing through the compression disc.
Fig.~\ref{fig:carved-triple-D} shows three effective building blocks of $\cal D'$ glued along their $i$-colored faces (let us pick $i=1$).
Here we show a $\gamma$-curve in blue, circulating along all of the three effective building blocks and defining the boundary of a compression disc (shown in brown).
In this example, the $\gamma$-curve is defined by the color set $\{0, 1\}$.
All the other $\gamma$-curves, which we do not show, only travel through one block
and then move away onto other patches of the central surface which are not shown in the picture.
As before, patches of the central surface are shown in yellow and lateral faces in blue.
}
\label{fig:carved-both-D}
\end{minipage}
\end{figure}
It is important to realize that the embedding of $0$-colored lines connects opposite faces of such building blocks, namely $s_a$ and $s_b$; therefore, a tubular neighborhood of a $0$-colored line always intersects $s_a$ and $s_b$.
After carving such neighborhoods out of ${\cal D}$,
each building block is turned into a solid torus (pictorially, we can think of tunneling through them along a $0$-colored line, see fig.~\ref{fig:carved-D}). In $\pi_{ab}$, we refer to such new effective building blocks as
\begin{equation}
{\cal D}_{ab}^{(\ell)} = ({\cal D}_{\sigma_a}\cup{\cal D}_{\sigma_b})\setminus n_{ab}\,,
\end{equation}
and the resulting entire structure corresponds to
\begin{equation}
{\cal D}' = {\cal D}\setminus({\mathlarger{\cup}}_{\ell_{ab}} n_{ab}) = \bigcup_{\ell_{ab}} {\cal D}^{(\ell)}_{ab}={\mathlarger{\natural}}{\cal D}^{(\ell)}_{ab}\,.
\end{equation}
Before moving on, an important remark is in order. So far we discussed the case in which $v^{\widehat{0}}_a$ is different from $v^{\widehat{0}}_b$. Nevertheless, it may easily happen that $\tau_{ab}$ opposes the same vertex $v^{\widehat{0}}$ on both sides (in fact, it is sufficient that the two $4$-simplices in $\pi_{ab}$ share one more face, beside $\tau_{ab}$, for this to be true). In this case, as explained in section \ref{sec:stab}, most of the features we just discussed would still hold. Simply, instead of performing a connected sum between two ${\widehat 0}$-bubbles, we would be adding a $1$-handle to a single ${\widehat 0}$-bubble via stabilization (as in fig.~{\ref{fig:stabilization}}) and increasing by one the genus of the central surface defined by $K({\cal J}({\cal B}_a^{\widehat{0}}))$. In particular, this situation would correspond to a single building block ${\cal D}_{ab}^{(\ell)}$ in which two lateral faces of the same color $i$ are identified. One can understand such an operation as the retraction to a point of a disc on the boundary of ${\cal D}_{ab}^{(\ell)}$, bounded by a trivial element in the first homotopy group of the $2$-torus\footnote{Remember that two faces of the same color in ${\cal D}_{ab}^{(\ell)}$ already share a side.}. Topologically, such a ${\cal D}_{ab}^{(\ell)}$ would therefore remain a solid torus.
We are now ready to state the main result of this work.
\begin{theorem}
\label{theor:ourtheor}
$\cal Q'$, $\cal R'$ and $\cal D'$ are handlebodies.
\end{theorem}
\begin{proof}
The submanifolds $\cal Q'$ and $\cal R'$, as explained in construction~\ref{const:aboutD}, are stabilizations of the boundary-connected sum of the handlebodies $\{{\cal Q}_a\}$ and $\{{\cal R}_a\}$ respectively and, as such, are handlebodies themselves. Their spines are defined as described in sections \ref{sec:connected-sum}, \ref{sec:jackets-heeg} and \ref{sec:stab}, i.e., via the bicolored paths defining the jacket, joined by the embedded $0$-colored lines of $\cal G$.
${\cal D}'$ is the boundary-connected sum of the building blocks ${\cal D}_{ab}^{(\ell)}$ performed via their lateral faces. Since the ${\cal D}_{ab}^{(\ell)}$ are solid tori, ${\cal D}'$ is a handlebody by construction. The prescription to perform such boundary-connected sum is encoded in the combinatorics of $\cal G$.
Eventually, no lateral $2$-face of ${\cal D}_{ab}^{(\ell)}$ will be left free (for any $a$ and $b$ in the graph) and the only contributions to the boundary of $\cal D'$ will come from $s_a\setminus q_a$, $s_b\setminus q_b$ and $\Sigma_{ab}$ (for any $a$ and $b$).
Its spine can be identified by noticing that each solid torus ${\cal D}^{(\ell)}_{ab}$ can be collapsed along $\ell_{ab}$ onto a ${\rm{S}}^1$ homeomorphic to the boundary of $o_{ab}$. The spine of ${\cal D}'$ can, thus, be constructed by gluing the spines of each building block\footnote{We recall that the boundary of each $o_{ab}$ face consists of four sides carrying colors $i\neq 0$.}.
\end{proof}
Let us now turn our attention to drawing a set of $\gamma$-curves on the boundary of $\cal D'$.
Four sectors of compression discs can be built in each ${\cal D}_{ab}^{(\ell)}$ intersecting the central surface on $\Sigma_{ab}$ as well as on $s_a \setminus q_a$ and $s_b \setminus q_b$ (see fig.~\ref{fig:carved-D}).
The resulting four arcs of $\gamma$-curves correspond to arcs of four circles coplanar with the axis of revolution of the torus boundary of each ${\cal D}_{ab}^{(\ell)}$. Each arc starts from one of the sides of $s_a$ (determined by a color $i\neq 0$), proceeds along $\Sigma_{ab}$ (therefore parallel to a $0$-colored line of $\cal G$), and ends on the side of $s_b$ carrying the same color as the side it started from, as depicted in fig.~\ref{fig:carved-both-D}.
Here, each arc will connect to another one coming from a neighboring building block of $\cal D'$.
Thanks to the combinatorics of $\cal G$, inherited by the building blocks of $\cal D'$, the composition of a $\gamma$-curve through the union of such arcs proceeds according to the $\{0i\}$-colored cycles in the graph and closes after a number of iterations equal to half the length of the $\{0i\}$-cycle.
Therefore, from each $0$-colored tetrahedron $\tau_{ab}$, four $\gamma$-curves depart, each going around a boundary triangle.
We remark here that this procedure will give us redundant $\gamma$-curves.
We conclude this section by simply performing the following identifications with respect to our definition \ref{def:trisection}:
\begin{equation}
\begin{split}
H_{23} &= {\cal D}' \,,\\
H_{12} &= {\cal Q}'\,,\\
H_{13} &= {\cal R}'\,.
\end{split}
\end{equation}
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.9\columnwidth}
\centering
\includegraphics[scale=.4]{bubblesconnected.png}
\caption{
We illustrate how we connect the isolated duals of $4$-bubbles via the carving operation explained in sec.~{\ref{sec:connect4bubbles}}.
For simplicity, the figure is a three-dimensional analogue rather than a four-dimensional one.
The two ${\rm{S}}^2$s on the right and on the left represent $K({\cal B}^{\widehat{0}}_a)$ and $K({\cal B}^{\widehat{0}}_b)$, and they are connected via a tubular neighborhood of $\ell$ (here represented with a solid black line).
Part of $H_{12}$ is shown as a red surface, $H_{13}$ in green, and $H_{23}$ in blue.
Part of the surface that will become the central surface is depicted as yellow lines.
The three-dimensional space above the red and the blue surfaces is an analogue of $X_2$, the one below is $X_3$, whereas the tubular neighborhood of $\ell$ and the spheres constitute $X_1$.
The light blue triangles represent ${\cal Q}_{\sigma_a} \cup {\cal R}_{\sigma_a}$ and ${\cal Q}_{\sigma_b} \cup {\cal R}_{\sigma_b}$. (See fig.~{\ref{fig:doublepentachoron}}.)
}
\label{fig:bubblesconnected}
\end{minipage}
\end{figure}
\subsection{Four-dimensional handlebodies}
\label{sec:4dhandlebodies}
Let us briefly comment on the four-dimensional pieces $X_1$, $X_2$, and $X_3$ we obtained with our prescription.
As we discussed at the beginning of section~\ref{sec:trisections}, theorem~\ref{th:extending-th} implies that
there is a unique cap-off of ${\cal D}'\cup{\cal Q}'\cup{\cal R}'$, i.e., there is a unique way of defining $X_1$, $X_2$, and $X_3$ using only $3$- and $4$-handles such that the pairwise unions ${\cal D}'\cup{\cal Q}'$, ${\cal D}'\cup{\cal R}'$ and ${\cal Q}'\cup{\cal R}'$, are the boundaries of $X_1$, $X_2$, and $X_3$.
Due to the symmetric nature of $i$-handles and $(d-i)$-handles in $d$ dimensions, all $X_1$, $X_2$, and $X_3$ are guaranteed to be handlebodies.
The statement, therefore, is equivalent to saying that there is a unique set of three handlebodies with the given boundaries.
Nevertheless one might wonder whether, given a triangulation, these handlebodies actually reconstruct the PL-manifold or not.
In fact, embedding ${\cal D}'$, ${\cal Q}'$ and ${\cal R}'$ in the triangulation as we illustrated above provides us with three four-dimensional submanifolds $\overline{X}_1$, $\overline{X}_2$ and $\overline{X}_3$.
These manifolds share the same boundaries as $X_1$, $X_2$, and $X_3$, but they are a priori different. If they were different, $\overline{X}_1$, $\overline{X}_2$ and $\overline X_3$ would automatically not be handlebodies, due to the aforementioned uniqueness. In order to clarify this point we look for the spines of $\overline{X}_1$, $\overline{X}_2$ and $\overline{X}_3$.
\begin{corollary}
\label{corollary:4dhbody}
Given a colored triangulation ${\cal T}$ of a manifold $M$, dual to a colored graph $\cal G$, and a choice of a jacket for its $\widehat{0}$-bubbles, ${\cal J}({\cal B}^{\widehat{0}}_a)$, construction \ref{const:aboutD} defines a trisection of $M$.
\end{corollary}
\begin{proof}
Since the three-dimensional handlebodies ${\cal Q}'$, ${\cal D}'$ and ${\cal R}'$ satisfy the hypothesis of definition \ref{def:trisection} by construction (i.e., they share the same boundary and their interiors are disjoint), we can focus on the four-dimensional submanifolds $\overline{X}_1$, $\overline{X}_2$ and $\overline{X}_3$. Their interiors are disjoint by construction; therefore, the only issue is to prove that they are handlebodies.
$\overline{X}_1$ is bounded by ${\cal Q}'\cup {\cal R}'$.
Its spine is easily found by collapsing $K({\cal B}_a^{\widehat{0}})$ to points\footnote{For the moment we are only dealing with manifolds rather than pseudomanifolds, therefore this just represents the retraction of a topological ball to its center.} and keeping the connection
encoded by $0$-colored embedded lines.
Therefore $\overline{ X}_1$ is a handlebody by construction.
$\overline{ X}_2$ is bounded by ${\cal Q}'\cup {\cal D}'$.
Bearing in mind the linear map from a $\Delta^{(4)}$ to $\Delta^{(2)}$ as in section \ref{sec:cutting-simplices}
(fig.~\ref{fig:4simplexmap}),
we notice that in every four-simplex, $\overline{X}_2$ can be retracted to an edge identified by the set of colors $\{1, 2\}$ via its endpoints: $v^{\widehat{1}}$ and $v^{\widehat{2}}$.
The set of these edges therefore constitutes a spine of $\overline{X}_2$.
Moreover, $\overline{X}_2$ is connected since its boundary $\partial\overline{X}_2$ is connected by construction.
This is enough to prove that $\overline{X}_2$ too is a handlebody.
The argument for $\overline{X}_3$ follows in complete analogy with the one for $\overline{X}_2$ upon replacing the set of colors $\{1, 2\}$ with $\{3, 4\}$ and the boundary $\partial\overline{X}_2 = {\cal Q}'\cup {\cal D}'$ with $\partial\overline{X}_3 ={\cal R}'\cup {\cal D}'$.
The uniqueness of the handlebodies with the given boundary implies $\overline{X}_1 = { X}_1$, $\overline{X}_2 = { X}_2$ and $\overline{X}_3 = { X}_3$.
\end{proof}
\subsection{Central surface and trisection diagram}
\label{sec:central-surface}
In the present section, we discuss the trisection diagram encoded in what we illustrated in section \ref{sec:trisections}, gradually revealing the topological information hidden in our construction.
In general, the genus of the central surface obtained from our construction will not coincide with the trisection genus.
In the rare cases in which the two coincide, the corresponding triangulation is of a very special type, and one expects it to be suppressed in the statistical theory dictated by the tensor model.
This is not necessarily a dramatic problem, provided that there is a clear understanding of $\alpha$-, $\beta$- and $\gamma$-curves.
This information about the curves, however, is not necessarily trivial to extract, since we generate many copies of the same curve which, in principle, intersect the other curves on the diagram differently; choosing one copy over another corresponds to a different diagram with the same central surface\footnote{Therefore connected by a series of handle slides and by as many handle additions as handle cancellations.}.
Nevertheless, we are hopeful that future work might disentangle this information and overcome this ambiguity.
To start, we look at the genus of the central surface.
Let us define the following graph $\widetilde{\cal G}$ derived from a colored graph $\cal G$.
Starting from the original colored graph $\cal G$, we collapse all the $\widehat{0}$-bubbles to points which will become the
nodes of $\widetilde{\cal G}$. Then, we connect these nodes via the $0$-colored lines of $\cal G$ encoding
the same combinatorics as the original graph $\cal G$.
Effectively, the $0$-colored lines of $\cal G$ simply become the lines of $\widetilde {\cal G}$.
Note that the number of connected components of a graph is preserved under this operation; if $\cal G$ is connected, $\widetilde{\cal G}$ is connected.
The number of loops\footnote{We refer here to the notion of loops of a graph that is commonly used in physics in the framework of Feynman diagrams, not to the graph theoretical notion of a line connecting a node to itself.
What we refer to as loop is, in graph theory, sometimes referred to as independent cycles.} of $\widetilde {\cal G}$ corresponds to the dimension of its first homology group and evaluates to:
\begin{equation}
L=\vert {\cal E} \vert-\vert {\cal V} \vert +1\,,
\end{equation}
where $\vert {\cal E} \vert$ is the number of lines and $\vert {\cal V} \vert$ is the number of
nodes of $\widetilde {\cal G}$.
By construction, $\vert {\cal V} \vert$ corresponds to the number of different $\widehat{0}$-bubbles which, in turn, is the number of vertices
sitting opposite to $0$-colored tetrahedra.
$\vert \cal E \vert$, on the other hand, corresponds to the number of $0$-colored tetrahedra and evaluates to $p/2$ for a triangulation with $p$ $4$-simplices\footnote{Note that we are considering only orientable manifolds and, therefore, the original graph $\cal G$ is bipartite.}.
\begin{proposition}
Construction \ref{const:aboutD} defines a trisection with a central surface $\Sigma$ of genus $g_c$ given by
\begin{equation}
\label{eq:central-genus}
g_c = \sum_{a=1}^{\vert \cal V \vert} g_{{\cal J}({\cal B}_a^{\widehat{0}})} +L\,,
\end{equation}
with $g_{{\cal J}({\cal B}_a^{\widehat{0}})}$ being the genus of the jacket ${\cal J}_{\{1, 2\}\{3, 4\}}$ of the bubble ${\cal B}_a^{\widehat{0}}$.
\end{proposition}
Notice that $g_c$ is invariant under the insertion of $d$-dipoles in the $0$-colored lines, while inserting a $d$-dipole in a line of color $i \ne 0$ increases $L$, and therefore $g_c$, by one. In fact, as we show in appendix \ref{app:examples}, the elementary melon yields the genus-$1$ trisection diagram for ${\rm{S}}^4$, and the insertion of a $d$-dipole can be understood as the connected sum with the elementary melon at the level of the colored graph.
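As a concrete check of eq.~\eqref{eq:central-genus} (our own computation; the corresponding diagram is worked out in appendix \ref{app:examples}), take the elementary melon, i.e., the $5$-colored graph with two nodes joined by five lines. There is a single $\widehat{0}$-bubble, namely the three-dimensional melon, whose jackets all have genus zero, and $\widetilde{\cal G}$ consists of one node with a single self-loop, so that $L = 1 - 1 + 1 = 1$ and
\begin{equation}
g_c = 0 + 1 = 1\,,
\end{equation}
reproducing the genus-$1$ trisection diagram of ${\rm{S}}^4$.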
Let us look at the curves we have drawn on $\Sigma$. We remark that the genus $g_c$ also corresponds to the number of independent $\alpha$-, $\beta$- and $\gamma$-curves.
The $\gamma$-curves are obtained as paths on $\Sigma$, composed of segments parallel to the lines of $\widetilde {\cal G}$ and of segments crossing the boundaries between different $s$'s, according to an associated color $i\neq 0$.
The composition of these segments according to the combinatorics of ${\cal G}$ will force the $\gamma$-curve to close into a loop (see fig.~\ref{fig:carved-triple-D}).
This tells us that the $\gamma$-curves are isomorphic to embedded $\{0 i\}$-cycles in ${\cal T}$.
Note that by representing the graph $\cal G$ in stranded notation, these curves are literally drawn on the surface\footnote{Note that every vertex of $\cal G$ corresponds to a square in the surface dual to ${\cal J}({\cal B}^{\widehat{0}})$ and the $0$-colored embedded lines are interpreted as handles.
Therefore the $\{0 i\}$-strand is really isomorphic to one of the $\gamma$-curves.}.
Similarly, given a chosen jacket ${\cal J}_{\{i, j\}\{k, l\}}$, the $\alpha$- and $\beta$-curves are given by the $(i, j)$- and $(k, l)$-strands of $\cal G$ (see fig.~\ref{fig:trisection-curves}).
Furthermore, we shall add one $\alpha$- and one $\beta$-curve for every line of $\widetilde{\cal G}$.
These last additions correspond to the attaching curves of the Heegaard splitting of ${\rm{S}}^1 \times {\rm{S}}^2$ in the genus-one trisection diagram of ${\rm{S}}^4$ (see section~\ref{sec:stab}).
\begin{figure}[h]
\begin{minipage}[t]{0.8\textwidth}
\centering
\def0.75\columnwidth{0.3\columnwidth}
\centering
\includegraphics[scale=.35]{trisectioncurves.pdf}
\caption{
This sub-$3$-simplex is an element in the triangulation dual to a $\widehat{0}$-bubble.
We can identify an $\alpha$-curve (red line) and a $\beta$-curve (green line) on the central surface (yellow square) by projecting the edges (red and green) of the sub-$3$-simplex sitting opposite to the central surface down to the central surface itself. These edges become part of the spines of the corresponding three-dimensional handlebodies.
Parts of the compression discs are shown in pink (light blue), each bounded by an $\alpha$-curve ($\beta$-curve).
We highlight the piece dual to a jacket ${\cal J}({\cal B}^{\widehat{0}})$: the yellow square, which is part of the central surface, is nothing but the quadrangulation dual to ${\cal J}({\cal B}^{\widehat{0}})$.
We illustrate that the trisection curves ($\alpha$-curve in red and $\beta$-curve in green) coincide with drawing the strands of the original graph directly on the quadrangulation of the central surface.
}
\label{fig:trisection-curves}
\end{minipage}
\end{figure}
As we stated above, not all these curves are independent. Each of them is a viable attaching curve,
but not all of them should be considered at the same time.
For the $\alpha$-curves (and similarly for the $\beta$-curves), we can be slightly more precise:
the independent ones should be chosen to be $g_{{\cal J}({\cal B}_a^{\widehat{0}})}$-many in each realization of a $\widehat{0}$-bubble plus $L$-many among the extra ones we draw around the now embedded lines of $\widetilde {\cal G}$ (up to Heegaard moves).
Remember that attaching curves of a graph are defined by the condition that cutting along them we obtain a connected punctured sphere (see fig.~\ref{fig:cutting}).
$L$ is, by construction, the maximal number of lines we can cut without disconnecting the graph $\widetilde {\cal G}$.
Once these first $L$ curves are cut, we can proceed to identify the rest of the $\alpha$-curves, given by each of the $\vert \cal V \vert$-many $\widehat 0$-bubbles through ${\cal J}({\cal B}^{\widehat{0}})$; in total this yields $\sum_a g_{{\cal J}({\cal B}_a^{\widehat{0}})}+L=g_c$ independent curves, in agreement with eq.~\eqref{eq:central-genus}.
So far we have treated the color $0$ as special; of course, that is an arbitrary choice made for ease of illustration, and any other color choice will do.
Hence, there are $15$ possible trisections (up to handle slides) that can be generated with our construction ($5$ choices of $4$-bubbles and $3$ choices of jackets for each choice of $4$-bubble).
A final remark is in order. If we compare our results with the one presented in \cite{Casali:2019gem}, the genus of the central surface we obtain is obviously higher and less indicative of the topological invariant. A more striking difference is that we have an extra combinatorial contribution. By construction, and due to the properties of the graphs considered, the result presented in \cite{Casali:2019gem} is only affected by the Heegaard splitting of an embedded $3$-manifold, in particular, the Heegaard splitting of the link of a vertex. Moreover, for a closed compact $4$-manifold $M$, such a link is always PL-homeomorphic to ${\rm{S}}^3$.
We can, thus, understand the trisection genus of a manifold $M$, which is a smooth invariant, as a lower bound for the possible Heegaard splittings of embedded spheres induced by colored triangulations of $M$. In our construction, though, an extra contribution to the genus of the central surface is produced in the form of $L$ in equation \eqref{eq:central-genus}. One may wonder whether this contribution is actually necessary or just an artifact of our construction of trisections. In other words, if the relevant topological information could indeed be rephrased in terms of Heegaard splittings of embedded $3$-manifolds, it might be enough to consider the connected sum of the realizations of $4$-bubbles, without systematically stabilizing the trisection with $L$ extra $1$-handles.
\subsection{Singular manifolds}
\label{sec:pseudo-mfd}
What we have discussed so far strictly applies only to manifolds, i.e., to graphs where all $\widehat{i}$-bubbles are dual to PL-spheres. Nevertheless, colored graphs generated by a colored tensor model of the form \eqref{def:simplicial-tensor-model} encode pseudo-manifolds as well.
It is natural to wonder whether our construction might encode any sensible topological information for such a wider class of graphs.
In \cite{Casali:2019gem} such an extension has been made clear starting from crystallization graphs.
We will follow similar steps in order to extend the same construction beyond graphs encoding closed compact manifolds.
Let us restrict to the case of $\widetilde{M}=K({\cal G})$ being singular manifolds.
Then, all the $\widehat{i}$-bubbles are dual to PL-manifolds and the singularity is only around vertices in ${\cal T}$ (rather than higher dimensional simplices).
One can obtain a compact manifold $M$ out of $\widetilde{M}$ by simply removing open neighborhoods of the singular vertices in ${\cal T}$. The number of connected components of $\partial M$ will increase by the number of singular vertices with respect to the number of connected components of $\partial \widetilde{M}$. Conversely, one can obtain a singular manifold by coning all the boundary components of a manifold with (non-spherical) boundary. If $\cal G$ is a closed graph, then the above correspondence is a bijection between the set of manifolds with non-spherical boundary components and singular manifolds.
Though such bijection allows us to work with manifolds in a larger class of graphs,
the definition of trisections as formulated in definition~\ref{def:trisection} only applies to closed manifolds. Hence, we shall extend it to include boundary components in order to connect with our combinatorial construction. Following \cite{Casali:2019gem} we define a \textit{quasi-trisection} by allowing one of the four-dimensional submanifolds not to be a handlebody:
\begin{definition}\label{def:quasi-trisection}
Let $M$ be an orientable, connected 4-manifold with $n$ boundary components $\partial M_1\,, \dots\,,\partial M_n$. A quasi-trisection of $M$ is a collection of three submanifolds $X_1, X_2, X_3 \subset M$ such that:
\begin{itemize}
\item $X_2$ and $X_3$ are four-dimensional handlebodies of genus $g_2$ and $g_3$ respectively,
\item $X_1$ is a compression body with topology ${\mathlarger{\natural}}_{r=1}^n (\partial M_r\times [0,1]) \bigcup_{s=0}^{g_1}{\rm h}_s$, ${\rm h}_s$ being $1$-handles,
\item the $X_i$ have pairwise disjoint interiors, with $\partial X_i\supset (X_i\cap X_j)\subset\partial X_j$, and $M= \cup_i X_i$,
\item the intersections $X_i\cap X_j = H_{ij}$ are three-dimensional handlebodies,
\item the intersection of all three pieces, $ X_1 \cap X_2 \cap X_3$, is a closed connected surface $\Sigma$ called the \textbf{central surface}.
\end{itemize}
\end{definition}
Let us further denote with $G_s^{(0)}$ the set of connected $5$-colored graphs with only one $\widehat{0}$-bubble and with all $\widehat{i}$-bubbles dual to topological spheres, and let us denote with $\xbar{G}_s^{(0)}$ the set of connected $5$-colored graphs whose only non-spherical bubbles are $\widehat{0}$-bubbles (but we do not restrict the number of such bubbles). Obviously, an element in $\xbar{G}_s^{(0)}$ describes a manifold that can be decomposed into the connected sum of realizations of elements of $G_s^{(0)}$. The connected sum, in this case, can be performed at the level of two graphs ${\cal G}_1$ and ${\cal G}_2$ by cutting a $0$-colored line in each graph and connecting the open lines of ${\cal G}_1$ to the open lines of ${\cal G}_2$. The construction of trisections we illustrated in the previous sections can be straightforwardly applied to graphs in $\xbar{G}_s^{(0)}$, and it is easy to see that the outcome satisfies the conditions in def.~\ref{def:quasi-trisection}. In this regard, the result is the simplest generalization of the result presented in \cite{Casali:2019gem}. A more complicated extension would require the inclusion of singular vertices defined by different color sets; we leave such a study for future works.
\section{Conclusions}
\label{sec:conclusions}
We have formulated trisections in the colored triangulations encoded in colored tensor models, restricting to the ones which are realized by manifolds (as opposed to pseudo-manifolds).
We utilized the embedding of colored tensor model graphs in their dual triangulations to facilitate our construction of trisections.
Generally speaking, the genus of the central surface of the trisection associated with a given colored tensor model graph grows as the graph gets bigger (i.e., as the number of nodes increases).
Therefore, statistically speaking, it is unlikely to obtain the trisection genus (which is a topological invariant) of the corresponding manifold of a given colored tensor model graph. Nevertheless, it would be interesting to investigate whether the construction of trisections might lead to new insights on the organization of the partition function of colored tensor models.
With the Gurau degree classifying tensor model graphs, we can achieve a large $N$ limit, where only the dominant melonic graphs, a subclass of spheres, are selected. Melons in the continuum limit have been shown to behave like branched polymers with Hausdorff dimension $2$ and spectral dimension $4/3$ \cite{Bonzom:2011zz, Gurau:2013cbh}.
Reflecting on, and motivated by, the quantum gravity context, we dream of finding a new parameter for colored tensor models which may classify the graphs in a new large $N$ limit, which may then give some new critical behavior.
There have been works in this direction \cite{Bonzom:2012wa, Bonzom:2015axa, Bonzom:2016dwy, BenGeloun:2017xbd}, where the authors studied how to achieve different universality classes than the melonic branched polymer (tree).
In \cite{Lionni:2017yvi}, given random discrete spaces obtained by gluing families of polytopes together in all possible ways, with a systematic study of different building blocks, the author achieved the right scalings for the associated tensor models to have a well-behaved $1/N$ expansion.
So far, one could achieve in addition to the tree-like phase, a two-dimensional quantum gravity planar phase, and a phase transition between them which may be interpreted as a proliferation of baby universes \cite{Lionni:2017xvn}.
In \cite{Valette:2019nzp}, the authors defined a new large $N$ expansion parameter, based on an enhanced large $N$ scaling of the coupling constants. The resulting dominant graphs are called generalized melons; however, this class of graphs is not yet completely classified, and it is not yet proven what kind of universality class they belong to in the continuum limit, although strong hints point toward branched polymers.
In our present case, knowing that in rank $3$ the realisation of a jacket is identified with a Heegaard surface, and that jackets govern the Gurau degree which is responsible for the melonic large $N$ limit,
it is tempting to delve further into the possibility of finding a specific parameter for rank $4$ colored tensor model based on trisections which may classify the graphs in the large $N$ limit.
Our next hope is to explore possibilities around trisections to find such a parameter.
Looking at the structure of equation \eqref{eq:central-genus} and its properties under $d$-dipole insertion/contraction, we expect melons to persist in dominating the large $N$ limit.
Nevertheless, a different parameter of topological origin might be induced by the above construction. An example is the intersection form, which we plan to investigate in the future following \cite{Feller:2016}. Hopefully, investigations in this direction might shed some light on the path integral of tensor models beyond the leading order in the large $N$ expansion.
\section*{Acknowledgements}
We would like to thank Andrew Lobb for giving us a lecture on Morse theory, for supervising us on a study on trisections and for other discussions while he was visiting OIST as an excellence chair in the OIST Math Visitor Program. We would also like to thank David O'Connell for leading study sessions on trisections with us. Furthermore, we thank Maria Rita Casali and Paola Cristofori as well as Razvan Gurau for checking our formulation and the manuscript.
\section{introduction}
\label{sec_introduction}
As one promising scheme of implementing quantum engineering, Floquet dynamics
have been widely studied in many quantum systems
\cite{goldmanPRX2014,shirley,salzman,dietz and holthaus}. The interest lies in
that one can substantially modify the long-time dynamical properties of a
quantum system by driving it with a short period, as well as in its
potential for realizing novel quantum devices, which was demonstrated in
numerous experimental and theoretical works such as matter-wave jets
\cite{chinNature2017,fengScience2019}, Floquet-Bloch bands
\cite{holthausJPB2016,weldPRL2019}, Bloch oscillation in a two-band model
\cite{plotz}, quantum ratchets
\cite{smirnovPRL2008,grifoniPRL2002,creffieldPRL2007,lundhPRL2005,chienPRA2013,wimberger,mark
, driven optical lattices \cite{eckardtRMP2017,fanPRA2019,kolovsky}, kicked
rotor \cite{rotor}, Floquet time crystal \cite{elsePRL2016,huangPRL2018} and
monopole magnetic field \cite{zhouPRL2018}.
Very recent experiments have demonstrated Floquet dynamics in spinor $^{87}$Rb
Bose-Einstein condensate (BEC), with the emphasis on spin oscillation
\cite{chapmanNC2016,gerbier2018} and quantum walk in momentum space
\cite{gil}, respectively. Experimental realization of spinor BEC has opened up
a new research direction of cold atom physics \cite{stengerNature1998}, in
which superfluidity and magnetism are simultaneously achieved. In spinor BEC
the spin-dependent collision interactions \cite{spinor condensate} allow for
the population exchange among hyperfine spin states and give rise to coherent
spin-mixing dynamics
\cite{uedaPR2012,hanPRL1998,chapmanNP2005,lettPRL2007,zhangPRA2005,wideraPRL2005,kronjagerPRL2006,gerbierPRA2012R,shinPRL2016
. In principle spin-mixing is a Josephson-like effect that takes place in
internal degrees of freedom of atomic spin as compared with that in external
degrees of freedom such as a BEC in a double-well potential, for which the
Floquet dynamics can be studied via periodic modulation of the barrier height
(Josephson coupling) or the difference between the well depths
\cite{haiPRA2009,haroutyunyanPRA2004,bigelowPRA2005,eckardtPRL2005,wangPRA2006,kuangPRA2000,ashhabPRA2007
. Similar to that, in the two experiments \cite{chapmanNC2016,gerbier2018}
magnetic field plays an important role, which modifies the relative energy
among spin states via the quadratic Zeeman effect. Parametric resonance (or
Shapiro resonance) and spin oscillations have been observed by applying a biased
magnetic field.
On the other hand besides the magnetic field, recent years have witnessed
growing interest in mediating atomic dynamics via the coupling of a BEC to an
optical cavity \cite{cold atom and cavity review}. With the aid of cavity
light field, researchers have successfully implemented photon-mediated
spin-exchange interactions \cite{norciaScience2018,davisPRL2019}, formation of
spin texture \cite{donnerPRL2018} and spinor self-ordering \cite{levPRL2018}.
Cavity-induced superfluid-Mott insulator transition
\cite{larsonPRL2008,zhouPRA2013}, cavity backaction-driven atom transport
\cite{goldwinPRL2014} and BECs with cavity-mediated spin-orbit coupling are
also reported \cite{dongPRA2014R,dengPRL2014}. In these works cavity feedback
plays an important role.
In this work, by considering the fact that an effective quadratic Zeeman shift
can be generated by a strong off-resonant laser field \cite{santosPRA2007}, we
propose an experimentally feasible scheme to realize cavity-driven Floquet
dynamics in spinor BECs. An interesting problem in this setup is that the
Floquet dynamics and the modulating parameter will become mutually dependent
through the cavity feedback. As compared with previous theoretical works
\cite{cosmePRL2018,zhangPRL2018} in which cavity drives the external
centre-of-mass motion of the BECs, here we will look into the problem of what
will take place in the ``internal'' Floquet dynamics of a spinor BEC driven by
the cavity light field.
The article is organized as follows: In Sec. \ref{sec_model} we present our
model and the effective Hamiltonian is derived for the driven system on
resonance. Sec. \ref{sec_amplify} is devoted to the discussion of how the
Floquet dynamics are affected by the cavity-induced nonlinearity. The
possibility of performing real-time observation of the Floquet dynamics in the
present system is explored in Sec. \ref{sec_measure}. Finally we conclude in
Sec. \ref{sec_conclusion}.
\section{model}
\label{sec_model}
We consider the following model depicted in Fig. \ref{fig_scheme}: A spinor
BEC of $^{87}$Rb atoms with hyperfine spin $F_{g}=1$ confined in an optical
dipole trap is placed inside a unidirectional ring cavity. The intracavity
mode is driven by a coherent laser field with frequency $\omega_{p}$ and
time-dependent amplitude $\varepsilon_{p}\left( t\right) $, which we assume to be
\begin{equation}
\varepsilon_{p}\left( t\right) =\varepsilon_{0}\left[ 1+f_{0}\sin\left(
\omega_{m}t\right) \Theta\left( t\right) \right] , \label{eq_drive}
\end{equation}
with $\Theta\left( t\right) $ the Heaviside step function\ implying that a
sinusoidal modulation around a bias value $\varepsilon_{0}$ is activated at
$t=0$.\ The cavity mode is described by an annihilation operator $\hat{a}$,
which is $\pi$-polarized and characterized by a frequency $\omega_{c}$ and a
decay rate $\kappa$. Furthermore we assume that $\omega_{c}$ is detuned away
from the $F_{g}=1\longleftrightarrow F_{e}=1$ atomic transition such that the
atom-photon interaction is essentially of dispersive nature. The transition
selection rule allows states $\left\vert F_{g}=1,m_{g}=\pm1\right\rangle $ to
be coupled to the corresponding states in the excited manifold with the same
magnetic quantum numbers $\left\vert F_{e}=1,m_{e}=\pm1\right\rangle $ while
it forbids state $\left\vert F_{g}=1,m_{g}=0\right\rangle $ to make dipole
transitions to any excited states. The resulting ac Stark shift of the
$m_{g}=\pm1$ states relative to the $m_{g}=0$ state then generates an effective
quadratic Zeeman energy shift. On the other hand the atomic population can be
redistributed in the ground state manifold via the two-body $s$-wave spin
exchange collisions, which are described by the numbers $c_{0}=4\pi\hbar
^{2}\left( 2a_{2}+a_{0}\right) /3m_{a}$ and $c_{2}=4\pi\hbar^{2}\left(
a_{2}-a_{0}\right) /3m_{a}$ with $m_{a}$ the atom mass and $a_{f}$ the
$s$-wave scattering lengths in the hyperfine channel with a total spin $f=0$
or $2$ \cite{spinor condensate}. We anticipate that this model can be readily
implemented in experiment with the recent advance in coupling ring cavity with
cold atoms \cite{ring cavity1} and BECs \cite{ring cavity2}.\begin{figure}[h]
\includegraphics[width=8cm]{scheme}\caption{{\protect\footnotesize Schematic
diagram for generating cavity-amplified parametric resonance. (a) An }$F=1$
{\protect\footnotesize \ spinor condensate is trapped inside a ring cavity. The
cavity is coherently driven by an external laser with time-dependent amplitude
}$\varepsilon_{p}\left( t\right) ${\protect\footnotesize \ and decays with a
rate }$\kappa${\protect\footnotesize . (b) The cavity field is }$\pi
${\protect\footnotesize -polarized and is dispersively coupled to the atomic
system. Meanwhile, the spin-dependent collisions lead to population
transfer among the three spin components.}}
\label{fig_scheme}
\end{figure}
For the present system we apply the single-mode approximation (SMA), under which
all three atomic spin states are described by the same spatial wavefunction
$\psi\left( \mathbf{r}\right) $. SMA is appropriate for a condensate whose
size is smaller than the spin healing length $\xi_{s}=h/\sqrt{2m_{a}\left\vert
c_{2}\right\vert n}$ ($n$ is the atomic density). The case beyond SMA and with
unbiased driving field was considered in \cite{our model2}.
After adiabatically eliminating the excited atomic level, the atom-cavity
system can be described by the following Hamiltonian in a rotating frame with
$\hbar=1$:
\begin{equation}
\hat{H}=\hat{H}_{0}+\left[ U_{0}\left( \hat{c}_{+}^{\dagger}\hat{c}_{+}
+\hat{c}_{-}^{\dagger}\hat{c}_{-}\right) -\delta_{c}\right] \hat{a}^{\dagger}\hat{a}
+i\varepsilon_{p}\left( t\right) \left( \hat{a}^{\dagger}-\hat{a}\right) \label{eq_h}
\end{equation}
where $U_{0}$ characterizes the strength of atom-photon coupling and
$\delta_{c}=\omega_{p}-\omega_{c}$ is the cavity-pump detuning. $\hat{H}_{0}$
describes the dynamics of the spinor condensate \cite{spinor condensate,
hanPRL1998} and is given by
\begin{equation}
\hat{H}_{0}=\frac{\lambda}{N}\hat{c}_{a}^{\dagger}\hat{c}_{a^{\prime}}^{\dagger}
\mathbf{F}_{ab}\cdot\mathbf{F}_{a^{\prime}b^{\prime}}\hat{c}_{b}\hat{c}_{b^{\prime}} \label{eq_h0}
\end{equation}
with $\lambda=Nc_{2}\int d\mathbf{r}\left\vert \psi\left( \mathbf{r}\right)
\right\vert ^{4}/2$. Here, the total particle number $N=\sum_{s}N_{s}$ is a
constant-of-motion, $\hat{c}_{s}$ ($\hat{c}_{s}^{\dagger}$) is the bosonic
annihilation (creation) operator of the atomic spin-$s$ ($s=0,\pm1$) state,
and the indices $a$, $a^{\prime}$, $b$, $b^{\prime}$ are summed over the
spins. $\mathbf{F}$ are spin-1 matrices with
\begin{align}
F_{x} & =\frac{1}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0
\end{array}
\right) \text{, }F_{y}=\frac{i}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}
0 & -1 & 0\\
1 & 0 & -1\\
0 & 1 & 0
\end{array}
\right) ,\nonumber\\
F_{z} & =\left(
\begin{array}
[c]{ccc}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1
\end{array}
\right) . \label{eq_spin matrix}
\end{align}
The evolution of the cavity-spinor BEC system can be described by the master
equation
\begin{equation}
\frac{d\hat{\rho}}{dt}=-i\left[ \hat{H},\hat{\rho}\right] +\kappa\left(
2\hat{a}\hat{\rho}\hat{a}^{\dagger}-\hat{a}^{\dagger}\hat{a}\hat{\rho}
-\hat{\rho}\hat{a}^{\dagger}\hat{a}\right) , \label{eq_master}
\end{equation}
with $\hat{\rho}$ denoting the total density operator for the atomic spin and
cavity degrees of freedom.
The mean-field equations of motion for the $\mathcal{C}$-numbers
$\alpha=\left\langle \hat{a}\right\rangle $ and $\left\langle \hat{c}_{s}\right\rangle =\sqrt{N\rho_{s}}\exp\left( -i\theta_{s}\right) $
($\rho_{s}$ is the population normalized with respect to the total atom number
$N$ while $\theta_{s}$ is the corresponding phase) can then be derived from
the master equation (\ref{eq_master}) as
\begin{subequations}
\label{eq_mean field}
\begin{align}
\dot{\alpha} & =\left[ i\delta_{c}-iU_{0}N\left( 1-\rho_{0}\right)
-\kappa\right] \alpha+\varepsilon_{p}\left( t\right)
,\label{eq_mean field_a}\\
\dot{\rho}_{0} & =2\lambda\rho_{0}\sqrt{\left( 1-\rho_{0}\right)
^{2}-m^{2}}\sin\theta,\label{eq_mean field_b}\\
\dot{\theta} & =-2U_{0}\left\vert \alpha\right\vert ^{2}+2\lambda\nonumber\\
& \times\left[ 1-2\rho_{0}+\frac{\left( 1-\rho_{0}\right) \left(
1-2\rho_{0}\right) -m^{2}}{\sqrt{\left( 1-\rho_{0}\right) ^{2}-m^{2}}}
\cos\theta\right] , \label{eq_mean field_c}
\end{align}
where $\theta=2\theta_{0}-\theta_{+}-\theta_{-}$ is the relative phase, and
$m=\rho_{+}-\rho_{-}$ the magnetization. Here the overdot denotes the derivative with
respect to time $t$. For simplicity we assume zero magnetization, $m=0$, in the
following discussion.
At this point we specify the parameters used in the present work: For spinor
$^{87}$Rb condensate considered in \cite{chapmanNC2016}, $\lambda=-2\pi
\times14$ Hz and $N=4\times10^{4}$, for typical cavity setup we assume that
$\kappa=2\pi\times1$ MHz, $U_{0}=-2\pi\times10$ Hz, $\varepsilon_{0}=4\kappa$
and $f_{0}=0.1$. By considering the fact that the cavity decay rate $\kappa$
is typically much larger than both the frequency of atomic spin oscillation
(characterized by the intrinsic frequency $\lambda$) and the modulation
frequency $\omega_{m}$ (around hundreds of Hz, as we will show below), we can
adiabatically eliminate $\alpha$ from Eq. (\ref{eq_mean field_a}) and replace
$\alpha$ in Eq. (\ref{eq_mean field_c}) with
\end{subequations}
\begin{equation}
\alpha\left( t\right) \approx\frac{\varepsilon_{p}\left( t\right) }{\kappa-i\delta_{c}+iU_{0}N\left( 1-\rho_{0}\right) }.
\label{eq_adiabatical elimination}
\end{equation}
Thus $\left\vert \alpha\left( t\right) \right\vert ^{2}\approx\left\vert
\alpha_{0}\right\vert ^{2}\left[ 1+2f_{0}\sin\left( \omega_{m}t\right)
\right] $ with $\alpha_{0}=\varepsilon_{0}/\left[ \kappa-i\delta_{c}
+iU_{0}N\left( 1-\rho_{0}\right) \right] $, where we have kept only the
lowest order in $f_{0}$ by considering weak driving.
By introducing $\theta\left( t\right) =\phi\left( t\right) +z\cos\left(
\omega_{m}t\right) $ with $z=2\omega_{0}f_{0}/\omega_{m}$ and $\omega
_{0}=2U_{0}\left\vert \alpha_{0}\right\vert ^{2}$, Eqs. (\ref{eq_mean field_b})
and (\ref{eq_mean field_c}) become
\begin{align}
\dot{\rho}_{0} & =2\lambda\rho_{0}\left( 1-\rho_{0}\right) \sum
_{n=-\infty}^{\infty}J_{n}\left( z\right) \sin\left[ \phi+n\left(
\omega_{m}t+\frac{\pi}{2}\right) \right] ,\nonumber\\
\dot{\phi} & =-\omega_{0}+2\lambda\left( 1-2\rho_{0}\right) \left\{
1\right. \nonumber\\
& \left. +\sum_{n=-\infty}^{\infty}J_{n}\left( z\right) \cos\left[
\phi+n\left( \omega_{m}t+\frac{\pi}{2}\right) \right] \right\} ,
\label{eq_tempo}
\end{align}
where we have implicitly assumed that for evolution at high field with
relatively large $\left\vert \omega_{0}/\lambda\right\vert $ the system is in
the Zeeman energy dominated regime in which the oscillation dynamics are
suppressed, and consequently $\rho_{0}$ and $z$\ can be assumed to be
approximately constant \cite{chapmanNC2016}. Note also that the Jacobi-Anger
expansion
\begin{align}
\cos\left( z\cos\varphi\right) & =\sum_{n=-\infty}^{\infty}J_{n}\left(
z\right) \cos\left[ n\left( \varphi+\frac{\pi}{2}\right) \right]
,\nonumber\\
\sin\left( z\cos\varphi\right) & =\sum_{n=-\infty}^{\infty}J_{n}\left(
z\right) \sin\left[ n\left( \varphi+\frac{\pi}{2}\right) \right]
\label{eq_jacobi expansion}
\end{align}
has been used in deriving Eqs. (\ref{eq_tempo}), where $J_{n}\left(
z\right) $ is the $n$th-order Bessel function of the first kind.
Replacing $\phi\rightarrow\phi-n\left( \omega_{m}t+\pi/2\right) $, one can
see that only at some specific values of $n=k$ with $k\omega_{m}\sim\omega
_{0}$ the value of $\phi$ \textit{does not} monotonically depend on $t$, i.e.,
yielding nonzero time average of $\dot{\rho}_{0}$. Around these specific
values of $k$, giving rise to parametric resonances, Eqs. (\ref{eq_tempo})
reduce to
\begin{align}
\dot{\rho}_{0} & =2\lambda\eta_{k}\rho_{0}\left( 1-\rho_{0}\right)
\sin\phi,\nonumber\\
\dot{\phi} & =\delta_{k}+2\lambda\left( 1-2\rho_{0}\right) \left(
1+\eta_{k}\cos\phi\right) , \label{eq_resonance}
\end{align}
where $\eta_{k}=J_{k}\left( z\right) $ and $\delta_{k}=k\omega_{m}
-\omega_{0}$. The equations of motion (\ref{eq_resonance}) have a similar form
as the secular equations derived in \cite{gerbier2018}. However one should
notice that $\delta_{k}$ relates to $\left\vert \alpha_{0}\right\vert ^{2}$
and thus is a complicated function of $\rho_{0}$, which introduces nonlinearity
into the system.
To illustrate the dynamical properties near parametric resonance, one can use
$\dot{\rho}_{0}=-2\partial H_{k}/\partial\phi$ and $\dot{\phi}=2\partial
H_{k}/\partial\rho_{0}$ to construct, in terms of two conjugate variables
$\rho_{0}$ and $\phi$, the following mean-field Hamiltonian $H_{k}$:
\begin{equation}
H_{k}=\lambda\rho_{0}\left( 1-\rho_{0}\right) \left( 1+\eta_{k}\cos
\phi\right) +U_{k}\left( \rho_{0}\right) , \label{eq_secular energy}
\end{equation}
where
\begin{equation}
U_{k}\left( \rho_{0}\right) =\frac{k\omega_{m}}{2}\rho_{0}+\frac
{\varepsilon_{0}^{2}}{N\kappa}\arctan\left[ \frac{NU_{0}}{\kappa}\left(
1-\rho_{0}\right) -\frac{\delta_{c}}{\kappa}\right] \label{eq_potential}
\end{equation}
represents the cavity-mediated atom-atom interaction.
\section{cavity-amplified parametric resonance}
\label{sec_amplify}
We first consider the cavity-free case where in Eqs. (\ref{eq_mean field})
$U_{0}\left\vert \alpha\right\vert ^{2}$ represents a quadratic Zeeman shift
independent of $\rho_{0}$, then $U_{k}\left( \rho_{0}\right) $ in Eq.
(\ref{eq_potential}) reduces to $\delta_{k}\rho_{0}/2$. If the periodic
modulation is not applied ($f_{0}=0$) one can estimate that $\left\vert
U_{0}\left\vert \alpha\right\vert ^{2}/\lambda\right\vert \approx11.4$ at
$\delta_{c}=0$. One can further show that \cite{zhangPRA2005}, under this high
field the maximum oscillation amplitude for $\rho_{0}$ is approximately $0.02$
when $\rho_{0}\left( 0\right) =0.5$ and goes to zero when $\rho_{0}\left(
0\right) =0$ or $1$. When one approaches the $k$-th parametric resonance with
the periodic modulation applied, we can make use of Eq.
(\ref{eq_secular energy}) and rewrite Eqs. (\ref{eq_resonance}) as
\begin{align}
\left( \dot{\rho}_{0}\right) ^{2} & =4\lambda^{2}\rho_{0}^{2}\left(
1-\rho_{0}\right) ^{2}\left\{ \eta_{k}^{2}-\left[ \frac{H_{k}\left(
\rho_{0}\left( 0\right) ,\phi\left( 0\right) \right) }{\lambda\rho
_{0}\left( 1-\rho_{0}\right) }\right. \right. \nonumber\\
& \left. \left. -\frac{\delta_{k}}{2\lambda\left( 1-\rho_{0}\right)
}-1\right] ^{2}\right\} . \label{eq_pollinomial}
\end{align}
Eq. (\ref{eq_pollinomial}) typically represents an undamped cubic anharmonic
oscillator whose analytical solution can generally be written in the form of
Jacobi elliptic functions.\begin{figure}[h]
\includegraphics[width=8cm]{contour}\caption{{\protect\footnotesize Phase-space
contour plot of }$H_{k}${\protect\footnotesize \ (in units of }$\left\vert
\lambda\right\vert ${\protect\footnotesize ) at the }$k=-1$
{\protect\footnotesize \ resonance. (a) Cavity-free case with }$\delta_{k}
=0${\protect\footnotesize . The cases incorporating cavity backaction are shown
in (b) }$\delta_{k}=0${\protect\footnotesize \ and (c) }$\delta_{k}
=0.09\lambda${\protect\footnotesize \ with }$\delta_{c}=-0.35\kappa$
{\protect\footnotesize . The red-dashed lines refer to the contour determined
by the initial state of the system. The white dots refer to the initial state
of the system, the black dots refer to the equilibrium position, and the
white triangles refer to the states at which the system passes through the
equilibrium position in the pendulum analogy.}}
\label{fig_contour}
\end{figure}
Physical insights into the oscillation properties can be obtained via the
phase-space contour plot of $H_{k}$. We assume that the spinor condensate is
initially prepared in a state with $\rho_{0}\left( 0\right) =0.5$ and
$\theta\left( 0\right) =-\pi$ (corresponding to an effective large negative
quadratic Zeeman energy as compared with \cite{chapmanNC2016} in which the
initial state is $\theta\left( 0\right) =\pi$ with a large positive
quadratic Zeeman energy, $\phi\left( 0\right) =\theta\left( 0\right)
-z+k\pi/2$). When the driving frequency $\omega_{m}$ is appropriately tuned to
the $k=-1$ resonance with $\delta_{k=-1}=0$, the equal-$H_{-1}$ contour
diagram in the phase space defined by the conjugate pair $\left( \phi
,\rho_{0}\right) $ is plotted in Fig. \ref{fig_contour}(a). The contour plot
typically reproduces the phase diagram of a simple pendulum, indicating that
the system evolves along a contour (marked as a red-dashed line) determined by
its initial state (marked as a white dot). The center of the contour (marked
as a black dot) represents the equilibrium position in the pendulum analogy,
which is a stable stationary solution of Eqs. (\ref{eq_resonance}) (the
dynamical properties of the stationary solutions can be studied via the
standard linear stability analysis). The two points marked as white triangles
are two real stationary solutions of Eq. (\ref{eq_pollinomial}) located in the
region $\rho_{0}\in\left[ 0,1\right] $, symbolizing a pendulum passing
through its equilibrium position with maximum speed from different directions.
Their difference gives the oscillation amplitude, which takes a value of around $0.33$.
When the cavity backaction is taken into account, one should notice that the
value of $\delta_{k}$ is implicitly $\rho_{0}$-dependent. We first assume that
the driving frequency $\omega_{m}$ is appropriately tuned to $\delta_{k=-1}=0$
with respect to the initial state of $\rho_{0}\left( 0\right) =0.5$ and
$\theta\left( 0\right) =-\pi$, and the corresponding phase diagram is shown
in Fig. \ref{fig_contour}(b). Although the contour plot still captures the
main features of a pendulum, its topology changes as compared with Fig.
\ref{fig_contour}(a). In this case one cannot find stationary solutions of Eq.
(\ref{eq_pollinomial}) in the $\rho_{0}\in\left[ 0,1\right] $ region,
implying a non-rigid pendulum. The oscillation amplitude is estimated to take
the value of $0.55$, which is much larger than that of the cavity-free case.
If $\omega_{m}$ is tuned to deviate slightly from the resonance, with
$\delta_{k=-1}=0.09\lambda$, as shown in Fig. \ref{fig_contour}(c), the
red-dashed line changes its topology from a closed to an open line, and in the
pendulum analogy it signals that the pendulum swings all the way over the
vertical upright position and continues with the same direction of swing. In
this case the oscillation amplitude has the maximum value of about $0.7$,
about double the cavity-free value. A drastic topology change is
usually associated with additional fixed points (more than $1$ at $\phi=n\pi
$), which can be determined from the stationary solutions of Eqs.
(\ref{eq_resonance}). From numerical simulations we find that for the $k=-1$
resonance additional fixed points appear in the region $\delta_{c}\in\left[
-0.68\text{, }-0.24\right] \kappa$ for the present parameter setup,
indicating that one can seek parametric resonance amplification in this
parameter region. \begin{figure}[h]
\includegraphics[width=8cm]{resonance}\caption{{\protect\footnotesize Oscillation
amplitude }$\Delta\rho_{0}${\protect\footnotesize \ versus modulation
frequency }$\omega_{m}${\protect\footnotesize \ (in units of }$\left\vert
\lambda\right\vert ${\protect\footnotesize ) for (a) the cavity-free case and the
cases with cavity backaction at (b) }$\delta_{c}=-0.35\kappa$
{\protect\footnotesize ; (c) }$\delta_{c}=-0.4\kappa${\protect\footnotesize .
The numbered color zone indicates the parametric region in which the }
$k${\protect\footnotesize -th order resonance is excited.}}
\label{fig_resonance}
\end{figure}
A sketch of cavity-mediated parametric resonance is presented in Fig.
\ref{fig_resonance} via numerical simulations of Eqs. (\ref{eq_mean field}),
in which the regions of different $k$-th order resonances (from $k=-1$ to
$-4$) can be well identified. Since $\omega_{0}<0$ (due to $U_{0}<0$), on
parametric resonances $k$ should take negative values. One can notice that the
oscillation amplitude $\Delta\rho_{0}$ significantly decreases for higher
$\left\vert k\right\vert $-th order resonance and those resonances beyond
$k=-4$ are not marked as the oscillation amplitudes are too small to be
unambiguously distinguished from those not excited. This can be traced to the
coupling coefficient $\eta_{k}=J_{k}\left( z\right) =J_{k}\left(
2\omega_{0}f_{0}/\omega_{m}\right) \sim J_{k}\left( k/5\right) $, from
which one can estimate that the value of $\eta_{k}$ decays from $10^{-1}$ to
$10^{-4}$ when $k$ varies from $-1$ to $-5$. This indicates that
high-$\left\vert k\right\vert $-th order parametric resonances are much less
likely to be excited. In the pendulum analogy it corresponds to the case that
the system evolves along an ellipse with large curvature, i.e., the pendulum
velocity is small while passing through the equilibrium position.
On resonance the oscillation amplitude $\Delta\rho_{0}$ can display a typical
two-peak structure, as can be seen from the $k=-1$ and $-2$ resonances for the
cavity-free case shown in Fig. \ref{fig_resonance}(a). The exact resonance
point $\omega_{m}=\omega_{0}/k$ locates in the middle of the two peaks, which
is also demonstrated in experiment \cite{gerbier2018}. In experiment
\cite{chapmanNC2016} population $\rho_{0}$ are measured after $100$ ms of
parametric excitation and near the lowest-order resonance population $\rho
_{0}$ behaves as a sinusoidal function of $\omega_{m}$ with the resonance point
on the node, which also supports our predictions here. The peaks signal the
critical points at which the pendulum possesses enough energy to pass through
the top position, and they also represent dynamical phase transitions of the
system from $\phi$-running modes to $\phi$-$\pi$ modes. Cavity-induced
nonlinearity substantially modifies the topology of the phase diagram and as
such the two peaks merge into one, as shown in Fig. \ref{fig_resonance}(b) and (c).
More importantly, through cavity-mediated parametric excitation the
oscillation amplitude $\Delta\rho_{0}$ can be significantly amplified. For the
lowest $k=-1$ resonance, Fig. \ref{fig_resonance} demonstrates that cavity
backaction can amplify the oscillation amplitude to the value of $0.83$ as
compared with $0.45$ in the cavity-free case. For high-order resonances such
as $k=-3$, $\Delta\rho_{0}$ can still be amplified to $0.21$ as compared with
the cavity-free value of $0.13$. These results suggest that cavity backaction
can not only make the low-order parametric resonances more prominent, but also
can make the detection of the original weak high-order resonances easier.
\section{measurement discussion}
\label{sec_measure}
In experiments \cite{chapmanNC2016,gerbier2018} spin dynamics are probed via
Stern-Gerlach imaging, which performs fluorescence detection or absorption
imaging after a time-of-flight of spinor condensate in a magnetic field
gradient separating the different spin components. The condensate is
destroyed after each detection, which means one has to repeat the
experiment many times to measure the dynamics. Since the intracavity photon
number $\left\vert \alpha\right\vert ^{2}$ relates to the normalized spin
population $\rho_{0}$ as can be seen from Eq.
(\ref{eq_adiabatical elimination}), this indicates that it can be used for
observing real-time evolution of spin dynamics.\begin{figure}[h]
\includegraphics[width=8cm]{measure}\caption{{\protect\footnotesize (a)
Oscillation amplitude }$\Delta\rho_{0}${\protect\footnotesize \ and (b) the
corresponding cavity oscillation amplitude }$\Delta\left\vert \alpha
_{0}\right\vert ^{2}${\protect\footnotesize \ versus modulation frequency
}$\omega_{m}${\protect\footnotesize \ (in units of }$\left\vert \lambda
\right\vert ${\protect\footnotesize ) at }$\delta_{c}=-0.35\kappa$
{\protect\footnotesize .}}
\label{fig_measure}
\end{figure}
As $\left\vert \alpha\left( t\right) \right\vert ^{2}\approx\left\vert
\alpha_{0}\right\vert ^{2}\left[ 1+2f_{0}\sin\left( \omega_{m}t\right)
\right] $, one can integrate $\left\vert \alpha\left( t\right) \right\vert
^{2}$ over several periods of modulation to eliminate the high-frequency
oscillation while during this relatively short time (compared to the
oscillation period) the value of $\left\vert \alpha_{0}\right\vert ^{2}$ is
roughly unchanged. In Fig. \ref{fig_measure} we plot the oscillation amplitude
of spin population $\Delta\rho_{0}$ as well as that of averaged intracavity
photon number $\Delta\left\vert \alpha_{0}\right\vert ^{2}$, the results
indicate that continuous observation of spin dynamics can be realized via
measuring the corresponding averaged intracavity photon number $\left\vert
\alpha_{0}\right\vert ^{2}$. Parametric resonances can also be well
identified. We note that the idea of probing spin dynamics with cavity
transmission spectra was also proposed in \cite{zhangPRA2009}.
\section{summary and outlook}
\label{sec_conclusion}
It is interesting to note that bistability in a spin-1 condensate was found in
\cite{gerbier2018}, arising from the dissipation of the spinor condensate,
and hysteresis (usually associated with bistability) was observed for long
evolution times. In the present work we concentrate on relatively short-time
dynamics in which spin relaxation will not play a significant role. However we
would like to note that the interplay between the atomic spin mixing and the
cavity light field can lead to a strong matter-wave nonlinearity and
bistability, which has been demonstrated in previous works \cite{our model,our
model2}. So certainly one can expect that bistability will take place with
parametric excitations here even for short times at appropriate conditions.
In summary we have studied nonlinear Floquet dynamics of spinor condensate in
an optical cavity. Floquet driving leads to parametric resonance while the
cavity-induced nonlinearity amplifies it. Since the order of observable
resonances is limited by the maximum achievable quadratic Zeeman energy (maximal magnetic
field) \cite{chapmanNC2016,gerbier2018}, the scheme proposed in
the present work provides a way to experimentally probe high-order parametric
resonances without requiring an increase of the quadratic Zeeman energy.
Feasibility of real-time observation of spin dynamics via cavity output is
also discussed. Other interesting phenomena in this system which can be
modified via the coupling to the cavity, such as quantum spin squeezing
\cite{chapmanNP2012}, entanglement \cite{youexperiment,massonPRL2019,gil} as
well as phase transition \cite{floquet spinor}, will be left for further
investigation. It is also interesting to note that a quite recent work
\cite{clarkNature2019} demonstrated "Floquet polaritons" via the coupling of
Floquet modulated $^{87}$Rb atoms with cavity light modes.
\begin{acknowledgments}
We thank Han Pu and Yongping Zhang for helpful discussions. This work is
supported by National Natural Science Foundation of China (Grants No.
11374003, No. 11574086), the National Key Research and Development Program of
China (Grant No. 2016YFA0302001), and the Science and Technology Commission of
Shanghai Municipality (Grant No. 16DZ2260200).
\end{acknowledgments}
\section{Introduction}
\subsection{Relationalism and product-type actions}
Classical physics is usually taken to follow from difference-type actions\footnote{$\Sigma$
is the notion of space of extent (3-space for field theories and trivial for particle theories, for
which $\int_{\Sigma}\textrm{d}\Sigma$ is taken to become $\times 1$).
$Q^{\Lambda}$ are generalized coordinates with $\Gamma$ a multi-index over particle and/or field species
as well as over space of extent.
I use round brackets for functions and square brackets for functionals.}
\begin{equation}
\mbox{\sffamily I} = \int\textrm{d} \mbox{\tt t}\int_{\Sigma}\textrm{d}\Sigma\{\mbox{\sffamily T}_{\mbox{\scriptsize\tt t}} - \mbox{\sffamily V}\}
\mbox{ } .
\label{diff}
\end{equation}
Here, $\mbox{\sffamily T}_{\mbox{\scriptsize\tt t}}$ is the kinetic term, which is usually\footnote{One can include a sufficient set of
fields in this context to study classical fundamental physics (there being no difficulty
\cite{Van, Phan} with additionally incorporating terms linear in the velocities into this scheme and
this paper's workings, e.g. permitting fermionic as well as bosonic matter coupled to GR.}
homogeneous quadratic in the velocities,
\begin{equation}
\mbox{\sffamily T}_{\mbox{\scriptsize\tt t}} = \frac{1}{2}\sum \mbox{}_{\mbox{}_{\Gamma, \Delta}}M_{\Gamma\Delta}\frac{\textrm{d} Q^{\Gamma}}{\textrm{d} \mbox{\tt t}}
\frac{\textrm{d} Q^{\Delta}}{\textrm{d} \mbox{\tt t}}
\mbox{ } ,
\end{equation}
where $M_{\Gamma\Delta} = M_{\Gamma\Delta}(Q^{\Lambda})$ is the kinetic metric of the configuration
space and $\mbox{\sffamily V}[Q^{\Lambda}]$ is the potential term.
However, classical physics can also be taken to follow from product-type actions
\begin{equation}
\mbox{\sffamily I} = 2\int\textrm{d} \lambda\int_{\Sigma}\textrm{d} \Sigma\sqrt{\mbox{\sffamily T}\mbox{\sffamily W}} \mbox{ } .
\label{prod}
\end{equation}
Here, $\mbox{\sffamily T}$ takes the form
\begin{equation}
\mbox{\sffamily T} = \sum \mbox{}_{\mbox{}_{\Gamma, \Delta}}
M_{\Gamma\Delta}\mbox{\Large$\circ$} Q^{\Gamma}\mbox{\Large$\circ$} Q^{\Delta}/2 \mbox{ } ,
\end{equation}
$\mbox{\Large$\circ$} = \textrm{d}/\textrm{d}\lambda$ and $\lambda$ is a label-time, and $\mbox{\sffamily W}[Q^{\Lambda}]$ is minus the potential
(possibly up to an additive constant energy, as explained in the examples below).
Action (\ref{diff}) leads to action (\ref{prod}) by, firstly (parametrization): adjoining $\mbox{\tt t}$ to the
configuration space so that $\textrm{d} \mbox{\tt t}/\textrm{d}\lambda$ now features in it.
Secondly, provided that $\mbox{\sffamily V}$ is independent of $\mbox{\tt t}$ and $\textrm{d} Q^{\Gamma}/\textrm{d} \mbox{\tt t}$, which can be held to be the
case when one is considering fundamental classical physics of the universe as a whole, Routhian
reduction \cite{Lanczos} subsequently serves to eliminate $\textrm{d} \mbox{\tt t}/\textrm{d}\lambda$ from the variational equation
for $\mbox{\tt t}$.
Product-type actions have advantages over difference-type actions for consideration of whole-universe
fundamental physics -- the setting for quantum cosmology.
Product-type actions arise if one sets up physics from the Leibniz--Mach--Barbour relational first
principles, which both have philosophical significance and are sharply mathematically implementable.
Relationalism provides an alternative foundation for physics to absolutism; which of these to use has
been the subject of a long debate \cite{Newton, AORM}, and relationalism would appear to have a good
case for use in whole-universe situations.
I use relationalism in this Leibniz--Mach--Barbour sense of the word \cite{AORM, BB82, B94I, EOT,
RWR, fqxi, Ak}; see \cite{Rovelli} for Rovelli's distinct use of the same word and \cite{08I} for a
brief comparison of the two.
The Leibniz--Mach--Barbour relational first principles are as follows.
\noindent
A physical theory is {\it temporally relational} if there is no meaningful primary notion of time for
the whole system thereby described (e.g. the universe) \cite{BB82, RWR}.
This is implemented by using actions that are {\it manifestly reparametrization invariant} while also
being free of extraneous time-related variables (such as external Newtonian time or the
geometrodynamical formulation of general relativity (GR)'s lapse coordinate \cite{MTW}).
\noindent
A physical theory is {\it configurationally relational} if a certain group $G$ of transformations that
act on the theory's configuration space $\mbox{\sffamily Q}$ are physically meaningless \cite{BB82, RWR,
ABABFOLanThanABFKO, Van, B03, FORD}.
As subcases, this includes {\it spatially relational} and {\it internally relational} (in the usual
gauge-theoretic sense) depending on the mathematical form and physical interpretation of $G$.
Spatial relationalism suffices for the examples covered in this paper.
It is temporal relationalism that is directly tied to the product-type actions in this paper; the
reparametrization invariance of these actions is clear since changing from $\lambda$ to another
$\lambda^{\prime}$ clearly cancels as $\mbox{\sffamily T}$ is homogeneous quadratic.
Indeed, one could implement temporal relationalism, rather, in a {\it parametrization irrelevant} way
(i.e. one which makes no reference whatsoever to any label-time parameter $\lambda$):
\begin{equation}
\mbox{\sffamily I} = 2\int\int_{\Sigma}\textrm{d} \Sigma\sqrt{\mbox{\sffamily W}\,\sum \mbox{}_{\mbox{}_{\Gamma, \Delta}}
M_{\Gamma\Delta}\textrm{d} Q^{\Gamma}\textrm{d} Q^{\Delta}/2} \mbox{ } .
\end{equation}
Starting from product-type actions, using a $\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$
(`Jacobi--Barbour--Bertotti time') \cite{B94I, SemiclI, SemiclII, fqxi} such that
\begin{equation}
{\textrm{d}}/{\textrm{d} \mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}} = \sqrt{\mbox{\sffamily W}/\mbox{\sffamily T}}\mbox{\Large$\circ$} =
\sqrt{\mbox{\sffamily W}}\textrm{d}/\sqrt{\sum \mbox{}_{\mbox{}_{\Gamma,\Delta}}M_{\Gamma\Delta}\textrm{d} Q^{\Gamma}\textrm{d} Q^{\Delta}/2}
\end{equation}
{\sl is found} to considerably simplify the equations of motion that follow from the relational principles.
In this sense such a $\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$ is a privileged parametrization; it {\sl coincides} with the
conventional difference action's $\mbox{\tt t}$, but is now to be considered as {\sl emergent and provided by the
entirety of the model universe's contents}.\footnote{Barbour
furthermore considers \cite{B94I, fqxi} this to be the timestandard such that isolated observers who
choose to use it obtain clocks that march in step with each others'.
However, as I have never seen this quantitatively demonstrated, it will play no further part in this
paper.}
A further point of note is that product-type actions' reparametrization invariance gives as a primary
constraint
\begin{equation}
N^{\Gamma\Delta}P_{\Gamma}P_{\Delta}/2 + \mbox{\sffamily V} = \mbox{\sffamily E} \mbox{ } ,
\label{set}
\end{equation}
which is quadratic in the momenta. [Here $N^{\Gamma\Delta}$ is the inverse of $M_{\Gamma\Delta}$.]
\subsection{Examples of product-type actions}
\noindent{\bf Example 1)} The Jacobi action \cite{Lanczos} for the Newtonian mechanics of N particles
with positions $\underline{q}_I$, $I$ = 1 to N is
\begin{equation}
\mbox{\sffamily I}_{\mbox{\scriptsize J}\mbox{\scriptsize a}\mbox{\scriptsize c}\so\mbox{\scriptsize b}\mbox{\scriptsize i}} = 2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}\{\mbox{\sffamily U} + \mbox{\sffamily E}\}} \mbox{ } .
\label{7}
\end{equation}
Here
\begin{equation}
\mbox{\sffamily T} = \frac{1}{2}\sum\mbox{}_{\mbox{}_{I = 1}}^{\sN}m_I\{\mbox{\Large$\circ$} q_I\}^2 \mbox{ } ,
\end{equation}
$\mbox{\sffamily U}(Q^{\Lambda})$ is minus the potential term, $\mbox{\sffamily V}$, and $\mbox{\sffamily E}$ is the total energy of the system.
\cite{Lanczos} lucidly covers how to obtain this from $\mbox{\sffamily I} = \int\textrm{d} t\{\mbox{\sffamily T}_t - \mbox{\sffamily V}\}$ by Routhian
reduction.
(I use $t$ in place of $\mbox{\tt t}$ for mechanical theories).
For the recovery from (\ref{7}) of the usual Euler--Lagrange formalism from this but with the emergent
$t^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$ now taking over the role of Newtonian absolute time, see e.g. \cite{B94I, SemiclI}.
The quadratic constraint is in this case the energy constraint $ \sum_{I = 1}^Np_I^2/2 m_I+ \mbox{\sffamily V} = \mbox{\sffamily E}$.
\noindent{\bf Example 2A)} One could consider versions of the Jacobi action for `relational particle
mechanics' theories in which the velocities come with arbitrary Euclidean \cite{BB82, B94I, EOT, GGM06I,
ParisTriCl, 08I, Cones, scaleQM, 08III, Ultra} or similarity \cite{B03, Piombino, ParisTriCl, 06II,
FORD, 08I, 08II, AF, +tri, Ultra} group frame corrections\footnote{Here,
$\stackrel{\longrightarrow}{G_{\mbox{\scriptsize$\circ$}{g}}}$ is the group action corresponding to an infinitesimal group
generator $\mbox{$\circ$}{g}$; note that this notation extends from the current relational particle
mechanics context of the current example
to the GR case of Example 3 as well.}
\begin{equation}
\mbox{\Large$\circ$}_gQ^{\Delta} \equiv {\textrm{d}_g Q^{\Delta}}/{\textrm{d} \lambda} \equiv
\mbox{\Large$\circ$}{Q}^{\Delta} - \stackrel{\longrightarrow}{G_{\mbox{$\circ$}{g}}}Q^{\Delta}
\end{equation}
that implement spatial relationalism as well, so that one is considering mechanical theories that are
temporally {\sl and} spatially relational.
The quadratic constraint continues in this case to be an energy constraint of the form in the preceding
example.
Now one is to use $\textrm{d}_g/\textrm{d} t$ in place of $\textrm{d}/\textrm{d} t$ in (2), $\mbox{\Large$\circ$}_g$ in place of $\mbox{\Large$\circ$}$ in (4, 8) and $\textrm{d}_g Q^{\Gamma}$ in place of
$\textrm{d} Q^{\Gamma}$ in (5) and (6).
Also, variation with respect to the auxiliary variables $g$ produces constraints that are linear in the
momenta.
For Euclidean relational particle mechanics, these are
$\underline{\mbox{\tt P}} = \sum_{I = 1}^{\sN}\underline{p}_I = 0$ (zero total momentum for the model universe) and
$\underline{\mbox{\tt L}} = \sum_{I = 1}^{\sN}\underline{q}_I \mbox{\scriptsize{\bf $\mbox{ } \times \mbox{ }$}} \underline{p}_I = 0$ (zero total angular momentum for the model universe),
while similarity relational particle mechanics has these again alongside $\mbox{\tt D} = \sum_{I = 1}^{\sN}
\underline{q}_I\cdot \underline{p}_I = 0$.
By analogy with Example 3 below, relational particle mechanics are additionally useful models \cite{BS89, K92, B94II, Kieferbook,
06II, SemiclI, SemiclII, Records, SemiclIII, Smolin08, BF08Gryb, 08II, 08III} for the Problem of Time
in Quantum Gravity \cite{K92, I93} and other issues of interest in Quantum Cosmology \cite{DeWitt67,
QCosLit, Wiltshire}.
\noindent{\bf Example 2B)} Relational particle mechanics can be cast in reduced form in spatial
dimension 1 or 2.
Here all the above constraints can be eliminated, producing reduced kinetic terms of the form
$$
\mbox{\sffamily T}^{\mbox{\scriptsize r}\mbox{\scriptsize e}\mbox{\scriptsize d}} = M_{\Gamma\Delta}(Q^{\Lambda})\mbox{\Large$\circ$}{Q}^{\Gamma}\mbox{\Large$\circ$}{Q}^{\Delta}/2
$$
for $M_{\Gamma\Delta}$ the usual metric on $\mathbb{S}^{\sN - 2}$ for N particles in 1-d and the
Fubini--Study metric on $\mathbb{CP}^{\sN - 2}$ for N particles in 2-d.
The quadratic constraint is still an energy constraint of form (\ref{set}) built from inverses of the
above-mentioned metrics.
\noindent{\bf Example 3A)} An action (see e.g. \cite{RWR, Phan}) for a geometrodynamical formulation of
GR [in terms of 3-metrics $h_{\mu\nu}(x^{\omega})$ on a fixed topology $\Sigma$, for simplicity taken to be compact without
boundary; $x^{\omega}$ are spatial coordinates] is
\begin{equation}
\mbox{\sffamily I}^{\mbox{\scriptsize B}\mbox{\scriptsize F}\mbox{\scriptsize O}-\sA}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} = 2\int\textrm{d}\lambda\int_{\Sigma}\textrm{d}^3{x}\sqrt{h}\sqrt{\mbox{\sffamily T}_{\mbox{\scriptsize B}\mbox{\scriptsize F}\mbox{\scriptsize O}-\sA}
\{\mbox{Ric}(h) - 2\Lambda\}} \mbox{ } .
\label{BFOA}
\end{equation}
Here,
\begin{equation}
\mbox{\sffamily T}^{\mbox{\scriptsize B}\mbox{\scriptsize F}\mbox{\scriptsize O}-\sA}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} =
{\cal M}^{\mu\nu\rho\sigma}\mbox{\Large$\circ$}_{\mbox{\scriptsize F}}h_{\mu\nu}\mbox{\Large$\circ$}_{\mbox{\scriptsize F}}h_{\rho\sigma}/4
\mbox{ } , \mbox{ }
\mbox{\Large$\circ$}_{\mbox{\scriptsize F}}h_{\mu\nu} \equiv \mbox{\Large$\circ$}{h}_{\mu\nu} -
\stackrel{\longrightarrow}{\mbox{Diff}_{\mbox{$\circ$}{\mbox{\scriptsize F}}}}h_{\mu\nu} =
\mbox{\Large$\circ$}{h}_{\mu\nu} - \pounds_{\mbox{$\circ$}{\mbox{\scriptsize F}}}h_{\mu\nu} \mbox{ } ,
\label{TBFO}
\end{equation}
where ${\cal M}^{\mu\nu\rho\sigma} = h^{\mu\rho}h^{\nu\sigma} - h^{\mu\nu}h^{\rho\sigma}$ (the GR
configuration space metric, alias inverse of the undensitized DeWitt supermetric \cite{DeWitt67}), Diff
is the group of 3-diffeomorphisms on $\Sigma$, $\pounds_{\mbox{$\circ$}{\mbox{\scriptsize F}}}$ is the Lie derivative with respect
to the `velocity of the frame' $\mbox{F}_{\mu}$,
Ric($h$) is the Ricci 3-scalar corresponding to $h_{\mu\nu}$, $h$ is the determinant of $h_{\mu\nu}$ and
$\Lambda$ is the cosmological constant.
This action would be the better-known Baierlein--Sharp--Wheeler (BSW) \cite{BSW} one if the kinetic
term were, rather, $\mbox{\sffamily T}_{\mbox{\scriptsize B}\mbox{\scriptsize S}\mbox{\scriptsize W}}$ which is the same up to being built out of shift corrections
$\beta^{\mu}(x^{\omega})$ in place of `velocities of the frame'
$\mbox{\Large$\circ$}{\mbox{F}}^{\mu}(x^{\omega})$.\footnote{There are various
other equivalent pairs of Principles of Dynamics objects in this paper that are related to each other
by the one using auxiliary frame velocities where the other uses auxiliary coordinates (and, sometimes
additionally, auxiliary {\sl instant} velocities, $\mbox{$\circ$}{\mbox{\scriptsize I}}$, in place of of auxiliary lapse coordinates,
$\alpha$).
For details of how the equivalence of each of these pairs works out, see \cite{FEPI}.}
However it would not then be manifestly temporally relational.
Moreover, the BSW action is equivalent to the even more familiar `Lagrangian ADM' \cite{ADM} action,
\begin{equation}
\mbox{\sffamily I}^{\sL-\sA\mbox{\scriptsize D}\sM}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} = \int\textrm{d}\lambda\int_{\Sigma}\textrm{d}^3x\sqrt{h}\alpha
\{{\mbox{\sffamily T}_{\sA\mbox{\scriptsize D}\sM}}/{\alpha^2} + \mbox{Ric}(h) - 2\Lambda\}
\label{ADM}
\end{equation}
(for $\mbox{\sffamily T}_{\sA\mbox{\scriptsize D}\sM}$ taking the same form as the above $\mbox{\sffamily T}_{\mbox{\scriptsize B}\mbox{\scriptsize S}\mbox{\scriptsize W}}$).
The former follows from the latter by elimination of the Lagrange multiplier coordinate lapse $\alpha$
from its own variational equation.
Parallelly \cite{FEPI}, one can also obtain the BFO-A action from a likewise-unfamiliar action that is the
Lagrange--ADM's equivalent pair in the sense of footnote 4,
\begin{equation}
\mbox{\sffamily I}^{\sA}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} = \int\textrm{d}\lambda\int_{\Sigma}\textrm{d}^3x\sqrt{h}\mbox{\Large$\circ$}{\mbox{I}}
\{ \mbox{\sffamily T}^{\sA}_{\mbox{\scriptsize G}\mbox{\scriptsize R}}/\{\mbox{\Large$\circ$}{\mbox{I}}\}^2 + \mbox{Ric}(h) - 2\Lambda\}
\label{A}
\end{equation}
(for $\mbox{\sffamily T}^{\sA}_{\mbox{\scriptsize G}\mbox{\scriptsize R}}$ taking the same form as $\mbox{\sffamily T}_{\mbox{\scriptsize B}\mbox{\scriptsize F}\mbox{\scriptsize O}-\sA}$).
The former now follows from the latter by using Routhian reduction to eliminate $\mbox{\Large$\circ$}{\mbox{I}}$ (an even
closer parallel of the equivalence at the end of Example 1 than the preceding coordinate elimination).
The quadratic constraint is now the GR Hamiltonian constraint ${\cal H} \equiv
{\cal N}_{\mu\nu\rho\sigma}\pi^{\mu\nu}\pi^{\rho\sigma}/\sqrt{h} - \sqrt{h}\{\mbox{Ric}(h) - 2\Lambda\}
= 0$ for ${\cal N}_{\mu\nu\rho\sigma}$ the inverse of ${\cal M}^{\mu\nu\rho\sigma}$ (i.e. the
undensitized DeWitt supermetric itself), while the linear constraint from variation with respect to
$F^{\mu}$ is the GR momentum constraint, ${\cal L}_{\mu} \equiv - 2D_{\nu}\pi^{\nu}\mbox{}_{\mu} = 0$.
\noindent{\bf Example 3B)}
Each of the pairs (\ref{BFOA}, BSW) and (\ref{ADM}, \ref{A}) become indistinguishable for minisuperspace.
Therein, ${\cal M}^{\mu\nu\rho\sigma}(h_{\gamma\delta}(x^{\omega}))$ collapses to an ordinary
$6 \times 6$ matrix $M_{\Gamma\Delta}$ or, further, in the diagonal case, to a $3 \times 3$ matrix
$M_{\Gamma\Delta}$ -- the `minisupermetric'.
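\mbox{ }

\noindent
Purely as an illustrative aid of my own (and not anything from the cited literature), the following short
sympy fragment builds ${\cal M}^{\mu\nu\rho\sigma} = h^{\mu\rho}h^{\nu\sigma} - h^{\mu\nu}h^{\rho\sigma}$
for a diagonal 3-metric and reads off the resulting $3 \times 3$ minisupermetric, making its indefinite
$(-,+,+)$ signature manifest.
\begin{verbatim}
# Illustrative sympy sketch (not from the cited literature): the 3 x 3
# minisupermetric for a diagonal 3-metric h = diag(h1, h2, h3).
import sympy as sp

h1, h2, h3 = sp.symbols('h1 h2 h3', positive=True)
hinv = sp.diag(h1, h2, h3).inv()

def M(mu, nu, rho, sig):  # configuration space metric M^{mu nu rho sigma}
    return hinv[mu, rho]*hinv[nu, sig] - hinv[mu, nu]*hinv[rho, sig]

# restrict the quadratic form to the diagonal velocities hdot_{mu mu}:
mini = sp.Matrix(3, 3, lambda G, D: sp.simplify(M(G, G, D, D)))
print(mini)  # zero diagonal; off-diagonal entries -1/(h_mu h_nu)
# signature check at the sample point h1 = h2 = h3 = 1:
print(mini.subs({h1: 1, h2: 1, h3: 1}).eigenvals())  # {-2: 1, 1: 2}
\end{verbatim}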
\subsection{Banal-conformal invariance of product-type actions and its consequences}
In Sec 2, I start at the classical level, both to set up the foundations for the paper's principal
quantum cosmology operator ordering issue and also to consider a second classical issue concerning
parametrization of dynamical curves.
In considering these two applications together, I follow Misner's Hamiltonian treatment \cite{Magic};
my work differs from his in considering the Lagrangian formulation and the Jacobi formulation, which
rests on relational first principles.
I show that product-type actions are preserved under the simple and natural
{\it banal conformal transformation}
\begin{equation}
\mbox{\sffamily T} \longrightarrow \widetilde{\mbox{\sffamily T}} = \Omega^2 \mbox{\sffamily T} \mbox{ }
\mbox{ } , \mbox{ } \mbox{ }
\mbox{\sffamily W} \longrightarrow \widetilde{\mbox{\sffamily W}} = \mbox{\sffamily W}/\Omega^2 \mbox{ } .
\end{equation}
This makes the kinetic factor a banal-vector and the potential factor a banal covector.
Moreover the first of these can be viewed as
\begin{equation}
M_{\Gamma\Delta} \longrightarrow \widetilde{M}_{\Gamma\Delta} = \Omega^2M_{\Gamma\Delta} \mbox{ } ,
\end{equation}
so that the kinetic metric is a banal-vector.
It immediately follows from the above banal transformation that the emergent timefunction can be
regarded as a banal covector.
If one then considers this scaling property to carry over to the difference-type action formulations'
timefunction, a more complicated manifestation of banal-conformal invariance is discovered for
difference-type actions.
Clearly, performing such a transformation should not (and does not) affect one's classical equations of
motion.
Moreover, working through how the scaling of $\mbox{\sffamily T}$, $\mbox{\sffamily W}$ and the timefunction conspire to cancel out
at the level of the classical equations of motion reveals interesting connections between the
simplifying effects of using the emergent timefunction on the equations of motion and those of the
rather better-known affine parametrization \cite{Wald, Stewart}.
Section 2 ends by preparing for quantization by discussing how momenta, constraints and
Hamiltonian-type objects banal-scale.
If one then (Sec 3) wishes for this banal-conformal invariance -- displayed simply and naturally by
relationalism-implementing product actions for whole-universe fundamental physics -- to continue to
hold at the {\sl quantum} level, then this, alongside the otherwise theoretically-desirable (and fairly
standard) requirement that one's quantum theory should not depend on how $\mbox{\sffamily Q}$ is coordinatized,
leads one to the operator ordering for $N^{\Gamma\Delta}(Q^{\Lambda})P_{\Gamma}P_{\Delta}$ that is based
on the conformally-invariant modification of the Laplacian.
The latter requirement is due to DeWitt \cite{DeWitt57} (see also \cite{Magic, K73, HPT, HP86}), and holds
for the 1-parameter family of scalar operators
\begin{equation}
\nabla^2 - \xi\mbox{Ric($M$)}
\label{Family}
\end{equation}
where $\mbox{Ric($M$)}$ is the Ricci scalar corresponding to the configuration space metric
$M_{\Gamma\Delta}$ and $\nabla^2$ is the Laplacian,
\begin{equation}
\nabla^2 = \frac{1}{\sqrt{M}} \frac{\nabla}{\nabla {Q}^{\Gamma}}
\left\{
\sqrt{M}N^{\Gamma\Delta}\frac{\nabla}{\nabla{Q}^{\Delta}}
\right\} \mbox{ } .
\end{equation}
Here, $\nabla$ denotes the partial derivative for finite theories and the functional derivative for field
theories, and $\sqrt{M} = \sqrt{\mbox{det}(M)}$.
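\mbox{ }

\noindent
As a simple illustration of this coordinate independence in the finite case (a check of my own, with an
arbitrarily chosen test function), one can build $\nabla^2$ from the above formula in two different
coordinatizations of the same flat 2-$d$ configuration space and confirm that the results agree.
\begin{verbatim}
# Illustrative sympy check (my own) of the coordinate independence of the
# Laplacian: the flat 2-d kinetic metric in Cartesian versus polar coordinates.
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)

def laplacian(Mmat, coords, f):
    # (1/sqrt(M)) d_Gamma { sqrt(M) N^{Gamma Delta} d_Delta f }
    N, s = Mmat.inv(), sp.sqrt(Mmat.det())
    return sum(sp.diff(s*N[i, j]*sp.diff(f, coords[j]), coords[i])
               for i in range(2) for j in range(2))/s

psi = x**3*y - x*y**2                      # arbitrary test function
lap_cart = laplacian(sp.eye(2), (x, y), psi)
to_polar = {x: r*sp.cos(th), y: r*sp.sin(th)}
lap_pol = laplacian(sp.diag(1, r**2), (r, th), psi.subs(to_polar))
print(sp.simplify(lap_pol - lap_cart.subs(to_polar)))  # 0
\end{verbatim}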
The conformal ordering, which fixes a particular value of $\xi$, had been previously suggested by e.g.
Misner \cite{Magic}, Halliwell \cite{Halliwell}, Moss \cite{Moss} and Ryan--Turbiner \cite{RT},
albeit without any reference to the immediacy of this in product-type actions which themselves rest
on the relationalist first principles.
Additionally, Kucha\v{r} \cite{K73} and Henneaux--Pilati--Teitelboim \cite{HPT} have advocated the Laplacian
ordering itself ($\xi = 0$).
So have Page \cite{Page91}, Louko \cite{Louko} and Barvinsky \cite{Barvin}, however their specific
examples are 2-dimensional, for which the Laplacian and conformal orderings coincide.
Wiltshire advocates both \cite{Wiltshire}.
Christodoulakis and Zanelli \cite{CZ} consider the case with an arbitrary $\xi$, as do Hawking and
Page \cite{HP86}, albeit the latter then also pass to a 2-d example for which $\xi$ drops out.
Finally, all of these orderings coincide to O($\hbar$) \cite{Barvin}.
The reason why arguments for such a choice of operator ordering are of interest is the well-known
physical, as well as merely formal, inequivalence of different operator orderings, so that how to make
such a choice is a major issue in Quantum Gravity and Quantum Cosmology \cite{DeWitt57, DeWitt67,
Magic, K73, HPT, HP86, CZ, Halliwell, Moss, Page91, Louko, Barvin, K92, I93, RT, K78Thiemann}.
Thus my answer that relationalism and coordinate invariance combine to imply the elegant and
mathematically well-distinguished conformal ordering should be of considerable interest.
Since the conformal ordering is hereby revealed to possess philosophical foundations of this kind,
alongside the technical advantages already found in the above-cited papers, I argue that the case for
adopting it is considerably strengthened.
\section{Banal invariance of product-type actions, and classical consequences}
The equations of motion that follow from the spatially relational generalization of (3) are the
momentum--velocity relations
\begin{equation}
P_{\Gamma} = \sqrt{{\mbox{\sffamily W}}/{\mbox{\sffamily T}}}M_{\Gamma\Delta}\mbox{\Large$\circ$}_g Q^{\Delta}
\end{equation}
and the Euler--Lagrange equations
\begin{equation}
\mbox{\Large$\circ$}
\left\{
\sqrt{{\mbox{\sffamily W}}/{\mbox{\sffamily T}}}\,\mbox{\Large$\circ$}_gQ^{\Gamma}
\right\} + \sqrt{\mbox{\sffamily W}/\mbox{\sffamily T}}\,{\Gamma^{\Gamma}}\mbox{}_{\Delta\Lambda}\mbox{\Large$\circ$}_gQ^{\Delta}\mbox{\Large$\circ$}_gQ^{\Lambda} =
\sqrt{{\mbox{\sffamily T}}/{\mbox{\sffamily W}}}\,\nabla^{\Gamma}\mbox{\sffamily W} + M_{\Delta\Lambda}\sqrt{\mbox{\sffamily W}/\mbox{\sffamily T}}\,\mbox{\Large$\circ$}_{g}Q^{\Delta}
\nabla^{\Gamma}\{\stackrel{\longrightarrow}{G_{\mbox{\Large$\circ$}{g}}}Q^{\Lambda}\}
\label{tw}
\end{equation}
(for $\nabla^{\Gamma} \equiv \nabla/\nabla Q_{\Gamma}$ and $\Gamma^{\Gamma}\mbox{}_{\Delta\Lambda}$ the
configuration space Christoffel symbols).
Upon inspection (see e.g. \cite{B94I}), (\ref{tw}) simplifies for particular choices of parameter
in 2 generally different ways (the two coincide if the potential is constant).
\noindent
A)
$$
\frac{\textrm{d}}{\textrm{d}\lambda}
\left\{
\sqrt{\frac{\mbox{\sffamily W}}{\mbox{\sffamily T}}}\frac{\textrm{d} Q^{\Delta}}{\textrm{d}\lambda}
\right\}
= \sqrt{\frac{\mbox{\sffamily W}}{\mbox{\sffamily T}}}\frac{\textrm{d}^2Q^{\Delta}}{\textrm{d} \lambda^2} +
\frac{1}{2\sqrt{\mbox{\sffamily W}\mbox{\sffamily T}}}\frac{\textrm{d}\mbox{\sffamily W}}{\textrm{d}\lambda}\frac{\textrm{d} Q^{\Delta}}{\textrm{d}\lambda} -
\frac{1}{2}\frac{\textrm{d} \mbox{\sffamily T}}{\textrm{d} \lambda}\sqrt{\frac{\mbox{\sffamily W}}{\mbox{\sffamily T}^3}}\frac{\textrm{d} Q^{\Delta}}{\textrm{d} \lambda}
\mbox{ versus } \frac{\textrm{d}^2 Q^{\Delta}}{\textrm{d} \mu^2}
$$
which simplification corresponds to the choice $\frac{\textrm{d}}{\textrm{d} \mu} = \sqrt{\frac{\mbox{\sffamily{\scriptsize W}}}{\mbox{\sffamily{\scriptsize T}}}}\frac{\textrm{d}}{\textrm{d}\lambda}$; this parameter
$\mu$ we denote by $\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$: emergent Jacobi--Barbour--Bertotti time \cite{B94I, RWR}.
In the case of a mechanical theory, this emergent time turns out
also to imply conservation of energy and to amount to a recovery of Newtonian time;
it is also aligned with the mechanics case's emergent semiclassical (WKB) time \cite{SemiclI}.
In the case of geometrodynamics (I use $T$ in place of $\mbox{\tt t}$ in this context), this emergent time amounts to
a recovery of local proper time, as well as being aligned with the geometrodynamical emergent
semiclassical (WKB) time \cite{SemiclII}, and corresponding to cosmic time in the case of homogeneous
cosmology; a toy numerical illustration of simplification A) is given just after B) below.
\noindent B) $\nabla^{\Gamma}\mbox{\sffamily W} \neq 0$ versus $\nabla^{\Gamma}\mbox{\sffamily W} = 0$, the latter corresponding to `the dynamical curve
being an affinely-parametrized geodesic on configuration space'.
In this case I denote the time parameter by $\mbox{\tt t}^{\mbox{\scriptsize a}\mbox{\scriptsize f}\sf-\sg\mbox{\scriptsize e}\so}$.
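\mbox{ }

\noindent
As promised above, here is a toy numerical illustration of simplification A) (an example of my own, with
arbitrarily chosen numbers): a 2-$d$ isotropic harmonic oscillator trajectory is deliberately given a
`bad' monotonic label $\lambda$, and integrating $\textrm{d}\mbox{\tt t} = \sqrt{\mbox{\sffamily T}/\mbox{\sffamily W}}\,\textrm{d}\lambda$ then returns the
Newtonian time to within discretization error.
\begin{verbatim}
# Toy numerical check (my own illustration): emergent Jacobi time recovers
# Newtonian time for x = cos t, y = sin t (m = 1, V = (x^2+y^2)/2, E = 1),
# after reparametrizing the curve by a monotonic but non-affine label.
import numpy as np

lam = np.linspace(0.0, 8.0, 80001)
t_true = lam + 0.3*np.sin(lam)          # dt/dlambda = 1 + 0.3 cos(lambda) > 0
x, y = np.cos(t_true), np.sin(t_true)

T = 0.5*(np.gradient(x, lam)**2 + np.gradient(y, lam)**2)  # label-time kinetic term
W = 1.0 - 0.5*(x**2 + y**2)                                # W = E - V

integrand = np.sqrt(T/W)                # dt = sqrt(T/W) dlambda
t_emergent = np.concatenate(([0.0],
    np.cumsum(0.5*(integrand[1:] + integrand[:-1])*np.diff(lam))))
print(np.max(np.abs(t_emergent - t_true)))  # ~1e-8: Newtonian time recovered
\end{verbatim}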
Next, note the following properties.
\mbox{ }
\noindent 1) The action (\ref{prod}) is banal-conformally invariant under
\begin{equation}
\mbox{\sffamily T} \longrightarrow \widetilde{\mbox{\sffamily T}} = \Omega^2\mbox{\sffamily T} \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\mbox{\sffamily W} \longrightarrow \widetilde{\mbox{\sffamily W}} = \mbox{\sffamily W}/\Omega^2 \mbox{ } \mbox{ } :
\end{equation}
$$
\widetilde{\mbox{\sffamily I}} = 2\int\textrm{d}\lambda\int_{\Sigma}\textrm{d}\Sigma \sqrt{ \widetilde{\mbox{\sffamily T}}\widetilde{\mbox{\sffamily W}} } =
2\int\textrm{d}\lambda \int_{\Sigma}\textrm{d}\Sigma \sqrt{ \Omega^2{\mbox{\sffamily T}}\mbox{\sffamily W}/\Omega^2} =
2\int\textrm{d}\lambda \int_{\Sigma}\textrm{d}\Sigma \sqrt{\mbox{\sffamily T}\mbox{\sffamily W}} \mbox{ } .
$$
2) Consequently, ${\textrm{d}}/{\textrm{d} \mbox{(emergent time)}}$ is a banal-conformal covector,
$$
\widetilde{\mbox{\Large$\ast$}} \equiv \sqrt{\widetilde{\mbox{\sffamily W}}/\widetilde{\mbox{\sffamily T}}}\mbox{\Large$\circ$} =
\sqrt{\{\mbox{\sffamily W}/\Omega^2\}/{\Omega^2\mbox{\sffamily T}}}\mbox{\Large$\circ$} = \Omega^{-2}\sqrt{{\mbox{\sffamily W}}/{\mbox{\sffamily T}}}\mbox{\Large$\circ$} = \Omega^{-2}\mbox{\Large$\ast$}
\mbox{ } .
$$
Thus the emergent time \cite{B94I, B94II, SemiclI} depends on the choice
of banal-conformal factor.
In the case of geometrodynamics, one can also think of ${\textrm{d}}/{\textrm{d}\lambda}$ being invariant and
${1}/{\widetilde{\mbox{\Large$\circ$}{I}}} = {1}/\{\Omega^2 \mbox{\Large$\circ$}{I}\}$ so that the velocity of the instant scales as a
banal-conformal vector $\mbox{\Large$\circ$}{I} \longrightarrow \Omega^2\mbox{\Large$\circ$}{I}$ (or, equivalently, the emergent lapse
coordinate scales as a banal-conformal vector $N \longrightarrow \Omega^2N$).
\noindent
To avoid confusing `$\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$ as present in the previous literature' with the banal covector
discovered in this paper, I denote the latter by $\vec{\mbox{\tt t}}$.
I also use the notation
$$
\mbox{\Large$\ast$} \equiv {\textrm{d}}/{\textrm{d} \vec{\mbox{\tt t}}} = \sqrt{{\mbox{\sffamily W}}/{\mbox{\sffamily T}}}\mbox{\Large$\circ$} \mbox{ } .
$$
\mbox{ }
\noindent 3) Next observe that, provided that its timefunction scales as $\vec{\mbox{\tt t}}$ does, the
difference-type Euler--Lagrange action is also banal-conformally invariant (albeit in a more complicated
way):
$$
\widetilde{\mbox{\sffamily I}} =
\int\int_{\Sigma}\textrm{d}\Sigma\{\widetilde{\mbox{\sffamily T}_{\mbox{\scriptsize\tt t}}} -
\widetilde{\mbox{\sffamily V}}\}\textrm{d}\widetilde{\vec{\mbox{\tt t}}}
= \int\int_{\Sigma}\textrm{d}\Sigma
\left\{
\widetilde{M}_{\Gamma\Delta}\widetilde{\mbox{\Large$\ast$}_g}Q^{\Gamma}\widetilde{\mbox{\Large$\ast$}_g}Q^{\Delta}/2
- \widetilde{\mbox{\sffamily V}}
\right\}
\textrm{d}\widetilde{\vec{\mbox{\tt t}}} =
\int\int_{\Sigma}\textrm{d}\Sigma
\{\Omega^2M_{\Gamma\Delta}\Omega^{-2}\mbox{\Large$\ast$}_gQ^{\Gamma}\Omega^{-2}\mbox{\Large$\ast$}_gQ^{\Delta}/2
- \Omega^{-2}{\mbox{\sffamily V}}\}\Omega^2\textrm{d} \vec{\mbox{\tt t}}
$$
$$
=
\int\int_{\Sigma}\textrm{d}\Sigma\{M_{\Gamma\Delta}\mbox{\Large$\ast$}_gQ^{\Gamma}\mbox{\Large$\ast$}_g Q^{\Delta}/2 -
{\mbox{\sffamily V}}\}\textrm{d} \vec{\mbox{\tt t}} = \mbox{\sffamily I} \mbox{ } .
$$
\noindent 4) The Euler--Lagrange equations of motion following from (\ref{prod}) will clearly be
invariant under the {\it full banal-conformal transformation} $(\mbox{\sffamily T}, \mbox{\sffamily W}, \mbox{\Large$\ast$}) \longrightarrow
(\widetilde{\mbox{\sffamily T}}, \widetilde{\mbox{\sffamily W}}, \widetilde{\mbox{\Large$\ast$}}) = (\Omega^2\mbox{\sffamily T}, \Omega^{-2}\mbox{\sffamily W}, \Omega^{-2}\mbox{\Large$\ast$})$,
as the action that they follow from is.
\noindent
[Note that in the case of the relational formulation, the banal transformation of $\mbox{\sffamily T}$, $\mbox{\sffamily W}$ directly
implies the full banal transformation, so that these are not here distinct entities.]
\mbox{ }
\noindent
This seemingly trivial extra fact does however generate some interesting comments when one looks at the
details of the cancellations at the level of the Euler--Lagrange equations themselves.
Let us begin with the largely-sufficient finite and trivially spatially relational case (i.e. mechanics
with temporal relationalism only, fully reduced 1- or 2-d relational particle mechanics and
minisuperspace).
Then the Euler--Lagrange equations are
\begin{equation}
D^2 Q^{\Gamma}/D\vec{\mbox{\tt t}}^{2} \equiv
\mbox{\Large$\ast$}\Star Q^{\Gamma} + \Gamma^{\Gamma}\mbox{}_{\Lambda\Sigma}\mbox{\Large$\ast$} Q^{\Lambda}\mbox{\Large$\ast$} Q^{\Sigma}
= \partial^{\Gamma}\mbox{\sffamily W} \mbox{ } ,
\label{*}
\end{equation}
which is the geodesic equation modulo the right-hand-side term ($D/D\vec{\mbox{\tt t}}$ being the absolute
derivative with respect to $\vec{\mbox{\tt t}}$).
The path of motion is not in general an affinely-parametrized geodesic [`simplification B)']; however, a
banal conformal transformation to one does exist, in the following sense.
\noindent
Case 1: if $\mbox{\sffamily W}$ is prescribed as a constant, then (\ref{*}) {\sl is} the geodesic equation, which is
the case in mechanics if $\mbox{\sffamily V}$ is constant and in (for the moment) minisuperspace GR if $\mbox{\sffamily R}$ is
constant.
Indeed, this corresponds to having an action proportional to $\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}}$ and so to
$\int\textrm{d} s$ for $\textrm{d} s^2$ the line-element corresponding to the kinetic metric ${M}_{\Gamma\Delta}$.
\noindent
Case 2: if not, banal conformal-transform with $\Omega^2 = k\mbox{\sffamily W}$ for $k$ constant so $\mbox{\sffamily T}
\longrightarrow \widetilde{\mbox{\sffamily T}} = k\mbox{\sffamily W}\mbox{\sffamily T}$ and $\mbox{\sffamily W} \longrightarrow \widetilde{\mbox{\sffamily W}} = \mbox{\sffamily W}/k\mbox{\sffamily W} = 1/k$.
This corresponds to obtaining an action $\mbox{\sffamily I} \propto \int\textrm{d}\widetilde{s}$ and passing from
$\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$ to $\mbox{\tt t}^{\mbox{\scriptsize a}\mbox{\scriptsize f}\sf-\sg\mbox{\scriptsize e}\so}$, that is from banal conformal factor $\Omega^2 = 1$
to banal conformal factor $\Omega^2 = k\mbox{\sffamily W}$.
Thus simplifications A) and B) are related by a banal transformation.
Moreover, case 2 has range of validity caveats \cite{Magic, BT} for regions containing zeros of $\mbox{\sffamily W}$ as
the conformal transformation's definition precludes these; infinities and non-smoothnesses of $\mbox{\sffamily W}$ can
likewise be disruptive.
In the minisuperspace case, this paragraph's contents were spelled out by Misner \cite{Magic} following
more partial mention in earlier work of DeWitt \cite{DeWitt70}.
In mechanics, this is in e.g. \cite{Lanczos, B94I}.
I then ask the following question.
How does performing two transformations -- conformal transformation and non-affine parametrization --
each of which complicates the equations of motion, nevertheless work out to preserve them when applied
together?
Understanding this requires looking at the alternative, longer proof of 4) at the level of the equations
of motion themselves.
By
$$
{\widetilde{\Gamma}^{\Gamma}}\mbox{}_{\Delta\Lambda} = {\Gamma^{\Gamma}}_{\Delta\Lambda} +
\{2{\delta^{\Gamma}}_{(\Delta} \partial_{\Lambda)}\Omega - M_{\Delta\Lambda}\partial^{\Gamma}\Omega\}/\Omega
\mbox{ } ,
$$
symmetry and the definition of $\mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}}$ in terms of velocities with respect to $\vec{\mbox{\tt t}}$,
$$
{\widetilde{\Gamma}^{\Gamma}}\mbox{}_{\Delta\Lambda}\mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Lambda} =
\Omega^{-4}{\Gamma^{\Gamma}}_{\Delta\Lambda}\mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Lambda} +
2\Omega^{-5}\{\partial_{\Delta}\Omega\mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Gamma} - \mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}}\partial^{\Gamma}\Omega\}
\mbox{ } ,
$$
which, then, alongside using obvious product rule expressions for $\mbox{\Large$\ast$}\{\Omega^{-2}\mbox{\Large$\ast$} Q^{\Gamma}\}$
and $\partial^{\Gamma}\{\Omega^{-2}{\mbox{\sffamily W}}\}$ gives
$$
0 = \mbox{\Large$\ast$}\Star Q^{\Gamma} +
{\widetilde{\Gamma}^{\Gamma}}\mbox{}_{\Lambda\Delta}\mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Lambda} -
\widetilde{\partial}^{\Gamma}\widetilde{\mbox{\sffamily W}} =
$$
\begin{equation}
\Omega^{-4}
\mbox{\Large\{}
\mbox{\Large$\ast$}\Star Q^{\Gamma} + {{\Gamma}^{\Gamma}}_{\Lambda\Delta}\mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Lambda}
- {\partial}^{\Gamma}{\mbox{\sffamily W}}
\mbox{\Large\}}
+ 2\Omega^{-5}
\mbox{\Large\{}
\partial_{\Delta}\Omega \mbox{\Large$\ast$} Q^{\Delta}\mbox{\Large$\ast$} Q^{\Gamma} + \{\mbox{\sffamily W} - \mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}}\}\partial^{\Gamma}\Omega
\mbox{\Large\}}
\mbox{ } .
\end{equation}
Then the second big bracket cancels by the chain rule and conservation of energy in mechanics:
$\mbox{\sffamily W} - \mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}} = \mbox{\sffamily E} - \mbox{\sffamily V} - \mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}} = 0$, or by the Lagrangian form of the Hamiltonian
constraint in (for
the moment) minisuperspace GR: $\mbox{\sffamily W} - \mbox{\sffamily T}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} = \mbox{Ric}(h) - 2\Lambda - \mbox{\sffamily T}_{\mbox{\scriptsize G}\mbox{\scriptsize R}} = 0$.
Next, analyze the above in terms of non-affine parametrization and conformal transformation subworkings.
This reveals the second term in the second large bracket to be the result of non-affine parametrization.
It cancels with the first term, which is one of two complicating terms from the conformal transformation;
the other such term is the $\mbox{\sffamily T}_{\vec{\mbox{\scriptsize\tt t}}}$ one, which itself cancels against the banal conformal
transformation's compensatory conformal scaling of $\mbox{\sffamily W} = \mbox{\sffamily E} - \mbox{\sffamily V}$, by the conservation of energy or
the Hamiltonian constraint.
This is therefore an interesting configuration space generalization of the result by which null
geodesics conformally map to null geodesics \cite{Wald}.
There, the first conformal complication is balanced by a change of what is the suitable affine
parametrization, while the second one vanishes by the geodesic being null with respect to the indefinite
spacetime metric.
In our case, the first of these cancellations continues to occur with the same interpretation, but
what was the null combination (and thus working for indefinite metrics only) becomes, in the
configuration space context, the kinetic term whether for indefinite or definite kinetic metrics, and
the null condition becomes replaced by the energy or Hamiltonian constraint (granted the banal conformal
transformation's compensatory scaling of the potential factor $\mbox{\sffamily W}$).
Thus `in indefinite spaces null geodesics conformal-map to null geodesics' becomes `in configuration
spaces of whatever signature, paths of motion banal-conformal map to paths of motion'.
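\mbox{ }

\noindent
Incidentally, the conformal transformation law for the Christoffel symbols used in the above working,
while standard, is easily mis-transcribed; the following sympy fragment (an illustration of my own, on a
flat 2-$d$ kinetic metric with generic $\Omega$, for which $\partial^{\Gamma}\Omega = \partial_{\Gamma}\Omega$)
verifies it mechanically.
\begin{verbatim}
# Illustrative sympy verification (my own) of the conformal transformation of
# the Christoffel symbols, for M flat in 2-d and Mtilde = Omega^2 M.
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
Q = (q1, q2)
Om = sp.Function('Omega')(q1, q2)

def christoffel(g):
    ginv = g.inv()
    return [[[sum(ginv[a, d]*(sp.diff(g[d, b], Q[c]) + sp.diff(g[d, c], Q[b])
              - sp.diff(g[b, c], Q[d]))/2 for d in range(2))
              for c in range(2)] for b in range(2)] for a in range(2)]

M = sp.eye(2)
Gam, Gamt = christoffel(M), christoffel(Om**2*M)

ok = True
for a in range(2):
    for b in range(2):
        for c in range(2):
            claimed = Gam[a][b][c] + (int(a == b)*sp.diff(Om, Q[c])
                      + int(a == c)*sp.diff(Om, Q[b])
                      - M[b, c]*sp.diff(Om, Q[a]))/Om
            ok = ok and sp.simplify(Gamt[a][b][c] - claimed) == 0
print(ok)  # True
\end{verbatim}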
Let us now afford a slight generalization to finite models with non-trivial spatial relationalism.
At least Euclidean and similarity relational particle mechanics then have as equations of motion
\begin{equation}
D_g^2 Q^{\Gamma}/D\vec{\mbox{\tt t}}^2 \equiv
\mbox{\Large$\ast$}_g \mbox{\Large$\ast$}_g Q^{\Gamma} + \Gamma^{\Gamma}\mbox{}_{\Lambda\Sigma}\mbox{\Large$\ast$}_g Q^{\Lambda}\mbox{\Large$\ast$}_g
Q^{\Sigma} = \partial^{\Gamma}\mbox{\sffamily W} \mbox{ } ,
\label{**}
\end{equation}
for $D_g/D\vec{\mbox{\tt t}}$ the G-gauged absolute derivative, and then the preceding analysis carries
through under $\mbox{\Large$\ast$} \longrightarrow \mbox{\Large$\ast$}_g$.
To the extent that the previous paths of motion were geodesics, the current paths are
`geodesics provided that we suitably align the constituent snapshots by auxiliary G-transformations'.
We have determined that solely non-affinely parametrizing, or solely rescaling the kinetic metric,
complicates the equations of motion away from the simple form (\ref{*}) or (\ref{**}) that using emergent
Jacobi--Barbour--Bertotti time places them in, while performing both of these operations alongside the
compensating $\mbox{\sffamily W}$ rescaling preserves this simple form; the choice of emergent time is indeed nonunique
up to this `banal' freedom.
Thus, if one's problem requires rescaling {\sl or} non-affinely parametrizing, one's problem may
permit one to `complete' the required transformation to a full banal conformal transformation, whereby
the effect of solely rescaling or solely non-affinely parametrizing kicking one out of the form
(\ref{*}) or (\ref{**}) is circumvented, and so emergent time's being a banal covector leads to a
robustness result for its property of giving simple equations of motion.
Preservation under full banal transformation means that $\vec{\mbox{\tt t}}$ corresponding to {\sl any} $\Omega$
carries out simplification A).
One can then pick $\Omega^2 = k\mbox{\sffamily W}$ so that $\widetilde{\partial}^{\Gamma}\widetilde{\mbox{\sffamily W}} =
\widetilde{\partial}^{\Gamma}\{1/k\} = 0$, and then $\widetilde{*} = \Omega^{-2}* = \{k\mbox{\sffamily W}\}^{-1}* =
\{k\sqrt{\mbox{\sffamily W}\mbox{\sffamily T}}\}^{-1}\mbox{\Large$\circ$}$;
i.e. so that simplification B) -- taking affine geodesic form rather than additionally
containing a $\partial^{\Gamma}\mbox{\sffamily W}$ term -- {\sl also} holds.
Thus one has gone from physics with a restricted class of affine parameters under which the equations of
motion take the form (\ref{*}) or (\ref{**}) to physics with a restricted class of banal conformal
factors under which the equations of motion take geodesic form.
Each of these, moreover, is nonunique up to a constant multiplicative time-scale (evident in the
specification of the geodesic equation forming $\Omega$) and an additive constant time-origin (evident
since what a power of $\Omega$ scales is $\textrm{d}/\textrm{d} \vec{\mbox{\tt t}}$ and so $\vec{\mbox{\tt t}}$ itself has
an additive constant of integration more freedom than $\Omega$ itself \cite{SemiclI, SemiclII}).
These retain one's civilization's freedom of choice for calendar year zero and unit of time, as should
be the case.
Finally, affine transformations send $\mbox{\tt t}_{\so\sll\mbox{\scriptsize d}}$ to $\mbox{\tt t}_{\sn\mbox{\scriptsize e}\mbox{\scriptsize w}}(\mbox{\tt t}_{\so\sll\mbox{\scriptsize d}})$
subject to
\noindent
I) nonfreezing and monotonicity, so $\textrm{d} \mbox{\tt t}_{\sn\mbox{\scriptsize e}\mbox{\scriptsize w}}/\textrm{d} \mbox{\tt t}_{\so\sll\mbox{\scriptsize d}} > 0$ which can be encoded by
having it be a square of a quantity $\mbox{\tt Q}$ with no zeros in the region of use, and
\noindent
II) this derivative and hence $\mbox{\tt Q}$ being a physically-reasonable function
(to stop the transition damaging the equations of motion).
But this can be recast as $\textrm{d}/\textrm{d} \mbox{\tt t}_{\sn\mbox{\scriptsize e}\mbox{\scriptsize w}} = \mbox{\tt Q}^{-2}\textrm{d}/\textrm{d} \mbox{\tt t}_{\so\sll\mbox{\scriptsize d}}$, by which
(and other
properties matching\footnote{It
may be interesting to find out whether the one restricts the other's function space more than usual.})
we are free to identify this $\mbox{\tt Q}$ with $\Omega$, so any affine transformation is of a form that extends
to a (full) banal conformal transformation.
If one then chooses to `complete' it to a full banal conformal transformation, the above calculation can
be interpreted as the extra non-affine term being traded for a $\mbox{\sffamily T}$ term by having an accompanying
conformal transformation of the kinetic metric, and then this being traded for $\partial^{\Gamma}\mbox{\sffamily W}$ by
energy conservation and the compensating banal conformal transformation of $\mbox{\sffamily W}$.
Thus the freedom to affinely-transform the geodesic equation on configuration space can be viewed
instead as the freedom to (fully) banal-conformally transform a system's equation of motion.
{\sl Thus the relational approach's simplicity notion for equations of motion has the same mathematical
content as prescribing an affine rather than non-affine parameter for the geodesic equation on
configuration space}.
Thus the banally related $\vec{\mbox{\tt t}}$ corresponds to `the set of (generally) nonaffine parameters
for the geodesic-like equation of motion on configuration space', while $\mbox{\tt t}^{\mbox{\scriptsize a}\mbox{\scriptsize f}\sf-\sg\mbox{\scriptsize e}\so}$
indeed remains identified with the much more restricted set (unique up to a multiplicative constant
time-scale and an additive constant time-origin) of affine parameters for the geodesic equation on
configuration space.
Then part of the argument for emergent time being ``fixed by the universe's contents'' \cite{B94I} is
lost, as it is revealed to contain an arbitrary factor.
But one can then regain that precision by making a choice.
$\mbox{\tt t}^{\mbox{\scriptsize J}\mbox{\scriptsize B}\sB}$ ($\Omega = 1$, so that $\mbox{\sffamily E}$ carries no nonconstant factors) and
$\mbox{\tt t}^{\mbox{\scriptsize a}\mbox{\scriptsize f}\sf-\sg\mbox{\scriptsize e}\so}$ are then interesting such choices.
\mbox{ }
\noindent 5) In preparation for the passage to QM in Sec 3, the conjugate momenta are the
banal-conformal invariant expressions
\begin{equation}
\widetilde{P}_{\Delta} =
\sqrt{ {\widetilde{\mbox{\sffamily W}}}/{\widetilde{\mbox{\sffamily T}}} }\widetilde{M}_{\Gamma\Delta}\mbox{\Large$\circ$}_g Q^{\Gamma} =
\widetilde{M}_{\Gamma\Delta}\widetilde{\mbox{\Large$\ast$}}_gQ^{\Gamma} =
{M}_{\Gamma\Delta}{\mbox{\Large$\ast$}_g}Q^{\Gamma} = P_{\Delta}
\mbox{ } .
\label{20}
\end{equation}
Thus what does banal-scale concerns, a fortiori, configuration space rather than phase space.
\noindent 6) As a primary constraint resulting from the reparametrization invariance of the action, one
obtains a quadratic constraint of the form (\ref{set}).
Now, as $N^{\Gamma\Delta}$ is the inverse of $M_{\Gamma\Delta}$, it scales as a banal-conformal covector
\begin{equation}
N^{\Gamma\Delta} \longrightarrow \widetilde{N}^{\Gamma\Delta} = \Omega^{-2}N^{\Gamma\Delta} \mbox{ } .
\label{22}
\end{equation}
Combining (\ref{20}) and (\ref{22}), the quadratic constraint (\ref{set}) is a banal covector.
\noindent 7) In cases with nontrivial configurational relationalism there are also linear constraints
from variation with respect to G-auxiliaries (Sec 1.2).
By (\ref{20}) the linear momentum constraint of GR, ${\cal L}_{\mu}$, and the relational particle
mechanics constraints, $\underline{\mbox{\tt P}}$, $\underline{\mbox{\tt L}}$ and $\mbox{\tt D}$, are banal-conformally
invariant.
\noindent 8) The mechanics Hamiltonian $\mbox{\sffamily H}$ is the left hand side of the quadratic energy constraint
(\ref{set}) and as such scales as a banal covector.
One then integrates this with respect to $\vec{\mbox{\tt t}}$.
If one looks to extend this prescription to relational particle mechanics, given that the linear
constraints are banal-conformally invariant, one finds that one needs to either: build the total
almost-Hamiltonian\footnote{Total Hamiltonians
have constraints appended by Lagrange multiplier coordinates.
In the cyclic velocity auxiliary picture that relationalism instead implies, one has rather what I term
an almost-Hamiltonian \cite{FEPI} with cyclic velocities in place of multipliers, which terminology
I use since velocities still appear in it, albeit only the velocities of {\sl auxiliary} quantities.}
by using $\mbox{\Large$\ast$}\underline{a}$ and $\mbox{\Large$\ast$}\underline{b}$ (and $-\mbox{\Large$\ast$}{c}$ in the similarity case) as the
appending auxiliaries to preserve homogeneity of banal transformation:
\begin{equation}
\mbox{\sffamily A}_{\mbox{\scriptsize t}\so\mbox{\scriptsize t}\mbox{\scriptsize a}\sll} = \mbox{\sffamily H} + \mbox{\Large$\ast$}\underline{a}\cdot\underline{\mbox{\tt P}} +
\mbox{\Large$\ast$}\underline{b}\cdot\underline{\mbox{\tt L}}
\mbox{ } \mbox{ } ( \mbox{ } - \mbox{\Large$\ast$}{c} \, \mbox{\tt D} \mbox{ } ) \mbox{ } .
\end{equation}
Or, have $\mbox{\sffamily H}$ carry a `lapse' prefactor $\textrm{d} \vec{T}/\textrm{d} \lambda$ and then use
$\mbox{\Large$\circ$}\underline{a}$ and $\mbox{\Large$\circ$}\underline{b}$ (and $-\mbox{\Large$\circ$}{c}$ in the similarity case) as the appending
auxiliaries, producing an almost-Hamiltonian that is now banal-conformally invariant,
\begin{equation}
\overline{\mbox{\sffamily A}}_{\mbox{\scriptsize t}\so\mbox{\scriptsize t}\mbox{\scriptsize a}\sll} = \mbox{\Large$\circ$}{\vec{T}}\mbox{\sffamily H} +
\mbox{\Large$\circ$}\underline{a}\cdot\underline{\mbox{\tt P}} +
\mbox{\Large$\circ$}\underline{b}\cdot\underline{\mbox{\tt L}}
\mbox{ } \mbox{ } ( \mbox{ } - \mbox{\Large$\circ$}{c} \,\mbox{\tt D} \mbox{ } ) \mbox{ } .
\end{equation}
These two views are, moreover, physically equivalent, since the first is to be integrated over
$\vec{T}$ and the second over $\lambda$.
For GR, ${\cal H}$ scales as a banal covector and the velocity of the instant, $\mbox{\Large$\circ$}{\mbox{I}}$ (or the lapse, $\alpha$), as a banal vector.
Thus it is entirely straightforward to append the banal-scalar ${\cal L}_{\mu}$ using the banal-scalar
auxiliary $\mbox{\Large$\circ$}{F}^{\mu}$ (or $\beta^{\mu}$), to make either the relational total GR almost-Hamiltonian
\begin{equation}
\mbox{\sffamily A}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{\mbox{\scriptsize t}\so\mbox{\scriptsize t}\mbox{\scriptsize a}\sll} = \int\textrm{d}^3x\{\mbox{\Large$\circ$}{\mbox{I}}{\cal H} + {\mbox{\Large$\circ$}\mbox{F}}^{\mu}{\cal L}_{\mu}\} \mbox{ } ,
\end{equation}
or the Arnowitt--Deser--Misner total GR Hamiltonian
\begin{equation}
\mbox{\sffamily H}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{\mbox{\scriptsize t}\so\mbox{\scriptsize t}\mbox{\scriptsize a}\sll} = \int\textrm{d}^3x\{\alpha{\cal H} + {\beta}^{\mu}{\cal L}_{\mu}\} \mbox{ } .
\end{equation}
[Comparing the last two paragraphs, relational particle mechanics likewise admit Hamiltonians if formulated partially
non-relationally by use of multiplier coordinates in place of their cyclic velocities.
Also note that GR comes already-parametrized, so there is no primed-unprimed ambiguity therein.]
\section{Banal-invariance in finite QM (relational mechanics, minisuperspace)}
At the quantum level, those constraints which remain become wave equations.
In this paper's models, this always includes a quadratic constraint of form (\ref{set}).
This contains a product of $P_{\Gamma}$ and $Q^{\Gamma}$ terms,
$N^{\Gamma\Delta}(Q^{\Lambda})P_{\Gamma}P_{\Delta}$, which picks up ordering issues in passing to QM.
Assume that $Q^{\Gamma} \longrightarrow \widehat{Q}^{\Gamma} = Q^{\Gamma}$, $P_{\Gamma} \longrightarrow
\widehat{P}_{\Gamma} = -i\hbar\partial_{\Gamma}$.
I first consider the more widely well-defined finite case, but leave the working in a general notation
that covers both relational particle mechanics and minisuperspace.
The {\it Laplacian ordering} at the QM level of the classical
$N^{\Gamma\Delta}(Q^{\Lambda})P_{\Gamma}P_{\Delta}$ combination is
\begin{equation}
D^2 = \frac{1}{\sqrt{M}} \frac{\partial}{\partial {Q}^{\Gamma}}
\left\{
\sqrt{M}N^{\Gamma\Delta}\frac{\partial}{\partial{Q}^{\Delta}}
\right\} \mbox{ } .
\end{equation}
This has the desirable property of (straightforwardly) being independent of coordinate choice on the
configuration space $\mbox{\sffamily Q}$ \cite{DeWitt57}.
This property is not, however, unique to this ordering; one can reorder to include a Ricci scalar
curvature term so as to have $D^2 - \xi\,\mbox{Ric}(M)$ \cite{DeWitt57, Magic, HP86, CZ, Halliwell,
Moss, Page91, RT}.
It is then well-known \cite{Wald} that there is a unique choice of $\xi$
(dependent on the dimension $k \geq 2$ of the mathematical space in question\footnote{I exclude
1-d configuration spaces, as physics
concerns changes of one configurational variable with respect to another physical variable, in
which sense configuration space dimension $\geq$ 2 is required.
This is the case at the classical level (without it, internal time makes no sense), and it is also needed
at the quantum level, at least for approaches such as the semiclassical approach and the records theory
approach to the Problem of Time in Quantum Cosmology \cite{AF}.
})
that leads to the production of a conformally-invariant operator and hence of a conformally-invariant
operator-ordering in the quantum application \cite{Magic, HP86, CZ, Halliwell, Moss,
Page91, RT}.
Moreover, in the present paper I identify this conformal invariance associated with operator ordering as
being the same as the banal-conformal invariance that is simple and natural in classical relational
product-type actions, whereby demanding this operator ordering can be seen as asking to extend this
simple and natural invariance to hold also at the quantum level.
The operator with the desired properties is, specifically,
\begin{equation}
{D}_{\mbox{\scriptsize C}}^2 = \frac{1}{\sqrt{M}} \frac{\partial}{\partial {Q}^{\Gamma}}
\left\{
\sqrt{M}N^{\Gamma\Delta}\frac{\partial}{\partial{Q}^{\Delta}}
\right\}
- \frac{k - 2}{4\{k - 1\}}\mbox{Ric}(M) \mbox{ }
\end{equation}
where $k$ is the dimension of $\mbox{\sffamily Q}$.
Moreover, this operator by itself is still not banal-conformally invariant: it is furthermore required
that the wavefunction of the universe $\Psi$ that it acts upon itself transforms in general tensorially
under \mbox{\sffamily Q}-conformal transformations \cite{Wald},
\begin{equation}
\Psi \longrightarrow \widetilde{\Psi} = \Omega^{\frac{2 - k}{2}}\Psi \mbox{ } .
\end{equation}
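\mbox{ }

\noindent
As a check on conventions (again an illustration of my own rather than anything in the cited literature),
the following sympy fragment confirms the combined covariance for $k = 3$, with a flat kinetic metric and
the sample banal factor $\Omega = \mbox{exp}(q_1)$: the tilded operator acting on
$\widetilde{\Psi} = \Omega^{\{2 - k\}/2}\Psi$ reproduces $\Omega^{\{2 - k\}/2 - 2}$ times the untilded
operator acting on $\Psi$.
\begin{verbatim}
# Illustrative sympy check (my own) of the conformal covariance of
# D^2 - xi Ric(M) at xi = (k-2)/(4(k-1)), for k = 3, M flat, Omega = exp(q1).
import sympy as sp

q1, q2, q3 = sp.symbols('q1 q2 q3')
Q, k = (q1, q2, q3), 3
xi = sp.Rational(k - 2, 4*(k - 1))
Om = sp.exp(q1)
psi = sp.Function('psi')(q1, q2, q3)

def conf_op(g, f):  # D^2 f - xi Ric(g) f, with the tensors hand-rolled
    ginv, detg = g.inv(), g.det()
    Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], Q[c]) + sp.diff(g[d, c], Q[b])
            - sp.diff(g[b, c], Q[d]))/2 for d in range(k))
            for c in range(k)] for b in range(k)] for a in range(k)]
    Ric = sum(ginv[b, c]*sum(sp.diff(Gam[a][b][c], Q[a])
          - sp.diff(Gam[a][b][a], Q[c])
          + sum(Gam[a][a][e]*Gam[e][b][c] - Gam[a][c][e]*Gam[e][b][a]
                for e in range(k)) for a in range(k))
          for b in range(k) for c in range(k))
    lap = sum(sp.diff(sp.sqrt(detg)*ginv[a, b]*sp.diff(f, Q[b]), Q[a])
              for a in range(k) for b in range(k))/sp.sqrt(detg)
    return lap - xi*Ric*f

w = sp.Rational(2 - k, 2)
lhs = conf_op(Om**2*sp.eye(k), Om**w*psi)    # tilded operator on tilded Psi
rhs = Om**(w - 2)*conf_op(sp.eye(k), psi)    # banal scaling law
print(sp.simplify(sp.expand(lhs - rhs)))     # 0
\end{verbatim}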
Some simpler cases are as follows.
\mbox{ }
\noindent
1) For models with 2-d configuration spaces such as for the minisuperspace models \cite{HP86, Page91,
Louko, Barvin}, quantum similarity relational particle mechanics of 3 particles in the plane \cite{08II,
+tri} or of 4 particles on the line \cite{AF}, and of 3 particles on the line with scale \cite{SemiclIII,
scaleQM}, the conformal value of $\xi = \{k - 2\}/4\{k - 1\}$ collapses to zero, so that
Laplacian ordering and conformally invariant wavefunctions suffice.
\mbox{ }
\noindent
2) For models with zero Ricci scalar, the conformal ordering coincides with the Laplacian one.
E.g. banal-conformally flat models can be arranged to have this, of which the Euclidean relational
particle mechanics of 3 particles in the plane \cite{08I} or of N particles on a line \cite{scaleQM}
are examples.
\mbox{ }
\noindent
3) If a space has constant Ricci scalar, then the effect of a $\xi \mbox{Ric}(M)$ term, conformal or
otherwise, is just a constant shift, which can be absorbed into redefining the energy in the case of mechanics.
In particular, this is the case for relational particle mechanics in 1-d as their configuration spaces
are $\mathbb{S}^{\sN - 2}$ which are clearly of constant curvature, and for relational particle
mechanics in 2-d as their configuration spaces are $\mathbb{CP}^{\sN - 2}$ which are Einstein
\cite{FORD} and hence of constant Ricci scalar curvature.
Parallelly, were the Ricci scalar constant in a GR model, it could likewise be absorbed
into redefining the cosmological constant.
\mbox{ }
\noindent
However, almost all other minisuperspace models and relational particle mechanics models (e.g. \cite{08III}) have
configuration space dimension $\geq 3$ for which the choice of a value of $\xi$ is required.
The present paper is written partly in support of the choice of ordering made in \cite{08II, AF, 08III}
and more complicated relational particle mechanics (see e.g. \cite{FORD}).
\mbox{ }
Next, if one sends $\mbox{\sffamily H}\Psi = \mbox{\sffamily E}\Psi$ to $\widetilde{\mbox{\sffamily H}}\widetilde{\Psi} =
\widetilde{\mbox{\sffamily E}}\widetilde{\Psi} = \{\mbox{\sffamily E}/\Omega^2\}\widetilde{\Psi}$, one's eigenvalue problem has a
weight function $\Omega^{-2}$ which then appears in the inner product:
\begin{equation}
\int_{\Sigma}\widetilde{\Psi_1}\mbox{}^*\widetilde{\Psi_2}\Omega^{-2}\sqrt{\widetilde{M}}\textrm{d}^{k}x \mbox{ } .
\label{5}
\end{equation}
This inner product additionally succeeds in being banal-conformally invariant, being equal to (c.f.
\cite{Magic} for the minisuperspace case)
\begin{equation}
\int_{\Sigma}\Psi_{1}\mbox{}^*\Omega^{\frac{2 - k}{2}} \Psi_{2}\Omega^{\frac{2 - k}{2}}
\Omega^{-2}\sqrt{M}\Omega^{k}\textrm{d}^{k}x =
\int_{\Sigma}\Psi_{1}\mbox{}^*\Psi_{2}\sqrt{M}\textrm{d}^{k}x \mbox{ }
\label{FORK}
\end{equation}
in the banal representation that is mechanically natural in the sense that $\mbox{\sffamily E}$ comes with the trivial
weight function, 1.
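\mbox{ }

\noindent
The cancellation of the various $\Omega$-powers in (\ref{FORK}) -- the two wavefunction weights, the
weight function and the Jacobian factor $\sqrt{\widetilde{M}} = \sqrt{\mbox{det}(\Omega^2M)} = \Omega^{k}\sqrt{M}$
-- can also be kept track of mechanically; a trivial sympy illustration of my own:
\begin{verbatim}
# Trivial sympy bookkeeping (my own) for the inner product (FORK): the
# Omega-powers of the two conformally-weighted wavefunctions, the weight
# function and the Jacobian Omega^k cancel exactly, for symbolic k.
import sympy as sp

Om, k = sp.symbols('Omega k', positive=True)
print(sp.simplify(Om**((2 - k)/2) * Om**((2 - k)/2) * Om**(-2) * Om**k))  # 1
\end{verbatim}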
\mbox{ }
Generally, $\widehat{\widetilde{\mbox{\sffamily H}}} = \widetilde{\widehat{\mbox{\sffamily H}}}$ is not in a simple sense self-adjoint
with respect to $\widetilde{\langle} \mbox{ } | \mbox{ } \widetilde{\rangle}$, while the
mechanically-natural $\widehat{\mbox{\sffamily H}}$ is, in a simple sense, with respect to $\langle \mbox{ } | \mbox{ } \rangle$.
More precisely, this is in the sense that
\begin{equation}
\int\sqrt{M}\textrm{d}^kx \Psi^*D^2\Psi = \int\sqrt{M}\textrm{d}^kx \{D^2\Psi^*\}\Psi + \mbox{boundary terms },
\end{equation}
which amounts to self-adjointness if the boundary terms can be arranged to be zero, whether
by the absence of boundaries in the configuration spaces for 1 and 2 dimensional relational particle mechanics \cite{FORD}
or by the usual kind of suitable fall-off conditions on $\Psi$.
This is not shared by the $\Omega$-inner product, as that has an extra factor of $\Omega^{-2}$, which in
general interferes with the corresponding move by the product rule ($\sqrt{M}$ does not interfere thus
above, since the Laplacian is built out of derivatives that are covariant with respect to the metric
$M_{\Gamma\Delta}$).
However, on the premise that solving $\widetilde{\mbox{\sffamily H}}\widetilde{\Psi} = \widetilde{\mbox{\sffamily E}}\widetilde{\Psi}$ is equivalent to
solving $\mbox{\sffamily H}\Psi = \mbox{\sffamily E}\Psi$, the banal-conformal transformation might at this level be viewed as a
sometimes-useful computational aid, with the answer then being placed in the preceding paragraph's
banal representation for further physical interpretation.
\mbox{ }
What about the case of theories with further, linear constraints?
In the case of relational particle mechanics, the conformal orderings chosen before and after dealing with
the linear constraints do not appear to agree in general (so that arguing for conformal ordering by
itself is not a guarantee of unambiguously fixing an ordering).
As I consider the structure of the configuration space to impart lucid knowledge whenever the reduction
can be done, I would favour performing the reduction and then conformal-ordering the reduced
configuration space Hamiltonian, as in \cite{08II, 08III, AF}.
In the Dirac quantization approach for relational particle mechanics (`quantize then constrain'), N.B.
that sending $\mbox{\tt P}\Psi = 0$ to $\widetilde{\mbox{\tt P}}\widetilde{\Psi} = 0$ does cause an alteration, as $\mbox{\tt P}$ is a
differential operator.
The same is the case for the zero total angular momentum constraint $\mbox{\tt L}$ and the dilational constraint
$\mbox{\tt D}$.
On the other hand, the reduced quantization approach (`constrain and then quantize') is free of this issue.
Barvinsky \cite{Barvin} investigated for what ordering these two approaches coincide.
By contrast, e.g. Ashtekar, Horowitz, Romano and Tate \cite{AHRT} have argued for the inequivalence of these two
approaches to quantization.
In any case, to 1 loop (first order in $\hbar$) Barvinsky argues that the Laplacian ordering will do
the trick.
Then, as the $\xi \mbox{Ric}(M)$ term contributes only at O($\hbar^2$), conformal ordering will likewise
do for obtaining equivalence between Dirac and reduced quantization to 1 loop.
\section{Comments on quantum geometrodynamics itself}
Sec 3 contains the use of conformal ordering in minisuperspace, which I would argue is already an
important and useful case on which there is substantial literature.
For infinite theories like GR, one has not an ordinary but a {\it functional} Laplacian,
\begin{equation}
{\cal D}^2 = \frac{1}{\sqrt{M}} \frac{\delta}{\delta {\cal Q}^{\Gamma}}
\left\{
\sqrt{M}N^{\Gamma\Delta}\frac{\delta}{\delta{\cal Q}^{\Delta}}
\right\} \mbox{ } .
\end{equation}
Using this as one's ordering for $N^{\Gamma\Delta}({\cal Q}^{\Lambda}){\cal P}_{\Gamma}{\cal P}_{\Delta}$
continues to have the desirable property of being independent of the coordinate choice on the
configuration space.
As before, this property is not, however, unique to this ordering: one can include a Ricci scalar
curvature term so as to have ${\cal D}^2 - \xi\,\mbox{Ric}(M)$.
Proceeding analogously to before, there is then a unique banal-conformally invariant choice among these
orderings:
\begin{equation}
{{\cal D}_{\mbox{\scriptsize C}}}^2 = \frac{1}{\sqrt{M}} \frac{\delta}{\delta {\cal Q}^{\Gamma}}
\left\{
\sqrt{M}N^{\Gamma\Delta}\frac{\delta}{\delta{\cal Q}^{\Delta}}
\right\}
- \frac{k - 2}{4\{k - 1\}}\mbox{Ric}(M) \mbox{ } ,
\label{33}
\end{equation}
so long as $\Psi$ itself transforms in general tensorially under the conformal transformation
\begin{equation}
\Psi \longrightarrow \widetilde{\Psi} = \Omega^{\frac{2 - k}{2}}\Psi \mbox{ } .
\label{34}
\end{equation}
There is now a snag in that $k$ is infinite, so (\ref{34}) becomes ill-defined; however, in the operator
(\ref{33}) the coefficient of Ric($M$) merely tends to 1/4, while the cancellation of $k$ in the
working (\ref{FORK}) also continues to hold in this case, and it is the outcome of this (including its
operator expectation counterpart), rather than $\Psi$ itself, that has physical meaning.
This gives a Wheeler--DeWitt equation of the form
\begin{equation}
\hbar^2
\left\{
\frac{1}{\sqrt{{\cal M}}}\frac{\delta}{\delta h_{\mu\nu}}
\left\{
\sqrt{{\cal M}}{\cal N}^{\mu\nu\rho\sigma}\frac{\delta}{\delta h_{\rho\sigma}}
\right\}
- \frac{1}{4}\mbox{Ric}({\cal M})
\right\}
\Psi + \sqrt{h}\{\mbox{Ric}(h) - 2\Lambda\}\Psi = 0 \mbox{ } .
\end{equation}
Also, in full GR, due to the presence of the linear momentum constraint, and given the previous Sec's
insight from relational particle mechanics' analogous linear constraints, conformal ordering before and
after reduction may differ.
Moreover, one cannot in general perform the reduction here, so the conformal
ordering within the Dirac-type quantization scheme may be questionable.
\section{Conclusion}
\noindent
Mechanics and fundamental physics at the classical level can be considered in terms of temporally
relational product-type actions, and doing so is useful in considering whole-universe situations --
the setting for quantum cosmology.
These readily exhibit a banal-conformal invariance under compensating rescalings of the configuration
space metric and the potential (alongside the total energy in the case of mechanics).
The classical equations of motion resulting from product-type actions simplify for a particular form
of emergent time.
In mechanics, this amounts to a recovery of Newtonian time from relational premises, while in GR this
amounts to a recovery of proper time or cosmic time in the various contexts where relevant.
In this paper, I found that this emergent time itself scales when a banal conformal transformation
is performed.
One can then deduce how a more complicated manifestation of banal-conformal invariance is present in the
more commonly used difference-type actions, provided that the notions of time in these scale
in the same way as the emergent time does.
I also clarified that the simplifying effects on the equations of motion through use of the emergent
time (e.g. Jacobi--Barbour--Bertotti time) and those through use of affine parametrization of geodesics
and dynamical trajectories are linked via a straightforward (albeit apparently hard to spot) working
hinging on conservation of energy.
Suppose then that one chooses to retain this banal-conformal invariance -- simple and natural
from the perspective of relational product actions at the classical level -- upon passing to the quantum level.
Furthermore, let the theoretically-desirable and fairly standard tenet that quantum mechanics be
independent of choice of coordinates on configuration space $\mbox{\sffamily Q}$ be adhered to.
Then these combine to pick out the operator ordering based on the conformally-invariant modification of
the Laplacian.
While this operator ordering has been suggested by others previously (as documented in Sec 1.3), this
is the first paper pointing out the relational underpinning for it.
As how one operator-orders has consequences for the physical predictions of one's theory, and there is
no established way to prescribe the operator ordering in the case of (toy models of) quantum gravity,
this stronger motivation for conformal ordering is of wide interest.\footnote{There does
remain the caveat that QM in general, and Quantum Gravity in particular, may have other restrictions on
orderings from considerations such as the well-defined existence and self-adjointness of quantum
Hamiltonians and of other important quantum operators.
Then possibly another such technical condition could turn out to be incompatible with conformal ordering,
at least for some theories/models but, to the best of my knowledge, to date nobody has found any such.}
As regards applications to simple toy models, for 2-d configuration spaces, conformal ordering becomes
indistinguishable from the also sometimes advocated Laplacian ordering, while the difference becomes
minor for manifolds with constant Ricci scalar.
Nor is there any distinction between these to 1 loop in the semiclassical approach.
However, more complicated modelling situations \cite{08III, FORD} do have a distinction between
Laplacian and conformal orderings.
What is conformal ordering depends in detail (to more than 1 loop) on whether one Dirac-quantizes or
reduced-quantizes.
This distinction is already visible in finite but linearly-constrained relational particle models.
Inner products, which are the directly physically meaningful constructs in quantum theory, are found to
be suitably banal-conformally invariant.
Taking these to be primary, that the scaling power of the wavefunction itself (required for the conformal
modification of the Laplacian operator to actually form a conformally invariant combination) is
formally infinite in cases with infinite configuration space dimension appears unproblematic.
In particular, this suggests an ordering for the Wheeler--DeWitt equation of full geometrodynamics.
\mbox{ }
\noindent{\bf Acknowledgments} I thank Visiting Prof Julian Barbour, Dr Harvey Brown and Ms Anne
Franzen for discussions.
\vspace{3in}
\section{Introduction}
Since its invention in 1981, scanning tunneling microscopy (STM) has become one of the most powerful tools to characterize nanostructures on metal surfaces\cite{Binnig1981,Binnig1982}. It uses the quantum tunneling of electrons from the tip to the metal surface to `measure' the local density of states of the metal surface or of adsorbates on the surface. Meanwhile, the nano-gap between the tip and the surface hosts localized plasmon modes (gap modes) that can be used to localize the electromagnetic field in the nanoscale regime. Gimzewski \emph{et al}. observed for the first time that the gap modes can be excited by tunneling electrons at high enough bias\cite{Gimzewski1988}. Radiative decay of these localized modes gave rise to light emission that was detected in the far field on the same side as the STM tip, a phenomenon coined STM induced luminescence (STML). Afterwards, the effects of the tip, the surface shape, the gap size, the dielectric environment and the type of metal on the light emission properties were investigated \cite{Berndt1991,Berndt1993,Berndt1993-2,Berndt1998,Chen2009,Yu2016,Chen2014,Nilius2000,Ushioda2000}. The combination of plasmonic and molecular luminescence was also studied \cite{Qiu2003,Dong2004,Schneider2012,Lutz2013,Imada2016,Zhang2016,Zhang2017,Imada2017}. Recently, the change of STML properties as a function of tip-surface distance has been investigated from the tunneling to the contact regime, both for metal and molecular junctions\cite{Schull2009,Schneider2010,Schneider2012}. The relation between the optical yields and the finite-frequency shot noise of the electrons has also been revealed \cite{Schull2009,Schneider2010,Schneider2012,Lu2013,Xu2015,Kaasbjerg2015}.
Theoretical analysis of the light emission efficiency reveals that the gap plasmons are mainly excited by inelastic tunneling electrons in the gap, rather than by hot luminescence in the electrodes \cite{Persson1992}. Electromagnetic simulations have also been conducted to study the emission spectrum. In the early theoretical studies of STML, a metallic sphere was used to approximate the tip for the calculation of the plasmon emission spectrum in the far field\cite{Rendell1981,Johansson1990,Johansson1998}. The dependence of the emission spectrum on the material, the sphere radius, the sphere-substrate distance and the applied voltage was systematically studied. Subsequently, J. Aizpurua \emph{et al.} adopted a hyperbolic tip geometry, which is similar to the tips used in experiments.
A more precise relationship between the tip shape and the spectrum was thereby obtained\cite{Aizpurua2000}. Recently, it was found that the gap mode can also excite surface plasmon polaritons (SPPs) propagating along the metal surface\cite{Novotny2011,Wang2011}. This provides an efficient, local, electrical way of launching SPPs in optical structures, and its applications have received considerable attention recently\cite{Wang2014,Cao2014,Wang2015,Du2016,Du2017,Cazier2016}. These results suggest that the electrically excited gap plasmon modes have several optical decay channels.
A natural question to ask is how much of the energy is transferred into the different decay channels. This is important for engineering the energy separation among the different channels\cite{Greffet2016} and for improving the optical yields in STML. As far as we know, it has not received enough attention yet. In this work, we try to answer this question by classical electromagnetic simulation.
The paper is organized as follows: In Section 2, we present the geometric setting for STML experiments, build the light emission model based on the point dipole approximation of the tunnel junction, and further classify the emission channels, with particular emphasis on separating the direct quenching from the total resistive dissipation, which is usually difficult from the perspective of classical electromagnetism. In Section 3, the numerical results are given and analyzed, showing that the major part of the emission is distributed into the SPPs on the tip. Finally, Section 4 concludes the paper.
\section{Geometry structure and emission channels of STM luminescence}
\Fig{fig1} (a) shows the structure considered in this work, with a metallic tip positioned right above a metallic surface. The geometric properties of the tip are depicted in \Fig{fig1} (b). Although some efficient methods have been proposed in the literature to take into account the quantum effects\cite{RubenEsteban2011,Zhuwenqi2016}, we follow a classical electrodynamics simulation here. The energy source is approximated by an electrical dipole located between the tip and the surface (red dot in \Fig{fig1} (a)). Considering the rotational symmetry of the structure, a two-dimensional (2D) rotational model is adopted, and the electric dipole is implemented as a magnetic current loop with a tiny radius, i.e., 0.1 nm, compared with the working wavelength, which ranges from 400 nm to 900 nm.
\begin{figure}[htb!]\centering
\includegraphics[scale=0.14]{fig1}
\caption{ (a) A three dimensional model of the STM structure. Blue areas represent the direct quenching, which near the apex takes the shape of a cone whose generatrix is $\lambda$/50. (b) The sketch of the silver tip apex; it is depicted by the hyperbolic geometry, $h+b$ represents the gap distance, and the value of $b$ determines the acuity of the apex. (c) The 2D plane of the STM structure and the radiative mode pattern. S1 and S3 are used to collect the energy of the SPPs, while S2 is used to collect the energy of the air mode. The orange blocks located at $x=0$ ($x=20$) $\mu$m along the x-axis, which are marked as "1st (41th) $\lambda$ area", are used to calculate the resistive dissipation due to the SPP propagation losses in the substrate. (d) The contour plot of the resistive dissipation is sketched. The direct quenching contours are concentric with the apex. In contrast, the contours much farther away from the apex tend to bend towards the tip surface, indicating the dominating effect of the propagation losses of the SPPs. The resistive dissipation in the tip can be separated off by the dashed line, but that in the substrate cannot be distinguished clearly.}
\label{fig1}
\end{figure}
\subsection{Nanophotonic structure to enhance STM luminescence}
A snapshot of the emitted electromagnetic radiation in the reduced 2D plane is presented in \Fig{fig1} (c). We have used the dielectric parameters of silver\cite{Johnson1972} in the simulation. The cone angle of the tip is 15 degrees. The shape of the apex also has a significant impact on the spectrum. We use the hyperbolic tip geometry (${\frac{{{{\left( {z - h} \right)}^2}}}{{{b^2}}} - \frac{{{\rho ^2}}}{{{{\left( {b\tan \varphi } \right)}^2}}} = 1}$) \cite{Aizpurua2000} for its relatively good fidelity in representing the tips used in experiments, as sketched in \Fig{fig1} (b), where $h+b$ is the distance between the apex and the metal substrate and $b$ is the distance between the intersection of the asymptotes and the tip apex. Evidently, the apex becomes flatter as $b$ increases on the premise of fixed $h$. In our model, the ratio of $h$ to $b$ is $h:b=2:1$.
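
For reference, the following short Python fragment (a plotting/meshing aid of our own; the numerical values are placeholders rather than the exact simulation mesh, and we take $\varphi$ to be the asymptotic half-angle) samples this hyperbolic tip profile.
\begin{verbatim}
# Sketch of the hyperbolic tip profile (placeholder values, not the exact
# simulation mesh); phi is taken as the asymptotic half-angle of the cone.
import numpy as np

gap = 1.0                 # tip-substrate distance h + b in nm (placeholder)
b = gap/3.0               # from the ratio h : b = 2 : 1
h = 2.0*gap/3.0
phi = np.deg2rad(7.5)     # assumed half of the 15-degree cone angle
a = b*np.tan(phi)

rho = np.linspace(0.0, 50.0, 501)      # radial distance from the axis (nm)
z = h + b*np.sqrt(1.0 + (rho/a)**2)    # upper branch: the tip surface z(rho)
print(z[0])                            # apex at z = h + b = gap
print(a**2/b)                          # radius of curvature at the apex (nm)
\end{verbatim}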
\subsection{Emission channels in STM induced luminescence}
Consider a volume $V$ with surface $\partial V$. The Poynting theorem states that the time rate of change of the electromagnetic energy within $V$, plus the net power flowing out of $V$ through $\partial V$, is equal to the negative of the total work done on the charges within $V$. Here, we consider electromagnetic waves with time-harmonic oscillations; thus the time rate of change of the electromagnetic energy within $V$ vanishes, leading to the reduced energy conservation law, given as follows,
\begin{equation}
\label{poynting}
-\int_V {\bm J} \cdot {\bm E} dV =\mathop{{\int\!\!\!\!\!\int}\mkern-21mu \bigcirc}\nolimits_{\partial V}
({\bm E \times \bm H) \cdot d\bm A}.
\end{equation}
In \Eq{poynting}, the total current density term $\bm J$ can be split into two terms ($\bm J={\bm J}_{s}+{\bm J}_{c}$), i.e., the source current density term $\bm J _s$ and the polarization current density term $\bm J_c= \frac{d \left[\left({\epsilon}_r-1\right)\epsilon_0 \bm E\right]}{dt}$. The source current density term measures the total external work done on the electrons to generate the electromagnetic radiation, whereas the polarization current density term measures the rate of resistive dissipation due to material losses. Indeed, in our setting the source current density term $\bm J_s$ corresponds to the electron tunneling between the tip and the substrate, i.e., the source of the generated electromagnetic energy, while the polarization current density $\bm J_c$ is the energy sink of the electromagnetic radiation.
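
For time-harmonic fields, the cycle average of the $\bm J_c \cdot \bm E$ term reduces to $\frac{1}{2}\omega\epsilon_0\,\mbox{Im}(\epsilon_r)|\bm E|^2$. The following small helper (a sketch of our own, separate from the solver; the silver-like permittivity in the example is an assumed round number) evaluates this density, which is what gets integrated over the metal regions to obtain both the quenching and the propagation losses.
\begin{verbatim}
# Sketch (our own helper): cycle-averaged resistive dissipation density
# <J_c . E> = (1/2) omega eps0 Im(eps_r) |E|^2 for time-harmonic fields.
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C0 = 2.99792458e8         # speed of light, m/s

def dissipation_density(E, eps_r, wavelength):
    """W/m^3; E: complex field components (V/m); wavelength in m."""
    omega = 2.0*np.pi*C0/wavelength
    return 0.5*omega*EPS0*np.imag(eps_r)*np.sum(np.abs(E)**2, axis=-1)

# example with a rough, assumed silver-like permittivity near 700 nm:
print(dissipation_density(np.array([1e7, 0.0, 0.0], dtype=complex),
                          -20.0 + 1.0j, 700e-9))
\end{verbatim}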
The volume $V$ considered here corresponds to the region obtained by rotating the area marked by the red dashed line around the y-axis, as shown in \Fig{fig1} (c), which contains the STM tunnel junction. The closed surface $\partial V$ associated with $V$ can be divided into four parts, i.e., $\partial V=S_1+S_2+S_3+S_r$. The net energy flowing through $S_1$ and $S_3$ corresponds to the SPP channels propagating along the tip and the substrate, respectively, while the photons flowing through $S_2$ are considered as the air mode. $S_r$ is the rest of $\partial V$, through which no energy flows. Since only the metals have material losses, the resistive dissipation that absorbs energy from the electromagnetic radiation occurs exclusively in the metals; it can be further separated into direct quenching and the propagation losses of the SPPs. The direct quenching occurs locally, as marked by the small blue areas on the tip and the substrate in \Fig{fig1} (a). In contrast, the resistive dissipation from the propagation losses occurs non-locally, as long as the field amplitude of the propagating SPP mode is large enough to excite electron-hole pairs inside the metals. The difference between the two dissipation mechanisms will be discussed and used to separate the direct quenching from the total dissipation.
There are four emission channels for the electrons tunneling across the STM junction. The first one is the free propagating photons, coined the air mode channel, which are generated by the tunnel junction and radiated into free space. Secondly, part of the power from the oscillating dipole is transferred into resistive dissipation in the local area indicated by the small blue areas on the tip and the substrate in \Fig{fig1} (a). This is called the direct quenching channel. Thirdly, there are two SPP channels that funnel the electromagnetic radiation along the metal-air interface, which are coined the tip and substrate SPP channels in the following part of this paper. The tip (substrate) SPPs propagate along the metal-air interface on the surface of the tip (substrate). Their field amplitudes decay due to the propagation losses, which eventually generate resistive dissipation. In summary, the direct quenching is indicated by the small blue areas in \Fig{fig1} (a), the air mode is given by the integrated power flux over the surface area $S_2$ shown in \Fig{fig1} (c), while the tip (substrate) SPP energy is the sum of the integrated power flux over $S_1$ ($S_3$) and the propagation losses in the tip (substrate).
\subsection{Extraction of direct quenching from the total resistive dissipation}
In classical electromagnetism, it is not trivial to distinguish the direct quenching from the propagation losses of the SPP modes in the vicinity of the tunnel junction, since both of them yield the same type of resistive dissipation, i.e., heating. A quantitative assessment of the light emission funneled into the different channels nevertheless needs to be acquired. The reason that we numerically measure the integrated Poynting vectors 40 wavelengths away from the tunnel junction (see \Fig{fig1} (c)) is to avoid the overlap of the SPPs and the air mode. To estimate the original power of the SPPs directly excited by the tunneling electrons, we need to add up the resistive dissipation in the metal caused by SPP propagation. However, as mentioned above, in the vicinity of the tunnel junction there also exists direct quenching, which is another kind of resistive dissipation. The question is thus how to separate the resistive dissipation caused by the inelastically tunneling electrons from that caused by the SPPs.
In this work, the separation of the direct quenching from the total resistive dissipation is treated differently at the apex and at the substrate. For the apex, the method is to set up a separation line of the tip, marked as the blue dashed line in \Fig{fig1} (d). Since the direct quenching is extremely large in a tiny region around the apex and decreases dramatically farther away, we assume that it exists only in the area enclosed by the separation line; outside this area the resistive dissipation is attributed solely to SPP propagation. From the analysis of the resistive dissipation contours, the direct quenching contours are concentric with the apex, whereas the SPP propagation loss contours tend to bend towards the surface of the tip, as shown in \Fig{fig1} (d). This means that the resistive dissipation caused by the SPPs is dominant in the area away from the tip apex. We find that the separation distance shown in \Fig{fig1} (a) can be set to 1/50 of the wavelength, which turns out to be a good approximation in this model.
As to the substrate, the aforementioned method does not apply, since no clear separation line is evident from the contour plot of the resistive dissipation shown in \Fig{fig1} (d). The overlap of direct quenching and propagation losses mainly appears in the vicinity of the tunnel junction, such as the 1st $\lambda$ area in \Fig{fig1} (c). The propagation losses in the 1st $\lambda$ area can be extracted by multiplying the propagation losses in the 41st $\lambda$ area (in this area, the resistive dissipation exclusively originates from the propagation losses of the SPPs and can be easily obtained from our numerical model) by a certain ratio. The ratio can be calculated from the semi-analytical formulation of the cylindrical SPPs provided by S\"ondergaard \cite{Sondergaard2004} for a dipole emitter oriented along the $z$-axis. With the expression for the electric field, the resistive dissipation in the 1st/41st $\lambda$ area due to the propagation losses of the SPPs can be calculated as follows,
\begin{equation}
\label{Qrh}
Q=\pi \omega\epsilon_0 \int_{0\,(40\lambda)}^{\lambda\,(41\lambda)} d\rho \int_{-\lambda}^{0} dz\, \rho\, \mathrm{Im}\left[\epsilon_{r} \left( E_{\rho} E_{\rho}^{*} + E_{z} E_{z}^{*} \right)\right],
\end{equation}
where $\epsilon_{r}$ is the dielectric function of the metallic substrate; the expressions for $E_{\rho}$ and $E_z$ are given in Appendix A. Based on \Eq{Qrh}, one is able to obtain the ratio between the propagation losses in the 1st $\lambda$ area and those in the 41st $\lambda$ area. Since the exact dissipation due to the propagation losses in the 41st $\lambda$ area is known, one can immediately calculate the resistive dissipation from the propagation losses in the 1st $\lambda$ area. As such, one can extract the direct quenching by subtracting the resistive dissipation of the propagation losses from the total dissipation in the 1st $\lambda$ area.
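The resulting arithmetic is simple. A minimal sketch of this extraction (all numbers are hypothetical placeholders standing in for the quantities obtained from our numerical model and from \Eq{Qrh}):
\begin{verbatim}
# Extraction of the direct quenching in the substrate. All values are
# hypothetical placeholders, normalized to the total emitted power.
Q_total_1st = 0.120   # total resistive dissipation, 1st lambda area
Q_prop_41st = 0.004   # dissipation in the 41st lambda area (SPPs only)
ratio = 9.5           # Q_1/Q_41 evaluated with the fields of Appendix A

Q_prop_1st = ratio * Q_prop_41st    # propagation losses, 1st lambda area
Q_dq = Q_total_1st - Q_prop_1st     # remainder = direct quenching
print(f"direct quenching in the 1st lambda area: {Q_dq:.3f}")
\end{verbatim}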
\section{Results}
\begin{figure}\centering
\includegraphics[scale=0.5]{fig2}
\caption{ (a) The total power (black) and its distribution into the different channels as a function of photon energy. The red, blue, and cyan lines represent the energy that goes into the SPP, direct quenching, and air mode channels, respectively. The SPPs and direct quenching (DQ) include both tip and substrate contributions. (b) The percentage of the total power going into the three kinds of channels as in (a).}
\label{fig2}
\end{figure}
We proceed to discuss the calculated results based on the aforementioned model and to classify the emission energy transfer in the STM junction. The tunneling electrons can be seen as the emitter, modeled by a magnetic current loop. There are two approaches to obtain the total emitted power according to the Poynting theorem: one is to integrate the Poynting vector over a closed surface surrounding the emitter; the other is to integrate the tangential component of the magnetic field along the loop carrying the magnetic current. As a self-benchmark, we carry out both procedures to obtain the total emitted power. Indeed, the two approaches yield exactly the same emission power.
The four curves shown in \Fig{fig2} (a) represent the energy radiated by the source, the total power flowing into the sum of the two SPP channels, the direct quenching channel, and the air mode channel, respectively. The SPP channels take a dominant part of the source energy. Their spectrum has a lineshape similar to that of the total power, sharing the same trends and peaks. The power of the SPP channels decreases to zero when the photon energy is below 1.4 eV or above 3.1 eV; the decrease is more drastic in the high-frequency region than in the low-frequency region. This dramatic drop of the emission into the SPP channels at high frequencies is due to the cutoff frequency of the SPP modes propagating along the interface. The maximum of the SPP emission appears at a photon energy of 2.5 eV. In addition, there are further peaks at photon energies of 1.55 eV, 2.2 eV, and 2.75 eV. The power into the air mode channel is much smaller than that into the SPP channels. The emission power into the air mode is almost zero in the high-frequency region and increases gradually as the frequency decreases, reaching its maximum at a photon energy of 1.5 eV. Consistent with experiment, the air mode detected in the far field\cite{Schneider2010} has approximately the same peak position. The direct quenching in this model is also strongly frequency dependent. In the high-frequency region, the direct quenching takes up nearly half of the source energy. When the frequency decreases, the direct quenching decreases monotonically. The tremendous quenching partly results from the nanoscale gap distance\cite{Greffet2016}. Comparing the SPP mode and the air mode, we find that the spectral lineshape of the emitted SPPs has a broad peak located in the high-frequency region, i.e., from blue to green, while the spectral peak of the air mode emission lies between the infrared and red frequencies.
In \Fig{fig2} (b), the fractional contributions of the SPP, direct quenching, and air mode channels are shown. The SPP channels occupy roughly 70\% of the total emission energy in most of the frequency region. This fraction starts to decrease significantly when the photon energy exceeds 2.8 eV. On the other hand, the fraction of the air mode channel decreases smoothly as the frequency increases. This indicates that increasing the source power, i.e., the number of tunneling electrons, improves the emitted power into the air mode more efficiently in the low-frequency region. The fraction of the direct quenching increases towards larger frequencies, which indicates that the adoption of conducting metals with low impedance may be useful to decrease the direct quenching in the high-frequency region.
\begin{figure}\centering
\includegraphics[scale=0.5]{fig3}
\caption{ (a) Separate contribution to the SPPs (red) and direct quenching (blue) from tip (solid) and substrate (dashed), respectively. (b) Same as (a), but plotted in percentage.}
\label{fig3}
\end{figure}
We continue to discuss the different roles of the metallic tip and the substrate in the STML. The emission into the SPP channels and the direct quenching occur at the apex of the metallic tip and at the substrate. In the discussion above, the SPPs of the metallic tip and the substrate are considered as different channels, while the direct quenching in these two places is considered as one channel. To analyze the difference in energy transfer between the tip and the substrate, we present the emission into the SPP channel and the direct quenching for both the tip and the substrate in \Fig{fig3}. \Fig{fig3} (a) shows that the SPPs and the direct quenching at the tip are much larger than those at the surface of the substrate. The tip SPPs take the majority of the total power that excites the propagating SPP modes. The SPP propagating along the substrate has only two peaks, i.e., at 1.55 eV and 2.2 eV, in comparison with four peaks associated with the tip SPPs, the largest of which is at 2.5 eV. This shows that the tip SPP channel is dominant over the substrate channel in the present setup. The direct quenching at the tip is also dominant over that of the substrate, since the direct quenching at the surface of the substrate is almost zero. Evidently, the typical shape of the tip, as well as the geometric structure between the tip and the substrate, has a substantial impact on the quenching process in the STML.
\Fig{fig3} (b) shows the fractional contributions of the SPPs and the direct quenching, distributed between the tip and the substrate. We find that the fraction of direct quenching in either the tip or the substrate is approximately constant, while the distribution of the SPP emission between tip and substrate changes monotonically as the frequency increases. Since the gap behaves as a plasmonic cavity in the STML, the changing ratio indicates that the surface shape influences the SPP coupling characteristics differently at different frequencies. The almost constant quenching fractions reveal the frequency-independent character of the quenching process in the STML.
\section{Conclusion}
In summary, we studied the energy transfer into four different channels in the STML. Our main result is that the majority of the energy radiated by the tunneling electrons is transferred into the SPP mode on the surface of the tip. The direct quenching at the apex of the tip also accounts for a large fraction. These two channels take most of the energy away, so that the energy that can be collected and utilized in the air mode and in the SPPs along the substrate surface is only a small portion of the total energy from inelastic electron tunneling. Based on our results, we propose possible methods to increase the energy funneled into the substrate SPPs. Firstly, a nanostructure with a sharp apex geometry positioned on the substrate may help to increase the percentage of energy funneled into the substrate SPPs. Secondly, decreasing the non-radiative loss is also important, especially in the high-frequency region.
\section*{Appendix A: Electric field inside the metallic region of the propagating cylindrical SPPs along the metal-air interface}
The electric field of the SPP mode generated by an electric dipole emitter (located at $\bm r_0$ on the z-axis) with dipole moment $\bm p$ can be written as $\bm E(\rho ,\varphi ,z) = i\omega \mu_0 \bar{\bm G}_{SPP}(\bm r, \bm r_0)\, \bm p$, and the SPP contribution to the dyadic Green's function, i.e., $\bar{\bm G}_{SPP}(\bm r, \bm r')$, reads
\begin{equation}\label{Gspp}
\bar{\bm G}_{SPP}(\bm r, \bm r')=\int_{\rm{0}}^\infty \frac{{b}_{1}^2 {\kappa}_{ \rho}^{2 } e^{b_2{\kappa}_{\rho}(z+z^{'})}}{a(k_{SPP}^{2}-{\kappa}_{\rho}^{2})} d\kappa_\rho \left( \begin{matrix}
-J_{0}^{''}({{\kappa }_{\rho }}\rho )
& 0
& -{{b}_{1}}J_{0}^{'}({{\kappa }_{\rho }}\rho )
\\0
& -\frac{J_{0}^{'}({{\kappa }_{\rho }}\rho )}{{{\kappa }_{\rho }}\rho }
& 0
\\ {{b}_{1}}J_{0}^{'}({{\kappa }_{\rho }}\rho )
& 0
& {{b}_{1}}^{2}{{J}_{0}}({{\kappa }_{\rho }}\rho ) \\
\end{matrix} \right),
\end{equation}
where $\kappa_\rho$ is the amplitude of the in-plane wave vector, while $a=\pi{\sqrt{\epsilon _{1}(-\epsilon _{2})}} {\left(1-\frac{{\epsilon_1}^2}{{\epsilon_2}^2}\right)}
\frac{\epsilon _{1}+\epsilon _{2}}{\epsilon _{1}\epsilon _{2}} $, $ b_1=-\sqrt{{\epsilon _{1}}/{(-\epsilon _{2})}}$ and $ b_2=\sqrt{{(-\epsilon _{2})}/{\epsilon _{1}}}$, and $\epsilon _{1}$ ($\epsilon _{2}$) is the dielectric constant of air (metal). $J_1$ and $J_0$ are the first- and zeroth-order Bessel functions, and $k_{SPP}$ is the wave number of the SPPs. For an electric dipole emitter of unit dipole moment oriented along the z-axis, i.e., $\bm p=[0,0,1]^T$, the electric field in cylindrical coordinates can be calculated from \Eq{Gspp} as follows,
\begin{equation}\label{Efield}
\bm E(\rho ,\varphi ,z) =\int_{\rm{0}}^\infty
\frac{{b}_{1}^3 {\kappa}_{ \rho}^{2 } e^{b_2{\kappa}_{\rho}(z+z^{'})}}{a(k_{SPP}^{2}-{\kappa}_{\rho}^{2})}
{d\kappa_{\rho}\left(
{{J}_{1}}({{\kappa }_{\rho }}\rho )
{{\bm{e}_{\rho }}}
+{{b}_{1}}{{J}_{0}}({{\kappa }_{\rho }}\rho )
{{\bm{e}_{z}}}\right)}.
\end{equation}
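We note that the $\kappa_\rho$ integral in \Eq{Efield} is amenable to standard numerical quadrature: for a lossy metal, $k_{SPP}$ acquires an imaginary part, so the pole of the integrand lies off the real axis. A minimal sketch (lengths in units of $1/k_0$; the permittivities and coordinates are illustrative values, not those of our model):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

eps1 = 1.0                                 # air
eps2 = -20.0 + 1.0j                        # illustrative metal permittivity
k_spp = np.sqrt(eps1*eps2/(eps1 + eps2))   # SPP wave number (k0 = 1)
b1 = -np.sqrt(eps1/(-eps2))
b2 = np.sqrt((-eps2)/eps1)
a = (np.pi*np.sqrt(eps1*(-eps2)) * (1 - eps1**2/eps2**2)
     * (eps1 + eps2)/(eps1*eps2))

def integrand(kr, rho, z, zs):
    """kappa_rho integrand of the SPP field inside the metal (z < 0)."""
    pref = b1**3 * kr**2 * np.exp(b2*kr*(z + zs)) / (a*(k_spp**2 - kr**2))
    return pref*j1(kr*rho), pref*b1*j0(kr*rho)    # E_rho and E_z parts

def E_field(rho, z, zs=-0.05, kmax=50.0):
    """(E_rho, E_z) by quadrature; exponential decay justifies kmax."""
    out = []
    for i in (0, 1):
        f = lambda k, i=i: integrand(k, rho, z, zs)[i]
        re = quad(lambda k: f(k).real, 0, kmax,
                  points=[k_spp.real], limit=1000)[0]
        im = quad(lambda k: f(k).imag, 0, kmax,
                  points=[k_spp.real], limit=1000)[0]
        out.append(re + 1j*im)
    return out

E_rho, E_z = E_field(rho=5.0, z=-0.1)
print(f"E_rho = {E_rho:.4e}, E_z = {E_z:.4e}")
\end{verbatim}
The field components obtained in this way can then be inserted into \Eq{Qrh} to obtain the ratio of the propagation losses in the 1st and 41st $\lambda$ areas.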
\begin{acknowledgments}
Y. Chen acknowledges financial support from the National Natural Science Foundation of China (Grant No. 61405067), and the Fundamental Research Funds for the Central Universities, HUST: 2017KFYXJJ027. J.T. L\"u acknowledges financial support from the National Natural Science Foundation of China (Grant No. 61371015).
\end{acknowledgments}
\section*{}
\vspace{-1cm}
\footnotetext{\textit{$^{a}$~Department of Chemistry, KU Leuven, Celestijnenlaan 200F,
B-3001 Leuven, Belgium.
Tel: +32 16 32 7939; E-mail: [email protected]}}
\section{Introduction} \label{sec:intro}
It is usually assumed in quantum chemistry without further discussion that the Hamiltonian is
Hermitian and the energies, which are obtained as eigenvalues of the Hamiltonian, thus real.
However, already in 1928, George Gamow recognized that $\alpha$ decay of radioactive nuclei
can be modeled in terms of a non-Hermitian Hamiltonian and complex-valued energies, whose
imaginary parts are interpreted as decay widths.\cite{gamov28}
Whereas $\alpha$ decay has always been of central importance for nuclear physics, and complex
energies hence occur quite frequently in this field,\cite{myo20} corresponding electronic decay
processes played a subordinate role for chemistry for many years. Complex energies were
considered to be an exotic phenomenon and were discussed mostly as an
unwanted byproduct of some electronic-structure models, notably truncated coupled-cluster
methods.\cite{haettig05,koehn07,kjonstad17a,thomas21} However, driven by the growing
relevance of processes involving unbound electrons, the importance of electronic decay for
chemistry has increased in recent years and is expected to continue increasing. State-of-the-art
experimental techniques make it possible to create, in a controlled manner, environments where
selected electrons are no longer bound to the nuclei. For example, core vacancies produced by
X-rays,\cite{xraybook,norman18,zimmermann20} temporary anions obtained by attachment of
slow electrons,\cite{simons11,herbert15,alizadeh15} and molecules exposed to quasistatic laser
fields\cite{scrinzi05,gallmann12} all undergo electronic decay.
The subject of this feature article is the quantum-chemical treatment of states that govern
electronic decay processes, so-called electronic resonances, in terms of
complex energies.\cite{nhqmbook,reinhardt82,jagau17} The resonance
phenomenon as such is, however, relevant to other branches of science as well: $\alpha$
decay of radioactive nuclei, predissociation of van der Waals complexes, as well as leaky
modes of optical waveguides all represent examples. An excellent overview is available
from Ref. \citenum{nhqmbook}. A recent perspective on rotational-vibrational resonance
states and their role in chemistry is available from Ref. \citenum{csaszar20}. Furthermore,
non-Hermitian Hamiltonians have been used to model molecular electronics.\cite{baer03,
ernzerhof06}
Although substantial methodological progress has been made in recent years, \textit{ab initio}
modeling involving electronic decay remains very challenging. Except for special cases in
which a resonance can be easily decoupled from the embedding continuum, for example,
a core-vacant state by means of the core-valence separation,\cite{cederbaum80} coupling to
the continuum needs to be considered but quantum-chemical methods designed for bound
states, i.e., the discrete part of the Hamiltonian's spectrum, cannot accomplish this without time
propagation.\cite{nhqmbook,jagau17} While a complete description of resonances and decay
phenomena is obtained by solving the time-dependent Schr\"odinger equation, these approaches
entail tremendous computational cost and remain limited to very small systems.
This makes a time-independent treatment of resonances desirable. Several approaches have
been developed for this purpose: A pragmatic approach consists in extrapolating results from
bound-state calculations using various stabilization\cite{hazi70,nestmann85,mandelshtam94}
or analytic continuation\cite{simons81b,mccurdy83,frey86,horacek10,horacek15,sommerfeld15b,
white17b} methods. A more rigorous description is offered by scattering theory where one
imposes scattering boundary conditions on the solution of the Schr\"odinger equation.\cite{
nhqmbook,taylor72,domcke91} This works very well for atoms and model systems but the
integration into molecular electronic-structure theory is challenging. Important
approaches for electron-molecule collisions based on scattering theory include the R-matrix
method,\cite{tennyson10} the Schwinger multichannel method,\cite{dacosta15} and the discrete
momentum representation method.\cite{lane80} Overviews of recent developments in this field
are available, for example, from Refs. \citenum{ingolfsson19,gorfinkiel20,masin20}. In the
following, the treatment of molecular electronic resonances in terms of scattering theory is
not discussed further.
To certain types of resonances, the theory by Fano and Feshbach\cite{fano61,feshbach62,
domcke91,averbukh05,kolorenc20} can be applied, which treats a resonance as a bound
state superimposed on the continuum. Here, a projection-operator formalism is used to
divide the Hilbert space into a bound and a continuum part. A distinct advantage is that
standard quantum-chemical methods can be used but the critical step consists in the
definition of the projector. Also, additional steps need to be taken to extract information
on the decay process from the discretized representation of the electronic continuum
obtained in such calculations. This can be done implicitly, for example, by means of
Stieltjes imaging,\cite{langhoff74,carravetta87} or alternatively by modeling the wave
function of the outgoing electron explicitly.\cite{zaehringer92,inhester12,skomorowski21}
A further, more general approach relies on recasting electronic resonances as $\mathcal{L}^2$
integrable wave functions by means of complex-variable techniques,\cite{nhqmbook,jagau17}
in particular complex scaling,\cite{aguilar71,balslev71,simon72,moiseyev98} complex-scaled
basis functions,\cite{mccurdy78,moiseyev79} and complex absorbing potentials.\cite{jolicard85,
riss93} No explicit treatment of the electronic continuum is needed here, which offers the
possibility to apply methods and concepts from bound-state quantum chemistry. This
affords a description of decaying states in terms that are familiar to quantum chemists
such as molecular orbitals and potential energy surfaces.
The latter methods, which revolve around non-Hermitian Hamiltonians with complex energy
eigenvalues, are the focus of the present feature article. The remainder of the article is structured
as follows: In Sec. \ref{sec:res}, an overview of different types of electronic resonances and their
relevance for chemistry is given. Sec. \ref{sec:cv} summarizes the theoretical foundations of
complex-variable techniques and discusses practical aspects of their implementation into
quantum-chemical program packages. Sec. \ref{sec:dev} showcases a number of recent
methodological developments together with exemplary applications, while Sec. \ref{sec:conc}
presents some general conclusions and speculations about future developments in the field.
\section{Electronic resonances} \label{sec:res}
Many types of electronic decay processes exist and the electronic structure of the corresponding
resonance states is as diverse as that of bound states. In the following, an overview of several
types of states is given: Autodetaching anions, core-vacant states, and Stark resonances formed
in static electric fields. In addition, the distinction between shape and Feshbach resonances is
discussed.
\subsection{Autodetaching anions} \label{sec:anion}
If one considers a neutral molecule, the corresponding anion is stable only if it is lower in energy,
that is, if the electron affinity is positive. If this is not the case, it does not imply that a free electron
is necessarily scattered off elastically. Instead, a temporary anion with a lifetime in the range of
femtoseconds to milliseconds can form in an electron-molecule collision.\cite{simons08,simons11,
herbert15,jagau17,nhqmbook} These species are ubiquitous; in fact, most closed-shell molecules
do not form stable but only temporary anions in gas phase.\cite{simons08,herbert15}
Temporary anions play a central role for dissociative electron attachment (DEA) and related
electron-induced reactions.\cite{herbert15,fabrikant17,ingolfsson19} Often, reaction barriers
that are insurmountable on the potential energy surface of the neutral species can be
overcome in the presence of catalytic electrons with a few electron volts of kinetic energy.
This is widely exploited in organic synthesis using electronically bound anions\cite{studer14}
but can also involve temporary anions. The latter are, for example, relevant to plasmonic
catalysis,\cite{aslam18} nitrogen fixation using cold plasma,\cite{li18} and radiation-induced
damage to living tissue.\cite{boudaiffa00,alizadeh15}
A further type of resonance are excited states of stable closed-shell anions.\cite{simons08,
simons11,herbert15} Species such as F$^-$, OH$^-$, and CN$^-$ only rarely
support bound excited states; electronic excitation usually entails electron detachment. The
corresponding photodetachment spectra often feature distinct fingerprints of metastable
states.\cite{jagau15,lyle18,lyle19} Also, dianions including common species such as O$^{2-}$,
SO$_4^{2-}$, and CO$_3^{2-}$ are almost never stable against electron loss in gas phase
and exist as bound electronic states only if a solvation shell or some other environment is
present.\cite{dreuw02}
All these anions have in common that they are beyond the reach of standard quantum-chemical
methods.\cite{jagau17} This is illustrated in Fig. \ref{fig:basis}. Since one always uses a finite
number of basis functions in an actual computation, the continuum is discretized. If the basis
is small, usable approximations of resonances as bound states may be obtained in some cases
because no basis function can describe the coupling to the continuum and, consequently, the
decay process. However, this is a crude and uncontrolled approximation: If the size of the basis is
increased, the representation of the continuum improves and the resonance cannot be associated
with a single state anymore. It \textit{dissolves} in the continuum.\cite{jagau17,bravaya13,jagau16b}
An electronic-structure calculation will collapse to a continuum-like solution where one electron
is as far away from the molecule as the basis set permits. In the limit of an infinite basis set
(see right-hand side of Fig. \ref{fig:basis}), where the continuum is represented properly, one
observes an increased density of states in a certain energy range. Since the position and width
of such a peak can be associated with the energy and inverse lifetime of a resonance, the terms
\textit{resonance position} and \textit{resonance width} have been coined.\cite{nhqmbook,
reinhardt82,jagau17}
\begin{figure} \centering
\includegraphics[scale=0.5]{basis.pdf}
\caption{Description of an autodetaching anion X$^-$ in a standard electronic-structure
calculation. Left: When the size of the basis set is increased, the description of the continuum
(gray) improves and the resonance (blue) cannot be associated with a single discrete state
anymore. Right: In a complete basis set, the resonance can be associated with a peak in
the density of continuum states. The onset of the electronic continuum is shown in red.
Reproduced with permission from Ref. \citenum{jagau20}.}
\label{fig:basis}
\end{figure}
\subsection{Core-vacant states} \label{sec:core}
Electronic resonances are also encountered when moving beyond anions with the difference
that the decay of neutral or cationic species is termed autoionization instead of autodetachment.
Neutral states above the first ionization threshold are also called superexcited states. An
important subgroup of autoionizing resonances consists in core-vacant states, which are
created by interaction with X-rays in various spectroscopies.\cite{xraybook,norman18,
zimmermann20} These techniques typically involve photon energies larger than 200 eV
up to 1000 eV so that the resulting states are located at much higher energies than the
temporary anions from Sec. \ref{sec:anion}.
Core-vacant states are subject to the Auger-Meitner effect,\cite{meitner22,auger23} in which
the core vacancy is filled while a second electron is emitted into the ionization continuum.
Different variants of Auger decay and the corresponding decay channels can be identified
in Auger electron spectroscopy\cite{xraybook} by measuring the kinetic energy of the emitted
electrons. Several important non-radiative decay processes that can follow
initial core ionization or core excitation,\cite{brown80,piancastelli87,kempgens99,armen00}
are depicted in Fig. \ref{fig:auger}. Further related processes besides those in Fig. \ref{fig:auger}
include double Auger decay,\cite{carlson65} where two electrons are simultaneously emitted,
and various shake-up and shake-off mechanisms,\cite{koerber66,hotokka84,colle90}
where an additional valence electron is excited or ionized, respectively.
It is also common that the target states of Auger decay undergo further decay resulting
in so-called Auger cascades.\cite{xraybook}
An important common aspect of core-vacant states is that they can be modeled as bound
states with much better accuracy than other types of resonances. In particular, it is possible
to project out the ionization continuum by means of the core-valence separation.\cite{
cederbaum80,coriani15,zheng19,vidal19} This is done by removing those configurations
from the Hilbert space that describe the Auger decay. Notably, such decoupling is not
possible for core-excited states above the respective core-ionization threshold (right panel
of Fig. \ref{fig:auger}).
Related to Auger decay are non-local processes in which the emitted electron does not
stem from the atom or molecule in which the core hole was located. The prime example
is intermolecular Coulombic decay (ICD),\cite{cederbaum97} but there are further flavors
such as electron-transfer mediated decay\cite{zobeley01} and interatomic Coulombic
electron capture.\cite{gokhberg09} ICD and related processes are possible at considerably
lower energies than Auger decay and thus presumed to be more widespread.\cite{jahnke20}
Also, the efficiency of ICD increases in the presence of many neighboring molecules that
can be ionized, which further contributes to its relevance in complex systems. It has even
been claimed that \textit{ICD is everywhere}.\cite{ouchi11}
\begin{figure} \centering
\includegraphics[scale=0.34]{auger.pdf}
\caption{Non-radiative decay of core-vacant states: In Auger
decay depicted in the left panel, a core-ionized cationic state decays into a
dicationic state. In resonant Auger decay depicted in the middle, a core-excited neutral state
decays into a cationic state. There are also core-excited states above the
respective core-ionization threshold that can undergo a one-electron decay process in which
the core hole is not filled as shown in the right panel.}
\label{fig:auger}
\end{figure}
\subsection{Quasistatic ionization} \label{sec:quasi}
A further type of electronic resonance, which is not connected to autodetachment or autoionization,
arises when atoms or molecules are exposed to intense laser fields that are comparable in strength
to the intramolecular forces. Under such conditions, quasistatic ionization takes place and bound
states are turned into Stark resonances.\cite{nhqmbook,reinhardt82} This process underlies many
phenomena observed in strong laser fields, in particular high-harmonic generation.\cite{scrinzi05,
gallmann12} While a comprehensive discussion of atoms and molecules in laser fields is beyond
the scope of this article, a brief account of quasistatic ionization is given in the following.
One can distinguish alternating current (ac) Stark resonances formed in time-dependent electric
fields and direct current (dc) Stark resonances formed in static electric fields. The formation of dc
Stark resonances can be understood in terms of Fig. \ref{fig:stark}: Owing to the distortion of the
potential, electrons can leave the system by tunneling. At even higher field strengths, electrons can
leave above the barrier, which is termed barrier-suppression ionization.\cite{scrinzi05,scrinzi99}
Strictly speaking, tunnel ionization is already possible at infinitesimally low field strengths meaning
that the Hamiltonian of a molecule in the presence of an external electric field never supports any
bound states.\cite{herbst79,herbst81,caliceti07}
\begin{figure}
\includegraphics[scale=0.5]{stark.pdf}
\caption{Distortion of a Coulombic potential by a static electric field in one dimension. A stable
energy level (left) becomes metastable in the presence of a field (middle). At higher field
strengths, barrier-suppression ionization is possible (right). Reproduced with permission from
Ref. \citenum{jagau16}.}
\label{fig:stark}
\end{figure}
This effect can be ignored if the field strength is low enough and a treatment of light-matter
interaction in terms of response properties is possible.\cite{helgaker12} However, the radius
of convergence of a perturbative expansion in powers of the field strength is determined by
the tunnel ionization rate.\cite{caliceti07} At higher field strengths, where decay widths are
significant, a treatment of the external field as a perturbation is not valid.
To determine under which conditions quasistatic ionization actually takes place, not only the
field strength but also the frequency of the laser needs to be considered.\cite{scrinzi05,gallmann12,
reiss08} For small molecules in the electronic ground state, field strengths of the order of $10^{-4}$
to 1 a.u. are usually of interest. This exceeds field strengths typical for photochemistry by orders
of magnitude; the realization of such conditions is, however, no problem in modern laser
experiments.\cite{scrinzi05,gallmann12} By means of Keldysh's adiabaticity parameter, different
ionization mechanisms can be distinguished.\cite{keldysh65} This parameter relates the tunneling
time to a wave period and thus provides an estimate of whether the ionization can be thought of
as a static process as depicted in Fig. \ref{fig:stark}. If this is the case, the time dependence of
the field can be neglected, hence the name \textit{quasistatic} ionization.
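In atomic units, the Keldysh parameter reads $\gamma = \omega \sqrt{2 I_p}/F$ with $I_p$ as ionization potential, $\omega$ as laser frequency, and $F$ as peak field strength; $\gamma \ll 1$ indicates the quasistatic regime, $\gamma \gg 1$ the multiphoton regime. A minimal numerical sketch (the conditions are illustrative):
\begin{verbatim}
import numpy as np

def keldysh_gamma(ip_ev, wavelength_nm, intensity_wcm2):
    """Keldysh parameter gamma = omega*sqrt(2*Ip)/F in atomic units."""
    ip = ip_ev / 27.211386                       # Ip in hartree
    omega = 45.5633525 / wavelength_nm           # photon energy in a.u.
    f = np.sqrt(intensity_wcm2 / 3.50944758e16)  # peak field in a.u.
    return omega * np.sqrt(2.0*ip) / f

# Ip = 13.6 eV at 800 nm and 1e14 W/cm^2 (illustrative conditions)
print(f"gamma = {keldysh_gamma(13.6, 800.0, 1e14):.2f}")   # ~1.07
\end{verbatim}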
The quasistatic picture leads to a considerable simplification because one can work with a time-independent
Hamiltonian. For the hydrogen atom, an estimate of the tunnel ionization rate was obtained
already in 1928\cite{oppenheimer28} and later refined and extended.\cite{landau77,yamabe78,
ammosov86,tong02,batishchev10,tolstikhin14,yue17} The accurate evaluation of molecular
quasistatic ionization rates has remained an active research field until today and is of relevance
to many experiments where strong fields are applied. Especially the modeling of the angular
dependence of the ionization rate is a challenge. This is caused by the exponential dependence
of the ionization rate on the binding energy; in a polyatomic molecule the contribution of individual
channels to the overall ionization rate varies greatly depending on orientation.\cite{jagau18,
hernandez19,hernandez20}
\subsection{Shape and Feshbach resonances} \label{sec:restype}
One can group metastable electronic states into shape resonances, which decay by a one-electron
process, and Feshbach resonances, which decay by a two-electron process.\cite{nhqmbook,
jagau17} The distinction is illustrated by Fig. \ref{fig:restype} for autodetaching anions. Notably,
polyatomic molecules usually feature resonances of both types. Examples of shape resonances
include \vspace{-0.1cm}
\begin{itemize}
\item temporary radical anions formed by electron attachment to neutral ground states,\cite{
simons08,simons11,herbert15} \vspace{-0.2cm}
\item low-lying excited states of closed-shell anions,\cite{simons08,simons11,herbert15} \vspace{-0.2cm}
\item most metastable dianions and more highly charged anions,\cite{dreuw02} \vspace{-0.2cm}
\item some superexcited states of neutral molecules,\cite{platzman62,hatano03} \vspace{-0.2cm}
\item core-excited states above the respective core-ionization threshold,\cite{piancastelli87,
kempgens99} \vspace{-0.2cm}
\item Stark resonances formed in static or dynamic electric fields.\cite{reinhardt76,scrinzi99,
scrinzi05,gallmann12,majety15b}
\end{itemize}
Examples of Feshbach resonances include \vspace{-0.1cm}
\begin{itemize}
\item core-ionized states that undergo Auger decay,\cite{meitner22,auger23,xraybook} \vspace{-0.2cm}
\item core-excited states that undergo resonant Auger decay,\cite{brown80,armen00} \vspace{-0.2cm}
\item related species involving more than one molecule that decay through intermolecular
Coulombic decay,\cite{cederbaum97,jahnke20} \vspace{-0.2cm}
\item anionic states formed by electron attachment to Rydberg states,\cite{schulz73,
ibanescu07,ibanescu08} \vspace{-0.2cm}
\item superexcited Rydberg states,\cite{platzman62,klinker18,plunkett19,rabadan21} \vspace{-0.2cm}
\item states of high spin multiplicity that decay only by spin-orbit coupling.\cite{sommerfeld98b,
dreuw99,dreuw99b,dreuw99c}
\end{itemize}
\begin{figure} \centering
\includegraphics[scale=0.49]{restype.pdf}
\caption{Electronic structure of shape (left) and Feshbach (right) resonances. If the metastable
S$_1$ state of the anion X$^-$ lies above its decay channel D, the decay is a one-electron
process. If it lies below state D of the neutral molecule and at the same time above another
decay channel D', the decay is a two-electron process.}
\label{fig:restype}
\end{figure}
Shape resonances can be easily understood in real coordinate space: The shape of the effective
potential is responsible for the metastable nature of the resonance, meaning the electron is
trapped behind a potential wall but it can leave the system by tunneling.\cite{nhqmbook} In
the case of radical anions this potential wall is given by the sum of the centrifugal potential
and the molecular Coulomb potential,\cite{simons08,herbert15} whereas in dianions a repulsive
Coulomb barrier is formed by the sum of short-range attractive valence interactions and
long-range repulsion between the outgoing electron and the rest of the anion.\cite{simons08,
herbert15} For Stark resonances, the potential wall is formed through the combination of the
molecular potential and the external field.\cite{scrinzi05}
For Feshbach resonances, decay by a one-electron process is energetically impossible. The
metastability only originates from the coupling to another decay channel and
involves more than one degree of freedom. Feshbach resonances are thus more naturally
described in the space of electronic configurations.\cite{jagau17,nhqmbook} They can be
viewed as a superposition of two wave functions, one of which is a bound state while the
other has continuum character. This also forms the motivation for the theory by Fano
and Feshbach\cite{fano61,feshbach62} where one aims to decouple a
Feshbach resonance from the continuum so that bound-state methods become applicable.
Owing to the stronger coupling to the continuum, shape resonances are, in general, shorter
lived than Feshbach resonances.\cite{nhqmbook,jagau17} However, this is not always the
case because other factors influence the lifetime as well. Also, the character of a molecular
resonance can vary across the potential surface as decay channels open or close.
\section{Complex-variable techniques} \label{sec:cv}
\subsection{General theory} \label{sec:cvgen}
Since electronic resonances belong to the continuum, they cannot be described as stationary
solutions of the time-independent Schr\"odinger equation in Hermitian quantum mechanics.
Complex-variable techniques based on non-Hermitian quantum mechanics\cite{jagau17,
reinhardt82,nhqmbook} offer the possibility to describe electronic resonances in terms of
discrete eigenstates. On a qualitative level, it is easy to see that the so-called Siegert
representation\cite{siegert39}
\begin{equation} \label{eq:sieg1}
E = E_R - i \, \Gamma/2
\end{equation}
describes a state with finite lifetime. The imaginary part of the energy leads to decay as the
time evolution of the wave function shows:
\begin{equation} \label{eq:sieg2}
\Psi_\text{res} (t) = \exp [-i \, E \, t] \cdot \Psi_\text{res} (0) = \exp [- i\, E_R \, t ] \cdot
\exp [- \Gamma \, t/2] \cdot \Psi_\text{res} (0)~.
\end{equation}
The probability density thus decays in time as $\exp[-\Gamma \, t]$ and the
resonance width $\Gamma$ is connected to the characteristic lifetime $\tau$ through
$\Gamma = 1/\tau$. Although there remain fundamental issues with non-Hermitian
quantum mechanics that are yet to be solved, for example, the formulation
of closure relations for non-Hermitian Hamiltonians,\cite{nhqmbook} Eq. \eqref{eq:sieg1}
offers conceptual simplicity and emphasizes the similarity between resonance and
bound-state wave functions. This equation forms the basis for extending quantum
chemistry of bound states to electronic resonances.
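In practice, this means that a width expressed in energy units translates directly into a lifetime. A minimal sketch of the conversion (the width is an arbitrary illustrative value):
\begin{verbatim}
# Width-lifetime conversion, Gamma = 1/tau in atomic units (hbar = 1);
# 1 a.u. of time corresponds to 2.4188843265e-2 fs.
AU_TIME_FS = 2.4188843265e-2
EV_PER_HARTREE = 27.211386

gamma_ev = 0.05                       # illustrative width in eV
gamma_au = gamma_ev / EV_PER_HARTREE
tau_fs = AU_TIME_FS / gamma_au
print(f"Gamma = {gamma_ev} eV  ->  tau = {tau_fs:.2f} fs")   # 13.16 fs
\end{verbatim}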
A more rigorous justification of Eq. \eqref{eq:sieg1} and the concept of complex-valued energies
requires an analysis in terms of scattering theory, which is available, for example, in Refs.
\citenum{nhqmbook,taylor72,domcke91}. The central quantity considered there is the scattering
matrix $\mathbf{S}(k)$, which connects initial and final states of a system undergoing a scattering
process, i.e., an electron-molecule collision in the case of electronic resonances. The poles of
$\mathbf{S}(k)$ in the complex momentum plane, which represent the values of $k$ at which
the amplitude of the incoming wave vanishes, can be associated with resonances and bound
states as illustrated in Fig. \ref{fig:kplane}.
\begin{figure} \centering
\includegraphics[scale=0.60]{kplane.pdf}
\caption{Schematic representation of the complex momentum plane. Poles of S on the positive
Im(k)-axis correspond to bound states ($\times$) whereas resonances ($\otimes$) lie in the
fourth quadrant. Poles on the negative Im(k)-axis and in the third quadrant are antibound states
($\times$) and capture resonances ($\otimes$), respectively. These states are
in general beyond the reach of complex-variable methods and not discussed further here.
However, note that antibound states are also referred to as virtual states and related to
s-wave scattering.\cite{taylor72}}
\label{fig:kplane}
\end{figure}
Decaying resonances, meaning the poles of $\mathbf{S}(k)$ in the fourth quadrant
in Fig. \ref{fig:kplane} are connected to peaks in the density of states in the continuum.
The full width at half maximum $\Gamma$ of a peak centered at $E_R$ is given by Eq.
\eqref{eq:sieg1} as can be derived by considering a closed contour integral in the lower half
of the complex momentum plane\cite{nhqmbook,moiseyev98}
\begin{equation}
N = \frac{1}{2\pi i} \oint_C \frac{\partial \text{ln} \mathbf{S}(k)}{\partial k}
\mathrm{d}k
\label{eq:contour}
\end{equation}
with $N$ as the number of poles in the lower half of the complex momentum
plane, and applying the residue theorem to it. While a bound state with purely imaginary $k$
has an energy $E = k^2/2 \in \mathbb{R}^-$, a $k$-value in the fourth quadrant leads to an
energy
\begin{equation} \label{eq:sieg3}
E = 1/2 \cdot [\text{Re}^2(k) - \text{Im}^2(k) - 2\, i \, \text{Im}(k) \text{Re}(k)]~ \in \mathbb{C}~,
\end{equation}
where the imaginary part is strictly negative. The comparison of Eqs. \eqref{eq:sieg1} and
\eqref{eq:sieg3} shows that the location of a resonance in the complex momentum plane is
directly related to its position and width, i.e., $E_R$ and $\Gamma$.
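A minimal numerical sketch of this mapping (the pole position is an illustrative value, not taken from any particular system):
\begin{verbatim}
# Map a pole in the fourth quadrant of the complex momentum plane onto
# the Siegert parameters via E = k^2/2 (atomic units).
k_pole = 0.40 - 0.02j           # Re(k) > 0, Im(k) < 0
E = 0.5 * k_pole**2

E_res = E.real                  # resonance position E_R
gamma = -2.0 * E.imag           # resonance width Gamma
print(f"E_R = {E_res:.4f} a.u., Gamma = {gamma:.4f} a.u.")
\end{verbatim}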
A resonance wave function corresponding to a pole of $\mathbf{S}(k)$ in
the fourth quadrant in Fig. \ref{fig:kplane} behaves asymptotically, in the simplest case, like
$\sim \exp [i \, \text{Re}(k) \, r] \; \exp [-\text{Im}(k) \, r]$. This means that these wave functions
diverge in space; they are thus outside the reach of quantum-chemical methods designed
for $\mathcal{L}^2$ integrable states. Such outgoing boundary conditions
as well as other boundary conditions used in scattering theory\cite{taylor72} are difficult
to implement into quantum-chemistry software.\cite{nhqmbook,meyer82,
jagau17,masin20} An elegant solution is to regularize the diverging wave
functions so that bound-state methods become applicable, which can be achieved by
different techniques. Of interest to this article are complex scaling\cite{aguilar71,balslev71,
simon72,mccurdy78} and complex absorbing potentials,\cite{jolicard85,riss93} which are
discussed in Secs. \ref{sec:cs} to \ref{sec:capbas}. Both approaches lead to a non-Hermitian
Hamiltonian with complex eigenvalues that can be interpreted according to Eq. \eqref{eq:sieg1}
and with corresponding eigenfunctions that are $\mathcal{L}^2$ integrable.
Non-Hermitian Hamiltonians have, in general, different left and right eigenvectors.\cite{nhqmbook}
However, when working with complex scaling or complex absorbing potentials, it is possible
to choose identical left and right eigenvectors if the Hamiltonian is real-valued before the
complex-variable treatment is applied. This implies that the matrix representation becomes
complex symmetric. As a consequence, the usual scalar product needs to be replaced by the
so-called $c$-product\cite{nhqmbook,reinhardt82,moiseyev78,moiseyev98}
\begin{equation} \label{eq:cprod}
\langle \Psi_i (r) | \Psi_j (r) \rangle = \int dr \, \Psi_i (r) \, \Psi_j (r)
\end{equation}
where the state on the left is not complex conjugated. The $c$-product is sometimes denoted
by round brackets $(\dots | \dots)$ instead of angle brackets $\langle \dots | \dots \rangle$ but
this practice is not followed here to avoid confusion with Mulliken and Dirac notation for
electron-repulsion integrals. Instead, angle brackets are kept and the use of the $c$-product
is always implied when discussing complex-variable methods.
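The practical consequence is that matrix elements are evaluated without complex conjugation of the bra. A minimal numerical illustration for a random complex symmetric matrix (the rare case of self-orthogonal eigenvectors with $\langle v | v \rangle = 0$ is ignored here):
\begin{verbatim}
import numpy as np

# c-product normalization for a complex symmetric matrix H = H^T: left
# and right eigenvectors coincide and are normalized with v.T @ v.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
h = 0.5*(a + a.T)                      # complex symmetric test matrix

evals, vecs = np.linalg.eig(h)
for i in range(4):
    v = vecs[:, i] / np.sqrt(vecs[:, i] @ vecs[:, i])   # <v|v> = 1
    assert np.allclose(v @ h @ v, evals[i])   # expectation = eigenvalue
print("c-normalized eigenpairs verified")
\end{verbatim}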
A further consequence of non-Hermiticity is the complex-variational principle,\cite{braendas77,
moiseyev78,moiseyev82}
\begin{equation} \label{eq:cvar}
\tilde{E} = \langle \tilde{\Psi} | H | \tilde{\Psi} \rangle / \langle \tilde{\Psi} | \tilde{\Psi} \rangle ~ \in
\mathbb{C}
\end{equation}
that holds for any $c$-normalizable trial wave function $\tilde{\Psi}$. Eq. \eqref{eq:cvar} replaces
the usual variational principle for all complex-variable methods alike and is a stationarity principle
for the complex energy rather than an upper or lower bound for its real or imaginary part.
\subsection{Formal aspects of complex scaling} \label{sec:cs}
Complex scaling\cite{nhqmbook,reinhardt82,aguilar71,balslev71,simon72,reed82,loewdin88,
loewdin89,moiseyev98} is a mathematically rigorous technique to make diverging resonance
wave functions $\mathcal{L}^2$ integrable. It represents a dilation transformation
and relies on analytic continuation of the Hamiltonian to the complex plane.
Analytic continuation is a mathematical technique to extend the domain over which an
analytic function is defined. Upon scaling all coordinates in a Hamiltonian $H=T+V$ with
$T$ as kinetic energy and $V$ as compact potential\cite{reed82} by a complex number
$e^{i\theta}$, $\theta \in \mathbb{R}$, $\theta < \pi/4$, a non-Hermitian operator $H^\theta$
is obtained with the spectral properties illustrated in the upper panels of Fig. \ref{fig:cs}:
\vspace{-0.1cm}
\begin{itemize}
\item All discrete eigenvalues of $H$, that is, bound-state energies, are also eigenvalues of
$H^\theta$. \vspace{-0.2cm}
\item The continuous spectrum of $H^\theta$ is $\bigcup_{E_t} + a e^{-2i\theta}$, $a \in
\mathbb{R}^+$ where $E_t$ are the thresholds of $H$, that is, in the context of electronic-structure
theory the ionization energies if one deals with a neutral molecule and the detachment energies
for an anion. Note that, for $H^\theta$, there are separate continua that correspond
to the different thresholds and associated decay channels, whereas there is just one continuum
for $H$. \vspace{-0.2cm}
\item $H^\theta$ may have discrete complex eigenvalues in the wedge formed by the continuous
spectra of $H$ and $H^\theta$. These can be associated with the resonances.
\end{itemize}
\begin{figure} \centering
\includegraphics[scale=0.47]{cs1.pdf} \hspace{1cm} \\[0.3cm]
\includegraphics[scale=0.47]{cs2.pdf}
\caption{Upper panels: Eigenvalue spectra of a Hamiltonian $H=T+V$ describing an autoionizing
resonance and of its complex-scaled counterpart $H^\theta$. Lower panels: Eigenvalue spectra of
a Hamiltonian $H=T+V-Fx$ describing ionization in a static electric field and of its complex-scaled
counterpart $H^\theta$. Continua are denoted by $\sim\!\!\sim$, bound states by $\times$, and
resonances by $\otimes$. Note that the potential $V-Fx$ does not support any bound state.}
\label{fig:cs}
\end{figure}
A resonance wave function with the asymptotic behavior $\sim \exp[i k r]$, $k\in \mathbb{C}$
becomes $\mathcal{L}^2$ integrable upon complex scaling if $\theta > \frac{1}{2} \, \text{atan} \, [\Gamma
/ (2 \, (E_R - E_t))]$. Above this critical value, the resonance energies are independent
of $\theta$. Since complex scaling relies on analytic continuation, the original theory\cite{
aguilar71,balslev71} is only applicable to Hamiltonians with dilation-analytic potentials.
Such potentials can be loosely defined as being analytic in the parameter
of the dilation transformation, that is, $e^{i\theta}$ in the case of complex scaling. A rigorous
definition of dilation analyticity is given in Refs. \citenum{aguilar71} and \citenum{reed82}.
Whereas it has been established that the Coulomb potential is dilation analytic, an important
case of a non-analytic potential is $V(x) \sim x$,\cite{reinhardt76,herbst78,herbst79,herbst81,
nicolaides92,scrinzi99} which describes a static electric field in $x$ direction. This implies that
a Hamiltonian for an atom or molecule in a field has other spectral properties;\cite{herbst78,
herbst79,herbst81} these are illustrated in the lower panels of Fig. \ref{fig:cs}: \vspace{-0.1cm}
\begin{itemize}
\item Neither $H$ nor $H^\theta$ have discrete real eigenvalues, that is, bound states. \vspace{-0.2cm}
\item The continuous spectrum of $H$ comprises the whole real axis, whereas the continuous
spectrum of $H^\theta$ is empty. \vspace{-0.2cm}
\item $H^\theta$ may have discrete complex eigenvalues. These can be associated with Stark
resonances. \vspace{-0.2cm}
\end{itemize}
Although $V(x) \sim x$ is not dilation analytic, complex scaling renders Stark resonances, which
asymptotically behave as Airy functions, $\mathcal{L}^2$ integrable.
\subsection{Complex scaling in the context of molecular electronic-structure theory} \label{sec:cs2}
While implementations of complex scaling in which the wave function is represented on a
numerical grid preserve many of the appealing formal properties of the exact theory,\cite{
mccurdy91,scrinzi93,rescigno00,rescigno05} the representation in a basis set of atom-centered
Gaussian functions suffers from several problems:\cite{bravaya13} The discrete eigenvalues
of $H^\theta$ become $\theta$-dependent, whereas only the rotated continua depend on
$\theta$ in the exact theory. This poses a problem for the computation of interstate properties
as two states can depend on $\theta$ in a different way. A further problem arises for Stark
resonances because the continuous spectrum is empty as illustrated in the lower panels
of Fig. \ref{fig:cs}. The projection of $H^\theta$ onto a finite basis set therefore gives rise to
additional unphysical eigenvalues that do not correspond to either bound, resonance, or
continuum states.\cite{jagau16}
The dependence of the bound-state and resonance energies on $\theta$ can be traced back
to an oscillatory behavior $\sim \exp [-i \, Z \, r \sin\theta]$ of the respective wave functions
that is introduced by complex scaling.\cite{chuljian83} Importantly, this dependence of the
wave functions on $\theta$ is also present in exact theory, but since it is difficult to represent
in Gaussian bases, it gives rise to an artificial $\theta$-dependence of the energies in this
case. The magnitude of the perturbation is best quantified by analyzing bound states for
which $\text{Im}(E)$ is strictly zero in exact theory but of the order of $10^{-4} - 10^{-3}$
a.u. in typical calculations in Gaussian basis sets.\cite{bravaya13,jagau16,matz21}
$\theta$-dependence entails the need to find an optimal value, $\theta_\text{opt}$, which
is usually done using the criterion\cite{braendas77,moiseyev78,moiseyev98}
\begin{equation} \label{eq:thopt}
\text{min} \; \left| dE/d\theta \right|
\end{equation}
because this derivative vanishes in exact theory. Eq. \eqref{eq:thopt} works equally well for
temporary anions, core-vacant states, as well as Stark resonances. A step size of $1^\circ$
for $\theta$ is usually adequate to evaluate Eq. \eqref{eq:thopt} to sufficient
accuracy. In addition, $\theta$ is confined to the interval $0 < \theta < \pi/4$ by theory and
varies little among resonances with similar electronic structure and among different
electronic-structure methods. A more pronounced dependence on the basis set is, however,
sometimes observed. In effect, this means that ca. 10--15 calculations need to be performed
to determine $\theta_\text{opt}$ if no information can be inferred from preceding investigations.
This is the main reason for the increased computational cost of complex-scaled calculations.
Further reasons are the use of complex algebra and the need to include
additional diffuse functions in the basis set; standard bases typically yield bad results.\cite{
bravaya13,jagau16,jagau17,matz21} While there are basis sets that have been designed
specifically for resonances,\cite{venkatnathan00,zdanska05,kapralova13} the addition
of a few shells of even-tempered diffuse functions to a standard basis usually works as
well.\cite{bravaya13,jagau17}
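A minimal sketch of locating $\theta_\text{opt}$ via Eq. \eqref{eq:thopt} by finite differences; the $\theta$-trajectory below is a smooth hypothetical stand-in for the complex energies produced by a series of complex-scaled calculations:
\begin{verbatim}
import numpy as np

theta = np.deg2rad(np.arange(5.0, 20.0, 1.0))   # 1-degree scan
# Hypothetical trajectory with a stationary point near 12 degrees.
energies = (-0.52 - 0.010j) \
    + (0.05 - 0.03j)*(theta - np.deg2rad(12.0))**2

dE = np.gradient(energies, theta)        # finite-difference dE/dtheta
i_opt = np.argmin(np.abs(dE))
print(f"theta_opt = {np.rad2deg(theta[i_opt]):.0f} deg, "
      f"E = {energies[i_opt]:.6f} a.u.")
\end{verbatim}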
\subsection{Complex basis functions} \label{sec:cbf}
Complex scaling in its original formulation is limited to atomic resonances, because the
electron-nuclear attraction is not dilation analytic within the Born-Oppenheimer (BO)
approximation.\cite{nhqmbook,jagau17} However, because only the asymptotic behavior of the
resonance wave function matters for the regularization, the non-analyticity can be circumvented
by so-called exterior complex scaling (ECS),\cite{simon79,moiseyev88,rom90} where only the
outer regions of space, where the electron-nuclear attraction vanishes, are complex scaled. ECS
has been implemented for a number of time-dependent approaches in which the wave function
is represented on a numerical grid.\cite{mccurdy91,scrinzi93,rescigno00,mccurdy04,rescigno05,
scrinzi10,tao09,yip08,yip14,majety15,orimo18} Such approaches hold a lot of promise but the
realization of ECS in a basis set of atom-centered Gaussians is difficult because most techniques
for AO integral evaluation are not applicable to the ECS Hamiltonian.
To implement ECS in the context of Gaussian basis sets, one can exploit the identity
\begin{equation} \label{eq:cbf1}
E = \frac{\langle \Psi(r) | H^\theta(r \, e^{i\theta}) | \Psi(r) \rangle}{\langle \Psi(r) | \Psi(r) \rangle} =
\frac{\langle \Psi(r \, e^{-i \theta}) | H(r) | \Psi(r \, e^{-i \theta}) \rangle}{\langle \Psi(r \, e^{-i \theta}) |
\Psi(r \, e^{-i \theta}) \rangle}~,
\end{equation}
that is, scaling the coordinates of the Hamiltonian as $r \to r e^{i \theta}$ is equivalent to scaling
the basis in which the Hamiltonian is represented as $r \to r e^{-i \theta}$.\cite{mccurdy78,
moiseyev79} The right-hand side of Eq. \eqref{eq:cbf1} is equivalent to scaling the exponents
of Gaussian basis functions by $e^{- 2 i \theta}$.\cite{mccurdy78} If one does
not apply this procedure to all basis functions but rather adds to a given basis set a number of
extra functions with complex-scaled exponents,
\begin{equation} \label{eq:cbf2}
\chi_\mu(r, A) = N_\mu(\theta) \, S_\mu(r_A) \, \text{exp} \big[ -\alpha_\mu\, e^{-2i\theta} \, r_A^2 \big]~,
\end{equation}
a basis-set representation of ECS is obtained. Alternatively, the functions from Eq. \eqref{eq:cbf2}
can be interpreted as being centered not at the nuclei but in the complex plane. The so-defined
complex basis function (CBF) method\cite{mccurdy78,rescigno80,mccurdy80,honigmann99,
honigmann06,honigmann10,white15,white15b,white17} is compatible with the BO approximation;
its main technical challenge consists in the need to evaluate integrals over the non-standard
basis functions from Eq. \eqref{eq:cbf2}.\cite{white15} Whereas most of the established
techniques for AO integral evaluation apply to complex exponents, several changes are
necessary for the computation of the Boys function, which forms the first step of the
evaluation of the electron-repulsion integrals; an efficient implementation was introduced
only recently.\cite{white15}
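The effect of the scaled exponents on the integrals can be illustrated with the overlap of two s-type Gaussians on a common center, which remains analytic for complex exponents as long as $\text{Re}(\alpha + \beta) > 0$. A minimal sketch (the exponents and the angle are illustrative):
\begin{verbatim}
import numpy as np

def s_overlap(alpha, beta):
    """Unnormalized overlap of two s-type Gaussians on the same center,
    (pi/(alpha + beta))**1.5; valid for Re(alpha + beta) > 0."""
    return (np.pi / (alpha + beta))**1.5

theta = np.deg2rad(20.0)     # illustrative scaling angle
scale = np.exp(-2j*theta)    # exponent scaling factor exp(-2i*theta)
alpha, beta = 0.8, 0.25      # illustrative exponents

print("real exponents: ", s_overlap(alpha, beta))
print("complex-scaled: ", s_overlap(alpha*scale, beta*scale))
\end{verbatim}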
Besides the applicability to molecules, CBF methods offer further advantages over complex
scaling of the Hamiltonian: $\text{Im}(E)$ of bound states is typically much smaller (ca. $10^{-5}$
a.u.), indicating a better representation of the complex-scaled wave function. Also, while an
optimal scaling angle $\theta_\text{opt}$ needs to be determined using Eq. \eqref{eq:thopt}, the
values of $dE/d\theta$ are significantly smaller than when the Hamiltonian is complex-scaled.
The requirements of CBF calculations towards the basis set are, in general, somewhat less
arduous than those of complex scaling, although it is necessary to augment standard bases
by extra functions. The details depend on the type of resonance and the energy released in
the decay process.\cite{white15,white15b,white17,jagau18,matz21}
\subsection{Formal aspects of complex absorbing potentials} \label{sec:cap}
Complex absorbing potentials (CAPs) have been used for long in quantum dynamics
as a means to prevent artificial reflection of a wave packet at the edge
of a grid.\cite{muga04} By accident, it was discovered that CAPs are also useful in static
electronic-structure calculations.\cite{jolicard85} By absorbing the diverging tail of the
wave function, a CAP enables an $\mathcal{L}^2$ treatment of resonances similar to
complex-scaled approaches. More thorough analysis showed that CAPs are, under
certain conditions, mathematically equivalent to complex scaling.\cite{riss93,riss95,riss98,
moiseyev98b} If this is the case, exact resonance positions and widths can
be recovered from CAP calculations. Still, in practice, CAP methods feature more heuristic
aspects than complex-scaled methods as will be detailed in this section and the subsequent
one. At the same time, CAPs are easier to integrate into molecular electronic-structure
theory.\cite{jagau17,zuev14} In particular, no difficulties arise about the combination with
the BO approximation so that CAP methods can be readily applied to molecules.
The CAP-augmented Hamiltonian is given as
\begin{equation} \label{eq:cap1}
H^\eta = H - i \; \eta \; W
\end{equation}
with $\eta \in \mathbb{R}^+$ as strength parameter. Different functional forms have been
suggested for $W$,\cite{riss93,riss95,riss98,moiseyev98b,rom91,sajeev06,sommerfeld01,
ghosh12,zuev14,sommerfeld15,sommerfeld98}; in the simplest case $W$ is chosen as a
real-valued quadratic potential of the form
\begin{equation} \label{eq:cap2}
W = \!\! \sum_{\alpha=x,y,z} \!\!W_\alpha\, , ~\; W_\alpha = \begin{cases} (|r_\alpha - o_\alpha|
- r_\alpha^0)^2 &\text{if} ~ |r_\alpha - o_\alpha| > r_\alpha^0 \\ 0 &\text{if} ~ |r_\alpha
- o_\alpha| \leq r_\alpha^0 \end{cases}~.
\end{equation}
Here, the vector $(r_x^0, r_y^0, r_z^0)$ defines the onset of the CAP and hence a cuboid
box in which the CAP is not active and the vector $(o_x, o_y, o_z)$ is the origin of the CAP.
$r_\alpha^0$ and $o_\alpha$ are heuristic parameters, protocols for their determination are
discussed in Sec. \ref{sec:capbas}.
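For orientation, the box potential of Eq. \eqref{eq:cap2} can be evaluated pointwise in a few
lines; the following Python sketch is purely illustrative.
\begin{verbatim}
import numpy as np

# Sketch: pointwise evaluation of the box CAP of Eq. (cap2).
# r is a Cartesian point, onset = (rx0, ry0, rz0), origin = (ox, oy, oz).
def box_cap(r, onset, origin):
    d = np.abs(np.asarray(r) - np.asarray(origin)) - np.asarray(onset)
    return np.sum(np.where(d > 0.0, d**2, 0.0))
\end{verbatim}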
A suitable starting point for the mathematical analysis of $H^\eta$ is a free particle in one
dimension exposed to a CAP with onset $r_x^0 = 0$.\cite{riss93,santra06} In this case, $H$
comprises only kinetic energy and the operator
\begin{equation} \label{eq:cap3}
H^\eta = - \frac{1}{2} \frac{\partial^2}{\partial x^2} - i \, \eta \, x^2
\end{equation}
describes a harmonic oscillator with frequency $\sqrt{\eta} \, (1-i) = \sqrt{2\eta} \, \text{exp}
[-i \pi/4]$ on a complex-rotated length scale $(2 \, \eta)^{-1/4} \, \text{exp} [i \, \pi/8]$. The
spectrum of eigenvalues is $E_n = \sqrt{\eta} (1-i) \, (n+1/2), \; n \in \mathbb{N}$, that is, it is
rotated into the lower half of the complex energy plane by an angle of $2\theta = \pi/4$,
mimicking complex scaling with $\theta = \pi/8$. At the same time, the spectrum
of $H^\eta$ is purely discrete for finite $\eta$; the continuous spectrum of the complex-scaled
Hamiltonian (see upper panels of Fig. \ref{fig:cs}) is recovered only in the limit $\eta \to 0^+$.
By similar considerations it is possible to show that general monomial CAPs $W\sim r^n$
correspond to complex scaling with an angle $\theta = \pi/(2(n+2))$.\cite{riss93} Inclusion of
a compact potential in $H^\eta$ does not fundamentally change this analysis.
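This one-dimensional model is simple enough to be checked numerically. The following Python
sketch diagonalizes a finite-difference representation of Eq. \eqref{eq:cap3} and compares the
lowest eigenvalues to the analytic result; the grid parameters are hypothetical and chosen only
for illustration.
\begin{verbatim}
import numpy as np

# Sketch: spectrum of H = -1/2 d^2/dx^2 - i*eta*x^2 on a grid, compared
# to the analytic eigenvalues E_n = sqrt(eta)*(1-1j)*(n+1/2).
eta, n_pts, box = 0.01, 801, 40.0      # hypothetical parameters
x = np.linspace(-box/2, box/2, n_pts)
h = x[1] - x[0]
T = (np.diag(np.full(n_pts, 1.0))      # finite-difference kinetic energy
     - 0.5*np.diag(np.ones(n_pts-1),  1)
     - 0.5*np.diag(np.ones(n_pts-1), -1)) / h**2
H = T - 1j * eta * np.diag(x**2)
E = np.sort_complex(np.linalg.eigvals(H))
print(E[:3])
print([np.sqrt(eta)*(1-1j)*(n+0.5) for n in range(3)])
\end{verbatim}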
CAPs of a more elaborate form than Eq. \eqref{eq:cap2} have been investigated as well.\cite{
riss95,riss98,moiseyev98,rom91,sajeev06} One guiding principle has been to design potentials
that are equivalent to complex scaling not only in the limit $\eta \to 0^+$ but also for finite values
of $\eta$. As a first step, one can choose $\eta$ to be energy-dependent because it can be
shown that reflection and absorption properties depend on the energy as well. Applying the
ansatz $\eta(E) = 2 \, a \, (E-V)$ to Eq. \eqref{eq:cap1} leads to a modified Schr\"odinger
equation\cite{riss95}
\begin{equation} \label{eq:tcap1}
T |\Psi(x) \rangle + V(x) \, (1 + 2i \, a \, W(x)) | \Psi(x) \rangle = E (1 + 2i \, a \, W(x) )
| \Psi(x) \rangle
\end{equation}
which can be recast in the usual form by introducing the function $f = \sqrt{1 + 2i\, a \, W}$.
This results in
\begin{equation} \label{eq:tcap2}
f^{-1} \, T \, f^{-1} \, | \widetilde{\Psi}(x) \rangle + V(x) | \widetilde{\Psi}(x) \rangle
= E | \widetilde{\Psi}(x) \rangle
\end{equation}
with $| \widetilde{\Psi}(x) \rangle = f \, |\Psi(x) \rangle$ as transformed wave function. Eq.
\eqref{eq:tcap2}, which features a modified kinetic energy, illustrates the connection between
CAPs and ECS; it can be shown that the function $f$ defines a scaling contour in the complex
plane. Since the theory of complex scaling places only few restrictions on the scaling contour,\cite{
nhqmbook,moiseyev98} that is, the path in the complex plane along which the
coordinates are scaled, diverse functional forms of $W$ are possible. This connection between
CAPs and ECS has triggered the development of the \textit{transformative} CAP\cite{riss95,
riss98} (TCAP) and of several \textit{reflection-free} CAPs\cite{riss95,riss98,moiseyev98b,
sajeev06} all of which are built around a transformed kinetic energy. The TCAP has been
implemented for a few electronic-structure methods,\cite{sommerfeld98} which showed that,
despite formal advantages over Eq. \eqref{eq:cap2}, the results still suffer from the same
shortcomings when using atom-centered Gaussian basis sets. Most recent developments
are thus built on the simpler functional form from Eq. \eqref{eq:cap2}.
\subsection{Complex absorbing potentials in the context of molecular electronic-structure
theory} \label{sec:capbas}
While the limit $\eta \to 0^+$ can be taken if the Schr\"odinger equation is solved exactly,
this is not the case when $H^\eta$ is represented in a finite basis set and
the Schr\"odinger equation is solved approximately.\cite{riss93} Instead, one has to use
finite $\eta$ values and balance out two effects: The error caused by the finite CAP strength,
which increases with $\eta$, and an additional error $\Delta E_\text{bas}(\eta)$ caused by
the finite basis set, which decreases with $\eta$. The optimal value $\eta_\text{opt}$ is
usually determined by a Taylor expansion of the resonance energy in $\eta$
\begin{equation} \label{eq:capbas}
E(\eta) = E_0 + a_1 \eta + a_2 \eta^2 + a_3 \eta^3 + \dots + \Delta E_\text{bas} (\eta)
\end{equation}
and minimizing the linear term, which leads to the criterion\cite{riss93}
\begin{equation} \label{eq:capopt}
\text{min} \, | \eta dE / d\eta |~.
\end{equation}
In analogy to complex-scaled methods, this entails the need to calculate trajectories $E(\eta)$.
However, CAP theory does not place an upper limit on $\eta$, whereas the complex scaling
angle is restricted to $0 < \theta < \pi/4$. In practice, $\eta_\text{opt}$ varies from $10^{-4}$
to $10^{-1}$ a.u. between different calculations depending on the completeness
of the basis set and the choice of onset $(r_x^0, r_y^0, r_z^0)$ and origin $(o_x, o_y, o_z)$
so that, typically, between 20 and 50 calculations need to be run to determine the optimal CAP
strength.\cite{jagau17,zuev14,zuev14e}
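Schematically, the trajectory analysis can be organized as in the following Python sketch,
in which resonance_energy is a placeholder for a full CAP calculation returning the complex
energy at a given CAP strength; the grid bounds mirror the range of $\eta_\text{opt}$ values
quoted above.
\begin{verbatim}
import numpy as np

# Sketch of an eta-trajectory analysis according to Eq. (capopt).
def optimal_eta(resonance_energy, eta_min=1e-5, eta_max=1e-1, n=30):
    etas = np.logspace(np.log10(eta_min), np.log10(eta_max), n)
    E = np.array([resonance_energy(eta) for eta in etas])
    dE = np.gradient(E, etas)                    # finite-difference dE/deta
    k = np.argmin(np.abs(etas * dE)[1:-1]) + 1   # skip the end points
    return etas[k], E[k]
\end{verbatim}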
For temporary anions, a possible choice for the onset that avoids optimization on a case-by-case
basis consists in $\sqrt{\langle X^2 \rangle}$, $\sqrt{\langle Y^2 \rangle}$, $\sqrt{\langle Z^2 \rangle}$, where the expectation values are
computed for the neutral ground state.\cite{zuev14,zuev14e,jagau14b,jagau14e} This approach
aims to minimize the perturbation of the neutral system and to apply the CAP
only to the extra electron. It is, however, not straightforward to generalize the idea to other types
of resonances beyond temporary anions. As an alternative, the definition of the CAP onset on the
basis of Voronoi cells has been suggested.\cite{sommerfeld15,ehara16} This is advantageous for
larger molecules of irregular shape where the definition of a CAP according to Eq. \eqref{eq:cap2}
is questionable. In other cases, Voronoi CAPs and box-shaped CAPs deliver very similar results.
The impact of the CAP origin has been investigated less thoroughly; it is usually chosen as the
center of mass or the center of nuclear charges of a molecule.\cite{zuev14,benda17,benda18a}
While the integrals $W_{\mu\nu} = \langle \chi_\mu | W | \chi_\nu \rangle$ of a box-shaped
CAP defined according to Eq. \eqref{eq:cap2} over Gaussian functions can be calculated
analytically,\cite{santra01} the corresponding integrals of a Voronoi CAP need to be evaluated
numerically.\cite{sommerfeld15} However, the evaluation of integrals over arbitrary potentials
is a standard task in density functional theory and adapting such functionalities to the evaluation
of CAPs is straightforward.\cite{zuev14} In general, the evaluation of $W_{\mu\nu}$ does not
drive the computational cost.
In analogy to complex-scaled and CBF calculations, additional shells of diffuse functions
need to be added to a basis set to represent the resonance wave function properly in a
CAP calculation. The same basis sets usually work as well but it is sometimes possible to
truncate the basis set based on physical considerations when using CAPs.\cite{jagau17,
zuev14} For example, temporary anions that arise from adding an electron to a $\pi^*$
orbital can often be described with a basis set that includes only additional $p$ functions.
However, a general observation is that the use of too small basis sets in CAP calculations
gives rise to additional spurious minima in Eq. \eqref{eq:capopt} that do not correspond to
physical resonance states.\cite{jagau14b,zuev14,jagau17}
Several ideas have been proposed to reduce the dependence of the resonance energy on
the CAP parameters. One can, for example, compute the linear term in Eq. \eqref{eq:capbas}
explicitly and subtract it from the energy.\cite{riss93,jagau14b} This leads to the first-order
corrected energy
\begin{equation} \label{eq:capcorr}
E^{(1)} (\eta) = E(\eta) - \eta \, dE/d\eta = E + i \, \eta \, \langle W \rangle~,
\end{equation}
where the correction term is given as the expectation value of the CAP operator.
$\eta_\text{opt}$ can be determined for Eq. \eqref{eq:capcorr} by minimizing the
next-higher order term in Eq. \eqref{eq:capbas}, which leads to the criterion $\text{min}
|\eta \, dE^{(1)}/d\eta|$. However, the correction leads to an increased basis-set error
$\Delta E_\text{bas}(\eta)$ so that it is not guaranteed that $E^{(1)}$ represents an
improvement over the uncorrected energy $E$.\cite{riss93} In practice, this can be tested
by comparing the values of $|\eta \, dE/d\eta|$ and $|\eta \, dE^{(1)}/d\eta|$, which shows
that, in general, Eq. \eqref{eq:capcorr} does represent an improvement. The first-order
corrected energy is less dependent on $\eta$ and the CAP onset than the uncorrected
energy.\cite{jagau14b,jagau14e,jagau17}
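Given a computed trajectory, the correction of Eq. \eqref{eq:capcorr} can be applied in
post-processing, for example as in this minimal sketch based on finite-difference derivatives:
\begin{verbatim}
import numpy as np

# Sketch: first-order corrected energies along an eta trajectory.
# etas and E are arrays from a previous trajectory calculation.
def first_order_correction(etas, E):
    E1 = E - etas * np.gradient(E, etas)     # E(1) = E - eta*dE/deta
    speed = np.abs(etas * np.gradient(E1, etas))
    k = np.argmin(speed[1:-1]) + 1
    return etas[k], E1[k]
\end{verbatim}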
As an alternative to the Taylor expansion in Eq. \eqref{eq:capbas}, it has been suggested
to express $E(\eta)$ in terms of Pad\'e approximants.\cite{lefebvre05,landau16,landau19}
This makes it possible to recover the non-analytic limit $\eta \to 0^+$ from a series of calculations
with different $\eta$ values; however, also with this approach the dependence of the energy on
$\eta$ is not removed completely.
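As a minimal illustration, the lowest-order $[1/1]$ approximant can be fitted to a trajectory
by linear least squares as sketched below; the cited works employ more elaborate approximants,
so this is only meant to convey the idea.
\begin{verbatim}
import numpy as np

# Sketch: fit E(eta) ~ (a0 + a1*eta)/(1 + b1*eta) and extrapolate to
# eta -> 0+, where the limit is simply a0.
def pade_11_extrapolation(etas, E):
    A = np.column_stack([np.ones_like(E), etas.astype(complex),
                         -etas * E])
    a0, a1, b1 = np.linalg.lstsq(A, E, rcond=None)[0]
    return a0
\end{verbatim}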
Recently, the integration of CAPs into Feshbach-Fano theory has been suggested\cite{
kunitsa19} in order to define the projector needed there for separating bound and
continuum parts of the wave function. In analogy to other approaches based on this
theory, the computational cost is lower as compared to complex-variable methods
because the Schr\"odinger equation is solved with a real-variable electronic-structure
method and the CAP is added afterwards. However, only very few applications of this
approach have been reported so far.\cite{kunitsa19}
\subsection{Practical aspects of complex-variable electronic-structure calculations} \label{sec:elst}
The choice of electronic-structure model is of central importance for the accurate
description of molecular resonances. Complex scaling and complex absorbing potentials
have been combined with a variety of methods; implementations of Hartree-Fock
(HF),\cite{rescigno80,mccurdy80,white15b} density functional theory (DFT),\cite{
whitenack11,whitenack12,zhou12} multireference configuration interaction (MRCI),\cite{
sommerfeld98,sommerfeld01,honigmann06,honigmann10} resolution-of-the-identity
second-order M{\o}ller-Plesset perturbation theory (RI-MP2),\cite{hernandez19,
hernandez20} extended multiconfigurational quasidegenerate perturbation theory
(XMCQDPT2),\cite{kunitsa17} algebraic diagrammatic construction (ADC),\cite{santra02,
feuerbacher03,belogolova21,dempwolff21} symmetry-adapted-cluster configuration
interaction (SAC-CI),\cite{ehara12} multireference coupled-cluster (MRCC),\cite{sajeev05}
and equation-of-motion (EOM)-CC methods\cite{ghosh12,bravaya13,zuev14,white17}
have been reported. Since there exist different ways to set up a complex-variable calculation,
the comparison of results obtained using different implementations is not straightforward.
Moreover, many approaches are specifically designed for the treatment of a particular
type of resonance, i.e., only suitable for temporary radical anions, core-vacant states, or
molecules in static electric fields. This has led to a situation where many resonances have
been investigated with only one or two computational approaches. Systematic benchmarks,
where the same resonances are computed using the same basis sets and complex-variable
techniques but different electronic-structure models, are still largely missing. The same
applies to the numerical comparison of CAP and complex-scaling calculations based
on the same electronic-structure model.
A first requirement towards a complex-variable electronic-structure model is that the wave
function comprises configurations that describe the decay of the state. This is usually easy
to achieve for shape resonances but requires more consideration in the case of Feshbach
resonances. For example, the configurations describing Auger decay of a core-ionized state
are doubly excited with respect to the core-vacant HF determinant (see Fig. \ref{fig:auger});
as a consequence, the decay cannot be described with HF theory.
A second requirement is to achieve a consistent description of the resonance itself and its
decay channels.\cite{jagau14} Multistate methods such as EOM-CC,\cite{ccbook,krylov08,
sneskov12,bartlett12,emrich81,sekino84,stanton93,nooijen93,stanton94} CC2,\cite{
christiansen95,haettig00,haettig06} and ADC(n),\cite{dreuw15,schirmer82,trofimov95,
trofimov99} which deliver wave functions for states with $N$ and $N+1$ electrons within
the same computation, have a built-in advantage because they place the ionization
continua such that no ambiguity over the character of a state arises. In contrast, if one
performs, for example, two separate HF calculations for a temporary radical anion and
the corresponding neutral molecule, it can happen that the decay width of the anion is
zero although its HF energy is higher than that of the neutral species. This is further
discussed in the context of analytic-gradient theory in Sec. \ref{sec:grad}.
Another advantage of multistate methods is the straightforward evaluation
of transition moments as well as Dyson orbitals\cite{jagau16b} and natural transition orbitals
(NTOs).\cite{skomorowski18} These quantities are equally relevant for resonances as for
bound states;\cite{krylov20} for example, they enable the application of exciton
theory to resonances, which helps explain spectroscopic signatures of resonances.\cite{
skomorowski18} As a consequence of Eq. \eqref{eq:cprod}, the real and imaginary parts
of orbitals need to be interpreted separately in the context of complex-variable methods.
A representative example of an NTO pair is shown in Fig. \ref{fig:orbs}.
\begin{figure} \centering
\includegraphics[scale=1.25]{orbitals.pdf}
\caption{Real and imaginary NTOs and their corresponding singular values
$\sigma_K$ for the $^3\Pi$ state of C$_3$N$^-$ computed with CAP-EOM-EE-CCSD.
Reproduced with permission from Ref. \citenum{skomorowski18}.}
\label{fig:orbs}
\end{figure}
A further advantage of multistate methods is technical but crucial for some
types of states: It is often possible to avoid solving the SCF equations for the resonance.
Especially for temporary radical anions, it can be very difficult to ensure convergence to
the resonance state instead of a pseudocontinuum solution. For other types of resonances,
however, this is less problematic. For example, CBF-HF determinants for a core-vacant
state or a molecule in a static electric field can be easily constructed using maximum-overlap
techniques\cite{gilbert08} starting from a real-valued core-vacant HF determinant or a field-free
HF determinant, respectively.\cite{jagau16,jagau18,matz21}
It also needs to be mentioned here that the strengths and weaknesses that have been established
for a particular electronic-structure method in the context of bound states are still relevant for
resonances. For example, because many resonances have open-shell character, a spin-complete
description as afforded by multistate methods such as EOM-CC and ADC is advantageous.
From these considerations, a number of computational approaches arise that combine several
advantages for different types of resonances: For temporary radical anions, EOM-EA-CC within
the singles and doubles approximation (EOM-EA-CCSD) based on a CCSD
reference for the neutral closed-shell state is well suited or, alternatively, the EA variant of ADC(2)
or ADC(3) starting from an MP2 or MP3 reference. Likewise, excited states of closed-shell anions,
superexcited Rydberg states, and core-excited states are best described with the EE variants
of the same methods, while the IP variants are suitable for core-ionized states.
For other types of states, a fully satisfactory description is more difficult to achieve: For example,
in the case of closed-shell dianions, one can construct a CBF-HF or CAP-HF determinant for
the resonance, treat correlation by means of CCSD, and describe the detachment channels as
EOM-IP-CCSD states.\cite{gulania19} This delivers a balanced and spin-complete description
of all relevant states but the bound monoanionic states, which are described using orbitals of
the resonance state, will likely have substantial non-zero imaginary energies. The alternative
approach, where one starts from an open-shell HF determinant for one of the monoanionic
decay channels, avoids the latter problem but delivers a description that is neither balanced
nor spin-complete. A similar problem exists for Stark resonances where the Hamiltonian does
not feature any bound states; it is thus inevitable to solve the SCF equations directly for the
resonance.\cite{jagau16,jagau18}
In post-HF CAP methods, a further degree of freedom arises because the CAP does not
need to be added to the Hamiltonian from the outset. For example, in a CAP-EOM-CC
treatment, the CAP can be introduced already in the HF calculation,\cite{zuev14,zuev14e}
or at the CC step, or even only at the EOM-CC step.\cite{ghosh12} No such choice exists
for CBF methods and one needs to work with a complex-valued wave function throughout.
In the full CI limit, all three variants of CAP-EOM-CC deliver identical results but this is not
the case for truncated CC methods. The distinction between relaxed and unrelaxed (EOM-)
CC molecular properties\cite{gauss00} is related to this subject and, similar to what applies
there, the numerical differences between the three CAP-EOM-CC variants are usually small.
Also, there is no clear formal advantage of one scheme over the other: If the CAP is active
already at the HF level, the form of the CC and EOM-CC equations does not change as
compared to the real-valued formalism. Also, the size-extensivity of truncated CC methods
is preserved and it is straightforward to work out analytic-derivative theory (see Sec.
\ref{sec:grad}). The main advantage of the alternative variant in which the CAP-EOM-CC
Hamiltonian is built from a real-valued CC wave function is its reduced computational cost.
Only the EOM-CC equations have to be solved for multiple values of the CAP strength to
evaluate Eq. \eqref{eq:capopt}, the CC equations for the reference state need to be solved
only once. However, these methods are not size-intensive and it is necessary to include
additional terms in the EOM-CC equations because the cluster operator does not fulfill the
CC equations at $\eta\neq 0$.
As a further idea in the context of temporary radical anions, it has been suggested to
project the CAP on the virtual orbital space.\cite{santra99} The rationale is to minimize
perturbation of the occupied orbital space and to apply the CAP only to the extra electron.
This has been realized for CI and EOM-CC methods;\cite{santra99,ghosh12,jagau17}
numerical evidence suggests that the impact on the results is small.
Finally, it should be mentioned that there is some ambiguity in the evaluation of Eqs.
\eqref{eq:thopt} and \eqref{eq:capopt} as one can search for minima in the total resonance
energy or in the energy difference with respect to some bound parent state.\cite{bravaya13,
white17} If the Schr\"odinger equation was solved exactly, this would not matter because
bound-state energies do not depend on the complex scaling angle or the CAP strength
in that case. However, it can matter for approximate solutions, in particular for calculations
with a complex-scaled Hamiltonian where bound states acquire substantial imaginary
energies (see Sec. \ref{sec:cs2}). For CBF calculations, the impact is substantially smaller.
Numerical experience shows that Eq. \eqref{eq:thopt} is best applied to energy differences
here.\cite{white17,jagau18,matz21} For CAP calculations, the differences are usually
negligible.
\section{Recent methodological developments} \label{sec:dev}
\subsection{Complex-valued potential energy surfaces and analytic gradient theory} \label{sec:grad}
The concept of potential energy surfaces (PES) is a cornerstone of the quantum chemistry of bound
states. By virtue of the BO approximation, the electronic energy is obtained as a function of the
nuclear coordinates by solving the electronic Schr\"odinger equation at fixed nuclear positions.
The nuclear dynamics are then described in terms of these PES.
Consideration of nuclear motion is equally important for molecular electronic resonances because it
happens in many cases on the same timescale as electronic decay. As an example,
consider DEA to a closed-shell molecule. The cross section for this process is determined by the
interplay of nuclear motion and electronic decay.\cite{fabrikant17} Moreover, there are important
cases of DEA where two electronic resonances are coupled through nuclear motion\cite{estrada86,
feuerbacher04}. Aside from DEA, there are anions whose autodetachment is entirely due to nuclear
motion,\cite{simons81,simons99,simons02} for example NH$^-$\cite{chalasinski88} and enolates,\cite{
oneal88} as well as other anions that are adiabatically bound only when zero-point vibrational energies
are taken into account, for example benzonitrile.\cite{gulania20} Vibrational effects hence play a
decisive role for the spectroscopy of temporary anions,\cite{simons11} but also for other types of
resonances such as core-vacant states.\cite{norman18}
Using the Siegert representation, Eq. \eqref{eq:sieg1}, the concept of PES can be readily generalized
to resonances: The real part of the PES is interpreted in analogy to bound states,
whereas the imaginary part yields the decay width as a function of the molecular structure.\cite{
moiseyev17} Complex-valued resonance PES can be as diverse as those of bound states but the
former are in general much less well characterized than the latter. Fig. \ref{fig:cpes} illustrates several
typical PES shapes for temporary radical anions. These resonances often become stable towards
electron loss through structural changes, typically through bond stretching.\cite{jagau17} However,
such stabilization does not pertain to most other types of resonances: A molecule in a static electric
field does not have any bound states so that the electronic energy remains complex-valued at all
nuclear configurations. The same applies to core-vacant states, which also do not become bound
through structural rearrangement.
\begin{figure} \centering
\includegraphics[scale=0.38]{cpes1.pdf} \hspace{0.8cm}
\includegraphics[scale=0.38]{cpes4.pdf} \\
\includegraphics[scale=0.38]{cpes2.pdf} \hspace{0.8cm}
\includegraphics[scale=0.38]{cpes3.pdf}
\caption{Exemplary shapes of potential energy surfaces of temporary anions in one dimension.
Dashed red lines denote anionic states, solid black curves denote neutral states. Upper left: The
anion is unbound but vertically stable at stretched bond lengths. (Example: N$_2^-$) Lower left:
The anion is vertically stable at its own equilibrium structure but adiabatically
unbound (Example: CO$_2^-$) Upper right: The anion is adiabatically bound but
vertically unstable near the equilibrium structure of the parent neutral state. (Example: F$_2^-$)
Lower right: Dissociative electron attachment is possible and proceeds through coupling of two
resonances. The exceptional point is marked by a circle. (Example: HCN$^-$)}
\label{fig:cpes}
\end{figure}
Plots such as those in Fig. \ref{fig:cpes} can be exact only for diatomic molecules. An $N$-atomic
molecule has $M=3N-6$ internal nuclear degrees of freedom ($3N-5$ for linear molecules); the
PES is thus a high-dimensional object, and plots such as those in Fig. \ref{fig:cpes} represent
mere cuts through it.
Although low-dimensional models can afford meaningful insights, it is desirable to treat all $M$
degrees of freedom on an equal footing as is routinely possible for bound states.
Bound-state PES are commonly characterized in terms of special points such as equilibrium
structures, transition structures, conical intersection seams, and minimum
energy crossing points (MECPs).\cite{matsika21,koeppel84,domcke11} Analogous special
points on complex-valued PES are of interest for resonances as well. Equilibrium structures
are characterized by $dE_R/d\mathbf{R} = 0$ and positive real parts of all eigenvalues of the
Hessian matrix; $\Gamma$ is not relevant here.
Crossing seams between a temporary anion and its parent neutral state mark
the region where the anion becomes stable towards electron loss. Since the two states involved
in such a crossing have a different number of electrons, the Hamiltonian does not couple
them and the crossing seam has the dimension $M-1$. If $E_R$ and $E_0$ are obtained
as eigenvalues of the same Hamiltonian, as is possible, for example, with EOM-CC or ADC
methods, $\Gamma$ in principle becomes zero exactly at this crossing seam,\cite{jagau14}
which is not the case if $E_R$ and $E_0$ are computed independently of each other. However,
since very small decay widths are difficult to represent with CAP methods, deviations are
observed in practice also for multistate methods.\cite{benda18b}
The crossing seam between a temporary anion and its parent state can be related to the
stability of a molecule towards low-energy electrons and the efficiency of DEA
in a similar way in which the location and topology of a conical intersection
explain photostability. Along the seam, the MECP is of particular interest, which motivated
the development of an algorithm for locating it.\cite{benda18b} In a straightforward extension
of a similar algorithm for bound states,\cite{bearpark94} this can be done
using the condition\cite{benda18b}
\begin{equation} \label{eq:mecp}
d/d\mathbf{R} \Big[E_R - E_0 \Big]^2 = 0
\end{equation}
and at the same time minimizing the energy of one of the states in the space orthogonal to
the crossing seam.
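One simple way to realize Eq. \eqref{eq:mecp} numerically is a penalty-function formulation,
as in the Python sketch below; this is a simplification for the purpose of illustration, whereas
the algorithm of Ref. \citenum{benda18b} is based on gradient projection. The function
energies is a placeholder for a CAP electronic-structure calculation that returns the complex
energies and gradients of both states.
\begin{verbatim}
import numpy as np

# Sketch: penalty objective for a minimum-energy crossing point.
# sigma is a hypothetical penalty strength; the returned value and
# gradient can be passed to any quasi-Newton minimizer.
def mecp_objective(R, energies, sigma=10.0):
    (ER, E0), (gR, g0) = energies(R)
    gap = np.real(ER - E0)
    f = np.real(ER) + sigma * gap**2
    grad = np.real(gR) + 2.0 * sigma * gap * np.real(gR - g0)
    return f, grad
\end{verbatim}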
Also of interest are crossing seams between two resonances. The intersection
space where the real and imaginary parts of the energies are degenerate is termed exceptional
point (EP).\cite{kato66,heiss12} As for conical intersections, the dimension of
an EP seam is $M-2$,\cite{kato66,benda18c} which implies that diatomic molecules cannot
feature EPs. More general intersection spaces, where only the real \textit{or} imaginary parts
of the energies are degenerate, can also be defined. The analogy between EPs and
conical intersections does not reach very far: Although the dimension of the intersection
space is the same, most other properties are fundamentally different. In particular, a non-Hermitian
Hamiltonian is defective at an EP, which is not the case for a Hermitian Hamiltonian at a
conical intersection.\cite{heiss12}
Although the role of EPs for molecular electronic resonances has not yet been investigated in
a systematic manner,\cite{hazi83,estrada86,feuerbacher04,royal04,haxton05,benda18c} it is
clear that they are relevant to all processes that involve two coupled resonances. This is, for
example, the case for DEA to unsaturated halogenated compounds, which is
presumed to lead initially to a $\pi^*$ resonance that is coupled to a $\sigma^*$ resonance
whose PES is dissociative.\cite{fabrikant17,feuerbacher04,stricklett86,aflatooni10,benda18c}
In analogy to bound states, non-adiabatic transitions are most likely to occur at the EP seam
and especially near minimum-energy exceptional points (MEEPs). This motivated the
development of an algorithm for locating MEEPs. This is again based on a
generalization of the bound-state algorithm\cite{bearpark94} and uses the condition\cite{benda18c}
\begin{equation} \label{eq:meep}
d/d\mathbf{R} \Big[ (E_{R1} - E_{R2} )^2 + 1/4 \cdot (\Gamma_1 - \Gamma_2)^2 \Big] = 0~,
\end{equation}
where $E_{R1}$ and $E_{R2}$ are the positions of the two resonances and $\Gamma_1$
and $\Gamma_2$ the corresponding widths. In addition to Eq. \eqref{eq:meep}, $E_{R1}$
or $E_{R2}$ needs to be minimized in the space orthogonal to the EP seam.
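In terms of the complex Siegert energies, the bracket in Eq. \eqref{eq:meep} is simply
$|E_1 - E_2|^2$, which suggests a compact penalty term analogous to the MECP case; a
hypothetical sketch:
\begin{verbatim}
import numpy as np

# Sketch: penalty term for a minimum-energy exceptional point. E1, E2
# are Siegert energies E = E_R - 1j*Gamma/2, so that |E1 - E2|**2
# equals the bracket of Eq. (meep); sigma is a hypothetical strength.
def meep_penalty(E1, E2, sigma=10.0):
    return sigma * np.abs(E1 - E2)**2
\end{verbatim}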
As a numerical example, Fig. \ref{fig:hcn} displays the PES of two resonances of HCN$^-$
near their MEEP. This shows that CAP-EOM-CCSD describes the topology of EPs consistent
with analytical models:\cite{estrada86,feuerbacher04} Square-root energy gaps are separately
observed for the real and imaginary parts of the energy in the branching plane. Notably, standard
EOM-CCSD yields a flawed description of conical intersections, where the
dimensionality of the intersection space is incorrect;\cite{koehn07,kjonstad17a,kjonstad17b}
this difference between EOM-CCSD and CAP-EOM-CCSD can be traced back to fundamental
differences between Hermitian and non-Hermitian operators.\cite{benda18c}
\begin{figure} \centering
\includegraphics[scale=0.505]{hcn.pdf}
\caption{Real (right) and imaginary (left) parts of the PES of the $^2\Pi$ and the $^2\Sigma^+$
resonance of HCN$^-$ computed with CAP-EOM-EA-CCSD. The EP is marked by a red dot.
Reproduced with permission from Ref. \citenum{benda18c}.}
\label{fig:hcn}
\end{figure}
In order to locate equilibrium structures as well as crossing seams using Eqs. \eqref{eq:mecp}
and \eqref{eq:meep}, the first derivative of the complex resonance energy with respect to
nuclear coordinates is required. In accordance with the interpretation of Eq. \eqref{eq:sieg1},
the real part and imaginary part of the gradient vector describe the change in the energy and
decay width across the PES. They point, in general, in different directions.
For diatomic and triatomic molecules, it is possible to evaluate the gradient vector through
single-point energy calculations and numerical differentiation but such an approach becomes
quickly impractical for polyatomic systems owing to increasing computational
cost. For bound states, it was realized more than 50 years ago that an analytic evaluation of
the energy gradient can be achieved at a cost similar to that associated with the evaluation of
the energy itself.\cite{pulay69} Since then, analytic-derivative theory has become an important
aspect of quantum-chemical method development and gradient expressions have been derived
for most of the frequently used electronic-structure methods.\cite{gauss00,helgaker12}
Corresponding developments for electronic resonances started only recently with the presentation
of analytic gradients for CAP-HF, CAP-CCSD, and CAP-EOM-CCSD energies.\cite{benda17}
Because all AO integrals are real-valued in CAP methods, the evaluation of the energy is easier
here than for CBF methods. In a generic gradient expression written in the AO basis,
\begin{equation} \label{eq:grad1}
\frac{dE}{dX} = \sum_{\mu\nu} h^X_{\mu\nu} \gamma_{\mu\nu} \; + \; 1/4 \sum_{\mu\nu\sigma\rho}
\langle \mu\sigma || \nu\rho \rangle^X \Gamma_{\mu\sigma\nu\rho} \; + \; \sum_{\mu\nu} S^X_{\mu\nu}
I_{\mu\nu}
\end{equation}
with $h^X_{\mu\nu}$, $\langle \mu\sigma || \nu\rho \rangle^X$, and $S^X_{\mu\nu}$ as derivatives
of the one-electron Hamiltonian, two-electron, and overlap integrals, it is simply necessary to
replace the real-valued density matrices $\gamma_{\mu\nu}$, $\Gamma_{\mu\sigma\nu\rho}$,
and $I_{\mu\nu}$ by their complex-valued counterparts for the respective CAP method. The only
additional derivative integrals stem from the differentiation of the CAP itself and, because the CAP
is a one-electron operator, are not relevant for the overall computational cost. It should be noted
that these considerations only apply to the case where the CAP is included in the HF equations.
If it is added at a later stage in a correlated calculation, additional contributions to the density
matrices in Eq. \eqref{eq:grad1} may arise.
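In terms of array operations, the contraction of Eq. \eqref{eq:grad1} for a single nuclear
coordinate might look as follows; all arrays are placeholders for quantities delivered by
integral and density-matrix routines of the chosen CAP method.
\begin{verbatim}
import numpy as np

# Sketch: one component of the gradient according to Eq. (grad1).
# h_X, eri_X, S_X are derivative AO integrals; gamma, Gamma2, I_mat are
# the complex one- and two-particle and energy-weighted density matrices.
def gradient_component(h_X, eri_X, S_X, gamma, Gamma2, I_mat):
    dE  = np.einsum("mn,mn->", h_X, gamma)
    dE += 0.25 * np.einsum("msnr,msnr->", eri_X, Gamma2)
    dE += np.einsum("mn,mn->", S_X, I_mat)
    return dE    # complex: Re -> energy change, Im -> width change
\end{verbatim}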
A complication of CAP gradient calculations is that a molecule may be displaced relative to the
CAP while its structure is optimized.\cite{benda17,benda18a} This can be prevented by constraining
the gradient such that the molecule stays put relative to the origin of the CAP, which leads to a
computationally inexpensive extra term in Eq. \eqref{eq:grad1}. The origin of the CAP can be
chosen, for example, as center of nuclear charges. In order to deal with possible changes of the
optimal CAP strength $\eta_\text{opt}$ across the PES, it was suggested to keep $\eta$ fixed
during an optimization, then determine a new $\eta_\text{opt}$ according to Eq. \eqref{eq:capopt},
and reoptimize the molecular structure.\cite{benda17,benda18a} Typically, only 2--4 of these cycles
are required to obtain a converged structure and CAP strength.
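The alternating protocol can be summarized in a few lines of Python; optimize_structure and
optimal_eta stand for a fixed-$\eta$ geometry optimization and an $\eta$-trajectory analysis
according to Eq. \eqref{eq:capopt}, respectively, and the convergence threshold is hypothetical.
\begin{verbatim}
# Sketch: alternating optimization of structure and CAP strength.
def optimize_resonance_structure(R0, eta0, optimize_structure,
                                 optimal_eta, max_cycles=4, tol=1e-4):
    R, eta = R0, eta0
    for _ in range(max_cycles):
        R = optimize_structure(R, eta)   # geometry step at fixed eta
        eta_new = optimal_eta(R)         # Eq. (capopt) at the new R
        converged = abs(eta_new - eta) < tol
        eta = eta_new
        if converged:
            break
    return R, eta
\end{verbatim}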
\subsection{Resolution-of-the-identity approximation for electron-repulsion integrals} \label{sec:rank}
When aiming to extend the scope of a quantum-chemical method to larger systems, the
electron-repulsion integrals (ERIs), which form a tensor of order 4, represent a major bottleneck.
In bound-state quantum chemistry, it is common practice to exploit that the ERI tensor is
not of full rank and several techniques have been proposed to decompose it into lower-rank
quantities.\cite{baerends73,whitten73,dunlap79,vahtras93,beebe77,friesner85,koch03,neese09,
weigend09,hohenstein12,parrish12,izsak20}
Only recently, however, steps were taken to extend these techniques to complex-variable methods
for electronic resonances.\cite{hernandez19,hernandez20} Specifically, the resolution-of-the-identity
(RI) approximation has been applied to ERIs over complex-scaled basis functions. The RI
approximation exploits that, for a basis set of atom-centered Gaussian functions, the pair space
of orbital products is often markedly redundant. This redundancy is particularly pronounced if large
basis sets with many diffuse functions are used; the RI approximation thus leads to the most
significant speedups in such cases. Since complex-variable calculations often demand such
extended basis sets, significant savings can be expected and RI methods for electronic resonances
hold a lot of promise.
The RI approximation\cite{baerends73,whitten73,dunlap79,vahtras93} is defined according to
\begin{equation} \label{eq:ri1}
(\mu \nu | \sigma \rho) \approx \sum_{PQ} (\mu\nu | P) [\mathbf{J}^{-1}]_{PQ} (Q | \sigma \rho)
= \sum_{Q} B^Q_{\mu\nu} B^Q_{\sigma\rho}~,
\end{equation}
where $P$ and $Q$ refer to auxiliary Gaussian functions $\chi_P, \chi_Q$ that approximate products
of AOs $\rho_{\mu\nu} = \chi_\mu \chi_\nu$, $J_{PQ} = \int dr_1 \int dr_2 \chi_P(r_1) r_{12}^{-1}
\chi_Q(r_2)$, and $B_{\mu\nu}^Q = \sum_P (\mu\nu | P) [\mathbf{J}^{-1/2}]_{PQ}$. In bound-state
quantum chemistry, Eq. \eqref{eq:ri1} is commonly applied to DFT,\cite{eichkorn95,eichkorn97}
HF,\cite{weigend02} MP2,\cite{feyereisen93,weigend98} CC2,\cite{haettig00} and ADC(2)\cite{
haettig06} methods, where it typically entails negligible errors. Although the RI approximation does
not reduce the formal scaling of these methods (with the notable exception of pure DFT\cite{
eichkorn95,eichkorn97}), it does reduce absolute computation times considerably and also
makes it possible to avoid storing any four-index quantity.
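A minimal sketch of the factorization in Eq. \eqref{eq:ri1} is given below. For complex basis
functions, the Coulomb metric $\mathbf{J}$ is complex symmetric rather than Hermitian, so its
inverse square root has to be formed without complex conjugation; the renormalization of the
eigenvectors in the sketch accounts for this and reduces to the standard procedure for real
$\mathbf{J}$.
\begin{verbatim}
import numpy as np

# Sketch: RI B tensors from three-center integrals (mu nu|P), stored in
# eri_3c with shape (nao, nao, naux), and the metric J (naux, naux).
def build_B(eri_3c, J):
    w, V = np.linalg.eig(J)                 # complex symmetric: eig
    V = V / np.sqrt(np.sum(V * V, axis=0))  # c-normalize: V^T V = 1
    J_mhalf = V @ np.diag(w**(-0.5)) @ V.T  # transpose, no conjugate
    return np.einsum("mnP,PQ->mnQ", eri_3c, J_mhalf)

def eri_from_B(B):
    # (mu nu|sigma rho) ~ sum_Q B[mu,nu,Q] * B[sigma,rho,Q]
    return np.einsum("mnQ,srQ->mnsr", B, B)
\end{verbatim}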
The recent implementation of Eq. \eqref{eq:ri1} for use in RI-MP2 and RI-HF calculations
with complex basis functions confirmed that one can indeed realize significant speedups
in complex-variable calculations by means of the RI approximation, while
the errors in energies and decay widths are negligible.\cite{hernandez19,hernandez20}
CBF-RI-MP2 calculations with more than 2500 basis functions became possible, which
enabled studying the ionization of molecules with up to ca. 50 atoms in static electric fields.
As an example, Fig. \ref{fig:sfi} displays angle-dependent ionization rates of anthracene and
phenanthrene.
\begin{figure} \centering
\includegraphics[scale=0.85]{sfi_c10h14-b.pdf}
\caption{Angle-dependent ionization rates $\Gamma$ of anthracene and phenanthrene
(C$_{14}$H$_{10}$) in a static electric field of strength $F$=0.04 a.u. computed with RI-MP2.
The molecules are in the $xy$ plane while the field is in the $yz$ plane and $\phi$ is the angle
between the field and the molecular plane with $\phi=0^\circ$ corresponding to the field being
parallel to the molecular plane. Reproduced with permission from Ref. \citenum{hernandez20}.}
\label{fig:sfi}
\end{figure}
Formally, no changes to Eq. \eqref{eq:ri1} are needed when dealing with CBFs except that
the inversion of $\mathbf{J}$ requires care when the auxiliary basis contains CBFs as well.
However, the derivation changes: Originally,\cite{dunlap79} Eq. \eqref{eq:ri1} was obtained
by minimizing the functional
\begin{equation} \label{eq:eri2}
\Delta_{\mu\nu} = \int dr_1 \int dr_2 [\rho_{\mu\nu} (r_1) - \tilde{\rho}_{\mu\nu}(r_1)] \, r_{12}^{-1} \,
[\rho_{\mu\nu} (r_2) - \tilde{\rho}_{\mu\nu}(r_2)]
\end{equation}
with $\tilde{\rho}_{\mu\nu}$ and $\rho_{\mu\nu}$ as approximated and exact AO products,
respectively. This is not possible for a basis set containing CBFs because $\Delta_{\mu\nu}$
becomes complex as well. In Ref. \citenum{hernandez19} the absolute value $|\Delta_{\mu\nu}|$
was minimized instead, which also led to Eq. \eqref{eq:ri1}.
Initial applications\cite{hernandez19} of CBF-RI-MP2 employed customized auxiliary basis sets
including several complex-scaled shells, but it was later demonstrated\cite{hernandez20} that
real-valued auxiliary basis sets optimized for standard RI-MP2 calculations work equally well.
As a result, typical CBF-RI-MP2 calculations employ auxiliary basis sets that have roughly the
same size as the original basis set. This is different from standard RI-MP2, where one usually
uses considerably larger auxiliary basis sets.
Very recently, Eq. \eqref{eq:ri1} has been made available for use in CC2 calculations with
CBFs.\cite{paran22} As a multistate method, CC2 in its EOM extension can
provide a description of a resonance together with its decay channels and is thus appropriate
for further types of resonances besides ionization in static fields, for example, core-excited
states and temporary anions. Since CC2 and MP2 are structurally similar,\cite{christiansen95,
haettig00} it is expected that complex-variable RI-CC2 will be applicable to resonances in
systems with up to ca. 50 atoms as well. In addition, the RI approximation has been made
available for CAP-MP2 and CAP-CC2 calculations. Since CAP methods are based on
real-valued AOs, the usual RI approximation can be used and the $B$ tensors from Eq.
\eqref{eq:ri1} become complex only upon transformation to the MO basis.
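For illustration, a closed-shell RI-MP2 correlation energy can be assembled from MO-basis
$B$ tensors as in the sketch below; consistent with the c-product, no complex conjugation is
applied anywhere, so the same expression covers real orbitals, complex basis functions, and
CAP methods alike. All array names and shapes are hypothetical.
\begin{verbatim}
import numpy as np

# Sketch: closed-shell RI-MP2 energy from B_ov with shape
# (nocc, nvir, naux); eps_o, eps_v are (possibly complex) orbital energies.
def ri_mp2_energy(B_ov, eps_o, eps_v):
    E = 0.0 + 0.0j
    for i in range(len(eps_o)):
        for j in range(len(eps_o)):
            iajb = np.einsum("aQ,bQ->ab", B_ov[i], B_ov[j])  # (ia|jb)
            denom = (eps_o[i] + eps_o[j]
                     - eps_v[:, None] - eps_v[None, :])
            E += np.sum(iajb * (2*iajb - iajb.T) / denom)
    return E
\end{verbatim}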
It should be added that it is also possible to apply Eq. \eqref{eq:ri1} to the ERIs in the context
of CCSD and EOM-CCSD.\cite{epifanovsky13} However, this reduces computation
times and memory requirements much less than in the case of MP2 or CC2 because the
amplitudes $t_{ij}^{ab}$ need to be stored and processed in every iteration. Given that the
treatment of resonances often requires large basis sets, computational savings could be
potentially higher than for bound states and complex-variable RI-CCSD may thus be a more
viable method than its real-valued counterpart. However, no implementation has been reported
so far.
\subsection{Quantum embedding} \label{sec:emb}
To investigate electronic resonances in complex environments, a possible further strategy
besides the use of rank-reduction techniques consists in wave-function-theory-in-DFT quantum
embedding.\cite{govind98,jones20} Here, only a small region of interest in a larger system is
treated with a high-level method, for example EOM-CC, whereas the remainder is described
using a lower-cost DFT approximation. Since embedding approaches rely on partitioning the
system, they are particularly useful whenever the fragment of interest and the environment can
be told apart easily, for example, if one deals with molecules that are surrounded by a solvation
shell, adsorbed at a surface, or enclosed in a protein coat.
While there are many investigations where quantum embedding is used to describe properties
and chemical reactivity of electronic ground states, applications to excitation, ionization, and
electron attachment are a lot scarcer.\cite{izsak20} A central question is whether it is appropriate
to use the same embedding potential for different electronic states. Recently, it was shown that
a state-universal approach based on projection-based EOM-CCSD-in-DFT embedding\cite{
manby12,lee19,bennie17} delivers good numerical results for excited states of valence, Rydberg,
and charge-transfer character, for valence and core ionization, and with certain reservations, also
for electron-attached states.\cite{parravicini21} Through combination with CAPs and CBFs, the
method has been extended to electronic resonances.\cite{parravicini21}
As an illustration of the numerical performance, Tab. \ref{tab:emb} lists representative results for
several types of transitions computed with embedded and regular EOM-CCSD as well as with
DFT. In all these examples, the environment, which is treated with DFT, consists of 1--5 water
molecules simulating microsolvation. Although it can be seen that valence excitation energies
and valence ionization energies are well described by embedded EOM-CCSD, the usefulness
of the approach for these transitions is questionable given the good performance of DFT, represented
in Tab. \ref{tab:emb} by the Coulomb-attenuated B3LYP density functional.
More interesting are thus applications to core ionization and Rydberg excited states, where
DFT struggles while embedded EOM-CCSD performs well. As concerns electron attachment,
Tab. \ref{tab:emb} demonstrates that embedded EOM-CCSD improves on DFT but, because
attachment energies are typically very small, the deviation from regular EOM-CCSD is still larger
than the actual transition energy. However, for positive attachment energies corresponding to
temporary attachment, which can be larger, the performance of embedded CAP-EOM-CCSD
is satisfactory both for the real part of the energy and the imaginary part corresponding to the
decay width.
\begin{table*} \small
\caption{\ Different types of transition energies computed with regular and embedded EOM-CCSD.
Absolute values are given for regular EOM-CCSD, deviations from these values for embedded
EOM-CCSD. All values are in eV and have been taken from Ref. \citenum{parravicini21}}
\label{tab:emb}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}llllll}
\hline
Type of state & Example & EOM-CCSD & \multicolumn{2}{c}{EOM-CCSD embedded in} & CAM-B3LYP \\
& & & CAM-B3LYP & PBE \\ \hline
Valence excitation & CH$_2$O + 5 H$_2$O & \phantom{--}4.08 & \phantom{--}0.05 & \phantom{--}0.04 & --0.06 \\
Rydberg excitation & CH$_3$OH + 3 H$_2$O & \phantom{--}8.75 & --0.03 & --0.48 & --0.68 \\
Valence ionization & CH$_2$O + 5 H$_2$O & \phantom{--}11.19 & \phantom{--}0.12 & \phantom{--}0.10 & --0.15 \\
Core ionization & CH$_2$O + 5 H$_2$O & \phantom{--}540.95 & \phantom{--}0.06 & \phantom{--}0.04 & --1.40 \\
Electron attachment & HCF + 5 H$_2$O & --0.07 & --0.48 & --1.49 & --1.40 \\
Temporary electron attachment\footnote{Computed using CAP.} & CH$_2$O + H$_2$O &
\phantom{--}0.88--0.10$i$ & --- & \phantom{--}0.91--0.11$i$ & --- \\
\hline
\end{tabular*}
\end{table*}
For the projection-based embedded EOM-CCSD method employed in Tab. \ref{tab:emb}, one first
solves a standard SCF equation for the whole system, denoted A+B, using a suitable density functional.
Based on localization of the orbitals and Mulliken population analysis, the resulting SCF wave function
is then split into two pieces corresponding to the high-level fragment A and the environment B.
Subsequently, a second SCF procedure is carried out with a modified Fock matrix $\tilde{\mathbf{F}}$,
whose elements are given as
\begin{align} \label{eq:emb1}
\tilde{F}^\text{A-in-B}_{\kappa\lambda} &= \sum_{\mu\nu} \mathcal{P}_{\kappa\mu}
F^\text{A-in-B}_{\mu\nu} \mathcal{P}_{\nu\lambda}~, \\
F^\text{A-in-B}_{\mu\nu} &= h_{\mu\nu} + \sum_{\rho\sigma} \gamma^\text{A-in-B}_{\sigma\rho}
[\langle \mu\sigma | \nu\rho \rangle - \langle \mu\sigma | \rho\nu \rangle] + v^\text{emb}_{\mu\nu}~.
\label{eq:emb2} \end{align}
This determines the density matrix $\boldsymbol{\gamma}^\text{A-in-B}$ that forms the basis for the
subsequent CCSD and EOM-CCSD calculations. In Eq. \eqref{eq:emb2}, the embedding potential
is given as $v^\text{emb}_{\mu\nu} = \sum_{\sigma\rho} [\gamma^\text{A+B}_{\sigma\rho} -
\gamma^\text{A}_{\sigma\rho}] \, [\langle \mu\sigma | \nu\rho \rangle - \langle \mu\sigma | \rho\nu
\rangle]$ and is thus independent of $\boldsymbol{\gamma}^\text{A-in-B}$, so that it does not need
to be recalculated during the SCF procedure. The projector $\mathcal{P}_{\mu\nu} = \delta_{\mu\nu}
- \sum_{\rho} \gamma^\text{B}_{\mu\rho} \; S_{\rho\nu}$ removes the degrees of freedom
corresponding to subsystem B from the variational space. This formulation of the theory
in terms of Eqs. \eqref{eq:emb1} and \eqref{eq:emb2} is equivalent to the one originally
introduced.\cite{manby12}
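In terms of standard integral contractions, Eqs. \eqref{eq:emb1} and \eqref{eq:emb2} can be
assembled as in the following sketch; all density matrices are assumed to be available from the
preceding SCF and localization steps, and eri holds $\langle \mu\sigma | \nu\rho \rangle$ in
physicists' notation.
\begin{verbatim}
import numpy as np

# Sketch: embedded Fock matrix according to Eqs. (emb1) and (emb2).
def embedded_fock(h, S, eri, gamma_AinB, gamma_AB, gamma_A, gamma_B):
    def G(dm):   # Coulomb minus exchange contraction
        return (np.einsum("msnr,sr->mn", eri, dm)
                - np.einsum("msrn,sr->mn", eri, dm))
    v_emb = G(gamma_AB - gamma_A)        # embedding potential
    F = h + G(gamma_AinB) + v_emb        # Eq. (emb2)
    P = np.eye(len(S)) - gamma_B @ S     # projector
    return P @ F @ P                     # Eq. (emb1)
\end{verbatim}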
Eqs. \eqref{eq:emb1} and \eqref{eq:emb2} ensure that the SCF energy of the full system A+B is
recovered exactly if both fragments are described at the same level of theory.\cite{manby12,lee19}
A further advantage of projection-based embedding is that no modifications are necessary to the
working equations of the higher-level method as long as one is only interested in the energy or
orbital-unrelaxed properties. Likewise, quantities such as NTOs and Dyson orbitals
that are useful to characterize excitation, ionization or electron attachment, can be evaluated in a
straightforward manner.\cite{parravicini21} The fact that the working equations of the higher-level
method stay the same is also the reason that the combination of projection-based embedding with
CAP-EOM-CCSD is very easy if the CAP is active only in the EOM-CCSD calculation. In such
an approach, the SCF calculation based on Eqs. \eqref{eq:emb1} and \eqref{eq:emb2} stays
real-valued. The combination with CBFs is conceptually simple as well but requires the implementation
of Eqs. \eqref{eq:emb1} and \eqref{eq:emb2} for complex numbers.
\subsection{Partial decay widths} \label{sec:pdw}
Most electronic resonances can decay into different electronic states, which are referred
to as decay channels. For example, metastable excited states of CN$^-$ can decay into
the $^2\Sigma^+$ and the $^2\Pi$ state of neutral CN.\cite{skomorowski18} A core-ionized
H$_2$O molecule with electronic configuration $(1a_1)^1 (2a_1)^2 (1b_2)^2 (3a_1)^2
(1b_1)^2$ can decay into 16 states of H$_2$O$^{2+}$ where shake-up processes and
double Auger decay have not even been considered yet.\cite{skomorowski21b,matz21}
In a molecule exposed to a static electric field, electrons from all orbitals can undergo
tunnel ionization giving rise to a multitude of decay channels but the relative ease with
which ionization from a particular orbital happens greatly depends on the orientation of
the field and the molecule.\cite{jagau18,hernandez20}
Partial decay widths describing the contributions of different channels are thus of great
importance for the chemistry and physics of electronic resonances. Notable exceptions
are low-lying temporary radical anions formed by electron attachment to closed-shell
molecules, which typically decay solely into their parent state, meaning the neutral ground
state. For all other types of resonances, partial widths are central quantities to analyze
the fate of a metastable system. In addition, branching ratios can often be determined
experimentally with much better accuracy than total decay widths.
However, while there are ample theoretical data on partial decay widths of atomic resonances
and those in diatomic or triatomic molecules, corresponding investigations of polyatomic
molecules are comparatively scarce.\cite{manne85,zaehringer92,zaehringer92b,tarantelli94,
kolorenc11,inhester12,kolorenc20,skomorowski21,skomorowski21b,matz21} Most computations
relied on Fano's theory\cite{fano61,feshbach62} where the determination of partial widths is
straightforward because the total width is obtained as a sum over decay channels. This is not
the case for complex-variable methods, where the total decay width is evaluated according to
Eq. \eqref{eq:sieg1}, i.e., as the imaginary part of an eigenvalue of the Schr\"odinger equation.
Consequently, additional steps have to be taken to define partial decay widths and very few
data are available so far.
In some cases, the evaluation of partial widths with complex-variable methods is facilitated by
point-group symmetry. For example, the CAP-MRCI decay width of C$_2^{2-}$ has been
decomposed into contributions from $\sigma_g$, $\sigma_u$, and $\pi_u$ decay channels
by projecting the CAP onto orbitals of a particular point-group symmetry.\cite{sommerfeld00}
In a related manner, the Auger decay width of Ne$^+$ (1s$^{-1}$) computed with CBF-CCSD
has been decomposed into contributions from Ne$^{2+}$ states of S, P, or D symmetry by
complex scaling only s, p, or d shells of the basis set, respectively.\cite{matz21} It has also
been suggested to decompose the total width obtained in a CAP calculation using Fano's
theory by considering the overlap between a Dyson orbital and a Coulomb wave representing
the free electron\cite{gulania19} or, alternatively, by analyzing NTOs.\cite{
skomorowski18} These approaches were applied to C$_2^{2-}$ and cyanopolyyne anions
using CCSD and EOM-EA-CCSD wave functions, respectively.
Recently, a more general approach was introduced to evaluate partial widths in the context
of complex-variable methods.\cite{matz21} This approach is independent of
point-group symmetry and does not make use of Fano's theory. Instead, it is
based on energy decomposition analysis. In initial applications, it was applied to Auger decay
of core-ionized states described with CS-CCSD and CBF-CCSD: For a CC wave function
describing a core-ionized state, the decay width stems solely from the imaginary part of the
correlation energy
\begin{equation} \label{eq:pw1}
E_\text{CC} = \sum_{ijab} \Big( 1/4 t_{ij}^{ab} + 1/2 t_i^a t_j^b \Big) \langle ij || ab \rangle~,
\end{equation}
because the underlying HF reference does not capture the coupling to the continuum. It is
thus possible to assign amplitudes $t_{ij}^{ab}$, in which $a$ or $b$ refer to the core hole,
to a particular decay channel depending on the vacated orbitals $i$ and $j$ and to evaluate
the partial decay width as the contribution of the respective $t_{ij}^{ab}$ to $\text{Im}(E)$. If all
amplitudes of this type are removed, a CVS-like wave function is obtained and Eq. \eqref{eq:pw1}
yields zero for the imaginary part of the CC energy. As an alternative, the decomposition
can be based on the CC Lagrangian
\begin{equation} \label{eq:pw2}
E_\text{CC} = \langle 0 | (1 + \Lambda) e^{-T} H e^T | 0 \rangle ~,
\end{equation}
which yields slightly different results. As a numerical example of the approach, Tab. \ref{tab:h2o}
displays partial decay widths of the 16 primary decay channels of core-ionized water computed
with CBF-CCSD and, alternatively, based on Fano's theory. This shows overall good agreement
between the methods with the notable exception of the $2a_12a_1$ channel, presumably because
it has a significant admixture of other states.
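The decomposition based on Eq. \eqref{eq:pw1} amounts to grouping pair contributions to the
correlation energy by decay channel, as in the following simplified sketch; the mapping from
occupied-orbital pairs to channels and all array shapes are hypothetical, and spin integration
as well as singlet/triplet couplings are ignored for brevity.
\begin{verbatim}
import numpy as np

# Sketch: channel-wise decomposition of Im(E_CC) according to Eq. (pw1).
# t2: (nocc, nocc, nvir, nvir), t1: (nocc, nvir), eri_oovv: <ij||ab>;
# channels maps an occupied pair (i, j) to a decay-channel label.
def partial_widths(t1, t2, eri_oovv, channels):
    e_pair = (0.25 * np.einsum("ijab,ijab->ij", t2, eri_oovv)
              + 0.5 * np.einsum("ia,jb,ijab->ij", t1, t1, eri_oovv))
    widths = {}
    for (i, j), label in channels.items():
        widths[label] = widths.get(label, 0.0) - 2*np.imag(e_pair[i, j])
    return widths    # Gamma_c = -2 Im(E_c)
\end{verbatim}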
While energy expressions corresponding to other wave functions can be analyzed in an analogous
manner, the numerical performance of the approach varies. Specifically, the
decomposition of the EOM-IP-CCSD energy in terms of $R_2$ and $L_2$ amplitudes yields
much less reliable results because excitations that do not result in filling the core hole deliver
unphysically large contributions to $\text{Im}(E)$. Likely, it is necessary in this case to analyze
further components of the wave functions besides those created by $R_2$ and $L_2$. A further
complication originates from $\text{Im}(E)$ not being exactly zero for bound states in finite basis
sets. This is especially pronounced for CS approaches (see Sec. \ref{sec:cs2}) but does not
impair the performance of the approach too much as Tab. \ref{tab:h2o} illustrates.
Importantly, Eqs. \eqref{eq:pw1} and \eqref{eq:pw2} apply to all complex-variable methods
and types of resonances. The generalization from Auger decay to other processes involving
core vacancies as well as to temporary anions and quasistatic ionization, possibly using other
wave functions, is thus expected to be straightforward.
\begin{table} \small
\caption{\ Partial decay widths of H$_2$O$^+$ (1s$^{-1}$) computed with different methods.
All values in meV.}
\label{tab:h2o}
\begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}}llll}
\hline
Decay & CBF & Fano & Fano \\
channel & CCSD\cite{matz21} & EOM-CCSD\cite{skomorowski21b} &
MRCI\cite{inhester12,inhester14} \\ \hline
$3\text{a}_1 1\text{b}_1$ (triplet) & 0.2 & 0.5 & 0.4 \\
$1\text{b}_1 1\text{b}_1$ & 18.0 & 13.3 & 19.0 \\
$3\text{a}_1 1\text{b}_1$ (singlet) & 19.6 & 12.7 & 18.0 \\
$1\text{b}_1 1\text{b}_2$ (triplet) & 0 & 0 & 0 \\
$3\text{a}_1 3\text{a}_1$ & 12.2 & 8.9 & 13.1 \\
$1\text{b}_1 1\text{b}_2$ (singlet) & 15.7 & 10.7 & 15.2 \\
$3\text{a}_1 1\text{b}_2$ (triplet) & 0.2 & 0.4 & 0.3 \\
$3\text{a}_1 1\text{b}_2$ (singlet) & 13.4 & 9.5 & 13.2 \\
$1\text{b}_2 1\text{b}_2$ & 8.7 & 7.1 & 9.8 \\
$2\text{a}_1 1\text{b}_1$ (triplet) & 2.8 & 4.1 & 3.0 \\
$2\text{a}_1 3\text{a}_1$ (triplet) & 2.5 & 3.8 & 2.6 \\
$2\text{a}_1 1\text{b}_2$ (triplet) & 2.2 & 2.9 & 1.6 \\
$2\text{a}_1 1\text{b}_1$ (singlet) & 9.6 & 9.5 & 10.0 \\
$2\text{a}_1 3\text{a}_1$ (singlet) & 12.7 & 13.6 & 11.0 \\
$2\text{a}_1 1\text{b}_2$ (singlet) & 6.8 & 6.3 & 6.6 \\
$2\text{a}_1 2\text{a}_1$ & 21.6 & 15.3 & 4.1 \\ \hline
All & 142.5 & 121.7 & 145.6 \\
\hline
\end{tabular*}
\end{table}
\section{Conclusions and outlook} \label{sec:conc}
This feature article has given an overview of the quantum chemistry of electronic resonances
and their treatment by means of complex-variable electronic-structure methods. Because
resonances are embedded in the continuum, their wave functions are \textit{a priori} not
square-integrable. Complex-variable techniques afford a regularization of
resonances and make them amenable to bound-state quantum chemistry. Quantities such
as Dyson orbitals and natural transition orbitals can be defined and an analysis of resonances
in terms of molecular orbital theory becomes possible.
Regularization of resonance wave functions can be achieved by means
of complex scaling or complex absorbing potentials. While complex scaling offers several
formal advantages and can now be applied to molecular resonances without problems using
complex-scaled basis functions, complex absorbing potentials are more heuristic but also
easier to integrate into existing software. In addition, CAP calculations can be sped up by
activating the CAP only in certain steps of a calculation. However, the unphysical dependence
of the complex resonance energy on parameters such as scaling angle and CAP strength
presents a persistent stumbling block shared by CAP and complex-scaled approaches.
Different types of resonances, in particular temporary anions, core-vacant states, and
Stark resonances were discussed with respect to their electronic structure and their decay
mechanism. These states pose distinct requirements towards the electronic-structure
model. For temporary anions, superexcited states, core-excited and core-vacant states,
a multistate treatment as offered by EOM-CC, SAC-CI, CC2, or ADC methods combines
several advantages. For other states such as metastable dianions, the identification of the
most suitable computational approach is less straightforward. The feature article has
furthermore summarized a number of recent methodological contributions to the field:
\begin{itemize}
\item The development of analytic gradients for CAP-CC methods has enabled the investigation
of potential energy surfaces of polyatomic temporary anions. Special points such as equilibrium
structures, crossing seams, and exceptional points can be located and characterized; the
results demonstrate the relevance of such points for chemical reactions and spectroscopies
involving electronic resonances. In addition, the availability of analytic gradients paves the
way for conducting \textit{ab initio} molecular dynamics simulations of decaying states. Such
calculations will be most relevant to model DEA efficiencies.
\item The development of a resolution-of-the-identity approximation for complex basis
functions has enabled the treatment of resonances in molecules with up to ca. 50 atoms
at the MP2 level of theory. Because resonances require large basis sets with many diffuse
functions, substantial speedups can be achieved by means of the RI approximation.
Notably, standard auxiliary bases without any complex-scaled functions deliver excellent
results for energies and decay widths. The introduction of RI methods for resonances
can be seen as a first step towards the application of further rank-reduction techniques
such as Cholesky decomposition.\cite{beebe77,koch03} Moreover,
multistate methods such as complex-variable CC2 have very recently
been combined with the RI approximation as well; these approaches are expected to
benefit from similar advantages as MP2.
\item Projection-based wave-function-theory in density-functional-theory embedding
provides a way to quantify the impact of the chemical environment on a resonance state.
The method has so far only been used for microsolvated temporary anions described by
CAP-EOM-CCSD, where it delivers good results for energies and decay widths. A corresponding
approach based on complex basis functions that is geared towards core-vacant states has
very recently been achieved as well. Most interesting in this context will be applications
to decay processes such as interatomic Coulombic decay that occur only through participation
of the environment.
\item Partial decay widths have been made available by means of an energy decomposition
analysis that is related to the core-valence separation. The method has been applied to
Auger decay of core-ionized states described by complex basis functions and EOM-CCSD,
but can be generalized to other wave functions and kinds of resonances. Partial widths
are critically important for all states for which more than one decay channel is open
including different types of superexcited, core-vacant, and Stark resonances.
\end{itemize}
The present feature article illustrates that complex-variable techniques and, more generally,
the quantum chemistry of electronic resonances constitute a field of active research. The developments
introduced in recent years have broadened the scope of electronic-structure theory significantly
and enabled new types of applications in computational chemistry and spectroscopy. Although
there remain formal as well as technical issues to be solved, the implementations of complex
scaling and complex absorbing potentials that are available today already represent a useful
enhancement of quantum chemistry. Further research on complex-variable techniques is
underway and provides the perspective of making these methods more practical so that,
ultimately, they may be routinely used by non-experts as tools in quantum-chemistry program
packages.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
The author is grateful to Professors Anna I. Krylov, Nimrod Moiseyev, and Lorenz Cederbaum,
as well as the current and former members of his research group for many fruitful discussions
about electronic resonances and complex-variable techniques. The author also thanks
Professor J\"urgen Gauss and Dr. Wojciech Skomorowski for helpful feedback on the manuscript.
Funding from the European Research Council (ERC) under the European Union's Horizon 2020
research and innovation program (Grant Agreement No. 851766), from the German Research
Foundation (Grant JA-2794/1-1), and from the Fonds der Chemischen Industrie is gratefully
acknowledged.
\renewcommand\refname{References}
The discovery of superconductivity (SC) in LaFeAsO$_{1-y}$F$_{y}$\cite{Kamihara2008} has triggered renewed interest in
superconductivity and also in itinerant magnetism in general. In the Fe-based
``1111'' and ``122" pnictides, the emergence of
superconductivity is accompanied by the suppression of the stripe-like
antiferromagnetic (AFM) ordering of Fe$^{2+}$ and a
tetragonal(T)-orthorhombic(O) structural transition\cite{review4}. It has also
been suggested that AFM/structural fluctuations may be the driving forces for
superconductivity\cite{reviews_pairing}. In view of the prominent role of
magnetism in driving superconductivity in the Fe-based pnictides, in particular
by doping with transition metal ions, systematic investigations of the magnetic
and structural properties of the iso-structural ``1111'' and ``122'' parent pnictides involving the square lattice of other transition metals are called for\cite{Zhao2009}. In
fact, the various transition metals influence subtle and specific structural,
magnetic, and electronic properties, which provides insight into how the
magnetism is related to the SC state\cite{Ohta2009}. Furthermore, compared with
the parent ``122" compounds, the parent ``1111" oxypnictides \textit{RT}AsO
(\textit{R} = magnetic and non-magnetic rare earths, $T =$ transition metals Fe,
Co, Ni, Mn, etc.) are more intriguing as they offer the possibility to tweak an
additional coupling between the rare-earth \textit{R} and the
transition-metal ions.
CeFeAsO is an itinerant poor metal with a T-O structural transition at $T_S \approx
150$ K followed by stripe-like AFM order of Fe at $T_N \approx 145$ K\cite{Zhao2008}.
Recent $\mu SR$ studies\cite{Maeter2009} have indicated relatively strong
coupling between the rare earth Ce and Fe, and further neutron and X-ray
scattering studies have shown the coupling leads to a gradual Fe
spin-reorientation at low temperatures\cite{Zhang2013}. By contrast, in the
heavy-fermion metal CeNiAsO\cite{Luo2014}, Ni does not order magnetically but
two successive AFM transitions associated with Ce ions are
observed\cite{Luo2011}. In CeCoAsO, a ferromagnetic ordering is found below
$\sim75$ K with no indication for Ce ordering at lower
temperatures\cite{Sarkar2010}. To date, little attention has been paid to
CeMnAsO for which magnetization and heat capacity measurements indicate that Mn
moments order above room temperature and a first-order magnetic transition
emerges at $\sim$ 35 K, possibly related to a Mn spin
reorientation\cite{TSUKAMOTO2011}. It has also been proposed that the Ce spins
do not undergo long range order but are {\it parasitically induced to order} below
$\sim$ 35 K\cite{TSUKAMOTO2011}. However, the AFM N\'{e}el temperature $T_N$,
actual magnetic structures, the values of the ordered moments, and
the interplay between Ce and Mn in CeMnAsO have not been determined. Here, we
report neutron diffraction, and magnetization results on CeMnAsO to answer these
questions, and also to critically compare the structure and magnetism with
related pnictides.
\section{EXPERIMENTAL DETAILS}
Previous reports on the synthesis of CeMnAsO used
CeAs and Mn$_{2}$O$_{3}$ \cite{TSUKAMOTO2011} and added excess Ti as an oxygen
getter, which resulted in the formation of CeMnAsO with a secondary phase. In
the present
study, MnO and CeAs were used as starting materials and mixed thoroughly in
stoichiometric proportions. (CeAs was prepared first by reacting Ce and As
powders at 600 $^{\rm o}$C for 35 h and then at 950 $^{\rm o}$C for 8 h.) The mixed powder was
sealed in an evacuated tantalum tube and sintered at 1150 $^{\rm o}$C for 40 h.
A single-phase polycrystalline CeMnAsO powder was then obtained and
characterized by x-ray and neutron diffraction methods.
Neutron powder diffraction (NPD) measurements on a $\approx 4$ g CeMnAsO sample
were conducted on the HB1A triple-axis spectrometer with a fixed incident energy of
14.6 meV (located at the High Flux Isotope Reactor, HFIR, at the Oak Ridge
National Laboratory, USA). The measurements on HB1A were performed with an HOPG analyzer to lower background scattering (providing approximately 1 meV energy resolution). A thick block of HOPG was used to filter out the $\lambda/2$ component from the incident beam. The data between $2 <T<300$ K were collected using an
{\it orange} cryostat and a high temperature furnace was used for the
measurements between $300<T<420 $ K. All the neutron diffraction data were analyzed using
Rietveld refinement program Fullprof suite\cite{Fullprof}.
The temperature and magnetic field dependences of the magnetization were measured in a
superconducting quantum interference device (Quantum Design MPMS-7S, SQUID) magnetometer.
\section{RESULTS AND DISCUSSION}
\subsection{ A. Crystalline structure }
The neutron powder diffraction pattern at 420 K, shown in Fig.\ \ref{fig:Rietveld}, indicates a
phase-pure material with no trace of a secondary phase in CeMnAsO,
consistent with our x-ray diffraction measurement at room temperature (not shown
here). Rietveld analysis confirms the tetragonal ZrCuSiAs-type structure with
space group $P4/nmm$, as illustrated in the inset of Fig.\ \ref{fig:Rietveld}.
Similar to
the tetragonal structure in \textit{R}FeAsO (\textit{R} is rare earth
element)\cite{Zhang2013} or LaMnAsO \cite{Hanna2013}, the structure of CeMnAsO
consists of MnAs and CeO layers where the Mn$^{2+}$ ions form a square lattice.
Our neutron diffraction results show no change in the crystal structure of this
compound down to 2 K. The refined atomic positions, lattice constants, and
volume of CeMnAsO at 420 K and at the base temperature of 2 K are summarized in Table \
\ref {tab:lattice}.
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig1}
\caption{(color online) Rietveld refinement fit to neutron diffraction pattern
at 420 K and a graphic
representation of the crystal structure
for CeMnAsO using the best fit parameters listed in Table I. The observed data
and the fit are indicated by the open circles and
solid lines, respectively. The difference curve is shown at the bottom. The
vertical bars mark the positions of Bragg reflections for the phases of CeMnAsO
(up) and Al sample holder (below).}
\label{fig:Rietveld}
\end{figure}
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig2}
\caption{(color online) (a) Temperature dependence of the zero-field-cooling
(ZFC) and FC magnetization in a low field of 0.01 T for CeMnAsO. The curve at
the bottom shows the first derivative of the FC curve, with no indication of $T_{\rm N} \approx 345$ K. The inset shows the temperature dependence of the ZFC
magnetization at 0.01, 1 and 5 T. (b) Field dependence of magnetization at
different temperatures in CeMnAsO.}
\label{fig:Mag}
\end{figure}
\subsection{ B. Multiple magnetic transitions and
field-induced metamagnetic transition revealed in magnetization measurements}
The temperature dependence of the zero-field-cooled (ZFC) and field-cooled (FC)
magnetization in Fig.\ \ref{fig:Mag}(a) shows a clear magnetic transition at 35 K emphasized
by a single peak in the first derivative of magnetization with no indication of
additional anomaly up to 370 K. The anomaly at 35 K has been previously
attributed to a spin reorientation (SR) transition of Mn.\cite{TSUKAMOTO2011}
Our susceptibility measurements show that this magnetic transition is not
shifted by external magnetic fields up to 5 T in accordance with typical
behavior of a spin reorientation transition (the transition temperature at 35 K
is labeled $T_{SR}$ hereafter). The thermal hysteresis of the magnetization
below T$_{SR}$ is indicative of the first-order nature of the transition. Below
7 K, both the ZFC and FC magnetization decrease, implying the emergence of another
magnetic transition. We point out that such an anomalous decrease below 7 K was
not observed by Tsukamoto \textit{et al.}, presumably because of the influence of a
secondary phase as mentioned in Ref. 12. Interestingly, we do not observe any
clear anomaly in the magnetization in the temperature range $35 - 370$ K that would identify $T_{\rm N}$.
However, the neutron measurements of the (100) and (101) magnetic Bragg peaks
shown in Fig.\ \ref{fig:OrderPara1} (b) exhibit a sharp increase in the
integrated intensity below $\approx 345$
K, which we identify as the AFM transition temperature $T_{\rm N}$ of the Mn
sublattice. We note that a weak and broader (100) Bragg peak persists above
$T_{\rm N}$ with a linewidth that increases with temperature indicating the
presence of short-range ordered Mn spins above $T_{\rm N}$, as shown in the
inset of Fig.\ \ref{fig:OrderPara2} (a). This suggests the
existence of strong spin fluctuations above $T_{\rm N}$ that tend to wash out
any anomaly in the susceptibility at $T_{\rm N}$, even in its first derivative
with respect to temperature. The absence of a peak (or only a weak one) in the derivative of the
susceptibility is characteristic of the two-dimensional nature of the Mn
magnetic system, with a strong in-plane coupling that gives rise to short-range
fluctuating magnetic order, as has been found in other systems
\cite{Vaknin1989,Vaknin1990}. This is consistent with the
overall behavior of the order parameters as a function of temperature as
expressed in the (100) and the (101) magnetic reflections shown in Fig.\
\ref{fig:OrderPara1} (b). The intensity of both peaks is modeled by a power
law
\begin{equation}
I(T) = a(1-T/T_N)^{2\beta} +b+cT
\end{equation}
where $a$ is an intensity scale factor (at $T=0$) and $b$ and $c$ account for
the temperature-dependent background and signal above $T_{\rm N}$. Our fit to both
peaks yields $T_{\rm N} =347\pm1$ K and
$\beta = 0.47\pm0.03$. The overall temperature dependence with a relatively large $\beta$ (compared to $\beta = 0.125$ for the 2D Ising model or $\beta \approx 0.36$ for the 3D Heisenberg model) has been explained for similar 2D systems with an in-plane exchange coupling $J_1$ that is much larger than the interlayer one $J_c$, i.e., $J_c/J_1 \ll 1$ \cite{Singh1990}.
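The fit itself is straightforward to reproduce; the following Python snippet is a
minimal sketch of such a power-law fit (the temperature and intensity arrays are
synthetic placeholders standing in for the measured integrated intensities):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def intensity(T, a, TN, beta, b, c):
    # Power law of Eq. (1); the order-parameter term vanishes above T_N.
    red = np.clip(1.0 - T / TN, 0.0, None)
    return a * red**(2.0 * beta) + b + c * T

T = np.linspace(250.0, 400.0, 40)                # placeholder data (K)
I = intensity(T, 1000.0, 347.0, 0.47, 50.0, 0.1)
I += np.random.default_rng(0).normal(0.0, 5.0, T.size)

popt, pcov = curve_fit(intensity, T, I, p0=[800.0, 340.0, 0.4, 40.0, 0.0])
print("T_N = %.1f K, beta = %.2f" % (popt[1], popt[2]))
\end{verbatim}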
\begin{table}
\caption{Refined atomic positions, lattice constants, lattice volume $V$ at
$T = 420$ and 2 K for CeMnAsO space group $P4/nmm$. Ce:
2\textit{c}($\frac{1}{4}$,
$\frac{1}{4}$, z); Mn: 2\textit{b} ($\frac{3}{4}$,
$\frac{1}{4}$, $\frac{1}{2}$); As: 2\textit{c} ($\frac{1}{4}$,
$\frac{1}{4}$, z); O: 2\textit{a} ($\frac{3}{4}$,
$\frac{1}{4}$, 0). }
\label{tab:lattice}
\begin{tabular} {llllll}
\hline\hline
T & Atom& Atomic position & \textit{a}(\AA{})& \textit{c}(\AA{}) &
$V$(\AA{}$^{3}$) \\
\hline
420 K& Ce & z= 0.1263(6)& 4.1032(2) & 9.0038(3) &151.594(8)\\
& As & z= 0.6696(4) & & & \\
2 K& Ce & z= 0.1287(4)& 4.0837(3) & 8.9567(4) &149.370(6)\\
& As & z= 0.6720(5) & & & \\
\hline\hline
\end{tabular}
\end{table}
Figure\ \ref{fig:Mag} (b) shows the magnetic field dependence of the magnetization at different
temperatures. The magnetization at room temperature (RT) shows only a very
weak hysteresis, and a linear $M$ versus $H$ curve, typical of AFM behavior, is
observed in $T_{\rm SR}<T<T_{\rm N}$. Below $T_{\rm SR}$, the magnetization
first rises linearly at low fields, indicative of AFM behavior, with a weak
jump in the slope at approximately 1.5 T, indicating a possible field-induced
metamagnetic transition. However, the magnetization is far from saturation
even at 5 T, indicating that the magnetic structure under magnetic field is
mostly preserved, and that the external magnetic field induces a
transformation to weakly canted AFM structures below $T_{SR}$.
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig3}
\caption{(color online) (a) Comparison of the neutron diffraction patterns at
420 K, RT and 10 K. (b) Integrated intensity of (110), (100)
and (101) reflections as
a function of temperature with a fit to a power law, with relevant parameters
as shown in the figure.}
\label{fig:OrderPara1}
\end{figure}
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig4}
\caption{(color online) (a) Temperature dependence of the
integrated intensity of (100) peak. The inset shows the temperature dependence
of the linewidth of (100) peak. (b) Temperature dependence of the integrated
intensity of (002) peak. The inset shows a zoomed-in view at low temperatures
below 80 K with a peak at $T = 7$ K that we attribute to additional magnetic
transition in the Ce sublattice. }
\label{fig:OrderPara2}
\end{figure}
\subsection{ C. Evidence of Mn spin reorientation and
long-range Ce spin orderings determined by NPD measurements
}
A comparison of the neutron diffraction patterns at different temperature
windows $T>T_{N}$ (420 K), $T_{\rm SR}<T<T_{\rm N}$ (RT), $7 K<T<T_{SR}$ (10K)
is shown in Fig.\ \ref{fig:OrderPara1} (a) (nuclear and magnetic Bragg reflections are labeled N and
M, respectively). At RT, the (100) Bragg peak is purely magnetic, whereas the
(101) and (102) peaks have both nuclear and magnetic contributions due to the Mn
ordering. At 10 K (below $T_{\rm SR}$), the intensity of the (100) peak decreases dramatically whereas the intensities of the (101) and (102) increase, evidence for a change in
magnetic structure. The magnetic contribution to the (002) nuclear peak below
$T_{N}$ is negligible; however, it increases below $T_{\rm SR}$ and peaks at
$\approx 7$ K (see Fig.\ \ref{fig:OrderPara2} (b)). As discussed below, the
intensities of the (100) and (002) peaks reflect the order parameters of Mn and Ce
moments, respectively. All the magnetic reflections can be indexed on the high
temperature nuclear (chemical) unit cell with a magnetic propagation vector
$k=(0,0,0)$. The SARAH representational analysis program \cite{SARAH} is used to
derive the symmetry allowed magnetic structures. The decomposition of the
magnetic representation ($\Gamma_{ Mag}$) into the irreducible representations
is $\Gamma^{1}_{3}+\Gamma^{1}_{6}+\Gamma^{2}_{9}+\Gamma^{2}_{10}$ and
$\Gamma^{1}_{2}+\Gamma^{1}_{3}+\Gamma^{2}_{9}+\Gamma^{2}_{10}$ for Mn sites and
Ce sites, respectively. The symmetry allowed basis vectors are summarized in
Table \ \ref {tab:MagMoment}. There are two FM ($\Gamma^{1}_{3}$ and
$\Gamma^{2}_{9}$) and three AFM ($\Gamma^{1}_{2}$, $\Gamma^{1}_{6}$ and
$\Gamma^{2}_{10}$) solutions. But the two FM solutions can be discarded at all
temperatures as there is no FM contribution to the nuclear Bragg reflections in
our neutron diffraction patterns consistent with the magnetization measurements
below $T_{\rm N}$. Thus, only the three AFM solutions are considered for the
data refinement to obtain the magnetic structures at different temperature
windows.
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig5}
\caption{(color online) Rietveld refinement fits to neutron diffraction
patterns and the graphic representations of the determined magnetic structures
of CeMnAsO at (a) 45 K,(b) 10 K and (c) 2 K. The observed data and the fit are
indicated by the open circles and solid
lines, respectively. The difference curve is shown at the bottom. The vertical
bars mark the
positions of Bragg reflections for the nuclear phase (up) and magnetic phase
(down) in CeMnAsO. The middle vertical bars in (c) mark the positions of the
phase of Al sample holder. The inset of (b) shows the temperature dependence of
the linewidth of (002) peak.}
\label{fig:MagneticStructures}
\end{figure}
Rietveld refinement fits to neutron diffraction patterns and the graphic
representation of the determined magnetic structures of CeMnAsO at different
temperatures are shown in Fig.\ \ref{fig:MagneticStructures}. In $T_{\rm SR}<T<T_{\rm N}$, there is no
evidence for Ce moment ordering, and the neutron diffraction pattern is best
fitted using the $\Gamma^{1}_{6}$ model, i.e., the Mn spins are antiparallel at the Mn1
and Mn2 sites, forming a nearest-neighbor antiferromagnetic alignment in the
\textit{ab} plane, while the planes are stacked ferromagnetically along the
\textit{c}-axis, i.e., a C-type AFM
order with the Mn moments along the \textit{c}-axis, as shown in Fig.\
\ref{fig:MagneticStructures} (a) at 45 K. As the temperature decreases to
$T_{\rm SR}$, the
magnetic structure is preserved and the Mn magnetic moment gradually increases
with an average moment of 2.29(3) $\mu_{B}$ at RT and 2.78(2) $\mu_{B}$ at 45 K.
\begin{table}
\caption{The symmetry-allowed basis vectors [m$_{x}$,m$_{y}$,m$_{z}$] for the
space group $P4/nmm$ with \textbf{k}=(0,0,0) in CeMnAsO. Mn1: (0.75, 0.25,
0.5), Mn2:(0.25, 0.75, 0.5), Ce1:(0.25, 0.25, 0.126) and Ce2:(0.75, 0.75,
0.874). }
\label{tab:MagMoment}
\begin{tabular} {llllll}
\hline\hline
Atom& $\Gamma^{1}_{2}$ & $\Gamma^{1}_{3}$ & $\Gamma^{1}_{6}$ &
$\Gamma^{2}_{9}$&$\Gamma^{2}_{10}$\\
&&&& \\
\hline
Mn1 & & [0 0 m$_{z}$] & [0 0 m$_{z}$] & [m$_{x}$ m$_{y}$ 0] &[m$_{x}$
m$_{y}$ 0]\\
Mn2& & [0 0 m$_{z}$] & [0 0 -m$_{z}$]& [m$_{x}$ m$_{y}$ 0] &[-m$_{x}$
-m$_{y}$ 0] \\
Ce1& [0 0 m$_{z}$] & [0 0 m$_{z}$] & & [m$_{x}$ m$_{y}$ 0]&[m$_{x}$
m$_{y}$ 0]\\
Ce2&[0 0 -m$_{z}$] & [0 0 m$_{z}$] & & [m$_{x}$ m$_{y}$ 0]&[-m$_{x}$
-m$_{y}$ 0] \\
\hline\hline
\end{tabular}
\end{table}
In the temperature range 7 K $<T<T_{SR}$, the refinement of the neutron
diffraction patterns is not satisfactory under the assumption of Mn ordering
alone, and the ordering of Ce magnetic moments is required to obtain good
agreement with the data. Trial refinements assuming a linear combination
of the $\Gamma^{1}_{6}$ of Mn sites and $\Gamma^{1}_{2}$ of Ce sites, $\Gamma^{2}_{10}$
of Mn sites and $\Gamma^{1}_{2}$ of Ce sites, or $\Gamma^{1}_{6}$ of Mn sites
and $\Gamma^{2}_{10}$ of Ce sites do not fit the data well. A satisfactory fit
to the diffraction patterns below $T_{\rm SR}$ is obtained by using
$\Gamma^{2}_{10}$, i.e., antiparallel Ce spins at Ce1 and Ce2 sites and
antiparallel Mn spins at Mn1 and Mn2 sites with ordered Mn and Ce moments in the
\textit{ab} plane. Thus, Mn maintains the C-type magnetic structure,
but the ordered Mn magnetic moments reorient to the \textit{ab} plane, and simultaneously
the Ce spins align antiferromagnetically along \textit{c},
similar to the magnetic structures in
PrMnSbO\cite{Kimber2010} and NdMnAsO\cite{Emery2011} below their SR transitions.
It is impossible to determine the absolute angle of the Mn (or Ce) moments with respect to
\textit{a} axis in \textit{ab} plane due to the tetragonal nature of the
system, nevertheless we show both Mn and Ce moments along \textit{a} axis in
Fig.\ \ref{fig:MagneticStructures}(b). Note that there is no significant broadening of the (002) peak
below/above $T_{\rm SR}$, as shown in the temperature dependence of its
linewidth (see the inset of Fig.\ \ref{fig:MagneticStructures} (b)) indicating
long-range ordered Ce moments below $T_{SR}$.
Below $T_{\rm SR}$, whereas there is no anomaly at 7 K in the intensity of the
(100)
reflection (see Fig.\ \ref{fig:OrderPara2} (a)), the intensity of the (002)
reflection increases slightly
peaking at $\sim 7$ K (see Fig.\ \ref{fig:OrderPara2} (b)), consistent with the
peak in the
magnetization shown in
Fig.\ \ref{fig:Mag}(a), which confirms there is
another magnetic transition in the Ce sublattice.
Furthermore, a good refinement of the neutron diffraction pattern at 2 K is
still obtained by using the $\Gamma^{2}_{10}$ model within uncertainties. Since the
refinement using the $\Gamma^{2}_{10}$ model requires the Ce1 and Ce2 spins to be
antiparallel and confined to the \textit{ab} plane, it is very likely that
the transition observed at 7 K is due to a finite angle of the Ce spins with
respect to the Mn spins, forming a noncollinear magnetic structure between the Ce
and Mn moments as theoretically predicted\cite{Lee2012}. However, we emphasize
that our powder data are not sufficiently sensitive to determine the precise
relative angle between Ce and Mn moments. The quality of the refinement to the 2
K data has a tendency to be slightly improved when this angle is increased
from 0$^{\rm o}$ to $\sim20^{\rm o}$. The best refinement results using a
20$^{\rm o}$ angle are shown in Fig.\ \ref{fig:MagneticStructures}(c).
The average ordered moment at 2 K is 3.32(4) $\mu_{B}$ for Mn, while that of Ce is in the range of
$0.75(3) - 0.81(4)\mu_{B}$ (depending on the relative angle between Ce and Mn
moments). The reduced Mn ordered moment (from that of $S=5/2$; $\sim 5$ $
\mu_B$ for an ideal localized moment) in CeMnAsO is likely due to the
spin-dependent hybridization between the Mn 3\textit{d} and As 4\textit{p}
orbitals as in BaMnAsF\cite{Saparov2013} and BaMn$_{2}$As$_{2}$\cite{An2009}
that all share a similar MnAs layer. Such hybridization was also noted for iron-based pnictides such as SrFe$_2$As$_2$\cite{Lee2010}. It is worth noting that the (002) peak
intensity and the peak in the susceptibility at 7 K in CeMnAsO are different
from the observations in the iso-structural NdMnAsO in which both the intensity
of the magnetic peak and the susceptibility saturate below 4 K with no
indication of a magnetic transition in the Nd sublattice \cite{Emery2011}. Our
proposed schematic illustration of the magnetic transitions in CeMnAsO is
summarized in Fig.\ \ref{fig:PhaseDiagram}.
\begin{figure} \centering \includegraphics [width = 1\linewidth] {Fig6}
\caption{(color online) Schematic illustration of the proposed magnetic
structures for Ce and Mn sublattices in the CeMnAsO. Note that the different
\textit{ac} and \textit{ab} planes are used to illustrate the magnetic
structures above and below $T_{SR}$, respectively. }
\label{fig:PhaseDiagram}
\end{figure}
\subsection{ D. Magnetic interactions and absence of T-O structural
transitions above and below $T_{\rm N}$}
The ordered Mn moment of 3.32(4) $\mu_{B}$ at 2 K, compared to the 5 $\mu_B$
expected for a localized moment, indicates that in the spectrum of itinerant {\it
versus} local-moment AFM, CeMnAsO tends to be the latter (i.e., local-moment
AFM)\cite{TSUKAMOTO2011}. This is different from the itinerant CeFeAsO with a
much lower ordered Fe moment of $\sim0.9 \mu_{B}$ compared to the 4 $\mu_B$
expected from a localized moment\cite{Zhang2013}. The in-plane
checkerboard-like AFM structure of the C-type order in CeMnAsO at $T<T_{\rm N}$
suggests that the NN interaction $J_{1}$ is dominant, whereas the in-plane
next-nearest-neighbor (NNN) interaction $J_{2}$ is very weak or negligible.
Thus, in the context of the $J_{1}-J_{2}-J_{c}$ model \cite{Johnston2011}, we
conclude that $J_{1}>0$, $J_{2}<J_{1}/2$ with
negligible spin frustration. This is in sharp contrast to CeFeAsO for which the
effective NNN interaction $J_{2}>J_{1}/2$\cite{Calderon} is necessary to
stabilize the
stripe-like AFM ordering with the ordered moment in the \textit{ab} plane. Note
that the preferred orientation of Mn is along the \textit{c} axis in CeMnAsO, in
contrast to the preferred orientation of Fe in the \textit{ab} plane in CeFeAsO.
Another significant difference between CeMnAsO and CeFeAsO is that the T-O
structural transition observed above $T_{\rm N}$ in CeFeAsO is absent in
CeMnAsO. It is generally accepted that the T-O
structural transition in CeFeAsO and other ``1111" Fe-based pnictides
has a magnetoelastic origin\cite{Singh2009} due to nematic
fluctuations\cite{Fernandes2014,Zhang2014}. The stripe-like AFM structure in
``1111" Fe-based pnictides can be separated into two N\'{e}el sublattices each
defined by NNN Fe spins in the basal plane.\cite{Fernanders2010} Due to
$J_{2}>J_{1}/2$, there is a strong frustration between the two sublattices and the orthorhombic
distortion reduces the frustration and lowers the total energy (magnetic and elastic energy)\cite{Singh2009,Fernandes2014, Fernanders2010}. The absence of such magnetic
frustration may explain the absence of a T-O structural transition in CeMnAsO. This
is further consistent with the absence of the T-O transition in the isostructural
BaMnAsF\cite{Saparov2013} and in the Mn-based ``122'' system
BaMn$_{2}$As$_{2}$\cite{Singh2009}. The AFM structure in both systems is G-type
with the NN Mn spins antiparallel along all the directions and moments
along the $c$-axis. We note that the T-O transition at $ \approx 35 $ K found
in
PrMnSbO below its $T_{\rm N} =230$ K is likely driven by local
\textit{f}-electrons in Pr$^{3+}$, unlike the T-O transition in the ``1111"
Fe-based pnictides driven by the transition metal. In contrast to PrMnSbO, no
structural transition below $T_{\rm N}$ is observed in CeMnAsO, the
mechanism of
which deserves further investigation.
As compared to BaMn$_{2}$As$_{2}$ with antiparallel Mn spins along the
\textit{c}-axis, the parallel Mn spins along the \textit{c}-axis
in CeMnAsO suggest that the interlayer magnetic interaction is $J_{c}>0$ in
BaMn$_{2}$As$_{2}$ but $J_{c}<0$ in CeMnAsO. The N\'{e}el temperature of 625 K
in BaMn$_{2}$As$_{2}$ with three-dimensional magnetism is much higher than 347
K in CeMnAsO. Assuming the in-plane exchange coupling $J_1$ is of the same order of magnitude, this implies a much weaker interlayer magnetic interaction
$J_c$ in CeMnAsO, albeit with strong 2D correlations (fluctuations) as discussed above.
The much weaker interlayer magnetic interaction due to the longer distance
between adjacent
MnAs layers leads to a quasi-two-dimensional AFM character in CeMnAsO
similar to other ``1111" systems\cite{Memet2013}.
For the doped Fe-based
superconductors, it is commonly observed that the emergence of SC is
accompanied by the suppression of both the structural and magnetic transitions
with $J_{2}>J_{1}/2$. However, for CeMnAsO, there is no evidence for a
T-O structural transition and $J_{2}<J_{1}/2$. Further, CeMnAsO is a
local-moment
antiferromagnet, in contrast to the itinerant antiferromagnetism in
Fe-based
superconductors. This indicates that Mn at the transition-metal site may
prevent the emergence of
superconductivity in Mn-doped ``1111'' and ``122'' Fe-based pnictides, which is
supported by experimental evidence that the substitution of Mn for Fe
in Ba(Fe$_{1-x}$Mn$_{x}$)$_{2}$As$_{2}$ does not induce SC
\cite{Thaler2011}.
\subsection{E. Ce-Mn coupling}
The Mn SR transition is not observed in
LaMnAsO \cite{Emery2011}, BaMnAsF\cite{Saparov2013} or
BaMn$_{2}$As$_{2}$ \cite{Singh2009}, where there is no magnetic rare-earth ion,
but is found in CeMnAsO and NdMnAsO \cite{Emery2011}, which indicates that the SR
transition in CeMnAsO is driven by the coupling between the rare-earth Ce and Mn.
The Mn$^{2+}$ moment,
which commonly displays very weak single-ion anisotropy as expected for the $L=0$ state of
Mn$^{2+}$\cite{Toft-Petersen2012}, favors orientation along the \textit{c}-axis. As soon as the Ce$^{3+}$
spins ($S= 1/2$ and $L= 3$) order below 35 K, the Ce-Mn coupling exerts an
effective field that induces a flop of Mn$^{2+}$ spins to the basal plane.
We point out that there is also a spin reorientation in CeFeAsO
\cite{Zhang2013} and
CeCrAsO \cite{TSUKAMOTO2011} but not
in CeCoAsO\cite{Sarkar2010} or CeNiAsO \cite{Luo2011}. In CeFeAsO, the
stripe-like Fe$^{2+}$ spins rotate uniformly and gradually in the \textit{ab}
plane below $\approx 14$ K, while in CeCrAsO the SR of Cr occurs at $\approx 36$ K. Recently
we performed neutron diffraction measurements on CeCoAsO and confirmed the FM
behavior of Co without evidence for a Co SR transition or Ce ordering in
agreement with previously reported studies\cite{Sarkar2010}. The SR of
transition-metal ions (Mn, Fe, and Cr) has also been observed in other systems due to the
coupling between magnetic rare earth \textit{f} and transition metal \textit{d}
moments, such as in \textit{R}FeO$_{3}$ (\textit{R}=Ce and Nd) \cite{RFeO},
\textit{R}Fe$_{2}$ (\textit{R}= Ce and Nd) \cite{Belov1976},
\textit{R}$_{2}$Fe$_{14}$B (\textit{R}=Nd and Er)\cite{Guslienko1995}, hexagonal
HoMnO$_{3}$ \cite{Vajk2005}, \textit{R}CrO$_{3}$ (\textit{R}=Ce, Nd and
Sm)\cite{Cao2014}.
\section{CONCLUSION}
In summary, we report on the structure and magnetic properties in CeMnAsO.
Whereas no structural transition is observed above and below the N\'{e}el
temperature in tetragonal CeMnAsO, it exhibits
a set of complex magnetic transitions. We find
two-dimensional short-range ordered Mn spins (most likely dynamic in nature, i.e.,
spin fluctuations)
above $T_{\rm N}=347(1)$ K. Below $T_{\rm N}$, the Mn spins order in a
C-type AFM structure with moments pointing along the \textit{c}-axis. A spin
reorientation of the Mn moments
from the $c$-axis to the \textit{ab} plane while keeping the C-type order occurs
below $T_{\rm SR}=35$ K, which is induced by long-range ordering of the Ce via
Ce-Mn coupling. Below 7 K, the collinear magnetic structure transforms to a
noncollinear one with an angle between the Ce and Mn moments.
A possible field-induced metamagnetic transition is observed below $T_{\rm
SR}$ in magnetization measurements.
The local-moment
antiferromagnetism with dominant NN interaction and negligible NNN interaction
with $J_{2}<J_{1}/2$
in CeMnAsO contrasts with the itinerant antiferromagnetism in CeFeAsO with
$J_{2}>J_{1}/2$. We also point out that the spin reorientation
transition is common not only to Mn, but also to Fe or Cr ions in the
oxypnictides and other oxides or intermetallics induced by strong
coupling between rare-earth \textit{R} and the transition metal ions.
\section{Acknowledgments}
Research at Ames Laboratory is supported by the US Department of Energy, Office
of Basic Energy Sciences, Division of Materials Sciences and Engineering under
Contract No. DE-AC02-07CH11358. Use of the high flux isotope reactor at the Oak
Ridge National Laboratory, was supported by the US Department of Energy, Office
of Basic Energy Sciences, Scientific User Facilities Division.
About fifty years ago, gamma-ray bursts (GRBs) were discovered. According to their characteristic durations, they can be grouped into two classes \citep{Kouveliotou1993}. Long-duration GRBs (LGRBs) are usually regarded as originating from the collapse of a massive star, while short-duration GRBs (SGRBs) are related to the merger events of black hole (BH)-neutron star (NS) or NS-NS binaries. In either case, the central engine of GRBs is likely to be a BH hyperaccretion system \citep[see reviews by][]{Liu2017} or a massive millisecond magnetar \citep[or protomagnetar, e.g.,][]{Duncan1992,Usov1992,Dai1998,Zhang2001,Dai2006,Metzger2011}.
In the BH hyperaccretion scenarios, if the accretion rate is very high ($\sim 0.001-10 ~M_{\odot}~\rm s^{-1}$), the photons cannot escape from the accretion disk, and only neutrinos are emitted from the disk surface. These neutrinos annihilate in the space outside of the disk and then form the primordial fireball to power a GRB. This kind of accretion disk is the so-called neutrino-dominated accretion flow (NDAF). In the past decades, numerous studies have been devoted to NDAFs. Specifically, their structures, components, and luminosities have been explored in great detail~\citep[e.g.,][]{Popham1999,Narayan2001,Kohri2002,Gu2006,Liu2007,Kawanaka2007,Janiuk2007,Xue2013}. The NDAF has also been used to explain some phenomena related to the central engines of GRBs \citep[e.g.,][]{Liu2008,Liu2010,Liu2012,Liu2014,Liu2015a,Liu2016b,Kawanaka2012,Luo2013,Cao2014,Lin2016,Yi2017}. In particular, the detectabilities of gravitational waves (GWs) and MeV neutrinos released by NDAFs, as well as the possible existence of NDAFs in GRB centers, have been discussed \citep[e.g.,][]{Reynoso2006,Lei2007,Sun2012,Liu2016c}. Besides NDAFs, in the BH hyperaccretion processes, the rotational energy of a BH can be efficiently extracted via a large-scale poloidal magnetic field threading the BH horizon \citep{Blandford1977} to power a Poynting jet and hence GRBs \citep[e.g.,][]{Lee2000a,Lee2000b}. We refer to this as the Blandford-Znajek (BZ) mechanism hereafter.
In the age of BATSE, the quasi-periodic variability of GRBs was generally thought to be caused by the precession of jets \citep[e.g.,][]{Blackman1996,PortegiesZwart1999,Putten2003,Reynoso2006,Lei2007}. \cite{Liu2010} investigated the jet precession driven by an NDAF around a spinning BH: the outer disk forces the BH to precess while the inner disk is aligned with the BH spin axis, so that a precessing jet naturally arises from the central engine of a GRB. The different lightcurve forms and spectral evolutions of GRBs may both be attributed to different viewing effects. This jet precession model was successfully used to explain the variability of the giant X-ray bump in GRB 121027A \citep{Hou2014a} and the time evolution of the flares in GRB 130925A \citep{Hou2014b}. Subsequently, \cite{Sun2012} studied the GWs from these precession systems, and found that they could be detected by DECIGO/BBO at $\sim$ 10 Hz if GRBs occur in the Local Group ($\lesssim$ 1 Mpc).
\citet{Epstein1978} derived formulae for the GWs released from a small source due to the anisotropic axisymmetric emission of neutrinos. He found that GWs may be generated by the anisotropic emission of neutrinos from supernovae (SNe), with amplitudes and energies comparable to those from the collapsed SN cores. Then, \cite{Suwa2009} investigated this kind of neutrino-induced GWs from a BH hyperaccretion system, and concluded that they could be detected at $\sim$ 10 Mpc by DECIGO/BBO. Unfortunately, they simplified NDAFs as thin disks or oblate spheroids and ignored the dominant contributions from the dynamical characteristics of the NDAF.
In the GRB framework, regardless of the central engine type, there exists another type of GW sources, i.e., the hidden jets. \citet{Sago2004} analyzed the GWs from the internal shock in the GRB jets. Since the typical frequency is $\sim$ 0.1 Hz and the GW amplitude is $\sim 10^{-22}$, DECIGO/BBO might be able to detect such an event when the GRBs occur in the Local Group. The GWs from the decelerating phases of the GRB jets were studied \citep{Akiba2013} as well, and their typical frequency is $\sim 10-1000$ Hz. However, the characteristic amplitude is too low to be detected. The GWs radiated from accelerating uniform or structured jets of GRBs were also presented \citep{Birnholtz2013}. In addition, \citet{Hiramatsu2005} investigated the GWs with ``memory effect'' from the neutrino-driven jets in GRBs. They concluded that the GWs could be detected by ultimate-DECIGO in low frequency of $\sim 0.1-1$ Hz for LGRB cases.
Overall, there are various origins of GWs related to GRBs, including BH-NS or NS-NS mergers, collapses of massive stars, SNe, GRB central engines, and GRB jets \citep[see reviews by][]{Cutler2002,Postnov2014,Liu2017}. By studying these GW sources and their electromagnetic counterparts, one may deeply reveal the nature of GRBs.
Up to now, several GW events from merging massive BHs have been discovered by the advanced LIGO \citep[aLIGO,][]{Abbott2016a,Abbott2016b,Abbott2017}. In addition, the \emph{Fermi}/GBM recorded a suspected SGRB 0.4 s after GW 150914 \citep{Connaughton2016}, which has been theoretically modeled in many works \citep[e.g.,][]{Li2016,Loeb2016,Liu2016a,Zhang2016,Perna2016,Woosley2016,Zhang2016a,Janiuk2017}. Essentially, no more than two scenarios were proposed, i.e., BH hyperaccretion and charged BHs. The possible GW-GRB association and its theoretical explanations are still quite controversial. The investigation of GW sources and their electromagnetic counterparts related to compact objects is nowadays one of the most popular astrophysical topics. For the current GW detectors, the GWs from compact binary mergers are the main targets. Here we consider other potential candidates for these detectors, namely GWs from GRB central engines operating after merger events. For this purpose, in the present paper, the GWs from NDAFs and other candidates of GRB central engines are further revisited and compared.
The paper is organized as follows. In Section 2 we describe the numerical methods and main results of the GWs from NDAFs. In Section 3 we present the comparisons of the GW detectabilities of three central engine models with aLIGO and the Einstein Telescope (ET). A summary is given in Section 4.
\section{GWs from NDAFs}
\subsection{Model}
\citet{Xue2013} computed one-dimensional steady global solutions of NDAFs in the Kerr metric \citep[e.g.,][]{Kato2008}, incorporating detailed neutrino physics, chemical potential equilibrium, photodisintegration, neutrino trapping, nuclear statistical equilibrium, etc. Based on 16 solutions with different accretion rates and BH spins, time-independent analytical formulae were fitted for the neutrino luminosity $\bar{L}_{\nu}$ and the neutrino annihilation luminosity $\bar{L}_{\nu \bar{\nu}}$:
\begin{eqnarray} \label{eq:lv}
\log \bar{L}_{\nu} ~\textrm{(erg s} ^{-1} \textrm{)} \approx 52.5+1.17a_* +1.17\log \dot{m},
\end{eqnarray}
\begin{eqnarray} \label{eq:lvv}
\log \bar{L}_{\nu \bar{\nu}}~({\rm{erg\ s^{-1}}})\approx 49.5+2.45a_*+2.17\log\dot{m},
\end{eqnarray}
where $a_*$ ($0\leq a_* \leq 1$) and $\dot{m}$ are the mean dimensionless BH spin parameter and the dimensionless accretion rate, respectively. Here $\dot{m}=\dot{M}/(M_{\odot}~{\rm s^{-1}})$, with $\dot{M}$ the mass accretion rate. These formulae are applicable for accretion rates in the range $0.01 < \dot{m} < 10$.
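For reference, these fitting formulae translate directly into code; a minimal
Python sketch (the coefficients are those of Equations (\ref{eq:lv}) and
(\ref{eq:lvv})):
\begin{verbatim}
from math import log10

def log_L_nu(a_star, mdot):
    # log10 of the NDAF neutrino luminosity (erg/s), Eq. (1).
    return 52.5 + 1.17 * a_star + 1.17 * log10(mdot)

def log_L_nunu(a_star, mdot):
    # log10 of the neutrino annihilation luminosity (erg/s), Eq. (2).
    return 49.5 + 2.45 * a_star + 2.17 * log10(mdot)

# Typical LGRB (mdot = 0.1) and SGRB (mdot = 1) cases with a_star = 0.9:
print(log_L_nu(0.9, 0.1), log_L_nunu(0.9, 0.1))  # ~52.4, ~49.5
print(log_L_nu(0.9, 1.0), log_L_nunu(0.9, 1.0))  # ~53.6, ~51.7
\end{verbatim}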
Actually, the BH spin and accretion rate, and even the structure and composition of the disk, evolve violently on the activity timescale of the central engine, corresponding to the complicated GRB variability. The time evolution of the neutrino luminosity $L_{\nu} (t)$ can be modeled as \citep[e.g.,][]{Suwa2009}
\begin{eqnarray} \label{eq:lvtsingle}
L_{\nu} (t)= \bar{L}_{\nu} \Theta (t) \Theta (T-t),
\end{eqnarray}
where $T$ is the activity duration of the GRB central engine and $\Theta$ is the Heaviside step function. This is for GRBs with single pulse. However, the observed complex variability of GRBs may be related directly to the underlying accretion behavior, and the intermittent time variability of the central engine should be taken into account, therefore it may be more realistic to consider the case of multiple pulses, i.e.,
\begin{eqnarray} \label{eq:lvtmultiple}
L_{\nu}(t)=\sum_{i=1}^{N} \frac{\bar{L}_{\nu}T}{N\delta t} \Theta (t-\frac{i}{N} T) \Theta (\frac{i}{N} T+\delta t-t),
\end{eqnarray}
where $N$ is the number of the pulses and $\delta t$ is the duration of one pulse. $N\delta t$ should be shorter than $T$ unless $N=1$ for the single pulse.
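As a concrete illustration, the multi-pulse lightcurve above can be constructed
numerically; the following minimal Python sketch uses the LGRB-like parameters
adopted later in Fig.\ \ref{fig2}:
\begin{verbatim}
import numpy as np

def L_nu_multi(t, Lbar, T, N, dt):
    # N top-hat pulses of width dt, rescaled so that the
    # time-integrated energy equals Lbar*T, as in Eq. (4).
    L = np.zeros_like(t)
    for i in range(1, N + 1):
        t_on = i * T / N
        L[(t >= t_on) & (t <= t_on + dt)] = Lbar * T / (N * dt)
    return L

t = np.linspace(0.0, 60.0, 600001)              # time grid (s)
L = L_nu_multi(t, 1e52, 50.0, 100, 0.05)        # T=50 s, N=100, dt=0.05 s
print(L.sum() * (t[1] - t[0]) / (1e52 * 50.0))  # ~1: energy equals Lbar*T
\end{verbatim}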
After an inverse Fourier transform, $L_{\nu} (t)$ can be written as
\begin{eqnarray} \label{eq:lvtsinglefourier}
L_{\nu} (t)= \int_{-\infty}^{\infty} \tilde{L}_{\nu} (f)e^{-2\pi ift}df,
\end{eqnarray}
where $f$ is the frequency.
We consider that the GRB variability originates from the time-variant and anisotropic neutrino emission of NDAFs, which results from their characteristic structure and dynamical variations; hence the GW emission from the hyperaccretion systems is related to the neutrino luminosity, and the typical GW frequencies correspond to the GRB variability.
The local energy flux of GWs can be written as \citep[e.g.,][]{Suwa2009}
\begin{eqnarray}
\frac{dE_{\rm GW}}{D^2 d\Omega dt}= \frac{c^3}{16 \pi G} | \frac{d}{dt} h_+(t)|^2,
\end{eqnarray}
where $D$ means the distance of a GRB, $\Omega$ is the solid angle, and the non-vanishing GW amplitude of NDAFs $h_+(t)$ can be estimated by \citep[for details, see e.g.,][]{Muller1997}
\begin{eqnarray}
h_+(t)=\frac{2 G}{3 D c^4} \int^{t-D/c}_{-\infty} L_\nu (t')d t'.
\end{eqnarray}
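For orientation, the expression above implies a late-time (``memory'') amplitude
$h_+ (t \gg T) = 2G\bar{L}_{\nu}T/(3Dc^4)$ for the single top-hat pulse of
Eq. (\ref{eq:lvtsingle}); a minimal numerical sketch with illustrative SGRB-like numbers:
\begin{verbatim}
G, c = 6.674e-8, 2.998e10           # cgs units
Mpc = 3.086e24                      # cm
Lbar, T, D = 1e52, 0.5, 1.0 * Mpc   # erg/s, s, source at 1 Mpc

h_plus = 2.0 * G * Lbar * T / (3.0 * D * c**4)  # late-time amplitude
print(h_plus)                       # ~9e-23
\end{verbatim}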
The total GW energy can be obtained as
\begin{eqnarray}
E_{\rm GW} = \frac{\beta G}{9 c^5} \int^\infty_{-\infty} L_\nu^2 (t)d t,
\end{eqnarray}
where $\beta \sim 0.47039$. By combining with Equation (\ref{eq:lvtsinglefourier}), one can deduce the GW energy spectrum as
\begin{eqnarray} \label{eq:degwf}
\frac{dE_{\rm GW}(f)}{df} =\frac{2 \beta G}{9 c^5} |\tilde{L}_{\nu} (f)|^2.
\end{eqnarray}
The characteristic GW strain can be expressed as \citep{Flanagan1998}
\begin{eqnarray} \label{eq:hcf}
h_c (f)=\sqrt{\frac{2}{\pi ^2} \frac{G}{c^3} \frac{1}{D^2} \frac{dE_{\rm GW}(f)}{df}}.
\end{eqnarray}
From the above equations, we can obtain the relation between $h_c (f)$ and $f$ for GRBs with a single pulse or multiple pulses.
Moreover, the signal-to-noise ratios (SNRs) obtained from matched filtering for GW detectors can be calculated. Considering the relative orientation of a source and detector, the optimal SNR is
\begin{eqnarray}
{\rm SNR}^2=\int^\infty_0 \frac{h_c^2(f)}{h_n^2(f)} \frac{df}{f},
\end{eqnarray}
where $h_n(f) = [5f S_n(f)]^{1/2}$ is the noise amplitude and $S_n(f)$ is the power spectral density of the strain noise in the detector at frequency $f$.
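To make the computational pipeline explicit, the following self-contained Python
sketch evaluates Equations (\ref{eq:degwf}) and (\ref{eq:hcf}) for a single top-hat
pulse, approximating the Fourier transform of $L_\nu(t)$ by a discrete FFT (all
numbers are illustrative; cgs units):
\begin{verbatim}
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs units
beta = 0.47039
D = 3.086e24                       # source distance: 1 Mpc in cm

Lbar, T = 1e52, 0.5                # SGRB-like pulse: erg/s, s
dts = 1e-4                         # sampling step (s)
t = np.arange(0.0, 4.0 * T, dts)
L = np.where(t < T, Lbar, 0.0)     # single top-hat pulse, Eq. (3)

L_f = np.fft.rfft(L) * dts         # discrete estimate of L_nu(f)
f = np.fft.rfftfreq(L.size, dts)

dE_df = 2.0 * beta * G / (9.0 * c**5) * np.abs(L_f)**2   # Eq. (9)
h_c = np.sqrt(2.0 / np.pi**2 * (G / c**3) * dE_df) / D   # Eq. (10)
\end{verbatim}
The resulting $h_c(f)$ can then be compared directly with a detector noise
amplitude $h_n(f)$ to evaluate the SNR integral above.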
\subsection{Results}
\begin{figure}
\centering
\includegraphics[scale=0.4]{f1a.eps}
\includegraphics[scale=0.4]{f1b.eps}
\includegraphics[scale=0.4]{f1c.eps}
\caption{The strains of GWs from NDAFs as the central engine of GRBs with single pulse. (a) The green, magenta, red, blue, and black lines indicate the GW strains of NDAFs with 10 kpc, 100 kpc, 1 Mpc, 10 Mpc, and 100 Mpc, respectively, for $a_* =0.9$, $\dot{m}=1$, and $T=0.5$ s. (b) Similar as (a) except $\dot{m}=0.1$, and $T=50$ s. (c) The red, blue, black, and magenta lines indicate the GW strains of NDAFs with 1 Mpc for ($a_*$, $\dot{m}$, $T$)= (0.9, 1, 0.5 s), (0.9, 0.1, 50 s), (0.5, 1, 0.5 s), (0.5, 0.1, 50 s). In all three figures, the gray lines display the sensitivity curves (the noise amplitudes $h_n$) of eLISA, DECIGO/BBO, ultimate-DECIGO, aLIGO, and ET.}
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{f2a.eps}
\includegraphics[scale=0.4]{f2b.eps}
\includegraphics[scale=0.4]{f2c.eps}
\caption{The strains of GWs from NDAFs as the central engine of GRBs with multiple pulses. (a) The green, magenta, red, blue, and black lines indicate the GW strains of NDAFs with 10 kpc, 100 kpc, 1 Mpc, 10 Mpc, and 100 Mpc, respectively, for $a_* =0.9$, $\dot{m}=1$, $T=0.5$ s, $N=10$, and $\delta t=0.005$ s. (b) Similar as (a) except $\dot{m}=0.1$, $T=50$ s, $N=100$, and $\delta t=0.05$ s. (c) The red, blue, black, and magenta lines indicate the GW strains of NDAFs with 1 Mpc for ($a_*$, $\dot{m}$, $T$, $N$, $\delta t$)= (0.9, 1, 0.5 s, 10, 0.005 s), (0.9, 0.1, 50 s, 100, 0.05 s), (0.5, 1, 0.5 s, 10, 0.005 s), (0.5, 0.1, 50 s, 100, 0.05 s). In all three figures, the gray lines display the sensitivity curves (the noise amplitudes $h_n$) of eLISA, DECIGO/BBO, ultimate-DECIGO, aLIGO, and ET.}
\label{fig2}
\end{figure}
Before the central BHs or magnetars are born to power GRBs, the compact binary mergers and collapsars are also important GW sources \citep[e.g.,][]{Cutler2002,Postnov2014,Liu2017}. We here restrict ourselves only on the GWs from the GRB central engines.
In the NDAF model, we adopt $(\dot{m}, T)$ = (0.1, 50 s) and (1, 0.5 s) as the typical accretion rates and durations of LGRBs and SGRBs, respectively. Our main interest is in the effects of the distance of the GRB from the Earth $D$, the BH spin parameter $a_*$, and the mean accretion rate $\dot{m}$ on the GW strains.
Figures~\ref{fig1} and ~\ref{fig2} show the strains of GWs from NDAFs as the central engine of GRBs with a single pulse and multiple pulses, respectively. It is obvious that the GW strains correlate positively with both the BH spin parameters and accretion rates [as seen from Equation (\ref{eq:lv})], and negatively with the distances of the sources [as seen from Equation (\ref{eq:hcf}), i.e., $h_c$ is inversely proportional to $D$]. One can roughly read the SNR from these figures. Furthermore, by comparing Figures \ref{fig1} and \ref{fig2}, we notice that for the same $T$, the spectra of GRBs with a single pulse and with multiple pulses are very different in the high-frequency range but similar in the low-frequency range. This is because, in the case of multiple pulses, the individual pulses occur on short timescales, while the long-term behavior is independent of the particulars of the burst.
In all three figures, the gray lines display the sensitivity curves (the noise amplitudes $h_n$) of eLISA, DECIGO/BBO, ultimate-DECIGO, aLIGO, and ET, respectively. For GRBs with either a single pulse or multiple pulses, the GWs from NDAFs can be detected at a distance of $\sim 100$ kpc by aLIGO and $\sim 1$ Mpc by ET in the detectable frequency range of $\sim 10-100$ Hz. They can be detected by ultimate-DECIGO at $\sim$ 100 Mpc in $\sim 0.1-10$ Hz for the LGRB cases. Additionally, in the jet precession model \citep{Sun2012}, the typical GW frequency is determined by the precession period, which corresponds to the GRB variability. For the neutrino-induced GWs from NDAFs, the GW frequency also depends on the GRB variability, which corresponds to $\sim 10-100$ Hz for aLIGO/ET and $\sim 0.1-10$ Hz for ultimate-DECIGO in the LGRB cases.
\section{Comparisons of GWs from different central engine models}
NDAFs, the BZ mechanism, and magnetars are the three main candidates for the central engine of GRBs. They have different capabilities of powering GRBs in the scenarios of collapsars or compact-object mergers, different dynamics for describing GRB morphology, and different evolution, components, and products for the progenitors and environments of GRBs \citep[see reviews by][]{Liu2017}. They certainly also differ in their GW radiation.
\subsection{BZ mechanism}
For a BZ jet driven from a BH hyperaccretion disk, its luminosity can be written as \citep{Blandford1977,Lee2000a,Lee2000b}
\begin{eqnarray}
L_{\rm BZ}=1.7\times10^{20}a_*^{2}m^{2}B_{\rm in, G}^{2}F(a_*)~{\rm erg~s^{-1}},
\end{eqnarray}
where $B_{\rm in, G}=B_{\rm in}/1 \rm G$ is the dimensionless magnetic strength at the inner boundary of the disk. $m=M/M_{\odot}$, with $M$ the BH mass.
\begin{eqnarray}
F(a_*)=[(1+q^{2})/q^{2}][(q+1/q)\arctan(q)-1],
\end{eqnarray}
where $q=a_*/(1+\sqrt{1-a_*^{2}})$. According to the balance between the pressure of the disk and the magnetic pressure on the BH horizon, the BZ jet power can be derived as
\begin{eqnarray} \label{eq:lbz}
L_{\rm BZ}=9.3\times10^{53}a_*^{2}\dot{m} F(a_*)(1+\sqrt{1-a_*^{2}})^{-2}~{\rm erg~s^{-1}}.
\end{eqnarray}
Comparing Equation (\ref{eq:lvv}) with (\ref{eq:lbz}), one can see that for a fixed BH spin and accretion rate, $L_{\rm BZ}$ is about two orders of magnitude larger than $\bar{L}_{\nu \bar{\nu}}$. On the other hand, if the two mechanisms are assumed to have the same conversion efficiency to power a certain GRB, i.e., $L_{\rm BZ} = \bar{L}_{\nu \bar{\nu}}$, then the BH spin and accretion rate required in the BZ case are one or two orders of magnitude lower than those in NDAFs, respectively. This means that $\bar{L}_{\nu}$ should be about two orders of magnitude larger than $L_{\rm BZ}$, as shown in Equations (\ref{eq:lv}) and (\ref{eq:lvv}).
Since the BZ mechanism and the neutrino annihilation process in NDAFs both depend on BH hyperaccretion systems and their dynamical characteristics, the physical mechanisms of the GWs are similar. Then, from Equations (\ref{eq:degwf}) and (\ref{eq:hcf}), we know that the GW strain from the BZ mechanism is lower than that from NDAFs. That is, for a certain GRB, as shown in the right panel of Figure \ref{f3}, the GW detectable distance of the BZ mechanism is about two orders of magnitude lower than that of NDAFs. Furthermore, as mentioned above, the GW frequency in the BH hyperaccretion framework, no matter which mechanism, is determined by the typical GRB variability, so the GW detectable frequency of the BZ mechanism by aLIGO and ET is $\sim 10-100$ Hz.
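As a numerical cross-check of the two-orders-of-magnitude statement, the two powers
can be compared directly; a minimal Python sketch for the same BH spin and accretion
rate:
\begin{verbatim}
from math import atan, log10, sqrt

def log_L_BZ(a_star, mdot):
    # log10 of the BZ jet power (erg/s), Eq. (14).
    q = a_star / (1.0 + sqrt(1.0 - a_star**2))
    F = ((1.0 + q**2) / q**2) * ((q + 1.0 / q) * atan(q) - 1.0)
    return log10(9.3e53 * a_star**2 * mdot * F
                 / (1.0 + sqrt(1.0 - a_star**2))**2)

def log_L_nunu(a_star, mdot):
    # log10 of the annihilation luminosity (erg/s), Eq. (2).
    return 49.5 + 2.45 * a_star + 2.17 * log10(mdot)

print(log_L_BZ(0.9, 1.0) - log_L_nunu(0.9, 1.0))  # ~1.8 dex
\end{verbatim}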
\subsection{Magnetars}
\begin{figure*}
\centering
\includegraphics[scale=0.45]{f3a.eps}
\includegraphics[scale=0.45]{f3b.eps}
\caption{Left panel: Schematic picture of the applications of three GRB central engine models to the isotropic energy of three types of GRBs (adapted from Figure 20 in \citet{Liu2017}). Right panel: Schematic picture of the GW detectable distances of three models by aLIGO (blue lines) and ET (red lines).}
\label{f3}
\end{figure*}
Magnetars have been widely studied as the central engine of GRBs \citep[e.g.,][]{Duncan1992,Usov1992,Dai1998,Zhang2001,Dai2006,Metzger2011}, which are described as rapidly spinning (the typical rotation period $P \sim 1$ ms), supramassive ($\sim 2.6-2.8 ~M_\odot$), and strongly magnetized (the dipole magnetic field strength $B \sim 10^{15} ~\rm G$) NSs. The spin-down of magnetars from NS-NS merger events has been used to explain some GRBs with an internal X-ray plateau \citep[e.g.,][]{Rowlinson2010,Yu2010,Lu2015,Gao2016}. Recently
the equation-of-state models of quark stars
have been suggested to be preferred over those of NSs for such bursts with plateaus \citep[e.g.,][]{Lia2016,Li2017}. LGRBs and even super-luminous SNe have been investigated in the scenario of a magnetar born in a collapsar \citep[e.g.,][]{Metzger2008,Metzger2011,Metzger2015}. Further studies have also included ultra-LGRBs (ULGRBs) in the magnetar scenario \citep[e.g.,][]{Greiner2015,Ioka2016}. That is, magnetars might power all types of GRBs and reconcile their diverse behaviors.
We mention here that the total rotational energy of a magnetar can be estimated as \citep[e.g.,][]{Lu2015}
\begin{eqnarray}
E_{\rm rot}\approx2\times 10^{52} (\frac{M_{\rm NS}}{1.4~M_\odot})(\frac{R_{\rm NS}}{10^6 ~\rm cm})^2 (\frac{P}{1~\rm ms})^{-2}~\rm erg,
\end{eqnarray}
where $M_{\rm NS}$ and $R_{\rm NS}$ are the mass and radius of the magnetar, respectively.
Concerning the accretion rate and timescale (i.e., the accreted mass), BH hyperaccretion systems are more demanding as GRB progenitors than magnetars. Furthermore, the BZ mechanism can power a GRB more easily than NDAFs, as demonstrated in Section 3.1. These points are summarized in the left panel of Figure \ref{f3} (adapted from Figure 20 in \citet{Liu2017}): First, almost all SGRBs might be described by the NDAF model, with reasonable disk masses derived from the remnants of compact-object mergers \citep{Liu2015c}; Second, only about half of LGRBs might satisfy the NDAF model, while for the other half it might be necessary to introduce massive disks, extreme Kerr BHs, and high conversion efficiencies for neutrino annihilation \citep{Song2016}; Third, for LGRBs and ULGRBs, the BZ mechanism is especially more efficient than NDAFs. Actually, the deviation between the BZ mechanism and NDAFs is more significant if X-ray flares are included \footnote{The total energy of GRBs generally includes the isotropic radiated energy in the prompt emission phase and the isotropic kinetic energy of the jet powering the long-lasting afterglow. Some flares may originate from the restart of the GRB central engine.} \citep[e.g.,][]{Liu2015b,Luo2013,Mu2016}; Fourth, the millisecond magnetar model could cover almost all types of GRBs, as said before.
\citet{Cutler2002} reviewed the estimates of the event rates and GW strengths of well-known GW sources, including NSs. The quadrupole deformation of a NS in the spin-down phase, caused by the rapid spin or magnetic field, can be described by the ellipticity $\epsilon$, which leads to GW radiation. Formerly, the ellipticity $\epsilon$ of NSs was usually taken to be as small as $\sim 10^{-5}-10^{-6}$, which cannot result in detectable GWs \citep{Andersson2003}. Recently, the GW radiation of magnetars has been carefully revisited \citep{Corsi2009,Fan2013a,Fan2013b,Dai2016,Gao2017}, and a larger $\epsilon \sim 0.005$ might be reachable. On the other hand, the initial rotation period of a newborn magnetar is expected to be $\sim$ 1 ms, whether it originates from NS-NS mergers or collapsars. Its rotational energy might be larger than the energy requirements of GRBs and kilonovae \citep[or mergernovae, see e.g.,][]{Yu2013,Metzger2017}. The remaining energy has to be carried away by non-electromagnetic emission, i.e., GWs \citep{Fan2013b}. Consequently, the energy of the GW radiation from magnetars approaches the typical GRB energy, which should be much higher than that from the BH hyperaccretion systems. In the jet precession model of magnetars \citep{Sun2012}, the quadrupole power of GWs is six orders of magnitude lower than the GRB luminosity at $f \sim 10$ Hz, as in the cases of NDAFs and the BZ mechanism.
Following \citet{Fan2013a}, the GW radiation from millisecond magnetars has a frequency of $f \sim$ 2000 Hz, corresponding to $P \sim 1$ ms \citep[$f=2/P$, e.g.,][]{Zimmermann1979,Shapiro1983}, and the characteristic GW amplitude at this frequency can be estimated as
\begin{eqnarray}
h_c \approx 5.1 \times 10^{-22} (\frac{D}{100~\rm Mpc})^{-1}(\frac{I}{10^{45.3} ~{\rm g ~cm^2}})^{1/2}(\frac{P}{1~\rm ms})^{-1},
\end{eqnarray}
where $I$ is the moment of inertia of the NS. From this equation, we can estimate the GW detectable distance of a typical GRB originating from magnetars: it is $\sim$ 25 Mpc for aLIGO and $\sim$ 500 Mpc for ET, which is consistent with the estimations in \citet{Gao2017}. As our main result, the GW detectable distances of the three central engine models are displayed in the right panel of Figure \ref{f3} for a typical GRB. From the distance of the source, the characteristic frequency, and the GW amplitude, one can determine whether an NDAF, a BZ jet, or a magnetar resides in the GRB center.
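To make these numbers concrete, the two scalings above can be evaluated with a few lines of code. The following Python sketch is an illustration only: the detector thresholds $h_{\rm det}$ used below are order-of-magnitude assumptions for aLIGO and ET at $f \sim 2000$ Hz, not published sensitivity curves, and the function names are ours.
\begin{verbatim}
# Minimal numerical check of the two equations above.
def E_rot(M_ns=1.4, R_ns=1e6, P_ms=1.0):
    """Magnetar rotational energy in erg; M_ns in M_sun, R_ns in cm."""
    return 2e52 * (M_ns / 1.4) * (R_ns / 1e6)**2 * P_ms**(-2)

def h_c(D_Mpc, I=10**45.3, P_ms=1.0):
    """Characteristic GW amplitude at f = 2/P; D in Mpc, I in g cm^2."""
    return 5.1e-22 * (100.0 / D_Mpc) * (I / 10**45.3)**0.5 / P_ms

def D_max(h_det, I=10**45.3, P_ms=1.0):
    """Distance (Mpc) at which h_c falls to an assumed threshold h_det."""
    return 100.0 * (5.1e-22 / h_det) * (I / 10**45.3)**0.5 / P_ms

print(E_rot())             # ~2e52 erg for a 1 ms magnetar
print(D_max(h_det=2e-21))  # ~25 Mpc  (assumed aLIGO-like threshold)
print(D_max(h_det=1e-22))  # ~510 Mpc (assumed ET-like threshold)
\end{verbatim}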
However, the rate of GRBs occurring within these GW-detectable distances is apparently low. \citet{Wanderman2015} reported that the ``local'' SGRB detection rate is about $4.1^{+2.3}_{-1.9}$ Gpc$^{-3}$ yr$^{-1}$. Meanwhile, the SGRB rate is much lower than the LGRB rate \citep[e.g.,][]{Fryer1999,Podsiadlowski2004,Virgili2013}. \citet{Liu2016c} calculated the LGRB rates, including off-axis LGRBs, of the major galaxies in the Local Group (shown in their Table I), which amount to about 3 per century. This can be considered roughly as the upper limit of the GW detection rate in the Local Group.
\section{Summary}
In the lifetime of GRBs, the progenitors (compact binary mergers and collapsars), the central engines (BH hyperaccretion systems and millisecond magnetars), and the jets (internal and external shocks) are all potentially detectable GW sources, and can be detected by current or future detectors when occurring in nearby galaxies. Furthermore, for a given stage, the differences in the strength, the typical frequency, and the detectable distance of the GWs can be used to identify the underlying mechanisms.
Also, in this paper we proposed a new way, besides the MeV neutrino emission \citep[e.g.,][]{Liu2016c}, to distinguish different GRB central engine models, i.e., NDAFs, the BZ mechanism, and millisecond magnetars, through their GWs. For a typical GRB, the detectable distances of the three models with aLIGO/ET are roughly 100 kpc/1 Mpc at $\sim 10-100$ Hz, 1 kpc/10 kpc at $\sim 10-100$ Hz, and 25 Mpc/500 Mpc at $\sim 2000$ Hz, respectively.
It is possible to detect the GWs from NS-NS mergers with the current detectability. First, a bright GRB, an X-ray transient, or an optical kilonova is likely to be observed \citep[e.g.,][]{Liu2017,Metzger2017}. Second, after the merger event, a possible newborn millisecond magnetar will release strong GWs, which may be detected by aLIGO in the high-frequency range; if not, it is a BH hyperaccretion system that likely exists in the GRB center.
\acknowledgments
We thank the anonymous referee for very useful suggestions and comments. We also thank Mou-Yuan Sun and Tuan Yi for helpful discussion. This work was supported by the National Basic Research Program of China (973 Program) under grant 2014CB845800 and the National Natural Science Foundation of China under grants 11473022 and U1431107.
|
1,941,325,220,447 | arxiv | \section{Introduction}
Inspired by the working patterns of human neurons, spiking neural networks (SNNs) are considered the third generation of artificial neural networks\cite{maass1997networks}.
With the development of SNNs, a wide range of applications has been demonstrated, including image classification \cite{iakymchuk2015simplified}\cite{datta2021hyper}, video processing \cite{hinton2012improving} \cite{hu2016dvs}, posture and gesture recognition\cite{zhao2014feedforward}\cite{xu2020boosting}, and voice recognition\cite{zhang2019spike} \cite{jin2018hybrid}.
Compared with traditional artificial neural networks (ANNs), which consist of static and continuous-valued neuron models, SNNs have a unique event-driven computation characteristic that can respond to events in a nearly latency-free and power-saving way\cite{pei2019towards}\cite{roy2019towards}, which makes them naturally more suitable for processing event streams.
To take full advantage of the event-driven nature of spiking neural networks, neuromorphic sensors, such as the Dynamic Vision Sensor (DVS)\cite{DVS128}\cite{CIFAR10DVS} and the Asynchronous Time-based Image Sensor (ATIS)\cite{N-MNIST}, are usually used to transform datasets into neuromorphic datasets by encoding the time, location, and polarity of brightness changes.
However, the event streams recorded by neuromorphic sensor-based cameras are usually redundant in the temporal dimension, which is caused by the high temporal resolution and irregular dynamic scene changes\cite{yao2021temporal}.
This characteristic makes event streams almost impossible to process directly with deep spiking neural networks, which are based on dense computation.
To make neuromorphic datasets suitable for deep spiking neural networks, various pre-processing methods have been proposed.
The event-to-frame integrating method is widely used for pre-processing neuromorphic datasets. In \cite{SpikingJelly}, events are split into $N$ slices with nearly the same number of events in each slice and integrated into frames. The event-to-frame integrating method can convert the event streams into tens of frames at most; otherwise, the accuracy of the SNNs drops significantly.
In \cite{xu2020boosting}, a temporal compression method is proposed which can reduce the length of event streams by shrinking the duration of the input event trains.
However, this method is only applied to already-trained SNNs, which limits its potential.
In \cite{xu2021direct}, the normalized pixels of static pictures are taken directly as the input current and multi-threshold neuron models are applied, which enables the SNNs to obtain good performance in only two time steps.
This method is suitable for static image classification, but it is difficult to directly apply it to event stream classification.
In this paper, we propose a spatio-temporal compression method to aggregate event streams into a few time steps of synaptic current so as to reduce the training and inference latency.
To keep the accuracy of SNNs under high compression ratios, we also propose a synaptic convolutional block in which a synaptic layer is applied to balance the dramatic change between adjacent time steps.
To increase the information processing capability of the neuron models, parametric multi-threshold Leaky Integrate-and-Fire models are introduced in our SNNs.
We evaluate our method on neuromorphic datasets, such as N-MNIST\cite{N-MNIST}, CIFAR10-DVS\cite{CIFAR10DVS} and DVS128-gesture\cite{DVS128}, and experimental results show that our method outperforms the state-of-the-art accuracy on nearly all tested datasets using fewer time steps.
\section{Approach}
In this section, we first introduce the proposed spatio-temporal compression method and the synaptic convolutional block in Secs. \ref{sec_compression} and \ref{sec_Synapticlayer}.
Then the multi-threshold Leaky Integrate-and-Fire models with learnable membrane constants are introduced in Sec. \ref{sec_MLIF}.
Finally, we describe the proposed network structure of our SNNs.
\subsection{Spatio-temporal compression method\label{sec_compression}}
The spatio-temporal compression block, shown in Fig. \ref{fig_ST_Compression}, is used to pre-process neuromorphic datasets, such as N-MNIST\cite{N-MNIST}, DVS128 Gesture\cite{DVS128}, CIFAR10-DVS\cite{CIFAR10DVS}, etc.
Neuromorphic data are usually given in the form of events $E(x_{i}, y_{i}, t_{i}, p_{i})$ that record each event's coordinates, time and polarity.
We split the event time range $T_{event}$ into $T$ slices with nearly the same time interval in each slice and integrate these events.
Note that $T$ is also the number of simulation time steps.
As Fig.\ref{fig_ST_Compression} shows, in each slice, spiking events are evenly divided into $N_{r}$ parts in the temporal dimension.
$N_{r}$ is the resolution, which the user can set according to their requirements.
The spiking events in the same part are first integrated and then multiplied by weights, as shown in formulation \eqref{eq1}.
Since the spiking events have two polarities, a two-channel frame is used to represent the compressed neuromorphic data.
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{ST_Compression.pdf}}
\caption{Spatio-temporal compression block}
\label{fig_ST_Compression}
\end{figure}
\begin{equation}
\begin{split}
&j_{l}=floor(\frac{max(T_{event})}{T})\cdot j\\
&j_{u}=\left \{
\begin{array}{cl}
floor(\frac{max(T_{event})}{T})\cdot (j+1), & j<T-1 \\
max(T_{event}), & j=T-1
\end{array}
\right.\\
&F(j,p,x,y)=\sum\limits_{k=0}^{N_r-1}(2^k(\sum\limits_{i=j_{l}+floor(k\frac{j_u-j_l}{N_r})}^{j_{l}+floor((k+1)\frac{j_u-j_l}{N_r})}I_{E(x,y,t,p)}(x_{i}, y_{i}, t_{i}, p_{i})))
\end{split}
\label{eq1}
\end{equation}
where $j_l$ and $j_u$ are the lower and upper bounds of the $j$th slice, respectively,
$floor()$ is the function that rounds the elements to the nearest integers towards minus infinity, and $I_{E(x,y,t,p)}(x_{i}, y_{i}, t_{i}, p_{i})$ is an indicator function of the event, which equals 1 only when $(x,y,t,p) = (x_i,y_i,t_i,p_i)$.
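For concreteness, a minimal NumPy sketch of this compression is given below. The $128 \times 128$ sensor resolution, the event-array layout, and the function name are assumptions for illustration; the part boundaries follow the floor-based, inclusive summation limits of \eqref{eq1}.
\begin{verbatim}
import numpy as np

def compress(x, y, t, p, T=5, Nr=8, H=128, W=128):
    # Compress an event stream (integer arrays x, y, t, p with p in
    # {0, 1}) into T two-channel frames of shape (T, 2, H, W).
    F = np.zeros((T, 2, H, W), dtype=np.float32)
    t_max = int(t.max())
    step = t_max // T                       # floor(max(T_event) / T)
    for j in range(T):
        j_l = step * j
        j_u = step * (j + 1) if j < T - 1 else t_max
        for k in range(Nr):
            lo = j_l + int(np.floor(k * (j_u - j_l) / Nr))
            hi = j_l + int(np.floor((k + 1) * (j_u - j_l) / Nr))
            m = (t >= lo) & (t <= hi)
            # integrate the events of part k, weighted by 2^k
            np.add.at(F[j], (p[m], y[m], x[m]), 2.0 ** k)
    return F
\end{verbatim}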
\subsection{Synaptic convolutional block\label{sec_Synapticlayer}}
The synaptic convolutional block replaces the first convolution layer of the network; its synaptic layer is used to balance the dramatic change between adjacent time steps of the compressed neuromorphic data.
The synaptic convolutional block consists of a convolution layer, a synaptic layer and an optional average pooling layer, as shown in Fig. \ref{SynapticLayer}.
Whether an average pooling layer is present depends on the network structure.
\begin{figure}[t]
\centerline{\includegraphics[width=16cm]{SynapticLayer_3.pdf}}
\caption{Synaptic convolution block}
\label{SynapticLayer}
\end{figure}
The key component of the synaptic convolutional block is the synaptic layer.
In the synaptic layer, the first-order synaptic model is applied, which is shown below.
\begin{equation}
I_{syn}(t)=e^{- \frac{1}{\tau_{syn}}} I_{syn}(t-1) + I_{in} (t)
\label{eq2}
\end{equation}
where $I_{syn}(t)$ is the output current of the synapse at time $t$, $\tau_{syn}$ is the time constant of the synapse and $I_{in}(t)$ is the input current at time $t$. To facilitate calculation and simulation, we convert Eq. \eqref{eq2} to
\begin{equation}
I_{syn}[t_k]=(1-\frac{1}{\tau _{syn}}) I_{syn}[t_{k-1}] + I_{in} [t_k]
\label{eq3}
\end{equation}
where $I_{syn}[t_k]$ represents the output current of the synapse at time step $t_k$.
$\tau _{syn}$ controls how much of the synaptic current at time step $t_{k-1}$ is retained at time step $t_{k}$.
To make the synaptic layer adapt to the characteristics of the dataset, $\tau_{syn}$ is calculated based on
\begin{equation}
\tau _{syn}[t_k]=\frac{1}{1-\frac{C_{valid}[t_k]}{C_{total}}}
\label{eq3_1}
\end{equation}
where $C_{valid}[t_k]$ denotes the number of channels whose synaptic current is not zero at time step $t_k$ and
$C_{total}$ is the total number of channels.
As Eq. \eqref{eq3_1} shows, the higher the value of $C_{valid}[t_k]$, the higher the value of $(1-\frac{1}{\tau_{syn}})$, which means that more information from time step $t_{k-1}$ is retained at time step $t_k$.
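A minimal PyTorch sketch of such a synaptic layer is shown below. Here the fraction $C_{valid}/C_{total}$ is measured on the stored synaptic current from the previous step, which is one possible reading of Eq. \eqref{eq3_1}; the tensor shapes and class interface are illustrative assumptions.
\begin{verbatim}
import torch

class SynapticLayer(torch.nn.Module):
    # First-order synapse: I[t_k] = (1 - 1/tau) I[t_{k-1}] + I_in[t_k],
    # with (1 - 1/tau) = C_valid / C_total set adaptively per step.
    def __init__(self):
        super().__init__()
        self.i_syn = None

    def reset(self):
        self.i_syn = None               # call between input sequences

    def forward(self, i_in):            # i_in: (batch, C, H, W)
        if self.i_syn is None:
            self.i_syn = torch.zeros_like(i_in)
        nonzero = (self.i_syn.abs().sum(dim=(0, 2, 3)) > 0).float()
        decay = nonzero.mean()          # C_valid / C_total
        self.i_syn = decay * self.i_syn + i_in
        return self.i_syn
\end{verbatim}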
\subsection{Parametric Multi-threshold Leaky Integrate-and-Fire models(PMLIF)\label{sec_MLIF}}
It is known that the Leaky Integrate-and-Fire (LIF) model is one of the most widely applied models for describing neuronal dynamics in SNNs.
In this paper, we introduce the parametric multi-threshold Leaky Integrate-and-Fire model (PMLIF), whose membrane constants are learnable, to increase the information processing capability of the neuron models.
The neuronal membrane potential of neuron $i$ at time $t$ is
\begin{equation}
{\tau _m}\frac{{d{u_i}(t)}}{{dt}} = - {u_i}(t) + I(t) + {u_{reset}}(t)
\label{eq4}
\end{equation}
where $I(t)$ is the pre-synaptic input current at time $t$ and $\tau_m$ is the time constant of the membrane voltage. $u_{reset}(t)$ denotes the reset function, which reduces the membrane potential by a certain amount $V_{th}$ after the neuron $i$ fires.
The pre-synaptic input $I(t)$ is given by
\begin{equation}
I(t) = \sum\limits_{j = 1}^N {{\omega _{ij}}s_j(t)}
\label{eq5}
\end{equation}
where $s_j(t)$ is the output spike of pre-synaptic neuron $j$ at time $t$ and $\omega_{ij}$ is the presynaptic weight from neuron $j$ in the pre-synaptic layer to neuron $i$ in the post-synaptic layer.
Due to the discrete time steps in the simulation, we apply the fixed-step first-order Euler method to discretize \eqref{eq4} into
\begin{equation}
{u_i}[t] = (1 - \frac{1}{{{\tau _m}}}){u_i}[t - 1] + I[t] + {u_{reset}}[t]
\label{eq6}
\end{equation}
where $u_{reset}[t]$ is equal to $-s_i[t]V_{th}$ and $s_i[t]$ is the output spike of neuron $i$. In this paper, we extend the LIF model into a multi-threshold LIF model, in which the output of neuron $i$ can be expressed as
\begin{equation}
{s_i}[t] = \left\{ {\begin{array}{*{20}{r}}
{0,}&{{u_i}[t] < {V_{th}}}\\
{floor(\frac{{{u_i}[t]}}{{{V_{th}}}}),}&{{V_{th}} \le {u_i}[t] < {S_{max}}{V_{th}}}\\
{{S_{max}},}&{{u_i}[t] \ge {S_{max}}{V_{th}}}
\end{array}} \right.
\label{eq7}
\end{equation}
where $S_{max}$ is the upper limit of the output spikes and $floor()$ is the function that rounds the elements to the nearest integers towards minus infinity.
Since $\tau _m$ is a learnable parameter and its value should be positive, we use a sigmoid function of the synaptic time constant weight $\omega _{m_{i}}$ of neuron $i$ to replace $(1-\frac{1}{\tau _m})$, and \eqref{eq6} becomes
\begin{equation}
{u_i}[t] = S(\omega _{m_{i}}){u_i}[t - 1] + I[t] + {u_{reset}}[t]
\label{eq8}
\end{equation}
where $S(\omega _{m_{i}})$ is the sigmoid function of $\omega _{m_{i}}$.
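One simulation step of a PMLIF neuron can then be written compactly, as in the Python sketch below. This is a plain forward update (the non-differentiable $floor$ is handled by the surrogate gradient of the next subsection during training); the default threshold and spike cap are illustrative, and the function name is ours.
\begin{verbatim}
import torch

def pmlif_step(u, i_in, w_m, v_th=10.0, s_max=15):
    # One discrete PMLIF update: leak with learnable factor
    # sigmoid(w_m), integrate the input, emit an integer-valued
    # multi-threshold spike, then soft-reset by s * V_th.
    u = torch.sigmoid(w_m) * u + i_in
    s = torch.floor(u / v_th).clamp(0, s_max)   # multi-threshold output
    u = u - s * v_th                            # u_reset = -s * V_th
    return u, s
\end{verbatim}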
\subsection{Error Backpropagation of PMLIF}
To present the error backpropagation of PMLIF, we define the loss function $L[t_k]$, which measures the mean square error over the output neurons at time step $t_k$:
\begin{equation}
L[{t_k}] = \frac{1}{2}{\sum\limits_{i = 0}^{{N_o}} {({y_i}[{t_k}] - {s_i}[{t_k}])} ^2}
\label{eq9}
\end{equation}
where $N_o$ is the number of neurons in the output layer, and $y_i[t_k]$ and $s_i[t_k]$ denote the desired and actual firing events of neuron $i$ in the output layer at time step $t_k$.
By combining \eqref{eq4}-\eqref{eq8}, it can be seen that the loss function $L[t_k]$ is a function of the presynaptic weights $\omega_{i,j}$ and the synaptic time constant weights $\omega_m$.
In this paper, we use $W^{(l)}= [\omega_1^{(l)}; ... ; \omega_{N_l}^{(l)}]$ to represent the presynaptic weight matrix of layer $l$ in which $\omega_{N_l}^{(l)} = [\omega_{N_l,1}^{(l)}, \omega_{N_l,2}^{(l)}, ... ,\omega_{N_l,N_{l-1}}^{(l)}]$.
$W^{(l)}_m$ denotes the synaptic time constant weight matrix, which is equal to $[\omega _{m_{1}}^{(l)}, ... , \omega _{m_{N_{l}}}^{(l)}]$.
The aim of error backpropagation is to update the presynaptic weight $W^{(l)}$ and synaptic time constant weight $W_m^{(l)}$ using the error gradient $\frac{\partial L[t_k]}{\partial W^{(l)}}$ and $\frac{\partial L[t_k]}{\partial W_m^{(l)}}$.
Using the chain rule, the error gradient with respect to the presynaptic weight $W^{(l)}$ in layer $l$ is
\begin{equation}
\frac{{\partial L[{t_k}]}}{{\partial {W^{(l)}}}} = \frac{{\partial L[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W^{(l)}}}} = {\delta ^{(l)}}[{t_k}]\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W^{(l)}}}}
\label{eq10}
\end{equation}
where $\delta ^{(l)}[{t_k}]$ is the back-propagated error of layer $l$ at time $t_k$, which is equal to $\frac{\partial L[t_k]}{\partial u^{(l)}[t_k]}$.
\begin{equation}
\begin{split}
{\delta ^{(l)}}[{t_k}] &= \frac{{\partial {u^{(l + 1)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}\frac{{\partial L[{t_k}]}}{{\partial {u^{(l + 1)}}[{t_k}]}}\\
{\rm{ }}&=\frac{{\partial {u^{(l + 1)}}[{t_k}]}}{{\partial {s^{(l)}}[{t_k}]}}\frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}{\delta ^{(l + 1)}}[{t_k}]\\
{\rm{ }} &={({W^{(l + 1)}})^T} \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}{\delta ^{(l + 1)}}[{t_k}]
\end{split}
\label{eq11}
\end{equation}
The key to calculating $\delta ^{(l)}[t_k]$ is to obtain $ \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}$. Theoretically, $s^{(l)}[t]$ is a non-differentiable function, so we cannot obtain the value of $ \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}$ directly.
In this paper, we use an approximate curve from \cite{xu2021direct} as a surrogate for the original derivative function. The approximate curve $f_1$ is given below.
\begin{equation}
{f_1}(u) = \sum\limits_{i = 1}^{{S_{\max }}} {{\alpha _H}{e^{ - {{(u - i{V_{th}})}^2}/{\alpha _W}}}}
\label{eq12}
\end{equation}
where $\alpha_H$ and $\alpha_W$ determine the curve shape and steepness, and
$S_{max}$ is the upper limit of the output spikes.
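In frameworks with automatic differentiation, this surrogate can be attached to the spike function directly. The PyTorch sketch below is one possible implementation; the constants ($V_{th}=10$, $S_{max}=15$, $\alpha_W=20$) follow the parameter settings of Table \ref{table1}, under the assumption that the tabulated $\alpha_m=1$ plays the role of $\alpha_H$.
\begin{verbatim}
import torch

class MultiThresholdSpike(torch.autograd.Function):
    # Forward: the multi-threshold firing rule of Eq. (eq7).
    # Backward: the approximate curve f1(u) as a surrogate derivative.
    V_TH, S_MAX, A_H, A_W = 10.0, 15, 1.0, 20.0

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        c = MultiThresholdSpike
        return torch.floor(u / c.V_TH).clamp(0, c.S_MAX)

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        c = MultiThresholdSpike
        f1 = sum(c.A_H * torch.exp(-(u - i * c.V_TH) ** 2 / c.A_W)
                 for i in range(1, c.S_MAX + 1))
        return grad_out * f1

spike = MultiThresholdSpike.apply   # use in place of floor(u / V_th)
\end{verbatim}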
From \eqref{eq10}, the second term $\frac{\partial u^{(l)}[t_k]}{\partial W^{(l)}}$ is given by
\begin{equation}
\begin{split}
\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W^{(l)}}}} &= S(W_m^{(l)})\frac{{\partial {u^{(l)}}[{t_{k - 1}}]}}{{\partial {W^{(l)}}}} + {s^{(l - 1)}}[{t_k}] - \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial W^{(l)}}}{V_{th}}\\
{\rm{ }}&=S(W_m^{(l)})\frac{{\partial {u^{(l)}}[{t_{k - 1}}]}}{{\partial {W^{(l)}}}} + {s^{(l - 1)}}[{t_k}] - \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial W^{(l)}}}{V_{th}}
\end{split}
\label{eq13}
\end{equation}
From \eqref{eq13}, we can get
\begin{equation}
\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial W^{(l)}}}{\rm{ = }}\frac{{S(W_m^{(l)})\frac{{\partial {u^{(l)}}[{t_{k - 1}}]}}{{\partial {W^{(l)}}}} + {s^{(l - 1)}}[{t_k}]}}{{1 + \frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}{V_{th}}}}
\label{eq14}
\end{equation}
The error gradient with respect to the synaptic time constant weight $W_{m}^{(l)}$ can be calculated as
\begin{equation}
\frac{{\partial L[{t_k}]}}{{\partial {W_{m}^{(l)}}}} = {\delta ^{(l)}}[{t_k}]\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W_{m}^{(l)}}}}
\label{eq15}
\end{equation}
From \eqref{eq15}, the second term $\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W_{m}^{(l)}}}}$ is given by
\begin{equation}
\begin{aligned}
\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W_{m}^{(l)}}}}
&=\frac{\partial(S(W_m^{(l)})u^{(l)}[t_{k-1}])}{\partial W^{(l)}_m}-\frac{{\partial {s^{(l)}}[{t_k}]}}{{\partial {u^{(l)}}[{t_k}]}}\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial W_m^{(l)}}}{V_{th}}
\end{aligned}
\label{eq16}
\end{equation}
From \eqref{eq16}, we can obtain
\begin{equation}
\frac{{\partial {u^{(l)}}[{t_k}]}}{{\partial {W_{m}^{(l)}}}}
=\frac{\frac{\partial(S(W_m^{(l)})u^{(l)}[t_{k-1}])}{\partial W^{(l)}_m}}{1+\frac{\partial s^{(l)}[t_k]}{\partial u^{(l)}[t_k]}V_{th}}
\label{eq16_1}
\end{equation}
\subsection{Network architecture\label{structure}}
\begin{figure}[t]
\centerline{\includegraphics[width=12cm]{Architecture_3.pdf}}
\caption{Architecture of the proposed SNN}
\label{Architecture}
\end{figure}
We propose a flexible network structure for event stream classification tasks. The proposed network structure is illustrated in Fig. \ref{Architecture}.
The network consists of a spatio-temporal compression layer, a synaptic convolution block, $M$ end-to-end connected convolution blocks, $N$ end-to-end connected dense blocks, and a voting layer.
Since the outputs of the synaptic convolution block are synaptic currents, which are real values rather than binary values, we use the convolution block, which consists of sequentially connected layers including a convolution layer, a ReLU layer, and an average pooling layer, to extract features directly.
In the dense blocks, there is a dropout layer and a fully connected layer consisting of PMLIFs.
A voting layer after the last dense block is used to boost classification robustness.
The output of the last dense block is randomly divided into $N_{class}$ groups and connected to the voting layer, where $N_{class}$ is the number of classes.
The voting layer is implemented by calculating the average value of each group and selecting the maximum value as the classification result.
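A possible implementation of this voting layer is sketched below; it assumes the width of the last dense block is divisible by $N_{class}$ and that the random grouping is fixed at construction time.
\begin{verbatim}
import torch

class VotingLayer(torch.nn.Module):
    # Randomly partition the n_features outputs of the last dense
    # block into n_class groups; the mean of each group is its vote.
    def __init__(self, n_features, n_class):
        super().__init__()
        assert n_features % n_class == 0
        groups = torch.randperm(n_features).view(n_class, -1)
        self.register_buffer('groups', groups)

    def forward(self, x):               # x: (batch, n_features)
        votes = x[:, self.groups].mean(dim=2)
        return votes                    # argmax over dim 1 -> class
\end{verbatim}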
\section{Experiments and results}
We test our proposed SNN model and training method on three neuromorphic datasets N-MNIST\cite{N-MNIST}, DVS128 Gesture\cite{DVS128}, CIFAR10-DVS\cite{CIFAR10DVS} with different sizes and structures of SNNs.
We compare our training method with several previously reported state-of-the-art results obtained with the same or similar networks, including different SNNs trained by BP-based methods, converted SNNs, and traditional ANNs.
\subsection{Experiment settings}
\begin{table}[t]
\caption{Parameter settings}
\label{table1}
\setlength{\tabcolsep}{3pt}
\centering
\arrayrulecolor{black}
\begin{tabular}{lll}
\hline
Parameters & Description & Value \\
\hline
\textit{$V_{th}$} & Threshold & 10 mV \\
\textit{$\alpha _m$} & Derivative approximation parameters & 1 \\
\textit{$\alpha _W$} & Derivative approximation parameters & 20 \\
\textit{$N_{r}$} &\begin{tabular}[c]{@{}l@{}}Resolution of spatio-temporal\\ compression\end{tabular} & 8
\\
\textit{$S_{max}$} & Upper limit of output spikes & 15 \\
\textit{$N_{Batch}$} & \begin{tabular}[c]{@{}l@{}}Batch Size\\(N-MNIST/DVS128/CIFAR10-DVS)\end{tabular} & 64,16,16 \\
\textit{$\eta$} & \begin{tabular}[c]{@{}l@{}}Learning rate\\(N-MNIST/DVS128/CIFAR10-DVS)\end{tabular} & 0.0002, 0.00004, 0.0001 \\
\textit{$\beta _1, \beta _2, \lambda$} & Adam parameters & 0.9, 0.999, $1-10^{-8}$ \\
\hline
\end{tabular}
\arrayrulecolor{black}
\end{table}
All reported experiments below are conducted on an NVIDIA Tesla V100 GPU.
The implementation of our proposed method is based on the PyTorch framework\cite{Pytorch}.
The SNNs in the experiments are based on the network structure described in Sec. \ref{structure}.
Only 2-5 time steps are used to demonstrate the proposed ultra-low-latency spiking neural network.
No refractory period is used.
Adam \cite{Adam} is applied as the optimizer.
If not otherwise specified, the accuracy in this paper refers to the best results obtained by repeating the experiments five times.
The initialization of parameters, such as the weights, threshold, time constants of the membrane voltage and synapse, and other parameters, directly affects the convergence speed and stability of the whole network.
We should simultaneously make sure that enough spikes transmit information between the neural network layers and avoid too many spikes, which would reduce the neuronal selectivity.
In this paper, we use a fixed threshold in each neuron for simplification and initialize the weight parameters $W^{(l)}$ by sampling from the normal distribution
\begin{equation}
W^{(l)}\sim\mathcal{N}\left(\frac{V_{th}}{N_{l-1}},\,0.5\right)
\label{eq17}
\end{equation}
where $V_{th}$ is the threshold of the membrane voltage and $N_{l-1}$ is the number of neurons in the preceding layer.
The synaptic time constant weights $W_m$ are initialized to zeros.
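In code, this initialization amounts to the following sketch; the interpretation of 0.5 as the standard deviation is our assumption, the fan-in is taken as $N_{l-1}$, and the function name is illustrative.
\begin{verbatim}
import torch

def init_weights(layer, v_th=10.0):
    # Sample W^(l) from a normal distribution with mean V_th / N_{l-1};
    # the time-constant weights W_m start at zero.
    fan_in = layer.weight[0].numel()            # N_{l-1}
    torch.nn.init.normal_(layer.weight, mean=v_th / fan_in, std=0.5)
\end{verbatim}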
The set of other parameters is presented in Table \ref{table1}.
In addition, we do not apply complex techniques such as error normalization\cite{c8}, weight regularization\cite{c9}, the warm-up mechanism \cite{c10}, etc.
All testing accuracy is reported after training 50 epochs in our experiments.
\begin{table}[t]
\caption{Network structure}
\label{table1_1}
\centering
\begin{tabular}{lc}
\hline
Dataset & Network structure \\
\hline
N-MNIST & \begin{tabular}[c]{@{}c@{}}128SC3-128C3-AP2-256C3-AP2\\-512C3-AP4-DP-512FC-10Voting\end{tabular}\\
DVS128 & \begin{tabular}[c]{@{}c@{}}32SC3-32C3-AP2-64C3-AP2-128C3-AP2\\-256C3-AP2-512C3-AP4-DP-512FC-11Voting\end{tabular}\\
Cifar10-DVS& \begin{tabular}[c]{@{}c@{}}32C3-32C3-AP2-64C3-AP2-128C3-AP2\\-256C3-AP2-512C3-AP4-DP-512FC-10Voting\end{tabular} \\
\hline
\end{tabular}
\begin{tablenotes}
\item[1] 128SC3 represents synaptic convolution block with 128 3 $\times$ 3 filters.
128C3 represents convolution block with 128 3 $\times$ 3 filters.
AP2 represents average pooling layer with 2 $\times$ 2 filters.
DP denotes dropout layer and 512FC means a fully connected layer that consists of 512 PMLIFs.
\end{tablenotes}
\end{table}
\subsection{Dataset experiments}
\subsubsection{N-MNIST}
The Neuromorphic-MNIST (N-MNIST) dataset is a spiking version of the original frame-based MNIST dataset, which consists of the same 60 000 training and 10 000 testing samples as the original MNIST dataset\cite{N-MNIST}.
Each N-MNIST example is captured at the same visual scale as the original MNIST dataset (28 $\times$ 28 pixels).
The N-MNIST dataset was captured by mounting the Asynchronous Time-based Image Sensor (ATIS) on a motorized pan-tilt unit and moving the sensor while it views the images.
For N-MNIST dataset, the network structure we applied is shown in Table \ref{table1_1}.
We compare our proposed network with several spiking convolutional neural networks which have similar network structures.
Table \ref{T1} shows that our proposed spiking neural network achieves $99.63\%$, which outperforms the other results.
In addition to the performance improvement, our proposed method also greatly reduces the number of time steps.
Compared with LMCSNN\cite{LMCSNN}, which uses only 10 time steps, our method still achieves a 5$\times$ reduction.
\begin{table}[t]
\caption{Comparisons with SNNs on N-MNIST}
\label{T1}
\centering
\begin{tabular}{lllll}
\toprule
Models & Method & Time Step & Epoch & ACC(\%) \\
\hline
TSSL-BP\cite{TSSL-BP} & SNN & 30 & 100 & 99.23 \\
LISNN\cite{LISNN} & SNN & 20 & 100 & 99.45 \\
NeuNorm SNN\cite{NeuNormSNN} & SNN & 50 & 200 & 99.53 \\
BackEISNN\cite{BackEISNN} & SNN & 100 & 200 & 99.57 \\
LMCSNN\cite{LMCSNN} & SNN & 10 & 200 & 99.61 \\
This work & SNN & 2 & 50 & 99.63 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{DVS128-gesture}
The IBM DVS128-gesture\cite{DVS128} is an event-based gesture recognition dataset with $\mu s$-level temporal resolution and a 128 $\times$ 128 spatial resolution.
It records 1342 samples of 11 gestures, such as hand claps, arm rolls, etc., collected from 29 individuals under three illumination conditions; each gesture has an average duration of 6 seconds.
Compared with N-MNIST, DVS128-gesture is a more challenging dataset with more temporal information.
Table \ref{T2} shows that our method obtains $98.96\%$ test accuracy with only 5 time steps in 50 epochs, whereas BPSTA-SNN \cite{BPSTA-SNN}, which uses an SNN pre-trained by TSSL-BP \cite{TSSL-BP}, needs 16 time steps and 300 epochs to obtain the same accuracy.
\begin{table}
\caption{Comparisons with DNNs and SNNs on DVS128 Gesture}
\label{T2}
\centering
\begin{tabular}{llllll}
\hline
Models & Method &
Time Step & Epoch & Trick & ACC(\%) \\
\hline
STBP-TDBN\cite{stbp-TDBN} & SNN & 40 & - & & 96.87 \\
RG-CNN\cite{RG-CNN} & DNN & 8 & 150 & & 97.20 \\
LMCSNN\cite{LMCSNN} & SNN & 20 & 200 & & 97.57 \\
STFilter\cite{STFilter} & DNN & 12 & - & \begin{tabular}[c]{@{}l@{}}Spatiotem- \\poral filters\end{tabular} & 97.75 \\
BPSTA-SNN\cite{BPSTA-SNN} & SNN & 16 & 300 & \begin{tabular}[c]{@{}l@{}}Pre-train by\\ TSSL-BP \end{tabular} & 98.96 \\
Our Method & SNN & 5 & 50 & & 98.96 \\
\hline
\end{tabular}
\end{table}
\subsubsection{Cifar10-DVS}
To validate our method, we apply a deeper network structure, which contains six convolution layers, five average pooling layers, and a dense layer, to the Cifar10-DVS\cite{CIFAR10DVS} dataset, an important benchmark for comparison in the SNN domain.
Cifar10-DVS is a neuromorphic version converted from the Cifar10 dataset in which 10,000 frame-based images are converted into 10,000 event streams with DVS.
Our proposed method obtains $73.8\%$ test accuracy with 5 time steps.
Compared with TA-SNN \cite{TA-SNN}, our proposed method obtains a $1.8\%$ accuracy improvement with only half the time steps.
When we apply 10 time steps, we achieve $76.9\%$ test accuracy in 50 epochs.
\begin{table}
\caption{Comparisons with SNNs on Cifar10-DVS }
\label{T3}
\centering
\begin{tabular}{lllll}
\toprule
Models & Method & Time Step & Epoch & ACC(\%) \\
\hline
NeuNorm SNN\cite{NeuNormSNN} & SNN & 100 & 200 & 60.5 \\
SR-ANN\cite{SR-ANN} & ANN to SNN & 60 & - & 66.75 \\
STBP-TDBN\cite{stbp-TDBN} & SNN & 40 & - & 67.8 \\
LIAF-SNN\cite{LIAF-SNN} & SNN & 10 & 60 & 70.4 \\
TA-SNN\cite{TA-SNN} & SNN & 10 & 150 & 72 \\
Our proposed & SNN & 5 & 50 & 73.8 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Study\label{AblationStudy}}
We conduct extensive ablation studies on DVS128-gesture to evaluate the validity of the method.
The reason that we chose DVS128-gesture is that it is a more challenging dataset, especially for SNNs with few time steps, due to its more complex spatial structure and stronger temporal information.
The function of the synaptic convolutional block is to balance the dramatic change between adjacent time steps.
The dramatic change may slow down the convergence of the spiking neural network or even prevent it from converging during the training process.
We compare the output of a synaptic convolutional block with the output of a convolution layer + ReLU layer.
In Fig. \ref{DVS128_ST}, the figures in the first row are the input data at different time steps.
Compared with the figures in the second row, the figures in the third row are more continuous.
This shows that the main difference between Conv+ReLU and Conv+synaptic layer is that the synaptic layer retains some information from the previous time step, which avoids the dramatic changes between adjacent time steps.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{DVS128__ST.pdf}
\caption{Difference between the Conv+ReLU layer and the Conv+synaptic layer}
\label{DVS128_ST}
\end{figure}
To evaluate the influence of the synaptic convolutional blocks and the PMLIF models,
we design an ablation study of four strategies: S0, SNNs with synaptic convolutional blocks and PMLIF; S1, SNNs with only PMLIF; S2, SNNs with only synaptic convolutional blocks; S3, SNNs with neither synaptic convolutional blocks nor PMLIF.
As shown in table \ref{T6}, the SNNs with synaptic convolutional blocks and PMLIF have a higher testing accuracy, and the performance improvement becomes more significant as the number of time steps is reduced.
Compared with the SNN without the synaptic convolutional block and PMLIF, the SNN with both has a $1.38\%$ improvement in testing accuracy when the time step count is 2.
\begin{table}
\caption{Ablation study of different strategies based on DVS128-gesture }
\label{T6}
\centering
\resizebox{\columnwidth}{15mm}{
\begin{tabular}{lccccccccc}
\hline
\multicolumn{1}{c}{\multirow{2}{*}{Methods}} & \multicolumn{3}{c}{T=2} & \multicolumn{3}{c}{T=5} & \multicolumn{3}{c}{T=10} \\
\cline{2-10}
\multicolumn{1}{c}{} & Mean(\%) & Std(\%) & Best(\%) & Mean(\%) & Std(\%) & Best(\%) & Mean(\%) & Std(\%) & Best(\%) \\
\hline
S0 & 95.60 & 0.40 & 96.52 & 98.47 & 0.31 & 98.96 & 98.75 & 0.28 & 98.96 \\
S1 & 95.35 & 0.17 & 95.49 & 98.06 & 0.52 & 98.96 & 98.33 & 0.38 & 98.61 \\
S2 & 94.79 & 0.31 & 95.14 & 97.99 & 0.40 & 98.61 & 98.13 & 0.28 & 98.61 \\
S3 & 94.65 & 0.36 & 95.14 & 97.98 & 0.24 & 98.26 & 97.92 & 0.22 & 98.26 \\
\hline
\end{tabular}
}
\end{table}
\subsection{Performance analysis}
\subsubsection{Influence of compression ratio}
To reduce the training and inference latency of SNNs, we proposed a spatio-temporal compression method to aggregate event streams into a few time steps.
Since neuromorphic datasets are integrated into tens or hundreds of frames in the reported works,
the compression ratio is defined as the ratio of the number of time steps we applied to the average number of frames or time steps used in the mentioned works.
The lines in Fig. \ref{DVS128_Cifar10} denote the mean accuracy obtained by repeating the experiments five times, and the shaded regions show the fluctuation range of the data.
As shown in Fig. \ref{DVS128_Cifar10} (a), the accuracy remains $98.96\%$ when the compression ratio is $26.04\%$ for DVS128-gesture.
When the compression ratio is $10.42\%$, the classification accuracy drops from $98.96\%$ to $95.83\%$.
The reason for this significant accuracy degradation is that only two time steps remain at a compression ratio of $10.42\%$, which causes too much temporal information to be lost.
For the Cifar10-DVS dataset, a higher compression ratio is applied.
Compared with DVS128-gesture, the accuracy drop is not significant as the compression ratio increases.
The reason is that Cifar10-DVS is a neuromorphic dataset converted from a static dataset and therefore contains less temporal information; a high compression ratio mainly causes temporal information loss rather than spatial information loss.
\begin{figure}[t]
\subfigure[]{
\includegraphics[width=6cm]{DVS128_3_1.pdf}
}
\hspace{-6mm}
\subfigure[]{
\includegraphics[width=6cm]{Cifar10_DVS_1_1.pdf}
}
\caption{ The test accuracy of different compression ratios on (a) DVS128-gesture and (b) Cifar10-DVS dataset}
\label{DVS128_Cifar10}
\end{figure}
\subsubsection{Distribution of $\tau_{mem}$ after training}
In the PMLIF model, we introduce learnable membrane constants to improve the convergence speed and stability of the whole network.
The results in section \ref{AblationStudy} have shown the benefit of applying the PMLIF model.
In this section, we study the distribution of the membrane constants under different compression ratios.
As shown in Fig. \ref{Distribution}, the distribution of the membrane constants basically conforms to a Gaussian distribution.
We find that the mean of the membrane constants increases with the compression ratio.
The membrane time constants control how much information from the previous time step is retained.
A larger membrane constant means that more information from the previous time step is retained.
An increase in the compression ratio means that more information is integrated into one time step.
The analysis therefore suggests that a larger membrane time constant should be applied to retain more information from the previous time step when the compression ratio increases.
\begin{figure*}[t]
\centerline{\includegraphics[width=12cm]{tmemVsT.pdf}}
\caption{Distribution of $\tau_{mem}$ after training on DVS128-Gesture and CIFAR10-DVS}
\label{Distribution}
\end{figure*}
\section{Conclusion}
In this work, we proposed an ultra-low-latency spiking neural network with spatio-temporal compression and a synaptic convolutional block for event stream classification.
The proposed spatio-temporal compression method is used to compress event streams into a few time steps to reduce the training and inference latency.
We also proposed a synaptic convolutional block as the first layer of the SNN, in which a synaptic layer is applied to balance the dramatic change between adjacent time steps.
The parametric multi-threshold Leaky Integrate-and-Fire models, whose membrane constants are learnable, are introduced in our SNNs.
We evaluate our proposed method and compare it with state-of-the-art methods.
Experimental results show that our proposed method outperforms state-of-the-art methods with the same or similar network architectures on neuromorphic datasets such as N-MNIST, CIFAR10-DVS and DVS128-gesture.
\section*{Acknowledgment}
This work was supported in part by the National Natural Science Foundation of China under Grant 62004146, by the China Postdoctoral Science Foundation funded project under Grant 2021M692498, and by the Fundamental Research Funds for the Central Universities.
|
1,941,325,220,448 | arxiv | \section{Convergence} \label{app:convergence}
We evaluated the extent to which our models attained inflow equilibrium---i.e. the extent to which they converged---using a criterion similar to the one proposed by \cite{Narayan2012, Sadowski2013}, as follows. We estimated the convergence radius $r_{\rm conv}$ as the radius at which the simulation time $t_{\rm sim}$ equals the viscous time $t_{\rm visc} = r/|v_r|$:
\begin{equation}
t_{\rm sim} = t_{\rm visc} = \frac{r_{\rm conv}}{|v_r(r_{\rm conv}; t_{\rm sim})|}
\label{convergence-criteria}
\end{equation}
We restricted our analysis to the equatorial region of the accretion flow and only considered the last $10000\, GM/c^3$ of each model's duration. Since the values of $t_{\rm visc}$ fluctuate as a function of radius, we took $r_{\rm conv}$ to be the largest radius that satisfied equation \eqref{convergence-criteria}. The resulting values of $r_{\rm conv}$ are listed in Table \ref{tab:convergence}.
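For reference, the criterion can be implemented in a few lines. The Python sketch below is a minimal illustration (the array names are ours) and assumes time-averaged equatorial profiles of $r$ and $v_r$ over the last $10^4\,GM/c^3$ of the run:
\begin{verbatim}
import numpy as np

def r_conv(r, v_r, t_sim):
    # Largest radius at which t_visc = r / |v_r| <= t_sim,
    # i.e. the outer edge of the inflow-equilibrium region.
    t_visc = r / np.abs(v_r)
    inside = r[t_visc <= t_sim]
    return inside.max() if inside.size else np.nan
\end{verbatim}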
\begin{table}
\centering
\begin{tabular}{llc}
\textbf{\#ID} &\textbf{Name}& \textbf{$r_{\rm conv}$} [$R_S$] \\ \hline
00 & PNST.01 & 1024 \\
01 & PNST.1 & 280 \\
02 & PNSS.1 & 5 \\
03 & PNSS.3 & 5 \\
04 & PNSSV & 5 \\
05 & PL0ST.1 & 201 \\
06 & PL0SS.3 & 25 \\
07 & PL2SS.1 & 464 \\
08 & PL2SS.3 & 1002 \\
09 & PL2ST.1 & 444 \\
10 & PL4ST.01& 340 \\ \hline
\end{tabular}
\caption{Radius of convergence for all models considered in this work.}
\label{tab:convergence}
\end{table}
From Table \ref{tab:convergence}, we can see that some of the models achieved inflow equilibrium out to very large radii, such as models PNST.01 and PL2SS.3, which have $r_{\rm conv} \approx 1000 R_S$. Among our simulations, model PNST.01 is the one with the longest duration, whereas model PL2SS.3 displayed the most prominent winds. Models PNSS.1, PNSS.3 and PNSSV are very similar to each other and displayed poor convergence, as did PL0SS.3.
Models with higher inflow velocities generally resulted in larger convergence radii. By the same token, the larger velocities are a product of the more efficient angular momentum removal of the ST viscosity prescription, which also led to higher mass accretion rates.
\section{Other simulations} \label{subsec:other}
Besides the two simulations discussed in the main paper, we also performed other simulations (Table \ref{tab:simulations}), which we briefly describe below.
\subsection{PNST.1}
The difference between models PNST.01 and PNST.1 is the adopted value of $\alpha$. In this simulation we use $\alpha = 0.1$, which makes the effects of viscosity more pronounced. In this configuration, the disc loses its original form around $t=60000 \left[ \frac{GM}{c^3} \right]$ and there are no clear outflows in the wind region, following our previous definitions. Simulation PNST.1 becomes very similar to the shape of PNST.01 shown in the left panel of figure \ref{fig:dens-profiles}, but in a shorter time and with a 10 times higher accretion rate--as expected for accretion with a higher viscosity.
The value of the equatorial density profile power-law index is shown in Table \ref{tab:allmodels}. The profile has the second lowest value of $p$, which would indicate mass loss via outflows according to the model; however, no outflows were seen, the $\eta$ function was similar to that of PNST.01, and the fraction of ejected energy calculated via particles in the system was $\sim 2\%$, slightly below the value obtained for PNST.01.
\subsection{PNSSV}
This simulation is one of the three simulations with SS viscosity and $l(r)$ following the PN profile. It presented an extremely low accretion rate and convergence radius, which was reflected in minimal visual changes in the torus shape over the course of the simulation. The wind production was intermittent, and the material ejection on average was close to the results for PNST.01. However, the poor convergence made these results unreliable. The $s$ values for the PNSS trio, $s \gtrsim 2$, are unusually high when compared with the traditional values in the literature; this can be related to the lack of convergence in these three simulations.
\subsection{PNSS.1}
This simulation is very close to PNSSV. They share the same specific angular momentum profile and viscosity prescription (SS), with very close values of $\alpha$. Therefore, it is no surprise that the results of models PNSS.1 and PNSSV are very similar. For instance, these two simulations share the same value of $p$ for the equatorial density profile (table \ref{tab:allmodels}), similar $\eta$ variability, ejection rates, launching regions and extremely low convergence radii. The effects of the variation of $\alpha(r)$ in the innermost part of the simulation do not change the dynamics of ejection in the wind region and in the outer parts of the accretion disc.
\subsection{PNSS.3}
PNSS.3 is very similar to both PNSS.1 and PNSSV, the only difference being the choice of $\alpha$. There was not much difference between this simulation and the other two. All members of this trio of models had essentially the same durations ($\sim 400000GM/c^3$) and overall presented very similar results, including the extremely small convergence radius.
\subsection{PL0ST.1}
This simulation was performed with a constant specific angular momentum, $l(R) = {\rm const}$, and an ST-type viscosity profile. It presented an evolution marked by a very strong inflow from the beginning, with the material essentially free-falling onto the BH. During the infall, the material piled up in the inner parts of the disc and formed a spherical accretion flow. We found a jet-like structure arising in the simulation, which has a hydrodynamic origin for the following reason. Material was accreted quite fast due to the strong $\alpha$-viscosity. The disc overfeeds the BH, giving it more than it can take, and the accretion becomes spherical. The material piled up along the polar axis, and the ensuing overpressure creates a vertical structure that looks like a jet. All of this occurred considerably fast, within $15 000 GM/c^3$ after the beginning of the simulation.
Curiously, this simulation presented an equatorial density profile $\rho \propto r^{-0.97}$, which could indicate the existence of outflows. Since we observed only inflows in the model, this confirms that density profiles--taken by themselves--are not a good indicator of the presence of outflows.
The net mass accretion rate in this simulation is essentially the same as in PNST.1, even though accretion happened much more rapidly given the larger $\alpha$. Not a single particle escaped to the wind region. The $\eta$ had two bursts along the simulation time with peaks of $\sim 0.05$, but most of the time $\dot{E}_{wind} = 0$.
\subsection{PL0SS.3}\label{app:PL0SS3}
PL0SS.3 shares the same $l(R)$ as PL0ST.1, but with a different viscosity profile. This model is not similar to the main simulations discussed above. The disc shape did not change much along the simulation; it maintained its original form during all of the $2 \times 10^5 GM/c^3$. The net mass accretion rate here is a bit higher than the rates observed for PNSS.1, PNSS.3, PNSSV and PL2ST.1, and the density profile is similar to $\rho(r)$ for PL0ST.1, as shown in table \ref{tab:allmodels}.
The particles in this simulation presented a behavior slightly similar to those from PL0ST.1: the particles were launched, some were accreted, and others followed the external contour of the disc and were ejected near $\theta = 90 \degree$; hence we do not consider this ejection a wind. The number fraction of particles ejected into the wind region was zero. However, unlike the other simulations with low fractions of ejected energy, the $\eta$ here indicates a wind presence higher than in PL2SS.3, which is not consistent with the particle analysis. This comes from the diffusion of the huge disc (see panel (a) of figure \ref{fig:initial-conditions}); the disc has diffused and makes the calculation of $\dot{E}_{wind}$ unreliable at $r=500R_S$. If we calculate the same integral at a slightly larger radius, we find $\eta = 0$, unlike in PL2SS.3. The high value of $\eta$ here is therefore not due to wind production.
\subsection{PL2SS.1}
PL2SS.1 was the simulation with the most intense outflows; the fraction of ejected energy is $\sim 20\%$, twice the value found for PL2SS.3 in the previous detailed analysis, with a slightly shorter execution time than PL2SS.3. The general aspects of PL2SS.1 were very similar to those of PL2SS.3: both share the same specific angular momentum profile and viscosity prescription, the only difference being the $\alpha$ value, $\alpha^{07} = 0.1$ and $\alpha^{08} = 0.3$. There are minor differences in the density maps between the two simulations; PL2SS.1 showed less ejection in the equatorial plane than PL2SS.3, which is observable in the difference in the slope of the density profile in table \ref{tab:allmodels}. The accretion rate and the $\eta$ of these two simulations are very alike.
The main differences between PL2SS.1 and PL2SS.3 are: (i) the net mass accretion rate. For PL2SS.3 (bottom panel in figure \ref{fig:acc-time}) the net mass accretion rate increased with larger radius, and the same is observed for PL2SS.1, with close values; but for PL2SS.1 the net mass accretion rate was outflow dominated, while in PL2SS.3 it was inflow dominated. PL2SS.1 is the only simulation in which the mass outflow rate is more intense than the mass inflow rate for $30 \lesssim r \lesssim 300R_S$. And (ii) the velocity distribution of the particles. The PL2SS.3 velocity histogram, shown in the third panel of figure \ref{fig:LP-velocities}, is dominated by particles with $v_r > 0$; for PL2SS.1 there are many more particles with $v_r < 0$, nearly half of the total number. The average velocity of the particles in PL2SS.1 is smaller than in PL2SS.3, but is still the second highest average particle velocity in our sample.
The ejection map of this simulation was very close to the third panel in figure \ref{fig:LP-map}; both simulations ejected particles from all parts of the disc. PL2SS.1 and PL2SS.3 are similar to each other and very different from the rest of the sample, with some similarities to PL4ST.01.
\subsection{PL2ST.1}
This simulation had the same specific angular momentum profile as PL2SS.1 and PL2SS.3, but with a different viscosity prescription, which led to a completely different result: there were no outflows. The particles were mostly accreted at a high accretion rate, and the ejected ones left through the jet region. As in PL0ST.1, in this simulation we had spherical accretion and the emergence of a jet-like structure formed due to the intense loss of angular momentum of the disc, even with the short running time of $\sim 3.8 \times 10^4 GM/c^3$. There were no winds here.
The density profile slope was very close to the one found in PNSSV (see table \ref{tab:allmodels}), but the two had completely different accretion modes, and the evolution of the torus shape shows no similarities between these simulations. The ejection fraction and wind efficiency were both zero.
\subsection{PL4ST.01}
PL4ST.01 was the only simulation with the original condition $l(R) \propto R^{0.4}$ that did not present numerical errors in the very first steps of evolution; the implementation of the SS viscosity prescription was unfortunately not possible for this specific angular momentum profile. The results of this simulation were different from all previous setups.
The accretion disc was utterly destroyed in $\sim 1.2 \times 10^5\left[ \frac{GM}{c^3} \right]$ and left some filaments, which looked like a gaseous wig and kept being accreted. The accretion rate decreased after the destruction of the disc, but even the lowered rate is still orders of magnitude larger than the accretion rates of the simulations with SS-viscosity (in units of $M_0c^3/GM$). PL4ST.01 had the highest net mass accretion rate of all simulations, $\dot{m} \sim 10^{-4} M_0\frac{c^3}{GM}$.
The fraction of ejected energy from PL4ST.01 was really close to the value for PNSS.3, $\sim 1\%$, but its wind efficiency $\eta$ in the second half of the simulation time is comparable to the value found in PL2SS.3; probably outflows were produced in PL4ST.01 after the torus destruction. This scenario is not very physical, because to explain AGN physics we expect a well-behaved accretion disc that can survive for a long time, not a destroyed disc reduced to some gas filaments. Another remarkable feature of this simulation is the value of $p=1.53$, which is barely consistent with the assumption of $p<1.5$, considering the uncertainties in the calculation.
\section{Winds production}
The wind production for some simulations can be seen in figure \ref{fig:eta-winds-app}. As we said in section \ref{sec:results}, for various simulations we did not find continuous wind production. Figure \ref{fig:eta-winds} showed non-continuous wind generation for PNST.01: PNST.01 has a wind ``flare'', and the same appears in figure \ref{fig:eta-winds-app} for PNST.1, PL0ST.1 and PL2ST.1. All these simulations share the same viscosity prescription.
The $7^{\rm th}$ column in table \ref{tab:allmodels} presents the wind activity time for our simulation sample. On average the wind production operates $\sim 50\%$ of the time. As we said in section \ref{sec:disc}, outflows similar to ``Parker winds'' can be intermittent \citep{Parker1960}. This scenario is consistent with our outflows.
\begin{figure}
\noindent
\includegraphics[width=\linewidth]{figures/efficiency-winds-app.png}
\caption{Temporal evolution of the wind efficiency $\eta$, as defined in equation \eqref{eta-efficiency}, for the other simulations. This figure is analogous to Fig. \ref{fig:eta-winds}.
}
\label{fig:eta-winds-app}
\end{figure}
\section*{Supporting information}
All simulation data will be made publicly available on figshare upon acceptance of the manuscript for publication.
\section{Discussion}\label{sec:disc}
\subsection{Accretion flow and density radial profile}
In table \ref{tab:allmodels} we present the power-law index $p$ of the radial density profile $\rho \propto r^{-p}$ averaged over the equatorial region of the accretion flow. From this table we can draw a number of conclusions:
\begin{enumerate}
\item There is a correlation between the adopted initial angular momentum profile and the value of $p$. The corollary is that we see no particular values of $p$ associated with any of the three groups in figure \ref{fig:integrated-mdot}.
\item For simulations with the same specific angular momentum, the SS-viscosity models resulted in higher values of $p$ compared to the ST-viscosity ones.
\item Our results did not present a clear correlation between the value of $p$ and the wind production or $s$.
\end{enumerate}
The last item above is especially relevant because it demonstrates that, based only on the value of $p$, it is not straightforward to tell whether winds are being produced. This result seems to contradict some previous analytical \citep{Blandford1999, Begelman2012} and numerical \citep{Yuan2012} works, which base their analysis on the assumption that $\rho(r)$ in the accretion disc is strongly dependent on the presence of mass-loss. These works assume that $\rho(r) \propto r^{-3/2+s}$ where $s$ is usually in the range 0.5-1, with larger values corresponding to more profuse outflows ($s=0$ corresponds to a no-wind ADAF; \citealt{Narayan1994}). Concretely, ADIOS models suggest that $s=1$, $p=0.5$ corresponds to very strong winds. Our model PNST.01 shows such a density profile, yet it displays only a feeble breeze over just a short amount of time. Our model with the strongest winds--model PL2SS.3--has a high value of $p=1.33$, in contradiction with ADIOS models, and also similar to models with no winds such as PL2ST.1. We conclude that we cannot make strong statements about the presence of winds based on the indirect information given by $\rho(r)$.
$s$ was clearly related to the adopted viscosity, with ST simulations showing $s \lesssim 0.5$, while SS simulations had much higher values, $0.75 < s < 1.2$ --some simulations reached $s > 2$, but they presented poor convergence, see table \ref{tab:convergence}. Furthermore, the relation between $p$ and $s$ is not clear in our simulation sample, despite the relation $s + p = 3/2$ expected in ADIOS models. The values of these two power-law indices are more related to the viscosity and initial conditions than to each other. In fact, they are probably non-trivial functions of the flow parameters.
The role of viscosity is prominent in our results, and the evolution of the accretion rate is directly related to it. The SS-viscosity produced simulations with lower accretion rates, as we can see in figure \ref{fig:integrated-mdot}. $\dot{m}$ is a direct consequence of the angular momentum transfer: ST-viscosity simulations transferred angular momentum more efficiently than SS-viscosity simulations. In the SS-viscosity case the accretion is slower and we have steeper density and accretion rate profiles --respectively the values of $p$ and $s$. For higher accretion rates, the gas was not able to accumulate in the inner regions of the accretion flow; it was accreted. For instance, for ST-viscosity we have $\max(\rho) \approx 1$ while for SS-viscosity we have $\max(\rho) > 1$.
The wind production is more persistent in SS-viscosity simulations. Our results suggest that lower accretion rates are more prone to produce outflows. Lower accretion means that the material has a slower radial velocity and remains trapped inside the accretion flow for longer. These effects make the material more likely to be subject to internal turbulent forces and thermal expansion.
Group 3 in figure \ref{fig:integrated-mdot} (the green ones) contains the simulations with higher mass loss and a more steady outflow production. These two simulations had the same SS-viscosity profile and specific angular momentum profile $l(R) \propto r^{0.2}$. The initial disc can be seen in figure \ref{fig:initial-conditions}, panel (b). Higher values of $a$ mean higher rotational velocities and higher values of energy. These systems naturally have a higher associated $Be$, making them more prone to produce outflows.
The third of our explored parameters is $\alpha$. The effects of $\alpha$ are not as clear as those of the others. Simulations PL2SS.1 and PL2SS.3 were very similar, with a major difference only in the values of $s$. In contrast, simulations PNST.01 and PNST.1 presented notable differences. $\alpha$ is related to the ``strength'' of the viscous effects. For the same viscosity prescription and specific angular momentum profile, the simulation with higher $\alpha$ presented higher $\dot{m}$ for the pairs cited.
\subsection{Wind launching mechanism}
Since our simulations do not have magnetic fields that could be responsible for ejecting material through the Lorentz force, the only possibility left is a thermally-driven mechanism to explain our observed outflows. In order to interpret the hydrodynamic winds observed here, we use the model of \cite{Parker1960}, originally proposed to explain the nature of the Sun's coronal outflows. The main parameter that describes Parker winds is the ratio between the gravitational binding energy and the thermal energy
\begin{equation}
\Lambda = \frac{2 G M m_H}{5 r k T(r)}
\end{equation}
where $m_H$ is the hydrogen mass and $k$ is the Boltzmann constant.
For $\Lambda \leq 1$, the thermal energy overcomes the gravitational energy and winds can be launched via thermal expansion.
\cite{Parker1960} originally considered spherically symmetric mass outflows in stellar systems with temperatures $\sim 10^6$K, which are much lower than the typical temperatures in RIAFs. The question, of course, is: can much hotter accretion flows launch thermally-driven (Parker) winds, even though the central mass is much larger than in stellar systems? \cite{Waters2012} attacked this question in the context of much colder, thin accretion discs around BHs. Here, we analyzed it in the context of our hot accretion flow models. More recently, \cite{Cui2019} worked with solutions of Parker winds from RIAFs around SMBHs.
Analyzing the averaged temperature profiles of our simulations, we found $\Lambda \sim 1-2$ at the disc equator and $\Lambda \ll 1$ in the coronal region. Therefore, the winds we have observed in our RIAF models are consistent with being launched from the RIAF's corona via the Parker wind scenario. Our outflows are produced by the extremely high temperatures in the torus corona. In this region we have $\Lambda \ll 1$, favouring thermal expansion and material ejection via thermally-driven winds.
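The profile of $\Lambda$ used in this check is a direct transcription of the definition above; a minimal sketch in cgs units (the profile arrays are assumed to come from the azimuthally averaged simulation data, and the function name is ours) is:
\begin{verbatim}
import numpy as np

G, M_H, K_B = 6.674e-8, 1.673e-24, 1.381e-16   # cgs constants

def parker_lambda(r, T, M_bh):
    # Lambda = 2 G M m_H / (5 r k T); r in cm, T in K, M_bh in g.
    # Lambda <= 1 allows thermally-driven (Parker) expansion.
    return 2.0 * G * M_bh * M_H / (5.0 * r * K_B * T)
\end{verbatim}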
PNST.01 shows only a short wind burst while PL2SS.3 displays persistent, vigorous mass-loss over the entire simulation time. These results come from the interplay between thermal expansion (i.e. Parker winds), the initial reservoir of angular momentum and the shear stress. The difference between these models lies in the parameters which regulate the angular momentum and its transport across the flow. The initial launching of winds in both models is similar since they have similar amounts of enthalpy compared to the gas gravitational binding energy---both simulations had similar initial profiles of the Bernoulli parameter and $\Lambda$. However, model PL2SS.3 initially has $\sim 33\%$ more angular momentum than the PNST.01 setup. The wind in PL2SS.3 carried a considerable amount of angular momentum away from the mid-plane of the accretion flow. In PNST.01, by contrast, there is no persistent wind, and the transport of angular momentum outwards occurred in the equatorial zone, generating a tail-like structure.
With our very long simulations, we have found that the wind production is not continuous in time, as can be seen in figure \ref{fig:eta-winds} --for PNST.01-- and in table \ref{tab:allmodels}. For Parker winds with $\Lambda \leq 1$, a stationary expansion solution (continuous outflow generation) may or may not be reached; even when the $\Lambda$ condition is satisfied, the coronal heating can be insufficient to sustain the stationary expansion state, in which case an intermittent expansion state arises \citep{Parker1960}, matching our results. The $7^{\rm th}$ column of table \ref{tab:allmodels} gives the fraction of time in each simulation in which $\eta > 0$, in other words the fraction of time in which the system ejected material.
We have found that a small change in the value of the $\alpha$-viscosity can have a notable effect on the properties of the resulting outflow. For instance, consider the models PL2SS.1 and PL2SS.3. A small increase in the value of $\alpha$ from 0.1 to 0.3 resulted in a notable decrease in the amount of energy carried by the outflow, as we can see in the $9^{\rm th}$ column of table \ref{tab:allmodels}. Interestingly, the accretion rate did not change with this variation. A possible qualitative explanation is that for small values of $\alpha$ there is not enough gas reaching the wind launching region, so the wind is very weak or absent. On the other hand, with very high values of $\alpha$ there is enough gas being channeled into an outflow, but the increased viscosity makes it lose energy and angular momentum rapidly. Therefore, there would be an intermediate ``sweet spot'' of $\alpha$-values that optimizes wind launching, such that enough gas is lost in an outflow which remains stable and energetic enough to reach large distances.
We found that the SS viscosity profile is the most conducive to wind formation. Simulations with the specific angular momentum scaled as a power-law ($R^a$) with higher values of the coefficient $a$ presented stronger winds. Simulations PL2SS.1 and PL2SS.3 were the ones with the most prominent outflows. Changing the value of $\alpha$ did not drastically change the wind production, as is visible when we compare similar simulations such as PL2SS.1 and PL2SS.3.
\subsection{Comparison with observations}\label{subsec:obs}
Our simulations with the ST viscosity (models PNST.01, PNST.1, PL0SST.1 and PL2ST.1) resulted in values $p \sim 0.5-1$. The resulting density profiles are consistent with those constrained from observations of LLAGNs, for instance Sgr A* ($p \sim 0.5$; \citealt{Yuan2003, Wang2013}), NGC 3115 ($p \sim 1$; \citealt{Wong2011, Wong2014, Almeida2018}) and M87 ($p \sim 1$; \citealt{Kuo2014, Russell2015, Park2019}). In our sample these simulations had weaker winds compared with the remaining ones. The simulations with SS viscosity (models PNSS.1, PNSS.3, PNSSV, PL0SS.3, PL2SS.1, PL2SS.3) achieved more efficient winds but with $p \sim 1.1-1.4$, marginally consistent with the observations of NGC 3115 and M87.
In many of our simulations, we have found that a typical value for the efficiency of wind production $\eta$ (eq. \ref{eta-efficiency}) is $10^{-3}$. Interestingly enough, this is in good agreement with the mechanical feedback efficiency of $10^{-4}-10^{-3}$ required in cosmological simulations of AGN feedback in the so-called radio mode, in order to offset cooling in galaxy clusters and individual galaxies \citep{Ciotti2010,Sijacki2007,Sijacki2015} and reproduce observations. Therefore, RIAFs could in principle provide efficient feedback to quench star formation in galaxies.
Given the typical values of $\eta$ found in our simulations, we can use eq. \ref{eta-efficiency} to write
\begin{equation} \label{power}
\dot{E}_{\rm wind} = 10^{41} \left( \frac{M}{10^8M_\odot} \right) \left( \frac{\dot{M}}{10^{-3} \dot{M}_{\rm Edd}} \right) \ {\rm erg \ s}^{-1}
\end{equation}
where $\dot{M}$ is taken as the accretion rate fed at the outer radius of the accretion flow, as defined previously (cf. section \ref{subsec:eff}).
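For quick estimates, eq. \ref{power} can be evaluated as below. This is a back-of-the-envelope helper, not part of the simulation code; the function name is illustrative.
\begin{verbatim}
def wind_power(M_bh_msun, mdot_edd_frac):
    """Wind kinetic power from eq. (power), in erg/s.

    M_bh_msun:     BH mass in solar masses
    mdot_edd_frac: accretion rate in units of the Eddington rate
    """
    return 1e41 * (M_bh_msun / 1e8) * (mdot_edd_frac / 1e-3)

# Akira: 1e8 M_sun SMBH accreting at ~1e-4 Eddington -> ~1e40 erg/s
print(f"{wind_power(1e8, 1e-4):.1e} erg/s")
\end{verbatim}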
We now turn to the comparison of the energetics of our modeled winds with observations of LLAGNs. The ``Akira'' galaxy hosts a $10^8 M_\odot$ SMBH accreting at $\dot{M} \sim 10^{-4} \dot{M}_{\rm Edd}$ \citep{Cheung2016}. Applying eq. \ref{power} to Akira, we get $\dot{E}_{\rm wind} \sim 10^{40} \ {\rm erg \ s}^{-1}$, which is consistent with the wind kinetic power derived from integral field unit observations of the ionized gas ($\approx 10^{39} \ {\rm erg \ s}^{-1}$; \citealt{Cheung2016}). This wind can inject sufficient energy to offset the cooling rate in both the ionized and cool gas phases in Akira. Moreover, the simple wind model of Cheung et al. gives a constant radially-outward velocity of $310 \ {\rm km \ s}^{-1}$ in a wide-angle cone in Akira. From our simulations, the average velocity of the outflowing particles was $\sim 10^{-3}c \approx 300 \ {\rm km \ s}^{-1}$, in excellent agreement with the observations reported by \cite{Cheung2016}. In conclusion, the properties of the wind observed in the Akira galaxy are well explained as winds from a RIAF as modelled in this work.
The SMBH at the center of Our Galaxy--Sgr A*--is accreting with a Bondi rate of $\dot{M}_{\rm Bondi} \approx 10^{-5} M_{\odot} \, {\rm yr^{-1}} \approx 10^{-4} \dot{M}_{\rm Edd}$ \citep{Baganoff2003}, which taking into account the RIAF solution gives $\dot{M} \sim 0.1 \dot{M}_{\rm Bondi} \approx 10^{-5} \dot{M}_{\rm Edd}$. Using eq. \ref{power}, this results in a wind power of $\dot{E}_{\rm wind} = 10^{38} \ {\rm erg \ s}^{-1}$. This estimate is similar to the power previously estimated by different authors \citep{Falcke2000, Merloni2007}. Such winds could be important in explaining the Pevatron observations by the High Energy Stereoscopic System collaboration \citep{HESSCollaboration2016} and the \textit{Fermi} bubbles \citep{Su2010}.
We should note that our winds could be agents of AGN feedback in galaxies hosting SMBHs accreting in the sub-Eddington, RIAF mode. Such feedback would be neither in the radio mode--since it is not through a relativistic jet--nor in the quasar mode--since we are modeling SMBHs accreting at low rates. One class of galaxies which could be subject to this type of feedback--in fact, it seems to be required to explain them--are LLAGNs in the proposed ``red geyser'' mode \citep{Cheung2016,Roy2018}. In red geysers, periodic low-power outflows from the central LLAGN would be able to heat the surrounding gas, prevent any substantial star formation and thereby maintain the quiescence in typical galaxies. The outflows self-consistently modeled in this work can explain the origin of the red geyser mode of AGN feedback.
\subsection{Comparison with previous numerical simulations}\label{subsec:compare-sims}
Our simulations with the ST viscosity, except PL4ST.01, presented values of $p \sim 0.5-1$, which agrees with the simulations performed by \cite{Stone1999, Yuan2012, Yuan2012b}, which used the same viscosity. The simulations with SS viscosity achieved more efficient winds but with $p \sim 1.1-1.4$, slightly below the self-similar, no-wind ADAF solution \citep{Narayan1994}. The resulting power-law dependence of $\rho(r)$ and the fact that $p < 1.5$ are in general agreement with expectations of the ADIOS model \citep{Blandford1999}, and also with previous hydrodynamical simulations \citep{Stone1999, Yuan2012b}. However, we found very high values of $s \gtrsim 2$ for some simulations, revealing a strong dependence of the radial profile of the mass inflow rate on the adopted viscosity parameterization. Considering the values of $p$ and $s$, our results for PNST.01 and PNST.1 are the most similar to previous simulations.
On average the efficiency of the winds in our models is in the range $\eta \sim 10^{-3}-10^{-2}$, which is a bit lower than the typical values of $\eta = 0.03$ found by \cite{Sadowski2016} in their GRMHD simulations of RIAFs around nonspinning BHs. We think that the difference is due to the fact that we have not considered magnetic fields in our simulations, which can increase the intensity of outflows due to MHD processes. We intend to investigate the impact of magnetic fields on the outflows in a forthcoming work.
\subsection{Pathologies}
These simulations are purely hydrodynamical, with the angular momentum transport role of the MRI incorporated via an effective viscous stress tensor. MHD effects such as e.g. magnetocentrifugal processes could enhance the production of outflows beyond our estimates in this work. In our simulations the material was ejected via forces created by pressure gradients in the disc--thermally-driven winds. Magnetic fields add a new force component, the Lorentz force, that can enhance the production of outflows and the average energy of the ejected particles. We plan to carry out (GR)MHD simulations to investigate these effects in the future.
We did not consider the effects of radiation pressure in our simulations, since RIAFs are low-density, optically thin systems, with the radiation field only interacting very weakly with the gas.
Our gravity is represented by the simple pseudo-Newtonian gravitational potential of \cite{Paczynsky1980}. This is clearly not the most accurate description of gravity near the event horizon. Nevertheless, it is a reasonable approximation at larger radii ($r \gtrsim 10 R_S$) and is very useful to keep the calculations conceptually simple (Newtonian) and to save computer time, since it avoids the extra computational cost of dealing with metric factors, while incorporating the physics of the innermost stable circular orbit. For very small radii $r \approx R_S$ our simulation is not very accurate, so we restrict our analysis to slightly larger radii.
All the simulations were two-dimensional--we assumed complete axisymmetry. Three-dimensional simulations could reveal more turbulence in the disc and possibly stronger anisotropies in the wind production (e.g. \citealt{Narayan2012}). They are much more computationally expensive, but the upgrade from 2D to 3D can improve the accuracy of the results.
\section{Summary}\label{sec:end}
In this work, we performed two-dimensional numerical, hydrodynamical simulations of radiatively inefficient accretion flows onto nonspinning black holes. Our models were evolved for very long durations of up to $8 \times 10^5 GM/c^3$--comparable to the viscous time of the system. Our initial conditions involved large tori extending up to 500 Schwarzschild radii. Given that the initial conditions of accretion flows are poorly constrained in nature, we explored a diversity of rotation curve profiles and viscosity prescriptions, potentially spanning the diversity of RIAFs found in the centers of galaxies. Our main goal was to investigate the properties of the outflows emanating from these large, hot accretion flows, compare the properties of these winds with those of low-luminosity AGNs, and clarify along the way their potential for AGN feedback. Here we present a brief summary of our main results:
\begin{itemize}
\item Our accretion flows produced powerful subrelativistic, thermally-driven winds reaching velocities of up to $0.01 c$.
\item The wind powers correspond to $0.1-1\%$ of the rest-mass energy associated with inflowing gas at large distances, $\dot{E}_{\rm wind} = (0.001-0.01) \dot{M} c^2$, in good agreement with models of the ``radio mode'' of AGN feedback.
\item The properties of our simulated winds are largely in agreement with constraints for the prototypical example of LLAGN wind--the Akira galaxy--and can explain how red geysers are able to heat ambient cooler gas and thereby suppress star formation.
\item Our thermal winds originate in the corona of the accretion flow ($30\degree \lesssim \theta \lesssim 60\degree$), are produced at distances $\approx 10-100 R_S$ from the SMBH, and can be considered analogous to Parker winds.
\item The equatorial density profile of the accretion flow $\rho(r, z=0)$ displayed a complex behavior which follows the general expectations from the ADIOS models. However, we were unable to make strong statements about the presence of winds based on the indirect information given by $\rho(r)$.
\item Our models generally displayed a $\dot{M}_{\rm in} \propto r^s$ behavior. However, in some cases the value of $s$ was too high ($s>1$) to be consistent with the expectations of the ADIOS model.
\item Variations in the specific angular momentum profile and the viscosity parameterization caused drastic changes in the accretion flow properties: Even long-run simulations retained some memory of the initial condition.
\item Most of the winds generated were intermittent with an ``on-off'' behavior. Just a few models displayed continuous winds over the whole simulation time. Sometimes winds were produced in powerful bursts with $\eta$ reaching close to 100\%.
\item Thermal winds can remove the excess angular momentum from the accretion flow. Therefore, discs which begin with larger reservoirs of angular momentum will tend to incur more vigorous mass-loss in the winds.
\end{itemize}
We adopted two approaches in analyzing our simulations: (i) looking at the energy and mass fluxes between spherical shells and (ii) using Lagrangian tracer particles to track the wind. The results given by both techniques were consistent with each other, with both approaches supporting the scenario of winds as a generic feature of hot accretion flows. These thermal winds can be a mechanism of feedback in LLAGNs.
We propose two improvements to our simulations: the addition of magnetic fields and an increased dynamical range. Magnetic fields are a natural component of accretion flows, since we believe that the mechanism behind the viscosity is the MRI \citep{Balbus1991, Balbus1998}; furthermore, mass ejection can be affected by the Lorentz force, eventually increasing (or suppressing) the wind strength.
We need to increase the dynamical range in order to evolve the winds as they flow out of the SMBH's sphere of gravitational influence and into the galactic environment, thereby affecting the host galaxy. These two improvements are the natural next step to the work presented here.
\section*{Acknowledgements}
We acknowledge the help of Francisco Ribacionka who helped us to run our models in the AGUIA cluster. We acknowledge useful discussions with Lu\'is H.S. Kadowaki, Maria Luisa Gubolin, Bhargav Vaidya, Defu Bu, Roderik Overzier, Diego Falceta Gon\c{c}alves and Thaisa Storchi Bergmann. This work was supported by S\~ao Paulo Research Foundation (FAPESP) under grants 2016/24857-6, 2017/01461-2 and 2019/10054-7. This work has made use of the computing facilities of the Laboratory of Astroinformatics (IAG/USP, NAT/Unicsul), whose purchase was made possible by the FAPESP grant 2009/54006-4 and the INCT-A. Research developed with the help of HPC resources provided by the Information Technology Superintendence of the University of S\~ao Paulo. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro P6000 GPU used for this research.
\section{Introduction}\label{sec:introduction}
When matter falls into a black hole (BH) it forms a disk-like structure due to the barrier posed by angular momentum conservation--an accretion flow. Magnetic stresses in the ionized plasma introduce friction which allows the gas to flow in toward the BH \citep{Balbus2003}. At the same time, these stresses convert some of the gravitational potential energy of the accretion flow into heat and can release a substantial fraction of its rest mass energy, providing the primary power source behind active galactic nuclei (AGNs), black hole binaries and gamma-ray bursts \citep{Meier2012}.
The dynamics of the resulting accretion flow depends critically on whether the viscously generated thermal energy is radiated away \citep{Abramowicz2013}. This is parameterized in terms of the radiative efficiency $\epsilon = L/\dot{M} c^2$ where $L$ is the luminosity produced by the accretion flow and $\dot{M}$ is the mass accretion rate onto the BH. In this paper, we are particularly interested in the regime of BHs accreting at low $\dot{M}$. At rates $\dot{M} \lesssim 0.01 \dot{M}_{\rm Edd}$ ($\dot{M}_{\rm Edd}$ is the Eddington accretion rate), the gas cannot radiate its thermal energy away and becomes extremely hot ($T \sim 10^{12}$ K), geometrically thick ($H \sim R$, $H$ is the vertical disk thickness) and optically thin, giving rise to a radiatively inefficient accretion flow (RIAF) with $\epsilon \ll 1$ \citep{Yuan2014}. The vast majority of SMBHs in the local universe--inactive galaxies and low-luminosity AGNs (LLAGNs)--are fed at low, sub-Eddington rates and hence in the RIAF mode, with the nearest example being Sagittarius A* (Sgr A*), the $4 \times 10^6 \, M_\odot$ BH at the center of Our Galaxy \citep{Narayan1995nat,Yuan2003}.
The presence of a SMBH accreting in the RIAF mode can have important feedback effects in its host galaxy. In the centers of many galaxy clusters the ``radio mode'' of feedback has been observed in the form of powerful radio jets heating the cluster atmospheres and offsetting cooling \eg{McNamara2012}; these clusters usually host a SMBH accreting at low $\dot{M}$ \eg{Birzan2004, Nemmen2015}.
There is also evidence for feedback operating in individual galaxies in the form of centrally driven winds from SMBHs in LLAGNs lacking obvious extended radio jets, dubbed ``red geysers''; these winds carry out enough mechanical energy to heat ambient, cooler gas and thereby suppress star formation \citep{Cheung2016,Roy2018}.
In fact, it has been proposed that outflows from SMBHs accreting at low rates may be responsible for quenching star formation \citep{Croton2006,Bower2006,Bower2017} and therefore explain the increase in the number of quiescent galaxies--the vast majority of galaxies which have little or no ongoing star formation--over the past ten billion years \citep{Bell2004, Bundy2006, Faber2007, Ilbert2010}.
Moving on closer to home, a major surprise from the \textit{Fermi} Large Area Telescope was the detection of the \textit{Fermi} bubbles above and below the direction of the galactic center \citep{Su2010,Ackermann2014a}.
One possibility is that the SMBH at the center of the Milky Way may once have had a stronger activity at its nucleus like that of a brighter AGN, producing powerful outflows within the past few million years \citep{Guo2012, Mou2014}.
It is clear that properly modeling RIAFs and their outflows is relevant for the full understanding of AGN feedback.
There is a considerable body of work on the theory of RIAFs. Here, we briefly summarize the progress focusing on numerical simulations of wind launching from RIAFs. The early work focused on deriving analytical one-dimensional solutions to the RIAF structure \citep{Narayan1994, Narayan1995b}; they suggested that the positivity of the Bernoulli parameter in the solutions implies that the gas is weakly bound to the BH --$Be$ is defined as
\begin{equation}
Be = \frac{v^2}{2} + \gamma \frac{P}{\rho} + \psi.
\label{Be-eq}
\end{equation}
where $v$, $P$, $\rho$ and $\psi$ are, respectively, velocity, pressure, density and gravitational potential. Therefore, RIAFs would be quite likely to produce outflows. \cite{Blandford1999, Begelman2012} took the argument to the extreme, suggesting that RIAFs are always accompanied by vigorous outflows, and proposed the ansatz that the inflow rate follows $\dot{M}(r) \propto r^s$--i.e. a reduction in the inflow rate due to mass-loss in winds. \cite{Abramowicz2000} argued that the Bernoulli parameter is irrelevant to judge whether outflows are produced by the system, but pointed out that RIAFs may have--but do not need to have--winds.
While analytical one-dimensional models are very useful, some aspects of accretion physics, such as the formation of outflows and their nonlinear dynamics, are beyond the scope of such models. Numerical simulations are needed to properly model these systems. The first global simulations of RIAFs were purely hydrodynamic and Newtonian \citep{Stone1999, Igumenshchev1999, Igumenshchev2000}. They found that the accretion flows are convective and observed strong bipolar outflows. \cite{Proga2003a} used a pseudo-Newtonian potential and ignored viscosity; Proga \& Begelman found no outflows in their work. More recently, \cite{Yuan2012, Yuan2012b, Bu2016a} performed hydrodynamic simulations of RIAFs with an increased dynamical range, encompassing from near the Bondi radius down to the BH. They found fairly strong outflows and apparent support for the $\dot{M}(r)$ ansatz of \cite{Blandford1999}.
\cite{Li2013, Bu2018, Bu2018b} included a cooling term in the energy equation and found strong, thermally-driven winds.
The next step of numerical work consisted of advancing beyond hydrodynamic models and adding magnetic fields in order to explore the magnetorotational turbulence and the effect of different initial configurations of magnetic fields on the disk and wind evolution. \cite{Machida2000, Machida2001} performed global magnetohydrodynamic (MHD) simulations of RIAFs and found the development of temporary outflows. Similarly, \cite{Igumenshchev2003} found an initial transient bipolar outflow, however in the latter work the transient is followed by a steady state weak thermal wind. \cite{Stone2001, Hawley2002, Proga2003, Bu2016} observed strong outflows at all radii beyond the innermost stable circular orbit in their MHD models. The MHD simulations of \cite{Pen2003, Pang2011} showed no sign of outflows.
\cite{DeVilliers2003} inaugurated the era of global, general relativistic MHD (GRMHD) simulations of RIAFs. \cite{DeVilliers2003, DeVilliers2005} observed two types of outflows in their models: relativistic, Poynting-flux dominated jets along the poles of the BH and a coronal matter-dominated wind that did not have enough energy to escape to infinity and hence was bound to the BH (cf. also \citealt{McKinney2004, Hawley2006}). \cite{Sasha2012a, McKinney2012} performed GRMHD simulations of larger tori with an emphasis on understanding the dynamics of jets. They found relatively strong, magnetized winds with a power depending on the BH spin and carrying as much as $\approx 10\%$ of the rest-mass energy associated with accreted matter to infinity, similarly to \cite{Sadowski2013,Sadowski2016}. The simulations of \cite{Moscibrodzka2013, Moscibrodzka2014, Moscibrodzka2016a} also find magnetized coronal winds, though they do not quantify the energy carried by such outflows. Puzzlingly, \cite{Narayan2012} found little evidence for winds in their GRMHD models with large tori, long durations and different magnetic topologies. Narayan et al. pointed out that the limited convergence of their models prevents them from drawing more robust conclusions on the amount of mass-loss in winds from RIAFs. Interestingly enough, \cite{Yuan2015} reanalyzed the simulation data of \cite{Narayan2012} using Lagrangian particles and found winds that carry $\sim 1\%$ of the rest-mass energy associated with accreted matter to infinity.
From the literature review presented above, it is clear that the issue of wind-launching from RIAFs is not settled. Some of the unresolved questions are: do the winds produced by underfed SMBHs provide significant feedback inside the host galaxy? In other words, do they carry enough energy and momentum to be able to heat up gas, shut down star formation and therefore impact the evolution of galaxies? What are the energy, momentum and mass outflow rates from such systems? These are the main broad questions that this paper will address.
This work employs numerical simulations for studying the global, multidimensional physics of hot accretion flows. More specifically, here we perform global two-dimensional hydrodynamical simulations of RIAFs around non-spinning BHs, with the goal of investigating in a self-consistent way the winds produced by accreting SMBHs such as those that inhabit the centers of nearby galaxies, and the possible feedback effects in their environment.
Since we wanted to keep the simulation conditions as general as possible, we considered only a Schwarzschild BH and did not assume initial conditions with particular magnetic topologies (such as e.g. \citealt{Narayan2012}), keeping the simulation purely hydrodynamic. Because the BHs in our models are not spinning, we will not have energy extraction from Kerr spacetime and hence no Blandford-Znajek driven polar jets \citep{Blandford1977}. This is by design, since we know that jets occur in only $\approx 10\%$ of AGNs \citep{Kellermann1989}--therefore they cannot account for AGN feedback in the vast majority of quiescent galaxies--and they are also collimated and therefore may not interact efficiently with the interstellar medium.
Technically, the novelty of this work compared to many previous numerical simulations of hot accretion flows in the literature is the following:
(i) some of our models are the longest running simulations of RIAFs so far produced, with durations of up to $8 \times 10^5 GM/c^3$;
(ii) our models have a large dynamical range, with the initial outer edge of the torus extending to $500 R_S$;
(iii) we explored a prescription for the viscous stress tensor based on GRMHD simulations \citep{Penna2013b};
(iv) in some of our models, we adopted the equilibrium torus solution of \cite{Penna2013}, which corresponds to a more physical initial condition than earlier torus solutions;
(v) finally, we used Lagrangian tracer particles to improve the estimates of quantities associated with the outflows.
The structure of this paper is as follows. In section \ref{sec:methods} we outline the computational methods used to solve the fluid conservation equations, initial conditions, parameter space and techniques used in the analysis. In section \ref{sec:results} we describe the results: the temporal evolution of the flow, amount of energy, momentum and mass outflow rates, geometry, collimation and launching radii of winds, and the density profile of the accretion flow. In section \ref{sec:disc} we contextualize our results, comparing our simulated accretion flows and outflows with observations of LLAGNs and AGN feedback, and also with previous numerical models. Finally, we conclude with a summary and perspectives in section \ref{sec:end}.
Readers interested in the density profiles of the hot accretion flow can skip to section \ref{subsec:accretion-flow}. Those interested in the outflow properties and feedback efficiency should go to sections \ref{subsec:eff}, \ref{subsec:lag-part}. The comparisons with observations of LLAGNs, Sgr A* and AGN feedback can be found in section \ref{subsec:obs}.
\section{Methods}\label{sec:methods}
\subsection{Computational method} \label{subsec:comp-method}
In this work, we aim at simulating the evolution of thick accretion flows around black holes. We are particularly interested in understanding the origin and development of subrelativistic winds from black holes, for which the extraction of spin energy from the black hole is thought to be not so important--as opposed to relativistic jets. For this reason, we considered only a Schwarzschild black hole and adopted a Newtonian hydrodynamical (HD) treatment, describing the black hole gravity in terms of the pseudo Newtonian potential (\citealt{Paczynsky1980}; cf. section \ref{subsec:equations}).
We performed our numerical simulations with the Eulerian \code{PLUTO} code\footnote{We used version 4.2 of the \code{PLUTO} code, commit \code{8ffd30330ecf91f08ca6cda5f9e61492cae55e3e} available at \url{https://github.com/black-hole-group/pluto}.} which solves the hyperbolic system of conservation equations of Newtonian fluid hydrodynamics using the finite volume approach based on a Godunov-type scheme \citep{Mignone2007}. We did not take into account electromagnetic fields explicitly; instead, we incorporated the energy and angular momentum dissipation expected due to the magnetorotational instability (MRI; \citealt{Balbus2003}) by means of an appropriate viscous stress tensor (cf. section \ref{subsec:equations}).
We adopted units such that $GM = 1$ and the Schwarzschild radius is unitary, $R_S \equiv 2GM/c^2 = 1$ (i.e. $c= \sqrt{2}$). Length and time in this paper are given in units of $R_S$ and $GM/c^3$, respectively. $R$ corresponds to the radius in cylindrical coordinates, $r$ in spherical ones.
Our simulations run for a very long time, since we are interested in the \textit{global} dynamics of the accretion flow and winds. We can make a rough estimate of the simulation duration necessary for the flow state to converge. The basic idea is that we expect the flow to reach a steady state equilibrium on a timescale comparable to the viscous time $t_{\rm visc}$. The simple self-similar ADAF model \citep{Narayan1994} gives us useful scalings, according to which the viscous time at a radius $r$ is given by
\begin{equation}
t_{\rm visc} = \frac{r}{v_r} \sim \frac{t_{\rm ff}}{0.5 \alpha}
\end{equation}
where $\alpha$ is the Shakura-Sunyaev viscosity parameter and $t_{\rm ff}$ is the free-fall timescale. This simple model indicates that in order for a parcel of gas located at $r=500 R_S$ in the disk to achieve inflow equilibrium, it would take an amount of time $t \sim t_{\rm visc} = 2 \times 10^5 GM/c^3$ for $\alpha=0.3$. Therefore, our simulations need to have a comparable duration in order to ensure that the flow achieves inflow equilibrium (i.e. convergence) in at least part of the domain, thus justifying the long running times. The running time of the simulations varied between $4 \times 10^4$ and $8 \times 10^5 GM/c^3$, depending on how promising we found a specific simulation to be in terms of wind launching. An analysis of the radial extension within which the models attained inflow equilibrium is presented in appendix \ref{app:convergence}.
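For concreteness, this estimate can be reproduced in a few lines. The sketch below is a back-of-the-envelope calculation, assuming the convention $t_{\rm ff} = r/v_K$ with $v_K = \sqrt{GM/r}$ for the free-fall timescale; it is not part of the simulation code.
\begin{verbatim}
import numpy as np

def t_visc_gm_c3(r_rs, alpha):
    """Rough viscous time at radius r (in R_S), in units of GM/c^3.

    Self-similar ADAF scaling: t_visc ~ t_ff / (0.5 alpha), taking
    t_ff = r / v_K with v_K = sqrt(GM/r). In code units GM = 1,
    R_S = 1 and c = sqrt(2), so one code time unit equals
    2*sqrt(2) GM/c^3.
    """
    t_ff_code = r_rs ** 1.5                  # r / sqrt(1/r)
    return (t_ff_code / (0.5 * alpha)) * 2.0 * np.sqrt(2.0)

# Gas parcel at the torus outer edge, alpha = 0.3
print(f"t_visc ~ {t_visc_gm_c3(500.0, 0.3):.1e} GM/c^3")  # ~2e5
\end{verbatim}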
Our black hole accretion flow simulations have the longest duration to date, to our knowledge. The long durations of our models imply that they are usually quite computationally expensive. For this reason, we have chosen to restrict the dimensionality of our models to only two dimensions.
\subsection{Equations set}\label{subsec:equations}
The set of equations describing hydrodynamic accretion flows was presented in \cite{Stone1999}; these equations are reproduced below:
\begin{align}
& \frac{d\rho}{dt} + \rho {\nabla} \cdot \mathbf{v} = 0\text{,} \label{mass-conservation} \\
& \rho\frac{d\mathbf{v}}{dt} = -{\nabla}P - \rho {\nabla} \psi + {\nabla} \cdot \mathbf{T}\text{,} \label{momentum-conservation} \\
& \rho\frac{d(e/\rho)}{dt} = -P{\nabla} \cdot \mathbf{v} + \frac{\mathbf{T}^2}{\mu}\text{.}
\label{energy-conservation}
\end{align}
In equations \eqref{mass-conservation} - \eqref{energy-conservation}, $\rho$ is the density, $\mathbf{v}$ is the velocity, $P$ is the pressure, $e$ is the internal energy density, $\psi$ is the gravitational potential and $\mathbf{T}$ is the anomalous stress tensor. We adopted the pseudo Newtonian potential $\psi = -GM/(r-R_S)$, which incorporates the basic features of the Schwarzschild geometry \citep{Paczynsky1980}.
In order to incorporate angular momentum transport that mimics MRI, we followed \cite{Stone1999} and assumed that the non-azimuthal components of $\mathbf{T}$ are zero; the non-zero terms of $\mathbf{T}$ are, in spherical-polar coordinates ($r, \theta, \phi$):
\begin{align}
& T_{r\phi} = \mu r \frac{\partial}{\partial r}\left( \frac{v_{\phi}}{r} \right)\text{,} \label{T-radial} \\
& T_{\theta\phi} = \frac{\mu \sin \theta}{r} \frac{\partial}{\partial \theta}\left( \frac{v_{\phi}}{\sin \theta} \right)\text{,} \label{T-phi}
\end{align}
where $\mu = \nu \rho$ is the viscosity coefficient and $\nu$ is the kinematic viscosity \citep{Landau1959}. For astrophysical systems, the plasma microphysics details are still an open question. The wind production is heavily dependent on how the angular momentum is transferred across the accretion disc. The effective viscosity generated by MRI is not well constrained, depending on the initial magnetic field topology. In this work we explored two different prescriptions for the viscous stress by adopting different parameterizations for $\nu$:
\begin{enumerate}
\item $\nu = \alpha r^{1/2}$ which corresponds to the ``K-model'' in \cite{Stone1999}. We will refer to this $\nu$-scaling as ST.
\item $\nu = \alpha c_s^2/ \Omega_K$ following \cite{Shakura1973}. We will refer to this parameterization as SS.
\end{enumerate}
In the above expressions, $\Omega_K$ is the Keplerian angular velocity and $c_s$ is the sound speed. The $\alpha$ parameter is the usual Shakura-Sunyaev $\alpha$-parameter for accretion discs \citep{Shakura1973}, which we allow to vary in the range $0.01 \leq \alpha \leq 0.3$. Note that, strictly speaking, the correspondence between the $\alpha$ here and the ``Shakura-Sunyaev $\alpha$'' is exact only in the SS model. By definition, $\alpha$ is related to the turbulence scales in the system \citep{Shakura1973}. Since $\alpha$ is a multiplicative factor in the expression for $\nu$, it is connected to the efficiency of angular momentum transport. This is a traditional parameter explored in the literature of accretion flow simulations \citep{Stone1999, Yuan2012b}.
We also explored a model in which $\alpha$ varies with radius, i.e. $\alpha=\alpha(r)$, inspired on \cite{Penna2013b}. Penna et al. obtained an analytical approximation to $\alpha(r)$ that reproduces well the numerical GRMHD simulations of RIAFs, which we reproduce here:
\begin{equation}
\begin{aligned}
\alpha(r) =
\begin{cases}
\frac{1}{40} \left( \frac{1-\frac{1}{r}}{1 - \frac{1.5}{r}}\right)^6, & r > 3R_S \\
0.140466, & r < 3R_S
\end{cases}
\end{aligned}
.
\label{alpha-penna}
\end{equation}
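The three viscosity parameterizations above are simple enough to sketch in code. The following is an illustrative transcription, not the actual \code{PLUTO} implementation; the function names are ours.
\begin{verbatim}
import numpy as np

def nu_st(r, alpha):
    """ST prescription (the K-model of Stone et al. 1999):
    nu = alpha r^{1/2}, with r in units of R_S."""
    return alpha * np.sqrt(r)

def nu_ss(c_s, omega_k, alpha):
    """SS prescription (Shakura & Sunyaev 1973):
    nu = alpha c_s^2 / Omega_K."""
    return alpha * c_s**2 / omega_k

def alpha_penna(r):
    """Radius-dependent alpha(r) of eq. (alpha-penna), r in R_S.
    The two branches are continuous at r = 3 R_S."""
    r = np.asarray(r, dtype=float)
    return np.piecewise(
        r, [r > 3.0],
        [lambda x: 0.025 * ((1.0 - 1.0 / x) / (1.0 - 1.5 / x)) ** 6,
         0.140466])
\end{verbatim}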
\subsection{Initial conditions and grid}\label{subsec:init-cond}
Our initial condition consists of a rotating HD torus in dynamical equilibrium with a specific angular momentum profile $l(R)$. The torus' inner edge is located at $R_{\rm in} = 5-20R_S$--this range is due to numerical reasons and the specific choice of $R_{\rm in}$ depends on $l(R)$--and the outer edge at $R_{\rm out} \approx 500 R_S$. The radius of maximum density $R_0$ was varied in our models in the range $R_0 \approx 10-25 R_S$ depending on the $l(R)$ model adopted, bound by the values of $R_{\rm in}$ and $R_{\rm out}$. Our torus is quite large--larger than in most simulations, which usually begin with a torus ending at $\approx 40 R_S$ (e.g., \citealt{Moscibrodzka2013, Porth2017})--since we are interested both in the density profile up to larger scales and in whether winds are launched at larger radii from the disk. In this work we defined the total torus mass $M_0$ as $M_0 = \int \rho(\mathbf{r}, t=0)dV$, with the following normalization: $\max (\rho) = 1$. For our system $M_{\rm BH} \gg M_0$, so we neglected any effects from torus self-gravity.
The profile $l(R)$ describes the rotation of the initial torus. In this sense, it determines how kinetic energy is initially stored in the system--since the initial condition is stationary, with $v_r(t=0) = 0$ and $v_z(t=0) = 0$. We explored two $l(R)$-profiles in our simulations, both depending only on the cylindrical radius $R$:
\begin{enumerate}
\item Power-law scaling $l(R) \propto R^a$, where $0 \leq a < 0.5$. \cite{Papaloizou1984} reported a full analysis of the $a=0$ case. Here, we considered three different values of $a$: 0.0, 0.2 and 0.4.
\item The piecewise $l(R)$ scaling proposed by \cite{Penna2013}, adapted to a non-relativistic torus: $l={\rm constant}$ for $R<21R_S$ and $l(R)=0.71 l_K$ for $R \geq 21R_S$, where $l_K$ is the Keplerian specific angular momentum.
\end{enumerate}
For the power-law profile, the value of $a$ is limited to $a<0.5$; for $a=0.5$ one would have an infinitely thin disc, and increasing the value of $a$ leads to thinner initial discs. Previous works \citep{Stone1999, Yuan2012} used discs with $a=0$ to initialize the RIAF. However, the initial state of SMBH accretion, such as the velocity field in the innermost regions, is unknown. For this reason, we explored different $l(R)$ profiles in our simulation sample. Intermediate values of $a$ were allowed in order to understand the influence of rotation in RIAFs (all our profiles presented sub-Keplerian motion). In addition to the power-laws, we studied the profile proposed by \cite{Penna2013}, which is an initial condition used in the RIAF simulation literature \citep{Narayan2012}. The choice of $l(R)$ is important to the energy balance of the system and the total energy initially available. Higher values of angular momentum lead to higher values of the Bernoulli parameter, and high rotation is naturally a more fertile terrain for wind production, since it is easier to produce $Be > 0$. We are interested in any feature that can modify the outflow dynamics, as it is known that different initial setups for rotation can lead to different results in accretion flow simulations \citep{Bu2013}.
The four tori described above are shown in Figure \ref{fig:initial-conditions}. As can be seen in this figure, $a$ has the effect of changing the torus thickness. The reason why we explored models with $a>0$ is that we wanted to initialize models with a torus thickness $H \sim R$, as expected for RIAF models \citep{Yuan2014}, where $H$ is the scale height.
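As an illustration, the two families of rotation profiles can be written down in a few lines. The sketch below takes the Newtonian Keplerian value for $l_K$ and assumes continuity at the break radius, which is our reading of the \cite{Penna2013}-inspired profile; it is illustrative only.
\begin{verbatim}
import numpy as np

def l_powerlaw(R, a, l0=1.0):
    """Power-law profile l(R) = l0 R^a, with 0 <= a < 0.5."""
    return l0 * R ** a

def l_penna_like(R, R_break=21.0):
    """Piecewise profile adapted from Penna et al. (2013):
    constant inside R_break (in R_S), 0.71 l_K outside, with
    l_K = sqrt(GM R) and GM = 1 in code units. The constant is
    chosen so the profile is continuous at R_break."""
    l_out = 0.71 * np.sqrt(R)
    l_in = 0.71 * np.sqrt(R_break)
    return np.where(R < R_break, l_in, l_out)
\end{verbatim}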
\begin{figure*}
\vspace{\fill}
\noindent
\makebox[\textwidth]{
\subfigure[][$l(R) = $ constant]{\includegraphics[width=\linewidth/4]{figures/a00r.png}}
\subfigure[][$l(R) \propto R^{0.2}$]{\includegraphics[width=\linewidth/4]{figures/a02r.png}}
\subfigure[][$l(R) \propto R^{0.4}$]{\includegraphics[width=\linewidth/4]{figures/a04r.png}}
\subfigure[][$l(R)$ inspired on \cite{Penna2013}]{\includegraphics[width=\linewidth/4]{figures/penr.png}}
}
\caption{Torus density distribution for the four specific angular momentum profiles considered.}
\label{fig:initial-conditions}
\vspace{1cm}
\end{figure*}
Regarding the computational domain, we used a fixed mesh and our grid extends to a large radius, $10^4 R_S$--more than an order of magnitude larger than the outer radius of the disc--in order to avoid undesirable boundary effects. Our grid is uniformly distributed in $\log_{10}$(radius) with 400 cells; as such, the inner regions have a higher resolution. The radial computational domain begins at $1.25 R_S$. We adopt outflow boundary conditions at the inner and outer radii.
To avoid numerical errors, the grid is restricted to $2 \degree \leq \theta \leq 178 \degree$. In the $\theta$-direction, we defined two regions with a different number of cells in each, such that we have fewer cells near the grid poles (Figure \ref{fig:grid}). The regions are separated according to the values of $\theta$:
\begin{equation}
N_{\rm cells}\ {\rm in}\ \theta{\rm -direction} =
\begin{cases}
10, & \text{if } \theta < 15^{\circ} \ \text{or} \ \theta > 165^{\circ} \\
180, & \text{if} \ 15^{\circ} \leq \theta \leq 165^{\circ}
\end{cases}
.
\end{equation}
The reason why we decreased the spatial resolution near the poles is that we do not expect any significant activity to occur in this region. We have therefore chosen to concentrate the resolution away from the poles, where we expect the development of the accretion flow and wind. If we were simulating the flow around a Kerr BH, then we would expect to have a Poynting flux-dominated jet filling the polar regions. However, since we are dealing with a Schwarzschild black hole, our grid choice is appropriate.
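A minimal sketch of this grid construction follows. We read the polar prescription as 10 cells in each polar cap; the variable names are illustrative and this is not the actual \code{PLUTO} grid definition.
\begin{verbatim}
import numpy as np

# Radial grid: 400 cells, log-uniform from 1.25 R_S to 1e4 R_S
r_edges = np.logspace(np.log10(1.25), np.log10(1.0e4), 401)

# Polar grid: coarse caps near the poles, fine in the main region
cap1 = np.linspace(np.radians(2.0),   np.radians(15.0),  11)  # 10 cells
mid  = np.linspace(np.radians(15.0),  np.radians(165.0), 181) # 180 cells
cap2 = np.linspace(np.radians(165.0), np.radians(178.0), 11)  # 10 cells
theta_edges = np.unique(np.concatenate([cap1, mid, cap2]))

print(len(r_edges) - 1, "radial cells,",
      len(theta_edges) - 1, "polar cells")
\end{verbatim}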
\begin{figure}
\center
\includegraphics[width=\linewidth]{figures/grid.png}
\caption{Grid used in the simulations.}
\label{fig:grid}
\end{figure}
\subsection{Lagrangian particle tracking}\label{subsec:traj-app}
One technique that we used to identify and characterize outflows--in addition to analyzing the evolution of the mass and energy fluxes across our mesh-based simulations--was to introduce ``tracer'' particles which are passively advected with the fluid flow and thereby track its Lagrangian evolution, allowing the thermodynamical history of individual fluid elements to be recorded. This technique is called Lagrangian particle tracking and has been used to make sense of several astrophysical simulations (e.g. \citealt{Enslin2002, Dubois2012, Genel2013, Yuan2015}). It is particularly useful in our simulations, since it does not rely on the Bernoulli parameter--an indirect way of assessing whether outflows were produced--and is therefore a more appropriate outflow measure.
We implemented the traditional scheme in which the tracer particles are massless particles advected in space using the local velocity field \citep{Harlow1965}. To obtain the trajectories of the particles, we solve the differential equation
\begin{equation}
\frac{d \mathbf{x_p}}{dt} = \mathbf{v_f}(\mathbf{x_p}, t)
\end{equation}
where $\mathbf{x_p}(t)$ is the particle position and $\mathbf{v_f}$ is the fluid velocity at the position $\mathbf{x_p}$. With the velocities from the simulation data at a particular time $t$, we can advance the position of the tracer particle to $t + \Delta t$; this scheme is accurate to first order, limited by the time resolution of the simulation.
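A minimal sketch of this advection step is shown below. The interpolator construction is only indicated in a comment; the names and the use of \code{scipy} are illustrative, not a description of our actual implementation.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def advect_tracers(pos, vr_interp, vth_interp, dt):
    """First-order Euler step for tracer particles.

    pos: (N, 2) array of (r, theta) positions
    vr_interp, vth_interp: callables returning the fluid velocity
        components at arbitrary (r, theta) points
    dt:  time step, limited by the snapshot cadence
    """
    v_r  = vr_interp(pos)
    v_th = vth_interp(pos)
    new = pos.copy()
    new[:, 0] += v_r * dt                  # dr     = v_r dt
    new[:, 1] += (v_th / pos[:, 0]) * dt   # dtheta = (v_theta / r) dt
    return new

# The interpolators would be built from each snapshot, e.g.:
# vr_interp = RegularGridInterpolator((r_grid, th_grid), v_r_snap)
\end{verbatim}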
The simulation time step $\Delta t$ was chosen to be sufficiently short--approximately the orbital Keplerian period $t_K$ at $R \approx 8R_S$--such that the distance a fluid element is able to cover over a timescale $t_K$ is much smaller than the size of the disc, $v \Delta t \ll R_{\rm out}$, where in this context $v$ is a typical fluid velocity.
In order to assess whether outflows are produced in a given simulation and--in case there is an outflow--to quantify its properties, we used a set of 1250 tracer particles. We started the particle tracking at the moment when the fluid reached a stationary net mass accretion rate, i.e. when the value of $\dot{M}_{\rm acc}(R_{\rm in}, t)$ (cf. equation \ref{mdot-in}, Figure \ref{fig:acc-time}) becomes roughly constant; we defined this moment as $t_0$. The particles were initially uniformly distributed in the $r-\theta$ space, separated in $r$ by steps of $4.3R_S$ and in $\theta$ by steps of $6.25\degree$. The region occupied by the particles was delimited by the ranges $R=40R_S-250R_S$ and $\theta=15\degree - 165\degree$--the distribution had 50 particles along the $r$-coordinate and 25 particles along the $\theta$-coordinate. For $t>t_0$, we let the particles be advected by the flow and monitored their positions with time.
In this work we adopted two criteria for identifying whether a tracer particle is part of an outflow. Firstly, since we are only interested in the properties of winds, we reject particles which are located near the poles--the domain of the relativistic jet if we had a Kerr BH--or in the accretion disc. In order to perform this rejection, one straightforward approach is to consider only particles within a limited range of polar angles. Here, we consider as outflowing particles only those which have reached $15\degree \leq \theta \leq 45\degree$ or $135\degree \leq \theta \leq 165\degree$ at the end of the simulation, following the results of \cite{Sadowski2013}, who find that subrelativistic winds are limited to a similar range (cf. also \citealt{Moscibrodzka2014}).
Secondly, based on the final radius $r_{\rm final}$ of the particle we have defined two types of outflow:
\begin{enumerate}
\item If $R(t=t_0) < r_{\rm final} < 500R_S$, we call this ``simple outflow'', i.e. the particle was not accreted but also did not reach very far away.
\item If $r_{\rm final} > 500R_S$, we call it ``real outflow'', i.e. the particle reaches a distance larger than the maximum radius of the original torus ($r_{\rm final} > R_{\rm out}$).
\end{enumerate}
We adopted this criterion because it is a simple estimator of whether the particle was able to reach a sufficiently large distance from the black hole. In our tests, particles that reach distances of $\sim 500R_S$ usually have $Be>0$---i.e. the particle is likely unbound---since the gravitational binding energy decreases with distance and the kinetic energy and enthalpy dominate over gravitational energy.
Based on the final velocity of the outflowing particles we further defined (the combined selection is sketched in code after this list):
\begin{enumerate}
\item If $v_r > 0$ we considered the particle as a true outflowing particle.
\item If $v_r < 0$, we call these particles ``fallback particles'', since they were ejected once but eventually returned towards the BH.
\end{enumerate}
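Putting the angular, radial and velocity criteria together, a minimal classification routine could look as follows; the function name and the return labels are illustrative.
\begin{verbatim}
def classify_particle(theta_final, r_final, v_r_final,
                      r_start, r_max=500.0):
    """Classify a tracer particle at the end of the run.

    theta_final in degrees; r_final, r_start, r_max in R_S.
    """
    in_wind_cone = (15.0 <= theta_final <= 45.0) or \
                   (135.0 <= theta_final <= 165.0)
    if not in_wind_cone:
        return "excluded"          # jet or disc region
    if v_r_final < 0.0:
        return "fallback"          # ejected once, falling back
    if r_final > r_max:
        return "real outflow"      # beyond the initial torus edge
    if r_final > r_start:
        return "simple outflow"    # moved out, not past 500 R_S
    return "not ejected"
\end{verbatim}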
Following these criteria, the ``wind region'' is illustrated in Figure \ref{fig:zones}; particles that get outside the red circle are presumably part of a wind launched by the black hole. Results from GRMHD simulations support the basic aspects of this picture \citep{Sadowski2013, Moscibrodzka2014}.
\begin{figure}
\noindent
\includegraphics[width=\linewidth]{figures/zones-1.png}
\caption{Schematic drawing of the different regions of the flow. The jet region is defined as a region near the pole with a 15\degree opening, and the disc region as a region near the equator with a 45\degree opening; all material ejected in these two regions is excluded from our analysis of outflows, because in nature these regions are believed to be dominated by the jet and the accretion disc, respectively. We considered only the region between both, which we call the wind region, represented in blue. The red solid line is the outflow limit that we have defined: all material that is in the wind region and beyond the red line was classified as real outflow. The pink solid region is a representation of our initial torus.}
\label{fig:zones}
\end{figure}
\subsection{Simulation setup}\label{subsec:sim-setup}
We performed a total of 11 simulations exploring the variation of three main properties of the flow: the specific angular momentum profile $l(R)$, the viscosity prescription $\nu$ and the value of $\alpha$; the parameter space of simulations is summarized in Table \ref{tab:simulations}. It is important to investigate different $l(R)$-profiles, since the actual rotation curve of RIAFs in nature is not known. In particular, we do not know the initial conditions of SMBH accretion in low-luminosity AGNs, and the long-term evolution of the accretion flow and possible winds could be dependent on these initial conditions, which is an incentive not to be too conservative in choosing the parameters of our numerical experiments.
\begin{table}
\centering
\begin{tabular}{llcccc}
\textbf{\#ID} &\textbf{Name} & \textbf{$l(R)$} & \textbf{$\nu$} & \textbf{$\alpha$} & \textbf{Duration $10^5 \left[ \frac{GM}{c^3} \right]$} \\ \hline
00 & PNST.01 & Penna2013 & ST & 0.01 & 8.0 \\
01 & PNST.1 & Penna2013 & ST & 0.1 & 0.9 \\
02 & PNSS.1 & Penna2013 & SS & 0.1 & 4.5 \\
03 & PNSS.3 & Penna2013 & SS & 0.3 & 3.3 \\
04 & PNSSV & Penna2013 & SS & $\alpha (r)$ & 3.8 \\
05 & PL0ST.1 & $a=0.0$ & ST & 0.1 & 0.8 \\
06 & PL0SS.3 & $a=0.0$ & SS & 0.3 & 2.1 \\
07 & PL2SS.1 & $a=0.2$ & SS & 0.1 & 1.4 \\
08 & PL2SS.3 & $a=0.2$ & SS & 0.3 & 2.1 \\
09 & PL2ST.1 & $a=0.2$ & ST & 0.1 & 0.4 \\
10 & PL4ST.01 & $a=0.4$ & ST & 0.01 & 1.7 \\ \hline
\end{tabular}
\caption{List of the numerical simulations performed in this work. The second column refers to the specific angular momentum. ``Penna2013'' refers to the torus described in \citealt{Penna2013} and the others are related to a power-law form $l(R) \propto R^a$ (see section \ref{subsec:init-cond});
the $\nu$ and $\alpha$ columns refer to the adopted viscosity profile and the dimensionless coefficient (see section \ref{subsec:equations}).
}
\label{tab:simulations}
\end{table}
The other two parameters--$\nu$ and $\alpha$--are responsible for the angular momentum transport that allows accretion to proceed. We described the two parameterizations of $\nu$ that we adopted in section \ref{subsec:equations}. We expect the long-term behavior of the flow to strongly depend on the functional form of $\nu$. Moreover, $\alpha$ regulates the strength of the angular momentum removal as in the classical Shakura-Sunyaev solution. We chose values of $\alpha$ consistent with estimates from global and shearing-box simulations of the MRI process in BH accretion flows (cf. \citealt{Penna2013b} for a review).
As argued in section \ref{subsec:comp-method}, we ran the simulations for a long time--comparable to the viscous time at large radii in the disc--in the hope that a considerable part of the accretion flow converges. The individual duration of each model was chosen based on how interesting we found it in terms of wind production. Models that did not show clear signs of winds were not allowed to develop for a long time (e.g. model 05). On the opposite end, models 02-04 had very long running times $\gtrsim 3 \times 10^5 GM/c^3$ and PNST.01 had an extremely long running time of $\sim 8 \times 10^5 GM/c^3$, which is the longest BH accretion flow simulation produced to date, to our knowledge\footnote{The previous longest-duration simulation is the three-dimensional GRMHD model of a RIAF performed by \cite{Chan2015}, which ran for $2.3\times 10^5 \ GM/c^3$.}.
\section{Results}\label{sec:results}
In this section, we present the results from the analysis of our numerical simulations. In subsections \ref{subsec:accretion-flow} and \ref{subsec:lag-part}, we present in detail the results for two of our models which illustrate the diversity of emergent behaviors, both in terms of initial conditions and the intensity of the resulting outflows: PNST.01 (very weak outflows) and PL2SS.3 (strong outflows). These two models are distinguished from the others because they reached inflow equilibrium up to large radii---in fact, they have the largest convergence radii among the models (cf. Appendix \ref{app:convergence}). In section \ref{subsec:integ} we present a holistic picture of the results from all our simulations.
In appendix \ref{subsec:other} we discuss the other simulations, which presented varying wind strengths and convergence radii.
\subsection{Accretion flow properties}\label{subsec:accretion-flow}
Figure \ref{fig:dens-maps} shows snapshots of the density maps of models PNST.01 and PL2SS.3 at different times. Model PNST.01 presented a ``diffusion-like'' shape and volume expansion of the torus, though not as dramatic as in model PL2SS.3. The bottom panel shows stronger ejection than the top one, with the formation of bipolar outflows and the torus shape becoming quite disturbed compared to its initial state. Model PL2SS.3--together with PL2SS.1--were the simulations that presented the strongest outflows. In these snapshots, we can see fluid elements being ejected to distances $\gtrsim 500R_S$--the initial equatorial outer edge of the torus.
\begin{figure*}
\center
\subfigure[][PNST.01]{\includegraphics[width=.7\linewidth]{figures/density-map-00.png}}
\qquad
\subfigure[][PL2SS.3]{\includegraphics[width=.7\linewidth]{figures/density-map-08.png}}
\caption{Snapshots of the density map for the main simulations where the color corresponds to $\log \rho(\mathbf{r})$. Here we can see how the torus evolves and changes its shape as time advances; in particular, we can see outflowing material reaching distances further than $500 R_S$.}
\label{fig:dens-maps}
\end{figure*}
Figures \ref{fig:avg-dens} and \ref{fig:avg-temp} show the velocity field and the ion temperature distribution in our models. From the velocity field displayed, we can see that there is strong turbulence occurring in the accretion flow. From the bottom panel, we can see that the temperatures are quite high, as expected for RIAFs, ranging from $10^{9}$ K near the equator to $\lesssim 10^{12}$ K towards the low-density regions in the corona and outflows. In the artificial atmosphere of the simulation (the white region in the plot) the temperature is even higher, reaching $10^{13}$ K, but this region has extremely low density and should not be taken into account in the analysis.
\begin{figure*}
\center
\includegraphics[width=\linewidth]{figures/dens-vel-map-t=160kr.png}
\caption{Snapshots of the main simulations taken at $t \approx 160000 GM/c^3$. Here is the inner part of the accretion flow ($r < 200R_S$), the color corresponds to $\log \rho(\mathbf{r})$ and the blue arrows represent the velocity field.}
\label{fig:avg-dens}
\end{figure*}
\begin{figure*}
\center
\includegraphics[width=\linewidth]{figures/temp-map-t=160kr.png}
\caption{Snapshots of the main simulations taken at $t \approx 160000 GM/c^3$. Here the color corresponds to $\log T(\mathbf{r})$. The white area corresponds to the low-density atmosphere around the initial torus. In these plots we observe the accretion disc surrounded by a hotter corona. The expelled material in PL2SS.3 is considerably hotter than the disc.}
\label{fig:avg-temp}
\end{figure*}
Following \cite{Stone1999}, we defined the accretion rate as the flux of material through a surface of radius $r$. We denoted $\dot{M}_{\rm in}$ the mass \textit{inflow} rate and $\dot{M}_{\rm out}$ the mass \textit{outflow} rate, which are defined as
\begin{align}
& \dot{M}_{\rm in}(r) = 2\pi r^2\int_0^{\pi} \rho \text{ min}(v_r, 0) \sin \theta d\theta, \label{mdot-in} \\
& \dot{M}_{\rm out}(r) = 2\pi r^2\int_0^{\pi} \rho \text{ max}(v_r, 0) \sin \theta d\theta.
\label{mdot-out}
\end{align}
The net mass accretion rate is
\begin{equation}
\dot{M}_{\rm acc} = \dot{M}_{\rm in} + \dot{M}_{\rm out}.
\end{equation}
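The integrals in eqs. \ref{mdot-in}-\ref{mdot-out} are straightforward to evaluate on the simulation grid. A minimal sketch, with illustrative names and a simple quadrature using the cell widths, is given below.
\begin{verbatim}
import numpy as np

def mass_fluxes(r, theta, rho, v_r):
    """Inflow/outflow rates through the sphere of radius r.

    theta: 1D polar grid (radians); rho, v_r: 1D profiles on that
    grid at fixed r. Returns (Mdot_in, Mdot_out, Mdot_acc) in code
    units, following eqs. (mdot-in) and (mdot-out).
    """
    dtheta = np.gradient(theta)              # cell widths
    w = 2.0 * np.pi * r**2 * rho * np.sin(theta) * dtheta
    mdot_in  = np.sum(w * np.minimum(v_r, 0.0))
    mdot_out = np.sum(w * np.maximum(v_r, 0.0))
    return mdot_in, mdot_out, mdot_in + mdot_out
\end{verbatim}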
Figure \ref{fig:acc-time} shows the net mass accretion rate calculated at the inner boundary of the simulation--which represents the event horizon\footnote{Note that since this is a Newtonian simulation, properly speaking we cannot define a perfectly absorbing event horizon boundary.}. Each line represents a different simulation. In this plot it is very clear that the viscosity profile has a strong impact on the mass accretion rate; for instance, simulations with the SS-viscosity have much weaker mass accretion rates. The accretion rates for PNST.01 and PL2SS.3 reach, respectively, mean values of $10^{-6.5}$ and $10^{-8}$--$10^{-9}$ in units of $M_0c^3/GM$, where $M_0$ is the torus initial total mass.
\begin{figure}
\noindent
\includegraphics[width=\linewidth]{figures/acc-timer.png}
\caption{Net mass accretion rate near the inner boundary of the simulation, $r = 1.5R_S$. Each line represents a different simulation; PNST.01 is the black solid line and PL2SS.3 is the dot-dashed blue line.}
\label{fig:acc-time}
\end{figure}
In figure \ref{fig:acc-profiles}, we show the radial dependence of the mass flux rates in the accretion flow; to obtain the mass flux here, we first computed the angle-average between $85\degree - 95 \degree$--i.e. around the equatorial plane--then we computed the time-average using the last 50 states of each simulation.
We find that the most striking difference among the radial dependencies displayed in Figure \ref{fig:acc-profiles} is in the net accretion rates. For instance, in the ST model (top panel) we see a constant $\dot{M}_{\rm acc}$ until it starts to oscillate at a radius $\sim 200 R_S$. Conversely, in the SS simulations we found a constant $\dot{M}_{\rm acc}$ up to $r \sim 30R_S$; for $r \gtrsim 30R_S$, $\dot{M}_{\rm acc}$ increases out to $\sim 500 R_S$ (model PL2SS.3, bottom panel). Furthermore, we see that in model PNST.01 the inflow rate is noticeably larger than the outflow rate, whereas in model PL2SS.3 the two curves closely track each other over most radii of interest.
The inflow rates display a power-law radial dependence in the range $\approx 10-200 R_S$, in agreement with the \textit{ansatz} $\dot{M}_{\rm in} \propto r^s$ originally proposed by \cite{Blandford1999}. We fitted a $\dot{M}_{\rm in} \propto r^s$ curve to our simulation data in the radial range $20-200 R_S$ and the resulting fits are displayed in Figure \ref{fig:acc-profiles}. We find that $s$ ranges between $0.4$ and $2.6$--i.e. the power-law index of the dependence can be even higher than the value of one proposed by \cite{Begelman2012}.
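The index $s$ can be obtained from a linear least-squares fit in log-log space; a minimal sketch, with illustrative names, is given below.
\begin{verbatim}
import numpy as np

def fit_powerlaw_index(r, mdot_in, r_min=20.0, r_max=200.0):
    """Fit |Mdot_in|(r) ~ r^s over [r_min, r_max] (in R_S).

    Returns the best-fit index s from a linear fit of
    log10|Mdot_in| against log10 r.
    """
    mask = (r >= r_min) & (r <= r_max)
    s, _norm = np.polyfit(np.log10(r[mask]),
                          np.log10(np.abs(mdot_in[mask])), 1)
    return s
\end{verbatim}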
\begin{figure}
\center
\subfigure[][PNST.01]{\includegraphics[width=\linewidth]{figures/acc-profile-0r.png}}
\qquad
\subfigure[][PL2SS.3]{\includegraphics[width=\linewidth]{figures/acc-profile-8r.png}}
\caption{Mass flux radial profiles for the two main simulations, angle-averaged around the equatorial plane and time-averaged using the last 50 states of each model. The dash-dotted orange, dashed green and solid blue and red lines correspond to the inflow rate, outflow rate and net accretion rate, respectively. The color of the solid line indicates the dominant flow mode: blue if inflow dominates, red if outflow dominates. The solid black lines indicate the power-law fits to the inflow rates in the $20-200 R_S$ range --shifted upwards for clarity.}
\label{fig:acc-profiles}
\end{figure}
The equatorial density profile in the accretion disc --computed in the same fashion as the mass flux described above-- is shown in Figure \ref{fig:dens-profiles}. As can be seen in the figure, the density is well-approximated by a power-law of the form $\rho \propto r^{-p}$ in the $r=10-300 R_S$ range, with the value of the power-law index $p$ in the range $0.6-1.5$ as indicated for each model in the panels. The resulting power-law dependence of $\rho(r)$ and the fact that $p < 1.5$ are in agreement with the general expectations of the ADIOS model \citep{Blandford1999}. It is also in agreement with previous hydrodynamical simulations \citep{Stone1999, Yuan2012b}. We compare our results with these models in section \ref{subsec:compare-sims}.
\begin{figure}
\center
\subfigure[][PNST.01]{\includegraphics[width=\linewidth]{figures/dens-profile-st.png}}
\qquad
\subfigure[][PL2SS.3]{\includegraphics[width=\linewidth]{figures/dens-profile-a2.png}}
\caption{Density profiles $\rho(r)$ for the two main simulations around the equatorial plane, angle-averaged between $85\degree - 95 \degree$ and taken using the last 50 states of each model. The solid blue line is the density extracted from the simulation, in code units of the defined $\rho_0$. The dashed red line is the power-law fit in the ``linear region'', adopted between $10-300R_S$.}
\label{fig:dens-profiles}
\end{figure}
Finally, we provide a convenient conversion from $\dot{m}$ in code units to physical ones. The conversion is given by
\begin{equation}
\frac{\dot{M}}{\dot{M}_{\rm Edd}} = 3 \times 10^{-4} \left( \frac{M_{0}}{M_{\odot}} \right) \left( \frac{M_{\rm BH}}{10^{8} M_{\odot}} \right)^{-1} \left( \frac{\dot{M}_{\rm sim}}{\rm code \ units} \right),
\label{acc-edd}
\end{equation}
where $M_0$ is the initial torus mass, $M_{\rm BH}$ is the black hole mass and $\dot{M}_{\rm sim}$ is the mass accretion rate in code units from the simulation. This is useful if one wants to read off e.g. the $\dot{m}$ variability values displayed in Fig. \ref{fig:acc-time} in physical units.
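For convenience, equation \eqref{acc-edd} can be wrapped in a one-line helper (a sketch; argument names are illustrative):
\begin{verbatim}
def mdot_eddington_ratio(mdot_sim, M0_solar, MBH_solar=1e8):
    """Mdot/Mdot_Edd from a code-unit rate, following eq. (acc-edd).

    M0_solar: initial torus mass in solar masses;
    MBH_solar: black hole mass in solar masses.
    """
    return 3e-4 * M0_solar * (1e8 / MBH_solar) * mdot_sim
\end{verbatim}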
\subsection{Outflows and the Bernoulli parameter}
Traditionally, the Bernoulli parameter $Be$ -- see equation \eqref{Be-eq} -- has been used as an indicator of the presence of unbound gas in numerical simulations \citep{Narayan1994, Narayan2012, Yuan2012}. For a stationary, laminar flow, $Be$ can be interpreted as a quantity that measures how strongly the gas is gravitationally bound to the central mass: $Be < 0$ indicates a bound particle and $Be > 0$ a particle able to escape to infinity. This is the reason why positive values of $Be$ have been taken as indicating the presence of unbound outflows in numerical simulations of BH accretion. On the other hand, $Be > 0$ does not guarantee that a gas packet will be ejected, since $Be$ can change its sign in a viscous flow, as discussed by \cite{Yuan2015}. In any case, we analyzed the behavior of $Be$ in our models. In our simulations, $Be$ is positive in most parts of the flow with the exception of the innermost parts located at $r \lesssim 50 R_S$.
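As a sketch, $Be$ can be evaluated pointwise as below; we assume here the standard hydrodynamic form (kinetic plus enthalpy plus potential terms), consistent with the energy density adopted in equation \eqref{specific-energy} below:
\begin{verbatim}
def bernoulli(v, p, rho, r, GM=1.0, RS=1.0, gamma=5.0/3.0):
    """Bernoulli parameter per unit mass (assumed standard form).

    Be > 0 flags gas that is formally able to escape to infinity
    in a stationary, inviscid flow; v is the total speed.
    """
    return 0.5 * v**2 + gamma / (gamma - 1.0) * p / rho - GM / (r - RS)
\end{verbatim}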
\subsection{Efficiency of wind production} \label{subsec:eff}
We now present our results related to the energetics of the winds produced in our simulations. Quantifying the energy outflows from SMBHs is instrumental in the understanding of the coevolution between SMBHs and their host galaxies, since the energy deposited by BH winds can potentially offset gas cooling and quench star formation (cf. introduction). From our simulations, we are able to compute separately the energy outflow rate through winds, $\dot{E}_{\rm wind}$, and the mass accretion rate onto the BH, $\dot{M}$. We then defined a ``wind efficiency factor'' $\eta$ as
\begin{equation}
\dot{E}_{\rm wind} = \eta \dot{M} c^2,
\label{eta-efficiency}
\end{equation}
which is the quantity we quote in this paper. Before turning to this efficiency, we need to define what we mean by $\dot{E}_{\rm wind}$ and $\dot{M}$.
Typically, in applications of AGN feedback such as cosmological simulations of galaxy evolution, the authors estimate the feedback power from a mass accretion rate provided to the BH near its Bondi radius $R_{\rm Bondi}$--usually the Bondi accretion rate (e.g. \citealt{Di2005, Sijacki2015}). For consistency with such works, in our simulations we defined $\dot{M}$ in equation \ref{eta-efficiency} as the mass accretion rate at the initial outer radius $R_{\rm out}$ of our accretion flow,
\begin{equation}
\dot{M} \equiv \dot{M}_{\rm in} (R_{\rm out})
\end{equation}
which is computed using equation \ref{mdot-in}. We choose to compute $\dot{M}$ at this radius because in our case this is a more appropriate estimate of the outer accretion rate.
The energy outflow rate was calculated as the surface integral
\begin{equation}
\dot{E}_{\rm wind} = \int \epsilon \max (v_r,0) dA
\label{ewind-definition}
\end{equation}
calculated at $r=R_{\rm out}$ and only within the angle intervals $15\degree \leq \theta \leq 45\degree$ or $135\degree \leq \theta \leq 165\degree$ as defined in section \ref{subsec:lag-part}. With the integral defined in the above equation, when computing the energy rate we automatically consider only fluid elements with $v_r > 0$. Here $\epsilon$ is the energy density, taking into account the kinetic, thermal and gravitational contributions, defined as
\begin{equation}
\epsilon (\textbf{r}) = \rho(\textbf{r}) \frac{v(\textbf{r})^2}{2} + \frac{\gamma}{\gamma -1}p(\textbf{r}) - \frac{GM\rho(\textbf{r})}{R- R_S}.
\label{specific-energy}
\end{equation}
Therefore, $\dot{E}_{\rm wind}$ is the total power (minus rest mass energy) carried by outflowing gas that crosses the spherical surface at $R=R_{\rm out}$, excluding the polar regions and the accretion disc domain.
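A minimal numerical sketch of equations \eqref{ewind-definition}--\eqref{specific-energy} at fixed $r=R_{\rm out}$ (array layout and names are assumptions, not our production code):
\begin{verbatim}
import numpy as np

def wind_power(theta, rho, v, v_r, p, r_out,
               GM=1.0, RS=1.0, gamma=5.0/3.0):
    """Energy outflow rate through r = R_out within the wind cones.

    Inputs are 1D arrays in theta at fixed r = r_out; v is the total
    speed and v_r its radial component.
    """
    eps = (0.5 * rho * v**2 + gamma / (gamma - 1.0) * p
           - GM * rho / (r_out - RS))
    cone = (((theta >= np.radians(15.0)) & (theta <= np.radians(45.0))) |
            ((theta >= np.radians(135.0)) & (theta <= np.radians(165.0))))
    dA = 2.0 * np.pi * r_out**2 * np.sin(theta) * np.gradient(theta)
    return np.sum(eps * np.maximum(v_r, 0.0) * dA * cone)
\end{verbatim}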
Now we are in a position to present the resulting efficiency of wind production. The temporal evolution of $\eta$ for the two main simulations is presented in Figure \ref{fig:eta-winds}. The two simulations show strikingly different behaviors of $\eta(t)$. The strongest winds are found in model PL2SS.3--supporting the conclusion from the density maps in Figure \ref{fig:dens-maps}. For instance, at $t \sim 50000 GM/c^3$ the efficiency peaks at $\eta \approx 1$, i.e. the wind power is comparable to the instantaneous accretion power. Afterwards, $\eta$ drops to a flat value around $10^{-3}$ for the remaining simulation time. For model PNST.01 there is no continuous outflow. Instead, model PNST.01 displays only a modest outflow burst at $t \sim 1.2 \times 10^5 GM/c^3$ with a peak of $\eta \approx 10^{-3}$, lasting for $\Delta t \approx 1 \times 10^4 GM/c^3$. Despite $\eta$'s variability in all models, we did not find any clear periodic oscillation.
\begin{figure}
\noindent
\includegraphics[width=\linewidth]{figures/efficiency-windsr.png}
\caption{Temporal evolution of the wind efficiency $\eta$ as defined in equation \ref{eta-efficiency} for the simulations PNST.01 (solid black line) and PL2SS.3 (dot-dashed blue line).}
\label{fig:eta-winds}
\end{figure}
\subsection{Analysis using tracer particles}\label{subsec:lag-part}
One of the strengths of using the technique of tracer particles (section \ref{subsec:traj-app}) is that we are able to quantify more precisely the amount of mass lost from the disc due to outflows by tracking the amount of mass carried by each particle.
In Figure \ref{fig:LP-mass-energy} we show the mass and energy carried away by the outflowing particles following the criteria defined in section \ref{subsec:traj-app} for the ``real outflow''. We defined the relative fraction of ejected mass $f_{\rm m}$ and the fraction of ejected energy $f_{\rm e}$ as
\begin{align}
f_{\rm m} & = \frac{\rm mass\ in\ tracer\ particles\ lost\ in\ outflows}{\rm total\ mass\ of\ tracer\ particles} \nonumber \\
& = \frac{\sum_k \rho_k(t=t_{\rm final}, r = r_{\rm final}) \delta V \Theta[r_{k}(t=t_{\rm final}) - r_{\rm out}]}{\sum_k \rho_k(t=t_{\rm initial}, r = r_{\rm initial}) \delta V}, \label{mass-ejected-real}
\end{align}
where the sums run over all tracer particles and $\Theta$ is the Heaviside function. The mass of each particle was defined as:
\begin{equation}
m_k (t) = \rho[\textbf{r}_k(t)] \delta V,
\end{equation}
where we assume that all particles occupy the same small volume $\delta V = \rm const$. The specific value that we adopt for $\delta V$ does not matter because when computing $f_{\rm m}$ using equation \ref{mass-ejected-real}, $\delta V$ cancels out. Similarly, we defined the relative fraction of ejected energy $f_{\rm e}$ as
\begin{equation}
\label{energy-ejected-real}
f_{\rm e} = \frac{\sum_k E_k(t=t_{\rm final})\Theta[r_{k}(t=t_{\rm final}) - r_{\rm out}]}{\sum_k E_k(t=t_0)}
\end{equation}
where the energy is defined as $E(\textbf{r}) = \epsilon(\textbf{r}) \delta V$ and $\epsilon$ is the energy density from \eqref{specific-energy}.
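Both fractions reduce to masked sums over the particle arrays; a minimal sketch (names are illustrative, and the common volume factor $\delta V$ is omitted since it cancels):
\begin{verbatim}
def ejected_fractions(rho_init, rho_final, E_init, E_final,
                      r_final, r_out):
    """f_m and f_e of eqs. (mass-ejected-real)/(energy-ejected-real).

    One array entry per tracer particle; the Heaviside selection is
    the boolean mask r_final > r_out.
    """
    ejected = r_final > r_out
    f_m = rho_final[ejected].sum() / rho_init.sum()
    f_e = E_final[ejected].sum() / E_init.sum()
    return f_m, f_e
\end{verbatim}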
\begin{figure}
\noindent
\includegraphics[width=\linewidth]{figures/LP-mass-energy2totr.png}
\caption{Total ejected mass and energy in the main simulations in arbitrary units. Models PNST.01 and PL2SS.3 are displayed as solid black line and dot-dashed blue line, respectively. The x-axis is the launching radius of the particle, and the y-axis is the particle's final mass/energy. The loss of both mass and energy is more pronounced in model PL2SS.3 than in the other one--i.e. the resulting outflows in this model are stronger.}
\label{fig:LP-mass-energy}
\end{figure}
The mass ejection is shown in the upper panel of figure \ref{fig:LP-mass-energy}, whereas the energy ejection is displayed in the bottom panel. From these two plots we can see that the ejected energy roughly follows the same pattern as the mass ejection.
In addition, the behavior of the mass (or energy) loss is similar at all radii. Relative to the original mass (or energy)--see equations \eqref{mass-ejected-real} and \eqref{energy-ejected-real}--the simulations ejected up to 0.2\% (2\%) of the total mass (energy) available, with an average value of 0.03\% (0.2\%). The difference between the mass and energy fractions can be attributed to the different temperatures in the accretion disc and the corona: the particles were heated and accelerated away as the disc's corona thermally expanded, carrying energy outward.
Mass-loss through winds is not uniformly distributed across all radii. In order to quantify how far a particle originating at a certain radius can go, we plot the quantity $r(t_{\rm final})/r(t_0)$--which we will refer to as wind depth henceforth--in Figure \ref{fig:LP-map}. Larger values of the wind depth in a given region of the flow indicate that it can produce outflows that reach large distances. As such, Figure \ref{fig:LP-map} tracks the accretion flow regions where the ejected particles come from. The two panels are labeled for each simulation and we considered only particles that are in the wind region. In model PL2SS.3 we see bipolar outflows, whereas model PNST.01 displays a strange asymmetry --a unipolar outflow-- with all the ejections occurring on the same side, which is unique among the simulations we performed. This behavior is qualitatively similar to the unipolar outflows seen in model G of \cite{Igumenshchev2000} (cf. Fig. 12 in that paper). In model PNST.01, the ejection occurred mainly in the torus corona -- similarly to coronally-driven winds--whereas model PL2SS.3 seems to produce winds from all regions of the disc with a more homogeneous ejection region, with outflows coming even from close to the equator.
\begin{figure*}
\noindent
\makebox[\textwidth]{
\subfigure[][PNST.01 ]{\includegraphics[width=6.5cm]{figures/LP-map-str.png}}
\subfigure[][PL2SS.3]{\includegraphics[width=6.5cm]{figures/LP-map-a2r.png}}
}
\caption{Maps of the wind depth illustrating the regions of the accretion flow from which outflows are produced. Lighter regions eject particles which reach farther distances compared to the darker regions.}
\label{fig:LP-map}
\end{figure*}
An important quantity to analyze in these simulations is the velocity of the ejected particles. The distribution of their velocities is displayed in Figure \ref{fig:LP-velocities}. In the figure we divided the sample into two types of particles: the ones with $v_r > 0$ (blue)--which we refer to as ``outflow'' particles since they are expected to be in outflows--and the ones with $v_r < 0$ (grey), which fall back and can be reincorporated into the accretion disc--the latter are referred to as ``fallback'' particles. PNST.01 had a low rate of outflow particles and is dominated by fallback ones, which reach the highest velocities of the simulation. PL2SS.3 was dominated by outflow particles. Considering only the outflow particles, the average velocities for simulations PNST.01 and PL2SS.3 were respectively $1.6 \times 10^{-3}c$ and $5.0 \times 10^{-3}c$. For all simulations $\overline{v}_{\rm out}$ was in the range 0.001-0.005c. The ejected particles presented nonrelativistic velocities; for instance, the maximum velocity of an individual particle in the simulations did not exceed $0.05c$. All these ejected particles --both ``outflow'' and ``fallback''-- had a positive value of $Be$.
\begin{figure}
\noindent
\subfigure[][PNST.01]{\includegraphics[width=\linewidth]{figures/ID00-velocities-histr.png}}
\qquad
\subfigure[][PL2SS.3]{\includegraphics[width=\linewidth]{figures/ID08-velocities-histr.png}}
\caption{Distribution of velocities of the ejected particles for simulations PNST.01 and PL2SS.3. These histograms display the averaged velocity of the ejected particles in the last $\sim 1000 GM/c^3$ of each simulation. The blue columns represent the population of particles with $v_r > 0$ (outflow), the grey columns the population of particles with $v_r < 0$ (fallback).}
\label{fig:LP-velocities}
\end{figure}
\subsection{Overview of results for all models} \label{subsec:integ}
After the individual analysis of each simulation we proceed to analyze these results as a whole. Table \ref{tab:allmodels} shows the results for all simulations that we computed. In Figure \ref{fig:integrated-mdot} we plot $f_{\rm m}$ as a function of $\dot{m}(1.25R_S)$, i.e. it relates the fraction of mass lost in the wind (cf. equation \ref{mass-ejected-real}) and the net mass accretion rate at the event horizon (more rigorously, at the inner boundary of the simulation). $\dot{m}$ is normalized by the initial torus mass, assuming that all simulations had the same total torus mass in the beginning.
\begin{figure}
\noindent
\includegraphics[width=8.0cm]{figures/mass-ejectedr.png}
\caption{Net mass accretion rate $\dot{m}$ versus the fraction of ejected mass of the simulations. The labels identify the simulations. We divided them into three groups for the analysis, as described in the text. The black dotted line in the center separates the two viscosity regimes adopted: on the left side are the simulations with SS-viscosity, on the right side the ones with ST-viscosity (see section \ref{subsec:equations}).}
\label{fig:integrated-mdot}
\end{figure}
\begin{table*}
\noindent
\centering
\begin{threeparttable}
\begin{tabular}{lccccccccc}
\hline \hline
\multicolumn{1}{c}{Short} &
\multicolumn{1}{c}{Full} &
\multicolumn{1}{c}{$p^{\rm 3}$} &
\multicolumn{1}{c}{$s^{\rm 4}$} &
\multicolumn{1}{c}{$\eta^{\rm 5}$} &
\multicolumn{1}{c}{$\max(\eta)$} &
\multicolumn{1}{c}{Wind} &
\multicolumn{1}{c}{$f_{\rm m}$$^{\rm 7}$} &
\multicolumn{1}{c}{$f_{\rm e}$$^{\rm 8}$} &
\multicolumn{1}{c}{$\overline{v}$$^{\rm 9}$}
\\
name$^{\rm 1}$ & name$^{\rm 2}$ & & & ($\times 10^{-3}$) & & activity & (\%) & (\%) & ($c$) \\
& & & & & & time$^{\rm 6}$ (\%)& & & \\
\hline \hline
00 & PNST.01 & $0.61 $ & $0.43 \pm 0.01$ & 0.0 & 0.014 & 2 & 0.002 & 0.55 & 0.0020 \\
01 & PNST.1 &$0.89 $ & $0.17 \pm 0.01$ & 0.0 & 0.060 & 15 & 0.001 & 0.23 & 0.0062 \\
02 & PNSS.1 &$1.16 $ & $2.55 \pm 0.03$ & 0.2 & 0.22 & 51 & 0.004 & 0.92 & 0.0010 \\
03 & PNSS.3 &$1.16 $ & $2.19 \pm 0.04$ & 0.0 & 0.21 & 46 & 0.002 & 0.53 & 0.0010 \\
04 & PNSSV &$1.16 $ & $2.61 \pm 0.02$ & 1.2 & 0.22 & 53 & 0.0 & 0.79 & 0.0018 \\
05 & PL0ST.1 &$0.97 $ & $-0.11 \pm 0.01$ &0.0 & 0.53 & 13 & 0.0 & 0.0 & -- \\
06 & PL0SS.3 &$0.91 $ & $1.08 \pm 0.04$ & 15$^{10}$&7.6& 98 & 0.0 & 0.0 & -- \\
07 & PL2SS.1 &$1.37 $ & $0.77 \pm 0.05$ & 6.5 & 12& 95 & 0.11 & 0.60 & 0.0028 \\
08 & PL2SS.3 &$1.33 $ & $1.18 \pm 0.04$ & 7.4 & 12& 97 & 0.18 & 0.31 & 0.0045 \\
09 & PL2ST.1 &$1.13 $ & $0.02 \pm 0.01$ & 0.0 & 33& 45 & 0.0 & 0.0 & -- \\
10 & PL4ST.01&$1.53 $ & $0.10 \pm 0.01$ & 7.4 & 10& 56 & 0.019 & 1.77 & 0.0017 \\
\hline \hline \\
\end{tabular}
\begin{tablenotes}\footnotesize
\item[1] Short model name.
\item[2] Full model name including information on parameters.
\item[3] Power-law coefficient defined as $\rho \propto r^{-p}$. The $1\sigma$ uncertainty corresponds to $0.01$ from the fits.
\item[4] Power-law coefficient defined as $\dot{M}_{\rm in} \propto r^{s}$.
\item[5] Median value of $\eta(t) \times 10^{-3}$
\item[6] Fraction of the total time in which $\eta > 0$.
\item[7] Fraction of the mass ejected following the Lagrangian particle analysis (see \eqref{mass-ejected-real}).
\item[8] Fraction of the energy ejected following the Lagrangian particle analysis (see \eqref{energy-ejected-real}).
\item[9] Refers to Lagrangian particles.
\item[10] Unusually high value, discussed further in section \ref{app:PL0SS3}.
\end{tablenotes}
\end{threeparttable}
\caption{Results concerning outflows for all simulations.}
\label{tab:allmodels}
\end{table*}
Each simulation occupies a different region of the diagram in Figure \ref{fig:integrated-mdot}. The different viscosity parameterizations adopted are clearly distinguishable; for instance, simulations with the ST prescription generated $\dot{m}$ values orders of magnitude higher than those with the SS profile. Motivated by this considerable difference, we plotted the black dotted line in the figure to separate these two types of simulations. We divided them into three groups for the analysis:
\begin{itemize}
\item Group 1: simulations with the specific angular momentum adapted from \cite{Penna2013};
\item Group 2: simulations with power-law $l(R)$ and smallest fraction of ejected mass;
\item Group 3: simulations with power-law $l(R)$ and highest fraction of ejected mass.
\end{itemize}
They have some major characteristics considering both fluid and particle analysis:
\begin{itemize}
\item Group 1 had on average $0.01\%$ mass ejection; this value does not seem to change drastically with the free parameters of the simulation or the adopted viscosity. The wind flux (see equations \eqref{ewind-definition}-\eqref{eta-efficiency} and figure \ref{fig:eta-winds}) of these simulations was non-continuous: winds were not generated at all times. The simulations with SS viscosity presented a very small convergence radius. The average velocity of the ejected particles is smaller than the group-averaged velocity of Group 3, $\overline{v}_{\rm out}^{\text{ G1}} \lesssim \overline{v}_{\rm out}^{\text{ G3}}$.
\item Group 2 had the smallest fractions of mass ejected, except for PL4ST.01. These simulations presented a strong inflow component, except for PL0SS.3; in the other three the inflow was so intense that it suppressed any outflow. PL0SS.3 did not present the same inflow component as the others, but its particles remained inside the initial torus throughout (see first panel of figure \ref{fig:initial-conditions}). The wind generation pattern varied from simulation to simulation; this group presented completely heterogeneous properties.
\item Group 3 contains the simulations with the most energetic winds and particles. Models PL2SS.1 and PL2SS.3 are very similar simulations, differing only in the value of $\alpha$, as discussed before. The setup consisting of $a = 0.2$ and SS-viscosity presented powerful outflows, with a continuous generation of winds, and some of the highest average velocities in our sample, $v \approx 0.003-0.004c$.
\end{itemize}
It is worthwhile asking: considering holistically all the models which produced winds, what is the location in the disc from which the outflowing particles come, on average? For this purpose, we apply the tracer particle formalism to locate the launching region in the eleven simulations. For each model, we considered only the particles ejected in the wind region -- similarly to Figure \ref{fig:LP-map} -- by defining the binary variable
\begin{equation}
\begin{aligned}
\Xi =
\begin{cases}
1, & \text{if } (\textbf{r}_{\rm final} \text{ is in wind region) and} \ (r_{\rm final} > R(t_0)) \\
0, & \text{otherwise}.
\end{cases}
\end{aligned}
\label{ej-map}
\end{equation}
The variable $\Xi$ informs whether a particle at a given position has been ejected ($\Xi=1$) or not ($\Xi=0$). After creating maps of $\Xi$ for all simulations, we added them up and computed the average, $\langle \Xi \rangle$. The result can be seen in figure \ref{fig:integrated-map}, where the color scale indicates the likelihood that a particle located at the given position at the beginning of the simulations becomes part of an outflow later on. A value of one at a certain position would indicate that in all simulations a particle initially at that position was ejected; conversely, a value of zero means that in no simulation was a particle initially at that position ejected. We can see in Figure \ref{fig:integrated-map} regions whose particles are ejected in $\sim 50 \%$ of the simulations (i.e. with values $\langle \Xi \rangle >0.5$). These regions with higher likelihoods of producing winds are located in the corona of the accretion disc, suggesting that the winds we are seeing correspond to coronal winds.
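A sketch of this bookkeeping (with the wind cones of section \ref{subsec:lag-part}; array names are illustrative):
\begin{verbatim}
import numpy as np

def ejection_flag(theta_final, r_final, r_initial):
    """Binary variable Xi of eq. (ej-map) for each tracer particle."""
    in_cone = (((theta_final >= np.radians(15.0)) &
                (theta_final <= np.radians(45.0))) |
               ((theta_final >= np.radians(135.0)) &
                (theta_final <= np.radians(165.0))))
    return (in_cone & (r_final > r_initial)).astype(float)

# <Xi>: average the per-simulation maps on a common grid of initial
# particle positions, e.g. np.mean(np.stack(xi_maps), axis=0)
\end{verbatim}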
\begin{figure}
\noindent
\includegraphics[width=8.0cm]{figures/stacking-mapr.png}
\caption{Map showing the average fraction of particles at a given initial position which are lost in outflows, taken over all simulations carried out in this work (variable $\langle \Xi \rangle$, equation \ref{ej-map}).}
\label{fig:integrated-map}
\end{figure}
Finally, we computed the power spectrum from the time series of different quantities such as $\eta$ and the mass accretion rate. We did not find periodicity in any of the simulations. It is possible that our 2D setup suppressed any orbital-related variability, since we assumed axial symmetry.
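For completeness, a minimal version of this periodicity check (a plain FFT periodogram, assuming uniformly sampled series; a Lomb-Scargle periodogram would be needed otherwise):
\begin{verbatim}
import numpy as np

def power_spectrum(t, x):
    """Power spectrum of a uniformly sampled series such as eta(t)
    or mdot(t); a sharp isolated peak would signal periodicity."""
    x = x - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=t[1] - t[0])
    power = np.abs(np.fft.rfft(x))**2
    return freqs, power
\end{verbatim}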
Phenomenological results from simulations of lattice QCD to compare with
experiments should be obtained with all the systematic uncertainties
under control.
The first requirement is to have an efficient algorithm to simulate
$N_{\rm f}=2$ dynamical light quarks with the possibility to include
1(+1) heavier quarks. The algorithm should allow, in a reasonable time,
to reach small pion masses
($m_\pi < 300$ MeV) where a matching with chiral perturbation theory ($\chi$PT)
should become possible, and to simulate a large enough volume ($L\ge2$ fm).
The second requirement is to have a lattice action with good scaling
and simplified renormalization properties, as close as possible to
the renormalization of continuum QCD.
The topic I want to address in this contribution is if
lattice twisted mass QCD (tmQCD), combined with a suitable algorithm,
is a possible lattice action that fulfills these requirements.
\subsection{Lattice QCD action}
\label{sec:latticeqcd}
Despite the only rather recent interest, the tmQCD fermionic lattice action has
a long history. It was introduced in \cite{Aoki:1984qi} as a tool to study spontaneous
parity and flavour symmetry breaking. In \cite{Frezzotti:2000nk} it was proved
that lattice tmQCD is an alternative discretization of lattice QCD.
The lattice QCD action
\be
S = S_g[U] + S_F[U,\psi,\psibar]
\ee
has a fermionic part given by tmQCD
\be
S_F = a^4\sum_x \big\{\psibar(x)[D[U] + m_0 +i\mu\gamma_5\tau^3]\psi(x)\big\}
\label{eq:tmQCD}
\ee
and for the moment we leave unspecified the gauge part $S_g$.
In eq. (\ref{eq:tmQCD}) $D[U]$ is the massless Wilson-Dirac operator
\be
D[U] = \frac{1}{2} [\gamma_\mu(\nabla_{\mu} + \nabla_{\mu}^*) - a
\nabla_{\mu}^* \nabla_{\mu}]
\ee
$m_0$ is the untwisted bare quark mass parameter, $\mu$ is the bare twisted
quark mass, and $\tau^3$ is the third Pauli matrix acting in flavour space.
In \cite{Frezzotti:2001ea} it was shown that the standard framework of the Symanzik
improvement program works in a similar way as for usual Wilson fermions. In
particular, for spectral quantities no further improvement coefficients are
needed. A set of scaling tests have been performed, using the
non-perturbatively improved clover action with twisted mass, in small
\cite{DellaMorte:2001ys} and large \cite{DellaMorte:2001tu} volume, confirming
that the usual Symanzik improvement program can be applied also for tmQCD.
In a remarkable paper of Frezzotti and Rossi \cite{Frezzotti:2003ni} a step forward was made.
It was proved that parity even correlators of multiplicatively renormalizable
fields are free from O($a$) effects, and so no improvement coefficients are needed
(automatic O($a$) improvement), if
the target continuum theory is fully twisted \footnote{To obtain automatic
O($a$) improvement in \cite{Frezzotti:2003ni} also other possibilities were
exploited. Here we will concentrate on the automatic O($a$) improvement that
is used in numerical simulations.}.
The proof in \cite{Frezzotti:2003ni} is based on a set of spurionic symmetries
of the lattice action. Here we give a simpler proof based
on the symmetries of the continuum QCD action (see appendix A of
\cite{Frezzotti:2005gi} for an analogous proof).
The Symanzik
\cite{Symanzik:1981hc,Symanzik:1983dc,Symanzik:1983gh,Sheikholeslami:1985ij,Luscher:1996sc}
effective action reads
\be
S_{\rm eff} = S_0 + aS_1 + \ldots
\ee
and we are interested in a continuum target theory where the physical quark mass is fully given by
the renormalized twisted mass $\mu_{\rm R}$ (fully twisted theory)
\be
S_0 = \int d^4x \psibar(x) \big [ \gamma_\mu D_\mu + {\rm i} \mu_{\rm R}
\gamma_5 \tau^3 \big] \psi(x)
\label{eq:ctmQCD}
\ee
The correction terms in the effective action are given by
\be
S_1 = \int d^4y {\mathcal L}_1(y) \qquad {\mathcal L}_1(y) = \sum_i c_i
{\mathcal O}_i(y)
\ee
where the dimension five operators classified on the basis of the symmetries
of the lattice action are given by \cite{Frezzotti:2001ea}
\be
{\mathcal O}_1 =
\psibar\sigma_{\mu\nu}F_{\mu\nu}\psi \qquad {\mathcal O}_2 =
\mu^2\psibar \psi \qquad {\mathcal O}_3 = \Lambda^2\psibar \psi
\label{eq:sym_op}
\ee
where $\Lambda$ is an energy scale of the order of the QCD scale $\Lambda_{\rm QCD}$.
The operator ${\mathcal O}_1$ is the usual clover term. The operators
${\mathcal O}_2$ and ${\mathcal O}_3$ are related to the renormalization of the untwisted
quark mass.
Since we are interested in a continuum target theory where the untwisted quark
mass vanishes, the operator ${\mathcal O}_3$
parameterizes the mass independent O($a$) uncertainties in the critical mass.
We consider now a general multiplicatively renormalizable multilocal field
that in the effective theory is represented by the effective field
\be
\Phi_{\rm eff} = \Phi_0 + a \Phi_1 + \ldots
\ee
A lattice correlation function of the field $\Phi$ to order $a$ is given by
\be
\langle \Phi \rangle = \langle \Phi_0 \rangle_0 + a \int d^4y \langle \Phi_0
{\mathcal L}_1(y) \rangle_0 + a \langle \Phi_1 \rangle_0 + \ldots
\label{eq:sym_exp}
\ee
where the expectation values on the r.h.s are to be taken in the continuum
theory with action $S_0$.
The key point is that the continuum action (\ref{eq:ctmQCD}) is symmetric
under the following parity transformation
\be
\psi(x) \longrightarrow \gamma_0(i \gamma_5\tau^3)\psi(x_0,-{\bf x})
\label{eq:parity}
\ee
\be
\psibar(x) \longrightarrow \psibar(x_0,-{\bf x})(i\gamma_5\tau^3)\gamma_0
\label{eq:paritybar}
\ee
and that all the operators in eq. (\ref{eq:sym_op}), of
the Symanzik expansion of the lattice action, are odd under the parity
symmetry of the continuum action.
If the operator $\Phi_0$ is parity even, the second term on the r.h.s. of
eq. (\ref{eq:sym_exp}) vanishes; moreover $\Phi_1$, being of one dimension higher,
is parity odd, so for the same reason the third term on the r.h.s. of
eq. (\ref{eq:sym_exp}) vanishes. Possible contact terms
coming from the second term amount to a redefinition of $\Phi_1$ and so do not
harm the proof.
It is then also clear that in order to achieve automatic O($a$) improvement, the
continuum target theory must have a vanishing untwisted quark mass $m_{\rm
R}$, otherwise the standard mass term $m_{\rm R} \psibar \psi$
will break the parity symmetry of the continuum action defined before. The
most natural way to achieve this on the lattice is
by setting the untwisted bare quark mass to
its critical value $m_0 = m_{\rm c}$.
The proof also shows that a possible uncertainty of O($a$) in the critical
mass does not wash out automatic O($a$) improvement since these uncertainties
are odd under parity.
A remark is in order now.
We take the polar mass defined in \cite{Frezzotti:2001ea}
\be
M = \sqrt{\mu^2 + m_{\rm q}^2} = \sqrt{\mu^2 + (\eta_1 a \Lambda^2)^2}; \qquad m_{\rm q} = m_0 - m_{\rm c}
\ee
where the $\eta_1$ term parameterizes the mass independent O($a$) uncertainties in
the value of the untwisted quark mass $m_{\rm q}$.
Expanding in powers of $a$ we have
\be
M \simeq \mu\Big[1+\frac{\eta_1^2 a^2\Lambda^4}{2\mu^2} +
O(a^4)\Big]
\label{eq:pole_exp}
\ee
We observe immediately that as soon as $\mu < a\Lambda^2$, even if parametrically
O($a$) terms are absent in (\ref{eq:pole_exp}), there is a term of O($a^2$) whose
coefficient diverges as $\mu$ is made smaller and smaller.
From this example we can conclude that, with a generic choice of the critical
mass such that the uncertainties in the untwisted quark mass are of order
$a\Lambda^2$, effective automatic O($a$) improvement without big O($a^2$)
effects requires the constraint $\mu > a\Lambda^2$.
It has been shown in \cite{Frezzotti:2005gi} that these cutoff effects that
diverge at small quark masses, so-called infrared divergent (IR) cutoff
effects, are a general property of tmQCD. These dangerous cutoff effects are
removed by an appropriate choice of the critical mass.
\subsection{O($a$) improvement and small pion masses}
\label{sub:1.2}
The O($a$) uncertainties of the untwisted quark mass depend on how the
critical line is fixed, hence the choice of the critical mass has to be
discussed with care.
The issue was raised by the work of Aoki and B\"ar \cite{Aoki:2004ta} and by
the numerical results obtained in \cite{Bietenholz:2004wv}. This problem has
been further analyzed in several aspects
\cite{Sharpe:2004ny,Frezzotti:2005gi,Sharpe:2005rq}. In
\cite{Aoki:2004ta,Sharpe:2004ny,Sharpe:2005rq} the theoretical framework is twisted mass
chiral perturbation theory (tm$\chi$PT) \cite{Munster:2003ba} where the cutoff effects are included
in the chiral lagrangian along the lines of
\cite{Sharpe:1998xm,Rupak:2002sm}. In this framework a power counting
scheme that includes quark mass and lattice spacing has to be specified. In
particular in \cite{Aoki:2004ta} the power counting was $\mu \sim a^2
\Lambda^3$ while in \cite{Sharpe:2004ny} it was $\mu \sim a \Lambda^2$.
We stress here that this approach for the description of lattice data, does
not require a continuum extrapolation, hence the power counting scheme does
not mean that $\mu$ goes to zero in the continuum limit but represents only an
order of magnitude equality.
Both these works \cite{Aoki:2004ta,Sharpe:2004ny}
agree on the fact that choosing the critical mass by imposing a vanishing PCAC
quark mass
\be
m_{\rm PCAC} = \frac{\sum_{\bf x} \langle \partial_0 A_0^a(x) P^a(0)\rangle}
{2\sum_{\bf x} \langle P^a(x) P^a(0)\rangle} \qquad a=1,2
\ee
where
\be
A_\mu^a(x) = \psibar(x)\gamma_\mu \gamma_5 {\tau^a \over 2} \psi(x)
\ee
\be
P^a(x) = \psibar(x)\gamma_5 {\tau^a \over 2} \psi(x)
\ee
allows one to achieve automatic O($a$) improvement, in particular down to quark masses that fulfill
$\mu \simeq a^2\Lambda^3$ for
\cite{Aoki:2004ta} and $a^2\Lambda^3 < \mu < a
\Lambda^2$ for \cite{Sharpe:2004ny}\footnote{We will see in section \ref{sec:phase} that
the phase structure of $N_{\rm f} = 2$ dynamical Wilson fermions
does not anyway allow the twisted mass to be smaller than $\mu_{\rm c} \sim
a^2\Lambda^3$.}.
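In practice $m_{\rm PCAC}$ is extracted from the zero-momentum lattice correlators; the following minimal Python sketch illustrates the procedure (a symmetric lattice time derivative is assumed, and the periodic temporal boundaries are not treated carefully):
\begin{verbatim}
import numpy as np

def pcac_mass(AP, PP, a=1.0):
    """Bare PCAC mass in lattice units from correlator arrays.

    AP[t] = sum_x <A_0^a(x) P^a(0)>, PP[t] = sum_x <P^a(x) P^a(0)>.
    One looks for a plateau in t and tunes m_0 such that the plateau
    value vanishes.
    """
    dAP = (np.roll(AP, -1) - np.roll(AP, 1)) / (2.0 * a)
    return dAP / (2.0 * PP)
\end{verbatim}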
In \cite{Frezzotti:2005gi} a Symanzik expansion along the lines of
\cite{Luscher:1996sc} was performed confirming the results of \cite{Aoki:2004ta,Sharpe:2004ny}.
A possible practical procedure is then to compute for a fixed value of $\mu$ the
critical mass $m_{\rm c}$ from the vanishing PCAC mass,
and then to extrapolate the set of critical masses
obtained for different values of $\mu$ to $\mu = 0$ (method {\bf A}).
This procedure has been used in \cite{Jansen:2005gf,Jansen:2005kk}.
In fig. \ref{fig:kappa_c} a typical
extrapolation of the critical mass to $\mu=0$ is shown.
With this procedure the O($a$) uncertainties of the critical mass are fixed in
such a way that, for a generic value of $\mu$, the dangerous $a\Lambda^2$
cutoff effects in the untwisted quark mass are absent.
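The $\mu\to 0$ extrapolation itself is elementary; as a minimal illustration (a linear ansatz in $a\mu$ is assumed, matching the behaviour visible in fig. \ref{fig:kappa_c}):
\begin{verbatim}
import numpy as np

def extrapolate_critical_mass(amu, am_c):
    """Method A: linear extrapolation of am_c(mu) to mu = 0."""
    slope, intercept = np.polyfit(amu, am_c, 1)
    return intercept
\end{verbatim}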
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/kappac.ps,angle=270,width=0.6\linewidth}
\caption{Determination of the critical mass $m_{\rm c}$ ($\kappa_{\rm c}^{-1} =
2am_{\rm c} +8$) for given values of $\mu$ at $\beta=6.0$, and extrapolation to
$\mu=0$. The red point is the critical mass determined with method {\bf C}
(see text). The difference between the two determinations of the critical mass
should be of O($a$).}
\label{fig:kappa_c}
\end{center}
\end{figure}
The slope of the curve is proportional, as it has been recently discussed in
\cite{Aoki:2005ii,Sharpe:2005rq}, to O($a$) cutoff effects related to the
discretization errors of the PCAC mass.
We recall that it is not surprising that the PCAC mass is not
automatically O($a$) improved since it is an odd quantity under the parity
transformation of eqs. (\ref{eq:parity}, \ref{eq:paritybar}).
In \cite{Abdel-Rehim:2005gz} the extrapolation to
$\mu=0$ is not performed and each value of the critical mass has been used for
the corresponding value of $\mu$ used in the simulations (method {\bf B}).
With this method the O($a$) cutoff effects of the critical mass are obviously fixed in such a
way that the untwisted quark mass is always vanishing for all the simulation
points.
These two methods, even if they give
different cutoff effects to the critical mass, are perfectly good in order to
achieve automatic O($a$) improvement.
Another possible way to fix the critical mass, especially practical for
expensive dynamical simulations, is to compute the critical mass, using the
PCAC relation at the smallest value of $\mu$, and then use this critical mass
for all the simulation points at heavier masses.
Using methods {\bf A} and {\bf B} a set of quenched studies
\cite{Jansen:2003ir,Bietenholz:2004wv,Abdel-Rehim:2004gx,Abdel-Rehim:2005gz,Jansen:2005gf,Jansen:2005kk,Abdel-Rehim:2005qv}
have been performed to check the result of \cite{Frezzotti:2003ni}
and to gain experience with this formulation of lattice QCD.
\begin{figure}[htb]
\epsfig{file=plots/r0fps_sca_1.ps,angle=270,width=0.5\linewidth}
\epsfig{file=plots/cont_fpi.ps,angle=270,width=0.5\linewidth}
\caption{Left panel: scaling behaviour of $r_0f_{\rm PS}$ for 3 fixed values
of $r_0m_{\rm PS}$. Right panel: Continuum limit values for $f_{\rm PS}$
as a function of $m_{\rm PS}^2$ in physical units. The empty squares are taken from \cite{Garden:1999fg}.}
\label{fig:fps}
\end{figure}
An interesting quantity to compute with tmQCD is the pseudoscalar decay
constant $f_{\rm PS}$. As it was noted in
\cite{DellaMorte:2001tu,Frezzotti:2001du,Jansen:2003ir},
the computation of $f_{\rm PS}$ does not require any renormalization constant,
in contrast to ordinary Wilson fermions, and moreover, given automatic O($a$)
improvement, does not need the computation of any improvement coefficient.
Thus the situation for this quantity is the same as with overlap fermions.
In fig. \ref{fig:fps} (left panel) the
continuum limit of $r_0 f_{\rm PS}$ \footnote{The values of $r_0/a$, $r_0=0.5$
fm being the Sommer scale \cite{Sommer:1993ce}, are taken from
\cite{Guagnelli:1998ud}.}, the critical mass being computed with method {\bf A},
is shown as a function of $(a/r_0)^2$. The scaling is consistent with being of
O($a^2$), and moreover the O($a^2$) effects are rather small for all the pseudoscalar
masses investigated down to $m_{\rm PS} = 272$ MeV. The right panel of
fig. \ref{fig:fps} shows the chiral behaviour of the continuum pseudoscalar
decay constant, compared with the non-perturbatively O($a$) improved
data of \cite{Garden:1999fg}. We remark that this comparison is purely
illustrative since it is in the quenched approximation, and the simulations
with clover fermions had to stop around $m_{\rm PS} \simeq 500$ MeV due to the
appearance of exceptional configurations.
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/cont_avx.ps,angle=270,width=0.6\linewidth}
\caption{$\langle x \rangle^{\msbar}(\mu= \ 2 \ GeV)$ extrapolated to the continuum as a function of
the pion mass. Open squares represent results that are obtained from a
combined continuum
extrapolation of earlier Wilson and clover-Wilson simulations
\cite{Guagnelli:2004ga}. The filled circles represent results using
Wilson twisted mass fermions \cite{Capitani:2005aa}. The open circle denotes
a result which is not corrected for finite size effects and the diamond corresponds
to the experimental point.}
\label{fig:xpion}
\end{center}
\end{figure}
To see the potential of tmQCD another interesting phenomenological quantity
is the average momentum carried by valence quarks in a pion ($\langle
x\rangle$).
In \cite{Capitani:2005aa} results using tmQCD were presented. Here we concentrate on the
chiral behaviour in the continuum, having in mind that the renormalization has
been performed already in a non-perturbative way
\cite{Guagnelli:2003hw,Guagnelli:2004ga}.
Fig. \ref{fig:xpion}
shows that in principle also for this quantity small pseudoscalar masses
$m_{\rm PS} < 300$ MeV can be reached, opening the possibility of a safe
chiral extrapolation.
\begin{figure}[htb]
\epsfig{file=plots/fpi_pssca_b6.0.ps,angle=270,width=0.5\linewidth}
\epsfig{file=plots/fpi_scaling_1.ps,angle=270,width=0.5\linewidth}
\caption{Left panel: comparison of the chiral
behaviour at fixed lattice spacing ($\beta = 6.0$) of the pseudoscalar decay
constant computed using method {\bf A}, {\bf B}, {\bf C} and with
results obtained with overlap fermions. Right panel: unconstrained continuum
limit, for several values of fixed charge pion masses,
of $r_0f_{\rm PS}$ performed using method {\bf A} and {\bf C} to determine the
critical mass.}
\label{fig:bending}
\end{figure}
In \cite{Bietenholz:2004wv}, to obtain automatic O($a$) improvement the critical
mass $m_{\rm c}$ was computed by extrapolating the squared pseudoscalar mass to
the chiral limit using data from the pure Wilson theory (method {\bf C}). Using this
determination of $m_{\rm c}$ several quantities were computed. In particular in the
left panel of fig. \ref{fig:bending} there is a comparison of the chiral
behaviour at fixed lattice spacing ($\beta = 6.0$) of the pseudoscalar decay
constant computed using method {\bf A}, {\bf B}, {\bf C} and with
results obtained with overlap fermions \cite{Giusti:2004yp}. While methods
{\bf A}, {\bf B} and the overlap data are all consistent within the
statistical errors,\footnote{We recall here that since the comparison is made
at fixed lattice spacing the data in principle could disagree due to
different cutoff effects.} the data obtained using method {\bf C} to fix the
critical mass, show a ``bending'' towards the chiral limit.
The same phenomenon was observed also for the vector mass
\cite{Bietenholz:2004wv}.
The ``bending'' phenomenon appeared exactly when $\mu \simeq a
\Lambda^2$. Having in mind the caveat observed before in the proof of automatic O($a$)
improvement, this indicates that the extraction of the critical mass with
method {\bf C} leaves the dangerous $a\Lambda^2$ term in the untwisted quark mass
uncanceled. This is numerically confirmed by the results of
\cite{Jansen:2005kk}, shown in the right panel of fig. \ref{fig:bending}:
using methods {\bf A} and {\bf C} to determine the critical mass, a
consistent continuum limit is obtained, showing also that method {\bf C}
induces big O($a^2$) effects and a reduced scaling window.
A description of the ``bending'' phenomenon at fixed lattice spacing
has been obtained in \cite{Aoki:2005ii} using $\chi$PT, as shown in the
left panel of fig. \ref{fig:bar}, where a fit to available quenched data is
performed on the ratio $R={a^2 m_{\rm PS}^2 \over a\mu}$.
This analysis also shows that $\chi$PT
is able to describe the lattice data up to $\mu \simeq 80$ MeV. It is
reassuring that, using method {\bf A} to determine the critical mass and
restricting the data to the region where $\chi$PT is applicable, the ratio $R$
is flat (right panel of fig. \ref{fig:bar}),
consistently with continuum $\chi$PT (up to chiral logs).
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/Rbeta60.eps,angle=0,width=0.42\linewidth} \hspace{0.8cm}
\epsfig{file=plots/Rmps2mu6.0.eps,angle=0,width=0.45\linewidth}
\caption{Left panel: bending phenomenon at $\beta$=6.0 on the ratio $R={a^2
m_{\rm PS}^2 \over a\mu}$ and its description with $\chi$PT. Right panel:
comparison of the ratio $R$ using methods {\bf A} and {\bf C} to determine
the critical mass.}
\label{fig:bar}
\end{center}
\end{figure}
In \cite{Frezzotti:2005gi}, based on the observation that the big
O($a^2$) effects come from uncanceled O($a$) terms in the PCAC mass,
it has been proposed to eliminate the ``bending'' phenomenon
by using a non-perturbatively improved tmQCD action. This approach has been
numerically tested in \cite{Lubicz}, and as can be seen in
fig. \ref{fig:clover}, it indeed confirms that also in this case the
``bending'' phenomenon is absent.
\begin{figure}[htb]
\epsfig{file=plots/fp_m3.eps,angle=0,width=0.5\linewidth}
\epsfig{file=plots/fp_m3_zoom.eps,angle=0,width=0.47\linewidth}
\caption{Comparison of the chiral
behaviour at fixed lattice spacing ($\beta = 6.0$) of the pseudoscalar decay
constant computed using method {\bf A} and {\bf C} for both tmQCD and
non-perturbatively improved tmQCD.}
\label{fig:clover}
\end{figure}
\subsection{Renormalization}
In \cite{Sint} a construction has been given of a
Schr\"odinger functional (SF) with twisted boundary conditions that preserves the
nice property of O($a$) improvement without bulk improvement coefficients
(see \cite{Frezzotti:2005zm} for a possible alternative to this construction).
The construction is based on the observation that in a finite volume with
suitable boundary conditions the
Wilson theory in the chiral limit is O($a$) improved, and it makes use of orbifolding techniques
(see \cite{Taniguchi:2004gf} for an application of orbifolding techniques to Ginsparg-Wilson
fermions).
A simple way to visualize the construction is to repeat the proof of automatic
O($a$) improvement given in section \ref{sec:latticeqcd},
where now, since we are in a finite volume with suitable boundary conditions,
the twisted mass can be safely sent to zero.
Then the new boundary projectors \cite{Sint} $Q_{\pm} = {1 \over 2} (1 \pm {\rm
i}\gamma_0 \gamma_5 \tau^3)$ commute with the
previous parity transformation (as does the twisted mass term in infinite volume).
It is very important to note that the new boundary projectors
can be obtained by performing a chiral rotation of
the original projectors in the standard SF framework \cite{Sint:1993un}.
An important consequence is that
the running of the coupling constant should be
identical to the running computed with the ``old'' SF \cite{DellaMorte:2004bc}.
The O($a$) uncertainties
in the critical mass do not harm the O($a$) improvement.
\section{Flavour symmetry}
When tmQCD is used to define the standard QCD correlation functions, some of the
physical symmetries, in particular flavour and parity, are restored only in the
continuum limit.
The explicit breaking of flavour symmetry generates, for example, a splitting between charged and neutral
pions, while the absence of parity symmetry leads to the
appearance of states of opposite parity in the spectral decomposition of the usual correlators.
Both these phenomena are expected to vanish, at maximal twist, with a
rate of O($a^2$) \cite{Frezzotti:2004wz}.
Here we concentrate on the flavour symmetry breaking.
To fix the notation we recall some basic definitions. The charged pseudoscalar
currents are given by
\be
P^\pm(x) =\psibar(x)\gamma_5{\tau^\pm \over 2}\psi(x) \qquad \tau^\pm = {\tau^1
\pm {\rm i} \tau^2 \over 2}
\ee
and a possible interpolating field for the neutral pion is the scalar current
\be
S^0(x) = \psibar(x)\psi(x) .
\ee
The charged and neutral pseudoscalar masses can be extracted by the following correlators
\be
C_{\pi^{+}} (x_0) = a^3\sum_{\mathbf x} \langle P^+(x) P^-(0) \rangle \qquad
C_{\pi^{0}} (x_0) = a^3\sum_{\mathbf x} \langle S^0(x)S^0(0) \rangle
\ee
In terms of the quark propagator, the neutral correlator contains both a connected and a disconnected piece,
\be
C_{\pi^{0}} (x_0) = a^3\sum_{\mathbf x} \big\{ \langle - \tr \big[ G(0,x) G(x,0)
\big] + \tr \big[G(x,x)\big] \tr \big[G(0,0)\big] \rangle\big\}
\label{eq:pi0}
\ee
where $G(x,y)$ is the fermionic propagator.
In \cite{Jansen:2005cg} a pilot quenched study has been performed to study
flavour breaking effects with tmQCD.
For the neutral pseudoscalar correlator in eq. (\ref{eq:pi0})
a first possibility is to study only
the connected part. In the quenched approximation it is still possible to
interpret the connected part in terms of local operators. The reason is that
one can think of the connected part as coming from Wick contractions obtained
using the Osterwalder-Seiler (OS) \cite{Osterwalder:1977pc} action
\be
S_{\rm OS} = a^4\sum_x \big\{\psibar(x)[D[U] + m_0 +i\mu\gamma_5]\psi(x)\big\} .
\ee
This action has a trivial flavour structure and so does not present any
flavour breaking, and in particular the disconnected part of eq. (\ref{eq:pi0})
vanishes. We remark that this is not the neutral pseudoscalar meson of tmQCD,
but it is an interesting quantity to study with precise data on its own, in view
of a possible use of mixed actions (the OS action for the valence quarks and
tmQCD for the sea quarks).
In fig. \ref{fig:pi0all} the scaling behaviour of the connected correlator (OS
pseudoscalar), is compared with the pseudoscalar meson in tmQCD where also the
disconnected part is included (in both computations method {\bf A} is used for
the determination of the critical mass).
\vspace{-0.5cm}
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/summ.ps,angle=0,width=0.48\linewidth}
\epsfig{file=plots/summ2.ps,angle=0,width=0.48\linewidth}
\caption{Scaling behaviour of the mass splittings
between the neutral and the charged pseudoscalar masses for 2 values of
$r_0m_{\rm PS}$. The open squares are the data for the neutral pseudoscalar
meson with tmQCD, and the stars show only the connected contribution (pseudoscalar
meson with OS action). The full and dotted lines are an estimate of the $a^2$
dependence for the two pion splittings, making the hypothesis that O($a^2$)
effects are mass independent.}
\label{fig:pi0all}
\end{center}
\end{figure}
For all the technical details of the
computation of the disconnected part I refer to
\cite{Jansen:2005cg,Farchioni:2005hf}. The results show an O($a^2$) scaling
for both the pseudoscalar masses, even if there are indications that
the neutral pseudoscalar meson for tmQCD (with the
inclusion of the disconnected correlator) has
reduced cutoff effects, within the rather large statistical errors.
It is possible to give a very rough estimate of the pion splitting
$r_0^2(m_{\pi^0}^2 - m_{\pi^{\pm}}^2) \simeq c(a/r_0)^2$ with $c \simeq 10$
(with large errors).
Comparing to a quenched simulation for na\"ive staggered fermions with Wilson
gauge action \cite{Ishizuka:1993mt}, one finds a similar size of the flavour
splitting for the pion mass at a similar lattice spacing, with a
value $c \simeq 40$. For dynamical improved staggered fermions
a value of $c \simeq 10$ has been found \cite{Aubin:2004wf}.
An interesting study of the flavour breaking effects was presented at this
conference in \cite{Abdel-Rehim:2005qv}. To avoid the computation of
disconnected diagrams in the quenched approximation a second doublet for {\it
strange} and {\it charm} quarks is introduced following the strategy of
\cite{Pena:2004gb}.
Then the splitting in the kaon system is studied. In this study method {\bf B}
has been used for the determination of the critical mass.
The results, shown in fig. \ref{fig:kaons}, indicate that, as expected, the
flavour breaking effects vanish linearly in $a^2$, but that they can indeed
be significant at lattice spacings $a > 0.1$ fm.
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/kaonversusa2.eps,angle=0,width=0.5\linewidth}
\caption{Scaling behaviour of the mass splitting between neutral and charged kaons.}
\label{fig:kaons}
\end{center}
\end{figure}
Recent results \cite{Farchioni:2005hf} with $N_{\rm f}=2$ dynamical tmQCD fermions and DBW2 gauge
action (see next sections for details on the simulation parameters) indicate
that at a lattice spacing of $a\simeq 0.12$ fm and a mass $\mu \simeq 12$
MeV the pion mass splitting, albeit with large errors, is consistent with
zero.
\section{$N_{\rm f} =2$}
\subsection{Algorithmic improvements}
At the Lattice 2001 conference A. Ukawa presented \cite{Ukawa:2002pc} a rather impressive
analysis of the possibility of simulating light quark masses with
Wilson fermions. This was summarized in the now well known Berlin wall
figure (see \cite{Jansen:2003nt} for a recent update).
Recently new algorithms \cite{Luscher:2004rx,Urbach:2005ji}
have been proposed that have finally moved the wall to
rather small quark masses.
Both algorithms are based on the standard HMC but use new
preconditioners.
In \cite{Luscher:2004rx} it was shown that with a domain
decomposition (DD) preconditioning combined with a multiple time (mt) scale
integrator \cite{Sexton:1992nu}, light quark masses
($m_\pi = 294$ MeV) are reachable with Wilson fermions with remarkable
performance.
In \cite{Urbach:2005ji} another very efficient preconditioner for the HMC algorithm has been
introduced and tested,
based on a mass preconditioner \cite{Hasenbusch:2001ne} (also known as Hasenbusch (H) acceleration)
with again a multiple time scale integrator.
Table \ref{tab:algo} summarizes a rough comparison between the two algorithms, using
different lattice actions, based on the so-called cost figure $\nu = 10^{-3}
(2N + 3)\tau_{\rm int}(P)$ introduced in \cite{Luscher:2004rx}.
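As a small illustration of how these numbers are obtained, the sketch below implements a naive fixed-window estimate of $\tau_{\rm int}$ together with the cost figure (in practice an automatic windowing procedure with proper error estimates is used; $N$ is the quantity entering the operation count in the definition of $\nu$ of \cite{Luscher:2004rx}):
\begin{verbatim}
import numpy as np

def tau_int(x, W):
    """Naive integrated autocorrelation time of a MC series x
    (e.g. the plaquette) with a fixed summation window W."""
    x = x - np.mean(x)
    c0 = np.mean(x * x)
    rho = [np.mean(x[:-t] * x[t:]) / c0 for t in range(1, W + 1)]
    return 0.5 + np.sum(rho)

def cost_figure(N, tau_int_P):
    """nu = 1e-3 * (2N + 3) * tau_int(P)."""
    return 1e-3 * (2 * N + 3) * tau_int_P
\end{verbatim}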
\begin{table}[b]
\begin{center}
\begin{tabular} {|c|c|c|c|c|c|}
\hline\hline
&&&&&\\[-0.5ex]
Action & Algorithms & $r_0/a$ & $m_{\rm \pi}$ [MeV] & $\nu$ & $\tau_{\rm int}$ \\ [1ex]
\hline
W+W & (mt)(DD)HMC \cite{Luscher:2004rx} & 6.40(15) & 294 & $0.74(18)$ & $21(5)$ \\
tlSym+Wtm & (mt)(H)HMC \cite{Urbach:2005ji} & 5.20(25) & 280 & $0.49(34)$ & $21(14)$ \\
\hline\hline
\end{tabular}
\caption{Comparison of the 2 algorithms discussed in the text for a similar
physical situation.}
\label{tab:algo}
\end{center}
\end{table}
The conclusion is that the algorithms have comparable performance down to pion
masses of the order of $m_\pi \simeq 300$ MeV.
The updated Berlin wall figure is plotted in fig. \ref{fig:bw2}.
\begin{figure}[htb]
\begin{center}
\epsfig{file=plots/berlinwall.eps,angle=0,width=0.4\linewidth}
\epsfig{file=plots/berlinwall2.eps,angle=0,width=0.4\linewidth}
\caption{Computer resources needed to generate $1000$ independent
configurations of size $24^3\times 40$ at a lattice spacing of about
$0.08\ \mathrm{fm}$ in units of $\mathrm{Tflops}\cdot
\mathrm{years}$ as a function of $m_\mathrm{PS}/m_\mathrm{V}$. See text
for a detailed description of the plots.}
\label{fig:bw2}
\end{center}
\end{figure}
On the left panel there is a comparison between the results of
\cite{Urbach:2005ji,Jansen:2005yp} (squares and diamond) and the results of
\cite{Orth:2005kq} (circles). The lines are
functions proportional to $(m_\mathrm{PS}/m_\mathrm{V})^4$ (dashed) and
$(m_\mathrm{PS}/m_\mathrm{V})^6$ (solid).
The right panel shows a comparison between Ukawa's formula
in \cite{Ukawa:2002pc} (solid line) and the extrapolation of the results in
\cite{Urbach:2005ji} using a
$(m_\mathrm{PS}/m_\mathrm{V})^4$ (dashed) and a
$(m_\mathrm{PS}/m_\mathrm{V})^6$ (dotted) dependence for the data. The arrow
indicates the physical pion to rho meson mass ratio.
In addition there are also data points from staggered simulations
(see \cite{Jansen:2003nt} and references therein). In particular, this plot
indicates that running a machine with 1 Tflop sustained performance
for one year allows one to generate 1000 independent trajectories
at the physical point with $a \simeq 0.08$ fm
on a $24^3 \times 40$ lattice.
\subsection{Phase diagram of Wilson fermions}
\label{sec:phase}
In \cite{Farchioni:2004us} the first study of tmQCD with $N_{\rm f} = 2$
dynamical fermions was performed. When starting the exploration of
completely new territory, it is always good to remember a sentence of
G. Parisi \cite{Parisi:1988hz}: ``Let me describe a typical computer
simulation: the first thing to do is to look for phase transitions''.
It is then important to have a correct understanding of the phase diagram
of Wilson fermions in the three-parameter space $(\beta = 6/g_0^2,m_0,\mu)$.
To check that the results are not induced by the algorithm used it is always good
to have at least 2 algorithms that reproduce the same results.
Indeed in \cite{Farchioni:2004us}, using the so-called TSMB
\cite{Montvay:1995ea} and GHMC \cite{Hasenbusch:2001ne,Hasenbusch:2002ai}
algorithms, rather surprising results were found. The action used was the Wilson
gauge action combined with Wilson fermions with and without twisted mass.
In particular at a lattice spacing of $a\approx 0.16$ fm, strong evidence of a first
order phase transition was found for a rather large range of values of twisted
masses going from zero twisted mass to $\mu \simeq 100$ MeV. This study
also reveals that the phase transition tends to disappear as the value
of $\mu$ is increased, that it persists for $\mu=0$, and that it is volume independent.
A typical example of a MC history for the plaquette expectation value can be
seen in fig. \ref{fig:meta}, where a cold and a hot start was performed.
\begin{figure}[htb]
\epsfig{file=plots/metastability.eps,angle=0,width=0.5\linewidth}
\epsfig{file=plots/metastability-mu0.0.eps,angle=0,width=0.5\linewidth}
\caption{Metastable states at $\beta=5.2$. Left panel: MC history of the
average plaquette value with a twisted quark mass $\mu \simeq 10 $
MeV and a lattice size $16^3 \times 32$. Right panel: MC history of the
average plaquette for pure Wilson fermions ($\mu =0$)
and a lattice size $12^3 \times 24$. }
\label{fig:meta}
\end{figure}
These results help one to see older numerical and
theoretical works from a different point of view.
temperature study there was an indication of difficulties in observing a phase
with spontaneous breaking of flavour and parity symmetry (Aoki phase) at
$\beta > 4.8$.
In \cite{Blum:1994eh} the MILC collaboration found a surprising bulk first
order phase transition for Wilson fermions at $\beta \simeq 4.8$.
In \cite{Creutz:1996bg} an analysis using the linear sigma-model is
performed, finding an indication of two possible patterns of symmetry
breaking at finite lattice spacing.
This observation was put on firmer theoretical basis in \cite{Sharpe:1998xm}.
In this very important paper several interesting results and considerations
were presented that, seen now from a different point of view, can help us understand
the rather surprising numerical results obtained in \cite{Farchioni:2004us}.
In \cite{Sharpe:1998xm} for the first time the concept of chiral lagrangian at
finite lattice spacing is given. The key point of the construction is the
observation that {\it the Pauli term in the effective Symanzik lagrangian
transforms under chiral rotation exactly as does the mass term}. I would like
to add that this is also the key point for the automatic O($a$) improvement
for tmQCD at maximal twist.
Neglecting the derivative interactions, since we are interested in the vacuum state,
the potential of the effective chiral lagrangian reads
\be
{\mathcal V} = -{c_1 \over 4} \langle \Sigma+\Sigma^\dagger \rangle +
{c_2 \over 16} \langle \Sigma + \Sigma^\dagger \rangle^2
\ee
\be
c_1 \sim m'\Lambda^3 \qquad c_2 \sim m'^2\Lambda^2 + m'a\Lambda^4 +
a^2\Lambda^6 \qquad m' = m-a\Lambda^2
\ee
where $\Sigma$ is the matrix that collects the Goldstone boson fields of the
theory. We remark here that $m'$ is a redefinition of the untwisted quark mass
that includes the O($a$) shift coming from the clover term.
Up to O($a^2$) corrections, $m'$ is proportional to the PCAC quark mass.
The two terms in the potential become comparable when
$m' \sim a^2\Lambda^3$. In this region of quark masses the competition of
these two terms causes a non-trivial vacuum structure that gives the following
two scenarios \cite{Sharpe:1998xm} for the phase diagram of Wilson fermions: 1) the Aoki phase
\cite{Aoki:1984qi}; 2) The existence of a $1^{\rm st}$ order phase transition
\cite{Sharpe:1998xm}.
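Following \cite{Sharpe:1998xm}, the origin of the two scenarios can be seen
directly from this potential. Writing, for $N_{\rm f}=2$,
$\Sigma = A + iB\,\hat n\cdot\vec\tau$ with $A^2+B^2=1$ and taking
$\langle\cdot\rangle$ to be the flavour trace, the potential becomes a
function of $A$ alone,
\be
{\mathcal V}(A) = -c_1 A + c_2 A^2, \qquad -1\le A\le 1.
\ee
For $c_2>0$ the minimum $A=c_1/(2c_2)$ enters the interior of the physical
region when $|c_1|\le 2c_2$: there $B\neq 0$, flavour and parity are
spontaneously broken, and one obtains the Aoki phase. For $c_2<0$ the minimum
sits at the boundary and jumps from $A=+1$ to $A=-1$ as $c_1$ changes sign,
i.e. a first order phase transition.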
The extension to tmQCD of these results is done in
\cite{Munster:2004am,Scorzato:2004da,Sharpe:2004ps}. The result is
summarized in fig. \ref{fig:tmphase} where the x-axis is $m'/a^2$ and the
y-axis is $\mu/a^2$. A non-zero value of the twisted mass washes out the Aoki
phase, introducing an explicit breaking of flavour and parity symmetry. As can
be seen from fig. \ref{fig:tmphase} (left panel) the Aoki phase lies on the untwisted
axis. In the second scenario in fig. \ref{fig:tmphase} (right panel)
the first order phase transition line extends into
the twisted direction to a distance of $\mu_{\rm c} \approx a^2\Lambda^3$. The transition ends
with a second order phase transition point, where the neutral pion mass vanishes.
Several comments are in order now.
The occurrence of one of the two scenarios depends on the sign of the coefficient
$c_2$ proportional to the O($a^2$) term in the chiral lagrangian.
This coefficient $c_2$ depends on the choice of the gauge action, on the
presence in the lattice action of the clover term and on the bare gauge coupling.
\begin{figure}[htb]
\epsfig{file=plots/mmuphase1.eps,angle=0,width=0.5\linewidth}
\epsfig{file=plots/mmuphase2.eps,angle=0,width=0.5\linewidth}
\caption{Left panel: the phase diagram of Wilson fermions according to $\chi$
PT for $c_2>0$. Right panel: as the left panel but for $c_2<0$. The x-axis
is $m'/a^2$ and the y-axis is $\mu/a^2$.}
\label{fig:tmphase}
\end{figure}
An analysis with Wilson fermions of the two dimensional Gross-Neveu model
\cite{Izubuchi:1998hy} indicates that indeed both the scenarios describe
the phase structure of Wilson fermions depending on the value of the
couplings of the model. The analysis shows that at strong coupling there is an Aoki phase
while at weak coupling the first order phase transition line sets in.
This analysis has been recently extended to the twisted mass case
\cite{Nagai:2005mi}, indicating even more complicated structures, like a coexistence
of the two scenarios at the same value of the coupling.
Our present understanding of the lattice QCD phase diagram can be
summarized as follows.
For values of the lattice spacing much coarser than $a=0.15\
\mathrm{fm}$ there is a second order phase transition from the standard
lattice QCD phase to
the Aoki phase \cite{Aoki:1984qi,Ilgenfritz:2003gw,Sternbeck:2003gy}.
For smaller values of the lattice spacing a first order phase transition
appears \cite{Farchioni:2004us,Farchioni:2004fs,Farchioni:2004ma,Farchioni:2005tu}
that separates the positive quark mass phase from the negative quark mass
phase. This first order phase transition is reminiscent of the continuum
phase transition when the quark mass is changed from positive to negative values
with the corresponding jump of the scalar condensate as the order parameter of
spontaneous chiral symmetry breaking.
The generic phase structure of lattice QCD
is illustrated in fig.~\ref{fig:phase} and discussed in
refs.~\cite{Farchioni:2004us,Farchioni:2004fs,Farchioni:2004ma}.
\begin{figure}[htb]
\vspace{-0.0cm}
\begin{center}
\epsfig{file=plots/fancyphase.ps,angle=0,width=0.55\linewidth}
\caption{Current knowledge of the Wilson lattice QCD phase diagram as a function of the inverse
gauge coupling $\beta\propto 1/g^2$, the hopping parameter $\kappa$ and the
twisted mass parameter $\mu$.}
\label{fig:phase}
\end{center}
\end{figure}
\subsection{Minimal pion mass}
In the scenario with a first order phase transition the pseudoscalar mass
$m_{\rm PS}$ cannot be made arbitrarily small, whether the chiral point is approached
from the untwisted or from the twisted direction. Lowering the quark mass from
the untwisted direction, the
algorithm will start to sample also in the region of negative masses.
The minimal pion mass reachable will then depend on the algorithm used and on
the strength of the phase transition.
Lowering the quark mass from the twisted direction, there is a minimal pion mass given directly by the
extension of the first order phase transition line, even though the twisted mass
provides a sharp infrared cutoff in the sampling performed by the algorithm.
It therefore becomes important to
understand the phase structure of lattice QCD
as a prerequisite before starting large scale
simulations. As we have seen, the extension of the first order phase transition
line in the twisted direction is proportional to the coefficient $|c_2|$.
This coefficient depends both on the gauge action used and on the presence of
the clover term in the lattice action.
In \cite{Farchioni:2005tu} the lattice spacing dependence of
the first order phase transition with the Wilson gauge action has been studied,
taking as a measure of its strength the
gap between the two phases in the plaquette expectation value and in the PCAC
quark mass.
The qualitative estimate for the lattice spacing where a minimal pion mass
$m_\pi \simeq 300$ MeV could be reached, without being affected by the first
order phase transition, is 0.07-0.1 fm.
It is suggestive that at the microscopic level the occurrence of this first
order phase transition is accompanied by a massive rearrangement
of the small eigenvalues of the Wilson-Dirac operator. This rearrangement could be
suppressed by the use of renormalization group improved or O($a^2$) improved
gauge actions; indeed, results from \cite{Aoki:2004iq} indicate that the
metastabilities in the average plaquette observed for
$N_{\rm f} = 3$ dynamical Wilson fermions with a clover term (there is also an
indication that the same metastabilities survive without a clover term for $N_{\rm f} = 3$)
can be suppressed by replacing the Wilson gauge action with the Iwasaki
action \cite{Iwasaki:1985we}.
\subsection{Tree-level Symanzik improved gauge action}
The dependence of the phase diagram on the gauge action used and on the
lattice spacing has been studied in a set of papers
\cite{Farchioni:2004us,Farchioni:2004fs,Farchioni:2004ma,Farchioni:2005tu}
(see also \cite{Farchioni:2005ec} for a detailed summary of these results).
The gauge actions so far studied can be parameterized by
\be
S_{\rm G} = \beta \big[b_0\sum_{x;\mu<\nu}\big(1-{1 \over 3}P^{1 \times
1}(x;\mu,\nu)\big) + b_1 \sum_{x;\mu\neq\nu}\big(1-{1 \over 3} P^{1 \times 2}(x;\mu,\nu)\big) \big]
\ee
with the normalization condition $b_0 = 1-8b_1$.
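For example, the tree-level Symanzik choice $b_1 = -{1\over 12}$ gives
$b_0 = {5\over 3}$, while the DBW2 choice $b_1 = -1.4088$ considered below gives
$b_0 \simeq 12.27$.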
The parameters of the simulations with the tree-level Symanzik action \cite{Weisz:1982zw}
($b_1 = - {1 \over 12}$) are summarized in tab. \ref{tab:tlsym}.
The last line indicates an estimate of the minimal pion mass reachable at the
corresponding lattice spacings.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|}\hline
\multicolumn{1}{|c|}{ $\beta=3.65$ } &
\multicolumn{1}{|c|}{ $\beta=3.75$ } &
\multicolumn{1}{|c|}{ $\beta=3.90$ }
\\
\hline
\ $a\mu = 0.01$ & \ $a\mu = 0.0094-0.005$ & \ $a\mu = 0.0075-0.004$ \\
\ $a\approx 0.13$ fm & \ $a\approx 0.12$ fm & \ $a\approx 0.1$ fm \\
\ $L\approx 1.56$ fm & \ $L\approx 2$ fm & \ $L\approx 1.6$ fm \\
\ $(m_\pi)_{\rm min} \approx 450$ MeV & \ $(m_\pi)_{\rm min} \approx 400$
MeV & \ ($m_\pi)_{\rm min} \approx 280$ MeV \\
\hline
\end{tabular}
\caption{Summary of the simulation parameters for dynamical runs of tmQCD with
tlSym gauge action. The last line is an estimate of the minimal pion mass
reachable without encountering metastabilities.}
\label{tab:tlsym}
\end{center}
\end{table}
In order to check for a possible phase transition and the corresponding
metastabilities, one has to measure the average plaquette value as a function of the
hopping parameter $\kappa$ in runs that start from both a hot and a cold
configuration.
Since the metastability, if any, will show up around $\kappa_c$ (determined by
monitoring the PCAC mass $m_{\rm PCAC}$ at the corresponding fixed value of $\mu$),
attention should be given only to the hot and cold runs at the $\kappa$-values closest
to $\kappa_c$.
At $\beta=3.65$, $a \simeq 0.13$ fm, lattice size $12^3 \times 24$ and
$\mu \simeq 15$ MeV there are signs of a very nearby phase transition, as can be deduced
from the steep rise in $\kappa$ of the plaquette expectation value (left panel
in fig. \ref{fig:b3.65}), and from a very slow
thermalization and large fluctuations of the plaquette MC history over several
hundreds of trajectories (right panel in fig. \ref{fig:b3.65}).
An estimate of the pseudoscalar mass close to $\kappa_{\rm c}$ is $m_{\rm PS} \simeq 450$ MeV.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\linewidth]{plots/av_plaq_tlSym_b3.65_m0.01_L12T24.eps}
\includegraphics[width=0.49\linewidth]{plots/L12T24_b3.65_m0.01_plaq_mc_history.eps}
\caption{Left panel: Average plaquette value
vs.~$\kappa$ at $\beta=3.65, r_0 \mu=0.038$
on a $12^3 \times 24$ lattice from hot (red symbols) and cold starts (blue symbols).
Right panel:
Average plaquette MC time history for two runs at $\beta=3.65, r_0 \mu=0.038, \kappa=0.17024$
on a $12^3 \times 24$ lattice starting from hot (red line) and cold configuration
(blue line).}
\label{fig:b3.65}
\end{center}
\end{figure}
At $\beta=3.75$, $a \simeq 0.12$ fm, lattice size $12^3 \times 24$ and
$\mu \simeq 8$ MeV the situation is similar to the one observed before at
$\beta=3.65$ and $\mu \simeq 15$ MeV.
This is described by fig. \ref{fig:b3.75} (left panel), where
the $\kappa$ dependence of the PCAC mass is plotted.
This dependence is very useful to monitor a possible metastable critical
point, since the latter shows up as a different extrapolated $\kappa_{\rm c}$ when
the extrapolation is performed from positive or from negative quark masses.
A second twisted mass $\mu \simeq 15$ MeV has been simulated on a $16^3
\times 32$ lattice around the
critical point for this lattice spacing. Even though a strict check with a
hot and a cold start is not available at the moment, the $\kappa$ dependence of
the PCAC mass for this second value of $\mu$ suggests that the critical point
is free from metastabilities. The pseudoscalar mass measured for the heaviest twisted
mass is around $m_{\rm PS} \simeq 400$ MeV.
\begin{figure}[htb]
\vspace{-0.0cm}
\begin{center}
\includegraphics[width=0.49\linewidth]{plots/tlSym_L12T24_b3.75_m0.005_mPCAC_vs_kappa.eps}
\includegraphics[width=0.49\linewidth]{plots/tlSym_L16T32_b3.90_m0.0075_mPCAC_vs_kappa.eps}
\caption{PCAC quark mass $m_{\rm PCAC}$ vs.~$\kappa$ on a $16^3 \times 32$
lattice. Left panel: $a \simeq
0.12 $ fm, $\mu \simeq 8 $ MeV. Right panel: $a \simeq 0.1$ fm, $\mu \simeq
15$ MeV.}
\label{fig:b3.75}
\end{center}
\end{figure}
At $\beta=3.9$, $a \simeq 0.1$ fm, lattice size $16^3 \times 32$ and $\mu \simeq 8$ and $15$
MeV, there are no signs of metastabilities at the two corresponding critical
points.
In fig. \ref{fig:b3.75} (right panel) the $\kappa$ dependence
of the PCAC quark mass is plotted.
The pseudoscalar masses obtained for the two values of $\mu$ are respectively
$m_{\rm PS} \simeq 280$ and $450$ MeV.
We remark also that the physical volume at this $\beta$ value is rather small,
$L\simeq1.6$ fm, and results obtained for pure Wilson fermions
\cite{Luscher:2004rx,Orth:2005kq} indicate that for these quark masses and
these volumes the finite size effects could be substantial. The estimate of
the minimal pseudoscalar mass for this lattice spacing is then clearly only
an upper bound.
\subsection{DBW2 gauge action}
In this section I summarize the results \cite{Farchioni:2004fs}
obtained using the so-called DBW2 gauge action \cite{Takaishi:1996xj} ($b_1 = -1.4088$).
The parameters used in the simulations are summarized in tab. \ref{tab:dbw2}.
The twisted mass for the two lattice spacings is kept roughly fixed at $\mu
\approx 12$ MeV.
The last line indicates an estimate of the minimal pion mass reachable at the
corresponding lattice spacings.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|}\hline
\multicolumn{1}{|c|}{\ $\beta=0.67$ } &
\multicolumn{1}{|c|}{\ $\beta=0.74$ }
\\
\hline
\ $ a\mu = 0.01$ & \ $a\mu = 0.0075$ \\
\ $a\approx 0.19$ fm & \ $a\approx 0.12$ fm \\
\ $L\approx 2.3$ fm & \ $L\approx 2 $ fm \\
\ $(m_\pi)_{\rm min}\approx 360$ MeV & \ $(m_\pi)_{\rm min}\approx 320$ MeV \\
\hline
\end{tabular}
\caption{Summary of the simulation parameters for dynamical runs of tmQCD with
DBW2 gauge action. The last line is an estimate of the minimal pion mass
reachable without encountering metastabilities.}
\label{tab:dbw2}
\end{center}
\end{table}
Also for this gauge action several quantities have been computed. Here we
concentrate, as before, on the PCAC mass and on the minimal pion mass.
In contrast with the tlSym case, here simulations at full twist
were never performed, so the evidence for a metastability region can be
deduced only indirectly from the dependence of the PCAC mass on the untwisted
quark mass, as discussed before.
In fig. \ref{fig:dbw2} the $1/(2\kappa)$ dependence of the PCAC mass
is shown for the two lattice spacings used. At $a \approx 0.19$ fm there is indirect
evidence of a small metastability at full twist, which seems to disappear at
$a \approx 0.12$ fm.
\begin{figure}[htb]
\vspace{+1cm}
\begin{center}
\epsfig{file=plots/full_tw_extra_b067.eps,angle=0,width=0.45\linewidth}
\hspace{0.5cm}
\epsfig{file=plots/full_tw_extra_b074.eps,angle=0,width=0.45\linewidth}
\caption{Determination of the critical hopping parameter
$\kappa_{\rm c}$ by extrapolating to zero the untwisted PCAC quark mass $m_{\rm PCAC}$.
The small discrepancy observed at $\beta=0.67$ (left panel)
between extrapolations from positive and negative
quark masses is probably a small effect of the first order phase transition.
For $\beta=0.74$ (right panel), extrapolations
from both sides give consistent results. An alternative way to fix the
critical mass is also plotted (see \cite{Farchioni:2005ec} for details).}
\label{fig:dbw2}
\end{center}
\end{figure}
To summarize, there are first indications that, in order to reach pion masses
of the order of $m_{\rm PS} \simeq 300$ MeV with tmQCD, a gauge action like
tlSym or DBW2 is appropriate. There are also first indications that this pion
mass can be reached with DBW2 at slightly coarser lattices. However, in order to avoid
possible large cutoff effects with DBW2 (see for example fig. 3 in
\cite{Necco:2003vh}) or big coefficients in perturbative expansions, with the
present data the tlSym gauge action may be considered a better choice.
\subsection{$N_{\rm f} = 2+1+1$}
The fact that tmQCD can be formulated only for an even number of flavours is not a
limitation. Indeed, using an off-diagonal mass splitting, in which the degenerate quark
doublet has a flavour orientation different from that of the splitting between the quarks,
it was shown in \cite{Frezzotti:2003xj} that the determinant for non-degenerate
quarks is real and positive (see \cite{Pena:2004gb} for alternative
formulations of tmQCD including a non-degenerate doublet).
The only restriction of the construction in \cite{Frezzotti:2003xj}
is on the value of the ratio between the renormalization constants of the
pseudoscalar and scalar currents. To give an example, fixing the values of the
renormalized {\it strange} and {\it charm} quark masses gives the following constraint
\be
\mu^{\rm R}_{\rm c} \simeq 1.5~{\rm GeV} \qquad \mu^{\rm R}_{\rm s} \simeq 0.1~{\rm GeV}
\Rightarrow {Z_{\rm P} \over Z_{\rm S}} > 0.875.
\ee
At this conference first results with dynamical $N_{\rm f} = 2+1+1$ twisted
quarks have been presented \cite{Farchioni:2005ec}.
The simulations of the $N_{\rm f} = 2+1+1$ theory are performed by a
polynomial hybrid Monte Carlo algorithm (PHMC)\cite{Frezzotti:1997ym}.
The structure of the algorithm goes along the lines indicated in
\cite{Montvay:2005tj}. At this conference another variant of the PHMC to
include the two non degenerate twisted quarks has been presented \cite{Chiarappa:2005mx}.
\section{Further results}
In this section I summarize further results concerning tmQCD.
In \cite{Frezzotti:2005my} a strategy has been presented
to compute $B_{\rm K}$ and matrix elements related
to the $\Delta I = 1/2$ rule without mixing with operators with wrong
chiralities, retaining all the properties of automatic O($a$) improvement.
The strategy is based on the usage of a mixed action (OS for valence quarks
and tmQCD for sea quarks) \cite{Frezzotti:2004wz}.
In the quenched approximation $B_{\rm K}$ has been
computed \cite{Dimopoulos:2004xc} in the continuum limit, using clover improved tmQCD and
a non-perturbative renormalization without mixing (in the SF scheme).
A strategy to compute $B_{\rm B}$ with tmQCD, along the lines of \cite{Guagnelli:2002xz}, has
been proposed in \cite{Palombi:2005pa}, and along the lines of \cite{Frezzotti:2004wz} in
\cite{DellaMorte:2004wn}.
In \cite{Pena:2004gb} a strategy, based on clover improved tmQCD with $N_{\rm
f} =4$, has been proposed to compute the renormalization of $K
\rightarrow \pi$ matrix elements.
In \cite{Gattringer:2005vp} the effect of a twisted mass term on the low-lying modes of the
Wilson-Dirac operator and a remnant of the index theorem for twisted mass
fermions have been discussed.
\section{Conclusions}
Several lessons come from quenched studies of tmQCD. With a particular and field
theoretically well founded definition of the critical mass, automatic O($a$)
improvement is effective down to small pion masses ($m_\pi = 272$ MeV), and the
residual O($a^2$) cutoff effects are small. The bending phenomenon simply results
from large cutoff effects, which are reproducible with $\chi$PT at finite lattice
spacing; it is absent, even at
finite lattice spacing, with a suitable choice of the critical mass.
Flavour breaking is an issue and has to be investigated with dynamical
simulations.
We have indications of the existence of an Aoki phase for quenched Wilson
fermions at lattice spacings around $a \simeq 0.1$ fm.
To perform dynamical simulations at small pion masses,
algorithmic improvements are crucial, and new algorithms now
allow efficient simulations with Wilson
fermions and, most probably, with staggered fermions.
We have a much better understanding of the phase structure of dynamical Wilson
fermions. A theoretically well founded action (tlSym gauge and tmQCD fermion
action) allows dynamical simulations with $N_{\rm f}=2$ at pion masses
smaller than 300 MeV starting from a lattice spacing $a \simeq 0.1$ fm,
allowing matching with $\chi$PT, and simulations with
$N_{\rm f}=2+1+1$ flavours are just starting.
In many cases it has been shown that the renormalization properties of local
operators related to very important phenomenological quantities are continuum
like.
Although presently not all aspects of tmQCD are fully investigated, tmQCD is
an attractive and powerful discretization of lattice QCD, and it
certainly belongs to the pool of well founded fermion actions that ought to be
used to control the continuum limit of physical quantities of interest.
\vspace{0.5cm}
{\bf Acknowledgments}:
I thank the LOC for the stimulating atmosphere of the conference.
I would like to thank F. Farchioni, I. Montvay, K. Nagai,
M. Papinutto, E. Scholz, L. Scorzato, N. Ukita,
C. Urbach, U. Wenger and I. Wetzorke for discussions and a most
enjoyable collaboration, and in particular Karl Jansen for constant
encouragement, and for a careful reading of this manuscript.
I have profited from interesting and useful discussions with
S. Aoki, O. B\"ar, R. Frezzotti, C. Michael,
G. C. Rossi, S. Sharpe, S. Sint.
This work is partially supported by the DFG Sonderforschungsbereich/Transregio
SFB/TR9-03.
\bibliographystyle{JHEP-2}
\section{Introduction}
In the recent years, online social networks have emerged
as a significant platform for discussion and dissemination of political information.
For example, $2011$ Pew surveys \cite{lot:pew_2011} found that 22\% of adult Internet users participated in
political campaigns through at least one of the major social media platforms (Twitter, Facebook, Myspace)
during the 2010 US elections. Similarly, it was found in \cite{lit:pew_2012} that, in 2012,
34\% of social network users posted their own thoughts on political or social issues,
and 38\% of users ``liked" and reposted political posts of others.
This increasing importance of social media and the relative convenience of its analysis
attracted attention from academic researchers.
Among the questions that have been investigated are: can future election results be predicted (e.g., \cite{lit:gayo_limits}),
is political information on Twitter credible \cite{lit:castillo_inf_cred,lit:morris_tweeting_believing,lit:ratkiewicz_polit_abuse},
who are users whose opinion on a certain subject is influential \cite{lit:barbieri_topic_aware_infl,lit:weng_twitterrank},
and how to leverage anonymized web search queries to analyze and visualize political issues \cite{lit:weber_webQ_polit}.
In this paper we analyze social factors associated with the level of participation of
users in the political discussion. Two complementary theories have been suggested in the scientific literature
to explain the interaction between the likemindedness of one's social environment and the level
of political activity:
\begin{itemize}
\item\textbf{Echo chamber and echo chamber amplification.} In \cite{lit:echo_chamber}, the author shows that people tend
to look for cognitive comfort by discussing their opinions with like-minded people.
Their opinions are thus echoed and reinforced by their social peers, creating an \emph{echo chamber} effect.
In the context of the web, the echo chamber effect is achieved when people follow blogs and news sources
that do not challenge their political opinions. This theory predicts that people
in comforting environments such as echo chambers will exhibit an \emph{amplified} level of political activity.
\item\textbf{Disagreement effect.} A large body of political-science literature
\cite{lit:moy_predicting,lit:mutz2006hearing,lit:nir_disagreement,lit:pattie_conversation}
explores the effect that \emph{disagreement}, i.e., having political opinions different from your (non-virtual) social peers,
has on your political activity.
Nir \cite{lit:nir_disagreement} shows that disagreement has a dual effect.
A politically isolated person, in the sense that \emph{all} of their peers disagree with their opinion,
tends to exhibit a lower than average level of political activity.
However, a person with politically heterogeneous social peers tends to exhibit a higher level of political activity
(see also \cite{lit:pattie_conversation}), even when compared to people completely surrounded with like-minded peers
(echo chamber). This theory predicts that the level of political activities should be attenuated
for isolated users and have a dominant peak for heterogeneous social environments.
\end{itemize}
In this paper we test the presence and the relative importance of these two phenomena
in political discussions in Twitter.
We also test a conjecture that likemindedness of both the virtual (web) environment
and the physical (geographical) environment have effect on a user's level of political activity.
\section{Methods}\label{sec:data_set_desc}
In this paper we analyze data from Twitter - a micro-blogging service that allows users to post
short ``tweets" and to receive tweets made by other users by ``following" their Twitter feeds, thus
creating a social network of Twitter accounts. Our data extraction technique largely follows
methods from our previous work in \cite{lit:my_msr_journal}.
In our analysis we extract and analyze four overlapping sets of Twitter users.
We begin by extracting \emph{Raw-DS}, which is a large set of users with known political affiliations
and known levels of political publishing activity before 2012 US presidential election.
We then extract a \emph{Retweet-network} graph that is used as an approximation of a social network
connecting Twitter users in Raw-DS.
Using connections defined by Retweet-network, we extract two subsets of Raw-DS: \emph{Follower-DS}
and \emph{Followed-DS}. These sets of users are used to analyze the relation between
users' level of political activity and the likemindedness of their neighbors in the Retweet-network.
Finally, we extract a small subset \emph{Geographical-DS}
of users of Raw-DS that have enough available geo-spatial information to identify counties they reside in.
The Geographical-DS data set is then used to analyze the relation between users' level of political activity and
the likemindedness of their physical environment.
We proceed to formally define each of these sets.\\
$ $\\
\textbf{Raw-DS.}
We begin by extracting a large set of users with known political affiliations and
known levels of political publishing activity before 2012 US presidential election.
We denote this set of users by \emph{Raw-DS}.
In what follows, we refer to users affiliated with Obama and with Romney as pro-Obama and pro-Romney users, respectively.
To this end, we employed the method described in \cite{lit:my_msr_journal}.
Namely, we looked for specific highly-partisan hashtags (a single word preceded by ``\#" sign,
listed in Table \ref{Table:hashtags_identPolitUsers}) among the tweets made in the 10 days following Election Day.
We picked this method over other existing methods (e.g., \cite{lit:conover_predicting_politAlign,lit:twit_voting_beh_from_lingExp,lit:pennacchiotti_demRepAndStarbucks})
because of the ease of its implementation and its accuracy (above $95\%$), which is higher than that of other solutions.
The simplicity and the higher accuracy of our method come at the expense of a smaller recall than that of
other existing methods. However, the obtained user population was large enough for meaningful analysis.
As in \cite{lit:my_msr_journal}, we found a total of $372,769$ pro-Obama users and
$22,902$ pro-Romney users.
\begin{center}
\begin{table}[ht]
\hfill{}
\begin{tabular}{| c | l |}
\hline
$\quad$Affiliation$\quad$ & $\quad$Used hashtags$\quad$\\
\hline
Pro-Obama & \#voteobama, \#obama2012, \#goobama,\\
& \#obamabiden, \#guardthechange, \\
& \#4moreyears,\\
& \#forward, \#forwardwithobama,\\
& \#obamaforpresident,\#igoobama\\
\hline
Pro-Romney & \#romneyryan2012, \#voteromneyryan,\\
&\#voteromney, \#benghazi, \#nobama, \\
&\#imwithmitt, \#americascomebackteam, \\
&\#fireobama, \#teamprolife, \#gogop\\
\hline
\end{tabular}
\hfill{}
\caption{List of hashtags used for identification of the political affiliation of users.}\label{Table:hashtags_identPolitUsers}
\end{table}
\end{center}
We then extracted all tweets published by users in Raw-DS during the three-and-a-half-month period between Aug. 1st and Nov. 15th, $2012$.
This interval includes tweets published, roughly, three months before the Election Day (Nov. 6th) and
ten days after it.
Overall, there were $55,740,001$ tweets. As in \cite{lit:my_msr_journal}, we used hashtags such as
"\#election2012" (see Table \ref{Table:hashtags_identPolitTweets} for the complete list) to extract a subset
of $465,842$ tweets on political issues.
\begin{center}
\begin{table}[ht]
\hfill{}
\begin{tabular}{| l |}
\hline
List of hashtags:\\
\hline
\tiny{$ $}\\
All hashtags from Table \ref{Table:hashtags_identPolitUsers}, \#tcot, \#election2012, \#gop,\\
\#romney,\#obama, \#elections, \#president\\
\hline
\end{tabular}
\hfill{}
\caption{List of hashtags used to identify political tweets.}\label{Table:hashtags_identPolitTweets}
\end{table}
\end{center}
Given the number of political and non-political tweets made by each user in Raw-DS, we were able to
calculate their political activity (\emph{PA}), which we quantified
as the fraction of political tweets among their posts.\\
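That is, for a user $u$,
$$\mathrm{PA}(u)=\frac{\#\{\mbox{political tweets of } u\}}{\#\{\mbox{tweets of } u\}}\in[0,1].$$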
$ $\\
\textbf{Retweet-network.}
We next inferred the edges of the social graph that connects users in Raw-DS.
There are several commonly used proxies for the social connections between Twitter users.
For instance, one approach is to assume that there exists a directed edge from user A to user B if user A follows user B's Twitter feed.
Another approach is to define an edge from user A to user B if user A ``retweeted" one of user B's posts.
We choose the latter approach as it indicates a stronger connection between users: namely,
user A is not merely skimming through user B's Twitter feed, but also actively engaging with its content.
We refer to the corresponding social network over users in Raw-DS as the \emph{Retweet-network}.
We keep the terms ``follower" and ``followed" to describe the relationship between users in the Retweet-network.
The Retweet-network contains $201,362$ edges for $392,995$ nodes, which implies that the network is highly sparse.
Table \ref{Table:RTnetwork_stats} shows the number of edges between users for the four possible pairs of political affiliations.
\begin{center}
\begin{table}
\hfill{}
\begin{tabular}{| c | c |}
\hline
$\space$ User pair $\space$ & $\space$ Number of edges$\space$\\
$\space$ & $\space$ (in thousands) $\space$\\
\hline
(pO,pO) & $655$\\
(pO,pR) & $51$\\
(pR,pO) & $48$\\
(pR,pR) & $42$\\
\hline
\end{tabular}
\hfill{}
\caption{Connectivity statistics in Retweet-network. Acronym pO stands for pro-Obama user and acronym pR stands for pro-Romney user.
The first user in the pair follows the second user in the pair, e.g., in a pair (pO,pR) a pro-Obama user follows a pro-Romney user.}\label{Table:RTnetwork_stats}
\end{table}
\end{center}
We note a large number of retweets that cross party lines,
i.e., a tweet made by a pro-Obama user is retweeted by a pro-Romney user or vice versa.
In \cite{lit:my_msr_journal}, we show that some of these retweets are part of a political debate.
In particular, when a link to a political article published by a pro-Obama user is retweeted by a pro-Romney user,
the text accompanying the retweeted link is likely to be modified, usually
to interpret the link according to the users' own point of view \cite{lit:my_msr_journal}.\\
$ $\\
\textbf{Follower-DS.} Using the Retweet-network, we extracted a subset \emph{Follower-DS} of users in Raw-DS
that have at least one follower. This subset contains $3,831$ pro-Romney users and $48,858$ pro-Obama users.
For users in Follower-DS, we refer to the likemindedness of their followers as
\emph{follower-LM} and quantify it as the fraction of users' followers that share their choice of candidate.\\
$ $\\
\textbf{Followed-DS.} Similarly to Follower-DS, we extracted a subset \emph{Followed-DS} of users in Raw-DS
that follow at least one user. This subset contains $10,217$ pro-Romney users and $187,374$ pro-Obama users.
For users in Followed-DS, we refer to the likemindedness of Twitter feeds they follow as
\emph{followed-LM} and quantify it as the fraction of Twitter feeds followed by these users
that share their choice of candidate.
We note that this separation between Follower-DS and Followed-DS is meaningful
since only $12.5$\% of edges in Retweet-network are reciprocated.\\
$ $\\
\textbf{Geographical-DS.}
Finally, we identified a subset \emph{Geographical-DS} of users in Raw-DS
with enough geo-spatial information to identify counties they reside in.
To this end, we began by extracting a larger subset of users that
provided their geographical location (in terms of GPS coordinates) in at least two of their tweets.
For each such user, we calculated their average location by taking the mean value of their GPS coordinates.
In order for this average location to be representative,
we discarded all users for whom the maximal distance between the user's reported locations was greater than $50$ kilometers.
We further discarded all users with the average location outside of the United States.
The remaining subset Geographical-DS contains a total of $1,083$ pro-Romney users and $18,475$ pro-Obama users.
For each user in Geographical-DS we use their average location to identify the county this user resides in
and obtain the official voting record for this county.
Given this information we are able to calculate the likemindedness of a user's geographical environment (\emph{geographical-LM}),
which is quantified as the voting share of the user's candidate in their county.
\section{Results}
We begin by using data from Follower-DS to analyze the dependence of the median level of political activity
on the likemindedness of users that read posts of the considered user, i.e., the dependence of median PA on follower-LM.
To this end, we divide the range $[0,1]$ of possible values of follower-LM into $10$ equally-sized bins. For each bin, we calculate the median PA of
all users whose value of follower-LM falls in this bin.
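For concreteness, this binning procedure can be sketched as follows (a minimal
numpy sketch; the function and array names are ours and not part of our actual
pipeline):
\begin{verbatim}
import numpy as np

def binned_median_pa(lm, pa, n_bins=10, min_points=20):
    # lm: per-user likemindedness values in [0, 1] (numpy array)
    # pa: per-user political activity, i.e., the fraction of
    #     political tweets (numpy array of the same length)
    # Bins with fewer than `min_points` users are omitted,
    # as in the figures below.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    medians = []
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        if i < n_bins - 1:
            mask = (lm >= lo) & (lm < hi)
        else:
            mask = (lm >= lo) & (lm <= hi)  # include lm == 1.0
        if mask.sum() >= min_points:
            medians.append(((lo + hi) / 2, float(np.median(pa[mask]))))
    return medians
\end{verbatim}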
Figure \ref{fig:follower_LM_Dems} and Figure \ref{fig:follower_LM_Reps} depict these dependencies for
pro-Obama and pro-Romney users, respectively.
For both pro-Obama and pro-Romney users we observe a unimodal dependence of median PA on follower-LM.
The single peak corresponds to the disagreement effect and is
obtained for medium values of follower-LM (between $0.2$ and $0.6$ for pro-Obama users and
between $0.4$ and $0.7$ for pro-Romney users).
The level of political activity of politically isolated users (follower-LM smaller than $0.2$ for pro-Obama users
and smaller than $0.4$ for pro-Romney users) is negligible. This observation is also in line with the disagreement effect.
The level of political activity of users in the echo chamber environment (large values of follower-LM) is also very low,
which is in contradiction to the echo chamber amplification theory.
We now use data from Followed-DS to analyze the dependence of the median level of political activity
on the likemindedness of the Twitter feeds read by the considered user, i.e., the dependence of
median PA on followed-LM.
Again, we divide the range $[0,1]$ of possible values of followed-LM into $10$ equally-sized bins.
For each bin, we calculate the median PA of all users with the value of followed-LM that falls in this bin.
Figure \ref{fig:followed_LM_Dems} and Figure \ref{fig:followed_LM_Reps} depict these dependencies for
pro-Obama and pro-Romney users, respectively.
For both pro-Obama and pro-Romney users, small values of followed-LM (below $0.1$) and high values of followed-LM (above $0.9$)
correspond to very low levels of political activity. The first observation is in line with the disagreement effect and
the second observation contradicts the echo chamber amplification theory. For both pro-Obama and pro-Romney users,
the dominant peak corresponds to the disagreement effect and is obtained for values of followed-LM close to $0.4$.
However, in contrast to the dependence of PA on follower-LM, there are also secondary peaks.
For Obama supporters, there is a secondary peak for the values of followed-LM between $0.1$ and $0.3$.
We hypothesize that this secondary peak is also a manifestation of the disagreement effect.
For Romney supporters, there is a secondary peak for the values of followed-LM between $0.6$ and $0.9$.
This secondary peak may be a manifestation of both the disagreement effect and the echo chamber amplification.
Finally, we analyzed the relationship between the median PA and geographical-LM.
Similarly to the figures above, we divide the range $[0,1]$ of possible values of geographical-LM into $10$
equally-sized bins. For each bin, we calculate the median PA of all users with the value of geographical-LM
that falls in this bin.
The results are depicted on Figure \ref{fig:geoLM_Dems} for pro-Obama users and on Figure \ref{fig:geoLM_Reps} for pro-Romney users.
In contrast to the dependence of median PA on the likemindedness of a user's virtual environment (follower-LM and followed-LM),
the dependence of PA on geographical-LM does not differ between pro-Obama and pro-Romney users.
In fact, it seems that the level of political activity of users is independent of geographical-LM, and hence
exhibits neither the echo chamber amplification nor the disagreement effect.
\onecolumn
\begin{figure}
\centering
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[scale=0.4]{RT1_AtLeastOneNeigh_Follower_Dems.eps}
\caption{Median PA (fraction of political tweets) vs. follower-LM (fraction of users' followers that support their candidate)
for pro-Obama users. Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin contains at least $20$ points. One bin with less than $20$ points was omitted.}\label{fig:follower_LM_Dems}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[scale=0.4]{RT1_AtLeastOneNeigh_Follower_Reps.eps}
\caption{Median PA (fraction of political tweets) vs. follower-LM (fraction of users' followers that support their candidate)
for pro-Romney users. Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin contains at least $20$ points.}\label{fig:follower_LM_Reps}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[scale=0.4]{RT1_AtLeastOneNeigh_Followed_Dems.eps}
\caption{Median PA (fraction of political tweets) vs. followed-LM (fraction of people followed by the user that support user's candidate)
for pro-Obama users. Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin contains at least $20$ points.}\label{fig:followed_LM_Dems}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[scale=0.4]{RT1_AtLeastOneNeigh_Followed_Reps.eps}
\caption{Median PA (fraction of political tweets) vs. followed-LM (fraction of people followed by the user that support user's candidate)
for pro-Romney users. Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin contains at least $20$ points.}\label{fig:followed_LM_Reps}
\end{minipage}
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[scale=0.4]{PAvsGeoLM_Dems.eps}
\caption{Median PA (fraction of political tweets) vs. geographical-LM
(voting share of the user's candidate in their county).
Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin has at least 10 points. Three bins with less than $10$ points were omitted.}\label{fig:geoLM_Dems}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[scale=0.4]{PAvsGeoLM_Reps.eps}
\caption{Median PA (fraction of political tweets) vs. geographical-LM
(voting share of the user's candidate in their county). Median PA is depicted as a dashed line.
Lower confidence interval is given by the 20th percentile and the upper confidence interval
is given by the 80th percentile. Each bin has at least 10 points. Three bins with less than $10$ points were omitted.}\label{fig:geoLM_Reps}
\end{minipage}
\end{figure}
\twocolumn
\section{Discussion}
In this paper we analyzed the connection between a user's level of political activity on Twitter (PA)
and the likemindedness of their virtual (follower-LM, followed-LM) and geographical (geographical-LM) environments.
Specifically, we focused on the presence of the echo chamber amplification and the disagreement effect.
We showed that user's PA as a function of follower-LM has a similar form for both
pro-Obama and pro-Romney users. In both cases,
high level of political activity of a user correlates with a politically diverse set of followers,
i.e., readers of user's Twitter feed. The dependence of PA on follower-LM exhibits strong disagreement effect,
but does not manifest the echo chamber amplification.
The dependence of user's PA on followed-LM is different for users supporting different candidates.
Pro-Romney users exhibit high level of political activity when Twitter feeds they follow are predominantly,
but not purely, pro-Romney. Namely, pro-Romney users are most active in the mostly likeminded environments.
This behavior is a manifestation of the combination of the disagreement effect and the echo chamber amplification.
In contrast, pro-Obama users tend to have a high level of political activity in politically adverse environments,
namely, when the Twitter feeds they follow are predominantly, but not exclusively, pro-Romney.
This behavior is in line with the disagreement effect and contradicts the echo chamber amplification theory.
We also show that the level of users' political activity is independent of
the likemindedness of their geographical environment, for both pro-Obama and
pro-Romney users. In particular, this implies that the dependence of PA on
geographical-LM manifests neither the disagreement effect nor the echo chamber amplification.
We thus conclude that the level of political activity of Twitter users correlates with the
likemindedness of their virtual environment and is independent of the likemindedness of their geographical environment.
The exact form of correlation between the PA and the likemindedness of the virtual environment
is in line with the disagreement effect and,
with the exception of PA as a function of followed-LM for Romney supporters,
contradicts the echo chamber amplification theory.
The main limitation of our approach is in the selection of users for our analysis. Specifically,
we ignored users that are politically active but did not express explicit support for either of
the candidates of the $2012$ US presidential election.
This obviously introduced a bias into our measurements of users' follower-LM and followed-LM.
\bibliographystyle{abbrv}
\section{Introduction}
In \cite{VHa:96} Van Hamme presented several results and conjectures
concerning a curious analogy between the values of certain hypergeometric series
and the congruences of some of their partial sums modulo a power of a prime.
In this paper we would like to discuss a new example of this analogy. Let us consider
\begin{eqnarray*}
\sum_{k=1}^{\infty}{(-1)^k\over k}{-{1\over 2} \choose k}&=&
\left(1\over 2\right)+{1\over 2}\left(1\cdot 3\over 2\cdot 4\right)
+{1\over 3}\left(1\cdot 3\cdot 5\over 2\cdot 4 \cdot 6\right)
+{1\over 4}\left(1\cdot 3\cdot 5 \cdot 7\over 2\cdot 4 \cdot 6\cdot 8\right)
+\cdots\\
&=&\int_0^{-1}{1\over x}\left({1\over \sqrt{1+x}}-1\right)\,dx=
-2\left[\log\left({1+\sqrt{1+x}\over 2}\right)\right]_0^{-1}
=2\log 2.
\end{eqnarray*}
Let $p$ be a prime number: what is the $p$-adic analogue of the above result?
\noindent The real case suggests replacing the logarithm
with some $p$-adic function which behaves in a similar way.
It turns out that the right choice is the {\sl Fermat quotient}
$$q_p(x)={x^{p-1}-1 \over p}$$
(which is fine since $q_p(x \cdot y)\equiv q_p(x)+q_p(y)$ (mod $p$)),
and, as shown in \cite{SuzwTa:09}, the following congruence holds for any prime $p\not=2$
$$\sum_{k=1}^{p-1}{(-1)^k\over k}{-{1\over 2} \choose k}\equiv 2\,q_p(2) \pmod{p}.$$
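For instance, for $p=3$ the left-hand side is
${1\over 2}+{1\over 2}\cdot{3\over 8}={11\over 16}\equiv 2 \pmod{3}$,
matching $2\,q_3(2)=2$.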
Here we improve this result to the following statement.
\begin{thm}\label{T11} For any prime $p>3$
\begin{eqnarray*}
\sum_{k=1}^{p-1} {(-1)^k\over k}{-{1\over 2} \choose k}
&\equiv& 2q_p(2)-pq_p(2)^2+{2\over 3}p^2q_p(2)^3+{7\over 12}p^2 B_{p-3} \\
&\equiv& -\sum_{k=1}^{(p-1)/2}{1\over k} \pmod{p^3}
\end{eqnarray*}
where $B_n$ is the $n$-th Bernoulli number.
\end{thm}
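As a quick numerical check, take $p=5$, so that $q_5(2)=3$ and $B_2={1\over 6}$:
both sides of the congruence reduce to $61$ modulo $125$, since
$$\sum_{k=1}^{4} {(-1)^k\over k}{-{1\over 2} \choose k}={1321\over 1536}\equiv 61\equiv
-{3\over 2}=-\sum_{k=1}^{2}{1\over k} \pmod{125}.$$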
In the proof we will employ some new congruences for
alternating multiple harmonic sums which are interesting in themselves such as
\begin{align*}
&H(-1,-2;p-1)=\sum_{0<i<j<p}{(-1)^{i+j}\over ij^2}\equiv -{3\over 4}B_{p-3} &\pmod{p}\,,\\
&H(-1,-1,1;p-1)=\sum_{0<i<j<k<p}{(-1)^{i+j}\over ijk}\equiv q_p(2)^3+{7\over 8}B_{p-3}&\pmod{p}.
\end{align*}
\section{Alternating multiple harmonic sums}
Let $r>0$ and let $(a_1,a_2,\dots,a_r)\in (\mathbb Z^*)^r$.
For any $n\geq r$, we define the {\it alternating multiple harmonic sum} as
$$H(a_1,a_2,\dots,a_r;n)=
\sum_{1\leq k_1<k_2<\dots<k_r\leq n}\; \prod_{i=1}^r{\mbox{sign}(a_i)^{k_i}\over k_i^{|a_i|}}.$$
The integers $r$ and $\sum_{i=1}^r |a_i|$ are respectively the {\it depth} and the {\it weight} of the harmonic sum.
From the definition one derives easily the {\it shuffle relations}:
\begin{align*}
&H(a;n)\cdot H(b;n)=H(a,b;n)+H(b,a;n)+H(a\oplus b;n)\\
&H(a,b;n)\cdot H(c;n)=H(c,a,b;n)+H(a,c,b;n)+H(a,b,c;n)\\
&\hspace{40mm}+H(a\oplus b,c;n)+H(a,b\oplus c;n)
\end{align*}
where $a\oplus b=\mbox{sign}(ab)(|a|+|b|)$.
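For example, for $n=2$ and $a=b=1$ the first relation reads
$H(1;2)^2={9\over 4}=2H(1,1;2)+H(2;2)=2\cdot{1\over 2}+{5\over 4}$.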
\noindent Moreover, if $p$ is a prime, by replacing $k_i$ with $p-k_i$ we get the
{\it reversal relations}:
\begin{align*}
&H(a,b;p-1)\equiv H(b,a;p-1)(-1)^{a+b}\mbox{sign}(ab) &\pmod{p}\,,\\
&H(a,b,c;p-1)\equiv H(c,b,a;p-1)(-1)^{a+b+c}\mbox{sign}(abc) &\pmod{p}.
\end{align*}
The values of several {\sl non-alternating} (i.e., when all the indices are positive)
harmonic sums modulo a power of prime are well known:
\begin{enumerate}
\item[(i).] (\cite{Ho:07}, \cite{ZC:07}) for $a,r>0$ and for any prime $p>ar+2$
$$
H(\left\{a\right\}^r;p-1)\equiv \left\{
\begin{array}{lll}
(-1)^{r}{a(ar+1)\over 2(ar+2)}\,p^2\,B_{p-ar-2} &\pmod{p^3} &\mbox{if $ar$ is odd}\\ \\
(-1)^{r-1}{a\over ar+1}p\,B_{p-ar-1} &\pmod{p^2} &\mbox{if $ar$ is even}
\end{array}
\right.;
$$
\item[(ii).] (\cite{Sunzh:00}) for any prime $p>3$
$$
H\left(1;{p-1\over 2}\right)\equiv
-2q_p(2)+p q_p(2)^2-{2\over 3}\,p^2 q_p(2)^3-{7\over 12}\,p^2\,B_{p-3} \pmod{p^3}.
$$
and for $a>1$ and for any prime $p>a+1$
$$H\left(a;{p-1\over 2}\right)\equiv\left\{
\begin{array}{lll}
-{2^a-2\over a}\,B_{p-a} &\pmod{p} &\mbox{if $a$ is odd}\\ \\
{a(2^{a+1}-1)\over 2(a+1)}\,p\,B_{p-a-1} &\pmod{p^2} &\mbox{if $a$ is even}
\end{array}
\right.;$$
\item[(iii).] (\cite{Ho:07}, \cite{Zh:06}) for $a,b>0$ and for any prime $p>a+b+1$
$$H(a,b;p-1)\equiv {(-1)^b\over a+b}{a+b\choose a} \,B_{p-a-b} \pmod{p}$$
(note that $B_{2n+1}=0$ for $n>0$).
\end{enumerate}
The following result will allow us to compute the mod $p$ values
of multiple harmonic sums of depth $\leq 2$ when the indices are all negative.
\begin{thm}\label{T21} Let $a,b>0$. Then for any prime $p\not=2$
\begin{align*}
&H(-a;p-1)=-H(a;p-1)+{1\over 2^{a-1}} H\left(a;{p-1\over 2}\right),\\
&2H(-a,-a;p-1)=H(-a;p-1)^2-H(2a;p-1),
\end{align*}
and
$$H(-a,-b;p-1)\equiv
-\left(1-{1\over 2^{a+b-1}}\right)\,H(a,b;p-1)-{(-1)^b\over 2^{a+b-1}} H\left(a;{p-1\over 2}\right)H\left(b;{p-1\over 2}\right)
\pmod{p}.
$$
\end{thm}
\begin{proof} The shuffling relation given by $H(-a;p-1)^2$ yields the second equation.
As regards the first equation we simply observe that $(-1)^i/i^a$ is positive if and only if $i$ is even.
We use a similar argument for the congruence: since $(-1)^{i+j}/(i^a j^b)$ is positive if and only
if $i$ and $j$ are both even or if $(p-i)$ and $(p-j)$ are both even then
$$H(-a,-b;p-1) \equiv -H(a,b;p-1)+
{2\over 2^{a+b}}\left(
H\left(a,b;{p-1\over 2}\right)+(-1)^{a+b}H\left(b,a;{p-1\over 2}\right)
\right).$$
Moreover, by decomposing the sum $H(a,b;p-1)$ we obtain
$$
H(a,b;p-1)\equiv H\left(a,b;{p-1\over 2}\right)+
H\left(a;{p-1\over 2}\right)(-1)^b H\left(b;{p-1\over 2}\right)
+(-1)^{a+b}H\left(b,a;{p-1\over 2}\right).
$$
that is
$$H\left(a,b;{p-1\over 2}\right)+(-1)^{a+b}H\left(b,a;{p-1\over 2}\right)
\equiv H(a,b;p-1)-H\left(a;{p-1\over 2}\right)(-1)^b H\left(b;{p-1\over 2}\right).$$
and the congruence follows immediately.
\end{proof}
\begin{cor}\label{C22} For any prime $p>3$
\begin{align*}
&H(-1;p-1)\equiv -2q_p(2)+pq_p(2)^2-{2\over 3}p^2 q_p(2)^3-{1\over 4}p^2\,B_{p-3} &\pmod{p^3}\,, \\
&H(-1,-1;p-1)\equiv 2q_p(2)^2-2pq_p(2)^3-{1\over 3}p\,B_{p-3}&\pmod{p^2}.
\end{align*}
Moreover for $a>1$ and for any prime $p>a+1$
$$H(-a;p-1)\equiv -{2^a-2\over a2^{a-1}}B_{p-a} \pmod{p}.$$
\end{cor}
\begin{proof} The proof is straightforward: apply Theorem \ref{T21}, (i), (ii), and (iii).
\end{proof}
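As a numerical check of the first congruence, for $p=5$ one has
$H(-1;4)=-1+{1\over 2}-{1\over 3}+{1\over 4}=-{7\over 12}\equiv 114 \pmod{125}$,
and indeed
$-2q_5(2)+5\,q_5(2)^2-{2\over 3}\cdot 25\,q_5(2)^3-{1\over 4}\cdot 25\,B_2
=-411-{25\over 24}\equiv 114 \pmod{125}.$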
The following theorem is a variation of a result presented in \cite{ZhSuzw:09}.
\begin{thm}\label{T23} Let $r>0$. Then for any prime $p>r+1$
$$H(\{1\}^{r-1},-1;p-1)\equiv (-1)^{r-1}\sum_{k=1}^{p-1}{2^k\over k^r} \pmod{p}.$$
\end{thm}
\begin{proof} For $r\geq 1$, let
$$F_r(x)=\sum_{0<k_1<\dots<k_r<p} {x^{k_r}\over k_1\cdots k_r} \in \mathbb Z_p[x]
\quad\mbox{and}\quad f_r(x)=\sum_{0<k<p} {x^k\over k^r}\in \mathbb Z_p[x].$$
We show by induction that
$$F_r(x)\equiv (-1)^{r-1} f_r(1-x) \pmod{p}$$
then our congruence follows by taking $x=-1$.
\noindent For $r=1$, since ${p\choose k}\equiv(-1)^{k-1}{p\over k}\pmod{p^2}$ for $0<k<p$, we have
$$f_1(x)\equiv {1\over p}\sum_{k=1}^{p-1}(-1)^{k-1}{p\choose k}{x^k}
=-{1\over p}\sum_{k=1}^{p-1}{p\choose k}{(-x)^k}=
{1-(1-x)^p-x^p\over p}\pmod{p}.$$
Hence $F_1(x)=f_1(x)\equiv f_1(1-x)\pmod{p}$.
\noindent Assume that $r>1$, then the formal derivative yields
\begin{eqnarray*}
{d\over dx} F_r(x)&=&\sum_{0<k_1<\dots<k_r<p} {k_rx^{k_r-1}\over k_1\cdots k_r}
=\sum_{0<k_1<\dots<k_{r-1}<p} {1\over k_1\cdots k_{r-1}}\sum_{k_r=k_{r-1}+1}^{p-1} x^{k_r-1}\\
&=&\sum_{0<k_1<\dots<k_{r-1}<p} {1\over k_1\cdots k_{r-1}}\cdot {x^{p-1}-x^{k_{r-1}}\over x-1}\\
&=&{x^{p-1}\over x-1}\, H(\{1\}^{r-1};p-1)-{1\over x-1}\, F_{r-1}(x)
\equiv {F_{r-1}(x)\over 1-x} \pmod{p}.
\end{eqnarray*}
Moreover
$${d\over dx} f_r(1-x)=-\sum_{0<k<p} {(1-x)^{k-1}\over k^{r-1}}=-{f_{r-1}(1-x)\over 1-x}$$
Hence, by the induction hypothesis
$$
(1-x){d\over dx} \left(F_r(x)+(-1)^r f_r(1-x)\right)\equiv
F_{r-1}(x)+(-1)^{r-1} f_{r-1}(1-x)\equiv 0 \pmod{p}.
$$
Thus $F_r(x)+(-1)^r f_r(1-x)\equiv c_1$ (mod $p$) for some constant $c_1$ since this polynomial
has degree $<p$. Substituting $x=0$, we find by (i) that
$$F_r(x)+(-1)^r f_r(1-x)\equiv c_1\equiv F_r(0)+(-1)^r f_r(1)=(-1)^r H(r;p-1)\equiv 0 \pmod{p}.$$
\end{proof}
With the next two corollaries we complete the list of the mod $p$ values of
the alternating multiple harmonic sums of depth and weight $\leq 3$.
\begin{cor}\label{C24} The following congruences mod $p$ hold for any prime $p>3$
\begin{align*}
&H(1,-1;p-1)\equiv -H(-1,1;p-1)\equiv q_p(2)^2\,, \\
&H(-1,2;p-1)\equiv H(1,-2;p-1)\equiv H(2,-1;p-1)\equiv H(-2,1;p-1)\equiv {1\over 4} B_{p-3}\,,\\
&H(-1,-2;p-1)\equiv -H(-2,-1;p-1)\equiv -{3\over 4}B_{p-3}\,.
\end{align*}
\end{cor}
\begin{proof} By Theorem \ref{T23} and by \cite{Gr:04}
$$H(1,-1;p-1)\equiv-\sum_{k=1}^{p-1}{2^k\over k^2}\equiv q_p(2)^2 \pmod{p}.$$
By (i), Corollary \ref{C22}, and the shuffling relation given by the product $H(-1;p-1)H(2;p-1)$ we get
$$H(-1,2;p-1)\equiv{1\over 2}H(-1;p-1)H(2;p-1)-{1\over 2}\; H(-3;p-1)\equiv {1\over 4} B_{p-3} \pmod{p}.$$
By (ii), (iii), and Theorem \ref{T21}
$$H(-1,-2;p-1)\equiv-{3\over 4}H(1,2;p-1)-{1\over 4} H\left(1;{p-1\over 2}\right)H\left(2;{p-1\over 2}\right)
\equiv -{3\over 4}B_{p-3}
\pmod{p}.$$
The remaining congruences follow by applying the reversal relation of depth $2$.
\end{proof}
\begin{cor}\label{C25} The following congruences mod $p$ hold for any prime $p>3$
\begin{align*}
&H(-1,1,-1;p-1)\equiv 0 \,,\\
&H(1,1,-1;p-1)\equiv H(-1,1,1;p-1)\equiv -{1\over 3}q_p(2)^3-{7\over 24}B_{p-3}\,, \\
&H(-1,-1,1;p-1)\equiv -H(1,-1,-1;p-1)\equiv q_p(2)^3+{7\over 8}B_{p-3}\,, \\
&H(1,-1,1;p-1)\equiv {2\over 3}q_p(2)^3+{1\over 12}B_{p-3}\,,\\
&H(-1,-1,-1;p-1)\equiv -{4\over 3}q_p(2)^3-{1\over 6}B_{p-3}\,.
\end{align*}
\end{cor}
\begin{proof}
By the reversal relation of depth 3,
$H(-1,1,-1;p-1)\equiv -H(-1,1,-1;p-1)\equiv 0$.
By Theorem \ref{T23} and by \cite{DiSk:06}
$$H(1,1,-1;p-1)\equiv\sum_{k=1}^{p-1}{2^k\over k^3}\equiv -{1\over 3}q_p(2)^3+{7\over 12}H(-3;p-1)
\equiv -{1\over 3}q_p(2)^3-{7\over 24}B_{p-3}\pmod{p}.$$
By the shuffling relations given by the products
$$H(1,-1;p-1)H(-1;p-1),\;H(1,-1;p-1)H(1;p-1),\,\mbox{and}\;H(-1,-1;p-1)H(-1;p-1)$$
we respectively find that
\begin{align*}
&2H(1,-1,-1;p-1)\equiv H(1,-1;p-1)H(-1;p-1)-H(1,2;p-1)-H(-2,-1;p-1)\,, \\
&H(1,-1,1;p-1)\equiv -2H(1,1,-1;p-1)-2H(2,-1;p-1)\,,\\
&3H(-1,-1,-1;p-1)\equiv H(-1,-1;p-1)H(-1;p-1)-2H(2,-1;p-1).
\end{align*}
The remaining congruences follow by applying the reversal relation of depth 3.
\end{proof}
\section{Proof of Theorem \ref{T11}}\label{sec:two}
The following useful identity appears in \cite{SuzwTa:09}. Here
we give an alternate proof by using Riordan's array method
(see \cite{Sp:94} for more examples of this technique).
\begin{thm}\label{T31} Let $n\geq d>0$. Then
$$ d\sum_{k=1}^{n} {2k \choose k+d}\,{x^{n-k}\over k}
=\sum_{k=0}^{n-d} {2n \choose n+d+k} v_k-{2n \choose n+d}$$
where $v_0=2$, $v_1=x-2$ and $v_{k+1}=(x-2)v_k-v_{k-1}$ for $k\geq 1$.
\end{thm}
\begin{proof} We first note that
\begin{eqnarray*}
{2k \choose k+d}&=&
{2k \choose k-d}=(-1)^{k-d}{-k-d-1 \choose k-d}\\
&=& [z^{k-d}]{1\over (1-z)^{k+d+1}}=[z^{-1}]
{z^{d-1}\over(1-z)^{d+1}}\cdot \left({1 \over z(1-z)}\right)^k.
\end{eqnarray*}
Since the residue of a derivative is zero then
\begin{eqnarray*}
d\sum_{k=1}^{n} {2k \choose k+d}\,{x^{n-k}\over k}
&=&[z^{-1}]\, x^n\, {dz^{d-1}\over (1-z)^{d+1}}\, G\left({1\over x z(1-z)}\right)\\
&=&-[z^{-1}]\, x^n\, {z^{d}\over (1-z)^{d}}\, G'\left({1\over x z(1-z)}\right)\cdot \left({1\over x z(1-z)}\right)'\\
&=&[z^{-1}]\, {z^{d-n-1}\over (1-z)^{n+d+1}}\, {1-x^nz^n(1-z)^n\over 1-xz+xz^2}\cdot (1-2z)\\
&=&[z^{-1}]\, {z^{d-n-1}\over (1-z)^{n+d+1}}\, {1-2z\over 1-xz+xz^2}.
\end{eqnarray*}
where $G(z)=\sum_{k=1}^n {z^k\over k}$ and $G'(z)=\sum_{k=1}^n z^{k-1}={1-z^n\over 1-z}$.
Moreover
\begin{eqnarray*}
{2n\choose n+d+k}&=&{2n \choose n-d-k}=(-1)^{n-d-k} {-n-d-k-1\choose n-d-k}\\
&=&[z^{n-d-k}]{1\over (1-z)^{n+d+k+1}}=
[z^{-1}]{z^{d-n-1}\over (1-z)^{n+d+1}}\cdot \left({z\over 1-z}\right)^k
\end{eqnarray*}
Letting $F(z)=\sum_{k=0}^{\infty} v_k z^k={2-(x-2)z\over 1-(x-2)z+z^2}$ then
\begin{eqnarray*}
\sum_{k=0}^{n-d} {2n \choose n+d+k} v_k-{2n \choose n+d}
&=&
[z^{-1}]{z^{d-n-1}\over (1-z)^{n+d+1}}\cdot F\left({z\over 1-z}\right)
-[z^{-1}]{z^{d-n-1}\over (1-z)^{n+d+1}}\\
&=&[z^{-1}]\, {z^{d-n-1}\over (1-z)^{n+d+1}}\,\left({(2-xz)(1-z)\over 1-xz+xz^2}-1\right)\\
&=&[z^{-1}]\, {z^{d-n-1}\over (1-z)^{n+d+1}}\, {1-2z\over 1-xz+xz^2}
.
\end{eqnarray*}
\end{proof}
\begin{cor}\label{C32} For any $n>0$
$$4^n\sum_{k=1}^{n} {-{1\over 2} \choose k} \,{(-1)^k\over k}=
-4(-1)^{n}\sum_{d=0}^{n-1}{(-1)^{d}\over n-d}\sum_{j=0}^{d-1} {2n \choose j}-2(-1)^{n}\sum_{d=0}^{n-1}{(-1)^{d}\over n-d}{2n \choose d}.
$$
\end{cor}
\begin{proof} Since
$$0=\sum_{d=-k}^{k}(-1)^{d}{2k\choose k+d}
={2k\choose k}+2\sum_{d=1}^{k} (-1)^d {2k\choose k+d}$$
then for any $n\geq k$
$$(-1)^k{-{1\over 2}\choose k}=4^{-k}{2k\choose k}=-2\cdot4^{-k}\sum_{d=1}^{n} (-1)^d {2k\choose k+d}.$$
For $x=4$ we have $v_k=2$ for all $k\geq 0$, and by Theorem \ref{T31}
\begin{eqnarray*}
4^n\sum_{k=1}^{n} {(-1)^k\over k}{-{1\over 2} \choose k}&=&
-2\sum_{k=1}^{n}{4^{n-k}\over k}\sum_{d=1}^{n} (-1)^d {2k\choose k+d}
=-2\sum_{d=1}^{n}(-1)^d \sum_{k=1}^{n} {4^{n-k}\over k}{2k\choose k+d}\\
&=&-4\sum_{d=1}^{n}{(-1)^d\over d}\sum_{k=0}^{n-d} {2n \choose n+d+k}+2\sum_{d=1}^{n}{(-1)^d\over d}{2n \choose n+d}\\
&=&-4\sum_{d=1}^{n}{(-1)^d\over d}\sum_{k=1}^{n-d} {2n \choose n-d-k}-2\sum_{d=1}^{n}{(-1)^d\over d}{2n \choose n-d}\\
&=&-4(-1)^{n}\sum_{d=0}^{n-1}{(-1)^{d}\over n-d}\sum_{j=0}^{d-1} {2n \choose j}-2(-1)^{n}\sum_{d=0}^{n-1}{(-1)^{d}\over n-d}{2n \choose d}.
\end{eqnarray*}
\end{proof}
We will make use of the following lemma.
\begin{lem}\label{L33} For any prime $p\not =2$ and for $0<j<p$
$${2p\choose j}\equiv -2p{(-1)^j\over j}+4p^2{(-1)^j\over j}H(1;j-1)\pmod{p^3}$$
and
$${2p\choose p}\equiv 2-{4\over 3}p^3 B_{p-3} \pmod{p^4}.$$
\end{lem}
\begin{proof} It suffices to expand the binomial coefficient in this way
$${2p\choose j}=-2p{(-1)^j\over j}\prod_{k=1}^{j-1}\left(1-{2p\over k}\right)
={(-1)^j\over j}\sum_{k=1}^{j}(-2p)^k\,H(\{1\}^{k-1};j-1).
$$
and apply (i).
\end{proof}
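As a check of the second congruence, for $p=5$ one has ${10\choose 5}=252$ and
$2-{4\over 3}\cdot 125\cdot B_2=2-{250\over 9}\equiv 252 \pmod{625}$.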
\begin{proof}[{\sl Proof of Theorem 1.1.}] Letting $n=p$ in the identity given by Corollary \ref{C32} we obtain
$$4^p\sum_{k=1}^{p} {(-1)^k\over k}{-{1\over 2} \choose k}=
4\sum_{0\leq j<d<p}{(-1)^{d}\over p-d} {2p \choose j}+2\sum_{0\leq d<p}{(-1)^{d}\over p-d}{2p \choose d}.$$
that is
$$4^{p-1}\sum_{k=1}^{p-1} {(-1)^k\over k}{-{1\over 2} \choose k}={2-{2p \choose p}\over 4p}
-\sum_{0<d<p}{(-1)^{d}\over d}
+\sum_{0<j<d<p}{(-1)^{d}\over p-d} {2p \choose j}+{1\over 2}\sum_{0<d<p}{(-1)^{d}\over p-d}{2p \choose d}.$$
Now we consider each term of the r.h.s. separately. By Lemma \ref{L33}
$${2-{2p \choose p}\over 4p}\equiv {1\over 3}\,p^2 B_{p-3}\pmod{p^3}.$$
By Corollary \ref{C22}
$$\sum_{0<d<p}{(-1)^{d}\over d}=H(-1;p-1)\equiv-2q_p(2)+pq_p(2)^2-{2\over 3}\,p^2 q_p(2)^3-{1\over 4}\,p^2 B_{p-3}\pmod{p^3}.$$
Since for $0<d<p$
$${1\over p-d}=-{1\over d(1-{p\over d})}\equiv - {1\over d}-{p\over d^2} \pmod{p^2}$$
then by Lemma \ref{L33}, (i), and (iii) we have that
\begin{eqnarray*}
\sum_{0<d<p}{(-1)^{d}\over p-d}{2p \choose d}&\equiv&
\sum_{0<d<p}\left(-{(-1)^{d}\over d}-p{(-1)^{d}\over d^2}\right)
\left(-2p{(-1)^d\over d}+4p^2{(-1)^d\over d}H(1;d-1)\right)\\
&\equiv& 2p H(2;p-1)+2p^2 H(3;p-1)-4p^2 H(1,2;p-1)\\
&\equiv&-{8\over 3}\, p^2 B_{p-3} \pmod{p^3}.
\end{eqnarray*}
In a similar way, by Lemma \ref{L33} and Corollaries \ref{C24} and \ref{C25} we get
\begin{eqnarray*}
\sum_{0<j<d<p}{(-1)^{d}\over p-d} {2p \choose j}&\equiv&
\sum_{0<j<d<p}\left(-{(-1)^{d}\over d}-p{(-1)^{d}\over d^2}\right)
\left(-2p{(-1)^j\over j}+4p^2{(-1)^j\over j}H(1;j-1)\right)\\
&\equiv& 2p H(-1,-1;p-1)+2p^2 H(-1,-2;p-1)-4p^2 H(1,-1,-1;p-1)\\
&\equiv& 4pq_p(2)^2+{4\over 3}\,p^2B_{p-3} \pmod{p^3}.
\end{eqnarray*}
Thus
$$4^{p-1}\sum_{k=1}^{p-1}{(-1)^k\over k}{-{1\over 2} \choose k}\equiv
2q_p(2)+3pq_p(2)^2+{2\over 3}\,p^2 q_p(2)^3+{7\over 12}\,p^2 B_{p-3} \pmod{p^3}.$$
Since $4^{p-1}=(q_p(2)p+1)^2=1+2q_p(2)p+q_p(2)^2p^2$, we have
$$4^{-(p-1)}=(1+2q_p(2)p+q_p(2)^2p^2)^{-1}\equiv 1-2q_p(2)p+3q_p(2)^2p^2 \pmod{p^3}.$$
Finally
\begin{eqnarray*}
\sum_{k=1}^{p-1} {(-1)^k\over k}{-{1\over 2} \choose k}
&\equiv& \left(1-2q_p(2)p+3q_p(2)^2p^2\right)
\left(2q_p(2)+3pq_p(2)^2+{2\over 3}\,p^2 q_p(2)^3+{7\over 12}\,p^2 B_{p-3}\right)\\
&\equiv& 2q_p(2)-pq_p(2)^2+{2\over 3}p^2q_p(2)^3+{7\over 12}p^2 B_{p-3} \pmod{p^3}.
\end{eqnarray*}
Note that by (ii) the r.h.s. is just $-H(1;(p-1)/2)=-\sum_{k=1}^{(p-1)/2}{1\over k}$.
\end{proof}
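Both sides of the resulting congruence, namely
$\sum_{k=1}^{p-1}{(-1)^k\over k}{-{1\over 2}\choose k}=\sum_{k=1}^{p-1}{1\over k\,4^k}{2k\choose k}
\equiv -\sum_{k=1}^{(p-1)/2}{1\over k}\pmod{p^3}$,
can also be compared directly for small primes, again reading fractions as inverses modulo $p^3$
(a numerical sketch, not part of the proof):
\begin{verbatim}
from math import comb

# Check sum_{k=1}^{p-1} binom(2k,k)/(k 4^k) == -sum_{k=1}^{(p-1)/2} 1/k
# modulo p^3, with fractions taken as modular inverses.
for p in (5, 7, 11, 13, 17):
    mod = p**3
    lhs = sum(comb(2 * k, k) * pow(k * 4**k, -1, mod)
              for k in range(1, p)) % mod
    rhs = -sum(pow(k, -1, mod) for k in range(1, (p - 1) // 2 + 1)) % mod
    assert lhs == rhs, p
print("Theorem 1.1 checked modulo p^3 for p in {5, 7, 11, 13, 17}")
\end{verbatim}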
\section*{Acknowledgments}
\newlength{\bibitemsep}\setlength{\bibitemsep}{.2\baselineskip plus .05\baselineskip minus .05\baselineskip}
\newlength{\bibparskip}\setlength{\bibparskip}{0pt}
\let\oldthebibliography\thebibliography
\renewcommand\thebibliography[1]{%
\oldthebibliography{#1}%
\setlength{\parskip}{\bibitemsep}%
\setlength{\itemsep}{\bibparskip}%
}
\section{Background: FL Aggregation}~\label{sec:background}
\begin{algorithm}
\small
\caption{Generalized FedAvg~\cite{fieldguide}}~\label{alg:fedavg}
\mbox{{\bf Aggregator Side}} \\
\mbox{ } \\
$\mbox{Initial model } m^{1}$ \\
\For(){$r ~\in~ \{1,2,\ldots,R\}$}{
Sample a subset $\mathcal{S}^{(r)}$ of participants \\
\textsc{send} $m^{(r)}$ \mbox{to each} $i \in \mathcal{S}^{(r)}$ \\
\textsc{recv} \mbox{model update} $\triangle_{i}^{(r)}$ \mbox{from each} $i \in \mathcal{S}^{(r)}$ \\
\mbox{Aggregate } $\triangle^{(r)} \gets \frac{1}{N} {\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}}$ \\
$m^{(r+1)} \gets \textsc{optimizer} (m^{r}, -\triangle^{(r)}, \eta^{(r)})$
}
\mbox{ } \\
\mbox{{\bf Participant Side}} \\
\textsc{recv} ($m^{(r)}$) from aggregator \\
Local model $x^{(r,1)} \gets m^{(r)}$\\
\For(){$k \in \{1,2,\ldots,\tau \} $ } {
Compute local stochastic gradient $g_i(x^{(r,k)})$\\
$x^{(r,k+1)} \gets \textsc{optimizer}(x^{(r,k)}, -g_i(x^{(r,k)}), \eta^{(r)})$\\
}
Compute local model update $\triangle^{(r,l)} \gets x^{(r,\tau)} - x^{(r,1)}$\\
\textsc{send} $\triangle^{(r,l)}$ to aggregator \\
\mbox{ } \\
\end{algorithm}
An aggregator typically coordinates the entire FL job.
The parties, aided by the aggregator, agree on the model architecture (ResNet, EfficientNet, etc),
optimizer to use (SGD, Adam, AdaGrad, etc.) and hyperparameters to be
used for the FL job (batch size, learning rate, aggregation frequency etc.). The aggregator is responsible for
durably storing the global model and keeping track of the FL job.
We illustrate FL using the most common algorithm used for neural networks and
gradient descent based machine learning models -- FedAvg~\cite{fieldguide}.
For FedAvg (Algorithm \ref{alg:fedavg}), the aggregator selects a random subset $\mathcal{S}^{(r)} \subset \mathcal{S}$ of parties
for every round $r$. The aggregator initializes the global model $m^{1}$ using the same process as if the
job is centralized (i.e, either randomly or from existing pre-trained models). At each round, the
aggregator transmits the global model $m^{(r)}$ to $\mathcal{S}^{(r)}$. Once a party receives $m^{(r)}$, it uses
$m^{(r)}$ to make $\tau$ training passes on its local dataset. $\tau$ is the aggregation frequency.
It then computes the local gradient update
after $\tau$ passes, $\triangle^{(r,l)}$, and transmits the same to the aggregator. The aggregator in FedAvg
then computes the weighted average of all gradient updates --
$\frac{1}{N} \sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$ to
compute the global gradient update $\triangle^{(r)}$ and update the global model (for the next round)
$m^{(r+1)}$. This process proceeds for a set number $R$ of rounds or until the aggregator
has determined that the model has converged. The term $n_i$ in the weighted average is the number of training
samples at party $i$ and $N$ is the total number of training samples involved in the round, i.e., $N = \sum_{i \in \mathcal{S}^{(r)}} n_i$.
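For concreteness, the aggregator-side weighted average reduces to a few lines of Python.
The sketch below is illustrative only (model updates are flattened into NumPy vectors); it
computes $\triangle^{(r)}$ from a list of $(n_i, \triangle_{i}^{(r)})$ pairs.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the aggregator-side FedAvg step: each party i
# reports (n_i, delta_i), its sample count and (flattened) model update.
def fedavg_aggregate(updates):
    """updates: list of (n_i, delta_i) pairs -> weighted-average update."""
    total = sum(n for n, _ in updates)
    return sum(n * delta for n, delta in updates) / total

# Three parties with different amounts of local data:
updates = [(100, np.ones(4)), (300, 2 * np.ones(4)), (600, 4 * np.ones(4))]
print(fedavg_aggregate(updates))  # [3.1 3.1 3.1 3.1]
\end{verbatim}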
\begin{comment}
Most typically, for neural networks, parties would run a local training process on their
training data, share the gradients of their model (also called a \emph{model update}) with the aggregator,
which would then aggregate the
updates using a fusion algorithm/strategy. Then, the merged/aggregated model is sent back to all
parties for the next round of training on their local datasets.
Like regular (centralized) machine learning training which makes several passes over a
centralized dataset, an FL job proceeds over a number of model fusion/synchronization rounds, determined
by the batch size ($B$) used. While model fusion after every minibatch ($B$) is possible, typically
parties in an FL job synchronize every local epoch, i.e., they train by making a pass over their entire
local data set before fusing local models. For each round, a model update generated by a party is often intended to
be ephemeral, but must be durably stored by the aggregator until model fusion is complete,
and the aggregated model is durably stored. Model updates may be retained by each party
according to its local data retention policy, but the default behavior on the aggregator
side is to delete the model updates once the fused model is durably stored. If required
by legal or audit regulations, or for model explainability, aggregators may store model
updates long term with the permission of parties. Durable storage means reliable replicated
distributed storage (like Cloud Object Stores).
\end{comment}
\noindent{\bf Associativity of Aggregation:}
Since the number of participants typically varies between FL jobs,
and within a job (over time) as participants join and leave, horizontal scalability of FL
aggregation software is vital. \emph{Horizontally scalable} aggregation is only feasible
if the aggregation operation is associative -- assuming $\oplus$ denotes the aggregation of model updates (e.g., gradients)
$U_i$, $\oplus$ is associative if $U_1 \oplus U_2 \oplus U_3 \oplus U_4 \equiv (U_1 \oplus U_2) \oplus (U_3 \oplus U_4)$.
Associativity is the property that enables us to exploit data parallelism to
partition participants among aggregator instances, with
each instance responsible for handling updates from a subset of participants.
The outputs of these instances must be further aggregated.
In the case of FedAvg, $\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$ is associative because addition is associative,
and it is also the most computationally intensive step, because each $\triangle_{i}^{(r)}$ involves millions of floating point numbers.
A common design pattern in parallel computing~\cite{grama} is to use tree-based or hierarchical aggregation
in such scenarios, with a tree topology connecting the aggregator instances. The output of each aggregator
goes to its parent for further aggregation.
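The sketch below (illustrative Python) shows the associativity property concretely: representing a
partial aggregate as the pair $(S, N)$ of weighted sum and sample count, any grouping of pairwise
merges yields the same final result $S/N$.
\begin{verbatim}
import numpy as np

# A partial aggregate is the pair (S, N) = (sum n_i * delta_i, sum n_i);
# pairs merge pairwise, and merging is associative.
def partial(n, delta):
    return (n * delta, n)

def merge(a, b):
    return (a[0] + b[0], a[1] + b[1])

u1, u2, u3, u4 = (partial(n, np.full(3, v))
                  for n, v in [(10, 1.0), (20, 2.0), (30, 3.0), (40, 4.0)])
chain = merge(merge(merge(u1, u2), u3), u4)   # left-to-right
tree = merge(merge(u1, u2), merge(u3, u4))    # tree-shaped
assert np.allclose(chain[0], tree[0]) and chain[1] == tree[1]
# Either grouping yields the same fused update S / N.
\end{verbatim}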
\section{Evaluation}~\label{sec:eval}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\textwidth]{graphs/agglatency_effnet_cifar100.pdf}
\includegraphics[width=0.32\textwidth]{graphs/agglatency_vgg16_rvlcdip.pdf}
\includegraphics[width=0.32\textwidth]{graphs/agglatency_incep4_inaturalist.pdf}
\caption{Aggregation Latency (s) -- time taken for aggregation to finish after the last model update is available}
\label{fig:agglatency-static}
\end{figure*}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 4.58 & 1.57 & 2.92$\times$ & \\
1000 & 12.46 & 4.34 & 2.87$\times$ & \\
10000 & 15.59 & 4.82 & 3.23$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm.}~\label{tbl:joins-cifar}
\end{figure}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 10.59 & 4.29 & 2.47$\times$ & \\
1000 & 17.6 & 6.45 & 2.73$\times$ & \\
10000 & 26.82 & 7.4 & 3.62$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). VGG16 on RVL-CDIP using FedSGD aggregation algorithm.}~\label{tbl:joins-rvlcdip}
\end{figure}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 20.64 & 7.5 & 2.75$\times$ & \\
1000 & 36.64 & 10.66 & 3.44$\times$ & \\
7000 & 59.78 & 13.45 & 4.44$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). InceptionV4 on iNaturalist using FedProx aggregation algorithm.}~\label{tbl:joins-inaturalist}
\end{figure}
In this section, we evaluate the efficacy of \textsf{AdaFed}, by first comparing it
against a centralized aggregator setup common in several FL
frameworks
like IBM FL~\cite{ibmfl}, FATE~\cite{fate} and NVFLARE~\cite{nvflare}. We demonstrate how such single
aggregator setups have difficulties when scaling beyond 100 participants. We then demonstrate how
a static hierarchical (tree) overlay of aggregator instances can help with the scalability issue,
but is ineffective from the perspectives of resource consumption, utilization, cost and elasticity.
\subsection{Metrics}
Given that aggregation depends on \emph{whether} the expected number of model updates are available, we
define \emph{aggregation latency} as the time elapsed between the reception of the last model update
and the availability of the aggregated/fused model. When compared to a static tree deployment of aggregator instances,
serverless functions are dynamically instantiated in response to model updates. Deployment
of serverless functions takes a small amount of time ($<$ 100 milliseconds) and elastic scaling of
a cluster in response to bursty model updates can also take 1-2 seconds. Consequently, the overhead
of aggregation in \textsf{AdaFed}\ will usually manifest in the form of increased \emph{aggregation latency}.
It is measured for each FL synchronization round,
and the reported numbers in the paper are averaged over all the rounds of the FL job. We want aggregation latency to be as
low as possible. Scalability, or the lack thereof, of any FL aggregation architecture, also manifests in the form
of increased aggregation latency when the number of parties rises.
We therefore evaluate
(i) \emph{efficiency} by examining whether serverless functions increase the latency
of an FL job, as perceived by a participant, (ii) \emph{scalability} by examining the
impact of the number of parties on latency, (iii) \emph{adaptivity/elasticity},
by examining the impact of parties joining midway on latency.
We evaluate
\emph{resource efficiency}, by measuring resource consumption (in terms
of the number and duration of containers used for aggregation), resource (CPU and memory)
utilization and projected total cost.
We execute both hierarchical aggregation and \textsf{AdaFed}\ using
containers on Kubernetes pods in our datacenter, and measure the number of \emph{container seconds}
used by an FL job from start to finish. Container seconds is calculated by multiplying the number of
containers used with the time that each container was used/alive. This includes all the resources used by the ancillary services,
including MongoDB (for metadata), Kafka and Cloud Object Store. Measuring \emph{container seconds} helps us use
publicly available pricing from cloud providers like Microsoft Azure to project the monetary cost
of aggregation, in both cases, and project cost savings. We also report average CPU and memory utilization,
averaged over the entire FL job.
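As a worked example of this projection (matching the 10000-party row of the EfficientNet-B7/CIFAR100
table with active participants):
\begin{verbatim}
# Projected cost = container seconds x per-second container price.
rate = 0.0002692                          # US$ per container-second (Azure)
tree_secs, serverless_secs = 298900, 40849
cost_tree = tree_secs * rate              # ~80.46 US$
cost_serverless = serverless_secs * rate  # ~11.00 US$
savings = 100 * (1 - cost_serverless / cost_tree)
print(round(cost_tree, 2), round(cost_serverless, 2), round(savings, 2))
# -> 80.46 11.0 86.33
\end{verbatim}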
\subsection{Experimental Setup}
Aggregation was executed on a Kubernetes cluster on CPUs, using Docker containers. For IBM FL,
the container used for the single aggregator was run on a dedicated server with 16 CPU cores (2.2 Ghz, Intel Xeon 4210)
and 32GB of RAM. Each container for hierarchical or serverless aggregation
was equipped with 2 vCPUs (2.2 Ghz, Intel Xeon 4210) and 4 GB RAM. For hierarchical/tree aggregation, each
instance was encapsulated using the Kubernetes service abstraction. Parties were emulated, and distributed over
four datacenters (different from the aggregation datacenter) to emulate geographic distribution.
Each party was also executed inside Docker containers (2 vCPUs and 4 GB RAM) on Kubernetes, and these containers
had dedicated resources. We actually had parties running training to emulate realistic federated
learning, as opposed to using, e.g., the Tensorflow Federated simulator.
We select three real-world federated learning jobs --
two image classification tasks from the Tensorflow Federated (TFF)~\cite{tff-benchmark} benchmark
and one popular document classification task. From TFF~\cite{tff-benchmark}, we select (i) CIFAR100 dataset which can be
distributed over 10-10000 parties, with classification performed using the EfficientNet-B7 model and the FedProx~\cite{fedprox}
aggregation algorithm and (ii) iNaturalist dataset
which can be distributed over 10-9237 parties, with classification performed using the InceptionV4
model and FedProx~\cite{fedprox} aggregation algorithm. Thus,
we consider two types of images and two models of varying sizes. We do not consider other workloads
from TFF because they involve less than 1000 parties. For additional diversity, we consider a third workload
using the VGG16~\cite{vgg16-rvlcdip} model and FedSGD~\cite{bonawitz2019towards} aggregation algorithm on the RVL-CDIP~\cite{rvlcdip} document classification dataset. Each job was executed for 50 synchronization rounds, with model fusion happening after every local epoch.
For all scenarios, the datasets were partitioned in a realistic non-IID manner.
\subsection{Aggregation Latency and Scalability}~\label{sec:agglatency-scalability}
First, we consider a scenario where the number of parties
remains constant throughout the FL job, for all synchronization rounds, i.e., once the job starts, no
parties join or leave.
From Figure~\ref{fig:agglatency-static}, we observe that a centralized
single aggregator setting does not scale to a large number of parties, as average
aggregation latency increases significantly -- almost linearly.
This is because of both constrained compute/memory capacity at the single aggregator and
constrained network bandwidth needed to transfer/load model updates for aggregation.
Figure~\ref{fig:agglatency-static} also illustrates that the increase in aggregation
latency is much more gradual for both static tree overlays and \textsf{AdaFed}\ (which uses
serverless functions), enabling these architectures to scale to larger FL settings.
In fact, for both static tree and \textsf{AdaFed}, latency increases only by $\approx~4~\times$
when the number of parties increases 1000$\times$. This trend is due to the data parallelism
inherent in both the static tree and \textsf{AdaFed}.
From an efficiency standpoint, we observe that the aggregation latency is similar between static tree and
\textsf{AdaFed}, within 4\% of each other, with aggregation latency of \textsf{AdaFed}\ being slightly higher
than that of the static tree overlay. This is because using serverless functions
does not reduce the number of aggregation steps; it merely avoids having to
keep the aggregators provisioned and alive when they are not needed. We used runtime profiling to
determine that the slight (up to 4\%) increase in aggregation latency over the static tree is primarily
due to cold starts when functions are
started; the other minor factor is the latency due to the aggregation trigger. Thus, we
observe that the runtime overhead of using and triggering serverless functions is minimal.
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 1723 & 228 & 0.46 & 0.06 & 86.96\% & 12.31\% & 82.95\% & 46.54\% & 73.35\% \\
100 & 2653 & 351 & 0.71 & 0.09 & 87.32\% & 17.09\% & 83.08\% & 20.89\% & 72.89\% \\
1000 & 22340 & 2951 & 6.01 & 0.79 & 86.86\% & 10.99\% & 83.52\% & 17.23\% & 72.87\% \\
10000 & 298900 & 40849 & 80.46 & 11 & 86.33\% & 10.61\% & 84.27\% & 18.66\% & 75.39\% \\ \bottomrule
\end{tabular}
\caption{EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm. Active Participants. Resource usage and projected cost, using container cost/s of 0.0002692 US\$ (source Microsoft Azure\cite{azurepricing})}~\label{tbl:cost-cifar-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 1953 & 162 & 0.53 & 0.04 & 91.73\% & 13.17\% & 91.98\% & 47.01\% & 84.36\% \\
100 & 3078 & 234 & 0.83 & 0.06 & 92.4\% & 10.75\% & 90.22\% & 20.27\% & 82.01\% \\
1000 & 25250 & 1992 & 6.8 & 0.54 & 92.11\% & 13.86\% & 92.92\% & 22.9\% & 85.82\% \\
10000 & 337830 & 30303 & 90.94 & 8.16 & 91.03\% & 12.36\% & 89.25\% & 22.96\% & 82.89\% \\ \bottomrule
\end{tabular}
\caption{VGG16 on RVL-CDIP using FedSGD aggregation algorithm. Active Participants. Resource usage and projected cost, using
container cost/s of 0.0002692 US \$ (source Microsoft Azure\cite{azurepricing})}~\label{tbl:cost-rvlcdip-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 2365 & 389 & 0.64 & 0.1 & 83.55\% & 10.86\% & 91.86\% & 49.73\% & 82.25\% \\
100 & 3354 & 548 & 0.9 & 0.15 & 83.65\% & 14.17\% & 91.18\% & 21.71\% & 83.49\% \\
1000 & 30545 & 5144 & 8.22 & 1.38 & 83.16\% & 10.87\% & 91.77\% & 23.12\% & 83.43\% \\
9237 & 420870 & 68307 & 113.3 & 18.39 & 83.77\% & 13.44\% & 91.01\% & 21.33\% & 82.49\% \\ \bottomrule
\end{tabular}
\caption{InceptionV4 on iNaturalist using FedProx aggregation algorithm. Active Participants. Resource usage and projected cost, using
container cost/s of 0.0002692 US \$ (source Microsoft Azure\cite{azurepricing})}~\label{tbl:cost-inaturalist-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 634 & 272 & 0.17 & 0.07 & 99.28\% & 10.58\% & 81.3\% & 42.67\% & 75.26\% \\
100 & 576 & 385 & 0.16 & 0.1 & 98.89\% & 11.97\% & 79.77\% & 12.17\% & 74.77\% \\
1000 & 10516 & 1113 & 2.83 & 0.3 & 99.82\% & 11.41\% & 81.06\% & 11.05\% & 74.15\% \\
10000 & 105021 & 18741 & 28.27 & 5.05 & 99.7\% & 10.25\% & 81.09\% & 10.29\% & 74.71\% \\ \bottomrule
\end{tabular}
\caption{EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using Container cost/s of 0.0002693 US \$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-cifar-inter}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 33043 & 258 & 8.9 & 0.07 & 99.21\% & 13.23\% & 87.06\% & 46.98\% & 82.11\% \\
100 & 33037 & 385 & 8.89 & 0.1 & 98.88\% & 14.12\% & 84.2\% & 10.3\% & 81.56\% \\
1000 & 510039 & 2975 & 137.3 & 0.8 & 99.42\% & 14.46\% & 85.77\% & 10.69\% & 81.7\% \\
10000 & 5700030 & 40884 & 1534.45 & 11.01 & 99.28\% & 10.91\% & 84.27\% & 12.08\% & 80.86\% \\ \bottomrule
\end{tabular}
\caption{VGG16 on RVL-CDIP using FedSGD aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using Container cost/s of 0.0002693 US \$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-rvlcdip-inter}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 34365 & 509 & 9.25 & 0.14 & 98.52\% & 13.49\% & 87.75\% & 51.13\% & 84.17\% \\
100 & 34358 & 588 & 9.25 & 0.16 & 98.29\% & 11.08\% & 87.08\% & 11.88\% & 83.72\% \\
1000 & 734456 & 17700 & 197.72 & 4.76 & 97.59\% & 11.59\% & 89.09\% & 10.1\% & 87.28\% \\
9237 & 6783036 & 206883 & 1825.99 & 55.69 & 96.95\% & 11.43\% & 88.55\% & 11.19\% & 84.4\% \\ \bottomrule
\end{tabular}
\caption{InceptionV4 on iNaturalist using FedProx aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using Container cost/s of 0.0002693 US \$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-inaturalist-inter}
\end{figure*}
\subsection{Adaptivity/Elastic Scaling for Party Joins}
Next, we illustrate how \textsf{AdaFed}\ can handle parties joining in the middle of the job with minimal impact
on aggregation latency. For this, we consider a single synchronization round, and increase the number of
parties by 20\%. Figures~\ref{tbl:joins-cifar},\ref{tbl:joins-rvlcdip} and \ref{tbl:joins-inaturalist} illustrate the aggregation
latency when 20\% more parties send model updates during the synchronization round.
For these experiments, we only illustrate static tree based overlays and \textsf{AdaFed}. This is because
Section~\ref{sec:agglatency-scalability} has already demonstrated that centralized aggregators
do not scale to handle large numbers of parties; the effect of party joins is similar -- aggregation latency
increases almost linearly w.r.t. the number of parties joining.
Serverless aggregation in \textsf{AdaFed}\ needs no overlay reconfiguration, while static tree aggregation needs
to add more aggregator instances and reconfigure the tree. This manifests as a significant increase in aggregation latency
(2.47$\times$ to 4.44$\times$). Serverless aggregation avoids this because the number of serverless function invocations depends
on the aggregation workload, and partially aggregated updates can be stored in message queues. However,
with a tree overlay, new aggregator nodes have to be instantiated and the topology changed. Thus, although
both static tree and serverless aggregation methods are elastic, using serverless functions provides
significantly better outcomes.
\subsection{Resource Consumption \& Cost}
We compare \textsf{AdaFed}\ with static tree aggregation in terms of resource usage. Although the single aggregator
deployment (e.g., using IBM FL) has much lower resource requirements when compared to \textsf{AdaFed}, it has significantly higher latency
and does not scale. So, we do not consider it in the experiments in this section.
We first illustrate the resource consumption of experiments where parties participate actively (as defined in Section~\ref{sec:background}). Figures~\ref{tbl:cost-cifar-active},\ref{tbl:cost-rvlcdip-active} and
\ref{tbl:cost-inaturalist-active} tabulate the resource usage for the three workloads, in terms of
container seconds and CPU/memory utilization. This data illustrates the real benefits of
using serverless aggregation, with $>85\%$ resource and cost savings for the EfficientNet-B7/CIFAR100/FedProx job,
$>90\%$ for VGG16/RVL-CDIP/FedSGD and $>80\%$ for InceptionV4/iNaturalist/FedProx.
These savings are significant and are a direct result of the adaptivity of \textsf{AdaFed}, by deploying
aggregator functions only when needed. Resource wastage due to static tree can also be observed
from the CPU/memory utilization figures, which are consistently low for static tree because aggregator
instances are idle for long periods. We also observe that, while compute resources needed for aggregation increase
with the number of participants for both static tree and serverless aggregation, the amount of resource and cost
savings remains fairly consistent. We use Microsoft Azure's container pricing for illustrative purposes only;
pricing is similar for other cloud providers.
We stress that the experiments in Figures~\ref{tbl:cost-cifar-active},\ref{tbl:cost-rvlcdip-active} and
\ref{tbl:cost-inaturalist-active} are \emph{conservative}; they assume active participation. That is, parties have dedicated resources
to the FL job, parties do not fail in the middle of training, and training on parties for each round
starts immediately after a global model is published by the aggregator. In realistic scenarios,
parties (e.g., cell phones or laptops or edge devices) perform many functions other than model training,
have other tasks to do and can only be expected to respond over a period of time (response timeout).
Depending on the deployment scenario, this can be anywhere from several minutes to hours.
Figures~\ref{tbl:cost-cifar-inter},\ref{tbl:cost-rvlcdip-inter} and \ref{tbl:cost-inaturalist-inter} demonstrate that
resource and cost savings are huge ($>99\%$) when response timeout is set to
\emph{a modest} 10 minutes per aggregation round. Real world FL jobs typically use higher response timeouts and
will thus reap enormous benefits. Thus, our experiments reinforce our
confidence that serverless aggregation can lead to significant resource and cost savings
with minimal overhead.
\section{\textsf{AdaFed}: Design and Implementation}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{figures/Hierarchical.pdf}
\caption{Hierarchical/Tree-based Aggregation}
\label{fig:hierarchical}
\end{figure}
\textsf{AdaFed}, as its name suggests, adapts to the mechanics of a specific FL job.
When a job's aggregation function is associative, as it is in most FL jobs,
\textsf{AdaFed}\ leverages data parallelism
to spawn several aggregation ``entities/instances'' per FL job and arranges them
in a tree based (hierarchical) overlay. Tree-based overlays are a common distributed computing
pattern in publish-subscribe~\cite{pubsub} and stream processing~\cite{streamoverlays}.
This enables aggregation to scale to support thousands of parties.
However, using ``statically deployed'' (always on) overlays, while advantageous in
high throughput stream processing, is not suitable for FL.
Consequently, \textsf{AdaFed}\ has a programming model whose
goal is to reduce state in aggregators and to decouple aggregator instances.
This enables said instances to execute as serverless functions, which are spawned only
when model updates arrive, and are torn down when parties are busy training (no updates available
to aggregate). An aggregation function instance can be triggered once a specific number of model updates
are available; or multiple instances can be triggered once the expected number of model updates for the
current FL round are available. Once a model aggregation round is complete and the fused model
is sent back to the parties, all aggregator functions exit until the next round,
thereby releasing resources.
\subsection{Associativity $\rightarrow$ Tree-based Aggregation}
Associativity enables us to partition parties among aggregator instances, with
each instance responsible for handling updates from a subset of parties.
The outputs of these instances must be further aggregated.
A tree topology connects the aggregator instances. The output of each aggregator
goes to its parent for further aggregation. We have determined that it is possible to split any
associative FL aggregation operation into leaf and intermediate aggregators as illustrated by
Figure~\ref{fig:hierarchical}. A leaf aggregator implements logic to fuse raw model weight updates $U_i$ from a group of
$k$ parties to generate a partially aggregated model update $U_k$. For example, in the case of
FedAvg~\cite{mcmahan2017communication, mcmahan2017learning} this function would take
$k_i$ gradient update vectors and return the weighted sum $S_i = \sum_{j=1}^{k_i} n_j \triangle_{j}^{(r)}$ of these vectors,
along with the number of data items processed so far, $\sum_{j=1}^{k_i} n_j$.
An intermediate aggregator implements logic to further aggregate partially
aggregated model updates ($U_k$), in stages, to produce the final aggregated model
update ($U_F$). In the case of FedAvg, this function would aggregate (add up) multiple
$(S_i)$. If all expected model updates have arrived from the parties in $\mathcal{S}^{(r)}$, the
intermediate aggregator would have thus calculated $\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$
and $N=\sum_{i \in \mathcal{S}^{(r)}} n_i$, from which the aggregated gradient update $\triangle^{(r)}$
is calculated per Algorithm~\ref{alg:fedavg} at the root/master aggregator (Figure~\ref{fig:hierarchical}).
Establishing a tree-based aggregation topology as in Figure~\ref{fig:hierarchical}
starts by identifying the
number of parties that can be comfortably handled by an aggregator instance.
This is dependent on (i) size/hardware capability (CPU/RAM/GPU) of the instance (server or VM or container) and
its network bandwidth, and (ii) the size of the
model, which directly determines the size of the model update and the memory/compute
capabilities needed for aggregation. Assuming that each instance can handle $k$ participants,
a complete and balanced $k$-ary tree can be used. $\ceil{\frac{n}{k}}$ leaf aggregators are
needed to handle $n$ participants; the tree will have $O(\ceil{\frac{n}{k}})$ nodes.
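A small helper (illustrative Python) makes the sizing concrete:
\begin{verbatim}
from math import ceil

# Sizing a k-ary aggregation tree: n parties, each aggregator instance
# comfortably handling k inputs.
def tree_size(n, k):
    """Return (leaf_aggregators, total_aggregators)."""
    leaves = ceil(n / k)
    total, level = leaves, leaves
    while level > 1:          # add the internal levels, up to the root
        level = ceil(level / k)
        total += level
    return leaves, total

print(tree_size(10000, 50))   # (200, 205): 200 leaves + 4 + 1 = 205
\end{verbatim}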
While a tree-based FL aggregation overlay is conceptually simple, it does involve significant
implementation and deployment effort for fault tolerant aggregation.
Typically, aggregator nodes are instantiated
using virtual machines (VMs) or containers (e.g., Docker) and managed using
a cluster management system like Kubernetes. These instances are then arranged in the
form of a tree, i.e., each instance is provided with the IP address/URL of its parent,
expected number of child aggregators, credentials to authenticate itself to said parent and
send aggregated model updates. Failure detection and recovery
is typically done using heartbeats and timeouts, between each instance, its parents and children.
Once faults happen, the aggregation service provider should typically take responsibility for
recovering the instance, and communicating information about the recovered instance to
its children. Things become complicated when an instance fails
at the same time as one of its parent or child instances. Network partitions, common in
distributed software systems, are another complication in this scenario. In summary, to implement hierarchical
aggregation the traditional way~\cite{grama}, any aggregation service
has to maintain dedicated microservices to deploy, monitor and heal these aggregation overlays.
\subsection{``Idle Waiting'' in Static Tree Aggregation}
Even if some technologies like Kubernetes pods and service abstractions are able to simplify
a few of these steps, a more serious problem with tree-based aggregation overlays is that
aggregator instances are ``always on'' waiting for updates, and this is extremely wasteful in terms
of resource utilization and monetary cost. To handle FL jobs across thousands of parties,
aggregation services including \textsf{AdaFed}\
must support intermittent parties effectively. Given that, for every round, parties
may send model updates over an extended time period (hours), aggregators spend the bulk
of their time waiting. Idle waiting wastes resources and increases aggregation cost.
A tree-based aggregation overlay compounds resource wastage and cost.
Re-configuring tree-based aggregation overlays is also difficult. This is needed, for example,
when midway through a job, a hundred (or a thousand) participants decide to join.
Supporting them would require reconfiguration at multiple levels of the aggregation overlay.
Reconfigurations are also necessary to scale down the overlay when participants leave.
Thus, elasticity of aggregation is hard to achieve in the static tree setting.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\columnwidth]{figures/lambdafl-arch.pdf}
\caption{\textsf{AdaFed}\ System Architecture. Aggregators are executed as serverless functions.}
\label{fig:lambdafl-arch}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/QAgg.pdf}
\caption{\textsf{AdaFed}\ -- Illustration of stepwise serverless aggregation}
\label{fig:qagg}
\end{figure}
\subsection{Using Serverless Functions}
\textsf{AdaFed}\ takes associativity one step further: it mitigates issues with aggregation overlays by avoiding the construction of an \emph{actual/physical}
tree topology. Instead, \textsf{AdaFed}\ uses serverless functions chained together with message queues
to realize a \emph{logical} tree topology. \textsf{AdaFed}\ executes both leaf and intermediate aggregation
operations as serverless/cloud functions. These functions are executed in containers on a
cluster managed by Kubernetes, which multiplexes
multiple workloads and enables the cluster to be shared by multiple FL jobs and/or other workloads.
Also, since there is no static topology, more (or less) aggregator functions can be spawned depending
on the number of parties (model updates), thereby handling party joins/leaves effectively.
The challenge in executing aggregation as serverless functions, which are ephemeral and have no
stable storage, is to manage state -- that of each aggregation entity, intermediate aggregation
outputs, inter-aggregator communications and party-aggregator communications.
We also note that splitting aggregation into leaf and intermediate
functions makes the logic simpler. It is also possible to have a single serverless function that can operate
on both raw updates and partially fused updates; doing that will increase the complexity of the function.
\subsection{Party-Aggregator Communication}
This is done using a distributed message queue (Kafka). Kafka is a topic-based
message queue offering standard publish/subscribe semantics. That is, each queue has a ``name'' (i.e., pertains to a ``topic''), and
multiple distributed entities can write to (publish) and read from (subscribe to) it. Kafka enables us
to set a replication level per queue, which ensures durability of messages between the aggregator instances
and parties. For each FL job (with an identifier \textsf{JobID}), two queues are created at deployment time
-- \textsf{JobID-Agg} and \textsf{JobID-Parties}.
Only aggregator instances (serverless functions) can publish to \textsf{JobID-Agg} and all parties subscribe to it. Any party can publish to
\textsf{JobID-Parties} but only the aggregator instances can both publish to and read from it. This ensures that model updates
sent to \textsf{JobID-Parties} are private and do not leak to other parties. When the job starts, the aggregator publishes the
initial model on \textsf{JobID-Agg}; parties can then download the model and start training. At the end of each
job round, parties publish their model updates to \textsf{JobID-Parties}. \emph{Inter-aggregator communication}
is also handled using Kafka. Partially fused model updates are published
by aggregation functions into Kafka, and can trigger further function invocations.
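The sketch below illustrates the party side of this scheme using the \texttt{kafka-python} client.
It is illustrative only: serialization, authentication and error handling are omitted, the broker
address is a placeholder, and \texttt{local\_training} stands in for the $\tau$ local passes.
\begin{verbatim}
import pickle
from kafka import KafkaConsumer, KafkaProducer

def local_training(model):
    """Placeholder for tau local passes over the party's dataset."""
    return model  # a real party would return its model update here

job_id = "JobID"                              # assigned by AdaFed
consumer = KafkaConsumer(job_id + "-Agg",
                         bootstrap_servers="broker:9092",
                         group_id="party-7")  # one group per party
producer = KafkaProducer(bootstrap_servers="broker:9092")

for msg in consumer:                          # wait for the global model
    update = local_training(pickle.loads(msg.value))
    producer.send(job_id + "-Parties", pickle.dumps(update))
\end{verbatim}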
\subsection{Aggregation Trigger}
For serverless functions to execute, they must be triggered by some event.
\textsf{AdaFed}\ provides several flexible and configurable triggers. The simplest ones trigger an aggregation function for
every $k$ updates published to \textsf{JobID-Parties}, or every $t$ seconds. For FL jobs that use a parameter server strategy
for model updates,
it is possible in \textsf{AdaFed}\ to implement the update logic as a serverless function and trigger it every time
an update is published by a party. Other custom triggers involve the periodic execution of any
valid Python code (also as a serverless function) which triggers aggregation.
Custom triggers are vital to handling FL jobs involving intermittent parties. As an illustration, consider
an FL job where each round is successful if 50\% of parties send model updates within 10 minutes. The aggregation trigger here could be a serverless function, invoked every minute, to
count the number of parties that have responded and perform partial aggregation through leaf
aggregators; aggregation is complete when at least 50\% of the parties have responded. Another
FL job may require that aggregation waits for at least 10 minutes and considers the round successful
if at least 50\% of parties have responded. In this case, the job would contain a configuration
parameter that triggers aggregation after 10 minutes.
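A quorum-plus-deadline trigger of this kind reduces to a few lines. The sketch below is illustrative
only; in the real system, the pending-update count and the party list would be read from the job's
Kafka queue and metadata store.
\begin{verbatim}
import time

# Trigger function, run periodically (e.g., once a minute) as a
# serverless function: aggregate once a quorum of parties has responded,
# or close out the round at the deadline.
def should_aggregate(updates_pending, num_parties, round_start,
                     quorum=0.5, deadline_s=600):
    if updates_pending >= quorum * num_parties:
        return True                     # enough parties have responded
    return time.time() - round_start >= deadline_s  # round timed out
\end{verbatim}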
\subsection{End-to-End Illustration}
As illustrated in Figure~\ref{fig:qagg}, a set of parties decide to start an FL job through existing private communication channels.
``Matchmaking'' or inducing parties to join an FL job is out of scope of this paper and \textsf{AdaFed}.
We assume that these parties are convinced of the benefits of FL and want to collaborate.
While forming a group, they also decide things like model architecture, model interchange format
and hyperparameters (initial model weights,
batch size and learning rate schedule, number of rounds, target accuracy and
model update frequency). \textsf{AdaFed}\ then assigns a \textsf{JobID} to this job, creates metadata pertaining
to the job (including party identities and hyperparameters), updates its internal data structures,
and instantiates two Kafka queues -- \textsf{JobID-Agg} and \textsf{JobID-Parties}.
A serverless function is triggered to publish the initial model architecture and weights
on \textsf{JobID-Agg}. The FL job also specifies the triggering function.
Then the first round of training starts at the parties' local
infrastructure using the model downloaded/received from \textsf{JobID-Agg}.
Once local training is complete, parties send model updates to \textsf{JobID-Parties}.
The trigger (serverless) function executes, and if it determines that an aggregation has
to be initiated, triggers a leaf or intermediate aggregator. They pull inputs from \textsf{JobID-Parties}
and publish their outputs to the same. This process continues as model updates arrive. When an aggregator
function determines that all parties have sent their updates, the round is finished and
the updated model published to \textsf{JobID-Agg}. Then the next round starts.
Job termination criteria may be different depending on the type of the FL job, as discussed earlier. A time-based
or a quorum-based completion criterion may also be used.
\subsection{Durability} Aggregation checkpointing for fault tolerance determines how frequently the aggregator checkpoints its state to external stable storage.
While this is needed for traditional FL platforms, \textsf{AdaFed}\ does not use
checkpointing. If the execution of a serverless aggregation function fails, it is simply restarted.
All aggregator state (updates from parties, partially fused models, etc) is durably stored in message queues.
This aspect of \textsf{AdaFed}\ is vital to understanding \textsf{AdaFed}'s resource usage; we observe that the resource overhead of
using message queues is equal to that of checkpointing using cloud object stores in single/hierarchical aggregator schemes.
\subsection{Implementation and Elastic Scaling}
We implement \textsf{AdaFed}\ using the popular Ray~\cite{ray} distributed computing platform.
Ray provides several abstractions, including powerful serverless functions (Ray remote
functions). We explored a couple of alternate implementations, including KNative~\cite{knative} and Apache Flink~\cite{flink},
and settled on Ray because it provides arbitrarily long serverless functions, is well integrated
with common Python libraries (numpy, scikit-learn, Tensorflow and PyTorch) and provides the freedom
to use accelerators if necessary. Ray's internal message queue could have been used
in lieu of Kafka, but we found Kafka to be more robust. Aggregation triggers are implemented
using Ray, and support typical conditions on \textsf{JobID-Parties} (receipt of a certain number of messages, etc.),
but are flexible enough to execute user functions that return booleans (whether aggregation should be triggered or
not).
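The following sketch (illustrative only, not the production code) shows the flavor of aggregation as
a Ray remote function over the $(S, N)$ partial aggregates discussed earlier; the trigger code would
pull inputs from Kafka and fan them out to invocations like these.
\begin{verbatim}
import numpy as np
import ray

ray.init()

# Each invocation merges a group of (S, N) partial aggregates.
@ray.remote(num_cpus=2)
def aggregate(partials):
    S = sum(p[0] for p in partials)
    N = sum(p[1] for p in partials)
    return (S, N)

groups = [[(1 * np.ones(3), 1), (2 * np.ones(3), 2)],
          [(3 * np.ones(3), 3), (4 * np.ones(3), 4)]]
leaf_refs = [aggregate.remote(g) for g in groups]     # leaf level
root = ray.get(aggregate.remote(ray.get(leaf_refs)))  # root level
print(root[0] / root[1])                              # fused update
\end{verbatim}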
Our implementation using Ray executes on the Kubernetes cluster manager. Ray's elastic scaler
can request additional Kubernetes pods to execute serverless functions, depending on how
frequently aggregation is triggered. It is also aggressive about releasing unused pods
when there are no model updates pending. When aggregation is triggered, groups of
model updates are assigned to serverless function invocations. Each invocation is assigned
2 vCPUs and 4GB RAM (this is configurable). If there are insufficient pods to support
all these invocations, Ray autoscales to request more Kubernetes pods. This also enables
\textsf{AdaFed}\ to handle large scale party dropouts and joins effectively. Only the exact amount of
compute required for aggregation is deployed -- overheads to spawn tasks on Kubernetes pods
and create new pods are minimal, as demonstrated in our empirical evaluation.
It is also vital to ensure that model updates are not consumed twice by aggregation functions.
When aggregation is triggered for a model update in a Kafka queue, the update is marked using a flag.
The flag is released only after the output of the function is written to Kafka.
If the aggregation function crashes, Ray restarts it, thereby guaranteeing ``exactly once''
processing and aggregation semantics.
\subsection{Expressivity and Security}
The programming model of \textsf{AdaFed}\ and its implementation using Ray enables us to
support a wide variety of FL aggregation algorithms. Associativity is a pre-requisite for
aggregation scalability, and any associative algorithm can be programmed using
\textsf{AdaFed}. Most FL aggregation algorithms, including FedAvg/FedSGD~\cite{bonawitz2019towards}, FedProx~\cite{fedprox},
FedMA~\cite{fedma}, Mime~\cite{mime}, Scaffold~\cite{scaffold}, FedPA~\cite{fedpa}, FedPD~\cite{fedpd}
and FedDist~\cite{feddist} are associative. In the rare case that the aggregation
algorithm is not associative, \textsf{AdaFed}\ still uses serverless functions to spawn the single aggregator
instance and does so with a Docker container of the maximum size (configurable) supported by the underlying
Kubernetes cluster. The size and number of aggregator instances, as well as the number of parties handled
by any single instance are configurable, enabling \textsf{AdaFed}\ to support FL jobs with varying participation.
Furthermore, none of the design choices of \textsf{AdaFed}\ has any impact on FL privacy mechanisms used.
Transport layer encryption (TLS) used to transmit model updates in existing FL platforms
can be used to send updates to Kafka in \textsf{AdaFed}. Updates are decrypted by the aggregation function reading them
from Kafka. \textsf{AdaFed}\ is oblivious to any noise added by parties
for differential privacy. Since functions in \textsf{AdaFed}\ can execute most Python code,
aggregation of homomorphically encrypted model updates (using appropriate libraries) is also feasible.
\section{Introduction}
Federated Learning (FL)~\cite{kairouz2019advances, fieldguide} is a mechanism in which
multiple parties collaborate to build and train a joint machine learning model typically
under the coordination/supervision of a central server or service provider (definition
by Kairouz et al.~\cite{kairouz2019advances, fieldguide}). This central server is also
called an \emph{aggregator}. FL is private by design, because parties retain their
data within their private devices/servers; never sharing said data with either
the aggregator or other parties. An FL job involves parties performing local training on their data,
sharing the weights/gradients of their model (also called a \emph{model update}) with the aggregator,
which aggregates the model updates of all parties using a fusion algorithm.
The use of \emph{centralized aggregation} is common in FL because of the ease in which
various machine learning models (neural networks, decision trees, etc.) and
optimization algorithms can be supported.
FL is \emph{typically} deployed in two scenarios: \emph{cross-device} and \emph{cross-silo}.
In the cross-silo scenario, the number of parties is small, but each party has extensive
compute capabilities (with stable access to electric power and/or equipped with hardware accelerators)
and large amounts of data. The parties have reliable participation throughout the entire federated
learning training life-cycle, but are more susceptible to sensitive data leakage. Examples include
multiple hospitals collaborating to train a tumor/COVID detection model on radiographs~\cite{nvidia-covid}, multiple banks
collaborating to train a credit card fraud detection model, etc.
The cross-device scenario involves a large number of parties ($>100$), but each party has a small
number of data items, constrained compute capability, and limited energy reserve (e.g., mobile phones or IoT devices).
They are highly unreliable/asynchronous and are expected to drop and join frequently. Examples include a large
organization learning from data stored on employees' devices and a device manufacturer training a model
from private data located on millions of its devices (e.g., Google Gboard~\cite{bonawitz2019towards}).
Increasing adoption of FL has, in turn, increased the need for
FL-as-a-service offerings by public cloud providers, which serve as a nexus
for parties in an FL job and aggregate/fuse model updates.
Such FL aggregation services have to effectively support multiple concurrent
FL jobs, with each job having tens to thousands of heterogeneous participants (mobile phones,
tablets, sensors, servers) from different organizations and administrative domains.
Our experience, in building and operating the IBM Federated Learning (IBM FL)~\cite{ibmflpublic, ibmfl}
service on our public and private clouds has led us to believe that existing FL aggregation
methods have performance, scalability and resource efficiency challenges, primarily
due to the use of centralized aggregation.
\begin{comment}
Such cloud services have to effectively support multiple concurrent FL jobs and
multi-tenancy -- parties
may be from different organizations with varying security and privacy policies,
and each organization may have several participating entities (employee devices, data centers
from different geographies, etc.).
Over the past three years, our team has built, launched and operated the
. Our experience has led us to
believe that effective aggregation of model updates is a key problem in FL,
when viewed from either a performance, scalability, resource efficiency/cost, or
privacy perspective.
\end{comment}
\noindent{\bf Performance:} Aggregators should not become a bottleneck or a single point
of failure in FL jobs. They should be able to store incoming model updates without loss,
and have low latency -- the time between the arrival of the last expected model update
and the completion of aggregation. In the case of a cloud hosted FL aggregation service,
said guarantees must hold across all running FL jobs. Most existing FL platforms
(IBM FL~\cite{ibmfl}, Webank FATE~\cite{fate}, NVIDIA NVFLARE~\cite{nvflare}) are based on a client-server model with
a single aggregator per FL job deployed (as a virtual machine or container)
in datacenters waiting for model updates. Such platforms are able to easily support multiple concurrent
FL jobs, but performance drops as the number of parties increases, especially in cross-device settings.
This is because aggregation
throughput is limited by the computational capacity of the largest VM or container
(memory and compute, and to a lesser extent, network bandwidth).
\noindent{\bf Scalability:} We consider scalability in terms of the number of parties, size of model
updates, frequency of updates and (for an FL service) the number of concurrent FL jobs. FL platforms
using a single aggregator per job only support vertical scalability; a non-trivial design
using data parallelism and connecting multiple aggregators
is necessary for horizontal scalability, especially in cross-device settings. FL jobs involve several rounds,
and take an extended period of time, especially with intermittently available parties. Party joins and
dropouts are common; so aggregation infrastructure must scale horizontally to support this.
\noindent{\bf Resource Efficiency/Cost:} From operating IBM FL, and from publicly available FL benchmarks
like LEAF~\cite{leaf-benchmark} and Tensorflow Federated~\cite{tff-benchmark}, we have observed that
training at the party
takes much longer compared to model update fusion/aggregation, resulting in under-utilization and
wastage of computing resources dedicated to aggregation. This is a significant problem even
in cross-silo settings -- active participation is not guaranteed even in cross-silo settings
due to competition from other higher priority workloads and variations in data availability.
It is further compounded in ``cross-device'' deployments, where parties are highly \emph{intermittent}
and do not have dedicated resources for training.
In these scenarios, the
aggregator expects to hear from the parties \emph{eventually} (typically over several hours, or maybe once
a day). Large-scale FL jobs almost always involve intermittent parties -- as the number
of parties increases, it is extremely hard to expect that all of them participate at the same pace.
This results in aggregators having to wait for long periods of time for parties to finish local
training and send model updates.
\noindent {\bf Contributions:} The core technical contribution of this paper is the design, implementation and evaluation of a
flexible parameter aggregation mechanism for FL -- \textsf{AdaFed}, which has the following novel features:
\begin{itemize}
\item \textsf{AdaFed}\ reduces state in aggregators and treats aggregators as serverless functions. In many existing FL jobs,
every aggregator instance typically
acts on a sequence of inputs and produces a single output. State, if present, is not local to the aggregator
instance and may be shared by all aggregators. Such state is best left in an external store, and consequently
aggregators can be completely stateless and hence, serverless. \textsf{AdaFed}\ is therefore scalable both with respect to
participants -- effective for cross-silo and cross-device deployments,
and with respect to geography -- single/hybrid cloud or multicloud.
\item \textsf{AdaFed}\ leverages serverless technologies to deploy and tear down aggregator instances dynamically
in response to participant model updates, thereby supporting both intermittent and active participants
effectively. There is no reason to keep aggregators deployed all the time and simply ``awaiting input''.
\item \textsf{AdaFed}\ is efficient, both in terms of resource utilization with support for automatic elastic scaling, and in terms of aggregation latency.
\item \textsf{AdaFed}\ is reasonably expressive for programmers to easily implement scalable aggregation algorithms.
\textsf{AdaFed}\ is implemented using the popular Ray~\cite{ray} distributed computing platform, and can run arbitrary Python code
in aggregation functions, and use GPU accelerators if necessary.
\item \textsf{AdaFed}\ increases FL job reliability and fault tolerance by reducing state in aggregators, eliminating persistent
network connections between aggregators, and dynamically load balancing participants.
\item \textsf{AdaFed}\ supports widely used FL privacy preserving and security mechanisms.
\end{itemize}
\section{Conclusions and Future Work}
In this paper, we have presented \textsf{AdaFed}, a system for adaptive serverless aggregation in federated learning.
We have described the predominant way of parallelizing aggregation using a tree topology and
examined its shortcomings. We have demonstrated how serverless/cloud functions can be used to
effectively parallelize and scale aggregation while eliminating resource wastage and
significantly reducing costs. Our experiments using three different model architectures and datasets,
and two FL aggregation algorithms demonstrate that the overhead of using serverless functions for aggregation
is minimal, but resource and cost savings are substantial. We also demonstrate that serverless
aggregation can effectively adapt to handle changes in the number of participants in the FL job.
We are currently working to extend this work in two directions: (i) increasing the dependability
and integrity of aggregation using trusted execution environments (TEEs) and (ii) effectively supporting
multi-cloud environments by using service mesh (like Istio) to find the best aggregator function to
route a model update.
\section{Related Work}
Parallelizing FL aggregation using a hierarchical topology has been
explored by~\cite{bonawitz2019towards}, though the design pattern was introduced by
early work on datacenter parallel computing~\cite{grama}. While~\cite{bonawitz2019towards}
uses hierarchical aggregation, its programming model is different from \textsf{AdaFed}. Its primary goal is
scalability and consequently, it deploys long-lived actors instead of serverless functions.
\textsf{AdaFed}\ aims to make FL aggregation resource efficient and elastic in addition to being scalable,
and uses off-the-shelf open source software like Ray, Kafka and Kubernetes.
Another closely related concurrent work is FedLess~\cite{fedless}, which predominantly uses serverless functions
for the training side (party side) of FL. FedLess is able to use popular serverless technologies
like AWS Lambda, Azure Functions and OpenWhisk to enable clients/parties on cloud platforms to perform
local training, and reports interesting results on using FaaS/serverless instead of IaaS (dedicated VMs and containers)
to implement the party side of FL. It also has the ability to run a single aggregator as a cloud function, but does not
have the ability to parallelize aggregation, and does not seem to scale beyond 200 parties (with 25 parties
updating per FL round, per \cite{fedless}). Our work in \textsf{AdaFed}\ has the primary goal of parallelizing and
scaling FL aggregation. FedLess~\cite{fedless} also does not adapt aggregation based on party behavior,
and it is unclear whether parties on the edge (phones/tablets) can train using FedLess.
A number of ML frameworks -- Siren~\cite{siren}, Cirrus~\cite{cirrus} and
LambdaML~\cite{jiang-serverless-ml} -- use serverless functions
for centralized (not federated) ML and DL training.
Siren~\cite{siren} allows users to train models (ML, DL and RL) in the cloud
using serverless functions, with the goal of reducing the programmer burden involved
in using traditional ML frameworks and cluster management technologies for
large scale ML jobs. It also contains optimization algorithms to tune training
performance and reduce training cost using serverless functions.
Cirrus~\cite{cirrus} goes further, supporting end-to-end centralized ML training workflows
and hyperparameter tuning using serverless functions.
LambdaML~\cite{jiang-serverless-ml} analyzes
the cost-performance trade-offs between IaaS and serverless
for datacenter/cloud hosted centralized ML training.
LambdaML supports various ML and DL optimization algorithms, and
can execute purely using serverless functions or optimize cost using
a hybrid serverless/IaaS strategy. \textsf{AdaFed}\ differs from Siren, Cirrus and LambdaML in
significant ways -- Distributed ML (in Siren, Cirrus and LambdaML) is different from FL. Distributed ML involves
centralizing data at a data center or cloud service and performing training at a central location.
In contrast, with FL, data never leaves a participant. FL's privacy guarantees are much stronger
and trust requirements much lower than that of distributed ML.
The term ``serverless'' has also been used to refer to peer-to-peer (P2P) federated learning, as
in~\cite{rw1, rw4, flgossip}. In such systems, aggregation happens over a WAN overlay and not in a datacenter.
The first step involves establishing the overlay network, using existing technologies like
publish/subscribe overlays, peer discovery, etc.~\cite{pubsub, streamoverlays}. The next step involves establishing a spanning
tree over the P2P overlay, routing updates along the spanning tree and aggregating at each node on the tree.
Gossip-based learning~\cite{flgossip} does not construct overlays but uses gossip-based broadcast algorithms
to deliver and aggregate model updates in a decentralized manner. While these techniques are scalable and
(in the case of gossip algorithms) fault tolerant, they do require either (i) that the model be revealed
to more entities during routing, or (ii) homomorphic encryption~\cite{jayaram-cloud2020}, which can be challenging from both key agreement
and model size explosion standpoints, or (iii) differential privacy~\cite{abadi-diffpriv}, which reduces model accuracy in the
absence of careful hyperparameter tuning.
\section{Background: FL Aggregation}\label{sec:background}
\begin{algorithm}
\small
\caption{Generalized FedAVG~\cite{fieldguide}}~\label{alg:fedavg}
\mbox{{\bf Aggregator Side}} \\
\mbox{ } \\
$\mbox{Initial model } m^{(1)}$ \\
\For(){$r ~\in~ \{1,2,\ldots,R\}$}{
Sample a subset $\mathcal{S}^{(r)}$ of participants \\
\textsc{send} $m^{(r)}$ \mbox{to each} $i \in \mathcal{S}^{(r)}$ \\
\textsc{recv} \mbox{model update} $\triangle_{i}^{(r)}$ \mbox{from each} $i \in \mathcal{S}^{(r)}$ \\
\mbox{Aggregate } $\triangle^{(r)} \gets \frac{1}{N} {\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}}$ \\
$m^{(r+1)} \gets \textsc{optimizer} (m^{(r)}, -\triangle^{(r)}, \eta^{(r)})$
}
\mbox{ } \\
\mbox{{\bf Participant Side}} \\
\textsc{recv} ($m^{(r)}$) from aggregator \\
Local model $x^{(r,1)} \gets m^{(r)}$\\
\For(){$k \in \{1,2,\ldots,\tau \} $ } {
Compute local stochastic gradient $g_i(x^{(r,k)})$\\
$x^{(r,k+1)} \gets \textsc{optimizer}(x^{(r,k)}, -g_i(x^{(r,k)}), \eta^{(r)})$\\
}
Compute local model update $\triangle^{(r,l)} \gets x^{(r,\tau)} - x^{(r,1)}$\\
\textsc{send} $\triangle^{(r,l)}$ to aggregator \\
\mbox{ } \\
\end{algorithm}
An aggregator typically coordinates the entire FL job.
The parties, aided by the aggregator, agree on the model architecture (ResNet, EfficientNet, etc),
optimizer to use (SGD, Adam, AdaGrad, etc.) and hyperparameters to be
used for the FL job (batch size, learning rate, aggregation frequency etc.). The aggregator is responsible for
durably storing the global model and keeping track of the FL job.
We illustrate FL using the most common algorithm used for neural networks and
gradient descent based machine learning models -- FedAvg~\cite{fieldguide}.
For FedAvg (Algorithm~\ref{alg:fedavg}), the aggregator selects a random subset $\mathcal{S}^{(r)} \subset \mathcal{S}$ of parties
for every round $r$. The aggregator initializes the global model $m^{(1)}$ using the same process as if the
job were centralized (i.e., either randomly or from existing pre-trained models). At each round, the
aggregator transmits the global model $m^{(r)}$ to $\mathcal{S}^{(r)}$. Once a party receives $m^{(r)}$, it uses
$m^{(r)}$ to make $\tau$ training passes on its local dataset. $\tau$ is the aggregation frequency.
It then computes the local gradient update
after $\tau$ passes, $\triangle^{(r,l)}$, and transmits the same to the aggregator. The aggregator in FedAvg
then computes the weighted average of all gradient updates --
$\frac{1}{N} \sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$ to
compute the global gradient update $\triangle^{(r)}$ and update the global model (for the next round)
$m^{(r+1)}$. This process proceeds for a set number $R$ of rounds or until the aggregator
has determined that the model has converged. The term $n_i$ in the weighted average is the number of training
samples at party $i$ and $N$ is the total number of training samples involved in the round, i.e., $N = \sum_{i \in \mathcal{S}^{(r)}} n_i$.
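To make the fusion step concrete, the following minimal Python sketch implements the aggregator-side
weighted average of Algorithm~\ref{alg:fedavg} (the function name and the representation of model updates
as flat \texttt{numpy} arrays are illustrative assumptions, not \textsf{AdaFed}'s actual interfaces):
\begin{verbatim}
import numpy as np

def fedavg_aggregate(updates, counts):
    # updates: list of local model updates (one flat numpy array per party)
    # counts:  list of n_i, the number of training samples at party i
    N = sum(counts)
    weighted = sum(n * u for n, u in zip(counts, updates))
    return weighted / N  # global update, handed to the optimizer

# toy usage: three parties with unequal amounts of data
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
counts = [10, 30, 60]
delta = fedavg_aggregate(updates, counts)
\end{verbatim}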
\noindent{\bf Associativity of Aggregation:}
Since the number of participants typically varies between FL jobs,
and within a job (over time) as participants join and leave, horizontal scalability of FL
aggregation software is vital. \emph{Horizontally scalable} aggregation is only feasible
if the aggregation operation is associative -- assuming $\oplus$ denotes the aggregation of model updates (e.g., gradients)
$U_i$, $\oplus$ is associative if $U_1 \oplus U_2 \oplus U_3 \oplus U_4 \equiv (U_1 \oplus U_2) \oplus (U_3 \oplus U_4)$.
Associativity is the property that enables us to exploit data parallelism to
partition participants among aggregator instances, with
each instance responsible for handling updates from a subset of participants.
The outputs of these instances must be further aggregated.
In the case of FedAvg, $\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$ is associative because addition is associative,
and it is the most computationally intensive step because each $\triangle_{i}^{(r)}$ contains millions of floating point numbers.
A common design pattern in parallel computing~\cite{grama} is to use tree-based or hierarchical aggregation
in such scenarios, with a tree topology connecting the aggregator instances. The output of each aggregator
goes to its parent for further aggregation.
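The sketch below illustrates, in Python, the associativity that underpins this pattern: representing a
partial aggregate as a (weighted sum, sample count) pair -- an assumption consistent with the FedAvg form
above -- combining partial aggregates in any grouping yields the same result as aggregating all updates at once:
\begin{verbatim}
import numpy as np

def partial(updates, counts):
    # partially aggregate one group: (weighted sum S, sample count N)
    return sum(n * u for n, u in zip(counts, updates)), sum(counts)

def combine(p, q):
    # the associative operator over partial aggregates
    return p[0] + q[0], p[1] + q[1]

updates = [np.array([float(i)]) for i in range(1, 5)]  # toy U1..U4
counts = [10, 20, 30, 40]

direct = partial(updates, counts)                  # U1 + U2 + U3 + U4
split = combine(partial(updates[:2], counts[:2]),  # (U1+U2) + (U3+U4)
                partial(updates[2:], counts[2:]))
assert np.allclose(direct[0], split[0]) and direct[1] == split[1]
\end{verbatim}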
\section{\textsf{AdaFed}\ : Design and Implementation}
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\columnwidth]{figures/Hierarchical.pdf}
\caption{Hierarchical/Tree-based Aggregation}
\label{fig:hierarchical}
\end{figure}
\textsf{AdaFed}, as its name suggests, adapts to the mechanics of a specific FL job.
When a job's aggregation function is associative, as it is in most FL jobs,
\textsf{AdaFed}\ leverages data parallelism
to spawn several aggregation ``entities/instances'' per FL job and arranges them
in a tree based (hierarchical) overlay. Tree-based overlays are a common distributed computing
pattern in publish-subscribe~\cite{pubsub} and stream processing~\cite{streamoverlays}.
This enables aggregation to scale to support thousands of parties.
However, using ``statically deployed'' (always on) overlays, while advantageous in
high throughput stream processing, is not suitable for FL.
Consequently, \textsf{AdaFed}\ has a programming model whose
goal is to reduce state in aggregators and to decouple aggregator instances.
This enables said instances to execute as serverless functions, which are spawned only
when model updates arrive, and are torn down when parties are busy training (no updates available
to aggregate). An aggregation function instance can be triggered once a specific number of model updates
are available; or multiple instances can be triggered once the expected number of model updates for the
current FL round are available. Once a model aggregation round is complete and the fused model
is sent back to the parties, all aggregator functions exit until the next round,
thereby releasing resources.
\subsection{Associativity $\rightarrow$ Tree-based Aggregation}
Associativity enables us to partition parties among aggregator instances, with
each instance responsible for handling updates from a subset of parties.
The outputs of these instances must be further aggregated.
A tree topology connects the aggregator instances. The output of each aggregator
goes to its parent for further aggregation. We have determined that it is possible to split any
associative FL aggregation operation into leaf and intermediate aggregators as illustrated by
Figure~\ref{fig:hierarchical}. A leaf aggregator implements logic to fuse raw model weight updates $U_i$ from a group of
$k$ parties to generate a partially aggregated model update. For example, in the case of
FedAvg~\cite{mcmahan2017communication, mcmahan2017learning}, leaf aggregator $j$ would take
its $k_j$ gradient update vectors and return their weighted sum $S_j = \sum_{i=1}^{k_j} n_i \triangle_{i}^{(r)}$,
along with the number of data items processed so far, $\sum_{i=1}^{k_j} n_i$.
An intermediate aggregator implements logic to further aggregate partially
aggregated model updates, in stages, to produce the final aggregated model
update ($U_F$). In the case of FedAvg, this function would aggregate (add up) multiple
partial sums $S_j$. Once all expected model updates have arrived from the parties in $\mathcal{S}^{(r)}$, the
intermediate aggregators will thus have calculated $\sum_{i \in \mathcal{S}^{(r)}} n_i \triangle_{i}^{(r)}$
and $N=\sum_{i \in \mathcal{S}^{(r)}} n_i$, from which the aggregated gradient update $\triangle^{(r)}$
is calculated per Algorithm~\ref{alg:fedavg} at the root/master aggregator (Figure~\ref{fig:hierarchical}).
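A minimal Python sketch of this leaf/intermediate split for FedAvg follows (illustrative only; \textsf{AdaFed}'s
actual aggregation functions additionally read their inputs from, and write their outputs to, the message
queues described below):
\begin{verbatim}
def leaf_aggregate(raw_updates, counts):
    # fuse raw party updates into a partial aggregate (S_j, sample count)
    S = sum(n * u for n, u in zip(counts, raw_updates))
    return S, sum(counts)

def intermediate_aggregate(partials):
    # further fuse partial aggregates; at the root, S/N yields the
    # aggregated gradient update of the generalized FedAvg algorithm
    S = sum(p[0] for p in partials)
    N = sum(p[1] for p in partials)
    return S, N
\end{verbatim}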
Establishing a tree-based aggregation topology as in Figure~\ref{fig:hierarchical}
starts by identifying the
number of parties that can be comfortably handled by an aggregator instance.
This is dependent on (i) size/hardware capability (CPU/RAM/GPU) of the instance (server or VM or container) and
its network bandwidth, and (ii) the size of the
model, which directly determines the size of the model update and the memory/compute
capabilities needed for aggregation. Assuming that each instance can handle $k$ participants,
a complete and balanced $k$-ary tree can be used. $\ceil{\frac{n}{k}}$ leaf aggregators are
needed to handle $n$ participants; the tree will have $O(\ceil{\frac{n}{k}})$ nodes.
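For example, with $n = 10000$ parties and $k = 100$ parties per aggregator instance,
$\lceil 10000/100 \rceil = 100$ leaf aggregators are required; their $100$ outputs can in turn be fused by a
single intermediate (root) aggregator, giving a tree of depth two with $101$ aggregator instances in total.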
While a tree-based FL aggregation overlay is conceptually simple, it does involve significant
implementation and deployment effort for fault tolerant aggregation.
Typically, aggregator nodes are instantiated
using virtual machines (VMs) or containers (e.g., Docker) and managed using
a cluster management system like Kubernetes. These instances are then arranged in the
form of a tree, i.e., each instance is provided with the IP address/URL of its parent,
expected number of child aggregators, credentials to authenticate itself to said parent and
send aggregated model updates. Failure detection and recovery
is typically done using heartbeats and timeouts, between each instance, its parents and children.
Once faults happen, the aggregation service provider should typically take responsibility for
recovering the instance, and communicating information about the recovered instance to
its children for further communications. Things become complicated when an instance fails
at the same time as one of its parent or child instances. Network partitions, a common issue in distributed software systems, complicate
this scenario further. In summary, to implement hierarchical
aggregation the traditional way~\cite{grama}, any aggregation service
has to maintain dedicated microservices to deploy, monitor and heal these aggregation overlays.
\subsection{``Idle Waiting'' in Static Tree Aggregation}
Even if some technologies like Kubernetes pods and service abstractions are able to simplify
a few of these steps, a more serious problem with tree-based aggregation overlays is that
aggregator instances are ``always on'' waiting for updates, and this is extremely wasteful in terms
of resource utilization and monetary cost. To handle FL jobs across thousands of parties,
aggregation services including \textsf{AdaFed}\
must support intermittent parties effectively. Given that, for every round, parties
may send model updates over an extended time period (hours), aggregators spend the bulk
of their time waiting. Idle waiting wastes resources and increases aggregation cost.
A tree-based aggregation overlay compounds resource wastage and cost.
Re-configuring tree-based aggregation overlays is also difficult. This is needed, for example,
when midway through a job, a hundred (or a thousand) participants decide to join.
Supporting them would require reconfiguration at multiple levels of the aggregation overlay.
Reconfigurations are also necessary to scale down the overlay when participants leave.
Thus, elasticity of aggregation is hard to achieve in the static tree setting.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\columnwidth]{figures/lambdafl-arch.pdf}
\caption{\textsf{AdaFed}\ System Architecture. Aggregators are executed as serverless functions.}
\label{fig:lambdafl-arch}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/QAgg.pdf}
\caption{\textsf{AdaFed}\ -- Illustration of stepwise serverless aggregation}
\label{fig:qagg}
\end{figure}
\subsection{Using Serverless Functions}
\textsf{AdaFed}\ takes associativity one step further: it mitigates the issues with aggregation overlays by avoiding the construction of an \emph{actual/physical}
tree topology altogether. Instead, \textsf{AdaFed}\ uses serverless functions chained together with message queues
to realize a \emph{logical} tree topology. \textsf{AdaFed}\ executes both leaf and intermediate aggregation
operations as serverless/cloud functions. These functions are executed in containers on a
cluster managed by Kubernetes, which multiplexes
multiple workloads and enables the cluster to be shared by multiple FL jobs and/or other workloads.
Also, since there is no static topology, more (or less) aggregator functions can be spawned depending
on the number of parties (model updates), thereby handling party joins/leaves effectively.
The challenge in executing aggregation as serverless functions, which are ephemeral and have no
stable storage, is to manage state -- that of each aggregation entity, intermediate aggregation
outputs, inter-aggregator communications and party-aggregator communications.
We also note that splitting aggregation into leaf and intermediate
functions makes the logic simpler. It is also possible to have a single serverless function that can operate
on both raw updates and partially fused updates; doing that will increase the complexity of the function.
\subsection{Party-Aggregator Communication}
This is done using a distributed message queue (Kafka). Kafka is a topic-based
message queue offering standard publish/subscribe semantics. That is, each queue has a ``name'' (i.e., pertains to a ``topic''), and
multiple distributed entities can write to (publish) and read from (subscribe to) it. Kafka enables us
to set a replication level per queue, which ensures durability of messages between the aggregator instances
and parties. For each FL job (with an identifier \textsf{JobID}), two queues are created at deployment time
-- \textsf{JobID-Agg} and \textsf{JobID-Parties}.
Only aggregator instances (serverless functions) can publish to \textsf{JobID-Agg} and all parties subscribe to it. Any party can publish to
\textsf{JobID-Parties} but only the aggregator instances can both publish to and read from it. This ensures that model updates
sent to \textsf{JobID-Parties} are private and do not leak to other parties. When the job starts, the aggregator publishes the
initial model on \textsf{JobID-Agg}; parties can then download the model and start training. At the end of each
job round, parties publish their model updates to \textsf{JobID-Parties}. \emph{Inter-aggregator communication}
is also handled using Kafka: partially fused model updates are published
by aggregation functions into Kafka, and can trigger further function invocations.
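The sketch below illustrates this communication pattern using the \texttt{kafka-python} client (the topic
names follow the convention above, while the job identifier \texttt{job42}, the broker address and the
pickle-based serialization are illustrative assumptions):
\begin{verbatim}
import pickle
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="kafka:9092")

def party_send_update(job_id, update):
    # parties publish local model updates to JobID-Parties
    producer.send(f"{job_id}-Parties", pickle.dumps(update))

def publish_global_model(job_id, model):
    # only aggregator functions publish to JobID-Agg
    producer.send(f"{job_id}-Agg", pickle.dumps(model))

# an aggregator function consumes raw or partially fused updates
consumer = KafkaConsumer("job42-Parties",
                         bootstrap_servers="kafka:9092",
                         group_id="job42-aggregators")
for msg in consumer:
    update = pickle.loads(msg.value)
\end{verbatim}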
\subsection{Aggregation Trigger}
For serverless functions to execute, they must be triggered by some event.
\textsf{AdaFed}\ provides several flexible and configurable triggers. The simplest ones trigger an aggregation function for
every $k$ updates published to \textsf{JobID-Parties}, or every $t$ seconds. For FL jobs that use a parameter server strategy
for model updates,
it is possible in \textsf{AdaFed}\ to implement the update logic as a serverless function and trigger it every time
an update is published by a party. Other custom triggers involve the periodic execution of any
valid Python code (also as a serverless function) which triggers aggregation.
Custom triggers are vital to handling FL jobs involving intermittent parties. As an illustration, consider
an FL job where each round is successful if 50\% of parties send model updates within 10 minutes. The aggregation trigger here could be a serverless function, invoked every minute, to
count the number of parties that have responded and perform partial aggregation through leaf
aggregators; aggregation is complete when at least 50\% of the parties have responded. Another
FL job may require that aggregation waits for at least 10 minutes and considers the round successful
if at least 50\% of parties have responded. In this case, the job would contain a configuration
parameter that triggers aggregation after 10 minutes.
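As an illustrative sketch, the quorum-based trigger from the first example could be expressed as follows
(the helper \texttt{updates\_available}, which counts pending messages on \textsf{JobID-Parties}, and the
\texttt{job} handle are hypothetical):
\begin{verbatim}
def quorum_trigger(job):
    # executed periodically (e.g., every minute) as a serverless function;
    # returns True when aggregation should be triggered for this round
    received = updates_available(job.job_id)  # pending on JobID-Parties
    return received >= 0.5 * job.num_parties
\end{verbatim}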
\subsection{End-to-End Illustration}
As illustrated in Figure~\ref{fig:qagg}, a set of parties decide to start an FL job through existing private communication channels.
``Matchmaking'' or inducing parties to join an FL job is out of scope of this paper and \textsf{AdaFed}.
We assume that this set of parties is convinced of the benefits of FL and wants to collaborate.
While forming a group, they also decide things like model architecture, model interchange format
and hyperparameters (initial model weights,
batch size and learning rate schedule, number of rounds, target accuracy and
model update frequency). \textsf{AdaFed}\ then assigns a \textsf{JobID} to this job, creates metadata pertaining
to the job (including party identities and hyperparameters), updates its internal data structures,
and instantiates two Kafka queues -- \textsf{JobID-Agg} and \textsf{JobID-Parties}.
A serverless function is triggered to publish the initial model architecture and weights
on \textsf{JobID-Agg}. The FL job also specifies the triggering function.
Then the first round of training starts at the parties' local
infrastructure using the model downloaded/received from \textsf{JobID-Agg}.
Once local training is complete, parties send model updates to \textsf{JobID-Parties}.
The trigger (serverless) function executes, and if it determines that an aggregation has
to be initiated, triggers a leaf or intermediate aggregator. They pull inputs from \textsf{JobID-Parties}
and publish their outputs to the same. This process continues as model updates arrive. When an aggregator
function determines that all parties have sent their updates, the round is finished and
the updated model published to \textsf{JobID-Agg}. Then the next round starts.
Job termination criteria may differ depending on the type of the FL job, as discussed earlier. A time-based
or a quorum-based completion criterion may also be used.
\subsection{Durability} Traditional FL platforms achieve fault tolerance through aggregation checkpointing, whose frequency determines how often the aggregator writes its state to external stable storage.
\textsf{AdaFed}\ does not use
checkpointing. If the execution of a serverless aggregation function fails, it is simply restarted.
All aggregator state (updates from parties, partially fused models, etc) is durably stored in message queues.
This aspect of \textsf{AdaFed}\ is vital to understanding \textsf{AdaFed}'s resource usage; we observe that the resource overhead of
using message queues is equal to that of checkpointing using cloud object stores in single/hierarchical aggregator schemes.
\subsection{Implementation and Elastic Scaling}
We implement \textsf{AdaFed}\ using the popular Ray~\cite{ray} distributed computing platform.
Ray provides several abstractions, including powerful serverless functions (Ray remote
functions). We explored a couple of alternative implementations, including KNative~\cite{knative} and Apache Flink~\cite{flink},
and settled on Ray because it provides arbitrarily long serverless functions, is well integrated
with common Python libraries (numpy, scikit-learn, Tensorflow and PyTorch) and provides the freedom
to use accelerators if necessary. Ray's internal message queue could have been used
in lieu of Kafka, but we found Kafka to be more robust. Aggregation triggers are implemented
using Ray, and support typical conditions on \textsf{JobID-Parties} (receipt of a certain number of messages, etc.),
but are flexible enough to execute user functions that return booleans (whether aggregation should be triggered or
not).
Our implementation using Ray executes on the Kubernetes cluster manager. Ray's elastic scaler
can request additional Kubernetes pods to execute serverless functions, depending on how
frequently aggregation is triggered. It is also aggressive about releasing unused pods
when there are no model updates pending. When aggregation is triggered, groups of
model updates are assigned to serverless function invocations. Each invocation is assigned
2 vCPUs and 4GB RAM (this is configurable). If there are insufficient pods to support
all these invocations, Ray autoscales to request more Kubernetes pods. This also enables
\textsf{AdaFed}\ to handle large scale party dropouts and joins effectively. Only the exact amount of
compute required for aggregation is deployed -- overheads to spawn tasks on Kubernetes pods
and create new pods are minimal, as demonstrated in our empirical evaluation.
It is also vital to ensure that model updates are not consumed twice by aggregation functions.
When aggregation is triggered for a model update in a Kafka queue, the update is marked using a flag.
The flag is released only after the output of the function is written to Kafka.
If the aggregation function crashes, Ray restarts it, thereby guaranteeing ``exactly once''
processing and aggregation semantics.
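A condensed sketch of this mapping onto Ray remote functions is shown below (the resource request mirrors
the 2 vCPU configuration above; \texttt{update\_groups}, standing in for batches of model updates pulled
from Kafka, is hypothetical):
\begin{verbatim}
import ray

ray.init(address="auto")  # connect to the Ray cluster on Kubernetes

@ray.remote(num_cpus=2)
def leaf_aggregate(raw_updates, counts):
    S = sum(n * u for n, u in zip(counts, raw_updates))
    return S, sum(counts)

# each group of model updates becomes one function invocation;
# Ray's elastic scaler requests more Kubernetes pods if needed
partials = ray.get([leaf_aggregate.remote(g["updates"], g["counts"])
                    for g in update_groups])
\end{verbatim}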
\subsection{Expressivity and Security}
The programming model of \textsf{AdaFed}\ and its implementation using Ray enables us to
support a wide variety of FL aggregation algorithms. Associativity is a pre-requisite for
aggregation scalability; and any associative algorithm can be programmed using
\textsf{AdaFed}. Most FL aggregation algorithms, including FedAvg/FedSGD~\cite{bonawitz2019towards}, FedProx~\cite{fedprox},
FedMA~\cite{fedma}, Mime~\cite{mime}, Scaffold~\cite{scaffold}, FedPA~\cite{fedpa}, FedPD~\cite{fedpd}
and FedDist~\cite{feddist} are associative. In the rare case that the aggregation
algorithm is not associative, \textsf{AdaFed}\ still uses serverless functions to spawn the single aggregator
instance and does so with a Docker container of the maximum size (configurable) supported by the underlying
Kubernetes cluster. The size and number of aggregator instances, as well as the number of parties handled
by any single instance are configurable, enabling \textsf{AdaFed}\ to support FL jobs with varying participation.
Furthermore, none of the design choices of \textsf{AdaFed}\ has any impact on FL privacy mechanisms used.
Transport layer encryption (TLS) used to transmit model updates in existing FL platforms
can be used to send updates to Kafka in \textsf{AdaFed}. Updates are decrypted by the aggregation function reading them
from Kafka. \textsf{AdaFed}\ is oblivious to any noise added by parties
for differential privacy. And the fact that functions in \textsf{AdaFed}\ can execute most Python code means that
aggregation of homomorphically encrypted model updates (using appropriate libraries) is also feasible.
\section{Evaluation}\label{sec:eval}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\textwidth]{graphs/agglatency_effnet_cifar100.pdf}
\includegraphics[width=0.32\textwidth]{graphs/agglatency_vgg16_rvlcdip.pdf}
\includegraphics[width=0.32\textwidth]{graphs/agglatency_incep4_inaturalist.pdf}
\caption{Aggregation Latency (s) -- time taken for aggregation to finish after the last model update is available}
\label{fig:agglatency-static}
\end{figure*}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 4.58 & 1.57 & 2.92$\times$ & \\
1000 & 12.46 & 4.34 & 2.87$\times$ & \\
10000 & 15.59 & 4.82 & 3.23$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm.}~\label{tbl:joins-cifar}
\end{figure}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 10.59 & 4.29 & 2.47$\times$ & \\
1000 & 17.6 & 6.45 & 2.73$\times$ & \\
10000 & 26.82 & 7.4 & 3.62$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). VGG16 on RVL-CDIP using FedSGD aggregation algorithm.}~\label{tbl:joins-rvlcdip}
\end{figure}
\begin{figure}[ht]
\small
\begin{tabular}{@{}rrrrl@{}}
\toprule
\multicolumn{1}{l}{\# parties} & \multicolumn{1}{l}{Static~Tree (s)} & \multicolumn{1}{l}{Serverless (s)} & \multicolumn{1}{l}{$\frac{Static ~Tree}{Serverless}$} & \\ \midrule
100 & 20.64 & 7.5 & 2.75$\times$ & \\
1000 & 36.64 & 10.66 & 3.44$\times$ & \\
7000 & 59.78 & 13.45 & 4.44$\times$ & \\ \bottomrule
\end{tabular}
\caption{Effect of 20\% party joins on aggregation latency (seconds). InceptionV4 on iNaturalist using FedProx aggregation algorithm.}~\label{tbl:joins-inaturalist}
\end{figure}
In this section, we evaluate the efficacy of \textsf{AdaFed}, by first comparing \textsf{AdaFed}\
against the centralized aggregator setup common in several FL frameworks
like IBM FL~\cite{ibmfl}, FATE~\cite{fate} and NVFLARE~\cite{nvflare}. We demonstrate how such single
aggregator setups have difficulties when scaling beyond 100 participants. We then demonstrate how
a static hierarchical (tree) overlay of aggregator instances can help with the scalability issue,
but is ineffective from the perspectives of resource consumption, utilization, cost and elasticity.
\subsection{Metrics}
Given that aggregation depends on \emph{whether} the expected number of model updates are available, we
define \emph{aggregation latency} as the time elapsed between the reception of the last model update
and the availability of the aggregated/fused model. When compared to a static tree deployment of aggregator instances,
serverless functions are dynamically instantiated in response to model updates. Deployment
of serverless functions takes a small amount of time ($<$ 100 milliseconds) and elastic scaling of
a cluster in response to bursty model updates can also take 1-2 seconds. Consequently, the overhead
of aggregation in \textsf{AdaFed}\ will usually manifest in the form of increased \emph{aggregation latency}.
It is measured for each FL synchronization round,
and the reported numbers in the paper are averaged over all the rounds of the FL job. We want aggregation latency to be as
low as possible. Scalability, or the lack thereof, of any FL aggregation architecture, also manifests in the form
of increased aggregation latency when the number of parties rises.
We therefore evaluate
(i) \emph{efficiency} by examining whether serverless functions increase the latency
of an FL job, as perceived by a participant, (ii) \emph{scalability} by examining the
impact of the number of parties on latency, (iii) \emph{adaptivity/elasticity},
by examining the impact of parties joining midway on latency.
We evaluate
\emph{resource efficiency}, by measuring resource consumption (in terms
of the number and duration of containers used for aggregation), resource (CPU and memory)
utilization and projected total cost.
We execute both hierarchical aggregation and \textsf{AdaFed}\ using
containers on Kubernetes pods in our datacenter, and measure the number of \emph{container seconds}
used by an FL job from start to finish. Container seconds is calculated by multiplying the number of
containers used with the time that each container was used/alive. This includes all the resources used by the ancillary services,
including MongoDB (for metadata), Kafka and Cloud Object Store. Measuring \emph{container seconds} helps us use
publicly available pricing from cloud providers like Microsoft Azure to project the monetary cost
of aggregation, in both cases, and project cost savings. We also report average CPU and memory utilization,
averaged over the entire FL job.
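As a concrete example, a job that consumes 298900 container seconds projects to a cost of
$298900 \times 0.0002692 \approx 80.46$ US\$ at the Azure container pricing used in
Figure~\ref{tbl:cost-cifar-active}.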
\subsection{Experimental Setup}
Aggregation was executed on a Kubernetes cluster on CPUs, using Docker containers. For IBM FL,
the container used for the single aggregator was run on a dedicated server with 16 CPU cores (2.2 GHz, Intel Xeon 4210)
and 32 GB of RAM. Each container for hierarchical or serverless aggregation
was equipped with 2 vCPUs (2.2 GHz, Intel Xeon 4210) and 4 GB RAM. For hierarchical/tree aggregation, each
instance was encapsulated using the Kubernetes service abstraction. Parties were emulated, and distributed over
four datacenters (different from the aggregation datacenter) to emulate geographic distribution.
Each party was also executed inside Docker containers (2 vCPUs and 4 GB RAM) on Kubernetes, and these containers
had dedicated resources. We ran actual training at the parties to emulate realistic federated
learning, as opposed to using, e.g., the TensorFlow Federated simulator.
We select three real-world federated learning jobs --
two image classification tasks from the Tensorflow Federated (TFF)~\cite{tff-benchmark} benchmark
and one popular document classification task. From TFF~\cite{tff-benchmark}, we select (i) the CIFAR100 dataset, which can be
distributed over 10-10000 parties, with classification performed using the EfficientNet-B7 model and the FedProx~\cite{fedprox}
aggregation algorithm, and (ii) the iNaturalist dataset,
which can be distributed over 10-9237 parties, with classification performed using the InceptionV4
model and the FedProx~\cite{fedprox} aggregation algorithm. Thus,
we consider two types of images and two models of varying sizes. We do not consider other workloads
from TFF because they involve fewer than 1000 parties. For additional diversity, we consider a third workload
using the VGG16~\cite{vgg16-rvlcdip} model and the FedSGD~\cite{bonawitz2019towards} aggregation algorithm on the RVL-CDIP~\cite{rvlcdip} document classification dataset. Each job was executed for 50 synchronization rounds, with model fusion happening after every local epoch.
For all scenarios, the datasets were partitioned in a realistic non-IID manner.
\subsection{Aggregation Latency and Scalability}\label{sec:agglatency-scalability}
First, we consider a scenario where the number of parties
remains constant throughout the FL job, for all synchronization rounds, i.e., once the job starts, no
parties join or leave.
From Figure~\ref{fig:agglatency-static}, we observe that a centralized
single aggregator setting does not scale to a large number of parties, as average
aggregation latency increases significantly -- almost linearly.
This is because of both constrained compute/memory capacity at the single aggregator and
constrained network bandwidth needed to transfer/load model updates for aggregation.
Figure~\ref{fig:agglatency-static} also illustrates that the increase in aggregation
latency is much more gradual for both static tree overlays and \textsf{AdaFed}\ (which uses
serverless functions), enabling these architectures to scale to larger FL settings.
In fact, for both static tree and \textsf{AdaFed}, latency increases only by $\approx~4~\times$
when the number of parties increases 1000$\times$. This trend is due to the data parallelism
inherent in both the static tree and \textsf{AdaFed}.
From an efficiency standpoint, we observe that the aggregation latency is similar between static tree and
\textsf{AdaFed}, within 4\% of each other, with aggregation latency of \textsf{AdaFed}\ being slightly higher
than that of the static tree overlay. This is because using serverless functions
does not reduce the number of aggregation steps; it merely avoids having to
keep the aggregators provisioned and alive when they are not needed. We used runtime profiling to
determine that the slight (up to 4\%) increase in aggregation latency over the static tree is primarily
due to cold starts when functions are
started; the other minor factor is the latency due to the aggregation trigger. Thus, we
observe that the runtime overhead of using and triggering serverless functions is minimal.
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 1723 & 228 & 0.46 & 0.06 & 86.96\% & 12.31\% & 82.95\% & 46.54\% & 73.35\% \\
100 & 2653 & 351 & 0.71 & 0.09 & 87.32\% & 17.09\% & 83.08\% & 20.89\% & 72.89\% \\
1000 & 22340 & 2951 & 6.01 & 0.79 & 86.86\% & 10.99\% & 83.52\% & 17.23\% & 72.87\% \\
10000 & 298900 & 40849 & 80.46 & 11 & 86.33\% & 10.61\% & 84.27\% & 18.66\% & 75.39\% \\ \bottomrule
\end{tabular}
\caption{EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm. Active Participants. Resource usage and projected cost, using container cost/s of 0.0002692 US\$ (source Microsoft Azure~\cite{azurepricing})}~\label{tbl:cost-cifar-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 1953 & 162 & 0.53 & 0.04 & 91.73\% & 13.17\% & 91.98\% & 47.01\% & 84.36\% \\
100 & 3078 & 234 & 0.83 & 0.06 & 92.4\% & 10.75\% & 90.22\% & 20.27\% & 82.01\% \\
1000 & 25250 & 1992 & 6.8 & 0.54 & 92.11\% & 13.86\% & 92.92\% & 22.9\% & 85.82\% \\
10000 & 337830 & 30303 & 90.94 & 8.16 & 91.03\% & 12.36\% & 89.25\% & 22.96\% & 82.89\% \\ \bottomrule
\end{tabular}
\caption{VGG16 on RVL-CDIP using FedSGD aggregation algorithm. Active Participants. Resource usage and projected cost, using
container cost/s of 0.0002692 US\$ (source Microsoft Azure~\cite{azurepricing})}~\label{tbl:cost-rvlcdip-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 2365 & 389 & 0.64 & 0.1 & 83.55\% & 10.86\% & 91.86\% & 49.73\% & 82.25\% \\
100 & 3354 & 548 & 0.9 & 0.15 & 83.65\% & 14.17\% & 91.18\% & 21.71\% & 83.49\% \\
1000 & 30545 & 5144 & 8.22 & 1.38 & 83.16\% & 10.87\% & 91.77\% & 23.12\% & 83.43\% \\
9237 & 420870 & 68307 & 113.3 & 18.39 & 83.77\% & 13.44\% & 91.01\% & 21.33\% & 82.49\% \\ \bottomrule
\end{tabular}
\caption{InceptionV4 on iNaturalist using FedProx aggregation algorithm. Active Participants. Resource usage and projected cost, using
container cost/s of 0.0002692 US\$ (source Microsoft Azure~\cite{azurepricing})}~\label{tbl:cost-inaturalist-active}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 634 & 272 & 0.17 & 0.07 & 99.28\% & 10.58\% & 81.3\% & 42.67\% & 75.26\% \\
100 & 576 & 385 & 0.16 & 0.1 & 98.89\% & 11.97\% & 79.77\% & 12.17\% & 74.77\% \\
1000 & 10516 & 1113 & 2.83 & 0.3 & 99.82\% & 11.41\% & 81.06\% & 11.05\% & 74.15\% \\
10000 & 105021 & 18741 & 28.27 & 5.05 & 99.7\% & 10.25\% & 81.09\% & 10.29\% & 74.71\% \\ \bottomrule
\end{tabular}
\caption{EfficientNet-B7 on CIFAR100 using FedProx aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using container cost/s of 0.0002693 US\$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-cifar-inter}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 33043 & 258 & 8.9 & 0.07 & 99.21\% & 13.23\% & 87.06\% & 46.98\% & 82.11\% \\
100 & 33037 & 385 & 8.89 & 0.1 & 98.88\% & 14.12\% & 84.2\% & 10.3\% & 81.56\% \\
1000 & 510039 & 2975 & 137.3 & 0.8 & 99.42\% & 14.46\% & 85.77\% & 10.69\% & 81.7\% \\
10000 & 5700030 & 40884 & 1534.45 & 11.01 & 99.28\% & 10.91\% & 84.27\% & 12.08\% & 80.86\% \\ \bottomrule
\end{tabular}
\caption{VGG16 on RVL-CDIP using FedSGD aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using container cost/s of 0.0002693 US\$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-rvlcdip-inter}
\end{figure*}
\begin{figure*}[htb]
\small
\centering
\setlength{\tabcolsep}{0.5em}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|r|}
\toprule
\multicolumn{1}{|c|}{Num.} & \multicolumn{2}{|c|}{Tot. container seconds} & \multicolumn{2}{|c|}{Proj. Total cost US\$} & \multicolumn{1}{|c|}{Cost}
& \multicolumn{2}{|c|}{Avg. CPU Util. (\%) } & \multicolumn{2}{|c|}{Avg. Memory Util. (\%) } \\
\multicolumn{1}{|c|}{Parties} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Savings \%} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} &
\multicolumn{1}{|c|}{Static Tree} &
\multicolumn{1}{|c|}{\textsf{AdaFed}} \\
\midrule
10 & 34365 & 509 & 9.25 & 0.14 & 98.52\% & 13.49\% & 87.75\% & 51.13\% & 84.17\% \\
100 & 34358 & 588 & 9.25 & 0.16 & 98.29\% & 11.08\% & 87.08\% & 11.88\% & 83.72\% \\
1000 & 734456 & 17700 & 197.72 & 4.76 & 97.59\% & 11.59\% & 89.09\% & 10.1\% & 87.28\% \\
9237 & 6783036 & 206883 & 1825.99 & 55.69 & 96.95\% & 11.43\% & 88.55\% & 11.19\% & 84.4\% \\ \bottomrule
\end{tabular}
\caption{InceptionV4 on iNaturalist using FedProx aggregation algorithm. Intermittent participants updating over a 10 minute interval for every synchronization round. Resource usage and projected cost using container cost/s of 0.0002693 US\$
(source Microsoft Azure~\cite{azurepricing}).}~\label{tbl:cost-inaturalist-inter}
\end{figure*}
\subsection{Adaptivity/Elastic Scaling for Party Joins}
Next, we illustrate how \textsf{AdaFed}\ can handle parties joining in the middle of the job with minimal impact
on aggregation latency. For this, we consider a single synchronization round, and increase the number of
parties by 20\%. Figures~\ref{tbl:joins-cifar}, \ref{tbl:joins-rvlcdip} and \ref{tbl:joins-inaturalist} illustrate the aggregation
latency when 20\% more parties send model updates during the synchronization round.
For these experiments, we only illustrate static tree based overlays and \textsf{AdaFed}. This is because
Section~\ref{sec:agglatency-scalability} has already demonstrated that centralized aggregators
do not scale to handle large numbers of parties; the effect of party joins is similar -- aggregation latency
increases almost linearly w.r.t.\ the number of parties joining.
Serverless aggregation in \textsf{AdaFed}\ needs no overlay reconfiguration, because the number of serverless
function invocations depends only on the aggregation workload, and partially aggregated updates can be stored
in message queues. Static tree aggregation, however, needs to instantiate new aggregator nodes and change the
topology, which manifests as a significant increase in aggregation latency (2.47$\times$ to 4.44$\times$
relative to serverless aggregation, per Figures~\ref{tbl:joins-cifar}--\ref{tbl:joins-inaturalist}). Thus, although
both static tree and serverless aggregation methods are elastic, using serverless functions provides
significantly better outcomes.
\subsection{Resource Consumption \& Cost}
We compare \textsf{AdaFed}\ with static tree aggregation in terms of resource usage. Although the single aggregator
deployment (e.g., using IBM FL) has much lower resource requirements when compared to \textsf{AdaFed}, it has significantly higher latency
and does not scale. So, we do not consider it in the experiments in this section.
We first illustrate the resource consumption of experiments where parties participate actively (as defined in Section~\ref{sec:background}). Figures~\ref{tbl:cost-cifar-active},\ref{tbl:cost-rvlcdip-active} and
\ref{tbl:cost-inaturalist-active} tabulate the resource usage for the three workloads, in terms of
container seconds and CPU/memory utilization. This data illustrates the real benefits of
using serverless aggregation, with $>85\%$ resource and cost savings for the EfficientNet-B7/CIFAR100/FedProx job,
$>90\%$ for VGG16/RVL-CDIP/FedSGD and $>80\%$ for InceptionV4/iNaturalist/FedProx.
These savings are significant and are a direct result of the adaptivity of \textsf{AdaFed}, by deploying
aggregator functions only when needed. Resource wastage due to static tree can also be observed
from the CPU/memory utilization figures, which are consistently low for static tree because aggregator
instances are idle for long periods. We also observe that, while compute resources needed for aggregation increase
with the number of participants for both static tree and serverless aggregation, the amount of resource and cost
savings remains fairly consistent. We use Microsoft Azure's container pricing for illustrative purposes only;
pricing is similar for other cloud providers.
We stress that the experiments in Figures~\ref{tbl:cost-cifar-active},\ref{tbl:cost-rvlcdip-active} and
\ref{tbl:cost-inaturalist-active} are \emph{conservative}; they assume active participation. That is, parties have dedicated resources
to the FL job, parties do not fail in the middle of training, and training on parties for each round
starts immediately after a global model is published by the aggregator. In realistic scenarios,
parties (e.g., cell phones or laptops or edge devices) perform many functions other than model training,
have other tasks to do and can only be expected to respond over a period of time (response timeout).
Depending on the deployment scenario, this can be anywhere from several minutes to hours.
Figures~\ref{tbl:cost-cifar-inter}, \ref{tbl:cost-rvlcdip-inter} and \ref{tbl:cost-inaturalist-inter} demonstrate that
resource and cost savings are huge ($96.9\%$--$99.8\%$ across our workloads) when the response timeout is set to
\emph{a modest} 10 minutes per aggregation round. Real-world FL jobs typically use higher response timeouts and
will thus reap enormous benefits. Thus, our experiments reinforce our
confidence that serverless aggregation can lead to significant resource and cost savings
with minimal overhead.
\section{Introduction}
Federated Learning (FL)~\cite{kairouz2019advances, fieldguide} is a mechanism in which
multiple parties collaborate to build and train a joint machine learning model typically
under the coordination/supervision of a central server or service provider (definition
by Kairouz et. al.~\cite{kairouz2019advances, fieldguide}). This central server is also
called an \emph{aggregator}. FL is private by design, because parties retain their
data within their private devices/servers; never sharing said data with either
the aggregator or other parties. An FL job involves parties performing local training on their data,
sharing the weights/gradients of their model (also called a \emph{model update}) with the aggregator,
which aggregates the model updates of all parties using a fusion algorithm.
The use of \emph{centralized aggregation} is common in FL because of the ease in which
various machine learning models (neural networks, decision trees, etc.) and
optimization algorithms can be supported.
FL is \emph{typically} deployed in two scenarios: \emph{cross-device} and \emph{cross-silo}.
In the cross-silo scenario, the number of parties is small, but each party has extensive
compute capabilities (with stable access to electric power and/or equipped with hardware accelerators)
and large amounts of data. The parties have reliable participation throughout the entire federated
learning training life-cycle, but are more susceptible to sensitive data leakage. Examples include
multiple hospitals collaborating to train a tumor/COVID detection model on radiographs~\cite{nvidia-covid}, multiple banks
collaborating to train a credit card fraud detection model, etc.
The cross-device scenario involves a large number of parties ($>100$), but each party has a small
number of data items, constrained compute capability, and limited energy reserve (e.g., mobile phones or IoT devices).
They are highly unreliable/asynchronous and are expected to drop and join frequently. Examples include a large
organization learning from data stored on employees' devices and a device manufacturer training a model
from private data located on millions of its devices (e.g., Google Gboard~\cite{bonawitz2019towards}).
Increasing adoption of FL has, in turn, increased the need for
FL-as-a-service offerings by public cloud providers, which serve as a nexus
for parties in an FL job and aggregate/fuse model updates.
Such FL aggregation services have to effectively support multiple concurrent
FL jobs, with each job having tens to thousands of heterogeneous participants (mobile phones,
tablets, sensors, servers) from different organizations and administrative domains.
Our experience, in building and operating the IBM Federated Learning (IBM FL)~\cite{ibmflpublic, ibmfl}
service on our public and private clouds has led us to believe that existing FL aggregation
methods have performance, scalability and resource efficiency challenges, primarily
due to the use of centralized aggregation.
\begin{comment}
Such cloud services have to effectively support multiple concurrent FL jobs and
multi-tenancy -- parties
may be from different organizations with varying security and privacy policies,
and each organization may have several participating entities (employee devices, data centers
from different geographies, etc.).
Over the past three years, our team has built, launched and operated the
. Our experience has led us to
believe that effective aggregation of model updates is a key problem in FL,
when viewed from either a performance, scalability, resource efficiency/cost, or
privacy perspective.
\end{comment}
\noindent{\bf Performance:} Aggregators should not become a bottleneck or a single point
of failure in FL jobs. They should be able to store incoming model updates without loss,
and have low latency -- the time between the arrival of the last expected model update
and the completion of aggregation. In the case of a cloud hosted FL aggregation service,
said guarantees must hold across all running FL jobs. Most existing FL platforms
(IBM FL~\cite{ibmfl}, Webank FATE~\cite{fate}, NVIDIA NVFLARE~\cite{nvflare}) are based on a client-server model with
a single aggregator per FL job deployed (as a virtual machine or container)
in datacenters waiting for model updates. Such platforms are able to easily support multiple concurrent
FL jobs, but performance drops as the number of parties increases, especially in cross-device settings.
This is because aggregation
throughput is limited by the computational capacity of the largest VM or container
(memory and compute, and to a lesser extent, network bandwidth).
\noindent{\bf Scalability:} We consider scalability in terms of the number of parties, size of model
updates, frequency of updates and (for an FL service) number of concurrent FL jobs. FL platforms
using a single aggregator per job only support vertical scalability; non-trivial design
using data parallelism and connecting multiple aggregators
is necessary for horizontal scalability, especially in cross-device settings. FL jobs involve several rounds,
and take an extended period of time, especially with intermittently available parties. Party joins and
dropouts are common; so aggregation infrastructure must scale horizontally to support this.
\noindent{\bf Resource Efficiency/Cost:} From operating IBM FL, and from publicly available FL benchmarks
like LEAF~\cite{leaf-benchmark} and TensorFlow Federated~\cite{tff-benchmark}, we have observed that
training at the party
takes much longer compared to model update fusion/aggregation, resulting in under-utilization and
wastage of computing resources dedicated to aggregation. This is a significant problem even
in cross-silo settings, where active participation is not guaranteed due to competition
from other higher priority workloads and variations in data availability.
It is further compounded in ``cross-device'' deployments, where parties are highly \emph{intermittent}
and do not have dedicated resources for training.
In these scenarios, the
aggregator expects to hear from the parties \emph{eventually} (typically over several hours, or perhaps once
a day). Large-scale FL jobs almost always involve intermittent parties -- as the number
of parties increases, it is extremely hard to expect that all of them participate at the same pace.
This results in aggregators having to wait for long periods of time for parties to finish local
training and send model updates.
\noindent {\bf Contributions:} The core technical contribution of this paper is the design, implementation and evaluation of a
flexible parameter aggregation mechanism for FL -- \textsf{AdaFed}, which has the following novel features:
\begin{itemize}
\item \textsf{AdaFed}\ reduces state in aggregators and treats aggregators as serverless functions. In many existing FL jobs,
every aggregator instance typically
acts on a sequence of inputs and produces a single output. State, if present, is not local to the aggregator
instance and may be shared by all aggregators. Such state is best left in an external store, and consequently
aggregators can be completely stateless and hence, serverless. \textsf{AdaFed}\ is therefore scalable both with respect to
participants -- effective for cross-silo and cross-device deployments,
and with respect to geography -- single/hybrid cloud or multicloud.
\item \textsf{AdaFed}\ leverages serverless technologies to deploy and tear down aggregator instances dynamically
in response to participant model updates, thereby supporting both intermittent and active participants
effectively. There is no reason to keep aggregators deployed all the time and simply ``awaiting input''.
\item \textsf{AdaFed}\ is efficient, both in terms of resource utilization with support for automatic elastic scaling, and in terms of aggregation latency.
\item \textsf{AdaFed}\ is reasonably expressive, allowing programmers to easily implement scalable aggregation algorithms.
\textsf{AdaFed}\ is implemented using the popular Ray~\cite{ray} distributed computing platform, can run arbitrary Python code
in aggregation functions, and can use GPU accelerators if necessary (a minimal sketch of such an aggregation function follows this list).
\item \textsf{AdaFed}\ increases FL job reliability and fault tolerance by reducing state in aggregators, eliminating persistent
network connections between aggregators, and through dynamic load balancing of participants.
\item \textsf{AdaFed}\ supports widely used FL privacy preserving and security mechanisms.
\end{itemize}
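To illustrate the intended programming model, the following is a minimal sketch of a stateless, hierarchical
aggregation function expressed as Ray tasks. It is an illustration only -- the function names, the plain
FedAvg-style mean and the two-level fan-in are simplifying assumptions made here, not \textsf{AdaFed}'s actual API:
\begin{verbatim}
import ray
import numpy as np

ray.init()

@ray.remote
def aggregate(updates):
    # Stateless fusion: average a batch of model updates (FedAvg-style).
    # No state survives the call, so instances can be launched on demand
    # when a batch of updates arrives and torn down immediately afterwards.
    return np.mean(np.stack(updates), axis=0)

# 100 simulated party updates, fused hierarchically: four leaf tasks
# aggregate 25 updates each, then a root task fuses the partial results.
party_updates = [np.random.randn(10) for _ in range(100)]
leaves = [aggregate.remote(party_updates[i:i + 25])
          for i in range(0, 100, 25)]
global_update = ray.get(aggregate.remote(ray.get(leaves)))
\end{verbatim}
Since equally sized batches make the mean of partial means equal to the global mean, the fan-in degree can be chosen
purely for scalability, and any state shared across rounds can live in an external store, as argued above.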
\section{Related Work}
Parallelizing FL aggregation using a hierarchical topology has been
explored by~\cite{bonawitz2019towards}, though the design pattern was introduced by early
work on datacenter parallel computing~\cite{grama}. While \cite{bonawitz2019towards}
uses hierarchical aggregation, its programming model is different from \textsf{AdaFed}. Its primary goal is
scalability and consequently, it deploys long lived actors instead of serverless functions.
\textsf{AdaFed}\ aims to make FL aggregation resource efficient and elastic in addition to being scalable,
and uses off-the-shelf open source software like Ray, Kafka and Kubernetes.
Another closely related concurrent work is FedLess~\cite{fedless}, which predominantly uses serverless functions
for the training side (party side) of FL. FedLess is able to use popular serverless technologies
like AWS Lambda, Azure Functions and OpenWhisk to enable clients/parties on cloud platforms to perform
local training, and reports interesting results on using FaaS/serverless instead of IaaS (dedicated VMs and containers)
to implement the party side of FL. It also has the ability to run a single aggregator as a cloud function, but does not
have the ability to parallelize aggregation, and does not seem to scale beyond 200 parties (with 25 parties
updating per FL round, per \cite{fedless}). Our work in \textsf{AdaFed}\ has the primary goal of parallelizing and
scaling FL aggregation. FedLess~\cite{fedless} also does not adapt aggregation based on party behavior,
and it is unclear whether parties on the edge (phones/tablets) can train using FedLess.
A number of ML frameworks -- Siren~\cite{siren}, Cirrus~\cite{cirrus} and
LambdaML~\cite{jiang-serverless-ml} -- use serverless functions
for centralized (not federated) ML and DL training.
Siren~\cite{siren} allows users to train models (ML, DL and RL) in the cloud
using serverless functions with the goal to reduce programmer burden involved
in using traditional ML frameworks and cluster management technologies for
large scale ML jobs. It also contains optimization algorithms to tune training
performance and reduce training cost using serverless functions.
Cirrus~\cite{cirrus} goes further, supporting end-to-end centralized ML training workflows
and hyperparameter tuning using serverless functions.
LambdaML~\cite{jiang-serverless-ml} analyzes
the cost-performance trade-offs between IaaS and serverless
for datacenter/cloud hosted centralized ML training.
LambdaML supports various ML and DL optimization algorithms, and
can execute purely using serverless functions or optimize cost using
a hybrid serverless/IaaS strategy. \textsf{AdaFed}\ differs from Siren, Cirrus and LambdaML in
significant ways -- Distributed ML (in Siren, Cirrus and LambdaML) is different from FL. Distributed ML involves
centralizing data at a data center or cloud service and performing training at a central location.
In contrast, with FL, data never leaves a participant. FL's privacy guarantees are much stronger
and trust requirements much lower than that of distributed ML.
The term ``serverless'' has also been used to refer to peer-to-peer (P2P) federated learning, as
in ~\cite{rw1,rw4, flgossip}. In such systems, aggregation happens over a WAN overlay and not in a datacenter.
The first step involves establishing the overlay network, using existing technologies such as
publish/subscribe overlays, peer discovery, etc.~\cite{pubsub, streamoverlays}. The next step involves establishing a spanning
tree over the P2P overlay, routing updates along the spanning tree and aggregating at each node on the tree.
Gossip based learning, \cite{flgossip} does not construct overlays but uses gossip-based broadcast algorithms
to deliver and aggregate model updates in a decentralized manner. While these techniques are scalable and
(in the case of gossip algorithms) fault tolerant, they do require either (i) that the model be revealed
to more entities during routing, or (ii) homomorphic encryption~\cite{jayaram-cloud2020}, which can be challenging from both key agreement
and model size explosion standpoints, or (iii) differential privacy~\cite{abadi-diffpriv}, which reduces model accuracy in the
absence of careful hyperparameter tuning.
\section*{Acknowledgments}
\newlength{\bibitemsep}\setlength{\bibitemsep}{.2\baselineskip plus .05\baselineskip minus .05\baselineskip}
\newlength{\bibparskip}\setlength{\bibparskip}{0pt}
\let\oldthebibliography\thebibliography
\renewcommand\thebibliography[1]{%
\oldthebibliography{#1}%
\setlength{\parskip}{\bibitemsep}%
\setlength{\itemsep}{\bibparskip}%
}
\section{Introduction}
The remote sensing of extensive air showers (EAS) using a bistatic radar system is a promising technique with almost 100$\%$ duty cycle that is currently being developed. If successful, it will allow the next generation of cosmic ray observatories to be built at much lower cost.
The concept of implementing a radar for cosmic ray detection dates back to 1940 \cite{bibe:blackett}. However, due to the lack of experimental confirmation of this method, it was not pursued for several decades. In recent years renewed attention has been given to this topic \cite{bibe:baruch,bibe:gorham,bibe:bakunov,bibe:takai,bibe:stasielak} and experimental efforts to detect EAS using the radar technique were made by several groups \cite{bibe:vin,bibe:lyono,bibe:terasawa,bibe:mariachi,bibe:tara,bibe:tara2}.
Detection of EAS via the radar technique is based on the principle of scattering radio waves off the plasma produced in the atmosphere by the high-energy particles of the shower. The locally produced plasma decays. For the plasma densities relevant for EAS and at low altitudes, three-body attachment to oxygen dominates the deionization process, as it depends quadratically on the oxygen density. This leads to a plasma lifetime of 10 ns at sea level and of about 100 ns at an altitude of 10 km \cite{bibe:vidmar,bibe:nijdam,bibe:n}.
Some features of the scattering of radio waves from the ionization column produced by meteors or lightning are expected to be similar to the scattering from the ionization trail left behind the shower front. Therefore, we can use these similarities as a starting point for the analysis of the radar reflection from EAS.
The ionization trail that results from meteors or lightning is traditionally divided into underdense and overdense regions, depending on the local plasma frequency $\nu_p$. If the electron density is high enough that the plasma frequency exceeds the radar frequency then the radio wave is reflected from its surface. Such a region is called overdense. In contrast, if the electron density is low enough that the local plasma frequency is lower than the frequency of the incoming radio wave, then the region is underdense and the radio wave can penetrate the ionized region. In such a case the reflections are caused by Thomson scattering of the radio wave on individual free electrons.
Gorham \cite{bibe:gorham} considered radar reflection from the side of a horizontal ionization trail left by ultra-high energy neutrinos at an altitude of about 10 km. He suggested that the most inner (overdense) part of the ionization column is responsible for the bulk of the radar reflection. By analogy with the reflective behaviour of the overdense region produced by a meteor, he assumed that the radar cross-section (RCS) of the overdense trail produced by EAS is equal to the RCS of a thin metallic wire.
An alternative mode of EAS detection was discussed in \cite{bibe:bakunov}, where reflection of the radar wave from the relativistically moving shower front was considered. The reflection coefficient was obtained by solving Maxwell's equations with the corresponding boundary conditions. The speed of the shower front, however, was assumed to be that of electrons moving with a speed lower than the speed of light in the air.
In reality the shower front moves with the speed of the highest energy particles in a shower and exceeds the local speed of light at all altitudes of relevance. Therefore, a reflected wave in the forward direction cannot exist because it would be immediately caught by the shower front. In its place a second transmitted wave, the so-called transmitted backscattered wave, is formed \cite{bibe:stephanov}, and it follows in pursuit of the ionization front while standing off from it.
It follows that, in the case of scattering of the radio waves incoming at small angles to the shower axis, one cannot treat the plasma as overdense, because it is transparent to arbitrarily low-frequency incident radiation. Considering the plasma as underdense and using the Thomson cross-section for radar scattering seems to be justified in this situation.
Moreover, unlike the case of reflection from the side of the ionization trail, where the frequency does not change, the frequency of the backscattered radio wave will be upshifted. One can then expect an enhancement of the backscattered signal due to its time compression. To avoid confusion we will use the term 'reflected wave' as a synonym of the scattered wave.
Scattering of the radio wave from the ionization trail produced by the EAS in the underdense plasma regime was considered in \cite{bibe:takai}. The calculations were made for the forward scattered signal assuming that the ionization occurs in a line along the shower axis, i.e. that contributions from the laterally distributed electrons are coherent. The transmitter and receiver were located 50 km apart. The computed case lies in between side scattering and front scattering with respect to the ionization column.
Thus, the frequency upshift of the received signal reaches only modest values.
In this paper, which is an extension of our previous work \cite{bibe:stasielak}, we investigate the feasibility of detecting EAS by the bistatic radar technique at viewing angles smaller than $\sim 25^\circ$ to the shower axis. Simulations are performed for the underdense regime using the Thomson cross-section for scattering of radio waves off the short-lived, non-moving plasma. We neglect absorption, multiple scattering, and currents induced in the plasma. We coherently sum the contributions of the radio wave scattered on each individual electron over the volume of the disk-like ionization trail and obtain the time evolution of the radar echo. The final result depends on the individual phase factors of the scattering electrons.
\section{Modeling radar reflection}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{icrc2013-0473-1.eps}
\caption{A schematic diagram representing a bistatic radar system and reflection from the non-moving plasma produced by the EAS in the atmosphere. See the text for a detailed explanation.}
\label{fig:1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{icrc2013-0473-2.eps}
\caption{Distribution of the factors by which the emitted frequency is up-shifted ($f_r$) for the radio wave scattered off different parts of the shower disk. The altitude of the disk element is given by $h$, whereas its distance to the receiver in the horizontal plane is $d$. A vertical shower heading towards the transmitter is considered.}
\label{fig:upshift}
\end{figure}
A schematic diagram representing the concept of the EAS detection using radar technique is shown in Figure \ref{fig:1}.
A ground-based radio transmitter (T) irradiates a short-lived, disk-like, non-moving plasma left behind the shower front. The radio signal is scattered by free electrons in the ionization trail and subsequently received by the ground-based antenna (R). The geometry of the bistatic radar system is determined by the polar coordinates of the transmitter and the receiver, i.e. by the distances from the shower core to the transmitter ($d_T$) and to the receiver ($d_R$) together with the angles $\varphi_t$ and $\varphi_r$.
Let us consider the radar reflection from the disk element with coordinates ($r_L$, $\varphi$), altitude $h$, and electron density $n_e$. Its contribution to the radar echo at the receiver at time $t$ is given by
\begin{eqnarray}
U (t,s,r_L,\varphi) & = & \sin \alpha \sqrt{\frac{G_T A_R}{4 \pi}} \sqrt{\frac{\mathrm{d} \sigma_T}{ \mathrm{d} \Omega}} n_e e^{i(\omega t + \varphi_0)}
\nonumber \\
& \times & {e^{-i\int_{{\bf r}} n{\bf k} \cdot \mathrm{d}{\bf r}} e^{ - i\int_{{\bf r_{sc}}} n{\bf k_{sc}} \cdot \mathrm{d} {\bf r_{sc}} } \over |{\bf r}| |{\bf r_{sc}}|}
\rm{,} \label{dUrcv}
\end{eqnarray}
where $G_T$ is the transmitter antenna gain, $A_R$ is the effective area of the receiver antenna, $\bf k$ and $\bf k_{sc}$ are the altitude-dependent wave vectors of the incoming and scattered radio waves, $\varphi_0$ is the initial phase of the emitted signal, $\mathrm{d}\sigma_T/\mathrm{d}\Omega$ is the differential Thomson cross-section, $s$ is the projection of the distance between the shower core and the considered disk element on the shower axis, $\alpha$ is the inclination angle of the reflected radio wave, and $n$ is the refractive index of the air derived from a fit to the US standard atmosphere. The factor $\sin \alpha$ is included to take into account the dependence of the receiver antenna gain on the direction. We assume that the receiver is oriented vertically upwards.
The signal received by the antenna at a given time $t$ is the sum of the signals scattered at different times and from different parts of the plasma disk. These individual contributions interfere with each other and only the integral over the whole volume $V(t)$, from which they arrive simultaneously, gives us the correct total signal. The relative amplitude of the radio wave at the receiver antenna is given by
\begin{equation}
U (t) = \int_{V(t)} U (t,s,r_L,\varphi) r_L \mathrm{d}r_L \mathrm{d}\varphi \mathrm{d}s \rm{.} \label{total}
\end{equation}
The factor $U(t)$ is defined in such a way that the ratio of the 'instantaneous' power $P_R(t)$ received by the detector antenna to the power emitted by the transmitter $P_T$ is equal to
\begin{equation}
P_R(t)/P_T = R^2(t) \rm{,}
\end{equation}
where $ R(t)$ is the real part of $U(t)$. The term $R(t)$ is proportional to the electric field strength detected by the receiver. It is used in the Fourier analysis to obtain the power spectrum of the recorded signal. The 'real' power received by the detector $P_R$ can be obtained by averaging $R^2(t)$ (according to the time resolution of the detector), i.e. $P_R=P_T<P_R(t)/P_T>$. Note that $P_R/P_T \sim G_T A_R$.
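To make the passage from the amplitude \eqref{total} to the averaged power ratio concrete, the following sketch
(an illustrative discretization with random stand-ins for the amplitudes and phases, not the actual simulation code)
coherently sums the phased contributions of the disk elements and averages $R^2(t)$ over a running time window:
\begin{verbatim}
import numpy as np

def received_power_ratio(amplitudes, phases, window=100):
    # Coherent sum over the plasma volume: individual contributions
    # U(t,s,r_L,phi) are complex; only their phased sum is physical.
    U = np.sum(amplitudes * np.exp(1j * phases), axis=-1)
    R = U.real                         # proportional to the received field
    P = R ** 2                         # 'instantaneous' P_R(t)/P_T
    kernel = np.ones(window) / window  # running mean over `window` samples
    return np.convolve(P, kernel, mode="same")

# Toy input: 1000 time samples, each summing 50 disk elements.
rng = np.random.default_rng(0)
amp = rng.uniform(0.0, 1e-4, size=(1000, 50))
ph = rng.uniform(0.0, 2.0 * np.pi, size=(1000, 50))
print(received_power_ratio(amp, ph).max())
\end{verbatim}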
Alternatively, the radar reflection can be described in terms of the effective RCS, which is a measure of the target equivalent physical area of an ideal scattering surface. The effective shower RCS can be defined, by analogy with \cite{bibe:gorham}, in the following way
\begin{equation}
\sigma (t) = \left| \int_{V(t)} e^{-i\int_{{\bf r}} n{\bf k} \cdot \mathrm{d}{\bf r}} e^{ - i\int_{{\bf r_{sc}}} n{\bf k_{sc}} \cdot \mathrm{d} {\bf r_{sc}} } n_e \sqrt{\frac{\mathrm{d} \sigma_T}{ \mathrm{d} \Omega}} \mathrm{d} V \right|^2
\rm{.} \label{eff-cross}
\end{equation}
Note that after removing the geometrical factor of the bistatic radar system $\frac{\sin \alpha }{|{\bf r}| |{\bf r_{sc}}|}$ from the definition of $ U (t,s,r_L,\varphi)$ we obtain $\sigma (t) \propto |U(t)|^2$.
\begin{figure}[t]
\vspace{-0.4cm}
\centering
\includegraphics[width=0.5\textwidth]{icrc2013-0473-3.eps}
\caption{The effective RCS, the waveform of the radar echo, and the ratio of the power received by the detector to the emitted one (averaged over a 100 ns running time window) calculated for a vertical shower with energy $10^{18}$ eV heading towards the transmitter. The shower core-receiver distance is $d_R=100$ m and the frequency of the incident radar wave is $\nu=1$ MHz. The time $t=0$ coincides with the moment at which the shower hits the ground.}
\label{fig:res1}
\end{figure}
The factor $\sigma(t)$, which is defined by equation (\ref{eff-cross}), has the meaning of a cross-section only when the radio transmitter and receiver are sufficiently far away from the scattering plasma or the volume of the scattering plasma is itself very small. These conditions will not always be met. In reality, the volume from which the scattered radio waves arrive simultaneously at the detector can have a considerable size for a small viewing angle to the shower axis. This is caused partly by the time compression of the reflected signal.
The estimation of the EAS radar cross-section given by $\sigma(t)$ is used only for comparison.
\section{Frequency upshift}
Despite the fact that the radio wave is scattered off a non-moving plasma, the ionization front moves with relativistic velocity, and thus we observe a Doppler shift of the received signal. Figure \ref{fig:upshift} shows the factors by which the emitted frequency is up-shifted ($f_r$) for the radio wave scattered off different parts of the disk-like plasma produced by a vertical shower heading towards the transmitter. The altitude of the plasma element is given by $h$, whereas its distance to the receiver in the horizontal plane is $d$.
The frequency upshift depends on the wave direction and the refractive index of the air. It has the highest value for the case in which the viewing angle coincides with the Cherenkov angle.
The typical $f_r$ is high enough to upshift a MHz signal into the GHz range. Therefore, it might be possible to observe the radar echo in the GHz range using a CROME-like setup \cite{bibe:smida} supplemented with a commercial high power MHz transmitter.
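As a rough kinematic orientation (an illustrative moving-mirror estimate, not the formula used in our calculation), a front moving with velocity $\beta c$ in a medium of refractive index $n$ upshifts radiation arriving at an angle $\theta$ to its velocity by a factor of the form
\begin{equation*}
f_r \simeq \frac{1+n\beta}{1-n\beta\cos\theta},
\end{equation*}
which diverges as $n\beta\cos\theta\to1$, i.e. precisely at the Cherenkov condition, consistent with the maxima in figure \ref{fig:upshift}; with the finite values of $f_r$ realized over the disk, an emitted MHz signal is shifted into the GHz range.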
\section{Modeling plasma and radar setup}
The electron density of the plasma, produced by the high-energy shower particles in the air, is estimated using the average longitudinal profile of proton showers parametrized by the Gaisser-Hillas function and assuming the Gora function \cite{bibe:gora} as the lateral distribution. We assume that each shower particle deposits on average 2.3 MeV/g/cm$^{2}$ and that all of the deposited energy goes into ionization. The mean energy per ion-pair production is 33.8 eV. We plan to improve the calculation of the plasma density by incorporating the method used in \cite{bibe:n}.
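For orientation, the ionization yield per shower particle implied by these numbers is easy to check (a back-of-the-envelope sketch; the actual electron density additionally requires the longitudinal and lateral profiles):
\begin{verbatim}
# Ionization electrons produced per shower particle and per g/cm^2
# of traversed air, from the numbers quoted above.
energy_deposit = 2.3e6     # eV deposited per g/cm^2 per particle
ion_pair_energy = 33.8     # eV per electron-ion pair
pairs_per_gcm2 = energy_deposit / ion_pair_energy
print(f"{pairs_per_gcm2:.0f} ion pairs per particle per g/cm^2")
# ~6.8e4; multiplied by the local particle density obtained from the
# Gaisser-Hillas and lateral profiles, this yields n_e.
\end{verbatim}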
Since the received power of the radar echo is strongly diminished by the geometrical factor of $|\bf r|^{-2}|\bf r_{sc}|^{-2}$, the strongest signal will be obtained from altitudes close to the ground level, so we can assume an exponential decay of the static plasma with the characteristic time of 10 ns.
As for the bistatic radar system, we assume that the effective area of the receiver antenna is $A_R$=1 m$^2$ and the transmitter emits signal into the whole upper hemisphere (i.e. $G_T=2$).
Moreover, the receiver is ideal and its efficiency is independent of the frequency of the radar echo.
\section{Results}
\begin{figure}[t]
\vspace{-0.4cm}
\centering
\includegraphics[width=0.5\textwidth]{icrc2013-0473-4.eps}
\caption{The same as figure \ref{fig:res1} but for the transmitter-receiver distance of $d_R=500$ m and radar frequency of $\nu=2$ MHz. The power ratio $<P_R/P_T>$ is averaged over a 500 ns running time window.}
\label{fig:res2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{icrc2013-0473-5.eps}
\vspace{-0.5cm}
\caption{Spectrogram of the radar echo for the vertical shower with energy $10^{18}$ eV heading towards the transmitter. The radar frequency and transmitter-receiver distance are equal to $\nu=10$ MHz and $d_R=500$ m, respectively. The time window of 500 ns was used in the spectrogram calculation.}
\label{fig:spect}
\end{figure}
Figures \ref{fig:res1} and \ref{fig:res2} show the effective RCS, the waveforms $R$ of the radar echo, and the ratios of the power received by the detector to the emitted one $<P_R/P_T>$ (averaged over a running time window) calculated for different frequencies $\nu$ of the radar wave and different shower core-receiver distances $d_R$. In both cases the receiver is outside the Cherenkov cone (see figure \ref{fig:upshift}) and a vertical shower with energy $10^{18}$ eV heading towards the transmitter is considered. The time $t=0$ coincides with the moment at which the shower front hits the ground. The factor $<P_R/P_T>$, which is an equivalent of the return power, is calculated from the spectrogram of the waveform assuming an infinite bandwidth detector. Note that the waveforms are given in units of $10^{-8}$.
As we can see, the frequencies of the received signal are higher than that of the emitted one, despite the fact that the Thomson scattering preserves the frequency of the scattered radio wave. The observed upshift is caused by the interference of the radio waves reflected from the plasma volume at different stages of its development, i.e. at different times.
Since we are outside the Cherenkov cone, which is the most common case expected in experiments, the amplitudes of the waveforms increase with time. It is a geometrical effect of the reflections from the lower parts of the atmosphere. In accordance with the behavior of the signal amplitude, the return power grows with time.
For cases when the receiver is inside the Cherenkov cone, the time sequence is reversed: the lower part of the shower is seen first and the amplitude decreases with time.
Figures \ref{fig:res1} and \ref{fig:res2} show the enhancement of the RCS at the beginning of the signal. It is a combined effect of the growth of the region from which the scattered waves arrive simultaneously at the detector and of the time compression of the received signal; together, these two interrelated factors enlarge the region from which we obtain a coherent signal. The part of the radar echo with the highest frequency upshifts arrives at the detector from large altitudes. Therefore, despite the fact that the scattered signal is enhanced due to the increase of the shower RCS, the return power of the radar echo is small due to the geometrical factor of $|\bf r|^{-2}|\bf r_{sc}|^{-2}$.
Figure \ref{fig:spect} shows an example of the radar echo spectrogram. As expected, the frequency decreases with time. Note the low-frequency component at the end of the radar echo, which is caused by the modulation of the received signal by the factor $e^{i\omega t}$ (see equation (\ref{dUrcv})).
The values of the power ratio $<P_R/P_T>$ for different showers are given in table \ref{tab:1}. It is evident that the strength of the signal decreases with increasing radar frequency: the size of the region from which one gets a coherent signal scales with the wavelength, and destructive interference cancels out the signal from the remaining regions.
\begin{table}[t]
\vspace{-0.2cm}
\caption{The maximum values of the received to the emitted power ratio for shower with different energies $E$, frequencies $\nu$ of the radar wave, and transmitter-receiver distances $d_R$. In all cases a vertical shower heading towards the transmitter is considered. The signal is averaged over a 100 ns time window.}
\begin{center}
\begin{tabular}{l|l|l|l|l}
\hline \hline
\vspace{-0.3cm}
& \multicolumn{2}{|c}{} & \multicolumn{1}{|c|}{} \\
$E$ & \multicolumn{2}{c}{$10^{18}$ eV} & \multicolumn{1}{|c}{$10^{19}$ eV} & \multicolumn{1}{|c}{$10^{20}$ eV} \\
\hline
$\nu$ \textbackslash $d_R$ & 100 m & 500 m & 200 m & 200 m \\
\hline
1 MHz & -133 dB & -158 dB & -119 dB & -98 dB \\
5 MHz & -154 dB & -174 dB & -138 dB & -117 dB \\
10 MHz & -162 dB & -182 dB & -146 dB & -124 dB \\
20 MHz & -172 dB & -191 dB & -156 dB & -134 dB \\
\hline \hline
\end{tabular}
\end{center}
\label{tab:1}
\vspace{-0.2cm}
\end{table}
\section{Conclusions}
We have studied the feasibility of EAS detection by the bistatic radar technique at small viewing angles to the shower axis.
Due to the time compression, the signal scattered off the plasma is enhanced. The effect is strongest for the initial part of the radar echo with the highest frequency upshifts. However, the resulting return power is strongly diminished due to the large distance of the scattering plasma to the detector.
The typical signal consists of two parts: a short signal upshifted to high-frequency with low amplitudes and a long signal with modest frequency upshifts and larger amplitudes. In principle, the last part of the signal, with the typical ratio of the received to the emitted power between -100 dB and -190 dB, should be observable. Since the strength of the signal decreases with the radar frequency, it is recommended to use low frequencies of the radar wave.
A note should be added that the shown time traces are for an infinite bandwidth detector -- a realistic detector would only be able to detect the signal in a narrow frequency range. Moreover, it will only see the shower for a certain fraction of its development.
\vspace*{0.5cm}
\footnotesize{{\bf Acknowledgment:}{
This work has been supported in part by the KIT start-up grant 2066995641,
the ASPERA project BMBF 05A11VKA and the National Centre for Research and Development (NCBiR) grant
ERA-NET-ASPERA/01/11.}}
\section{Introduction}
The realized quadratic variation is a powerful tool in the statistical analysis of stochastic processes, and it has received
a lot of attention in the literature. Its generalization, the realized power variation of order $p>0$, has
received similar attention, as it can tackle several problems related to the realized quadratic variation. For example,
asymptotic normality does not hold for the realized quadratic variation of the fractional Brownian motion (fBm) $B^H$ with
$H>\frac34$, while it does hold for the realized power variation if one chooses $p$ large enough. Moreover, many results are limited
to the fBm, which has stationary increments, and do not extend to general non-stationary Gaussian processes.
The realized power variation of order $p$ (quadratic variation if
$p=2$) is defined as
\begin{equation} \label{eq:statistic}
\sum_{i=1}^{[nt]}\left|X_{i/n}-X_{(i-1)/n}\right|^p
\end{equation}
where $\{X_{t},t\geq0\}$ is a stochastic process. It was originally
introduced in Barndorff-Nielsen and Shephard (\cite{BS2002}, \cite{BS2003}, \cite{BS2004a},\cite{BS2004b}) to estimate the integrated
volatility in some stochastic volatility models used in quantitative finance and also, under an appropriate modification,
to estimate the jumps of the processes. The main interest in the mentioned papers is the asymptotic behaviour of an appropriately
normalised version of the statistic (\ref{eq:statistic}), when the process $X_{t}$ is a stochastic integral with respect to a
Brownian motion. Refinements of the results have been obtained in \cite{W2003} and \cite{W2005}, and further extensions can be found
in \cite{BSal}.
The asymptotic behaviour of the power variation of a stochastic integral $Z_{t}=\int_{0}^{t}u_{s}dB_{s}^{H}$ with respect to a fBm was studied in \cite{CNW}. In \cite{CNW} the authors proved that if $u=\{u_{t},t\geq0\}$ has finite $q$-variation for some $q<1/(1-H)$,
then
\begin{eqnarray}
n^{-1+pH}V_{p}^{n}(Z)_{t} &\longrightarrow&
c_{1,p}\int_{0}^{t}|u_{s}|^{p}ds
\end{eqnarray}
uniformly in probability on any compact set of $t$, where $c_{1,p}=\mathrm{I\kern-0.16em E}[|B_{1}^{H}|^{p}]$ and $V_{p}^{n}(Z)_{t}=\sum_{i=1}^{[nt]}\left|Z_{i/n}-Z_{(i-1)/n}\right|^p.$ The authors also proved a central
limit theorem for $H \in (0,\frac{3}{4}]$. However, the condition $H\in(0,\frac{3}{4}]$ is critical in \cite{CNW}. The first objective of \cite{HNZ}
was to remove this restriction. They used higher order differences and defined the
power variation as $V_{k,p}^{n}(Z)_{t}=\sum_{i=1}^{[nt]-k+1}\left|\sum_{j=0}^{k}(-1)^{k-j}C_{j}^{k}Z_{(i+j-1)/n}\right|^{p}$ for
certain numbers $C_j^k$.
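For instance, for $k=2$ this power variation reads
\begin{equation*}
V_{2,p}^{n}(Z)_{t}=\sum_{i=1}^{[nt]-1}\left|Z_{\frac{i+1}{n}}-2Z_{\frac{i}{n}}+Z_{\frac{i-1}{n}}\right|^{p},
\end{equation*}
the inner sum being the $k$-th order difference of $Z$ with binomial weights $C_{j}^{k}=\binom{k}{j}$.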
Regarding the related literature, we also mention a series of articles, all by the same authors, studying power variations of
general Gaussian processes. In \cite{BCP2009} asymptotic theory for the realized power variation of the processes $\phi(G)$ was studied.
Here $G$ is a general Gaussian process with stationary increments, and $\phi$ is a deterministic function. The authors proved that
under some mild assumptions on the variance function of the increments of $G$ and certain regularity conditions on the path of
the process, a properly normalised version of the power variation converges uniformly in probability. Exploiting these ideas, central limit theorems and
convergence of (multi) power variations for the general Gaussian processes with stationary increments and Gaussian semistationary
processes was studied in \cite{BCPW2009} and \cite{BCP2011}. Finally, similar questions for variations based on higher order
differences were studied in \cite{BCP2013}. As an application, estimation of the smoothness parameter of the process was discussed.
While the literature on the topic is wide due to the centrality of the problem,
all of the mentioned studies consider only (uniform) convergence in probability.
To the best of our knowledge, stronger modes of convergence, such as uniform almost sure convergence, are not widely studied in the literature.
In \cite{BEV}, the authors studied the asymptotic behaviour of
the
realized quadratic variation of a process of the form $\int_{0}^{t}u_{s}dY^{(1)}_{s}$, where $Y_{t}^{(1)}=\int_{0}^{t}e^{-s}dB^{H}_{a_{s}}$; $a_{t}=He^{t/H}$, $B^{H}$ is a fBm with Hurst parameter $H\in(0,1)$, and $u$ is a $\beta$-H\"older continuous process with $\beta > 1-H$. The process $Y^{(1)}$ is connected to the fractional Ornstein-Uhlenbeck process of the second kind, which is defined through the Lamperti transform
of the fBm. Equivalently, fractional Ornstein-Uhlenbeck process of the second kind can be defined as
the solution to the stochastic differential equation \begin{eqnarray}
dX_{t} &=& -\theta X_{t}dt+\sigma_{t} dY^{(1)}_{t}.
\end{eqnarray}
As the main result, they obtained almost sure and uniform
convergence. In comparison, \cite{CNW} obtained uniform
convergence in probability. They
also established a weak convergence result provided that $H\in\left(0,\frac34\right)$.
In this paper we study the asymptotic behaviour of the realized
quadratic variation of a process of the form $\int_{0}^{t}u_{s}dG^{H}_{s}$, where
$G^{H}$ is a self-similar Gaussian process (including the fBm $B^H$, the sub-fBm $S^H$ and the bi-fBm $B^{H_0,K_0}$) with parameter $H\in(0,3/4)$ ($H=H_0K_0$ for the bi-fBm) and $u$ is a $\beta$-H\"{o}lder continuous process with $\beta > 1-H$. The Gaussian Ornstein-Uhlenbeck process can be
defined as the solution to the stochastic differential equation
\begin{eqnarray}
dX_{t} &=& -\theta X_{t}dt+\sigma_{t} dG^{H}_{t}.
\end{eqnarray}
As our main result, we obtain almost sure and uniform convergence of the realized quadratic variation of the self-similar Gaussian process $G^{H}$. That is, we show that for $Z_t = \int_{0}^{t}u_{s}dG^{H}_{s}$ we have
$$
n^{2H-1}\sum_{i=1}^{[nt]}\left|Z_{\frac{i}{n}}-Z_{\frac{i-1}{n}}\right|^2 \longrightarrow \int_{0}^{t}|u_{s}|^2ds
$$
almost surely and uniformly in $t$, for any $H\in(0,3/4)$ and any process $u$ that is regular enough. In order to obtain this stronger convergence, we apply a recently developed simplified method \cite{Lauri Viitasaari} for studying quadratic variations of Gaussian sequences. With this simplified method, which is based
on a concentration phenomenon, one is able to obtain the stronger mode of convergence at the same time.
To obtain the desired results, we make the following assumptions on the self-similar Gaussian process $G^H$:
\textbf{(A1)} The function $d(s,t)=\mathbb{E}(G_t^H-G_s^H)^2$ is in $C^{1,1}$ outside the diagonal and satisfies
$$|\partial_{s,t}d(s,t)|=O(|t-s|^{2H-2}).$$
\textbf{(A2)} $G^H$ is H\"{o}lder continuous of order $\delta$ for any $0<\delta<H$.
\textbf{(A3)} Let $I_n(i)=\{j: ~\frac{j}{m}\in(\frac{i-1}{n},\frac{i}{n}]\}$. As $m\to\infty$,
$$m^{-1+2H}\sum_{j\in I_n(i)}\mathbb{E}|G^H_{j/m}-G^H_{(j-1)/m}|^2\to\frac1n.$$
\textbf{(A4)} For $j, l=1,2,\cdots, N$, there exist constants $c_0$ and $c_1$ such that
$$\mathbb{E}[(G^H_{j}-G^H_{j-1})(G^H_{l}-G^H_{l-1})]=c_0\rho_H(|j-l|)+c_1\theta(j,l)$$
where $\rho_H(x)=\frac12\Big[(x+1)^{2H}+(x-1)^{2H}-2x^{2H}\Big]$ and $|\theta(j,l)|^2=o(1/j)$ as $j\to\infty$ (or $o(1/l)$ as $l\to\infty$).\\
Note that assumptions (A1)--(A3) are mainly used in the proof of consistency in Theorem \ref{theorem:Consistency}. The condition on $\theta(j,l)$ in (A4) ensures that, for $m\geq2$, $$\lim_{N\to\infty}\frac1N\sum_{j,l}\Big|\mathbb{E}[(G^H_{j}-G^H_{j-1})(G^H_{l}-G^H_{l-1})]\Big|^m<\infty,$$ which will be used in the proof of stable convergence in Theorem \ref{theorem:distribution asymptotic}.
The paper is organized as follows. After some preliminaries in Section 2, Section 3 is devoted to the proofs of the main results, based on assumptions (A1)--(A4) of Section 1 and the lemmas and theorems given in Section 2. We apply our results to the estimation of the integrated volatility in Section 4.
Throughout this paper, if not mentioned otherwise, the letter $c$, with or without a subscript, denotes a generic positive finite constant and may change
from line to line.
\section{Preliminaries}
In this paper, we consider a centered Gaussian process $\{G_t^H, t\geq0\}$, defined on some probability space $(\Omega, \mathcal{F}, P)$, which is self-similar with index $H\in(0,3/4)$. We always assume that $G^H$ satisfies assumptions (A1)--(A4). These conditions are satisfied by a variety of Gaussian processes; in particular, they are straightforward to validate for the following ones.
\begin{example}\label{ex-fBm}
$G^H_t=B_t^H$ is a fBm, of which the covariance function is
$$\mathbb{E}(B_t^HB_s^H)=\frac12(t^{2H}+s^{2H}-|t-s|^{2H}).$$
\end{example}
\begin{example}\label{ex-sub-fBm}
$G^H_t=S_t^H$ is a sub-fBm, of which the covariance function is
$$\mathbb{E}(S_t^HS_s^H)=t^{2H}+s^{2H}-\frac12[(t+s)^{2H}+|t-s|^{2H}].$$
\end{example}
\begin{example}\label{ex-bi-fBm}
$G^H_t=B_t^{H_0,K_0}$ is a bi-fBm, of which the covariance function is
$$\mathbb{E}(B_t^{H_0,K_0}B_s^{H_0,K_0})=2^{-K_0}[(t^{2H_0}+s^{2H_0})^{K_0}-|t-s|^{2H_0K_0}],$$
where $H=H_0K_0\in(0,3/4)$ and $K_0\in(0,1]$.
\end{example}
Next, we are going to verify that these processes meet the assumptions (A1)--(A4).
\begin{lemma}\label{lem-fBm}
Assumptions (A1)--(A4) are satisfied by fBm.
\end{lemma}
\begin{proof}
For $t,s >0$,
$$d(s,t)=|t-s|^{2H}$$
which gives (A1) and (A2).
Since the fBm has stationary increments, we have
$$\mathbb{E}[(B^H_{j}-B^H_{j-1})(B^H_{l}-B^H_{l-1})]=\rho_H(|j-l|),$$
where $\rho_H(x)=\frac12[|x+1|^{2H}+|x-1|^{2H}-2|x|^{2H}]$. This gives (A4).
For (A3),
\begin{align*}
m^{-1+2H}\sum_{j\in I_n(i)}\mathbb{E}|G^H_{j/m}-G^H_{(j-1)/m}|^2&=m^{-1}\sum_{j\in I_n(i)}\mathbb{E}[(B^H_{j}-B^H_{j-1})^2]\\
&=m^{-1}\sum_{j\in I_n(i)}\rho_H(0)\\
&=\frac{i}{n}-\frac{i-1}{n}=\frac1n.
\end{align*}
This completes the proof.
\end{proof}
\begin{lemma}\label{lem-sub-fBm}
Assumptions (A1)--(A4) are satisfied by sub-fBm.
\end{lemma}
\begin{proof}
For $t,s >0$, by Proposition 1.15 in Tudor \cite{Tudor}, we can see
$$(2-2^{2H-1})|t-s|^{2H}\leq d(s,t)\leq |t-s|^{2H}, ~~H>1/2$$
and
$$|t-s|^{2H}\leq d(s,t)\leq (2-2^{2H-1})|t-s|^{2H}, ~~H<1/2,$$
which gives (A1) and (A2).
By simple calculation, we can find
$$\mathbb{E}[(S^H_j-S^H_{j-1})(S^H_l-S^H_{l-1})]=\rho_H(|j-l|)-\rho_H(j+l-1).$$
It is easy to see that (A3) follows from the proof of Lemma \ref{lem-fBm} together with
$$m^{-1}\sum_{j\in I_n(i)}\rho_H(2j-1)\to0 ~\text{as} ~m\to\infty,$$
which holds since $\rho_H(2j-1)=O(j^{2H-2})$.
Since $\rho_H(n)$ is a monotonically decreasing function and is greater than zero when $H>1/2$, and $\rho_H(n)$ is increasing and is less than zero for $H<1/2$, we have
$$\Big|\mathbb{E}[(S^H_j-S^H_{j-1})(S^H_l-S^H_{l-1})]\Big|\leq \Big|\rho_H(|j-l|)\Big|.$$
Moreover, $|\rho_H(j+l-1)|^2=o(1/j)$ as $j\to\infty$ (or equal to $o(1/l)$ as $l\to\infty$) for $H<3/4$. This completes the proof.
\end{proof}
\begin{lemma}\label{lem-bi-fBm}
Assumptions (A1)--(A4) are satisfied by bi-fBm.
\end{lemma}
\begin{proof}
For $t,s >0$, by Proposition 1.7 in Tudor \cite{Tudor}, we can see
$$2^{-K_0}|t-s|^{2H}\leq d(s,t)\leq 2^{2-K_0}|t-s|^{2H}, $$
which gives (A1) and (A2).
Similar to sub-fBm, we have
$$\mathbb{E}[(B^{H_0,K_0}_j-B^{H_0,K_0}_{j-1})(B^{H_0,K_0}_l-B^{H_0,K_0}_{l-1})]=2^{1-K_0}\rho_{H_0K_0}(|j-l|)-\theta(j,l),$$
where
\begin{align*}
\theta(j,l)&=2^{-K_0}\Big[((j-1)^{2H_0}+l^{2H_0})^{K_0}+(j^{2H_0}+(l-1)^{2H_0})^{K_0}\\
&\qquad\qquad-(j^{2H_0}+l^{2H_0})^{K_0}-((j-1)^{2H_0}+(l-1)^{2H_0})^{K_0}\Big].
\end{align*}
By the Lemma 1.1 and the proof of Proposition 1.10 in Tudor \cite{Tudor}, we can have
$$2^{K_0}|\theta(j,j)|=|h(j)+2|,$$
where $h(x)=x^{2H_0K_0}+(x-1)^{2H_0K_0}-2^{1-K_0}(x^{2H_0}-(x-1)^{2H_0})^{K_0}$.
Thus, (A3) follows from the fact that
$$m^{-1}\sum_{j\in I_n(i)}\theta(j,j)\to0 ~\text{as} ~m\to\infty,$$
since $h(mj)$, and hence $\theta(mj,mj)$, converges to zero as $m\to\infty$.
Let $f_j(x)=(j^{2H_0}+x^{2H_0})^{K_0}-((j-1)^{2H_0}+x^{2H_0})^{K_0}>0$, which is decreasing with respect to $x$. Then we can see
\begin{align*}
|\theta(j,l)|&=2^{-K_0}(f_j(l-1)-f_j(l))\leq2^{-K_0}f_j(0)\\
&=2^{-K_0}\left(j^{2H_0K_0}-(j-1)^{2H_0K_0}\right)\\
&\leq 2^{-K_0}|\rho_{H_0K_0}(j-1)|.
\end{align*}
When $H=H_0K_0<3/4$,
$$|\rho_{H}(j-1)|^m=o(1/j), ~\text{as} ~j\to\infty, ~~\text{for} ~m\geq2.$$
Similarly, we can obtain that
\begin{align*}
|\theta(j,l)|^m=o(1/l), ~~\text{as} ~l\to\infty, ~~\text{for} ~m\geq2.
\end{align*}
This gives (A4).
\end{proof}
We refer to \cite{LN}, \cite{RT} and \cite{Tudor} for more details on sub-fBm and bi-fBm.
We also recall that, for $p>0$, the $p$-variation of a real-valued function $f$ on an interval [a,b] is defined as
\begin{equation} \label{p-variation}
var_{p}(f;[a,b])=\sup_{\pi}\left(\sum_{i=1}^{n}|f(t_{i})-f(t_{i-1})|^{p}\right)^{1/p},
\end{equation}
where the supremum is taken over all partitions $\pi=\{a=t_{0}<t_{1}<...<t_{n}=b\}.$ We say that $f$ has finite
$p$-variation (over the interval $[a,b]$), if $var_p(f;[a,b])<\infty$. Young proved that the integral $\int_{a}^{b}fdg$ exists
as a Riemann-Stieltjes integral provided that $f$ and $g$ have finite $p$-variation and $q$-variation with $1/p+1/q>1$. Moreover, the following inequality holds:
\begin{equation} \label{Young inequality}
\left|\int_{a}^{b}fdg-f(a)(g(b)-g(a))\right|\leqslant c_{p,q}var_{p}(f;[a,b])var_{q}(g;[a,b]),
\end{equation}
where $c_{p,q}=\zeta(1/q+1/p)$, with $\zeta(s)=\sum_{n\geq1}n^{-s}$.
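Spelling out the exponent bookkeeping used repeatedly below: for $u$ H\"older continuous of order $\beta$ and $G^{H}$ H\"older continuous of order $H-\varepsilon$, taking $p=1/\beta$ and $q=1/(H-\varepsilon)$ gives
\begin{equation*}
\frac{1}{p}+\frac{1}{q}=\beta+H-\varepsilon>1
\end{equation*}
whenever $\beta>1-H$ and $0<\varepsilon<\beta+H-1$, so integrals such as $\int_{0}^{t}u_{s}dG^{H}_{s}$ exist pathwise in the Young sense.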
We denote by
\begin{equation*}
\parallel f\parallel_{\alpha}:=\sup_{a\leqslant s<t\leqslant b}\frac{|f(t)-f(s)|}{|t-s|^\alpha}
\end{equation*}
the H\"older seminorm of order $\alpha$. Clearly, if $f$ is $\alpha$-H\"older continuous, then it has finite $(1/\alpha)$-variation on
any finite interval. In this case we have, for any $p\geq \frac{1}{\alpha}$, that
\begin{equation} \label{eq:holder_var}
var_{p}(f;[a,b]) \leqslant \parallel
f\parallel_{\alpha}(b-a)^\alpha.
\end{equation}
Throughout the paper, we also assume that $T< \infty$ is fixed. That is, we
consider stochastic processes on some compact interval. We denote by $\|.\|_{\infty}$ the supremum norm on $[0,T]$.\\
For any natural number $n\geq1$, and for any stochastic process $Z=\{Z_{t},t\geq0\}$, we write
\begin{eqnarray} \label{quadratic variation of integral}
V_{n}(Z)_{t}=\sum_{i=1}^{[nt]}\left|Z_{\frac{i}{n}}-Z_{\frac{i-1}{n}}\right|^2.
\end{eqnarray}
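For orientation, the statistic \eqref{quadratic variation of integral} is straightforward to evaluate from discrete observations; the following sketch (our own illustration, with the Brownian special case $H=1/2$, $u\equiv1$ chosen only because the limit $\int_0^1|u_s|^2ds=1$ is known in closed form) computes the normalised version appearing in Theorem \ref{theorem:Consistency}:
\begin{verbatim}
import numpy as np

def normalized_qv(Z, H):
    # n^{2H-1} * V_n(Z)_T: normalised realized quadratic variation
    # computed from the observations Z_{i/n}, i = 0, ..., n.
    n = len(Z) - 1
    return n ** (2 * H - 1) * np.sum(np.diff(Z) ** 2)

# Sanity check: standard Brownian motion on [0,1] (H = 1/2, u = 1).
rng = np.random.default_rng(1)
n = 100_000
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, n ** -0.5, n))))
print(normalized_qv(B, H=0.5))  # close to 1 for large n
\end{verbatim}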
We will use the following two general results, taken from \cite{Lauri Viitasaari}, on the convergence of the quadratic variations of a Gaussian
process.
\begin{theorem}
(\cite[Theorem 3.1]{Lauri Viitasaari}) \label{theorem:BE_bound_QV}
Let $X$ be a continuous Gaussian process and denote by $V_n^X$ its quadratic variation defined by
\begin{equation*}
V_n^X = \sum_{k=1}^n \left[\left(\Delta_k X\right)^2 -\mathrm{I\kern-0.16em E}\left(\Delta_k X\right)^2 \right],
\end{equation*}
where $\Delta_k X = X_{t_{k}} - X_{t_{k-1}}$. Assume that
\begin{eqnarray}
\max_{1\leqslant j \leqslant N(\pi_{n})-1}\sum_{k=1}^{N(\pi_n)-1}\frac{1}{\sqrt{\phi(\Delta t_{k})\phi(\Delta t_{j})}}|\mathrm{I\kern-0.16em E}[(X_{t_{k}}-X_{t_{k-1}})(X_{t_{j}}-X_{t_{j-1}})]| &\leqslant& h(|\pi_{n}|) \notag
\end{eqnarray}
for some function $\phi$ and $h(|\pi_{n}|)$.\newline If $h(|\pi_{n}|)\rightarrow 0 $ as $|\pi_{n}|$ tends to zero, then
the convergence
\begin{eqnarray}
\left|\sum_{k=1}^{N(\pi_{n})-1}\frac{(X_{t_{k}}-X_{t_{k-1}})^2}{\phi(t_{k}-t_{k-1})}-\sum_{k=1}^{N(\pi_{n})-1}\frac{\mathrm{I\kern-0.16em E}(X_{t_{k}}-X_{t_{k-1}})^2}{\phi(t_{k}-t_{k-1})}\right|
&\rightarrow& 0
\end{eqnarray}
holds in probability. Furthermore, the convergence holds almost surely provided that $h(|\pi_{n}|)=o\left(\frac{1}{\log(n)}\right).$
\end{theorem}
The following lemma gives an easy way to compute the function $h(n)$ and is essentially taken from \cite{Lauri Viitasaari} (see
\cite[Theorem 3.3]{Lauri Viitasaari}).
\begin{lemma}
\label{lemma:lemma of Lauri} \label{lma:rate}(\cite{Lauri Viitasaari}) Let $X$
be a continuous Gaussian process such that the function
$d(s,t)=E(X_{t}-X_{s})^{2}$ is in $C^{1,1}$ outside the diagonal.
Furthermore, assume that
\begin{equation}\label{eq:assuption}
|\partial_{st} d(s,t)|=O\left(|t-s|^{2H-2}\right)
\end{equation}
for some $H\in (0,1), H \neq \frac12$. Then
\begin{equation*}
\max_{1\leqslant j \leqslant n} \sum_{k=1}^n
\left\vert\mathrm{I\kern-0.16em E}(\Delta_k X \Delta_j
X)\right\vert \leqslant \max_{1\leqslant j \leqslant
n} d\left(\frac{j}{n},\frac{j-1}{n}\right)+ \left(\frac{1}{n}\right)^{1\wedge 2H}.
\end{equation*}
\end{lemma}
Finally, in order to study stable convergence in law we recall the following general convergence result taken from \cite{CNP}.
\begin{theorem}[\protect\cite{CNP}]
\label{theorem: Hypotheses} Let $(\Omega,\mathcal{F},P)$ be a complete probability space. Fix a time interval $[0,T]$ and
consider a double sequence of random variables $\xi=\{\xi_{i,m},m\in Z_{+},1\leqslant i \leqslant [mT]\}.$ Assume
the double sequence $\xi$ satisfies the following hypotheses.\\
\textbf{(H1)} Denote $g_{m}(t):=\sum_{i=1}^{[mt]}\xi_{i,m}$. The finite dimensional distributions of the sequence of processes
$\{g_{m}(t),t \in [0,T]\}$ converges $\mathcal{F}$-stably to those of $\{B(t), t\in [0,T]\}$ as $m\rightarrow\infty$, where $\{B(t), t\in [0,T]\}$ is a standard
Brownian motion independent of $\mathcal{F}$.\\
\textbf{(H2)} $\xi$ satisfies the tightness condition $\mathrm{I\kern-0.16em E}\left|\sum_{i=j+1}^{k}\xi_{i,m}\right|^4\leqslant C \left(\frac{k-j}{m}\right)^2$ for any $1\leqslant j \leqslant k\leqslant [mT]$.
If $\{f(t), t\in [0,T]\}$ is an $\alpha$-H\"{o}lder continuous process with $\alpha>1/2$ and we set $X_{m}(t):=\sum_{i=1}^{[mt]}f(\frac{i}{m})\xi_{i,m},$ then we have the $\mathcal{F}$-stable convergence
\begin{eqnarray}
X_{m}(t)&\underset{m\rightarrow\infty}{\overset{Law}{\longrightarrow}}&\int_{0}^{t}f(s)dB_{s}, \notag
\end{eqnarray}
in the Skorohod space $\mathcal{D}[0,T]$ equipped with the uniform topology.
\end{theorem}
Recall that a sequence of random vectors or processes $Y_{n}$
converges $\mathcal{F}$-stably in law to a random vector or
process $Y$, where $Y$ is defined on an extension
$(\Omega',\mathcal{F}',P')$ of the original probability space
$(\Omega,\mathcal{F},P)$, if
$(Y_{n},Z)\overset{Law}{\longrightarrow} (Y,Z)$ for any
$\mathcal{F}$-measurable random variable $Z$. If $Y$ is
$\mathcal{F}$-measurable, then we have convergence in probability.
We refer to \cite{AE}, \cite{LL} and \cite{Renyi} for more details on stable
convergence.\\
To end this section, we give a useful lemma, based on (A4), for proving stable convergence.
\begin{lemma}\label{lem-fdd}
Let $(a_k,b_k]$, $k=1,\cdots, N$ be pairwise disjoint intervals contained in $[0,T]$. Define
$$G_k^{(n)}=n^{-H}\sum_{[na_k]<j\leq [nb_k]}(G_j^{H}-G_{j-1}^H)$$
and
$$Y_k^{(n)}=\frac1{\sqrt{n}}\sum_{[na_k]<j\leq [nb_k]}H_2(G_j^{H}-G_{j-1}^H)$$
for $k=1,\cdots, N$, where $H_2(x)=x^2-1$ is the second Hermite polynomial. Assume that $H<3/4$ and that $G^H$ satisfies (A1)--(A4). Then we have
$$(G^{(n)}, Y^{(n)})\overset{\mathcal{L}}{\to}(G,V),$$
where $G$ and $V$ are independent centred Gaussian vectors, with $G_k=G^H_{b_k}-G^H_{a_k}$, and the components of $V$ are independent with variances $v_1^2(b_k-a_k)$, where $v_1$ depends on the functions $\rho_H$ and $\theta$.
\end{lemma}
\begin{proof}
Denote by $\mathcal{H}_m$ the $m$-th Wiener chaos, the closed subspace of $L^2(\Omega, \mathcal{F}, P)$ generated by the random variables $H_m(X)$, where $X$ belongs to the first Wiener chaos, $\mathbb{E}X^2=1$ and $H_m$ is the $m$-th Hermite polynomial. The mapping $I_m: ~\mathcal{H}_1^{\odot m}\to \mathcal{H}_m$ given by $I_m(X^{\otimes m})=H_m(X)$ provides a linear isometry between the symmetric tensor product $\mathcal{H}_1^{\odot m}$, equipped with the norm $\sqrt{m!}\,||\cdot||_{\mathcal{H}_1^{\otimes m}}$, and $\mathcal{H}_m$. For a function
$$H(X)=\sum_{m=2}^\infty c_mH_m(X)$$
with $\sum_{m=2}^\infty c_m^2m!=\mathbb{E}|H(Z)|^2<\infty$, $Z$ being an $N(0,1)$ random variable and
$$J_mH(X)=c_mH_m(X)$$
where $J_m$ denotes the projection operator onto the $m$-th Wiener chaos. Using the same approach as in the proof of Proposition 10 in Corcuera, Nualart and Woerner \cite{CNW}, to prove the desired result we only need to show that, for any $m\geq2$, $k=1,\cdots, N$,
\begin{equation}\label{eq-fdd-1}
\lim_{n\to\infty}\mathbb{E}|J_m\widetilde{Y}_k^{(n)}|^2=:\sigma^2_{m,k}<\infty,
\end{equation}
\begin{equation}\label{eq-fdd-2}
\sum_{m=2}^\infty\sup_{n}\mathbb{E}|J_m\widetilde{Y}_k^{(n)}|^2<\infty,
\end{equation}
\begin{equation}\label{eq-fdd-3}
\lim_{n\to\infty}\mathbb{E}[J_m\widetilde{Y}_k^{(n)}J_m\widetilde{Y}_h^{(n)}]=0, ~~k\neq h,
\end{equation}
and
\begin{equation}\label{eq-fdd-4}
\lim_{n\to\infty}I_m^{-1}J_m\widetilde{Y}_k^{(n)}\otimes_p I_m^{-1}J_m\widetilde{Y}_k^{(n)}=0, ~~1\leq p\leq m-1,
\end{equation}
where $$\widetilde{Y}_k^{(n)}=\frac1{\sqrt{n}}\sum_{[na_k]<j\leq [nb_k]}H(G_j^{H}-G_{j-1}^H).$$
Replacing $\rho_H(|j-l|)$ by $\rho_H(|j-l|)+\theta(j,l)$, it is easy to obtain \eqref{eq-fdd-3} and \eqref{eq-fdd-4}, since $|\theta(j,l)|^2=o(1/j)$ as $j\to\infty$. So we only need to prove \eqref{eq-fdd-1} and \eqref{eq-fdd-2} below.
\begin{align*}
\mathbb{E}|J_m\widetilde{Y}_k^{(n)}|^2&=\frac{m!c_m^2}{n}\sum_{[na_k]<j,l\leq [nb_k]}\Big[\mathbb{E}(G^H_j-G^H_{j-1})(G^H_l-G^H_{l-1})\Big]^m\\
&=\frac{m!c_m^2}{n}\sum_{[na_k]<j,l\leq [nb_k]}\Big[c_0\rho_H(|j-l|)+c_1\theta(j,l)\Big]^m\\
&=\frac{m!c_m^2}{n}\sum_{[na_k]<j\leq [nb_k]}\Big[c_0\rho_H(0)+c_1\theta(j,j)\Big]^m\\
&\qquad+2\frac{m!c_m^2}{n}\sum_{[na_k]<j\neq l\leq [nb_k]}\Big[c_0\rho_H(|j-l|)+c_1\theta(j,l)\Big]^m.
\end{align*}
By assumption (A4), the part of the summation above involving $\theta(j,l)$ has a finite limit, denoted by
\begin{align*}
\sigma^2_\theta:=\lim_{n\to\infty}\left(\frac{m!c_m^2}{n}\sum_{[na_k]<j\leq [nb_k]}\Big[c_1\theta(j,j)\Big]^m+2\frac{m!c_m^2}{n}\sum_{[na_k]<j\neq l\leq [nb_k]}\Big[c_1\theta(j,l)\Big]^m\right).
\end{align*}
Then \eqref{eq-fdd-1} and \eqref{eq-fdd-2} follow by
\begin{align*}
&\frac1n\sum_{[na_k]<j\leq [nb_k]}\rho_H(0)^m+\frac1n\sum_{[na_k]<j\neq l\leq [nb_k]}(\rho_H(|j-l|))^m\\
&\qquad=\frac{[nb_k]-[na_k]}{n}\rho_H(0)^m+\sum_{j=1}^{[nb_k]-[na_k]}\rho_H(j)^m\frac{[nb_k]-[na_k]-j}{n}\\
&\qquad\to(b_k-a_k)\Big(\rho_H(0)^m+\sum_{j=1}^\infty\rho_H(j)^m\Big)=:\sigma^2_\rho, ~~n\to\infty,
\end{align*}
and we denote $\sigma^2_{m,k}:=\lim_{n\to\infty}\mathbb{E}|J_m\widetilde{Y}_k^{(n)}|^2$
(the limit involves a somewhat lengthy binomial expansion in $\rho_H$ and $\theta$, which we do not write out explicitly).
When $m=2$, we can compute the variance of the limit, $\lim_{n\to\infty}\mathbb{E}|Y_k^{(n)}|^2=:v_1^2(b_k-a_k)$, with $v_1^2(b_k-a_k)c_2^2=\sigma^2_{2,k}$.
\end{proof}
\section{Main results}
We study the asymptotic behavior of the realized quadratic variation of a stochastic process of the form $\int_{0}^{t}u_{s}dG^{H}_{s}$,
where $u$ is a H\"older continuous process of order $\beta >1-H$. Note that, as $G^{H}$ is H\"older continuous of order
$H-\varepsilon$ by assumption (A2), the integral can be understood as a Riemann-Stieltjes integral. In particular, the process is well-defined.
We are now ready to state our first main result, which provides
uniform strong consistency.
\begin{theorem}
\label{theorem:Consistency} Under the assumptions (A1)--(A3), we further suppose that $u=\{u_{t},t\in[0,T]\}$
is an H\"older continuous stochastic process of order $\beta$ with $\beta > 1-H$, $0<H<3/4$, and set
\begin{eqnarray} \label{eq:process_Z}
Z_{t} &=& \int_{0}^{t}u_{s}dG^{H}_{s}.
\end{eqnarray}
Then, as $n$ tends to infinity,
\begin{eqnarray}
n^{2H-1}V_{n}(Z)_{t} &\longrightarrow & \int_{0}^{t}|u_{s}|^2ds,
\end{eqnarray}
almost surely and uniformly in $t$.
\end{theorem}
\begin{proof}
For $t\in [0,T]$ and an integer $n$, we denote by $[nt]$ the largest integer that is at most $nt$. Let now $m\geq n$. We have
\begin{eqnarray}
m^{-1+2H}V_{m}(Z)_{t}&-&\int_{0}^{t}|u_{s}|^2ds \notag \\
&=&m^{2H-1}\sum_{j=1}^{[mt]}\left(\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}\right|^{2}-\left|u_{\frac{j-1}{m}}(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}})\right|^2\right) \notag \\
&&+m^{2H-1}\left(\sum_{j=1}^{[mt]}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right|^2-\sum_{i=1}^{[nt]}\left|u_{\frac{i-1}{n}}\right|^2\sum_{j\in I_{n}(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2\right) \notag \\
&&+m^{2H-1}\sum_{i=1}^{[nt]}\left|u_{\frac{i-1}{n}}\right|^2\sum_{j\in I_{n}(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2-n^{-1}\sum_{i=1}^{[nt]}\left|u_{\frac{i-1}{n}}\right|^2 \notag \\
&&+\left(n^{-1}\sum_{i=1}^{[nt]}\left|u_{\frac{i-1}{n}}\right|^2-\int_{0}^{t}|u_{s}|^2ds\right) \notag \\
&=& A_{t}^{(m)}+B_{t}^{(n,m)}+C_{t}^{(n,m)}+D_{t}^{(n)}, \notag
\end{eqnarray}
where
\begin{eqnarray}
I_{n}(i)&=&\left\{j:\frac{j}{m}\in\left(\frac{i-1}{n},\frac{i}{n}\right]\right\},
\ \ 1\leqslant i\leqslant [nt]. \notag
\end{eqnarray}
The idea of the proof is that we first let $m\rightarrow \infty$ and then $n\rightarrow \infty$, and we show that each of the terms
$A_t^{(m)}, B_t^{(n,m)}, C_t^{(n,m)}$, and $D_t^{(n)}$ converges to zero almost surely, and uniformly in $t$.
Let us begin with the term $C_t^{(n,m)}$. We have
\begin{eqnarray}
\parallel C^{(n,m)}\parallel_{\infty} &\leqslant&\sum_{i=1}^{[nT]}\left|u_{\frac{i-1}{n}}\right|^{2}\left|m^{2H-1}\sum_{j\in I_{n}(i)}\left| G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2-n^{-1}\right|.
\notag
\end{eqnarray}
As we first let $m\rightarrow \infty$, it suffices to show that, for a fixed $n$, we have
\begin{eqnarray}
\left|m^{2H-1}\sum_{j\in I_{n}(i)}\left| G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2-n^{-1}\right| \rightarrow 0. \notag
\end{eqnarray}
By assumption (A1), Lemma \ref{lma:rate} and Theorem \ref{theorem:BE_bound_QV}, we only need to prove
$$\lim_{m\to\infty}m^{2H-1}\sum_{j\in I_{n}(i)}\left| G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2=n^{-1}$$
which follows from assumption (A3).
Consider next the term $A_t^{(m)}$. We have
\begin{eqnarray}
|A_{t}^{(m)}|&\leqslant&m^{2H-1}\sum_{j=1}^{[mt]}\left|\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}\right|^{2}-\left|u_{\frac{j-1}{m}}(G^{H}_{\frac{j}{m}}- G^{H}_{\frac{j-1}{m}})\right|^2\right|.
\notag
\end{eqnarray}
We will use the following inequality, valid for any $x,y\in\mathbb{R}$,
\begin{eqnarray} \label{eq:inequality}
\left||x|^2-|y|^2\right| &\leqslant& 2\left[|x-y|^2+|y||x-y|\right].
\end{eqnarray}
This implies
\begin{eqnarray}
|A_{t}^{(m)}|
&\leqslant&2m^{-1+2H}\sum_{j=1}^{[mt]}\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}-u_{\frac{j-1}{m}}(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}})\right|^2 \notag \\
&&+2m^{-1+2H}\sum_{j=1}^{[mt]}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}- G^{H}_{\frac{j-1}{m}}\right)\right|\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}-u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right| \notag \\
&=:&E_{(m)}(t)+R_{(m)}(t), \notag
\end{eqnarray}
where
\begin{eqnarray}
E_{(m)}(t)&=&2m^{-1+2H}\sum_{j=1}^{[mt]}\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}-u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}- G^{H}_{\frac{j-1}{m}}\right)\right|^2, \notag \\
R_{(m)}(t)&=&2m^{-1+2H}\sum_{j=1}^{[mt]}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}- G^{H}_{\frac{j-1}{m}}\right)\right|\left|\int_{(j-1)/m}^{j/m}u_{s}dG^{H}_{s}-u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right|. \notag
\end{eqnarray}
For the term $E_{(m)}(t)$ we observe, by applying the Young inequality \eqref{Young inequality}, that
\begin{eqnarray}
|E_{(m)}(t)| &\leqslant&c_{H,\beta,\varepsilon} m^{2H-1}
\sum_{j=1}^{[mT]}\left|var_{\frac{1}{\beta}}(u;\mathcal{I}_{m}(j))var_{1/(H-\varepsilon)}(G^{H};\mathcal{I}_{m}(j))\right|^2,
\notag
\end{eqnarray}
where $0<\varepsilon<H$, the constant $c_{H,\beta,\varepsilon}$ comes from inequality \eqref{Young inequality} and depends only on $H,\beta$ and $\varepsilon$, and $\mathcal{I}_{m}(j)=\left(\frac{j-1}{m},\frac{j}{m}\right]$.
By \eqref{eq:holder_var} we have
\begin{eqnarray}
var_{\frac{1}{\beta}}(u,\mathcal{I}_{m}(j)) &\leqslant&
m^{-\beta}\|u\|_{\beta} \notag
\end{eqnarray}
and
\begin{eqnarray}
var_{1/(H-\varepsilon)}(G^{H},\mathcal{I}_{m}(j)) &\leqslant&
m^{-(H-\varepsilon)}\|G^{H}\|_{H-\varepsilon}. \notag
\end{eqnarray}
Thus
\begin{eqnarray}
\|E_{(m)}\|_{\infty}
&\leqslant&c_{H,\beta,\varepsilon}m^{2H-1-2\beta}
\|u\|^2_{\beta} \sum_{j=1}^{[mT]}\left|var_{1/(H-\varepsilon)}(G^{H},\mathcal{I}_{m}(j))\right|^2 \notag \\
&\leqslant&Tc_{H,\beta,\varepsilon}m^{2H-1-2\beta-2(H-\varepsilon)+1}
\|u\|^2_{\beta}\|G^{H}\|^2_{(H-\varepsilon)} \notag \\
&\leqslant&Tc_{H,\beta,\varepsilon}m^{2(\varepsilon-\beta)}
\|u\|^2_{\beta}\|G^{H}\|^2_{(H-\varepsilon)}. \notag
\end{eqnarray}
As we can choose $\varepsilon< \beta$, this implies that $\lim_{m\rightarrow\infty}\|E_{(m)}\|_{\infty}=0$ almost surely. Similarly, we can apply
\eqref{Young inequality} to the term $R_{(m)}(t)$ to get
\begin{eqnarray}
|R_{(m)}(t)|&\leqslant&c_{H,\beta,\varepsilon}m^{-1+2H}\sum_{j=1}^{[mT]}\left|u_{\frac{j-1}{m}}(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}})\right|\left|var_{\frac{1}{\beta}}(u,\mathcal{I}_{m}(j))var_{1/(H-\varepsilon)}(G^{H},\mathcal{I}_{m}(j))\right| \notag \\
&\leqslant&2c_{H,\beta,\varepsilon}m^{-1+2H-\beta-(H-\varepsilon)}\parallel u\parallel_{\beta}\parallel G^{H}\parallel_{H-\varepsilon}\sum_{j=1}^{[mT]}\left|u_{\frac{j-1}{m}}(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}})\right| \notag \\
&\leqslant&2c_{H,\beta,\varepsilon}m^{-1+H-\beta+\varepsilon}\parallel u\parallel_{\beta}\parallel G^{H}\parallel_{H-\varepsilon}\parallel u\parallel_{\infty}\sum_{j=1}^{[mT]}\left|var_{1/(H-\varepsilon)}(G^{H},\mathcal{I}_{m}(j))\right| \notag \\
&\leqslant&Tc_{H,\beta,\varepsilon}\parallel u\parallel_{\beta}\parallel G^{H}\parallel^2_{H-\varepsilon}\parallel u\parallel_{\infty}m^{-\beta+2\varepsilon}. \notag
\end{eqnarray}
Hence, for $\varepsilon < \frac{\beta}{2}$, we get $\parallel R_{(m)}\parallel_{\infty} \rightarrow 0$ almost surely, and consequently,
$\parallel A^{(m)}\parallel_{\infty} \rightarrow 0$ almost surely as $m\rightarrow\infty$.
It remains to study the terms $D^{(n)}_t$ and $B^{(n,m)}_t$. For the term $D^{(n)}_t$ we first observe that
for any $s\in \left[\frac{i-1}{n},\frac{i}{n}\right]$, we have
\begin{equation*}
\left||u_{\frac{i-1}{n}}|^2-|u_s|^2\right| \leqslant
2\|u\|_{\infty}\|u\|_{\beta} n^{-\beta}.
\end{equation*}
Thus we can estimate
\begin{eqnarray}
|D_{t}^{(n)}|&=&\left|n^{-1}\sum_{i=1}^{[nt]}|u_{\frac{i-1}{n}}|^2-\int_{0}^{t}|u_s|^2ds\right| \notag \\
&=&\left|\sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}(|u_{\frac{i-1}{n}}|^2-|u_s|^2)ds - \int_{[nt]/n}^t|u_s|^2ds \right| \notag \\
&\leqslant&\sum_{i=1}^{[nt]}\int_{(i-1)/n}^{i/n}\left||u_{\frac{i-1}{n}}|^2-|u_{s}|^2\right|ds + \int_{[nt]/n}^t|u_s|^2ds \notag \\
&\leqslant&2T\|u\|_{\infty}\|u\|_{\beta} n^{-\beta} +
\|u\|_{\infty}|t-[nt]/n| \notag \\
&\leqslant&2T\|u\|_{\infty}\|u\|_{\beta} n^{-\beta} +
\|u\|_{\infty}n^{-1}. \notag
\end{eqnarray}
This implies that $\parallel D^{(n)}\parallel_{\infty}\rightarrow 0$ almost surely as $n\rightarrow \infty$ as well. It remains to study the term
$B_{t}^{(n,m)}$. First note that, by the definition of $I_n(i)$, we have
\begin{equation*}
\sum_{j=1}^{[mt]}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right|^2 = \sum_{i=1}^{[nt]}\sum_{j\in I_n(i)}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right|^2.
\end{equation*}
Together with the fact that
\begin{equation*}
|u_{\frac{j-1}{m}} - u_{\frac{i-1}{n}}|^2 \leqslant 4\parallel
u\parallel_{\infty}\parallel u\parallel_\beta n^{-\beta}
\end{equation*}
as $\frac{j}{m} \in \left(\frac{i-1}{n},\frac{i}{n}\right]$, this gives us
\begin{eqnarray}
|B_{t}^{(n,m)}|&=&\left|m^{2H-1}\left(\sum_{j=1}^{[mt]}\left|u_{\frac{j-1}{m}}\left(G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right)\right|^2-\sum_{i=1}^{[nt]}\left|u_{\frac{i-1}{n}}\right|^2\sum_{j\in I_{n}(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2\right)\right| \notag \\
&\leqslant &m^{2H-1}\sum_{i=1}^{[nt]}\sum_{j\in I_n(i)}\left|u_{\frac{j-1}{m}} - u_{\frac{i-1}{n}}\right|^2\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2 \notag \\
&\leqslant & 4m^{2H-1}\parallel u\parallel_{\infty}\parallel u\parallel_\beta n^{-\beta}\sum_{i=1}^{[nt]}\sum_{j\in I_n(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2. \notag
\end{eqnarray}
Here
\begin{equation*}
m^{2H-1}\sum_{j\in I_n(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2 \rightarrow n^{-1}
\end{equation*}
almost surely, and thus
\begin{equation*}
m^{2H-1}\sum_{i=1}^{[nt]}\sum_{j\in I_n(i)}\left|G^{H}_{\frac{j}{m}}-G^{H}_{\frac{j-1}{m}}\right|^2 \rightarrow t
\end{equation*}
almost surely. This implies that $\parallel B^{(n,m)}\parallel_{\infty}\rightarrow 0$ which completes the proof.
\end{proof}
For each $t\geq 0$ we denote by $\mathcal{F}_{t}^H$ the $\sigma$-field generated by the random variables $\{G_{s}^{H}, 0 \leq s\leq t\}$ and the null sets.
\begin{theorem}\label{theorem:distribution asymptotic}
Under the assumptions (A1)--(A4), suppose further that $u=\{u_{t},t\in[0,T]\}$ is a H\"older continuous stochastic
process of order $\beta$ with $\beta >\max\left(1-H,\frac12\right)$, measurable with respect to $\mathcal{F}_T^{H}$. Set
\begin{eqnarray}
Z_{t} &=& \int_{0}^{t}u_{s}dG^{H}_{s}.
\end{eqnarray}
Then, as $n$ tends to infinity,
\begin{eqnarray*}
n^{2H-1/2}V_{n}(Z)_{t}-\sqrt{n}\int_{0}^{t}|u_{s}|^2ds &\overset{\mathcal{L}}{\rightarrow}& v_1\int_{0}^{t}|u_{s}|^2dW_{s}
\end{eqnarray*}
$\mathcal{F}_T^H$-stably in the space $\mathcal{D}([0,T])$, where $W=\{W_{t},t\in[0,T]\}$ is a Brownian motion independent
of $\mathcal{F}_{T}^H $, and $v_1$ is given in Lemma \ref{lem-fdd}.
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{theorem:Consistency}, we make a decomposition
\begin{eqnarray}
n^{2H-1/2}V_{n}(Z)_{t}-\sqrt{n}\int_{0}^{t}|u_{s}|^2 ds
&=:& A_{t}^{(n)}+B_{t}^{(n)}+C_{t}^{(n)}, \notag
\end{eqnarray}
where
\begin{eqnarray*}
A_{t}^{(n)}
&=&n^{2H-1/2}\sum_{i=1}^{[nt]}\left(\left|\int_{(i-1)/n}^{i/n}u_{s}dG^{H}_{s}\right|^{2}-\left|u_{\frac{i}{n}}(G^{H}_{\frac{i}{n}}-G^{H}_{\frac{i-1}{n}})\right|^2\right), \notag \\
B_{t}^{(n)}&=&n^{2H-1/2}\sum_{i=1}^{[nt]}\left|u_{\frac{i}{n}}\left(G^{H}_{\frac{i}{n}}-G^{H}_{\frac{i-1}{n}}\right)\right|^2-\frac{1}{\sqrt{n}}\sum_{i=1}^{[nt]}\left|u_{\frac{i}{n}}\right|^2, \notag \\
C_{t}^{(n)}&=&\frac{1}{\sqrt{n}}\sum_{i=1}^{[nt]}\left|u_{\frac{i}{n}}\right|^2-\sqrt{n}\int_{0}^{t}|u_{s}|^2ds. \notag
\end{eqnarray*}
Using $\beta > \frac12$ and treating the terms $A_t^{(n)}$ and $C_t^{(n)}$ as the terms $A_t^{(m)}$ and $D_t^{(n)}$ in the proof of
Theorem \ref{theorem:Consistency}, we obtain
\begin{equation*}
\parallel A^{(n)}\parallel_\infty +\parallel C^{(n)}\parallel_\infty
\rightarrow 0
\end{equation*}
almost surely. Consider next the term $B_{t}^{(n)}$. We set
\begin{eqnarray*}
\xi_{i,n} &=& n^{2H-1/2}\left|G^{H}_{\frac{i}{n}} -G^{H}_{\frac{(i-1)}{n}}\right|^2-\frac{1}{\sqrt{n}} \notag
\end{eqnarray*}
so that
\begin{equation*}
B_{t}^{(n)}= \sum_{i=1}^{[nt]}|u_{i/n}|^{2}\xi_{i,n}.
\end{equation*}
In order to complete the proof, we need to verify hypotheses (H1) and (H2) of Theorem \ref{theorem: Hypotheses}.
For the first hypothesis (H1), we have
\begin{eqnarray}
g_{m}(t)&=&\sum_{i=1}^{[mt]}\left(m^{2H-1/2}\left|G^{H}_{i/m}-G^{H}_{(i-1)/m}\right|^{2}-\frac{1}{\sqrt{m}}\right).\nonumber
\end{eqnarray}
From Lemma \ref{lem-fdd}, we obtain the convergence of the finite dimensional distributions. Thus, by Theorem 3 in Corcuera, Nualart and Woerner \cite{CNW}, we have, for $0<H<3/4$, the convergence in law
\begin{eqnarray}
\left(G^{H}_{t},\sqrt{m}\sum_{i=1}^{[mt]}\left(m^{2H-1}\left|G^{H}_{i/m}-G^{H}_{(i-1)/m}\right|^{2}-\frac{1}{m}\right)\right)&\overset{\mathcal{L}}{\rightarrow}&\left(G^{H}_{t},v_1 W_{t}\right),\nonumber
\end{eqnarray}
in the space $\mathcal{D}([0,T])^{2}$ equipped with the Skorohod topology, where $W =\{W_{t},t\in[0 ,T]\}$ is a Brownian motion independent of the Gaussian process $G^{H}$. It remains to prove the tightness condition, which is the second hypothesis of Theorem \ref{theorem: Hypotheses}.
For the hypothesis (H2), using Lemma 4.3 and Proposition 4.2 in
\cite{Taqqu}, with $\mathbb{E}[(G^H_j-G_{j-1}^H)(G^H_{j+u}-G_{j+u-1}^H)]=\rho_H(u)$ replaced by $\rho_H(u)+\theta(j,j+u)$, we have, for any $1\leqslant j <k\leqslant [nT]$,
\begin{eqnarray}
\mathrm{I\kern-0.16em E}\left(\left|\sum_{i=j+1}^{k}\xi_{i,n}\right|^{4}\right) &=&\frac{1}{n^2} \mathrm{I\kern-0.16em E}\left(\left|\sum_{i=j+1}^{k}\left(n^{2H}\left|G^{H}_{\frac{i}{n}} -G^{H}_{\frac{(i-1)}{n}}\right|^2-1\right)\right|^4\right) \notag \\
&=&\frac{1}{n^2} \mathrm{I\kern-0.16em E}\left(\left|\sum_{i=j+1}^{k}H_2(G^{H}_{i}-G^{H}_{i-1})\right|^4\right) \notag \\
&=&\frac{1}{n^2} \left(\sum_{i=j+1}^{k}\sum_{l=j+1}^{k}\left(\mathbb{E}H_2(G^{H}_{i}-G^{H}_{i-1})H_2(G^{H}_{l}-G^{H}_{l-1})\right)^2\right)^2 \notag \\
&\leq&\frac{1}{n^2} \left(\sum_{i=j+1}^{k}\sum_{l=j+1}^{k}\left(c_0\rho_H(|i-l|)+c_1\theta(i,l)\right)^2\right)^2 \notag \\
&\leq&\frac{c(k-j)^2}{n^2}\left(\sum_{i=0}^{\infty}\rho_{H}^2(i)\right)^2+\frac{c}{n^2}\left(\sum_{i=j+1}^k\sum_{l}\theta(i,l)^2\right)^2 \notag \\
&\leqslant&c\left(\frac{k-j}{n}\right)^2, \notag
\end{eqnarray}
where, in the last inequality, we use the assumption (A4) that $|\theta(i,l)|^2=o(1/l)$ as $l\to\infty$, which makes the corresponding sum convergent.
This concludes the proof of the theorem.
\end{proof}
\section{Application to the estimation of the integrated volatility}
\label{sec:vol} In this section we apply our main results to the estimation of the integrated volatility
$\int_{0}^{t}|\sigma_s|^{2}ds$. We consider a generalized Gaussian Ornstein-Uhlenbeck process defined as the solution
to the stochastic differential equation
\begin{eqnarray} \label{eq:SDE}
dX_{t} &=& -\theta X_{t}dt+\sigma_{t}dG^{H}_{t} ,
\end{eqnarray}
with some initial condition $X_0 \in \mathbb{R}$. We define the estimator $QV_{n}(X)_{t}$ for the integrated volatility
$\int_{0}^{t}|\sigma_s|^{2}ds$ as
\begin{eqnarray} \label{eq:estimator}
QV_{n}(X)_{t} &=&n^{2H-1}V_{n}(X)_{t}, \ \ t \in [0,T].
\end{eqnarray}
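As an illustration, the estimator \eqref{eq:estimator} is straightforward to evaluate on discrete data. The following Python sketch is our own illustration, not part of the theory above; the array \texttt{X} holds hypothetical equidistant observations $X_{i/n}$ on $[0,1]$, and $H$ is assumed known. It computes $QV_{n}(X)_{t}$ along the observation grid and checks it against the Brownian case $H=1/2$, where the estimator reduces to the usual realized variance:
\begin{verbatim}
import numpy as np

def qv_estimator(X, H):
    """Sketch of QV_n(X)_t = n^(2H-1) * V_n(X)_t on the grid t = i/n.

    X : hypothetical equidistant observations X_{i/n}, i = 0, ..., n
    H : self-similarity index of the driving Gaussian noise G^H
    """
    n = len(X) - 1
    V = np.cumsum(np.diff(X) ** 2)   # V_n(X)_t: cumulative squared increments
    return n ** (2.0 * H - 1.0) * V

# Sanity check with H = 1/2 and sigma = 1 (standard Brownian motion):
# QV_n(X)_1 should be close to the integrated volatility, which is 1.
rng = np.random.default_rng(0)
n = 100_000
X = np.concatenate(([0.0], np.cumsum(rng.normal(scale=n ** -0.5, size=n))))
print(qv_estimator(X, H=0.5)[-1])   # approximately 1
\end{verbatim}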
We begin with two simple propositions which allow us to introduce a drift into the process defined by \eqref{eq:process_Z}. They can be obtained directly from Bajja, Es-Sebaiy and Viitasaari \cite{BEV}, so we omit the detailed proofs here.
\begin{proposition}\label{proposition:consistency}
Suppose that the assumptions of Theorem \ref{theorem:Consistency} prevail, and let $Y=\{Y_{t},t\in [0,T]\}$ be
a stochastic process such that, as $n$ tends to infinity,
\begin{eqnarray}
n^{2H-1}V_{n}(Y)_{t} &\rightarrow& 0 \notag
\end{eqnarray}
almost surely and uniformly in $t$. Then
\begin{eqnarray}
n^{2H-1}V_{n}(Y+Z)_{t}&\underset{n\rightarrow\infty}{\longrightarrow}&\int_{0}^{t}|u_s|^2ds \notag
\end{eqnarray}
almost surely and uniformly in $t$.
\end{proposition}
Similarly, we obtain the following result on the weak convergence.
\begin{proposition}
\label{proposition:clt} Suppose that the assumptions of Theorem \ref{theorem:distribution asymptotic} prevail,
and let $Y=\{Y_{t},t\in[0,T]\}$ be a stochastic process such that, as $n$ tends to infinity,
\begin{eqnarray}
n^{2H-1}V_{n}(Y)_{t} &\rightarrow& 0 \notag
\end{eqnarray}
in probability, uniformly in $t$. Then
\begin{eqnarray}
n^{2H-\frac{1}{2}}V_{n}(Y+Z)_{t}-\sqrt{n}\int_{0}^{t}|u_s|^2ds&\underset{n\rightarrow\infty}{\overset{Law}{\longrightarrow}}&v_1\int_{0}^{t}|u_s|^2dW_{s}
\notag
\end{eqnarray}
$\mathcal{F}_T^H$-stably in $\mathcal{D}([0,T])$, where $W=\{W_{t},t\in [0,T]\}$ is a Brownian motion independent of $\mathcal{F}_T^H$.
\end{proposition}
Consider now the estimator \eqref{eq:estimator} for the integrated volatility. With the help of Proposition
\ref{proposition:consistency} and Proposition \ref{proposition:clt} we obtain the following results.
\begin{theorem}
Suppose that $\sigma_s$ is a H\"older continuous function of order $\beta > 1-H$. Then
\begin{equation*}
QV_{n}(X)_{t} \longrightarrow \int_{0}^{t}|\sigma_s|^2ds
\end{equation*}
almost surely and uniformly in $t$.
\end{theorem}
\begin{proof}
Recall that $X$ satisfies \eqref{eq:SDE}. Thus we have
\begin{equation*}
X_{t}= X_{0}+Y_{t}+\int_{0}^{t}\sigma_{s}dG^{H}_{s},
\end{equation*}
where $Y_{t}=-\theta\int_{0}^{t}X_{s}ds.$ It is straightforward to check that the solution $X$ is bounded on every compact interval.
Consequently, the process $Y_t$ is differentiable with bounded derivative, and thus
\begin{equation*}
V_n(Y)_{t} \leqslant T\theta^2 \parallel X\parallel_\infty^2 n^{-1}.
\end{equation*}
Now the result follows from Proposition \ref{proposition:consistency} and Theorem \ref{theorem:Consistency}.
\end{proof}
\begin{theorem}
Suppose that $\sigma = \{\sigma_{t},t\in[0,T]\}$ is H\"older continuous of order $\beta > \max\left(1-H,\frac12\right)$, and
measurable with respect to $\mathcal{F}_T^{H}$. Suppose further that $0<H<3/4$. Then
\begin{eqnarray}
\sqrt{n}\left(QV_{n}(X)_{t}-\int_{0}^{t}|\sigma_s|^2ds\right)&\underset{n\rightarrow\infty}{\overset{Law}{\longrightarrow}}&v_1\int_{0}^{t}|\sigma_s|^2dW_{s}, \notag
\end{eqnarray}
$\mathcal{F}_T^H$-stably in the space $\mathcal{D}([0,T])$, where $W=\{W_{t},t\in[0,T]\}$ is a Brownian motion independent
of $\mathcal{F}_{T}^H $.
\end{theorem}
\begin{proof}
Since $0< H< 3/4$, we have, for $Y_{t}=-\theta\int_{0}^{t}X_{s}ds$, that
\begin{equation*}
n^{2H-\frac12}V_n(Y)_{t} \leqslant T\theta^2 \parallel
X\parallel_\infty^2 n^{2H-\frac32} \rightarrow 0.
\end{equation*}
Thus the result follows directly from Proposition \ref{proposition:clt} and Theorem \ref{theorem:distribution asymptotic}.
\end{proof}
\section{Introduction}
\label{sec:introduction}
Since 1998, despite the proposition of different conceptual models, we have not been able to construct a fully satisfactory theoretical support for the observations of late time cosmic acceleration. The stress energy tensor has been redesigned to interpret the accelerated expansion, and the candidates for such a repulsive energy/hypothetical matter are familiar as dark energy (DE)~\cite{1:garnavich:1998}. The $\Lambda $CDM (cosmological constant with cold dark matter) model is successful in describing the formation and the evolution of large scale structures in the universe~\cite{1:new:Spergel:2003,6:new:Smith:2007}. However, it faces problems like fine tuning~\cite{9:new:RevModPhys} and cosmic coincidence~\cite{10:new:Astashenok:2012}. We now discuss the anomalies faced by the $\Lambda $CDM model in more detail. First, there is the lithium abundance challenge. It is argued~\cite{1982:Spite} that no significant destruction of $Li^7$ took place during the protostellar phase of solar mass stars, so the lithium abundance of old halo stars is assumed to be representative of the abundance in the primordial matter; $\Lambda $CDM does not reproduce this. Further anomalies are the CMB power asymmetry, missing satellites, very massive galaxies at high redshift, etc. The $\Lambda $CDM model is also found to encounter problems while describing structures at small scales~\cite{31:moore:1994,32:Moore:1999,33:Ostriker:2003,34:Boylan:Kolchin:2011,35:Boylan:Kolchin:2012,36:Oh:2011}. Among these, the main problems are the cusp/core problem~\cite{31:moore:1994,37:Del:Popolo:2017,37:new:Flores:1994}, concerning the flat density profiles of dwarf galaxies, irregulars and low surface brightness galaxies, and the missing satellite problem, i.e., the discrepancy between the number of subhalos predicted in N-body simulations~\cite{32:Moore:1999,41:new:Klypin:1999} and those actually observed. This is further complicated by the ``too big to fail'' problem, arising from the $\Lambda $CDM prediction of satellites that are too massive and dense compared to those observed~\cite{34:Boylan:Kolchin:2011,35:Boylan:Kolchin:2012}. Similarly, we face the angular momentum catastrophe~\cite{42:new:van:den:Bosch:2001,43:new:Cardone:2009}, and the alignment of the satellite galaxies of the MW and M31 on thin planes is difficult to explain in simulations of the $\Lambda $CDM paradigm~\cite{44:new:Pawlowski:2014}. There is also the problem of obtaining the slope and scatter of the baryonic Tully-Fisher relation ($M_b \propto V_c^4$)~\cite{45:new:McGaugh:2011}. Other issues are discussed in references~\cite{46:new:Kroupa:2005,47:new:Kroupa:2010,48:Kroupa:2012,49:new:KROUPA:2012,50:new:Kroupa:2015}. Lastly, we recall the cosmic coincidence problem~\cite{12:new:Sivanandam:2013}, which cannot be justified from the $\Lambda $CDM model.
Note that, to support cosmic acceleration, the matter-energy content of the cosmos must violate the strong energy condition, $\sum_k\left(\rho_k+3p_k\right)<0$, where $\rho_k$ and $p_k$ denote the energy density and pressure of the different components filling the cosmos. The standard approach is to consider DE as a fluid exerting negative pressure and possessing a barotropic equation of state $p=\omega \rho$, with $\omega$ constant and equal to $-1$. But $p=-\rho$ implies no dynamics.
Recent observational reconstructions of the DE EoS interpret $\omega$ as dynamical and as crossing the phantom divide $\omega=-1$~\cite{2:6a:new:Zhao:2017,3:6a:Wang:2018}; $\omega$ is then preferably treated as a function of time, scale factor or redshift. But this raises another problem: dynamical DE, $\omega=\omega(a)$, combined with the assumption of DE as a perfect fluid, leads to thermodynamic difficulties (e.g., positivity of entropy, temperature and chemical potential requires $\omega \geq -1$, which contradicts the existence of phantom DE~\cite{4:6a:Duarte:2019,5:6a:Silva:2013}).
To bypass such thermodynamic conflicts, we suppose DE to be a fluid possessing bulk viscosity, i.e., we break the perfect fluid hypothesis. A fluid with bulk viscosity supports dissipative processes. This in turn allows the violation of the dominant energy condition, $p+\rho<0$, even without the DE necessarily becoming phantom~\cite{6:6a:BARROW1988743,7:6a:CRUZ2017103}. With this model, the late time cosmic acceleration can be explained as the effect of the negative pressure due to bulk viscosity, $\zeta<0$, where $p_{eff}=p+\zeta$~\cite{8:6a:Zimdahl:2001,9:6a:Balakin:2003}. Eckart theory~\cite{10:6a:PhysRev} has been used in most cases to build such models: the crossing of the phantom line~\cite{11:6a:Brevik:2005}, the magnitude of viscosity~\cite{12:6a:Brevik:2015}, the consideration of the Big Rip singularity~\cite{13:6a:Brevik:2013}, unified dark fluid cosmologies~\cite{14:6a:Avelino:2010,15:6a:Velten:2011}, etc. We must refer to two pioneering works: on dissipative processes in cosmology~\cite{19:6a:Maartens:1995}, and on relativistic fluid dynamics and dissipative relativistic fluids along with their applications to cosmology and astrophysics and bulk viscous perturbations~\cite{20:6a:Maartens}.
Cosmological perturbation theory with a non-ideal fluid in the presence of shear and bulk viscosities is constructed with the energy momentum tensor for a non-ideal fluid~\cite{38:6a:Weinberg} given as
\begin{equation}
T^{\mu\nu}=\rho u^{\mu} u^{\nu} +\left(p+p_b\right)\Delta^{\mu\nu}+\Pi^{\mu\nu}~~~~,
\end{equation}
where $p_b(=-\zeta\nabla_{\mu}u^{\mu})$ is bulk viscous pressure with bulk viscosity coefficient $\zeta$. $\Pi^{\mu\nu}$ represents the shear viscosity tensor, having the form
\begin{equation}
\Pi^{\mu\nu}=-2\eta \sigma^{\mu\nu}=-2\eta\left[ \frac{1}{2}\left(\Delta^{\mu\alpha} \nabla_{\alpha}u^{\nu}+\Delta^{\nu\alpha} \nabla_{\alpha}u^{\mu}\right)-\frac{1}{3}\Delta^{\mu\nu}\left(\nabla_{\alpha}u^{\alpha}\right)\right]~~~~,
\end{equation}
with $\eta$ being the shear viscosity. $\Delta^{\mu\nu}=u^{\mu}u^{\nu}+g^{\mu\nu}$ is the projection operator, projecting onto the subspace orthogonal to the fluid velocity.
Nowadays, there are indirect ways to anticipate the viscosity of the dark sector. But, before speaking about those indirect ways, we should focus on references like~\cite{2:Turner:1984} \& \cite{3:mathews:2008}, where dark matter (DM)-DE interactive models are proposed to support cosmic acceleration and the observed viscosity of DE. These references have considered the phenomenon of ``delayed decay of cold DM'' (from an initial matter dominated flat cosmology with $\Lambda=0$) into light undetectable relativistic species. This generates bulk viscosity and an $\omega=-1$ epoch\footnote{actually measured as $-1.028\pm 0.032$ by the Planck Collaboration (2018) \cite{lewis:2020}. The true dimension of $\Lambda$, however, is equivalent to $length^{-2}$. According to the reference \cite{lewis:2020}, the dimensionless density parameter is $\Omega_{\Lambda}=0.6889\pm 0.0056$, the value of Hubble's parameter at present time is $67.66\pm0.42~Kms^{-1}/Mpc=2.1927664\pm 0.0136\times 10^{-18} s^{-1}$, and the value of $\Lambda$ becomes $\Lambda=3\left(\frac{H_0}{c}\right)^2\Omega_{\Lambda}=1.1056\times10^{-52}m^{-2}=2.888\times10^{-122}l_p^{-2}~~,$ where $l_p$ is the Planck length.}
near the neighborhood of the present time. Deceleration in this model is found to be faster due to the presence of the matter content. The model is also found to be consistent with the surveys of the supernova magnitude-redshift relation~\cite{4:reiss:1998}\cite{5:perlmuttr:1998} and with the ages of superannuated stars and globular clusters.
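The arithmetic in the footnote above is easy to verify; the following snippet (plain unit bookkeeping with the Planck 2018 values quoted there) reproduces the quoted value of $\Lambda$:
\begin{verbatim}
c = 2.99792458e8                  # speed of light, m/s
H0 = 67.66 * 1.0e3 / 3.0857e22    # 67.66 km/s/Mpc converted to 1/s
Omega_L = 0.6889                  # dark-energy density parameter
l_p = 1.616255e-35                # Planck length, m

Lambda = 3.0 * (H0 / c) ** 2 * Omega_L
print(Lambda)             # ~1.1e-52 m^-2
print(Lambda * l_p ** 2)  # ~2.9e-122 in units of l_p^-2
\end{verbatim}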
There exist different DE models. Quintessence and modified Chaplygin gas (MCG) are two important candidates among such models. The EoS of quintessence is given as
\begin{equation}
p_q=\omega_q \rho_q.
\end{equation}
The value of $\omega_q$ regulates the nature of the chronological evolution of the universe: we obtain the radiation, pressureless dust, quintessence and phantom barriers for $\omega_q=\frac{1}{3}, 0, -\frac{1}{3}$ and $-1$ respectively. On the other hand, MCG has its EoS as
\begin{equation}
p=\alpha \rho - \frac{\beta}{\rho^n} .
\end{equation}
MCG can mimic different chronological stages of cosmic evolution depending mainly on the value of $n$. Reference~\cite{71:new:Velasquez:Toribio:2011} uses type Ia supernovae and BAO data sets to predict the best fit parameter values as $\alpha=0.061\pm 0.079$ and $n=0.053\pm 0.089$ (best fit for Constitution+BAO+CMB) and $\alpha=0.110\pm0.097$ and $n=0.089\pm 0.099$ (best fit for Union2+BAO+CMB). Another article~\cite{72:new:Lu:2010} uses Union2, SNIa, OHD, CBF, BAO and CMB data to constrain the modified Chaplygin gas model parameters with $1\sigma$ and $2\sigma$ confidences as $\alpha=0.00189^{+0.00583+0.00660}_{-0.00756-0.00915}$ and $n=0.1079^{+0.3397+0.4678}_{-0.2539-0.2911}$ .
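For later reference, both equations of state are trivial to encode. The sketch below uses the Union2+BAO+CMB best-fit values quoted above for the MCG parameters; $\beta$ is left as a free input, since it is not among the constraints quoted here:
\begin{verbatim}
def p_quintessence(rho, w_q):
    """Quintessence EoS: p = w_q * rho."""
    return w_q * rho

def p_mcg(rho, alpha=0.110, n=0.089, beta=1.0):
    """Modified Chaplygin gas EoS: p = alpha*rho - beta/rho^n.

    alpha, n : Union2+BAO+CMB best-fit values quoted above
    beta     : free normalization (assumed, not fitted here)
    """
    return alpha * rho - beta / rho ** n
\end{verbatim}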
Now we will focus on these ``indirect ways'' to realize and measure the viscous properties of DE. To do so, we take the help of gravitational wave (GW) signals. It is quite possible to constrain the constituents present in our universe by observing the nature of the propagation of GWs through them. Till date, GW signals from several black hole (BH) binary mergers like GW150914, LVT151012, GW151226, GW170104, GW170608, GW170814, etc., and GW signals from the neutron star binary GW170817 have been announced by the LIGO and Virgo collaborations~\cite{6:abbott:2016,7:abbott:2016,8:abbott:2016,9:abbott:2017,10:abbott:2017,11:abbott:2017,12:abbott:2017}. Simultaneously, electromagnetic (EM) radiation coming out of the same sources has been detected. Tallying these, we are able to measure the arrival delay between the EM photons and GWs through cosmological distances~\cite{13:will:1998,14:nishizawa:2016,15:li:2016}. References like~\cite{16:will:2014,17:khaya:2016} \& \cite{18:Wu:2016} predict that GWs should propagate freely, without any absorption or dissipation, if a perfect fluid medium embedded in a Friedmann-Robertson-Walker universe is considered. Nevertheless, this scenario changes as soon as the fluid content is chosen to be of non-ideal type~\cite{19:hawking:1966}: GW gets dissipated with a damping rate $\beta_D\equiv 16\pi G_N\eta$~\cite{20:esposito:1971,21:madore:1973,22:prasanta:1999} when an amount of shear viscosity $\eta$ is incorporated into the fluid's energy momentum tensor, $G_N$ being the Newtonian gravitational constant. So, changes in $\beta_D$, which can be noticed in the GW attenuation, indicate changes in the value of $\eta$ over time or over cosmic distances. This is nothing but the evolution of the viscosity, especially the shear viscosity, of DE over time. The authors of the reference~\cite{23:lu:2018} present statistics of different GW events, the median values of the source luminosity distances with $90\%$ credible intervals in $Mpc$ units, and the upper limit on the damping rate $\beta_D$ at $95\%$ confidence level in units of $10^{-3}~Mpc^{-1}$, given in table~\ref{table:lumino}.
\begin{table}
\caption{Luminosity Distances of different GW events}
\centering
\begin{tabular}{ |c|c|c| }
\hline
GW Event & Luminosity Distance & $\beta$ \\
& ($MPc$) & ($10^{-3} MPc^{-1}$) \\ \hline
LVT 151012 & $1000^{+500}_{-500}$ & 1.29 \\ \hline
GW 170104 & $880^{+450}_{-390}$ & 1.40 \\ \hline
GW 170814 & $540^{+130}_{-210}$ & 1.25 \\ \hline
GW 151226 & $440^{+180}_{-190}$ & 2.39 \\ \hline
GW 150914 & $410^{+160}_{-180}$ & 2.50 \\ \hline
GW 170608 & $340^{+140}_{-140}$ & 3.05 \\ \hline
GW 170817 & $40^{+8}_{-14}$ & 14.08 \\
\hline
\end{tabular}
\label{table:lumino}
\end{table}
It is clear from these data that as we look through longer distances, i.e., further into the past, the upper limit on the damping rate, and hence on the shear viscosity, reduces. We may conclude that DE, as time grows, exerts more and more shear viscosity.
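To translate the upper limits of table~\ref{table:lumino} into physical units, one restores the factors of $c$ in the damping rate, $\beta_D = 16\pi G_N\eta/c^3$. The sketch below (our own unit bookkeeping, not taken from~\cite{23:lu:2018}) performs this conversion:
\begin{verbatim}
import math

G_N = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
c = 2.99792458e10    # speed of light, cm/s
Mpc = 3.0857e24      # megaparsec, cm

def eta_from_beta(beta_in_1e3_per_Mpc):
    """Shear viscosity (g cm^-1 s^-1) from a damping rate quoted
    in units of 1e-3 Mpc^-1, via beta_D = 16 pi G_N eta / c^3."""
    beta_cgs = beta_in_1e3_per_Mpc * 1.0e-3 / Mpc    # cm^-1
    return beta_cgs * c ** 3 / (16.0 * math.pi * G_N)

print(eta_from_beta(2.50))   # GW150914 limit: eta of order 1e9-1e10 poise
\end{verbatim}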
Regarding the bulk viscosity of DE, especially for generalized Chaplygin gas type models, we can find several works. The reference~\cite{24:Li:2014} uses the then-available cosmic observational data from SNLS3, BAO, HST and Planck and constrains the value of the bulk viscosity coefficient as $$\zeta_0= {{0.0000138^{+0.00000614} _{-0.0000105}}^{+0.0000145}_ {-0.0000138}}^{+0.0000212}_{-0.0000138}~.$$ Moreover, shear and bulk viscosity are related to each other~\cite{25:herzfeld:2004}.
The viscous effects of a fluid are more realizable when it flows, particularly layer by layer. The most prominent examples are the close X-ray binaries, where an accretion disc is likely to be formed. The simplest picture of such a Roche lobe overflow incident can be imagined with some assumptions. Consider that the flow is axisymmetric, i.e., forming a cylindrical structure around a compact star, preferably a BH, where $r$, $\phi$ and $z$ are the coordinates. We will consider $\frac{\partial}{\partial \phi}\equiv 0$ and also a stationary disc, i.e., $\frac{\partial}{\partial t}\equiv 0$. We will further consider a thin disc, i.e., $\frac{h(r)}{r}\ll 1$, where $h(r)$ is the disc height. The viscous effect of the disc is also taken to be small, i.e., the radial inward velocity $v_r\ll$ the local Keplerian rotational speed/azimuthal velocity $v_{\phi}=\Omega r=\sqrt{\frac{G_NM}{r}}$, if the radial momentum is conserved. On the other hand, the conservation of angular momentum requires the effects of viscous forces to be taken into account. Keplerian balance always implies differential rotation. A transport of angular momentum in the direction perpendicular to the velocity should be observed due to the differences in velocities at different locations; if this were absent, the translational (shear) viscosity would turn down to zero. To simplify the nonzero viscosity, a simplistic description of the physics of the accretion disc can be obtained by the $\alpha_{ss}$ prescription \cite{26:shakura:1973}. Though the accretion driven/driving viscosity is of magnetic origin, it is popular to use an effective hydrodynamic description of the related disc, presented by the hydrodynamic stress tensor as~\cite{27:landau:1987}:
\begin{equation}
\tau_{r\phi}=\rho\nu r\frac{d\Omega}{dr}=\rho\nu\frac{d\Omega}{d\ln(r)}~~~~,
\end{equation}
where $\rho$ and $\nu$ are the density and the kinematic viscosity coefficient respectively. Denoting the total thermal pressure by $P$, the isothermal sound speed $\sqrt{\frac{P}{\rho}}$ by $c_s$, and introducing a regulating parameter $\alpha_{ss} (\le 1)$, Shakura and Sunyaev~\cite{26:shakura:1973} proposed the prescription
\begin{equation}\tau_{r\phi} = \alpha_{ss} P \implies \nu = \alpha_{ss} c^2_s\left[\frac{d\Omega}{d\ln(r)}\right]^{-1}.
\end{equation}
For the Keplerian angular velocity $\Omega = \Omega_k = \left(\frac{G_NM}{r^3}\right)^{1/2}$, we have,
\begin{equation}
\nu = \frac{2}{3} \alpha_{ss} c^2_s/\Omega_k.
\end{equation}
To maintain the vertical equilibrium, the gravitational force is counteracted by the force produced by the pressure gradient,
\begin{equation}
\frac{\partial p}{\partial z} = -\rho g_z = -\rho \frac{G_{N}M}{R^2} \times \frac{z}{R},
\end{equation}
and as $h(r) \ll r$, we obtain
\[
\frac{h}{r}\approx \frac{c_s}{v_k} \implies \nu \approx \frac{2}{3} \alpha_{ss} c_s h.
\]
In this way, the kinematic viscosity can be replaced by the $\alpha_{ss}$ parameter.
The value of $\alpha_{ss}$ is typically assumed to lie between 0.01 and 0.1~\cite{28:gou:2011}. Fromang et al.~\cite{29:fromang:2011} have found a radially varying $\alpha_{ss}$, the overall size of which was over an order of magnitude lower, peaking at 0.013 and declining to below 0.002.
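A minimal numerical sketch of this prescription (with hypothetical disc values; in a consistent disc model $c_s$ and $h$ come from the solution itself) reads:
\begin{verbatim}
import math

def omega_kepler(M, r, G_N=6.674e-8):
    """Keplerian angular velocity Omega_k = sqrt(G_N M / r^3), CGS units."""
    return math.sqrt(G_N * M / r ** 3)

def nu_alpha_ss(alpha_ss, c_s, Omega_k):
    """Kinematic viscosity from the alpha prescription,
    nu = (2/3) alpha_ss c_s^2 / Omega_k (equivalently ~ (2/3) alpha_ss c_s h)."""
    return (2.0 / 3.0) * alpha_ss * c_s ** 2 / Omega_k

# Hypothetical example: alpha_ss = 0.01, c_s = 1e8 cm/s, a 1e8 M_sun BH
Omega = omega_kepler(M=1.989e41, r=1.5e14)
print(nu_alpha_ss(0.01, 1.0e8, Omega))   # kinematic viscosity in cm^2/s
\end{verbatim}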
It is clear that, to measure viscosity, we need to know the variation of the density as well. Again, the accretion density and many other properties depend on the spin parameter of the central gravitating BH. Fink~\cite{Fink:2016} found the spin parameter $a$ of the Arakelian 120 galaxy, in the constellation of Orion at coordinates $\alpha_{J2000.0}=05^h 16^m 11.395^s$, $\delta_{J2000.0}=-00^{\circ}08'59.65''$, to be $a = 0.99 ^{+0.003}_{-0.004}$, while Turner et al.~\cite{2:Turner:1984} show the range to be $0.996\le a \le 0.998$. For the same Seyfert I galaxy, Fink measured the number density $n$ of the accretion disc to be $10^{15}cm^{-3}$; reference~\cite{2:Turner:1984} has measured it as $10^{15.95}cm^{-3}$. The mean molecular weight is chosen to be the solar value $0.62$ (ionized gas), and hence the mass density of the accretion disc considered is found to be $\rho = \mu m_p n \approx 6.2\times 10^{15}\,m_p~cm^{-3} \approx 10^{-8}~gm~cm^{-3}$.
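The quoted mass density is just $\rho=\mu m_p n$; a quick check with the numbers cited above:
\begin{verbatim}
m_p = 1.6726e-24      # proton mass, g
mu = 0.62             # mean molecular weight (ionized solar-composition gas)
n = 10 ** 15.95       # number density, cm^-3

rho = mu * m_p * n
print(rho)            # ~9.2e-9 g cm^-3, i.e. of order 1e-8 g cm^-3
\end{verbatim}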
The primary physical model of spherically symmetric gas accretion falling onto an astrophysical object was first studied by Bondi. If the rotation of the accreting fluid is not taken into account, effective accretion begins from a characteristic radius (the so called Bondi radius, given by $R_B = \frac{G_NM}{c^2_s}$, $c_s$ being the sound speed through the gas) where the negative gravitational energy starts dominating the thermal energy. The density distribution is considered to follow $\rho \propto \left(1+\frac{R_B}{r}\right)^{3/2}$.
Bondi and Shakura-Sunyaev considered accretion dynamics using a Newtonian potential. The essential general relativistic effects on the curvature, i.e., on the gravity around a BH, were not taken into account. The latter work incorporated the only general relativistic effect by truncating the innermost edge of the disc at the last stable orbit of the Schwarzschild geometry. Novikov and Thorne~\cite{novikov:1973} developed a complete general relativistic description of a thin Keplerian disc~\cite{ghosh:2007}.
To reduce the general relativistic non-linearity, it is helpful to consider stationary flow and to replace the general relativistic effects by the introduction of pseudo Newtonian potentials (PNP). Paczynsky \& Wiita~\cite{paczynsky:1980} proposed such a force, which exactly reproduces the marginally stable and marginally bound orbits of the full general relativistic infall. But this potential considers only the BH's mass. As almost all celestial objects rotate, so do BHs, and Mukhopadhyay (2002)~\cite{BM2002:mukhopadhyay:2002} developed a PNP for a rotating BH for the first time. Sarkar and Biswas~\cite{sarkar:biswas:2019} constructed a PNP for a rotating BH embedded in quintessence; this model produces at most $4.95\%$ error as compared to GR results. Roy and Biswas have modeled an accretion structure with Sarkar \& Biswas's potential. For this model, we should first understand why accretion onto such a quintessence contaminated BH takes place at all. We will call the associated force the Pseudo Newtonian Force (PNF).
It is common to consider supermassive black holes (SMBHs) at the centers of galaxies, though the cause of such a presence is not fully justified. We are able to observe SMBHs at redshift $z=7.54$, which must have formed within less than one billion years~\cite{D1:deuardo:2018}. Alternative models to BHs have been considered: the inclusion of extended objects in classical general relativity~\cite{D2:chirenti:2007}, or the existence of more exotic objects, viz., ``naked singularities''~\cite{D3:joshi:2011}. So far, the motivation of these works was to consider only the gravitational effects of alternatives to BHs and to find out their observational properties, in order to distinguish a BH from a so called BH mimicker. Recently, in references like~\cite{D4:levkov:2018}, the possibility of DM, in the form of bosons, forming self gravitating bound structures in different galaxies has been studied. The authors of~\cite{D5:boshkayev:2019} have compared the motion of test particles in the gravitational field of both an SMBH and a DM core. A significant discrepancy in the motion is noticeable around the radial distance $100AU$, and this increases as we approach the center.
Finer observations in the future (say VLBI, the BH cam project, etc.) might be able to distinguish the shadows caused by a BH and a BH mimicker. As of now, we cannot exclude the existence of SMBH candidates like gravastars or boson stars. These studies motivate us to consider quintessence contaminated BHs. Besides, DM clustering is taken to be the cause of the formation of the different structures of the universe, especially the galaxies. DM and DE interact; as mentioned earlier, DE and bulk viscosity can even be formed out of delayed DM decay. As a result, we can expect the presence of DE in the vicinity of the core area of SMBHs. This motivates us to study viscous accretion onto quintessence contaminated SMBHs.
Another motivation for the present work must be mentioned here. While studying the accretion and wind properties, we see that for an adiabatic fluid the wind branches become almost parallel to the $x$ axis in the $u-x$ plane as we go far from the central BH. On the contrary, the wind branch turns out to be parallel to the $u$ axis when modified Chaplygin gas is accreting (references~\cite{biswas:2011} and~\cite{biswas:sandip:2019}). These two extreme inclinations do not change into one another smoothly, yet no change in the physical constraints leads to such drastically diversified solutions. So, there must exist some ``missing links'' between the two kinds of terminal cases (i.e., adiabatic and MCG flow). If we succeed in finding them, the related nature of the density variations and the corresponding thermodynamics becomes an even more interesting point. We will try to find the answers in the subsequent sections.
The rest of the paper is organized as follows. In section~\ref{sec2:profile:fluid}, we first reorganize the structure of the PNF for a rotating BH embedded in a quintessence universe and then construct the mathematical problem for our model. In subsection~\ref{subsec:speed}, we find solutions for the radially inward speed, the speed of sound and the specific angular momentum as functions of the radial distance from the BH, and we thoroughly analyze these curves. In subsection~\ref{subsec:density}, we find the variation of the densities of accretion and wind for different parameters and explain them. Subsection~\ref{subsec:viscosity} deals with the study of the ratio of shear viscosity to entropy density for our model. In section~\ref{sec:conclusion}, we conclude in brief.
\section{Mathematical Modeling of Viscous Accretion onto Rotating SMBHs}
\label{sec2:profile:fluid}
\subsection{Equations for Pseudo Newtonian Force}
We keep the effects of different sectors like quintessence etc. and construct the PNF~\cite{sarkar:biswas:2019}. We fix the units of length and speed to be $G_NM/c^2$ and $c$ respectively, where $M$ is the mass of the central object and $c$ is the speed of light. We prepare the dimensionless parameters $x= \frac{r}{G_NM/c^2}$ and $a = j/c$.
Assuming
\[
\zeta(x) = a\mathcal{A}_q x^{3\omega_q} ~~~~ and
\]
\[
\eta(x)=x^{3(\omega_q-1)} ~~~~ along~ with
\]
$$
\alpha(x) = \zeta(x) \{3\omega_q(a^2+x^2)+ 3x^2-8x+a^2\}~,
$$
$$
\beta(x) = (2a^2+6x-8) \zeta^2(x)(\mathcal{A}^{-2}_q/a) - 2a\mathcal{A}^2_q x ~~ and
$$
$$
\gamma(x)= 2\eta(x)\{(x^2-2x+a^2)x^{3\omega_q} -\mathcal{A}_q x\}^2 \{\mathcal{A}_q (3\omega_q+1) +2\zeta(x)/(a\mathcal{A}_q)\}~~~~,$$
we construct the numerator of the PNF as found by~\cite{sarkar:biswas:2019}
$$
N(x) = \{\alpha(x)+\beta(x) - \sqrt{\gamma(x)}\}^2~.
$$
Next we again assume
$$
\phi(x) = a \zeta(x)(3\omega_q+1) + 2a^2 x^{6\omega_q}~~ and
$$
$$
\psi(x) = \frac{\zeta(x)}{a}\left[\frac{1}{x^{3}\eta(x)}+ \frac{(2-x)}{\mathcal{A}_q}\right]
$$
and the denominator of the PNF is formed as~\cite{sarkar:biswas:2019}
$$
D(x) = x^3 \{\phi(x) + 2x \psi^2(x)\}^2~.
$$
Finally, we write our PNF as
\begin{equation}
F_g(x) = N(x)/D(x)
\label{eqn:pnf}
\end{equation}
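For numerical work, the force \eqref{eqn:pnf} can be transcribed directly from the auxiliary functions above. The sketch below is our own transcription (variable names are ours, and the parameters must be chosen so that $\gamma(x)\geq 0$, otherwise the square root is undefined):
\begin{verbatim}
import numpy as np

def pnf(x, a, w_q, A_q):
    """Pseudo-Newtonian force F_g(x) = N(x)/D(x), assembled from the
    auxiliary functions zeta, eta, alpha, beta, gamma, phi and psi above.
    x is the radial coordinate in units of G_N M / c^2."""
    zeta = a * A_q * x ** (3.0 * w_q)
    eta = x ** (3.0 * (w_q - 1.0))
    alpha = zeta * (3.0 * w_q * (a**2 + x**2) + 3.0 * x**2 - 8.0 * x + a**2)
    beta = (2.0 * a**2 + 6.0 * x - 8.0) * zeta**2 / (a * A_q**2) \
        - 2.0 * a * A_q**2 * x
    gamma = (2.0 * eta
             * ((x**2 - 2.0 * x + a**2) * x ** (3.0 * w_q) - A_q * x) ** 2
             * (A_q * (3.0 * w_q + 1.0) + 2.0 * zeta / (a * A_q)))
    N = (alpha + beta - np.sqrt(gamma)) ** 2
    phi = a * zeta * (3.0 * w_q + 1.0) + 2.0 * a**2 * x ** (6.0 * w_q)
    psi = (zeta / a) * (1.0 / (x**3 * eta) + (2.0 - x) / A_q)
    D = x**3 * (phi + 2.0 * x * psi**2) ** 2
    return N / D

# Example: force profile outside the horizon for a slowly rotating BH
x = np.linspace(2.0, 100.0, 500)
F_g = pnf(x, a=0.5, w_q=-2.0 / 3.0, A_q=1.0e-10)
\end{verbatim}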
\subsection{Sound and Fluid Speed Equations}
In this subsection, we will construct the mathematical model from references~\cite{BM:2003} and~\cite{biswas:2011:PNF}. First we will consider the continuity equation,
\[
\frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot (\rho \vec{V}) = 0~~~~,
\]
which is simplified to
\begin{equation}
\frac{d}{dx} \left(x u \Sigma\right) = 0
\label{eqn:math1}
\end{equation}
for stationary and cylindrical structure, where $\Sigma$ is vertically integrated density expressed as
\begin{equation}
\Sigma = I_C \rho_e h(x),
\label{eqn:math2}
\end{equation}
with $I_C$ = constant (related to the EoS of the accreting fluid) = 1 (for simplicity), $\rho_e$ = density of the accreting fluid at the equatorial plane and $h(x)$ = half thickness of the disc. Here $u= u_x = \frac{v_x}{c}$, where $v_x$ is the radially inward speed of accretion. Next, we consider the stationary Navier-Stokes equation,
\begin{equation}
\rho(\vec{V}\cdot\vec{\nabla})\vec{V} = -\vec{\nabla}p + \rho \nu \nabla^2 \vec{V} - \rho\vec{F}_{G}~~~~,
\end{equation}
whose radial component reads
\begin{equation}
u\frac{du}{dx} + \frac{1}{\rho} \frac{dp}{dx}-\frac{\lambda^2}{x^3}+F_g\left( x\right)=0~,
\label{eqn:math3}
\end{equation}
where $F_g(x)$ is the radially inward component of the gravitational force. We use equation (\ref{eqn:pnf}) for $F_g(x)$.
The azimuthal momentum balance equation turns to be
\begin{equation}
u\frac{d\lambda}{dx}=\frac{1}{x\Sigma}\frac{d}{dx}\left[x^2\alpha_{ss}\left(P+\rho u^2\right)h(x)\right]~~~~.
\end{equation}
Assuming the vertical equilibrium from the vertical component, we get,
\begin{equation}
h(x) = c_s \sqrt{\frac{x}{F_g}}.
\label{eqn:math4}
\end{equation}
Assuming $\Psi =c^2_s + (\alpha-c^2_s)n + \alpha$ and $\mu(x) = \left(3 - \frac{x}{F_g} \cdot \frac{dF_g}{dx}\right)$, the radial inward speed, sonic speed and angular momentum gradients turn out to be
\begin{equation}\label{Ritz_raidal}
\frac{du}{dx} = \frac{u\left[\{\lambda^2-x^3 F_g(x)\}\Psi + x^3\mu(x)c^4_s\right]}{x^3[\Psi u^2 - 2 c^4_s]}~~~~,
\end{equation}
\begin{equation}
\frac{dc_s}{dx} = \left(\frac{1}{2} \mu(x) + \frac{1}{u}\frac{du}{dx}\right) \left\{\frac{(n+1)c_s (c_s^2-\alpha)}{\Psi}\right\}~~~~and
\end{equation}
\begin{dmath}
\label{differential equation for lambda}
\frac{d \lambda}{d{x}} = \frac{{x} \alpha_{ss}}{u} \left[ \frac{1}{2} \left( \frac{5}{{x}} - \frac{1}{F_g} \frac{dF_g}{d{x}} \right) \left\lbrace \frac{\left(n+1 \right) \alpha - c_s^2}{n} + u^2 \right\rbrace \right.
\left. + 2 u \frac{du}{d{x}} + \left\lbrace \left( \frac{\left(n+1 \right) \alpha - c_s^2}{n} + u^2 \right) \frac{1}{c_s} -\left( c_s^2 +u^2 \right) \left( \frac{1}{n+1} \frac{2 c_s}{c_s^2 - \alpha} \right) \right\rbrace \frac{dc_s}{d{x}} \right]~~~~.
\end{dmath}
From the structure of the denominator of equation (\ref{Ritz_raidal}), it is clear that it will vanish when $u^2=2c_s^4/\Psi$, with $u$ in the domain $(0,~1)$. Now, as the speed of the accreting fluid is very low far from the gravitating object and very high, almost equal to the speed of light, i.e., equal to $1$, in the vicinity of the event horizon, we can assume that there exists a radial distance $x=x_c$ where the vanishing of the denominator takes place. We call this point a critical point. As the flow should be physical, at $x_c$ the numerator should vanish as well, and the use of L'Hospital's rule provides a quadratic equation for the radial velocity gradient; this generates two different branches of flow: one for accretion and another expressing the wind. Below, we write the quadratic as
\begin{equation}\label{accretioncqg11}
\mathcal{A}\left(\frac{du}{dx}\right)^{2}_{x=x_c}+\mathcal{B}\left(\frac{du}{dx}\right)_{x=x_c}+\mathcal{C}=0,
\end{equation}
where $$\mathcal{A}=2\left[1-\frac{2\left(c_{sc}^{2}-\alpha\right)\left(n+1\right)\left\{\left(1-n\right)c_{sc}^{2}+2\alpha\left(n+1\right)\right\}}{\left\{\left(1-n\right)c_{sc}^{2}+\alpha\left(n+1\right)\right\}^{2}}\right],$$
\begin{equation}\label{accretioncqg12}
\mathcal{B}=-\frac{2}{c_{sc}^{4}}\frac{\left(c_{sc}^{2}-\alpha\right)\left(n+1\right)\left\{\left(1-n\right)c_{sc}^{2}+2\alpha\left(n+1\right)\right\}}{\left\{\left(1-n\right)c_{sc}^{2}+\alpha\left(n+1\right)\right\}}\left[F_{g}(x_{c})-\frac{\lambda^{2}}{x_{c}^{3}}\right],
\end{equation}
\begin{eqnarray}
\mathcal{C} & = & \left\{\frac{3\lambda^{2}}{x_{c}^{4}}-\left(\frac{dF_{g}}{dx}\right)_{x=x_{c}}\right\}-\left[\left\{\frac{1}{F_{g}}\left(\frac{dF_{g}}{dx}\right)^{2}\right\}_{x=x_{c}}-\frac{3}{x_{c}^{2}}-\left(\frac{1}{F_{g}}\frac{d^{2}F_{g}}{dx^{2}}\right)_{x=x_{c}}\right]\frac{u_{c}^{2}}{2} \\ \nonumber
& & -\frac{u_{c}^{2}}{2c_{sc}^{8}}\left[\left(c_{sc}^{2}-\alpha\right)\left(n+1\right)\left\{\left(1-n\right)c_{sc}^{2}+2\alpha\left(n+1\right)\right\}\right]\left[F_{g}(x_{c})-\frac{\lambda^{2}}{x_{c}^{3}}\right]^{2},
\end{eqnarray}
where $u_c$ is the value of radial velocity at $x=x_c$ and $c_{sc}$ is the speed of sound at $x=x_c$.
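Numerically, once $x_c$, $u_c$ and $c_{sc}$ are known, the two slopes of the flow at the critical point are simply the roots of \eqref{accretioncqg11}. A minimal sketch (our own illustration; the coefficients $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$ are evaluated beforehand from the expressions above):
\begin{verbatim}
import numpy as np

def critical_slopes(A, B, C):
    """Roots of A (du/dx)^2 + B (du/dx) + C = 0 at the critical point x_c.

    For a physical (transonic) critical point both roots are real:
    conventionally du/dx < 0 traces the accretion branch (speed growing
    inward) and du/dx > 0 the wind branch. Integrating the gradient
    equations above outwards and inwards from x_c with each slope
    generates the two families of solutions.
    """
    roots = np.roots([A, B, C])   # complex roots signal a non-physical x_c
    return np.sort(roots.real)
\end{verbatim}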
\section{Solutions and Analysis}
In this section, we will study different accretion properties, viz, fluid speed, sonic speed, $\lambda/\lambda_k$ ratio, accretion/wind fluid density and $\eta/s$ ratio. We have divided the whole study into three subsections: (i) speeds, (ii) density and (iii) $\eta/s$ ratio.
\subsection{Profiles for Accreting Fluid Speed}
\label{subsec:speed}
\begin{figure}
\centering
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-1-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-1-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-1-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-1-4.png}}
\caption*{\textbf{\emph{Figure 1.1:}} Images for $\lambda_c=2.7, \omega_q = 1/3, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-2-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-2-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-2-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-2-4.png}}
\caption*{\textbf{\emph{Figure 1.2:}} Images for $\lambda_c=2.7, \omega_q = 0, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-3-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-3-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-3-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-3-4.png}}
\caption*{\textbf{\emph{Figure 1.3:}} Images for $\lambda_c=2.7, \omega_q = -2/3, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for fluid speed vs radial distance from the BH.}
\end{figure}
\begin{figure}
\ContinuedFloat
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-4-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-4-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{fluid-4-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{fluid-4-4.png}}
\caption*{\textbf{\emph{Figure 1.4:}} Images for $\lambda_c=2.7, \omega_q = -1, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for fluid speed vs radial distance from the BH.}
\label{fig:fluid}
\end{figure}
\begin{figure}
\centering
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-1-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-1-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-1-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-1-4.png}}
\caption*{\textbf{\emph{Figure 2.1:}} Images for $\lambda_c=2.7, \omega_q = 1/3, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-2-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-2-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-2-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-2-4.png}}
\caption*{\textbf{\emph{Figure 2.2:}} Images for $\lambda_c=2.7, \omega_q = 0, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-3-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-3-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-3-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-3-4.png}}
\caption*{\textbf{\emph{Figure 2.3:}} Images for $\lambda_c=2.7, \omega_q = -2/3, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for sound speed vs radial distance from the BH.}
\end{figure}
\begin{figure}
\ContinuedFloat
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-4-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-4-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{sound-4-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{sound-4-4.png}}
\caption*{\textbf{\emph{Figure 2.4:}} Images for $\lambda_c=2.7, \omega_q = -1, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for sound speed vs radial distance from the BH.}
\label{fig:sound}
\end{figure}
We have plotted figures~\ref{fig:fluid}.1.a to~\ref{fig:fluid}.4.d to show the variations of the fluid's radially inward speed with respect to the radial distance. The rows entitled \ref{fig:fluid}.1, \ref{fig:fluid}.2, \ref{fig:fluid}.3 and \ref{fig:fluid}.4 are for $\omega_q=1/3, 0, -2/3$ and $-1$ respectively. The columns entitled (a) and (b) are for adiabatic accretion with polytropic index $1.6$ and viscosity given by $\alpha_{ss}=10^{-4}$ and $10^{-2}$ respectively. The columns entitled (c) and (d) are for adiabatic index $0.09$ with viscosity parameter $\alpha_{ss}=10^{-4}$ and $10^{-2}$ respectively. In each figure, solid lines are accretion (green) and wind (red) for $a=0$, dotted lines are accretion (black) and wind (purple) for $a=0.5$, dot-dashed lines are accretion (olive) and wind (orange) for $a=0.9$, and dashed lines are accretion (blue) and wind (pink) for $a=0.998$. The inset contains $\log(u)$ vs. $\log(x)$ curves, whereas the offset shows $u$ vs. $\log(x)$ variations.
Now we are ready to analyze figure~\ref{fig:fluid}.1.a. The common features of the curves are the same. The accretion speed rises as we move towards the BH. The wind speed is low near the BH; as we go far, it increases and then becomes almost constant. Note that the accretion becomes fainter as we move far from the BH, and this fainting rate increases as the value of the spin parameter increases. The more the rotation, the nearer to the BH the sensible wind flow starts. As viscosity increases, in figure~\ref{fig:fluid}.1.b, we observe the accretion to fall abruptly at the distant parts of the accretion disc. So, the inclusion of viscosity reduces the angular momentum transport efficiency, which ultimately causes a reduction of the physical radius of the disc. This is very clear in \ref{fig:fluid}.1.d, which is drawn for $\Gamma=0.09, \alpha_{ss}=10^{-2}$. We failed to reach positive values of $n_{MCG}$, which may indicate the existence of DE accretion onto a quintessence BH; $\Gamma=0.09$ is the lowest value for which we get physical solutions. We see the wind speed increase abruptly until its value reaches the speed of light at a finite distance. We conclude that if the value of $\Gamma$ is low, wind dominates accretion. This nature is, however, accompanied by the inclusion of high viscosity. For a high spin parameter, the radius where the wind speed becomes equal to that of light is smaller.
Before approaching the other cases of $\omega_q$, we try to find out the cases for which the accretion stops or the wind reaches the speed of light at a finite distance. To do this, we look at the sound speed and $\frac{\lambda}{\lambda_k}$ curves, which are plotted in figures~\ref{fig:sound}.1.a to~\ref{fig:sound}.4.d and~\ref{fig:lambda}.1.a to \ref{fig:lambda}.4.d respectively.
\begin{figure}
\centering
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-1-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-1-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-1-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-1-4.png}}
\caption*{\textbf{\emph{Figure 3.1:}} Images for $\lambda_c=2.7, \omega_q = 1/3, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-2-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-2-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-2-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-2-4.png}}
\caption*{\textbf{\emph{Figure 3.2:}} Images for $\lambda_c=2.7, \omega_q = 0, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-3-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-3-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-3-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-3-4.png}}
\caption*{\textbf{\emph{Figure 3.3:}} Images for $\lambda_c=2.7, \omega_q = -2/3, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{$\lambda/\lambda_k$ vs radial distance plotting for accretion and wind branches for different parameters.}
\end{figure}
\begin{figure}
\ContinuedFloat
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-4-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-4-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{lambda-4-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{lambda-4-4.png}}
\caption*{\textbf{\emph{Figure 3.4:}} Images for $\lambda_c=2.7, \omega_q = -1, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{$\lambda/\lambda_k$ plotting for Accretion and Wind for different parameters.}
\label{fig:lambda}
\end{figure}
For the radiation dominated era, we find that the sound speed of the wind branch decreases slowly as we go far from the BH. When viscosity is considered, the sound speed for the wind branch is found to blow up at a finite distance $x=x_{end}\left(\omega_q, \alpha_{ss}\right)$. Exactly at $x=x_{end}$ the fluid speed of the wind branch is found to be zero. So, at the end regions of the corresponding disc, the fluid becomes stiff, which raises the sound speed but lowers the actual fluid speed. As $\alpha_{ss}$ increases, the value of $x_{end}$ decreases. Similarly, as $\omega_q$ is reduced, $x_{end}$ decreases as well. So, both the viscosity and the quintessential nature of the BH's background shorten the effective disc length. The accretion branch's sound speed is low at the farthest point and decreases further as the distance is reduced.
The $\lambda/\lambda_k$ curves tell us where the angular momentum becomes greater than that possessed by a Keplerian orbit. If $\lambda/\lambda_k > 1$, the disc rotates with a speed which is more than enough to balance the inward gravitational pull. This breaks the structure of a stable accretion disc beyond $x=x_{end}$, and the part of the disc beyond that point is truncated.
These three sets of graphs show that viscous accretion onto a quintessence contaminated BH is weakened/shortened by four factors: the rotation of the BH, the negativity of $\omega_q$ of the background in which the BH is embedded, the value of $\alpha_{ss}$, i.e., the imposed viscosity, and the negativity inserted in the EoS of the accreting fluid. If all these factors increase, the accretion disc fades, causing a weaker feeding process.
To concretize these results, in the next subsection we study the variation of the density of the accretion and wind branches for all the possible cases.
\subsection{Profiles for Accreting Fluid Density}
\label{subsec:density}
For an accretion process to be sustained, the total system considered must not have a luminosity greater than a maximum limit. Beyond this limit, the radiation pressure is so high that it overcomes the gravitational pull on any infalling object. This lets no matter fall inward, and accretion stops. This maximum limit is known as the Eddington luminosity ($L_{Edd}$, say). To obtain this limit, we balance the gravitational force with the radiation force as,
\[
F_{Grav}(M, m, R) = \frac{G_N M m}{R^2} = P_{rad}k m = \frac{L}{c} \cdot \frac{1}{4\pi R^2} k m = \frac{L}{c} \cdot \frac{1}{4\pi R^2} \frac{\sigma_T}{m_p}m
\]
where $k, \sigma_T, m_p,$ and $L$ are the opacity, the Thomson scattering cross-section, the proton mass and the luminosity respectively. If this $L$ is set equal to the Eddington luminosity, we obtain
\begin{equation}
L_{Edd} = \frac{4\pi G_N M c m_p}{\sigma_T}~~~~.
\end{equation}
Now, consider $\dot{M}_{Edd}$ is the Eddington mass accretion rate of the considered system. If the $\epsilon$ fraction of the mass is supposed to generate energy, then,
\begin{equation}
\label{eqn:edington}
L_{Edd} = \epsilon \dot{M}_{Edd}c^2~~\implies~~ \dot{M}_{Edd} = \frac{4\pi G_N M m_p}{\epsilon c \sigma_T}~~~~.
\end{equation}
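As a quick numerical cross-check of the relations above (our illustration, not part of the original derivation), the following Python sketch evaluates $L_{Edd}$ and $\dot{M}_{Edd}$ in CGS units; the constants are standard CGS values, and the example mass and efficiency are chosen arbitrarily from the values discussed below.
\begin{verbatim}
import math

# Standard CGS constants (quoted to four digits)
G_N     = 6.674e-8    # gravitational constant [cm^3 g^-1 s^-2]
c       = 2.998e10    # speed of light [cm s^-1]
m_p     = 1.673e-24   # proton mass [g]
sigma_T = 6.652e-25   # Thomson cross-section [cm^2]
M_sun   = 1.989e33    # solar mass [g]

def L_edd(M):
    """Eddington luminosity [erg s^-1] for a BH of mass M [g]."""
    return 4.0 * math.pi * G_N * M * c * m_p / sigma_T

def Mdot_edd(M, eps):
    """Eddington mass accretion rate [g s^-1], from L_Edd = eps*Mdot*c^2."""
    return L_edd(M) / (eps * c**2)

# Example: a Sagittarius A*-like mass with eps = 0.1
M = 4.3e6 * M_sun
print(L_edd(M), Mdot_edd(M, 0.1))
\end{verbatim}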
To choose the mass $M$ of the central BH, we list a few SMBH masses in table~\ref{table:smbh:mass:list}.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c}\hline
Name & Mass($M_\odot$) & Ref\\ \hline
SDSS J102325.31+514251.0 & $(3.31\pm 0.61)\times10^{10}$ & \cite{wiki13:zuo:2015}\\ \hline
Messier 87 & $7.22^{+0.34}_{-0.40}\times 10^9$ & \cite{wiki37:10:1093:mnras:stv2982} \\ \hline
PG 1700+518 & $7.81^{+1.82}_{-1.65}\times 10^8$ & \cite{wiki4:peterson:2014} \\ \hline
Messier 81 & $7\times 10^7$ & \cite{wiki75:devereux:2003} \\ \hline
Sagittarius A* & $4.3\times 10^6$ & \cite{wiki88:Ghez:2008}\\ \hline
\end{tabular}
\caption{Masses of some SMBHs}
\label{table:smbh:mass:list}
\end{table}
We generalize this as $M=10^{6+\sigma}\times M_\odot$.
To determine $\epsilon$, we note that the typical luminosities are $0.01-0.1L_{Edd}$ for quasars and $0.001-0.3L_{Edd}$ for Seyfert galaxies. So, we choose
\begin{equation}
\label{eqn:eps}
\epsilon = 3\times 10^{-1-\psi}~~~~.
\end{equation}
Combining equations~(\ref{eqn:edington}) \&~(\ref{eqn:eps}), we have
\begin{equation}
\dot{M}_{Edd} = \frac{4\pi G_N M_{\odot}m_p 10^7}{3 c \sigma_T} 10^{\sigma+\psi}~~~~.
\end{equation}
Now, let us assume the accretion disc concerned in this work is consuming mass at Eddington mass accretion limit. Then,
\begin{equation}
\rho = \frac{\dot{M}_{Edd}\sqrt{\frac{F_g}{x^3}}}{u\; c_s}~~\implies~~ \rho = \frac{4\pi m_p 10^{1+\psi}}{3 c \sigma_T}\times \frac{\sqrt{\frac{F_g}{x^3}}}{u c_s}~~~~.
\end{equation}
\begin{figure}
\centering
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-1-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-1-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-1-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-1-4.png}}
\caption*{\textbf{\emph{Figure 4.1:}} Images for $\lambda_c=2.7, \omega_q = 1/3, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-2-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-2-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-2-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-2-4.png}}
\caption*{\textbf{\emph{Figure 4.2:}} Images for $\lambda_c=2.7, \omega_q = 0, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-3-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-3-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-3-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-3-4.png}}
\caption*{\textbf{\emph{Figure 4.3:}} Images for $\lambda_c=2.7, \omega_q = -2/3, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for the density $\rho$ vs radial distance from the BH.}
\end{figure}
\begin{figure}
\ContinuedFloat
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-4-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-4-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{density-4-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{density-4-4.png}}
\caption*{\textbf{\emph{Figure 4.4:}} Images for $\lambda_c=2.7, \omega_q = -1, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for the density $\rho$ vs radial distance from the BH.}
\label{fig:density}
\end{figure}
\begin{figure}
\centering
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-1-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-1-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-1-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-1-4.png}}
\caption*{\textbf{\emph{Figure 5.1:}} Images for $\lambda_c=2.7, \omega_q = 1/3, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-2-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-2-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-2-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-2-4.png}}
\caption*{\textbf{\emph{Figure 5.2:}} Images for $\lambda_c=2.7, \omega_q = 0, A_q=0.01$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-3-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-3-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-3-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-3-4.png}}
\caption*{\textbf{\emph{Figure 5.3:}} Images for $\lambda_c=2.7, \omega_q = -2/3, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for $\eta/s$ vs radial distance from BH.}
\end{figure}
\begin{figure}
\ContinuedFloat
\setcounter{subfigure}{0}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-4-1.png}}
\subfigure[$\Gamma=1.6, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-4-2.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-4}$]{\includegraphics[width=0.24\textwidth]{eta-4-3.png}}
\subfigure[$\Gamma=0.09, \alpha_{ss}=10^{-2}$]{\includegraphics[width=0.24\textwidth]{eta-4-4.png}}
\caption*{\textbf{\emph{Figure 5.4:}} Images for $\lambda_c=2.7, \omega_q = -1, A_q=10^{-10}$. Red solid line shows wind for $a=0$, Green solid line shows accretion for $a=0$, Purple dotted line shows wind for $a=0.5$, Black dotted line shows accretion for $a=0.5$, Orange dash-dotted line shows wind for $a=0.9$, Olive dash-dotted line shows accretion for $a=0.9$ and Pink dashed-dashed line shows wind for $a=0.998$, Blue dashed-dashed line shows accretion for $a=0.998$}
\caption{Curves for $\eta/s$ vs radial distance from BH.}
\label{fig:eta}
\end{figure}
We plot density vs radial distance in figures 4.1.a to 4.4.d. Among them, figures 4.1.a to 4.1.d are for the EoS value $\omega_q=\frac{1}{3}$. In figure 4.1.a, where the viscosity is low ($\alpha_{SS}=10^{-4}$), the wind density varies from $10^{-21}$ to $10^{11}\,\rm{gm\,cm^{-3}}$. In figure 4.1.b, the viscosity is high ($\alpha_{SS}=10^{-2}$) and, as a result, we observe the wind density to rise from the order of $10^{-24}\,\rm{gm\,cm^{-3}}$ at $10^{3}$ Schwarzschild radii to $10^{12}\,\rm{gm\,cm^{-3}}$ near the event horizon. For low values of $\Gamma$, we observe that the wind density varies over a larger range.
The accretion density, however, varies over a comparatively smaller range.
The same wide range of variation is observed for all the $\omega_q$ cases.
We can explain this as follows: the accretion density varies over about four orders of magnitude, centred around the value $10^{-18}\,\rm{gm\,cm^{-3}}$. The negative pressure of quintessence may oppose the infall of the accreting matter and hence, near the BH, the wind speed is tremendously high. The wind density profiles match the density profile predictions of articles such as~\cite{di:matteo:2003}.
\subsection{Shear Viscosity Coefficient to Entropy Density Ratio}
\label{subsec:viscosity}
A dual holographic description is predicted for strongly interacting quantum field theories; for example, one can choose systems where BHs are embedded in AdS space. For such a system, a universal lower bound on the ratio of the shear viscosity coefficient ($\eta$) to the entropy density ($s$) is prescribed as~\cite{kovtun:2005,policastro:2001,tamaryan:2003,buchel:2004,son:2007} $$\frac{\eta}{s}\geq \frac{1}{4\pi} \frac{\hbar}{k_B}~~~~.$$
This lower bound is popularly known as the Kovtun-Starinets-Son (KSS) bound.
However, Jakovac~\cite{jakovac:2010} has calculated the $\frac{\eta}{s}$ ratio mathematically. To do this, he assumed some physical conditions on the spectral functions and kept the entropy density constant. He observed that the lower bound may not be universal for systems which carry quasi-particle constituents with a small wave function renormalization constant, for high temperature strongly interacting systems, or for systems with low temperature and zero mass excitations. As discussed in the introductory section of this article, DE may possess a very small amount of shear viscosity. Besides, the entropy density of DE should be high due to its repulsive nature. The entropy budgets of different components of the universe are listed in~\cite{egan:2010}: for the cosmic event horizon the entropy may rise up to $2.6\pm 0.3\times 10^{122}$ and for SMBHs it may go up to $1.2^{+1.1}_{-0.7}\times 10^{103}$ (in units of $k_B$). So, for a phenomenon which involves both SMBHs and DE, the $\frac{\eta}{s}$ ratio may fall and can become lower than the theoretical prediction as well.
We follow the reference~\cite{151:banibrata:2013151} to set the entropy equation as
\begin{equation}
uT\frac{ds}{dx}=q_{vis}+q_{mag}+q_{nex}-q_{rad}~~~~.
\end{equation}
$T$ denotes the temperature of the flow and $s$ is the entropy density. $q_{vis}$, $q_{mag}$ and $q_{nex}$ respectively denote the energies released per unit volume per unit time due to viscous dissipation, magnetic dissipation and thermonuclear reactions. $q_{rad}$ indicates the energy radiated away per unit volume per unit time by various cooling processes such as synchrotron, bremsstrahlung and inverse Comptonization of soft photons, together with the energy absorbed per unit volume per unit time due to thermonuclear reactions.
As a result, the entropy density of the flow can be expressed as
\begin{equation}
s=\int \frac{q_{vis}+q_{mag}+q_{nex}-q_{rad}}{uT}dx~~~~.
\end{equation}
The turbulent kinematic viscosity $\gamma$ can be scaled linearly with the sound speed $c_s$ in the flow and the half thickness $h$ of the disc, providing $\gamma=\alpha_{SS}c_s h$. Hence the shear viscosity coefficient has the form
\begin{equation}
\eta=\rho\gamma=\alpha_{SS}\rho c_s h
\end{equation}
and ultimately, the $\frac{\eta}{s}$ ratio for DE accretion is found to be
\begin{equation}
\frac{\eta}{s}=\frac{\alpha_{SS}c_s^2\sqrt{\frac{x_c}{F_g}}\rho^{n+1}}{\rho^{n+1}\left(\alpha+1\right)-\beta}~~~~.
\end{equation}
\section{Brief Discussions and Conclusions}
\label{sec:conclusion}
The present article can be treated as a detailed study of viscous accretion onto a rotating black hole embedded in a quintessence universe and of the consequent thermodynamic phenomena. To construct the mathematical model we have chosen a particular type of black hole whose signature properties are mass and rotation, along with a special type of background. Quintessence is a hypothetical fluid theorized to create the repulsive force responsible for the late time cosmic acceleration. We choose a rotating black hole solution which carries the effects of the quintessence universe in it. The gravitational effect of such a black hole is implemented through a pseudo-Newtonian potential, since the direct general relativistic nonlinear differential equations are difficult to solve. The viscous effect is adopted through the Shakura and Sunyaev $\alpha_{SS}$ prescription.
We find that if the viscosity is high, the fluid speed of the accretion branch falls steeply as we move away from the central black hole. The wind speed increases as we increase the viscosity, but the corresponding shift in radial distance is small. At a finite distance the fluid speed becomes equal to the speed of light. As we increase the quintessential effect, the wind speed increases.
The truncation of the accretion length is supported by the sonic speed curves and by the curves of the ratio of the specific angular momentum to the Keplerian angular momentum. Either the sonic speed reaches the speed of light or the $\frac{\lambda}{\lambda_k}$ ratio reaches the value $1$, where the accretion turns zero. The steep fall in accretion due to the increase in viscosity signifies the weakening of the accretion process.
The density profiles are found to be very interesting. At the edges of the disc, at a distance of approximately a thousand Schwarzschild radii, the density is found to be very low, though of course still higher than the mean density of the universe. In the near vicinity of the SMBH, we see the wind density rise up to the order of $10^{12}\,\rm{gm\,cm^{-3}}$, which matches the observational results quite well.
Finally, we study the $\frac{\eta}{s}$ ratio and find that it turns out to be less than the theoretical predictions. Present day speculation about the shear viscosity of DE supports this result. Interestingly, we obtain $\eta/s$ lower than the predicted bound even for adiabatic accretion: only the dark energy contamination of the BH metric itself is considered, not any modification of the accreting fluid's properties. Our result strongly states that, for late time BHs, accretion of an adiabatic fluid can even reduce the $\eta/s$ ratio.
\section*{Acknowledgment}
This research is supported by the project grant of Government of West Bengal, Department of Higher Education, Science and Technology and Biotechnology (File no:- $ST/P/S\&T/16G-19/2017$). RB thanks IUCAA for providing Visiting Associateship.
\bibliographystyle{unsrt}
\section{Introduction}\label{s;intro}
Designing highly efficient algorithms is an important subject of numerical computation, due to the computational time
and memory issues arising in the solution
of large scale problems.
The technique of parallel algorithms has received more and more attention in the past few years, comprising domain decomposition methods in the spatial direction and, generally, parallelization in the time direction.
The parareal algorithm, our focus in the sequel, was first introduced by Lions, Maday, and Turinici in 2001 \cite{Lions2001}, further modified by Bal and Maday in \cite{Bal2002}, and has attracted vast attention in the last decade.
Compared with other parallel approaches, this algorithm belongs to time-parallel category.
The general idea of parareal algorithm contains roughly three steps as follows.
First, we obtain an approximate solution on a coarse time grid by a rough solver. Second, we use another, more accurate solver to compute the approximation on each coarse time interval (splitting the coarse time interval into finer sub-intervals), performed in parallel with the initial values computed in the first step. Finally, combining the values of the above two steps on the coarse time grid, we obtain a new approximation by a prediction and correction iteration.
In general, this algorithm has high parallel performance and is easy to implement, which motivates the development of efficient parallel methods for time dependent problems. Since the parareal algorithm was proposed, many efforts have been made to analyze it theoretically \cite{Wu2015} and numerically, which verify the effectiveness of the parareal algorithm for a large variety of problems, including control theory \cite{Maday2002}, the Navier-Stokes problem \cite{2005Barth} and Hamiltonian differential equations \cite{Dai2013,Gander2014}, for instance.
Stochastic differential equations (SDEs) have attracted considerable attention in order to obtain much more realistic mathematical models in many scientific disciplines, such as physics, molecular biology, population dynamics and finance \cite{Gardiner2009,Oksendal2003}.
However,
it is difficult to find explicit solutions of SDEs analytically; therefore, there has been tremendous interest in developing effective and reliable numerical methods for SDEs (e.g. \cite{Burrage2004,Higham2001,Kloeden1992} and references therein).
It is also a significant issue whether the geometric features of SDEs are preserved by reliable numerical methods, especially in long-time simulation; this is as important as in the deterministic case \cite{Hairer2006,Milstein2004}.
In practice, such simulations are time consuming, so parallel techniques can be considered to speed up the original integrator.
For stochastic problems, applications of parallel algorithms are relatively few. For example, the parareal algorithm has been applied to stochastic ordinary differential equations in filtering problems \cite{Bal2003} and to stochastic models in chemical kinetics \cite{Engblom2009}.
However, to the best of our knowledge, there are no results on the parareal algorithm for stochastic differential equations with conserved quantities.
When applying the parareal algorithm to SDEs with conserved quantities, as mentioned in \cite{Dai2013,Dai2013a,Gander2014},
the original algorithm is not able to share this kind of conservative property, namely, the preservation of conserved quantities along the sample paths of the exact solution, even when the coarse and fine integrators both have adequate conservative properties. Therefore, the good long-time behaviour enjoyed by the original system itself is not inherited by the numerical simulation. In this paper, we mainly utilize projection methods for SDEs with conserved quantities as
the basic propagators, together with a parareal algorithm with a projection corrector, which preserves the conserved quantities of the exact flow, as proposed in \cite{Dai2013}.
The rest of the paper is organized as follows. Section \ref{s;pr} briefly recalls the parareal algorithm for general time-dependent problem. Section \ref{s;pro} discusses the procedure projection methods for SDEs with conserved quantities, and gives the corresponding mean-square convergence. Next in Section \ref{s;prq}, we consider the parareal algorithm focusing on the SDEs with certain conserved quantities, which combines the ideas of the previous two sections. Finally, three typical SDE examples are chosen to perform numerical tests in Section \ref{s;numer}.
\section{The original parareal algorithm}\label{s;pr}
In this section, we first review the original parareal algorithm for a general initial-value problem:
\begin{equation}\label{2.2.1}
\left\{
\begin{aligned}
u'(t) &= f(t,u(t)), \quad t\in [0,T], \\
u(0) &= u_0,
\end{aligned}
\right.
\end{equation}
where $f: \mathbb{R} \times \mathbb{R}^d \rightarrow \mathbb{R}^d $ is a suitable function that ensures the well-posedness of (\ref{2.2.1}). To perform the parareal algorithm, we first divide the time interval $[0,T]$ into $N$ uniform large time intervals $[T_n, T_{n+1}]$ with step-size $\Delta T = T_{n+1} - T_n$, $n=0,1,\dots,N-1$, and $N=\frac{T}{\Delta T}$. Then, we further divide every large interval $[T_n, T_{n+1}]$ into $J$ small time intervals $[t_{n+\frac{j}{J}}, t_{n+\frac{j+1}{J}}]$, $j = 0,1,\dots, J-1$. With that, two numerical propagators, the coarse propagator $\mathscr{G}$ and the fine propagator $\mathscr{F}$, are needed. In fact, $\mathscr{G}$ is usually cheap to compute but of low convergence order, while $\mathscr{F}$ is of high order but more expensive to compute. The parareal algorithm can be described as follows.
\begin{itemize}
\item Initialization: use the coarse propagator $\mathscr{G}$ and time-step $\Delta T$ to compute the initial values $\{u_{n}^{(0)}\}_{n=0}^N$ sequentially
\begin{equation*}
\left\{
\begin{aligned}
u_{n+1}^{(0)} &= \mathscr{G}(T_n, u_n^{(0)},\Delta T), \quad n=0,1,\dots,N-1,\\
u_0^{(0)} &= u_0.
\end{aligned}\right.
\end{equation*}
\item For $k=0,1,\dots$
\begin{enumerate}
\item use the fine propagator $\mathscr{F}$ and small time-step $\frac{\Delta T}{J}$ to compute $\hat{u}_n$ on each sub-interval $[T_n, T_{n+1}]$ independently, thus possibly in parallel
\begin{equation*}
\left\{
\begin{aligned}
\hat{u}_{n+\frac{j+1}{J}} &= \mathscr{F}(t_{n+\frac{j}{J}}, \hat{u}_{n+\frac{j}{J}},\frac{\Delta T}{J}), \quad j=0,1,\dots,J-1,\\
\hat{u}_n &= u_n^{(k)}.
\end{aligned}\right.
\end{equation*}
\item perform sequential corrections
\begin{equation*}
\left\{
\begin{aligned}
u_{n+1}^{(k+1)} &= \mathscr{G}(T_n, u_n^{(k+1)},\Delta T) + \hat{u}_{n+1} - \mathscr{G}(T_n, u_n^{(k)},\Delta T), \quad n=0,1,\dots,N-1,\\
u_0^{(k+1)} &= u_0.
\end{aligned}\right.
\end{equation*}
\item If $\{u_n^{(k+1)}\}_{n=1}^N$ satisfies the stopping criterion, break the iteration; otherwise continue the iteration.
\end{enumerate}
\end{itemize}
Note that the parareal algorithm can be expressed compactly as follows:
\begin{equation}\label{e;pc}
u_{n+1}^{(k+1)} = \mathscr{G}(T_n, u_n^{(k+1)},\Delta T) + \mathscr{F}^J(T_n, u_n^{(k)},\frac{\Delta T}{J}) - \mathscr{G}(T_n, u_n^{(k)},\Delta T)
\end{equation}
where $\mathscr{F}^J$ means applying $\mathscr{F}$ $J$ times sequentially. It is known that $u_n^{(k)} \rightarrow u_n^*$, $n=0,1,\dots,N$, as $k \rightarrow + \infty$ when iteration \eqref{e;pc} converges, where $u_n^*$ is actually the result computed by the fine propagator $\mathscr{F}$ with small step-size $\Delta T/J$ \cite{Wu2011}. Thus, the convergence accuracy of this iterative algorithm after a certain number of iterations is comparable to that of the fine propagator $\mathscr{F}$ with the small step-size $\Delta T/J$ \cite{Bal2002}. In other words, the parareal algorithm can approach the accuracy of the fine propagator at a sequential computational cost comparable to that of the coarse propagator.
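As an illustration of the scheme just described, iteration \eqref{e;pc} can be sketched in a few lines of Python. This is a minimal serial sketch under stated assumptions: \texttt{G} and \texttt{F} are placeholder one-step maps $(t,u,h)\mapsto u_{new}$, the fine sweeps over $n$ are written as an ordinary loop although they would be distributed across processors in practice, and the stopping test simply compares successive iterates.
\begin{verbatim}
import numpy as np

def parareal(u0, T, N, J, G, F, K, tol=1e-12):
    """Parareal iteration (e;pc) with K maximum iterations."""
    dT = T / N
    t = np.arange(N + 1) * dT
    # initialization: sequential coarse sweep
    u = [np.asarray(u0, dtype=float)]
    for n in range(N):
        u.append(G(t[n], u[n], dT))
    u = np.array(u)
    for k in range(K):
        # fine sweeps on each sub-interval (parallelizable loop)
        hat = u.copy()
        for n in range(N):
            v = u[n]
            for j in range(J):
                v = F(t[n] + j * dT / J, v, dT / J)
            hat[n + 1] = v
        # sequential predictor-corrector sweep
        new = u.copy()
        for n in range(N):
            new[n + 1] = G(t[n], new[n], dT) + hat[n + 1] - G(t[n], u[n], dT)
        if np.max(np.abs(new - u)) < tol:  # simple fixed-point stopping test
            return new
        u = new
    return u

# e.g. with forward-Euler maps for a hypothetical right-hand side f:
# G = lambda t, u, h: u + h * f(t, u)
\end{verbatim}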
\section{Projection methods for SDEs with conserved quantities}\label{s;pro}
Consider the initial value problem for the general $d$-dimensional autonomous stochastic differential equation (SDE) in the sense of Stratonovich:
\begin{equation}\label{e;sde}
\left\{
\begin{aligned}
dX(t) &= f\big(X(t)\big)dt + \sum_{r=1}^m g_r\big(X(t)\big) \circ dW_r(t), \quad t\in [0, T],\\
X(0) &= X_0,
\end{aligned}\right.
\end{equation}
where $X(t)$ is a $d$-dimensional column vector, $W_r(t)$, $r = 1,\dots,m$, are $m$ independent one-dimensional standard Wiener processes defined on a complete filtered probability space $(\Omega , \mathcal{F}, \mathbb{P}, \{\mathcal{F}_t\}_{t \geq 0})$ fulfilling the usual conditions, and $f$ and $g_r$ are $\mathbb{R}^d$-valued functions satisfying the conditions under which \eqref{e;sde} has a unique solution. $X_0$ is an $\mathcal{F}_{0}$-measurable random variable with $\mathbb{E}|X_0|^2<\infty$.
\begin{definition}\label{d:multiple}
System \eqref{e;sde} possesses $l$ ($l\ge 1$) independent conserved quantities $I^i(x)$, $i=1,\dots ,l$, if
\begin{equation*}
\big(\nabla I^i(x)\big)^T f(x) = 0 \quad\text{and}\quad\big(\nabla I^i(x)\big)^T g_r(x) = 0,
\quad r=1,\dots,m; \quad i=1,\dots,l.
\end{equation*}
\end{definition}
If we define vector $\mathbf{I}(x) := \big(I^1(x),\dots,I^l(x)\big)^T$, then
\[\mathbf{I}'(x)f(x)=\mathbf{I}'(x)g_r(x)=\mathbf{0}, \quad r=1,\dots,m,\]
where $\mathbf{I}'(x)$ is the Jacobian matrix of $\mathbf{I}(x)$.
If system \eqref{e;sde} possesses $l$ conserved quantities $I^i(x)$, $i=1,\dots ,l$, then by the chain rule of Stratonovich calculus we have
\begin{equation*}\label{e;dI}
\begin{aligned}
dI^i\big(X(t)\big) = \nabla I^i\big(X(t)\big)^T f\big(X(t)\big)dt + \sum_{r=1}^{m} \nabla I^i\big(X(t)\big)^T g_r\big(X(t)\big)\circ dW_r(t)=0.
\end{aligned}
\end{equation*}
Then
\begin{equation*}
X(t)\in \mathcal{M}_{X_0}:=\Big\{x\in\mathbb{R}^d \mid I^i(x)=I^i(X_0),
i=1,\dots,l\Big\} \quad t\in[0,T], \quad \text{a.s.},
\end{equation*}
which implies that the solution $X(t)$ of this system will be confined to the invariant submanifold $\mathcal{M}_{X_0}$ generated by $I^i(x)$, $i=1,\dots,l$.
Suppose that we have a supporting one-step method $\widehat{X}_{t,x}$; the projection method then proceeds as follows:
\begin{enumerate}
\item Compute the one-step approximation $\widehat{X}_{t,x}$.
\item Compute $\mathbf{\lambda}\in \mathbb{R}^l$ for $\bar{X}_{t,x} = \widehat{X}_{t,x} + \Phi\lambda$,\, s.t.\, $\mathbf{I}(\bar{X}_{t,x}) = \mathbf{I}(x)$.
\end{enumerate}
Here the matrix $\Phi \in \mathbb{R}^{d\times l}$ defines the direction of the projection, and $\mathbf{\lambda}$ is an $l$-dimensional vector chosen such that $\bar{X}_{t,x}$ belongs to the invariant manifold $\mathcal{M}_{X_0}$. In fact $\Phi$ is not unique, and here we choose $\Phi=\bigl(\mathbf{I}'(\widehat{X}_{t,x})\bigr)^T$, the transpose of the Jacobian matrix of $\mathbf{I}(\cdot)$ at $\widehat{X}_{t,x}$.
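A minimal sketch of step 2 under stated assumptions: the invariants and their Jacobian are supplied as callables \texttt{I} (returning an $l$-vector) and \texttt{I\_jac} (returning an $l\times d$ matrix), both hypothetical names, and the $l$-dimensional nonlinear system $\mathbf{I}(\widehat{X}_{t,x}+\Phi\lambda)=\mathbf{I}(x)$ is solved for $\lambda$ by Newton's method.
\begin{verbatim}
import numpy as np

def project(x_hat, I, I_jac, I_target, tol=1e-12, maxit=50):
    """Solve I(x_hat + Phi @ lam) = I_target for lam by Newton's
    method, with projection direction Phi = I_jac(x_hat).T."""
    Phi = I_jac(x_hat).T           # d x l projection directions
    lam = np.zeros(Phi.shape[1])
    x = x_hat
    for _ in range(maxit):
        x = x_hat + Phi @ lam
        r = I(x) - I_target        # l-dimensional residual
        if np.max(np.abs(r)) < tol:
            break
        J = I_jac(x) @ Phi         # l x l Jacobian dr/dlam
        lam = lam - np.linalg.solve(J, r)
    return x

# e.g. for a quadratic invariant I = (x1^2 + x2^2)/2:
I     = lambda x: np.array([0.5 * (x[0]**2 + x[1]**2)])
I_jac = lambda x: np.array([[x[0], x[1]]])
\end{verbatim}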
The general idea of the projection methods is shown in Fig. \ref{f:proj}.
\begin{figure}[ht]
\centering
\includegraphics[width=1\linewidth]{pic}
\caption{Basic idea of the projection methods.}
\label{f:proj}
\end{figure}
The mean-square convergence result for this kind of projection method is stated below.
\begin{theorem} \cite{Zhou2016}
Suppose that system \eqref{e;sde} possesses $l$ independent conserved quantities $I^i(x),i=1,\dots,l$.
Also assume that a supporting method $\widehat{X}$ applied to \eqref{e;sde} satisfies
\begin{equation}\label{c:p1}
|\mathbb{E}(X_{t,x}(t+h) - \widehat{X}_{t,x}(t+h))| \leq K(1+|x|^2)^{1/2} h^{p+1},
\end{equation}
\begin{equation}\label{c:p2}
\big(\mathbb{E}|X_{t,x}(t+h) - \widehat{X}_{t,x}(t+h)|^2\big)^{1/2} \leq K(1+|x|^2)^{1/2} h^{p+\frac{1}{2}},
\end{equation}
with mean-square order $p$. Assume that $\nabla I^i$ satisfies a global Lipschitz condition and has uniformly bounded derivatives up to order 2, that $|\nabla I^i|$ has a positive lower bound, and that $\big(|\nabla I^i|^2\big)^{-1}$ has a bounded derivative near the invariant manifold. Then the projection method $\bar{X}$ using the supporting method $\widehat{X}$ also has mean-square order $p$.
\end{theorem}
\section{Parareal algorithm for SDEs with conserved quantities}\label{s;prq}
For SDEs with conserved quantities, both theoretical and numerical results show that the original parareal algorithm of Section \ref{s;pr} is unable to deal with this kind of problem in long time simulation \cite{Bal2003,Dai2013a,Gander2014}, so we need another technique to deal with it. The projection method is of course a natural choice for preserving the conserved quantities of the system. However, even if we choose the projection methods described in Section \ref{s;pro} as the propagators $\mathscr{G}$ and $\mathscr{F}$ in the original parareal algorithm, after the sequential corrections the new iterates no longer preserve the conserved quantities. Thus, what we need is an additional projection step to ensure that the approximations in every iteration preserve the conserved quantities as well.
To be precise, we state the corresponding parareal algorithm with projection for SDEs. As in Section \ref{s;pr}, we have a coarse propagator $\mathscr{G}$ and a fine propagator $\mathscr{F}$ for SDE \eqref{e;sde}, but here they converge in the mean-square sense.
\begin{itemize}
\item Initialization: use the coarse propagator $\mathscr{G}$ and time-step $\Delta T$ to compute the initial values $\{X_{n}^{(0)}\}_{n=0}^N$ sequentially
\begin{equation*}
\left\{
\begin{aligned}
X_{n+1}^{(0)} &= \mathscr{G}(T_n, X_n^{(0)},\Delta T), \quad n=0,1,\dots,N-1,\\
X_0^{(0)} &= X_0.
\end{aligned}\right.
\end{equation*}
\item For $k=0,1,\dots$
\begin{enumerate}
\item use the fine propagator $\mathscr{F}$ and small time-step $\frac{\Delta T}{J}$ to compute $\hat{X}_n$ on each sub-interval $[T_n, T_{n+1}]$ independently
\begin{equation*}
\left\{
\begin{aligned}
\hat{X}_{n+\frac{j+1}{J}} &= \mathscr{F}(t_{n+\frac{j}{J}}, \hat{X}_{n+\frac{j}{J}},\Delta T/J), \quad j=0,1,\dots,J-1,\\
\hat{X}_n &= X_n^{(k)}
\end{aligned}\right.
\end{equation*}
\item perform sequential corrections
\begin{equation*}
\left\{
\begin{aligned}
X_{n+1}^{(k+1)} &= \pi_{\mathcal{M}_{X_0}} \Big(\mathscr{G}(T_n, X_n^{(k+1)},\Delta T) + \hat{X}_{n+1} - \mathscr{G}(T_n, X_n^{(k)},\Delta T)\Big), \quad n=0,1,\dots,N-1,\\
X_0^{(k+1)} &= X_0,
\end{aligned}\right.
\end{equation*}
where $\pi_{\mathcal{M}_{X_0}}(\cdot)$ denotes the projection operator.
\item If $\{X_n^{(k+1)}\}_{n=1}^N$ satisfy the stopping criterion, break the loop; otherwise continue the iteration.
\end{enumerate}
\end{itemize}
Note that, in the sequential correction step, we apply an additional projection operator to the original parareal update so that the new iterate is confined to the same invariant manifold, which implies that it preserves the conserved quantities of the system. Furthermore, $X_{n}^{(k)}$ converges to the result of $\mathscr{F}$ with projection $\pi_{\mathcal{M}_{X_0}}$, denoted by $\mathscr{F}{\pi_{\mathcal{M}_{X_0}}}$, instead of that of the fine propagator $\mathscr{F}$ alone.
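Reusing the hypothetical \texttt{project} and \texttt{parareal} sketches from the previous sections, the only change with respect to the original algorithm is the projection applied inside the sequential sweep, e.g.:
\begin{verbatim}
def corrected_sweep_projected(u, hat, t, dT, G, project_to_M):
    """Sequential correction with projection: pi_M is applied to
    every parareal update before it is propagated further."""
    new = u.copy()
    for n in range(len(u) - 1):
        pred = G(t[n], new[n], dT) + hat[n + 1] - G(t[n], u[n], dT)
        new[n + 1] = project_to_M(pred)   # projection onto M_{X0}
    return new

# e.g. project_to_M = lambda y: project(y, I, I_jac, I(X0))
\end{verbatim}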
\section{Numerical experiments}\label{s;numer}
In this section, we perform several typical numerical examples utilizing different parareal algorithms, with or without the projection procedure. In order to investigate the convergence properties of these algorithms for SDEs with conserved quantities through numerical tests, we consider the following schemes:
\begin{itemize}
\item Euler-Maruyama scheme (Euler, EulerP)
\item Milstein scheme (Mil, MilP)
\item Mid-point scheme (Mid, MidP)
\item It\^{o}-Taylor order 1.5 scheme (T32, T32P)
\item It\^{o}-Taylor order 2 scheme (T2, T2P)
\end{itemize}
where the suffix P denotes the projection method introduced in Section \ref{s;pro}.
That is to say, we use these schemes for both the coarse propagator $\mathscr{G}$ and the fine propagator $\mathscr{F}$.
The mean-square error is applied as the stopping criterion of these parareal algorithms:
\begin{equation}\label{e;stopping}
(\mathbb{E} |X_N^{(k)} - X^*_N|^2)^{1/2} \le 10^{-12},
\end{equation}
where $X^*_N$ denotes the last step approximation computed with the small step-size $\frac{\Delta T}{J}$ by the fine propagator $\mathscr{F}$ for the original parareal, or by $\mathscr{F}$ with projection for the parareal with projection of Section \ref{s;prq}. The expectation here is estimated by averaging over 1000 sample paths.
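For completeness, the Monte-Carlo estimator of the mean-square error used in \eqref{e;stopping} can be sketched as follows (our illustration; the array shapes are an assumption):
\begin{verbatim}
import numpy as np

def ms_error(X_k, X_ref):
    """Monte-Carlo estimate of the mean-square error (e;stopping):
    X_k and X_ref are (n_paths, d) arrays, one row per sample path,
    each pair driven by the same Wiener increments."""
    return np.sqrt(np.mean(np.sum((X_k - X_ref)**2, axis=1)))
\end{verbatim}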
\begin{table}[htbp]
\centering
\caption{Line styles.}
\label{tab;linestyle}
\begin{tabular}{@{}lcc@{}}
\toprule
& \multicolumn{2}{c}{Projection} \\
\cmidrule(l){2-3}
Style & Propagators & Correction \\ \midrule
\mgape{\includegraphics{line1}} & $\times$ & $\times$ \\
\mgape{\includegraphics{line2}} & $\times$ & \checkmark \\
\mgape{\includegraphics{line3}} & \checkmark & $\times$ \\
\mgape{\includegraphics{line4}} & \checkmark & \checkmark \\ \bottomrule
\end{tabular}
\end{table}
In Table \ref{tab;linestyle}, we list four line styles to distinguish the different algorithms in the figures below. $\times$ and \checkmark denote whether the projection technique is used in the propagators (both coarse and fine) and in the sequential correction. For instance, in the case of the Euler scheme, the solid line (the fourth style in Table \ref{tab;linestyle}) signifies that we apply the EulerP scheme for $\mathscr{G}$ and $\mathscr{F}$ and make use of the parareal algorithm with projection.
\subsection{Kubo oscillator}
Our first example is a two-dimensional linear SDE of the form
\begin{equation}\label{kubo}
\left\{
\begin{aligned}
dX_1(t) &= -X_2(t) dt - c X_2(t) \circ dW(t),\\
dX_2(t) &= \phantom{-} X_1(t) dt + c X_1(t) \circ dW(t),
\end{aligned}\right.
\end{equation}
where $c$ is a real-valued parameter. Note that this system is also called the Kubo oscillator and is a typical stochastic Hamiltonian system with multiplicative noise \cite{Cohen2014,Milstein2002mul}. It is easy to check that \eqref{kubo} has the quadratic conserved quantity
\begin{equation}\label{e;I_kubo}
I(x,y) = \frac{1}{2}(x^2 + y^2),
\end{equation}
which is also its Hamiltonian function and forms a circle as the invariant submanifold in its phase space. In this example, we choose $c=0.5$, and the initial value $X(0)=(1,0)$.
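Incidentally, for this linear system the Stratonovich chain rule yields the exact solution: the flow is a rotation by the random angle $t + cW(t)$, which preserves \eqref{e;I_kubo} exactly. The following sketch (our illustration, not one of the schemes listed above) can therefore serve as a reference solution when testing the propagators:
\begin{verbatim}
import numpy as np

def kubo_step(x, dt, c, rng):
    """Exact one-step map of (kubo): rotation by theta = dt + c*dW,
    so I = (x1^2 + x2^2)/2 is conserved to machine precision."""
    theta = dt + c * np.sqrt(dt) * rng.standard_normal()
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])            # X(0) = (1, 0)
for _ in range(1000):               # T = 10 with dt = 0.01
    x = kubo_step(x, 0.01, 0.5, rng)
print(0.5 * (x[0]**2 + x[1]**2))    # stays at 0.5
\end{verbatim}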
The convergence results of the parareal algorithms are shown in Figure \ref{fig;kubo}. The left part of this figure is the short time simulation ($T=10$), and the right part displays the long time simulation ($T=1000$). Each row of Figure \ref{fig;kubo} corresponds to a particular scheme which acts as the basic integrator ($\mathscr{F}$ and $\mathscr{G}$) in the parareal algorithm. In the short time test, we observe that all the schemes (with or without projection) converge properly, and the parareal algorithms with projection, using projection schemes as the $\mathscr{G}$ and $\mathscr{F}$ integrators, have the fastest convergence rate (the solid line). For the long time case, the common Euler and Milstein schemes without projection do not converge in the parareal algorithms, so we just plot the results of parareal with projection and EulerP or MilP in the first two rows of the right side of Figure \ref{fig;kubo}. However, the other higher order methods still work in the corresponding parareal algorithms. Note that the Mid scheme preserves the quadratic conserved quantity \eqref{e;I_kubo}; thus, we do not need the MidP scheme in this test.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{kubo_k}
\caption{Kubo oscillator \eqref{kubo} with $c = 0.5$, $X_0=(1,0)$. Mean-square errors vs. iteration number $k$ for original parareal and parareal with projection algorithms using five propagators as $\mathscr{F}$ and $\mathscr{G}$ ($\Delta T = 0.1$, $J=100$). Left: $T=10$. Right: $T=1000$.}
\label{fig;kubo}
\end{figure}
From Figure \ref{fig;kubo}, it also turns out that, with the help of the projection method, the convergence rates of the original parareal and of the parareal with projection are similar whenever both converge. To compare these two algorithms, we then estimate the errors of the conserved quantity $I(x)$ \eqref{e;I_kubo} in Figure \ref{fig;kubo_cq}, where $T=10$. Here EulerP is chosen as both the fine and coarse propagators $\mathscr{F}$ and $\mathscr{G}$. Other parameters are the same as those in Figure \ref{fig;kubo}. The left plot of Figure \ref{fig;kubo_cq} shows the errors of the original parareal and of the parareal with projection after $k=2$ iterations, while the right one demonstrates the latter alone. Therefore, although both have good convergence properties for SDEs with a conserved quantity, the parareal with projection provides a much better reproduction of the conserved quantity $I(x)$. For the other higher mean-square order schemes the results are similar, so we omit them here.
\begin{figure}[htbp]
\includegraphics[width=.45\textwidth]{energy_Euler_T=10_J=100.png}
\includegraphics[width=.45\textwidth]{energy_Euler2_T=10_J=100.png}
\caption{Errors in conserved quantity $I(x)$ \eqref{e;I_kubo} along numerical approximations by two kinds of parareal algorithms ($\mathscr{F}, \mathscr{G}$ = EulerP) after $k=2$ iterations. Left: the original parareal. Right: parareal with projection.}
\label{fig;kubo_cq}
\end{figure}
\subsection{Stochastic pendulum}
Next, we restrict to a two-dimensional mathematical pendulum perturbed by two multiplicative noises \cite{Cohen2014}
\begin{equation}\label{pend}
d
\begin{pmatrix}
X_1(t) \\
X_2(t) \\
\end{pmatrix}
=
\begin{pmatrix}
-\sin\big(X_2(t)\big) \\
X_1(t) \\
\end{pmatrix}
\Big( dt + c_1\circ dW_1(t) + c_2\circ dW_2(t)\Big),
\end{equation}
where $c_1$ and $c_2$ are real-valued parameters. It has a conserved quantity as follows
\begin{equation}\label{e;I_pend}
I(x,y) = \frac{1}{2}x^2 - \cos(y),
\end{equation}
which is a non-quadratic one unlike that of the first example.
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{pend_k}
\caption{Stochastic pendulum with two multiplicative noises \eqref{pend} with $c_1 = 0.5$, $c_2=0.1$, $X_0=(0.2,1)$. Mean-square errors vs. iteration number $k$ for original parareal and parareal with projection algorithms using five propagators as $\mathscr{F}$ and $\mathscr{G}$ ($\Delta T = 0.1$, $J=100$). Left: $T=10$. Right: $T=1000$.}
\label{fig;pend}
\end{figure}
Setting $c_1 = 0.5$, $c_2 = 0.1$, and $X(0) = (0.2, 1)$, we obtain the convergence results in Figure \ref{fig;pend}. As before, the left and right parts show the short time case and the long time case, respectively. The results are similar to those of Figure \ref{fig;kubo}, except that the Mid scheme cannot preserve the (non-quadratic) conserved quantity \eqref{e;I_pend}, so we consider the MidP scheme in the third row. In the case of $T=1000$, the original parareal algorithms without projection integrators are unable to reach the prescribed error during the iteration process. Instead, the projection parareal algorithms using projection schemes as the $\mathscr{G}$ and $\mathscr{F}$ integrators converge properly. In addition, for the Mid, T32P and T2 schemes with the projection technique, the corresponding iteration numbers are all less than 10.
\begin{figure}[htbp]
\includegraphics[width=.45\textwidth]{energy_pend_Euler_T=10_J=100}
\includegraphics[width=.45\textwidth]{energy_pend_Euler2_T=10_J=100}
\caption{Errors in conserved quantity $I(x)$ \eqref{e;I_pend} along numerical approximations by two kinds of parareal algorithms ($\mathscr{F}, \mathscr{G}$ = EulerP) after $k=3$ iterations. Left: the original parareal. Right: parareal with projection}
\label{fig;pend_cq}
\end{figure}
In addition, Figure \ref{fig;pend_cq} displays the errors in the conserved quantity \eqref{e;I_pend} along the original parareal and the parareal with projection, where $T=10$ and the other parameters are the same as those in the test of Figure \ref{fig;pend}. Here the fine and coarse propagators $\mathscr{F}, \mathscr{G}$ are both EulerP. The left plot of Figure \ref{fig;pend_cq} shows the errors of the original parareal and of the parareal with projection after $k=3$ iterations, while the right one demonstrates the latter alone. Therefore, although both have good convergence properties for SDEs with a conserved quantity, the parareal with projection provides a much better reproduction of the conserved quantity \eqref{e;I_pend}.
\subsection{Stochastic cyclic Lotka-Volterra system}
Finally, we consider a three-dimensional cyclic Lotka-Volterra model
\begin{equation}\label{lk}
d\left(
\begin{array}{c}
X_1(t) \\
X_2(t) \\
X_3(t) \\
\end{array}
\right) =
\left(
\begin{array}{c}
X_1(t)\big(X_3(t)-X_2(t)\big) \\
X_2(t)\big(X_1(t)-X_3(t)\big) \\
X_3(t)\big(X_2(t)-X_1(t)\big) \\
\end{array} \right) \Big(dt + c \circ dW(t)\Big),
\end{equation}
where $c$ is also a real-valued constant parameter. This system represents a chaotic environment consisting of three competing species \cite{Chen2014conservative,Hong2011}. And it is easy to check that system \eqref{lk} possesses two independent conserved quantities:
\begin{equation}\label{e;I_lk}
\begin{aligned}
I_1(x,y,z) &= x+y+z, \\
I_2(x,y,z) &= xyz.
\end{aligned}
\end{equation}
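For use with the projection sketch of Section \ref{s;pro}, these two invariants and their Jacobian can be encoded as follows (our illustration, matching the hypothetical \texttt{project} helper introduced there):
\begin{verbatim}
import numpy as np

# Invariants of (lk) and their 2 x 3 Jacobian, in the format expected
# by the hypothetical project() sketch of Section 3:
I     = lambda x: np.array([x[0] + x[1] + x[2], x[0] * x[1] * x[2]])
I_jac = lambda x: np.array([[1.0, 1.0, 1.0],
                            [x[1] * x[2], x[0] * x[2], x[0] * x[1]]])
\end{verbatim}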
\begin{figure}[htbp]
\centering
\includegraphics[width=.9\linewidth]{lk_k}
\caption{Stochastic cyclic Lotka-Volterra system \eqref{lk} with $c = 0.5$, $X_0=(1,2,1)$. Mean-square errors vs. iteration number $k$ for original parareal and parareal with projection algorithms using five propagators as $\mathscr{F}$ and $\mathscr{G}$ ($\Delta T = 0.01$, $J=100$). Left: $T=10$. Right: $T=1000$.}
\label{fig;lk}
\end{figure}
By the conserved quantities above, the phase trajectory of the exact solution to \eqref{lk} is a closed curve in $\mathbb{R}^3$.
In this test, we choose the parameter $c=0.5$ and the initial value $X(0) = (1,2,1)$. In contrast to the previous two examples, we set $\Delta T = 0.01$ in order to investigate the long-term behavior of these methods. The convergence properties of the corresponding parareal algorithms with different schemes are shown in Figure \ref{fig;lk}. From the left part of it, we notice that all these algorithms are able to reach good convergence for $T=10$. However, the figures on the right hand side show something different. First, for the Euler and Mil schemes, we only plot the solid lines, i.e., projection performed in both the propagators ($\mathscr{F}$ and $\mathscr{G}$) and the correction step. Comparing these two figures, we observe that the Mil type scheme reaches the stopping criterion \eqref{e;stopping} faster than the Euler one: it needs nearly one half of the iteration number of the Euler scheme and achieves better accuracy. The last three figures show the convergence properties of the Mid, T32 and T2 type schemes, respectively. Again, the original parareal algorithm with non-projection schemes is unable to meet the stopping criterion \eqref{e;stopping} during the iteration, but if we use the projection schemes or the parareal algorithm with projection, fewer than 10 iterations are needed for this example.
\begin{figure}[htbp]
\includegraphics[width=.45\textwidth]{energy_lk_Euler_T=10_J=100}
\includegraphics[width=.45\textwidth]{energy_lk_Euler2_T=10_J=100}
\caption{Errors in conserved quantity $I(x)$ \eqref{e;I_lk} along numerical approximations by two kinds of parareal algorithms ($\mathscr{F}, \mathscr{G}$ = EulerP) after $k=5$ iterations. Left: the original parareal. Right: parareal with projection}
\label{fig;lk_cq}
\end{figure}
Errors in the two conserved quantities \eqref{e;I_lk} along the original parareal and the parareal with projection algorithms are shown in Figure \ref{fig;lk_cq}, where $T=10$ and the other parameters are the same as those in the test of Figure \ref{fig;lk}. Here $\mathscr{F}$ and $\mathscr{G}$ are again the EulerP scheme, as in the previous examples. The left plot of Figure \ref{fig;lk_cq} shows the errors of the original parareal and of the parareal with projection after $k=5$ iterations, while the right one demonstrates the latter alone. Therefore, although both have good convergence properties for SDEs with conserved quantities, the parareal with projection provides a much better reproduction of the conserved quantities \eqref{e;I_lk}.
\section{Conclusion}
In conclusion, we investigate the possibility of applying a parallel-in-time technique to SDEs with conserved quantities by combining projection methods and a version of the parareal algorithm. For this kind of system, projection methods can be used to guarantee that the numerical approximations preserve certain conserved quantities exactly. However, the long-time simulation of this problem is still challenging, and it is natural to take parallel algorithms into consideration. With the help of the parareal algorithm with projection, we obtain an effective parallel-in-time approach which maintains the geometric properties when simulating SDEs with conserved quantities. In the numerical experiments, three systems, linear and non-linear, are treated by the parareal algorithm with and without the projection technique, respectively. From the numerical results, we conclude that the parareal algorithm converges fast within a few iterations and, with the help of the projection method, it shows clear advantages in preserving conserved quantities. This paper mainly focuses on the numerical simulation of this efficient parareal algorithm for SDEs with conserved quantities; a corresponding theoretical analysis of the algorithm is still lacking. Thus, we will continue to study the convergence properties and other numerical behaviors in future works.
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{introduction}
It is well known since the 80's that the spectra of low-mass \mbox{X-ray} binaries
(LMXBs) hosting a neutron star (NS) can be described up to about 30 keV by
a two-component model representing the contribution from different
emitting regions of the system.
However, the interpretation of the spectra is not unique
as demonstrated using the variety of different models used across the years.
Before the {\it BeppoSAX}\ and {\it RXTE}\ era, two concurring models were classically used to described X-ray emission in LMXBs. In the so-called ``eastern model'' \citep{mitsuda84, mitsuda89} the spectra were fitted by the sum of a soft
blackbody (BB) emission (actually modeled by a multi-colour disk BB spectrum)
attributed to the accretion disk, plus a hotter simple or Comptonized BB
claimed to originate close to the NS surface.
On the other hand, in the ``western model'' interpretation \citep{white86, white88}, the direct BB component was attributed to the NS surface, while
an unsaturated Comptonization spectrum was thought to originate from a hot corona above the inner accretion disk, which supplies most of the soft seed photons for Comptonization.
In fact, even after the advent of {\it BeppoSAX}\ and {\it RXTE}, the persistent emission of NS LMXBs was described by the sum of a BB component plus a thermal Comptonization (TC) spectrum, usually described by the {\rm XSPEC}\ {\sc Comptt}\ model \citep{t94, ht95}.
Despite the significant improvement in our knowledge of the source spectral properties by means of broad-band observations, the BB+TC model turned out to be subject to a dichotomy. Indeed, the cases where the temperature of the direct BB spectrum $kT_{\rm bb}$ is lower \citep[e.g.,][]{ds00a, ds00b, ds01, oosterbroek01, gd02, lavagetto04, paizis05} or higher \citep[e.g.,][hereafter F08]{paizis05, farinelli07, farinelli08} than that of the thermally Comptonized seed photons, $kT_{\rm s}$, both provide in general equally acceptable fits.
The consequences of these results were the interpretation of the BB emission
as due either to the accretion disk ($kT_{\rm bb} < kT_{\rm s}$) or to the NS surface ($kT_{\rm bb} > kT_{\rm s}$).
Moreover, in addition to the persistent X-ray emission, {\it BeppoSAX}\ \citep[e.g.,][]{ds00b,ds02}, {\it RXTE}\ \citep{damico01, ds06} and later also {\it INTEGRAL}\ \citep[][hereafter P06]{paizis06} allowed the discovery in bright NS LMXBs of a transient powerlaw (PL)
X-ray emission above 30 keV.
Motivated by the need to put some order and provide a unified scenario for the different spectral states of NS systems, Paizis et al. (2006, hereafter P06) performed a systematic observational campaign with the {\it ISGRI}\ (20-200 keV) monitor onboard {\it INTEGRAL}. Using long-term average spectra and including former {\it INTEGRAL}\ results on GX 354--0 \citep{falanga06}, P06 classified NS LMXBs into four main states: the \emph{hard/PL}, \emph{low/hard}, \emph{intermediate} and \emph{soft}.
The high-energy ($>$ 20 keV) spectra in different sources were interpreted
by P06 as the result of the interplay between thermal and bulk Comptonization processes whose relative efficiency is ultimately dictated by the mass accretion rate.
At high energies (where the direct BB component is negligible), the \emph{hard/PL} state spectra can be fitted with a simple PL component; the \emph{low/hard} spectra by a TC spectrum of soft ($\la 1$ keV) BB-like photons
off an electron population with $kT_{\rm e} \sim 20-30$ keV and $\tau_0 \la 3$; the \emph{intermediate} state spectra by TC spectrum with $kT_{\rm e} \sim 3-5$ keV and $\tau_0\ga 5$ plus a PL component with photon index $\Gamma \sim 2-3$; the \emph{soft} state spectra by a TC component similar to the \emph{intermediate} state but without the high-energy X-ray tail.
From the observational point of view, the quantities directly measurable in the data
are the cut-off energy $E_c$ of the dominating TC bump and the spectral slope, parametrized through the energy index $\alpha$ (=$\Gamma -1$).
For pure TC spectra, $\alpha$ is tightly correlated with the plasma temperature $kT_{\rm e}$ and optical depth $\tau_0$ \citep[][hereafter TL95]{st80, tl95}, while in the
presence of a converging flow (bulk motion), the shape of the velocity field also determines the emerging spectral slope \citep[][F08]{tmk97, LT99}.
In mathematical terms, $\alpha$ represents the index of the system Green's function (GF), namely as it responds to monochromatic line injection. The resulting emerging spectrum is then obtained as a convolution of the GF with the input seed photon spectrum.
We also emphasize that the \emph{hard/PL}, \emph{hard} and \emph{soft} states spectra of NS LMXBs are characterized by the presence of just \emph{one} Comptonization index (see Fig. 4 in P06). In the first case, its value is interpreted as a result of a mixed TC plus BC process, while in the latter two cases, $\alpha$ is derived from pure TC. On the other hand, the \emph{intermediate} state spectra show \emph{two} Comptonization indexes,
one is related to the persistent TC component and the other one characterizes the transient PL-like hard X-ray emission.
In Section \ref{sect_alpha} we give an overview of the theoretical and observational issues related to the
spectral evolution in X-ray binary systems hosting a NS or a black hole (BH), outlining the differences between the two
classes of sources. We subsequently report on results related to the observed TC index $\alpha$ for a sample of NS sources.
In Section \ref{model} we propose a theoretical model based on the diffusion formalism for radiative transfer in order
to explain the observational results. In Section \ref{results} we discuss the comparison between theory and
data, while in Section \ref{conclusions} we draw our conclusions and give future observational prospects.
\section{Spectral index evolution in NS systems}
\label{sect_alpha}
Starting from the considerations of the previous section, it is important to make a comparison between the index
evolution in accreting BH and NS sources.
The spectral state of BH sources may be generally divided into \emph{low/hard} state (LHS), where the spectrum is dominated by a TC component with electron temperature $kT_{\rm e} \sim$ 60-100 keV, \emph{intermediate} state (IS), with a BB bump (presumably coming from the accretion disk) and a superposed PL high-energy component, and \emph{high/soft} state (HSS) where the BB component gets even stronger and the PL emission gets steeper.
Actually, in BH systems generally \emph{one high energy photon index} $\Gamma$ is observed
[see e.g. recent results on GRS 1915+105 by Titarchuk \& Seifina (2009) and \cite{st09}] and
its value evolves with the source mass accretion rate.
The latter, which is not a directly observable quantity, may be inferred in an indirect way by means of
the normalization of the disk flux $N_{\rm disk}$.
In a $\Gamma$ vs $N_{\rm disk}$/Quasi Periodic Oscillation (QPO) frequency diagram, it was found \citep{st09, ts09,mtf09}
that the photon index $\Gamma$ progressively increases from $\sim$ 1.6-1.8 as the source moves from the \emph{low/hard} to the \emph{high/soft} states, until it reaches saturation around
$\Gamma \sim 2.2-2.4$ (depending on the source).
This interpretation of the observed BH spectra has been performed using the so-called BMC model \citep{tmk97}, whose emerging spectral shape is described by the formula
\begin{equation}
F(E)=\frac{C_N}{A+1} [BB(E)+ A \times G(E,E_0) \ast BB(E_0)].
\label{bmc}
\end{equation}
In formula (\ref{bmc}), the first term of the right-hand side represents
the direct seed photon BB-like spectrum, while the second one gives its modification
due to Comptonization (convolution with GF).
The hardness of the spectrum is dictated by the energy index $\alpha$ of the
GF which, in BMC, is a broken-PL with $G(E,E_0) \propto E^{\alpha+3}$ for
$E < E_0$ and $G(E,E_0) \propto E^{-\alpha}$ for $E > E_0$, where $E_0$ is the monochromatic
input energy. It is important to keep in mind that the GF in BMC does not contain the exponential spectral rollover,
so when the latter is observed in the X-ray spectrum of a source, it can be taken into account by multiplying
BMC with an exponential e-folding factor $\propto e^{-E/E_c}$.
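As a hedged numerical sketch (ours, not the actual {\rm XSPEC}\ implementation), formula (\ref{bmc}) can be evaluated by direct quadrature of the convolution of the broken power-law GF with a blackbody seed spectrum; all normalizations below are arbitrary and the e-folding factor is optional.
\begin{verbatim}
import numpy as np

def bmc_spectrum(E, kT, alpha, A, E_fold=None):
    """Sketch of eq. (bmc): direct BB plus Green's-function
    convolution; E, kT, E_fold in keV, arbitrary normalization."""
    def bb(e):                      # unnormalized BB photon spectrum
        return e**2 / np.expm1(e / kT)
    E0 = np.logspace(-2, 3, 2000)   # seed-photon energy grid [keV]
    out = np.empty_like(E, dtype=float)
    for i, e in enumerate(E):
        # broken power-law GF: E^(alpha+3) below E0, E^(-alpha) above
        g = np.where(e < E0, (e / E0)**(alpha + 3.0), (e / E0)**(-alpha))
        f = g * bb(E0) / E0         # rough per-unit-E0 weighting
        out[i] = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E0))
    F = (bb(E) + A * out) / (A + 1.0)
    if E_fold is not None:          # optional high-energy rollover
        F = F * np.exp(-E / E_fold)
    return F

E = np.logspace(0, 2, 100)          # 1-100 keV
spec = bmc_spectrum(E, kT=1.5, alpha=1.0, A=10.0, E_fold=50.0)
\end{verbatim}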
The index saturation observed in BH sources as they move to the HSS finds a natural explanation in the framework of a bulk-dominated scenario
\citep[see mathematical proof of this statement in e.g.][]{ts09}.
Moreover, it is interesting to note that when the sources are in the
LHS, the points in the $\Gamma$ vs $N_{\rm disk}$/QPO diagram form a plateau around $\Gamma \sim$1.5 before the rising phase.
This clear mapping of the energy (or photon) index evolution is possible in BH sources as they are generally strongly variable
and their X-ray spectrum allows a one-to-one correspondence between the spectral state and the energy
index $\alpha$ (or $\Gamma$) using spectral modeling according to formula (\ref{bmc}). One of the advantages of using
the BMC model is that it is a \emph{generic} Comptonization model, namely it allows one to map the spectral
evolution of sources through the Comptonization index $\alpha$, which is a directly measurable quantity, regardless
of the underlying physical conditions. The theoretical interpretation of the source spectral formation
is in fact a matter for subsequent discussion.
For NS sources, however, the situation is less straightforward.
In fact, most variable sources, such as \mbox{GX 354--0} \citep{falanga06}
or 4U 1608--52 \citep{gd02}, when moving from the \emph{hard} state to the \emph{soft}
state do exhibit pure TC spectra, with the electron temperature $kT_{\rm e}$ progressively decreasing and the optical depth $\tau_0$ increasing.
On the other hand, bright LMXBs of the GX class such as the classical six known
Z sources (Sco X--1, GX 17+2, Cyg X--2, GX 340+0, GX 5--1 and GX 349+2) and,
more recently, GX 13+1, show only a small evolution of their persistent X-ray continuum
(dominated by the strong TC bump with $kT_{\rm e} \sim$ 3-5 keV and $\tau_0 \gg 1$). In addition they are characterized by the presence of a transient hard X-ray PL-like component
(\emph{intermediate} state).
Other persistently bright sources, such as GX 3+1, GX 9+1 and GX 9+9,
have been only observed in the \emph{soft} state with no evidence of hard X-ray tails.
In fact, mapping the evolution of the transient hard X-ray tail of NS sources in the
\emph{intermediate} state has not yet been possible because of the insufficient statistics
available at high energies. Some attempts \citep{ds00a,ds02,ds06} were actually undertaken to establish changes in the intensity of the transient hard tail by fitting it with a simple PL and, when statistics was poor, fixing the index
and allowing only its flux to vary. But nothing could be concluded about the index evolution.
Thus, at the present level of knowledge, serious investigations can be performed
only on the evolution of the spectral index related to the persistent TC component.
This was done for the first time by \citet[][hereafter TS05]{ts05}, who performed
a systematic analysis of the variable NS X-ray binary 4U 1728-34 from the \emph{hard} state
to the \emph{soft} state using observations from the Proportional Counter Array (PCA, 3-30 keV) onboard {\it RXTE}.
The source spectra were fitted with a two-BMC model, with $A \gg 1$
(see formula [\ref{bmc}]) and with the GF spectral index fixed equal for both BMC components. Thus, the model used by TS05 actually was
$$F(E)= C_{ns}G(E,E_0) \ast BB(E_0) + C_{disk}G(E,E_0) \ast BB(E_0).$$
The main result found by TS05 was that as the source moved from
the \emph{hard} to the \emph{soft} state, the index $\Gamma$ ($=\alpha+1$) progressively increased with no evidence of saturation, unlike the BH case. Actually, the final \emph{soft} state of 4U 1728--34
was represented by the sum of two BB components, because for $\alpha \gg 1$, $G(E,E_0) \ast BB(E_0) \approx BB(E).$
This result, however, needs a revision. Namely, fitting the \emph{soft} state
spectrum of NS LMXBs with a two-BB model is possible only because of the lack of data below 3 keV, which are of key importance.
The limited broad-band coverage, however, leaves open the possibility that
different models give equally good fits. For example, recent results of the analysis of PCA/{\it RXTE}\ data for 4U 1608--52 by Ding et al. (2010 in preparation) show that the \emph{soft} state spectrum of the source
can be fitted either by a two-BB model or by a single TC model (e.g., {\sc Comptb})
with $\alpha \sim$ 1. Note also that Falanga et al. (2006) fitted the \emph{soft state}
spectrum of GX 354--0 with a DBB+{\sc Comptt}\ model, from which the inferred $\alpha$-value
is again $\sim$ 1.
Actually, when looking at the {\it BeppoSAX}\ results obtained over years of observations of NS LMXB sources, it turns out that the \emph{soft} state spectra of these systems
need to be described by the sum of a BB component
plus an unsaturated TC spectrum with cut-off energy below 10 keV, rather than by two BBs.
\subsection{Observational results}
We considered a sample of sources taken from the literature and for which we can make a fiducial measurement of $\alpha$.
In our choice of the sources, we adopted the criterion to consider those in which
$\alpha$ was determined either directly from the fit, as it can be done using the
{\sc Comptb}\ model (where $\alpha$ is a free parameter, see F08), or can be derived
from the temperature and optical depth obtained by the {\sc Comptt}\ model \citep{t94}.
The sources belonging to the first case are \mbox{Sco X--1}, \mbox{GX 17+2}, \mbox{Cyg X--2}, \mbox{GX 340+0}, \mbox{GX 3+1}
(see Table 2 in F08) and \mbox{GS 1826--238} \citep[see Table 2 in][]{cocchi10}.
For \mbox{GX 349+2} \citep{ds00a} and \mbox{GX 354--0} \citep{ds00b} we derived the value of the spectral index
$\alpha$ using the equation for the non-relativistic regime
(see Eq. [22] in Titarchuk \& Lyubarskij 1995, hereafter TL95)
\begin{equation}
\alpha=-\frac{3}{2}+\sqrt{\frac{9}{4}+\frac{\beta}{\Theta}},
\label{alpha_general}
\end{equation}
where $\Theta \equiv kT_{\rm e}/m_e c^2$ and the $\beta$-parameter is
defined in formula (17) of TL95 for spherical geometry,
as assumed by the authors. In the case of X1658--298, \cite{oosterbroek01} assumed a slab geometry, thus $\alpha$ was obtained from equation (\ref{alpha_general}) and formula (17) of TL95 for a slab geometry.
For 1E 1724--3045, \cite{barret00} report the best-fit value of optical depth $\tau_0$ of Comptonization region both for the case of spherical and slab geometry. We checked that the two derived values of $\alpha$ are perfectly consistent.
The errors on $\alpha$, for the sources for which $kT_{\rm e}$ and $\tau_0$ were reported, have been computed by noting that the
function $\alpha[kT_{\rm e}, \beta(\tau_0)]$ attains its absolute minimum and maximum
values at the boundary of the box of its domain delimited by the minimum and maximum values ($kT^{\rm min}_{\rm e}$, $kT^{\rm max}_{\rm e}$) and ($\tau^{\rm min}_0$, $\tau^{\rm max}_0$)
obtained by computing the errors at the 90\% confidence level for the electron temperature and optical depth with {\rm XSPEC}.
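As an illustration of this procedure, the Python sketch below computes $\alpha$ from equation (\ref{alpha_general}) and brackets its error by evaluating the function at the corners of the $(kT_{\rm e}, \tau_0)$ box; for $\beta(\tau_0)$ we adopt, by way of example, the slab interpolation formula quoted later in equation (\ref{beta_general}), and the confidence intervals below are purely illustrative, not the published best-fit values.
\begin{verbatim}
import numpy as np
from itertools import product

MEC2 = 511.0  # electron rest energy (keV)

def beta_slab(tau0):
    # Interpolation formula for the beta-parameter (slab geometry)
    return (np.pi**2 / (12.0 * (tau0 + 2.0 / 3.0)**2)
            * (1.0 - np.exp(-1.35 * tau0))
            + 0.45 * np.exp(-3.7 * tau0) * np.log(10.0 / (3.0 * tau0)))

def alpha(kte, tau0):
    # Non-relativistic regime: alpha = -3/2 + sqrt(9/4 + beta/Theta)
    return -1.5 + np.sqrt(2.25 + beta_slab(tau0) * MEC2 / kte)

# Illustrative 90% confidence intervals for kTe (keV) and tau0
kte_lo, kte_hi = 2.5, 3.1
tau_lo, tau_hi = 5.0, 6.0
corners = [alpha(k, t)
           for k, t in product((kte_lo, kte_hi), (tau_lo, tau_hi))]
print(min(corners), max(corners))  # extrema lie on the box boundary
\end{verbatim}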
In Figure \ref{alpha_data} we report the measured values of $\alpha$ for this
sample of sources as a function of the electron temperature $kT_{\rm e}$.
This parameter can indeed be considered a good tracer of the source spectral state because
$kT_{\rm e}$ decreases when sources move from the \emph{hard} to \emph{soft} state as a result of a more efficient electron cooling by the enhanced seed photon supply.
Moreover, the electron temperature $kT_{\rm e}$ is a directly measurable quantity because it is related to the cut-off energy of the spectrum and it presents the advantage of being distance-independent.
On the other hand, the instrumental energy-band coverage and
accumulation time can play some role in biasing the measured index value.
For instance, \cite{falanga06} performed a systematic analysis of GX 354--0 as a function
of its position on the hardness-intensity diagram when the source moved from the \emph{hard} to \emph{soft} state.
The authors fitted the 3-100 keV spectrum with a multicolour disk BB \citep[DBB,][]{mitsuda84} plus {\sc Comptt}\ model using the slab geometry.
We computed the derived $\alpha$-values from their best-fit parameters, but the trend was not monotonic, covering the range $\sim$ 1-3 and
reflecting the behaviour of the $kT_{\rm e}-\tau_0$ parameters
(the decrease of $kT_{\rm e}$ was not accompanied by the expected increase of $\tau_0$).
It is not clear whether this was due to the lack of data below 3 keV, which is necessary to constrain the seed photon temperature, or due to the accumulation time.
Thus we prefer to exclude these measurements.
It is also worth mentioning that \cite{oosterbroek01} performed a {\it BeppoSAX}\ analysis of GX 3+1 and Ser X--1, both sources characterized by typical \emph{soft} state spectra, fitting them with a DBB+{\sc Comptt}\ model, but they did not specify which geometry
(sphere or slab) was assumed. We found $\alpha_{sph}=2.44^{+0.89}_{-0.59}$ and $\alpha_{sl}=0.87^{+0.42}_{-0.25}$ for GX 3+1, and $\alpha_{sph}=1.41^{+0.52}_{-0.34}$ and $\alpha_{sl}=0.45^{+0.20}_{-0.12}$ for
Ser X--1, respectively.
Looking at Fig. \ref{alpha_data} we note that for all the analyzed sources the spectral index $\alpha$
lies in a belt around $1 \pm 0.2$, apart from the case of GX 354--0, where $\alpha \sim$ 1.6.
We give a possible interpretation of these {\it observational} results in the Discussion.
\begin{figure}
\centering
\includegraphics[width=6cm, angle=-90]{fig1.ps}
\caption{Thermal Comptonization index $\alpha$ for sources in different spectral states as a
function of the electron temperature $kT_{\rm e}$.
Reference papers: \mbox{Cyg X--2}, Farinelli et al. (2009); \mbox{Sco X--1}, \mbox{GX 17+2},
\mbox{GX 340+0} and \mbox{GX 3+1}, Farinelli et al. (2008); \mbox{GX 354--0}, Di Salvo et al. (2000b);
\mbox{GX 349+2}, Di Salvo et al. (2001), \mbox{X 1658--298}, Oosterbroek et al. (2001);
\mbox{GS 1826--238}, Cocchi et al. (2010); \mbox{1E 1724--3045}, Barret et al. (2000).}
\label{alpha_data}
\end{figure}
\section{A model of Comptonization region in the neutron star case}
\label{model}
The determination of the spectral index $\alpha$ resulting from Comptonization of seed photons in a bounded medium
has been addressed for a long time. The emerging radiation spectrum depends on several parameters such as the geometry
of the plasma (e.g., slab or sphere), the electron temperature and optical depth, and the space distribution of
the seed photons inside the medium. Sunyaev \& Titarchuk (1980) report the value of $\alpha$ obtained from the
solution of the stationary radiative transfer equation in the non-relativistic case (Fokker-Planck approximation),
obtained as a convolution of the time-dependent equation with the time-escape probability distribution P(u)
for the case of source photons distributed according to the first eigenfunction of the space operator.
Later \cite{t94}, \cite{ht95} and TL95 extended the results to the sub-relativistic case considering both slab and spherical
geometry.
In order to understand what happens in NS LMXB sources, one has to consider the hydrodynamical
conditions in the region between the accretion disk and the NS surface.
We will refer to this region as the transition layer (TL), often also called boundary layer.
Actually, the production of a strong TC bump in the persistent X-ray spectra of NS LMXBs is thought to originate in
this TL, namely the region where matter deviates from its Keplerian angular velocity in order to match that of the slowly
spinning NS. The radiative and hydrodynamical configuration of the TL is mostly dictated by the Reynolds number
$\gamma$ \citep{tlm98,to99}, which is proportional to the mass accretion rate and is essentially the inverse
of the viscosity parameter of the Shakura-Sunyaev disk. In particular, $\gamma$ \ determines the radial extension
of the TL and in turn, together with the mass accretion rate, its optical depth.
It is worth pointing out that the determination of the vertical height-scale of the TL is in fact a very complicated problem, as it would require
a complete 3D magneto-hydrodynamical treatment.
Using the slim disk (thus vertically-averaged) equations for determining the radial thermo-hydrodynamical structure
of the TL may in fact be an issue \citep[e.g.,][]{ps01}.
The enhanced radiation and thermal pressure due to the higher electron temperature are expected to increase
the vertical height-scale of the TL up to H $\approx R_{ns}$. Moreover, the solution of the angular momentum
equation \citep{tlm98,to99} for Reynolds number $\gamma > 5-10$ gives a TL radial extension $\Delta R_{TL} \la 0.5 R_{NS}$.
With these characteristic length-scales, it seems more plausible to approximate the TL geometry by a slab whose normal is
directed along the disk plane.
It is worth emphasizing that Haardt \& Maraschi (1993, hereafter HM93) determined the theoretical Comptonization index
$\alpha$ considering a two-phase model for accretion disks in AGN in which a hot corona is surrounding and sandwiching
the underlying cold accretion disk. The model can in principle be applied also to the case of solar-mass BH
sources. HM93 assume that the corona and the disk are two slabs at significantly different temperatures,
put in contact with each other. The authors concentrated on the case of high temperature ($kT_{\rm e} \ga $ 50 keV) and
low optical depth ($\tau < $1) for the corona, so that the diffusion approximation cannot hold, unlike what
we are considering. One of the consequences of the high-temperature treatment is that electron scattering is
anisotropic with a significant fraction of the power back-irradiating the disk. In the HM93 model, the \emph{inner}
boundary condition of the hot corona is the disk cool surface
(with $kT_{\rm bb} < $ 5 eV) with energy-dependent albedo.
Note also that in their geometry, 100\% of the disk flux is intercepted and reprocessed by the top plasma.
In the geometry considered here on the other hand, it is possible that part of the disk emission directly escapes
the system, while a fraction of its flux is intercepted by the TL. We are actually interested here in
this portion of the intercepted disk flux (see next Section).
Because of these differences between the two models, a direct comparison of
the derived theoretical results is not straightforward. The reader can refer to the paper of HM93 for further details,
in order to better understand the differences between our approach and theirs.
\subsection{Energy release in the NS transition layer}
The energy balance in the TL is dictated by Coulomb collisions with protons (gravitational energy release), while inverse Compton
and free-free emission are the main cooling channels
(see a formulation of this problem in the pioneering work by Zel'dovich \& Shakura 1969).
In fact, for the characteristic electron temperatures (3 keV $\la kT_e\la$ 30 keV) and density values
($\la 10^{-5}$ g cm$^{-3}$) of these regions in LMXBs, Compton cooling dominates over free-free emission and
the relation between the energy flux per unit surface area of the corona $Q_{\rm cor}$, the radiation energy density
$\varepsilon(\tau)$ and the electron temperature $T_e$ is given by (see also TLM98)
\begin{equation}
\frac{Q_{cor}}{\tau_0} \approx 20.2\varepsilon(\tau)T_e(\tau),
\label{energy_balance}
\end{equation}
where $\tau_0$ is the characteristic optical depth of
the TL.
The distribution $\varepsilon(\tau)$ is obtained as a solution of the diffusion equation
\begin{equation}
\frac{d^2\varepsilon}{d\tau^2} =-\frac{3Q_{\rm tot}}{c\tau_0},
\label{diffusion_equation}
\end{equation}
where now $Q_{\rm tot}=Q_{\rm cor} + Q_{\rm disk}$ is the sum of the corona (TL) and intercepted disk fluxes, respectively. The two boundary conditions for equation (\ref{diffusion_equation})
are written as
\begin{equation}
\frac{d\varepsilon}{d\tau} \vert_{\tau=\tau_0}=0,
\label{bc1}
\end{equation}
\begin{equation}
\frac{d\varepsilon}{d\tau}- \frac{3}{2}\varepsilon\vert_{\tau=0}=0
\label{bc2}
\end{equation}
which represent the case of albedo A=1 at the NS surface ($\tau=\tau_0$) and no diffusive emission falling from outside onto the outer corona boundary
($\tau=0$). The condition A=1 arises from the well-established observational result of a NS temperature $kT_{\rm bb} \sim$ 1
keV, which implies the presence of an ionized NS atmosphere. This is different from the case considered by HM93, where the
cool disk temperature ($<$ 5 eV) gives rise to an energy-dependent albedo with photoelectric absorption for impinging
photons with energy $ \la$ 10 keV. Another important consideration to keep in mind is that the diffusion equation
(\ref{diffusion_equation}) is to be considered frequency-integrated. This means that we are not dealing with the
specific (energy-dependent) shape of the reflected spectrum from the NS surface; we are only dealing with
the total energy density.
The solution for $\varepsilon(\tau)$ is then given by
\begin{equation}
\varepsilon(\tau)=\frac{2Q_{\rm tot}}{c} \left[1+ \frac{3}{2}\tau_0\left(\frac{\tau}{\tau_0} -
\frac{\tau^2}{2\tau_0^2}\right)\right].
\label{ene_vs_tau}
\end{equation}
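One can verify directly that expression (\ref{ene_vs_tau}) solves equation (\ref{diffusion_equation}) with the boundary conditions (\ref{bc1})--(\ref{bc2}), e.g.\ with a short symbolic computation (a Python/{\tt sympy} sketch, where $Q$ stands for $Q_{\rm tot}$):
\begin{verbatim}
import sympy as sp

tau, tau0, Q, c = sp.symbols('tau tau0 Q c', positive=True)
eps = 2*Q/c * (1 + sp.Rational(3, 2)*tau0*(tau/tau0 - tau**2/(2*tau0**2)))

# Diffusion equation: d^2(eps)/dtau^2 = -3*Q/(c*tau0)
assert sp.simplify(sp.diff(eps, tau, 2) + 3*Q/(c*tau0)) == 0
# Reflection at the NS surface (tau = tau0): d(eps)/dtau = 0
assert sp.simplify(sp.diff(eps, tau).subs(tau, tau0)) == 0
# No incoming diffusive flux at the outer boundary (tau = 0)
assert sp.simplify((sp.diff(eps, tau)
                    - sp.Rational(3, 2)*eps).subs(tau, 0)) == 0
# Mean energy density: <eps> = Q*(2 + tau0)/c (cf. Eq. [average_ene] below)
assert sp.simplify(sp.integrate(eps, (tau, 0, tau0))/tau0
                   - Q*(2 + tau0)/c) == 0
\end{verbatim}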
It is worth pointing out that $d\varepsilon/d\tau > 0$ for $\tau<\tau_0$; since
$F_{rad} \propto d\varepsilon/d\tau$, for NS sources the radiative
force always acts against gravity, unlike the case of BH sources.
Note that the spectra of NS sources
both in the soft and hard state can be adequately fitted by single-temperature Comptonization models (Paizis et al. 2006,
Falanga et al. 2005, Farinelli et al. 2008, Cocchi et al. 2010). This observational fact demonstrates
that the assumption of an isothermal plasma in the TL is applicable to the analysis of X-ray data
from NS binaries. The question is how one can estimate this average temperature of the TL, which is, in fact, established by photon scattering and cooling processes.
In order to establish this average plasma temperature $T_e$ one should estimate the mean energy density in
the TL as
\begin{equation}
<\varepsilon(\tau)>=\frac{1}{\tau_0}\int^{\tau_0}_0 \varepsilon(\tau) d\tau=\frac{
Q_{\rm tot}}{c}(2+\tau_0).
\label{average_ene}
\end{equation}
It is worth emphasizing the similarity between equations (\ref{energy_balance}) and (\ref{average_ene}) in
our paper and equation (13) in \cite{bk80} who studied the radiation emission due to gas accretion onto a NS.
If we now substitute the result of equation (\ref{average_ene}) into equation (\ref{energy_balance}), after a bit of
straightforward algebra we obtain
\begin{equation}
\frac{kT_{\rm e} \tau_0 (2+\tau_0)}{m_e c^2}=\frac{0.25}{1+Q_{\rm disk}/Q_{\rm cor}}.
\label{ktetau}
\end{equation}
Keeping in mind the definition of the Compton parameter $Y \approx A N_{sc}$ (Rybicki \& Lightman 1979),
where $A\sim 4kT_e/m_ec^2$ and $N_{sc}\sim$ Max$(\tau_0^2, \tau_0)$ are the average photon energy gain per scattering and the average
number of scatterings, respectively, we can
rewrite equation (\ref{ktetau}) as follows
\begin{equation}
Y \sim \frac{1}{1+Q_{\rm disk}/Q_{\rm cor}}.
\label{compton_par}
\end{equation}
Equation (\ref{compton_par}) is one of the main points of our theoretical model and shows that in the diffusion approximation the Compton parameter, which determines
the spectral index, is just a function of the corona and disk cooling fluxes.
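As a quick numerical illustration (a sketch; the $kT_{\rm e}$ values below are merely indicative of the sources discussed in Section \ref{results}), equation (\ref{ktetau}) can be inverted for $\tau_0$ and the corresponding Compton parameter evaluated:
\begin{verbatim}
import numpy as np

MEC2 = 511.0  # electron rest energy (keV)

def tau0_from_kte(kte, q_ratio=0.0):
    # Invert Eq. (ktetau): tau0*(2 + tau0) = 0.25*(MEC2/kTe)/(1 + q_ratio)
    rhs = 0.25 * MEC2 / kte / (1.0 + q_ratio)
    return -1.0 + np.sqrt(1.0 + rhs)

for kte in (3.0, 20.0, 30.0):
    tau0 = tau0_from_kte(kte)
    y = 4.0 * kte / MEC2 * tau0 * (2.0 + tau0)  # ~ 1/(1 + Qdisk/Qcor)
    print(kte, round(tau0, 2), round(y, 2))     # tau0 ~ 5.6, 1.7, 1.3
\end{verbatim}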
\subsection{The radiative transfer model of the small variations of index $\alpha$ during spectral state transitions in NS sources}
As we have shown in Section 2.1, the observed spectral index $\alpha$ of most NS LMXBs undergoes small variations around 1,
namely $\alpha= 1\pm 0.2$, when the electron temperature of the Compton cloud varies from about 2.5 to 25 keV (see Fig. \ref{alpha_data}).
Thus here we propose a model of the spectral formation in the TL (corona) which can explain the stability of the index $\alpha$ if the energy release in the disk is much less than that in the
TL. Namely, we show that $\alpha \approx 1+ \rm {O} (Q_{disk}/Q_{cor})$.
As already pointed out in classical works (Sunyaev \& Titarchuk 1980, 1985, hereafter ST85, Titarchuk 1994), spectral formation in plasma clouds of finite dimensions (bounded medium) is related to the distribution law of the number of scatterings that seed photons experience before escaping.
If $u_{av}$ denotes the average number of photon scatterings and the dimensionless scattering number is $u=N_e \sigma_T c t$, then the distribution law for $u\gg u_{av}$ is given by (see ST85)
\begin{equation}
P(u)=A(u_{av},\tau_0) e^{-\beta u}.
\label{prob_law}
\end{equation}
For a diffusion regime, when $\tau_0 \ga 1.5$, one finds $\beta=\lambda^2_1/3$, where $\lambda_1$ is the first eigenvalue of the diffusion space operator.
As reported in ST85, the eigenvalue problem for photon diffusion in a slab
with total optical depth $2\tau_0$ is derived from the solution of the differential equation
for the zero-moment intensity
\begin{equation}
\frac{d^2J}{d\tau^2}+\lambda^2 J=0,
\label{eigenval_eq}
\end{equation}
with absorption boundary conditions $dJ/d\tau-(3/2)J=0$ and $dJ/d\tau+(3/2)J=0$,
for $\tau=0$ and $\tau=2\tau_0$, respectively.
This leads to the transcendental equation for the eigenvalues $\lambda_n$, $n=1,2,3,\ldots$
\begin{equation}
\tan(\lambda_n\tau_0)=\frac{3}{2\lambda_n},
\end{equation}
which has the solution for $\tau_0 \gg 1$ and $n=1$
\begin{equation}
\lambda_1= \frac{\pi}{2(\tau_0 + 2/3)}.
\end{equation}
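The asymptotic expression for $\lambda_1$ is easily checked against the numerical root of the transcendental equation (a Python/{\tt scipy} sketch; the $\tau_0$ values are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def lambda1(tau0):
    # Smallest positive root of tan(lam*tau0) = 3/(2*lam),
    # which lies in the interval (0, pi/(2*tau0))
    f = lambda lam: np.tan(lam * tau0) - 3.0 / (2.0 * lam)
    return brentq(f, 1e-9, np.pi / (2.0 * tau0) - 1e-9)

for tau0 in (2.0, 5.0, 20.0):
    print(tau0, lambda1(tau0), np.pi / (2.0 * (tau0 + 2.0 / 3.0)))
\end{verbatim}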
It is important to emphasize that the same result for $\lambda_1$ is obtained by solving equation (\ref{eigenval_eq}) for a slab with total optical depth
$\tau_0$ but with reflection condition $dJ/d\tau=0$ at $\tau=\tau_0$.
This is not surprising as such a condition is actually met at the center of a symmetric slab with total optical depth $2\tau_0$ and $0 \leq \tau \leq 2\tau_0$. Thus, the same mathematical result is obtained for two different geometrical configurations. In the first case (symmetric slab with total optical depth $2\tau_0$) it represents, e.g., an accretion disk
(ST85 treatment), while in our present case we are dealing with a boundary layer
with total optical depth $\tau_0$, whose asymmetry is due to the presence of a reflector
(NS surface) at one of the two boundaries. In both cases one obtains
\begin{equation}
\beta=\frac{\pi^2}{12(\tau_0 + 2/3)^2}.
\label{beta_as}
\end{equation}
Generalizing to the case of arbitrary optical depth $\tau_0$, the diffusion
operator $L_{\rm diff}=(1/3)d^2J/d\tau^2$ is replaced by the radiative transfer operator $L_{\tau}$ applied to $J(\tau)$ (see ST85 and TL95)
which for the disk geometry is
\begin{equation}
L_{\tau}J = \frac{1}{2}\int^{2\tau_0}_0 J(\tau') E_1(|\tau-\tau'|) d\tau' -J,
\end{equation}
where $E_1(z)$ is the exponential integral of the first order.
In this case, the derived value for $\beta$ is \citep[][TL95]{t94}
\begin{equation}
\beta=\frac{\pi^2}{12(\tau_0 + 2/3)^2} (1-e^{-1.35\tau_0}) + 0.45e^{-3.7\tau_0} {\rm ln}\frac{10}{3\tau_0}.
\label{beta_general}
\end{equation}
Now having in mind equation (\ref{ktetau}), we introduce the parameter
\begin{equation}
\beta_{\rm diff}=\frac{1}{\tau_0 (2+\tau_0)},
\label{beta_diff}
\end{equation}
and in Figure \ref{beta_vs_tau} we show the values of $\beta$ for the cases reported
in formulas (\ref{beta_as}), (\ref{beta_general}) and (\ref{beta_diff}) as a function
of optical depth $\tau_0$.
It is possible to see that for $\tau_0 \ga 1.5$ all values of $\beta$ are actually close to each other,
while they deviate for $\tau_0 \la 1$. For example, their difference is about 30\% for $\tau_0$=1.
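The comparison shown in Figure \ref{beta_vs_tau} can be reproduced with a few lines (a sketch; the grid of $\tau_0$ values is arbitrary):
\begin{verbatim}
import numpy as np

def beta_as(tau0):        # Eq. (beta_as)
    return np.pi**2 / (12.0 * (tau0 + 2.0 / 3.0)**2)

def beta_general(tau0):   # Eq. (beta_general)
    return (beta_as(tau0) * (1.0 - np.exp(-1.35 * tau0))
            + 0.45 * np.exp(-3.7 * tau0) * np.log(10.0 / (3.0 * tau0)))

def beta_diff(tau0):      # Eq. (beta_diff)
    return 1.0 / (tau0 * (2.0 + tau0))

for tau0 in (0.5, 1.0, 1.5, 3.0, 10.0):
    b = (beta_as(tau0), beta_general(tau0), beta_diff(tau0))
    print(tau0, [round(x, 3) for x in b],
          round((max(b) - min(b)) / max(b), 2))  # ~0.30 at tau0 = 1
\end{verbatim}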
Using the definition of $\alpha$ (see Eq. \ref{alpha_general}), where $\beta$ is replaced by $\beta_{\rm diff}$
(Eq. \ref{beta_diff}), and equation (\ref{ktetau}), we obtain the diffusion spectral index as
\begin{equation}
\alpha_{\rm diff}= -\frac{3}{2}+\sqrt{\frac{9}{4}+ \frac{1+Q_{\rm disk}/Q_{\rm cor}}{0.25}},
\label{alpha_diff}
\end{equation}
or $ \alpha_{\rm diff}=1+0.8~ Q_{\rm disk}/Q_{\rm cor}$ for $Q_{\rm disk}/Q_{\rm cor}<1$.
Thus, as follows from Eq. (\ref{alpha_diff}), in the diffusion regime the TC spectral index can be expressed
in terms of $Q_{\rm disk}/Q_{\rm cor}$ (the intercepted disk over coronal fluxes), instead of TL electron temperature $kT_e$ and optical depth $\tau_0$
(see Eqs. [\ref{alpha_general}] and [\ref{beta_as}]).
In Figure \ref{alpha_plot} we present a plot of $\alpha_{\rm diff}$ as a function
of $Q_{\rm disk}/Q_{\rm cor}$, which shows that it ranges from 1 to about 1.7 as $Q_{\rm disk}/Q_{\rm cor}$
increases from 0 to 1. One can see that the observed values of the index $\alpha\sim1$ occur if the energy
release in the disk is much less than that in the TL, namely if $Q_{disk}/Q_{cor}\ll1$.
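Equation (\ref{alpha_diff}) and its linear approximation are immediate to evaluate (a sketch):
\begin{verbatim}
import numpy as np

def alpha_diff(q):
    # Eq. (alpha_diff) with q = Q_disk/Q_cor
    return -1.5 + np.sqrt(2.25 + 4.0 * (1.0 + q))

for q in (0.0, 0.1, 0.5, 1.0):
    print(q, round(alpha_diff(q), 2), round(1.0 + 0.8 * q, 2))
# alpha_diff: 1.00, 1.08, 1.37, 1.70; linear approx: 1.00, 1.08, 1.40, 1.80
\end{verbatim}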
\begin{figure}
\centering
\includegraphics[width=6cm, angle=-90]{fig2.ps}
\caption{Values of the $\beta$-parameter as a function of the optical depth $\tau_0$. The solid, dashed and dotted lines correspond to definition of $\beta$ given in equations (\ref{beta_as}), (\ref{beta_general}) and (\ref{beta_diff}), respectively.}
\label{beta_vs_tau}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6cm, angle=-90]{fig3.ps}
\caption{Theoretical thermal Comptonization index $\alpha$ as a function of the ratio $Q_{\rm disk}/Q_{\rm cor}$ according to equation (\ref{alpha_diff}).}
\label{alpha_plot}
\end{figure}
\section{Results and discussion}
\label{results}
In this Paper we compare the observational data of a sample of NS sources (Fig. \ref{alpha_data}) with the theoretical
results which follow from the radiative transfer model in the diffusion approximation.
The data show that \emph{independently of the source spectral state}, which we have parametrized through
the measured TL electron temperature $kT_e$, the spectral index $\alpha= 1\pm 0.2$.
We derived an estimate of the energy index $\alpha$ for TC spectra in NS LMXBs using an equation for the diffusion approximation in a slab geometry with the reflection (100 \%) boundary condition, which is valid for optical depth $\tau_0 \ga 1.5$. In particular, we find that in
this approximation it is possible to express the value of $\alpha$ as a function of the ratio of the flux from the accretion
disk intercepted by the corona (TL, see Eq. [\ref{alpha_diff}]) and the energy release in the corona itself.
The agreement of the model with the data takes place when the condition $Q_{\rm disk}/Q_{\rm cor} \ll 1$ holds
(see Fig. \ref{alpha_plot}).
This behavior of the spectral index $\alpha$ actually has important consequences for constraining the accretion geometry of NS LMXBs.
In fact, as already pointed-out in the Introduction, the broad-band persistent X-ray spectra of these sources
are usually fitted by a two-component model consisting of BB-like emission plus a strong TC bump. Both the
cases for which the BB temperature ($kT_{\rm bb}$) is lower or higher than that of the seed photons of the TC bump ($kT_{\rm s}$) provide
equally acceptable fits. In the first case ($kT_{\rm bb} < kT_{\rm s}$), the origin of the direct BB component
is attributed to the accretion disk, in the second case ($kT_{\rm bb} > kT_{\rm s}$) to the NS surface. This dichotomy in the spectral analysis has long remained
unresolved. Some help in this direction has come with the discovery of the transient
X-ray tails in some of the brightest sources (see references in the Introduction): in fact,
if the origin of the hard PL-like X-ray emission is attributed to a combined thermal plus bulk (converging flow)
effect in the innermost part of the TL close to the NS surface,
then it becomes natural to suggest that the observed direct BB photon spectrum mostly
originates in the TL/NS surface region, providing the seed photons for bulk Comptonization
\citep[see Fig. 1 in][]{farinelli07}.
The theoretical results derived in this Paper, in our opinion, strengthen this
scenario. In fact, when computing the total spectral energy budget of the sources
in the 0.1-40 keV band (where most of the emission is produced), the dominating TC bump carries more than 70\% of
the source luminosity, with the remaining part due to the direct BB component.
If this BB originates close to the NS surface, then it is evident that the disk contribution
to the X-ray luminosity is very small. Given that $Q_{\rm disk}$ in equation (\ref{alpha_diff}) represents
the flux from the accretion disk intercepted by the corona and thus is smaller or at most equal
to the directly emitted part, this eventually leads to $\alpha \sim 1$. In this framework,
we can make some considerations about the higher value of $\alpha$ ($\sim$ 1.6) measured for GX 354--0
(see Fig. \ref{alpha_data}).
Di Salvo et al. (2000b) fitted the broad-band spectrum of the source with a BB plus {\sc Comptt}\ model
(Titarchuk 1994) with $kT_{\rm bb} < kT_{\rm s}$, a modelization corresponding to the case in which a significantly higher
fraction of the X-ray luminosity comes from the accretion disk. This would of course translate into an enhanced
value of $Q_{\rm disk}$, and looking at Figure \ref{alpha_plot} it is straightforward to see that increasing $Q_{\rm disk}/Q_{\rm cor}$
corresponds to increasing $\alpha$.
Actually, it would be interesting to see what happens when fitting the {\it BeppoSAX}\ spectra of GX 354--0 with
the same model but with $kT_{\rm bb} > kT_{\rm s}$.
Note also in Fig. \ref{alpha_data} that for 1E 1724--3045 and two spectral states of GS 1826--238,
$\alpha \sim $1 with electron temperature $kT_{\rm e} \ga 20$ keV. Using equation (\ref{ktetau}) and the best-fit
values of $kT_{\rm e}$ reported for the two sources, with $Q_{\rm disk}/Q_{\rm cor} \sim$ 0, we obtain $\tau_0 \sim$ 1.3
for 1E 1724--3045 and $\tau_0 \sim$ 1.6-1.7 for GS 1826--238, respectively. These values of the optical
depth still allow the diffusion approximation to be used with a satisfactory degree of accuracy,
as can be seen from Fig. \ref{beta_vs_tau}. Moreover, it is worth emphasizing that for a slab with optical
depth $\tau_0$ and an inner reflection boundary condition, photons which are back-scattered from the reflecting surface
before escaping in fact experience an optical depth $\sim 2 \tau_0$, which further enhances the validity of the diffusion
approximation.
The other issue to point out is that the lower limit on $\alpha$ (for the extreme
case $Q_{\rm disk}/Q_{\rm cor} =$0) derived by our model is 1, while there is a handful of sources for which $0.8 < \alpha < 1$.
Several factors can in fact contribute to this result. First of all, it is well known that
multi-component modeling of X-ray spectra may have some influence on the determination of the
best-fit parameters. In particular, in the energy band where TC dominates ($\la$ 30 keV), Gaussian
emission lines around 6.4-6.7 keV are often observed, and the inclusion of a narrow-feature component
in the model can affect the continuum parameters. Additional biases can come from the energy-band
coverage (in particular when using {\it RXTE}\ data, which start from about 3 keV) and uncertainties in the calibration of the
instrumental effective area, which may play some role in particular for spectra far from being power-law-like.
In terms of theoretical predictions, our analytical model is intended to provide a description
of the observed stability of the spectral index, but it may of course have some limitations, in particular the energy-independent treatment of the radiation field and the simple slab
approximation for the TL geometry. It is possible that the vertical height-scale $H$ of
the TL is larger than its radial extension, and it is also likely that there is some dependence of
$H$ on the radial distance from the NS surface. Moreover, in the pure plane-parallel geometry photons are allowed to escape only through the surface of the slab, while in this case the slab has a limited extension and photons can presumably escape also from its ``walls''.
\section{Conclusions}
\label{conclusions}
We reported results on the value of the thermal Comptonization (TC) spectral index $\alpha$
for a sample of NS LMXB sources in different spectral states, and we found that, apart from GX 354--0,
it lies in a belt around $1 \pm 0.2$. We proposed a simple theoretical model using the diffusion
approximation, in which $\alpha$ is found to be a function only of the ratio of the disk and corona
fluxes. In particular, when $Q_{\rm disk}/Q_{\rm cor} \ll 1$ one obtains $\alpha \sim $1, which is consistent
with observations. We hope that our work will encourage a significant extension of the sample
of observed NS sources using archival data and observations from present and future missions,
in particular using as broad as possible energy band, especially below 3 keV in order
to avoid biases in the spectral results.
We also claim that our model can be helpful in solving the dichotomy related to the fact that
equally good fits are obtained for the cases in which the observed direct BB component has a temperature higher or
lower than that of the seed photons subjected to TC.
\begin{acknowledgements}
The authors are grateful to the referee whose suggestions strongly improved the quality of the paper with
respect to the first version. This work was supported by grant from Italian PRIN-INAF 2007, ``Bulk motion Comptonization
models in X-ray Binaries: from phenomenology to physics'', PI M. Cocchi.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{introduction:section}
\parg{Background.}
2-player games can be used to model the interaction of
a controller (player 0) who makes choices in a reactive
system, and a malicious adversary (player 1) who represents
an attacker.
To model randomness in the system
(e.g., unreliability; randomized algorithms),
a third player `random' is defined who makes choices according
to a predefined probability distribution. The resulting
stochastic game is called a $2\frac{1}{2}$-player game in the terminology of \cite{chatterjee03simple}.
The choices of the players induce a run of the system, and
the winning conditions of the game are expressed in terms of predicates
on runs.
Most classic work on algorithms for stochastic games has focused
on finite-state systems (e.g.,
\cite{shapley-1953-stochastic,condon-1992-ic-complexity,AHK:FOCS98,chatterjee03simple}),
but more recently several classes of infinite-state systems have been
considered as well.
Stochastic games on infinite-state probabilistic recursive systems (i.e.,
probabilistic pushdown automata with unbounded stacks) were studied in
\cite{Etessami:Yannakakis:ICALP05,EY:LMCS2008,EWY:ICALP08}.
A different (and incomparable) class of infinite-state systems is that of channel
systems, which use unbounded communication buffers instead of unbounded
recursion.
{\it Channel Systems} consist of nondeterministic
finite-state machines that communicate by asynchronous message passing
via unbounded FIFO communication channels. They are also known as
communicating finite-state machines (CFSM) \cite{Brand:CFSM}.
Channel Systems are a very expressive model that
can encode the behavior of Turing machines, by storing the content of an unbounded
tape in a channel \cite{Brand:CFSM}.
Therefore, all verification questions are undecidable on Channel Systems.
A {\it Lossy Channel System (LCS)} \cite{AbJo:lossy,Finkel:completely:specified} consists of
finite-state machines that communicate by asynchronous message passing
via unbounded \emph{unreliable} (i.e., lossy) FIFO communication channels,
i.e., messages can spontaneously disappear from channels.
The original motivation for LCS is to capture the behavior of communication protocols
which are designed to operate correctly even if the communication medium is unreliable
(i.e., if messages can be lost).
Additionally (and quite unexpectedly at the time),
the lossiness assumption makes safety/reachability and termination decidable \cite{AbJo:lossy,Finkel:completely:specified},
albeit of non-primitive recursive complexity \cite{schnoebelen-2002-ipl-verifying}.
However, other important verification problems are still undecidable for LCS,
e.g., recurrent reachability (i.e., B\"uchi properties), boundedness, and behavioural equivalences
\cite{AbJo:lossy:undecidable:journal,Schnoebelen:2001,Mayr:2003}.
A {\it Probabilistic Lossy Channel System (PLCS)}
\cite{Schnoeblen:plcs,Parosh:Alex:PLCS}
is a probabilistic variant
of LCS where, in each computation step, each message
can be lost independently with a given probability.
This overcomes two limitations of LCS.
First, from a modelling viewpoint, probabilistic losses are more realistic
than the overly pessimistic setting of LCS where all messages can always be lost at any time.
Second, in PLCS almost-sure recurrent reachability properties become decidable (unlike for LCS) \cite{Schnoeblen:plcs,Parosh:Alex:PLCS}.
Several algorithms for symbolic model checking of PLCS have been presented
\cite{Parosh:etal:attractor:IC,Rabinovich:plcs}.
The only reason why certain questions
are decidable for LCS/PLCS is that the message loss induces a quasi-order
on the configurations, which has the properties of a simulation.
Similarly to Turing machines and CFSM,
one can encode many classes of infinite-state probabilistic transition
systems into a PLCS.
Some examples are:
\begin{itemize}
\item
Queuing systems where waiting customers in a queue drop out
with a certain probability in every time interval.
This is similar to the well-studied class of queuing systems
with impatient customers which practice {\em reneging}, i.e.,
drop out of a queue after a given maximal waiting time; see
\cite{Wang-Li-Jiang:Review} section II.B.
Like in some works cited in \cite{Wang-Li-Jiang:Review}, the maximal waiting time
in our model is exponentially distributed.
In basic PLCS, unlike in \cite{Wang-Li-Jiang:Review}, this exponential distribution
does not depend on the current number of waiting customers.
However, an extension of PLCS with this feature would still
be analyzable in our framework (except in the pathological case where
a high number of waiting customers increases the customers' patience
exponentially, because such a system would not necessarily have a so-called \emph{finite attractor}; see below).
\item
Probabilistic resource trading games with
probabilistically fluctuating prices.
The given stores of resources are
encoded by counters (i.e., channels), which exhibit a probabilistic decline
(due to storage costs, decay, corrosion, obsolescence, etc).
\item
Systems modelling operation cost/reward, which is stored in counters/channels,
but probabilistically discounted/decaying over time.
\item
Systems which are periodically restarted (though not necessarily by a
deterministic schedule), due to, e.g., energy depletion or maintenance work.
\end{itemize}
Due to this wide applicability of PLCS, we focus on this model in this paper.
However, our main results are formulated in more general terms referring
to infinite Markov chains with a finite attractor; see below.
\parg{Previous work.}
In \cite{BBS:ACM2007}, a non-deterministic extension of PLCS was introduced
where one player controls transitions in the control graph
and message losses are fully probabilistic.
This yields a Markov decision process (i.e., a $1\frac{1}{2}$-player game)
on the infinite graphs induced by PLCS.
It was shown in \cite{BBS:ACM2007}
that $1\frac{1}{2}$-player games with \emph{almost-sure}
repeated reachability (B\"uchi) objectives are decidable and pure memoryless determined.
In \cite{ABDMS:FOSSACS08}, $2\frac{1}{2}$-player games on PLCS are considered,
where the players control transitions in the control graph
and message losses are probabilistic.
Almost-sure B\"uchi objectives are decidable for this class,
and pure memoryless strategies suffice for \emph{both players} \cite{ABDMS:FOSSACS08}.
Generalized B\"uchi objectives are also decidable,
and finite-memory strategies suffice for the player,
while memoryless strategies suffice for the opponent \cite{BS-qapl2013}.
On the other hand, $1\frac{1}{2}$-player games on
PLCS with \emph{positive probability}
B\"uchi objectives, i.e., almost-sure co-B\"uchi objectives from the
(here passive) opponent's point of view,
can require infinite memory to win
and are also undecidable \cite{BBS:ACM2007}.
However, if the player is restricted to finite-memory strategies,
$1\frac{1}{2}$-player games with positive probability
\emph{parity objectives}
(even the more general \emph{Streett objectives})
become decidable and memoryless strategies suffice for the player \cite{BBS:ACM2007}.
Note that the finite-memory case and the infinite-memory one are a priori incomparable problems,
and neither subsumes the other.
Cf. Section~\ref{conclusions:section}.
Non-stochastic (2-player) parity games on infinite graphs were studied
in \cite{zielonka1998infinite}, where it is shown that such games are
determined, and that both players possess winning memoryless strategies in
their respective winning sets. Furthermore,
a scheme for computing the winning
sets and winning strategies is given.
Stochastic games ($2\frac12$-player games) with parity conditions on
\emph{finite} graphs are known to be memoryless determined and
effectively solvable
\cite{alfaro-2000-lics-concurrent,chatterjee03simple,chatterjee-2006-qest-strategy}.
\parg{Our contribution.}
We give an algorithm to
decide almost-sure \emph{parity} games for probabilistic lossy channel systems
in the case where the players
are restricted to finite memory strategies.
We do that in two steps.
First, we give our result in general terms
(Section~\ref{parity:section}):
We
consider the class of
$2\frac{1}{2}$-player games with
almost-sure parity wining conditions on possibly infinite game graphs,
under the assumption
that the game contains a {\it finite attractor}.
An attractor is a set $A$ of states
such that, regardless of the strategies used by the players,
the probability measure of the runs which
visit $A$ infinitely often is one.%
\footnote{
In the game community (e.g., \cite{zielonka1998infinite})
the word {\it attractor}
is used to
denote what we call a {\it force set}
in Section~\ref{reachability:section}.
In the infinite-state systems community
(e.g., \cite{Parosh:etal:attractor:IC,Parosh:etal:MC:infinite:journal}), the word is used
in the same way as we use it in this paper.}
Note that this means neither that $A$ is absorbing, nor
that every run must visit $A$.
We present a general scheme characterizing the set
of winning states for each player.
The scheme is a generalization of the well-known scheme for
non-stochastic games in \cite{zielonka1998infinite}.
In fact, the
constructions are equivalent in the case that no probabilistic states
are present.
We show correctness of the scheme for games where each player is
restricted to a finite-memory strategy.
The correctness proof here is more involved than in the
non-stochastic case of \cite{zielonka1998infinite};
we rely on the existence of a finite attractor and the restriction
of the players to use finite-memory strategies.
Furthermore, we show that if a player is winning against all
finite-memory strategies of the other player then he
can win using a \emph{memoryless} strategy.
In the second step (Section~\ref{sec:application:sglcs}),
we show that the scheme can be instantiated
for lossy channel systems.
The above two steps yield an algorithm to
decide parity games in the case when the players
are restricted to finite memory strategies.
If the players are allowed infinite memory, then the problem
is undecidable already for $1\frac{1}{2}$-player games
with co-B\"uchi objectives (a special case of 2-color parity objectives)
\cite{BBS:ACM2007}.
Note that even if the players are restricted to finite memory strategies,
such a strategy (even a memoryless one) on an infinite game graph
is still an infinite object. Thus, unlike for finite game graphs,
one cannot solve a game by just guessing strategies
and then checking if they are winning.
Instead, we show how to
effectively compute a finite, symbolic representation of the
(possibly infinite) set of winning states for each player
as a regular language (Section~\ref{algorithm:section}),
and a finite description of winning strategies (Section~\ref{sec:strategy:construction}).
\section{Preliminaries}
\label{prels:section}
\parg{Notation.}
Let $\mathbb O$ and $\mathbb N$ denote the set of ordinal resp.\ natural numbers.
With $\alpha$, $\beta$, and $\gamma$ we denote arbitrary ordinals,
while with $\lambda$ we denote limit ordinals.
We use $f:X\to Y$ to denote that $f$ is a total function from $X$ to $Y$, and
use $f:X\rightharpoonup Y$ to denote that $f$ is a partial function from $X$
to $Y$.
We write $f(x)=\bot$ to denote that $f$ is undefined on $x$, and define
$\domof{f}:=\setcomp{x}{f(x)\neq\bot}$.
We say that $f$ is an {\it extension} of $g$ if $g(x)=f(x)$ whenever $g(x)\neq\bot$.
For $X'\subseteq X$, we use $f| X'$ to denote the restriction of $f$ to $X'$.
We will sometimes need to pick an arbitrary element from a set.
To simplify the exposition, we let $\selectfrom{X}$ denote an arbitrary
but fixed element of the nonempty set $X$.
A \emph{probability distribution} on a countable set $X$ is a function
$f:X\to[0,1]$ such that $\sum_{x\in X}f(x)=1$.
For a set $X$, we use $X^*$ and $X^\omega$ to denote the sets of finite
and infinite words over $X$, respectively.
The empty word is denoted by $\varepsilon$.
\parg{Games.}
A \emph{game} (of \emph{rank $n$})
is a tuple ${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$ defined as follows.
$S$ is a set of \emph{states}, partitioned into
the pairwise disjoint sets of \emph{random states} $\states^R$,
states $\states^0$ of Player$~0$, and states $\states^1$ of Player~$1$.
${\longrightarrow}\subseteqS\timesS$ is the
\emph{transition relation}.
%
We write $s{\longrightarrow}{}s'$ to denote that
$\tuple{s,s'}\in{\longrightarrow}{}$.
%
We assume that for each $s$ there is at least one and at most
countably many $s'$ with $s{\longrightarrow}{}s'$.
The \emph{probability function}
$P:\states^R\timesS\to[0,1]$ satisfies both
$\forall s\in\states^R.\ \forall s'\in S .
(P(s,s')>0 \iff s{\longrightarrow} s')$ and
$\forall s\in\states^R .\ \sum_{s'\in S}
P(s,s') = 1$.
%
(The sum is well-defined since we assumed that the number of successors of any state is at most countable.)
The \emph{coloring function} is defined as
${\mathtt{Col}}:S\to\{0,\dots,n\}$,
where $\colorofs$ is called the
\emph{color} of state $s$.
Let
$Q\subseteqS$ be a set of states.
%
We use $\gcomplementof{\mathcal G}Q:=S-Q$ to denote the
{\it complement} of $Q$.
%
Define
$\zcutQ:=Q\cap\states^0$,
$\ocutQ:=Q\cap\states^1$,
$\zocutQ:=\zcutQ\cup\ocutQ$, and
$\rcutQ:=Q\cap\states^R$.
%
For $n\in\mathbb N$ and $\sim\;\in\{=,\leq\}$, let $\colorset
Q\sim n:=\setcomp{s\in Q}{\colorofs\sim n}$
denote the sets of states in $Q$ with color $\sim n$.
A \emph{run} $\rho$ in ${\mathcal G}$ is an infinite sequence
$s_0s_1\cdots$ of states s.t.
$s_i{\longrightarrow}{}s_{i+1}$ for all $i\geq 0$;
$\rho(i)$ denotes $s_i$.
A \emph{path} $\pi$ is a finite sequence $s_0\cdotss_n$ of
states s.t. $s_i{\longrightarrow}{}s_{i+1}$ for all $i:0\leq
i<n$.
We say that $\rho$ (or $\pi$) \emph{visits} $s$ if
$s=s_i$ for some $i$.
For any $Q\subseteqS$, we use $\pthset{Q}$ to
denote the set of paths that end in some state in $Q$.
Intuitively, the choices of the players and the resolution of randomness induce a run
$s_0s_1\cdots$,
starting in some initial state $s_0\inS$;
state $s_{i+1}$ is chosen as a successor of $s_i$,
and this choice is made by Player~0 if $s_i\in\states^0$,
by Player~1 if $s_i\in\states^1$,
and it is chosen randomly according to the probability distribution
$P(s_i,\cdot)$ if $s_i\in\states^R$.
\parg{Strategies.}
For $x\in\set{0,1}$,
a strategy for Player~$x$ prescribes the next move,
given the current prefix of the run.
Formally, a \emph{strategy} of Player~$x$ is a partial
function $\strat^x:\pthset{\states^x}\partialtoS$ s.t.
$s_n{\longrightarrow}\strat^x(s_0\cdotss_{n})$ if
$\strat^x(s_0\cdotss_{n})$ is
defined.
A run $\rho=s_0s_1\cdots$ is said to be {\it consistent}
with a strategy $\strat^x$ of Player~$x$ if
$s_{i+1}=\strat^x(s_0s_1\cdotss_i)$
whenever $\strat^x(s_0s_1\cdotss_i)\neq\bot$.
We say that $\rho$ is {\it induced} by $\tuple{s,\strat^x,\strat^{1-x}}$
if $s_0=s$ and $\rho$ is consistent with both $\strat^x$ and $\strat^{1-x}$.
We use ${\it Run}{{\mathcal G},s,\strat^x,\strat^{1-x}}$ to denote the set of runs
in ${\mathcal G}$ induced by
$\tuple{s,\strat^x,\strat^{1-x}}$.
We say that $\strat^x$ is total if it is defined for every
$\pi\in\pthset{\states^x}$.
A strategy $\strat^x$ of Player~$x$ is \emph{memoryless}
if the next state only depends on the current state and not on the
previous history of the run, i.e., for any path
$s_0\cdotss_n\in\pthset{\states^x}$, we have
$\strat^x(s_0\cdotss_n)=\strat^x(s_n)$.
%
A \emph{finite-memory strategy} updates a finite memory
each time a transition is taken, and the next state depends only on
the current state and memory.
%
Formally, we define a \emph{memory structure} for Player~$x$ as a quadruple
$\memstrat{}=\tuple{\memory,\memconf_0,\memtrans,\memmem}$ satisfying the following properties.
%
The nonempty set $M$ is called the \emph{memory} and
%
$m_0\inM$ is the \emph{initial memory
configuration}.
%
For a current memory configuration $m$ and a current state $s$, the next
state is given by
$\tau : \states^x\timesM \toS$, where
$s{\longrightarrow}\tau(s,m)$.
%
The next memory configuration is given by
$\mu:S\timesM\toM$.
%
We extend $\mu$ to paths by
$\mu(\varepsilon,m)=m$ and
$\mu(s_0\cdotss_n,m) =
\mu(s_n,\mu(s_0\cdotss_{n-1},m))$.
%
The total strategy $\memstratstrat{}:\pthset{\states^x}\toS$
induced by $\memstrat{}$ is given by $
\memstratstrat{}(s_0\cdotss_{n})
:=\tau(s_{n},\mu(s_0\cdotss_{n-1},\memconf_0))
$.
%
A total strategy $\strat^x$ is said to have \emph{finite memory} if
there is a memory structure $\memstrat{}=\tuple{\memory,\memconf_0,\memtrans,\memmem}$ where
$M$ is finite and $\strat^x=\memstratstrat{}$.
Consider a run $\rho=s_0s_1\cdots\in{\it Run}{{\mathcal G},s,\strat^x,\strat^{1-x}}$ where $\strat^{1-x}$ is induced by $\memstrat{}$.
We say that $\rho$ \emph{visits} the configuration
$\tuple{s,m}$ if there is an
$i$ such that $s_i=s$ and
$\mu(s_0s_1\cdotss_{i-1},m_0)=m$.
We use $F_{\it all}^x({\mathcal G})$, $F^x_{\it finite}({\mathcal G})$, and $F^{x}_\emptyset({\mathcal G})$
to denote the set of {\it all}, {\it finite-memory}, and
{\it memoryless} strategies respectively of Player~$x$ in ${\mathcal G}$.
Note that memoryless strategies and strategies in general can be
partial, whereas for simplicity we only define total finite-memory
strategies.
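To make the definition concrete, the following Python fragment is a schematic sketch (the state and memory types, and all names, are ours) of how a memory structure induces a total strategy:
\begin{verbatim}
from dataclasses import dataclass
from typing import Callable, Hashable, List

State = Hashable
Memory = Hashable

@dataclass
class MemoryStructure:
    m0: Memory                              # initial memory configuration
    tau: Callable[[State, Memory], State]   # next-state function
    mu: Callable[[State, Memory], Memory]   # memory-update function

    def update(self, path: List[State], m: Memory) -> Memory:
        # mu extended to paths: mu(eps, m) = m and
        # mu(s0...sn, m) = mu(sn, mu(s0...s_{n-1}, m))
        for s in path:
            m = self.mu(s, m)
        return m

    def strategy(self, path: List[State]) -> State:
        # induced strategy: f(s0...sn) = tau(sn, mu(s0...s_{n-1}, m0))
        *prefix, last = path
        return self.tau(last, self.update(prefix, self.m0))
\end{verbatim}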
\parg{Probability Measures.}
We use the standard definition of probability measures for a set of runs
\cite{billingsley-1986-probability}.
First, we define the measure for total strategies, and then we extend
it to general (partial) strategies.
Consider a game ${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$, an initial state $s$, and total
strategies $\strat^x$ and $\strat^{1-x}$ of Players~$x$ and~${1-x}$.
Let
$\Omega^{s}=\stateS^{\omega}$ denote the set of all infinite
sequences of states starting from $s$.
For a measurable set
${{\mathfrak R}}\subseteq\Omega^s$,
we define ${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}({{\mathfrak R}})$ to be
the probability measure of ${\mathfrak R}$ under the strategies $\strat^x,\strat^{1-x}$.
This measure is well-defined
\cite{billingsley-1986-probability}.
For (partial) strategies
$\strat^x$ and $\strat^{1-x}$ of Players~$x$ and~${1-x}$,
$\sim\;\in\{<,\leq,=,\geq,>\}$,
a real number $c\in[0,1]$,
and any measurable set ${\mathfrak R}\subseteq\Omega^s$,
we define ${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}({{\mathfrak R}}) \sim c$
iff ${\mathcal P}_{{\mathcal G},s,g^x,g^{{1-x}}}({\mathfrak R})\sim c$ for all total
strategies $g^x$ and $g^{{1-x}}$ that are extensions of
$\strat^x$ resp.\ $\strat^{1-x}$.
\parg{Winning Conditions.}
The winner of the game is determined by a predicate on
infinite runs.
We assume familiarity with the syntax and semantics of the temporal
logic ${\mathit CTL}^*$ (see, e.g., \cite{CGP:book}).
Formulas are interpreted on the structure $(S,{\longrightarrow})$.
We use
$\denotationof{{\varphi}}{s}$ to denote the set of runs starting from
$s$ that satisfy the ${\mathit CTL}^*$ path-formula ${\varphi}$.
This set is measurable \cite{Vardi:probabilistic},
and we just write
${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}({\varphi}) \sim c$
instead of
${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}(\denotationof\formulas) \sim c$.
We will consider games with \emph{parity}
winning conditions,
whereby Player~1 wins if the largest color that occurs infinitely often in the
infinite run is odd, and Player~0 wins if it is even.
Thus, the winning condition for Player~$x$ can be expressed in
${\mathit CTL}^*$
as
\[
\mbox{$x$-}\parity :=
\bigvee_{i\in \{0,\dots,n\}\wedge (i\bmod 2)=x}(
\Box\Diamond \colorset S=i \wedge
\Diamond\Box \colorset S\leq i) \ .
\]
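Operationally, once the (finite) set of colors that occur infinitely often along a run is known, the winner is read off from the parity of its maximum; a short Python sketch:
\begin{verbatim}
def winner(inf_colors):
    # inf_colors: set of colors occurring infinitely often in the run;
    # Player 0 wins iff the largest such color is even
    return max(inf_colors) % 2

assert winner({0, 3, 4}) == 0   # largest recurring color 4: Player 0 wins
assert winner({1, 2, 3}) == 1   # largest recurring color 3: Player 1 wins
\end{verbatim}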
\parg{Winning Sets.}
For a strategy $\strat^x$ of Player~$x$,
and a set $F^{{1-x}}$ of strategies of Player~${1-x}$,
we define
\begin{align*}
\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{\sim c}):=
\setcomp{s}{\forall \strat^{1-x}\inF^{{1-x}} .
\strat^{1-x}\text{ is total} \implies
{\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}({\varphi})\sim c}
\end{align*}
If there is a strategy $\strat^x$ such that $s\in\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{\sim c})$,
then we say that $s$ is a {\it winning state}
for Player~$x$ in ${\mathcal G}$
wrt.\ ${\varphi}^{\sim c}$ (and $\strat^x$ is \emph{winning at $s$}),
provided that Player~${1-x}$
is restricted to strategies in
$F^{{1-x}}$.
Sometimes, when the parameters ${\mathcal G}$, $s$,
$F^{{1-x}}$, ${\varphi}$, and $\sim c$ are known, we will not mention them and
may simply say that ``$s$ is a winning state'' or that
``$\strat^x$ is a winning strategy'', etc.
If
$s\in\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{=1})$,
then
we say that Player~$x$
wins from $s$ \emph{almost surely (a.s.)}.
If
$s\in\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{>0})$,
then
we say that Player~$x$ wins from $s$ \emph{with positive probability (w.p.p.)}.
We also define
$\vinset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}):=
\setcomp{s}{\forall \strat^{1-x}\inF^{{1-x}}.\;{\it Run}{{\mathcal G},s,\strat^x,\strat^{1-x}}\subseteq\denotationof\formulas}$.
If $s\in\vinset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi})$, then we say that
Player~$x$ {\it surely} wins from $s$.
Notice that any strategy that is surely winning from a state $s$
is also winning from $s$ a.s., and any strategy that is winning a.s.\ is also winning w.p.p., i.e.,
$\vinset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi})\subseteq
\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{=1})\subseteq
\winset^x(\strat^x,F^{{1-x}})({\mathcal G},{\varphi}^{>0})$.
\parg{Determinacy and Solvability.}
A game is called \emph{determined} wrt. an objective ${\varphi}^{\sim c}$ and
two sets $F^0,F^1$ of strategies of Player~$0$, resp.\ Player~$1$,
if, for every state $s$,
Player~$x$ has a strategy $\strat^x\inF^x$ that is winning against all strategies $g\inF^{{1-x}}$ of the opponent,
i.e., $s \in \winset^x(\strat^x,F^{{1-x}})({\mathcal G},\textrm{cond}_x)$,
where $\textrm{cond}_0 = {\varphi}^{\sim c}$ and $\textrm{cond}_1 = {\varphi}^{\not\sim c}$.
By \emph{solving} a determined game, we mean giving an algorithm to
compute symbolic representations of the sets of states which are
winning for either player and a symbolic representation of the corresponding winning strategies.
\parg{Attractors.}
A set $A\subseteqS$ is said
to be an {\it attractor} if,
for each state $s\inS$ and strategies
$\strat^0,\strat^1$ of Player~$0$ resp.\ Player~$1$,
it is the case that
${\mathcal P}_{{\mathcal G},s,\strat^0,\strat^1}(\DiamondA)=1$.
In other words, regardless of where
we start a run and regardless of the strategies
used by the players, we will
reach a state inside the attractor a.s.
It is straightforward to see that this also implies that
${\mathcal P}_{{\mathcal G},s,\strat^0,\strat^1}(\Box\DiamondA)=1$,
i.e., the attractor will be visited infinitely often a.s.
\parg{Transition Systems.}
Consider strategies $\strat^x\inF^{x}_\emptyset$ and
$\strat^{1-x}\inF^{{1-x}}_{\it finite}$ of
Player~$x$ resp.\ Player~${1-x}$,
where $\strat^x$ is memoryless and
$\strat^{1-x}$ is finite-memory.
Suppose that $\strat^{1-x}$ is
induced by memory structure $\memstrat{}=\tuple{\memory,\memconf_0,\memtrans,\memmem}$.
We define the {\it transition system}
${\mathcal T}$
induced
by ${\mathcal G},\strat^{1-x},\strat^x$ to be the pair
$\tuple{\states_{M{}},\longleadsto}$
where
$\states_{M{}}=S\timesM$, and
$\longleadsto\subseteq\states_{M{}}\times\states_{M{}}$ such that
$\tuple{s_1,m_1}\longleadsto\tuple{s_2,m_2}$ if
$m_2=\mu(s_1,m_1)$, and
one of the following three conditions is satisfied:
(i)
$s_1\in\states^x$ and
either $s_2=\strat^x(s_1)$ or $\strat^x(s_1)=\bot$,
(ii)
$s_1\in\states^{1-x}$ and $s_2=\tau(s_1,m_1)$, or
(iii)
$s_1\in\states^R$ and $P(s_1,s_2)>0$.
Consider the directed acyclic graph (DAG) of maximal strongly
connected components (SCCs) of the transition system
${\mathcal T}$.
An SCC is called a {\it bottom SCC (BSCC)} if no other SCC is
reachable from
it.
Observe that the existence of BSCCs is not guaranteed in an
infinite transition system.
However, if ${\mathcal G}$ contains a finite attractor $A$
and $M$ is finite
then ${\mathcal T}$ contains at least one BSCC, and in fact
each BSCC contains at least one element
$\tuple{s_A,m}$
with $s_A\inA$.
In particular, for any state $s\inS$,
any run $\rho\in{\it Run}{{\mathcal G},s,\strat^x,\strat^{1-x}}$ will visit
a configuration $\tuple{s_A,m}$
infinitely often a.s.\
where $s_A\inA$
and $\tuple{s_A,m}\inB$
for
some BSCC $B$.
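For intuition, the BSCC structure used in this argument can be computed
explicitly when the transition system is finite.
The following Python sketch (our own illustration; all names are ours, and
the transition systems induced by the games in this paper are in general
infinite) extracts the bottom SCCs of a finite directed graph with
Tarjan's algorithm.
\begin{verbatim}
# Minimal sketch: bottom SCCs (BSCCs) of a *finite* directed graph,
# given as a dict: node -> set of successor nodes.

def sccs(graph):
    """Tarjan's algorithm; returns the list of SCCs (sets of nodes)."""
    index, low, on_stack = {}, {}, set()
    stack, result, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:            # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v:
                    break
            result.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return result

def bsccs(graph):
    """An SCC is a BSCC iff no edge leaves it."""
    return [c for c in sccs(graph)
            if all(w in c for v in c for w in graph.get(v, ()))]

if __name__ == "__main__":
    g = {1: {2}, 2: {1, 3}, 3: {4}, 4: {3}}
    print(bsccs(g))   # [{3, 4}]: the only SCC without outgoing edges
\end{verbatim}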
\section{Reachability}
\label{reachability:section}
In this section we present some concepts related
to checking reachability objectives in games.
First, we define basic notions.
Then we recall a standard scheme
(described e.g. in \cite{zielonka1998infinite})
for
checking reachability winning conditions, and
state some of its properties that we use
in the later sections.
In this section, we do not use the finite attractor property,
nor do we restrict the class of strategies in any way.
Below, fix a game ${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$.
\parg{Reachability Properties.}
Fix a state $s\inS$ and sets of states
$Q,Q'\subseteqS$.
Let $\postof{\mathcal G}{s}:=\setcomp{s'}{s{\longrightarrow}s'}$ denote the
set of \emph{successors} of $s$. Extend it to sets of states by
$\postof{\mathcal G}{Q}:=\bigcup_{s\inQ}\postof{\mathcal G}{s}$.
%
Note that for any given state $s\in\states^R$,
$P(s,\cdot)$ is a probability distribution over
$\postof{\mathcal G}{s}$.
%
Let $\preof{\mathcal G}{s}:=\setcomp{s'}{s'{\longrightarrow}s}$ denote the
set of \emph{predecessors} of $s$, and extend it to sets of
states as above.
We define $\dualpreof{\mathcal G}{Q}:=\gcomplementof{\mathcal G}{\preof{\mathcal G}{\gcomplementof{\mathcal G}Q}}$,
i.e., it denotes the set of states whose successors
{\it all} belong to $Q$.
%
We say that $Q$ is \emph{sink-free} if
$\postof{\mathcal G}{s}\capQ\neq\emptyset$ for all $s\inQ$,
and \emph{closable} if it is sink-free and
$\postof{\mathcal G}s\subseteqQ$ for all $s\in\rcutQ$.
%
If $Q$ is closable then each state in $\zocutQ$
has at least one successor in $Q$, and all
the successors of states in $\rcutQ$ are in $Q$.
For $x\in\set{0,1}$, we say that $Q$ is an \emph{$x$-trap} if it is closable and $\postof{\mathcal G}{s}\subseteqQ$
for all $s\in\xcutQ$.
%
Notice that $S$ is both a $0$-trap and a $1$-trap, and
in particular it is both sink-free and closable.
The following lemma states that, starting from a state inside a set of states
$Q$ that is a trap for one player,
the other player can surely keep the run inside $Q$.
\begin{lem}
\label{trap:certainly:lemma}
If $Q$ is a $({1-x})$-trap, then there exists
a memoryless strategy $\strat^x\inF^{x}_\emptyset({\mathcal G})$ for Player~$x$
such that
$Q\subseteq\vinset^x(\strat^x,F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\alwaysQ)$.
\end{lem}
\begin{proof}
We define a memoryless strategy $\strat^x$ of Player~$x$ that is
surely winning
from any state $s\inQ$, i.e.,
$Q\subseteq\vinset^x(\strat^x,F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\alwaysQ)$.
For a state $s\in\xcutQ$, we define
$\strat^x(s)=\selectfrom{\postof{\mathcal G}{s}\capQ}$.
This is well-defined since $Q$
is a $({1-x})$-trap.
We can now show that any run that starts
from a state $s\inQ$ and
that is consistent with
$\strat^x$ will surely remain inside
$Q$.
Let $\strat^{1-x}$ be any strategy of Player~${1-x}$, and
let $s_0s_1\ldots\in{\it Run}{{\mathcal G},s,\strat^x,\strat^{1-x}}$.
We show, by induction on $i$,
that $s_i\inQ$ for all $i\geq 0$.
The base case is clear since $s_0=s\inQ$.
For the inductive step, we consider three cases depending on $s_i$:
\begin{itemize}
\item
$s_i\in\xcutS$.
By the induction hypothesis we know that
$s_i\inQ$,
and hence by definition of $\strat^x$ we know that
$s_{i+1}=\strat^x(s_i)\inQ$.
\item
$s_i\in\ycutS$.
By the induction hypothesis we know that
$s_i\inQ$,
and hence $s_{i+1}\inQ$
since $Q$ is a $({1-x})$-trap.
\item
$s_i\in\rcutS$.
By the induction hypothesis we know that
$s_i\inQ$,
and hence $s_{i+1}\inQ$
since $Q$ is closable. \qedhere
\end{itemize}
\end{proof}
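Computationally, the strategy constructed in this proof is immediate once
the trap is known; the following minimal Python sketch (ours, for a finite
game whose edges are given as a successor dictionary) fixes, for each
Player-$x$ state in $Q$, one successor inside $Q$.
\begin{verbatim}
# Sketch of the trap-keeping strategy: inside a (1-x)-trap Q, Player x
# fixes for each of her states in Q some successor that stays in Q
# (one exists because Q is sink-free).
def keep_in_trap(player_x_states, edges, Q):
    return {s: next(iter(edges[s] & Q))
            for s in player_x_states & Q}

if __name__ == "__main__":
    edges = {"p": {"q", "out"}, "q": {"p"}}
    print(keep_in_trap({"p"}, edges, {"p", "q"}))   # {'p': 'q'}
\end{verbatim}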
\parg{Scheme.}
Given a set ${\tt Target}\subseteqS$, we give a scheme for
computing a partitioning of $S$ into two sets
${\it Force}^x({\mathcal G},{\tt Target})$ and ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})$
s.t. 1) Player~$x$ has a memoryless strategy on ${\it Force}^x({\mathcal G},{\tt Target})$ to force the game to ${\tt Target}$ w.p.p.,
and 2) Player~${1-x}$ has a memoryless strategy on ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})$ to surely avoid ${\tt Target}$.
The scheme and its correctness is adapted from \cite{zielonka1998infinite} to the stochastic setting.
First, we characterize the states that are winning for Player~$x$,
by defining an increasing set of
states each of which consists of winning states
for Player~$x$, as follows:%
\begin{align*}
{\mathcal R\,\,\!\!}_0&:={\tt Target}\\
{\mathcal R\,\,\!\!}_{\alpha+1}&:={\mathcal R\,\,\!\!}_\alpha\cup
\rcut{\preof{\mathcal G}{{\mathcal R\,\,\!\!}_\alpha}}\cup
\xcut{\preof{\mathcal G}{{\mathcal R\,\,\!\!}_\alpha}}\cup
\ycut{\dualpreof{\mathcal G}{{\mathcal R\,\,\!\!}_\alpha}} \\
{\mathcal R\,\,\!\!}_\lambda&:=\bigcup_{\alpha<\lambda}{\mathcal R\,\,\!\!}_\alpha
\qquad \textrm { (for $\lambda$ a limit ordinal) }
\end{align*}
Clearly, the sequence is non-decreasing, i.e., ${\mathcal R\,\,\!\!}_\alpha\subseteq{\mathcal R\,\,\!\!}_\beta$ when $\alpha \leq \beta$,
and since the sequence is bounded by $S$, it converges at some (possibly infinite) ordinal.
We state this as a lemma:
\begin{lem}
\label{reachability:tp:lemma}
There is a $\gamma\in\mathbb O$ such that
${\mathcal R\,\,\!\!}_\gamma=\bigcup_{\alpha\in\mathbb O}{\mathcal R\,\,\!\!}_\alpha$.
\end{lem}
\noindent
Let $\gamma$ be the smallest ordinal s.t. ${\mathcal R\,\,\!\!}_\gamma = {\mathcal R\,\,\!\!}_{\gamma+1}$ (it exists by the lemma above).
We define
\begin{align*}
{\it Force}^x({\mathcal G},{\tt Target})&:={\mathcal R\,\,\!\!}_\gamma\\
{\it Avoid}^{1-x}({\mathcal G},{\tt Target})&:=\;\gcomplementof{\mathcal G}{{\mathcal R\,\,\!\!}_\gamma}
\end{align*}
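On a finite game graph the transfinite sequence above stabilizes after
finitely many rounds, and the scheme becomes an ordinary fixpoint
computation.
The following Python sketch (an illustration under that finiteness
assumption, with names of our own choosing; it is not the symbolic
algorithm developed for SG-LCS below) computes ${\it Force}^x$,
${\it Avoid}^{1-x}$, and the memoryless forcing strategy of
Lemma~\ref{reachability:correct:lemma}.
\begin{verbatim}
# Sketch of the Force/Avoid fixpoint for a *finite* game.
# edges: state -> set of successors; sx/sy/sr: the states of Player x,
# Player 1-x, and the random states (a partition of the state space).

def force(sx, sy, sr, edges, target):
    reach = set(target)
    strategy = {}                  # memoryless choices for Player x
    changed = True
    while changed:
        changed = False
        for s in sx | sr:          # some successor already winning
            if s in reach:
                continue
            for t in edges.get(s, ()):
                if t in reach:
                    reach.add(s)
                    if s in sx:
                        strategy[s] = t
                    changed = True
                    break
        for s in sy:               # *all* successors already winning
            if s in reach:
                continue
            succs = edges.get(s, set())
            if succs and succs <= reach:
                reach.add(s)
                changed = True
    avoid = (sx | sy | sr) - reach     # the complement: an x-trap
    return reach, avoid, strategy

if __name__ == "__main__":
    sx, sy, sr = {"a", "t"}, {"b"}, {"c"}
    edges = {"a": {"b", "t"}, "b": {"a", "c"},
             "c": {"t", "a"}, "t": {"t"}}
    print(force(sx, sy, sr, edges, {"t"}))
\end{verbatim}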
\begin{lem}
\label{not:reach:trap:lemma}
${\it Avoid}^{1-x}({\mathcal G},{\tt Target})$ is an $x$-trap.
\end{lem}
\begin{proof}
Recall that ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})=\gcomplementof{\mathcal G}{{\mathcal R\,\,\!\!}_\gamma}$
and ${\mathcal R\,\,\!\!}_{\gamma+1}\subseteq{\mathcal R\,\,\!\!}_\gamma$.
First, we prove that $\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$ is sink-free.
There are two cases to consider:
\begin{itemize}
\item
$s\in\xcut{\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma} \cup \rcut{\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma}$.
First, $\postof{\mathcal G}s\subseteq\;\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$.
Indeed, if not, we would have $\postof{\mathcal G}s\cap{\mathcal R\,\,\!\!}_\gamma\neq\emptyset$,
and thus $s\in{\mathcal R\,\,\!\!}_{\gamma+1}\subseteq{\mathcal R\,\,\!\!}_\gamma$,
which is a contradiction.
Second, since $S$ is sink-free, we have
$\postof{\mathcal G}s\neq\emptyset$,
and thus $\postof{\mathcal G}s\cap\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma\neq\emptyset$.
\item
$s\in\ycut{\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma}$.
We clearly have $\postof{\mathcal G}s\cap\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma\neq\emptyset$,
otherwise $\postof{\mathcal G}s\subseteq{\mathcal R\,\,\!\!}_\gamma$,
and thus $s\in{\mathcal R\,\,\!\!}_{\gamma+1}\subseteq{\mathcal R\,\,\!\!}_\gamma$,
which is a contradiction.
\end{itemize}
Second, when proving sink-freeness above, we showed that
$\postof{\mathcal G}s\subseteq\;\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$
for any $s\in\rcut{\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma}$
which means that $\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$
is closable.
Finally, we also showed
that $\postof{\mathcal G}s\subseteq\;\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$
for any $s\in\xcut{\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma}$,
which means that $\gcomplementof{\mathcal G}{\mathcal R\,\,\!\!}_\gamma$ is an $x$-trap,
thus concluding the proof.
\end{proof}
The following lemma shows correctness of the construction.
In fact, it shows that a winning player also has a
memoryless strategy which is winning against an arbitrary opponent.
\begin{lem}
\label{reachability:correct:lemma}
There are memoryless strategies ${\it force}^x({\mathcal G},{\tt Target})\inF^{x}_\emptyset({\mathcal G})$ for Player~$x$
and \\${\it avoid}^{1-x}({\mathcal G},{\tt Target})\inF^{{1-x}}_\emptyset({\mathcal G})$ for Player~${1-x}$ s.t.
\begin{align*}
{\it Force}^x({\mathcal G},{\tt Target})&\subseteq
\winset^x({\it force}^x({\mathcal G},{\tt Target}),F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\Diamond{\tt Target}^{>0}) \\
{\it Avoid}^{1-x}({\mathcal G},{\tt Target})&\subseteq
\vinset^{1-x}({\it avoid}^{1-x}({\mathcal G},{\tt Target}),F_{\it all}^x({\mathcal G}))({\mathcal G},\Box(\gcomplementof{\mathcal G}{\tt Target}))
\end{align*}
\end{lem}
\begin{proof}
Let ${\mathcal R\,\,\!\!}={\it Force}^x({\mathcal G},{\tt Target})$.
To prove the first claim, we define a memoryless strategy $\strat^x$ of Player~$x$ that is winning
from ${\mathcal R\,\,\!\!}$.
For any $s\in\xcut{{\mathcal R\,\,\!\!}}$,
let $\alpha$ be the unique ordinal s.t. $s\in\xcut{{\mathcal R\,\,\!\!}_{\alpha+1}\setminus{\mathcal R\,\,\!\!}_\alpha}$.
Then, we define
$\strat^x(s):=\selectfrom{\postof{\mathcal G}{s}\cap{\mathcal R\,\,\!\!}_\alpha}$.
We show that $\strat^x$ forces the run to the target set ${\tt Target}$ w.p.p. against an arbitrary opponent.
Fix a strategy $\strat^{1-x}$ for Player~${1-x}$.
We show that
${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}(\Diamond{\tt Target})>0$ for all $s\in{\mathcal R\,\,\!\!}$, by transfinite induction on the least $\alpha$ s.t. $s\in{\mathcal R\,\,\!\!}_\alpha$.
If $s\in{\mathcal R\,\,\!\!}_0$, then the claim follows trivially.
If $s\in{\mathcal R\,\,\!\!}_{\alpha+1}$, then either
$s\in{\mathcal R\,\,\!\!}_{\alpha}$
in which case the claim holds by the induction hypothesis,
or $s\in{\mathcal R\,\,\!\!}_{\alpha+1}\setminus{\mathcal R\,\,\!\!}_{\alpha}$.
In the latter case, there are three sub-cases:
\begin{itemize}
\item
$s\in\xcut{{\mathcal R\,\,\!\!}_{\alpha+1}\setminus{\mathcal R\,\,\!\!}_\alpha}$.
%
By definition of $\strat^x$,
we know that $\strat^x(s)=s'$ for some $s'\in{\mathcal R\,\,\!\!}_{\alpha}$.
%
By the induction hypothesis,
${\mathcal P}_{{\mathcal G},s',\strat^x,\strat^{1-x}}(\Diamond{\tt Target})>0$, and hence
${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}(\Diamond{\tt Target})>0$.
%
\item
$s\in\ycut{{\mathcal R\,\,\!\!}_{\alpha+1}\setminus{\mathcal R\,\,\!\!}_{\alpha}}$.
%
Let $s'$ be the successor of $s$ chosen by $\strat^{1-x}$.
%
By definition of ${\mathcal R\,\,\!\!}_{\alpha+1}$,
we know that $s'\in{\mathcal R\,\,\!\!}_{\alpha}$.
%
Then, the proof follows as in the previous case.
\item
%
$s\in\rcut{{\mathcal R\,\,\!\!}_{\alpha+1}\setminus{\mathcal R\,\,\!\!}_{\alpha}}$.
%
By definition of ${\mathcal R\,\,\!\!}_{\alpha+1}$,
there is a $s'\in{\mathcal R\,\,\!\!}_{\alpha}$
such that $P(s,s')>0$.
%
By the induction hypothesis,
${\mathcal P}_{{\mathcal G},s,\strat^x,\strat^{1-x}}(\Diamond{\tt Target})
\geq
{\mathcal P}_{{\mathcal G},s',\strat^x,\strat^{1-x}}(\Diamond{\tt Target})\cdotP(s,s')>0$.
\end{itemize}
Finally, if $s\in{\mathcal R\,\,\!\!}_{\lambda}$
for a limit ordinal $\lambda$, then
$s\in{\mathcal R\,\,\!\!}_{\alpha}$
for some $\alpha<\lambda$,
and the claim follows by the induction hypothesis.
From Lemma~\ref{not:reach:trap:lemma} and
Lemma~\ref{trap:certainly:lemma} it follows that
there is a strategy $\strat^{1-x}$ for Player~${1-x}$ such that
${\it Avoid}^{1-x}({\mathcal G},{\tt Target})\subseteq\vinset^{1-x}(\strat^{1-x},F_{\it all}^x)({\mathcal G},\Box({\it Avoid}^{1-x}({\mathcal G},{\tt Target})
))$.
The second claim follows then from the fact that
${\tt Target}\cap {\it Avoid}^{1-x}({\mathcal G},{\tt Target})=\emptyset$.
\end{proof}
\section{Parity Conditions}
\label{parity:section}
We describe a scheme for solving stochastic parity games with
almost-sure winning conditions on infinite graphs,
under the conditions that the game has a finite attractor (as
defined in Section~\ref{prels:section}),
and that the players are restricted to finite-memory strategies.
We define a sequence of functions ${\mathcal C}_0,{\mathcal C}_1,\ldots$.
Each ${\mathcal C}_n$ takes a single argument, a game of rank at most $n$,
and it returns the set of states where Player~$x$ wins a.s., with
$x=n\bmod 2$.
In other words, the player that has the same parity as color $n$ wins a.s.\ in
${\mathcal C}_n({\mathcal G})$.
We provide a memoryless strategy that is winning a.s.~for Player~$x$
in ${\mathcal C}_n({\mathcal G})$ against any finite-memory strategy of Player~${1-x}$,
and a memoryless strategy that is winning w.p.p.~for Player~${1-x}$ in
$\gcomplementof {\mathcal G} {\mathcal C}_n({\mathcal G})$ against any finite-memory strategy of
Player~$x$.
The scheme is by induction on $n$ and is related to \cite{zielonka1998infinite}.
In the rest of the section, we make use of the following notion of sub-game.
For a closable $\gcomplementof{\mathcal G}Q$, we define the \emph{sub-game}
${\mathcal G}\cutQ:=\tuple{Q',\zcut{Q'},\ocut{Q'},
\rcut{Q'},{\longrightarrow}',P',{\mathtt{Col}}'}$, where
$Q':=\gcomplementof{\mathcal G}Q$ is the new set of states,
${\longrightarrow}':={\longrightarrow}\cap({Q'}\times{Q'})$,
$P':=P|(\rcut{Q'}\times{Q'})$,
${\mathtt{Col}}':={\mathtt{Col}}|{Q'}$.
Notice that $P'(s)$ is a probability
distribution for any $s\in\rcut{Q'}$ since $Q'$ is closable.
We use ${\mathcal G}\cutQ_1\cutQ_2$ to denote
$({\mathcal G}\cutQ_1)\cutQ_2$.
For the base case, let ${\mathcal C}_0({\mathcal G}):=S$
for any game ${\mathcal G}$ of rank 0.
Indeed, from any state Player~$0$ trivially wins a.s.\ (even surely) because there is only color 0.
\input{fig_parity_schema}
For $n\geq 1$, let ${\mathcal G}$ be a game of rank $n$.
In the following, let \[ x = n \bmod 2. \]
${\mathcal C}_n({\mathcal G})$ is defined with the help of two auxiliary transfinite sequences
of sets of states $\set{{\mathcal X}_\alpha}_{\alpha\in\mathbb O}$ and $\set{{\mathcal Y}_\alpha}_{\alpha\in\mathbb O}$.
The construction ensures that
${\mathcal X}_0\subseteq{\mathcal Y}_0\subseteq{\mathcal X}_1\subseteq{\mathcal Y}_1\subseteq\cdots$,
and that the states of ${\mathcal X}_\alpha,{\mathcal Y}_\alpha$ are winning w.p.p.\
for Player~${1-x}$.
We use strong induction, i.e., to construct ${\mathcal X}_\alpha$ we assume that
${\mathcal X}_\beta$ has been constructed for all $\beta<\alpha$, and it suffices to state
one unified inductive step rather than distinguishing between base
case, successor ordinals and non-zero limit ordinals.
In the (unified) inductive step, we have already constructed
${\mathcal X}_\beta$ and ${\mathcal Y}_\beta$ for all $\beta<\alpha$.
Our construction of ${\mathcal X}_\alpha$ and ${\mathcal Y}_\alpha$ is in three steps (cf. Figure~\ref{fig:parity:schema}):
\begin{enumerate}
\item ${\mathcal X}_\alpha$ is the set of states
where Player~${1-x}$ can force the run to visit
$\bigcup_{\beta<\alpha}{\mathcal Y}_\beta$
w.p.p.
\item Find a set of states where
Player~${1-x}$ wins w.p.p.\ in
the sub-game ${\mathcal G}\ominus{\mathcal X}_\alpha$.
\item Take ${\mathcal Y}_\alpha$ to be the union of ${\mathcal X}_\alpha$ and the set constructed in step 2.
\end{enumerate}
We next show how to find the winning states in
the sub-game ${\mathcal G}\ominus{\mathcal X}_\alpha$ in step 2.
We first compute the set of states where Player~$x$ can force the play
in ${\mathcal G}\ominus{\mathcal X}_\alpha$ to reach a state with color $n$
w.p.p.
We call this set ${\mathcal Z}_\alpha$.
The sub-game ${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$ does not contain any states of color $n$.
Therefore, this game can be completely solved, using the already constructed
function ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)$.
The states in the resulting set are winning a.s.\ for Player~${1-x}$ in
${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$, and hence also winning w.p.p.
We will prove that the states where Player~${1-x}$ wins w.p.p.\ in
${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$ are winning
w.p.p.\ also in ${\mathcal G}$.
We thus take ${\mathcal Y}_\alpha$ as the union of ${\mathcal X}_\alpha$ and
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)$.
We define the sequences formally:
\begin{align*}
{\mathcal X}_\alpha &:= {\it Force}^{1-x}({\mathcal G},{\textstyle\bigcup_{\beta<\alpha}{\mathcal Y}_\beta}) \\
{\mathcal Z}_\alpha &:= {\it Force}^x({\mathcal G}\ominus{\mathcal X}_\alpha,\colorset{\gcomplementof{\mathcal G}{\mathcal X}_\alpha}=n) \\
{\mathcal Y}_\alpha &:= {\mathcal X}_\alpha\cup{\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)
\end{align*}
Notice that the sub-games ${\mathcal G}\ominus{\mathcal X}_\alpha$ and
${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$ are well-defined, since
$\gcomplementof{\mathcal G}{\mathcal X}_\alpha$ is closable in ${\mathcal G}$ (by Lemma~\ref{not:reach:trap:lemma}), and
$\gcomplementof{{\mathcal G}\ominus{\mathcal X}_\alpha}{\mathcal Z}_\alpha$ is closable in ${\mathcal G}\ominus{\mathcal X}_\alpha$.
By the definition, for $\alpha < \beta$ we get
${\mathcal Y}_\alpha \subseteq {\mathcal X}_\beta \subseteq {\mathcal Y}_\beta$.
As in Lemma~\ref{reachability:tp:lemma},
we can prove that this sequence converges:
\begin{lem}
\label{tp:lemma}
There exists a $\gamma\in\mathbb O$ such that
${\mathcal X}_\gamma = {\mathcal Y}_\gamma = \bigcup_{\alpha\in\mathbb O}{\mathcal Y}_\alpha$.
\end{lem}
\noindent
Let $\gamma$ be the least ordinal s.t. ${\mathcal X}_{\gamma + 1} = {\mathcal X}_{\gamma}$ (which exists by the lemma above).
We define
\begin{align}
\label{eq:cset:def}
{\mathcal C}_n({\mathcal G}):=\gcomplementof{\mathcal G}{{\mathcal X}_\gamma}
\end{align}
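For a finite game the construction of ${\mathcal C}_n$ can be read directly as a
recursive procedure.
The Python sketch below (again only an illustration of ours; it reuses the
function {\tt force()} from the reachability sketch above, and the
dictionary-based game representation is hypothetical) mirrors the
${\mathcal X}_\alpha$/${\mathcal Z}_\alpha$/${\mathcal Y}_\alpha$ iteration.
\begin{verbatim}
# Sketch of the C_n recursion for a *finite* game (reuses force() from
# the reachability sketch). A game g is a dict with keys "s0", "s1",
# "sr" (state partition), "edges", and "color" (state -> int).
# Assumes the induced sub-games stay sink-free, as guaranteed by the
# trap/closability lemmas in the text.

def subgame(g, removed):
    keep = lambda ss: {s for s in ss if s not in removed}
    return {"s0": keep(g["s0"]), "s1": keep(g["s1"]),
            "sr": keep(g["sr"]),
            "edges": {s: {t for t in ts if t not in removed}
                      for s, ts in g["edges"].items()
                      if s not in removed},
            "color": {s: c for s, c in g["color"].items()
                      if s not in removed}}

def cset(g, n):
    states = g["s0"] | g["s1"] | g["sr"]
    if n == 0:
        return states            # only color 0: Player 0 wins a.s.
    x = n % 2
    px, py = (g["s1"], g["s0"]) if x else (g["s0"], g["s1"])
    X = set()
    while True:
        # X_{alpha+1}: the opponent forces the previously won region
        X = force(py, px, g["sr"], g["edges"], X)[0]
        gX = subgame(g, X)
        # Z: Player x forces a visit to color n inside G (-) X
        coln = {s for s, c in gX["color"].items() if c == n}
        pxX, pyX = (gX["s1"], gX["s0"]) if x else (gX["s0"], gX["s1"])
        Z = force(pxX, pyX, gX["sr"], gX["edges"], coln)[0]
        Y = X | cset(subgame(gX, Z), n - 1)
        if Y == X:               # fixpoint reached
            return states - X
        X = Y

# Usage (cf. the remark at the end of this section), with nmax the
# maximal color of g and S its full state space: Player (nmax % 2)
# wins a.s. in cset(g, nmax) and w.p.p. in S - cset(g, nmax + 1);
# symmetrically for the other player.
\end{verbatim}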
The following lemma shows the correctness of
the construction.
Recall that we assume that ${\mathcal G}$ is of rank $n$ and that it contains
a finite attractor.
\begin{lem}
\label{cn:infinite:termination:lemma}
There are memoryless strategies
$\strat^x_c\inF^{x}_\emptyset({\mathcal G})$ for Player~$x$ and
$\strat^{1-x}_c\inF^{{1-x}}_\emptyset({\mathcal G})$ for Player~${1-x}$
such that the following two properties hold:
\begin{align}
{\mathcal C}_n({\mathcal G})\;&\subseteq\;\winset^x(\strat^x_c,F^{{1-x}}_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$x$-}\parity}^{=1} ) \\
\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})}\;&\subseteq\;\winset^{1-x}(\strat^{1-x}_c,F^x_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$(1-x)$-}\parity}^{>0} )
\end{align}
\end{lem}
\begin{proof}
Using induction on $n$,
we define the strategies $\strat^x_c,\strat^{1-x}_c$,
and prove that the strategies are indeed winning.
\parg{Construction of $\strat^x_c$.}
For $n\geq 1$, recall that $\gamma$ is the least ordinal s.t. ${\mathcal X}_{\gamma+1} = {\mathcal X}_\gamma$ (as defined above),
and define $\complementof{{\mathcal X}_\gamma}:=\gcomplementof{\mathcal G}{{\mathcal X}_\gamma}$ and
$\complementof{{\mathcal Z}_\gamma}:=\gcomplementof{\mathcal G}{{\mathcal Z}_\gamma}$.
By definition, ${\mathcal C}_n({\mathcal G})=\complementof{{\mathcal X}_\gamma}$.
For a state $s\in\complementof{{\mathcal X}_\gamma}$, we define $\strat^x_c(s)$ depending
on the membership of $s$ in one of the following three partitions
of $\complementof{{\mathcal X}_\gamma}$:
\begin{enumerate}
\item
$s\in \complementof{{\mathcal X}_\gamma}\cap\complementof{{\mathcal Z}_\gamma}$.
Define ${\mathcal G}':={\mathcal G}\ominus{\mathcal X}_\gamma\ominus{\mathcal Z}_\gamma$.
By the definition of $\gamma$, we have that
${\mathcal X}_{\gamma+1}\setminus{\mathcal X}_\gamma=\emptyset$.
By the construction of ${\mathcal Y}_\alpha$ we have, for an arbitrary $\alpha$,
that ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)={\mathcal Y}_\alpha\setminus{\mathcal X}_\alpha$,
and by the construction of ${\mathcal X}_{\alpha+1}$, we have that
${\mathcal Y}_\alpha\setminus{\mathcal X}_\alpha\subseteq{\mathcal X}_{\alpha+1}\setminus{\mathcal X}_\alpha$.
By combining these facts, we obtain
${\mathcal C}_{n-1}({\mathcal G}')\subseteq{\mathcal X}_{\gamma+1}\setminus{\mathcal X}_\gamma=\emptyset$.
Since ${\mathcal G}\ominus{\mathcal X}_\gamma\ominus{\mathcal Z}_\gamma$ does not contain any states of
color $n$
(or higher), it follows by the induction hypothesis that there is a
memoryless strategy $f_1\inF^{x}_\emptyset({\mathcal G}')$ such that
$\gcomplementof{{\mathcal G}'}{{\mathcal C}_{n-1}({\mathcal G}')}\;\subseteq\;\winset^x(f_1,F^{{1-x}}_{\it finite}({\mathcal G}'))({\mathcal G}',{\mbox{$x$-}\parity}^{>0} )$.
We define $\strat^x_c(s):=f_1(s)$.
(Later, we will prove that in fact $f_1$ is winning a.s.)
\item
$s\in \complementof{{\mathcal X}_\gamma}\cap\colorset{{\mathcal Z}_\gamma}< n$.
Define $\strat^x_c(s):={\it force}^x({\mathcal G}\ominus{\mathcal X}_\gamma,\colorset{{\mathcal Z}_\gamma}=n)(s)$.
\item
$s\in \complementof{{\mathcal X}_\gamma}\cap\colorset{{\mathcal Z}_\gamma}=n$.
Lemma~\ref{not:reach:trap:lemma} shows
$\postof{\mathcal G}s\cap\complementof{{\mathcal X}_\gamma}\neq\emptyset$.
Define
$\strat^x_c(s):=\selectfrom{\postof{\mathcal G}s\cap\complementof{{\mathcal X}_\gamma}}$.
\end{enumerate}
\parg{Correctness of $\strat^x_c$.}
Let $\strat^{1-x}\inF^{{1-x}}_{\it finite}({\mathcal G})$ be a finite-memory strategy for Player~${1-x}$.
We show that
${\mathcal P}_{{\mathcal G},s,\strat^x_c,\strat^{1-x}}(\mbox{$x$-}\parity )=1$
for any state $s\in{\mathcal C}_n({\mathcal G})$.
First, we give a straightforward proof that any run
$s_0s_1\cdots\in{\it Run}{{\mathcal G},s,\strat^x_c,\strat^{1-x}}$ will always stay inside
$\complementof{{\mathcal X}_\gamma}$, i.e.,
$s_i\in\complementof{{\mathcal X}_\gamma}$ for all $i\geq 0$.
We use induction on $i$.
The base case follows from $s_0=s\in\complementof{{\mathcal X}_\gamma}$.
For the induction step, we assume that
$s_i\in\complementof{{\mathcal X}_\gamma}$, and show that
$s_{i+1}\in\complementof{{\mathcal X}_\gamma}$.
We consider the following cases:
\begin{itemize}
\item
$s_i\in\ycut{\complementof{{\mathcal X}_\gamma}}\cup\rcut{\complementof{{\mathcal X}_\gamma}}$.
The result follows
since $\complementof{{\mathcal X}_\gamma}$ is a
(${1-x}$)-trap in ${\mathcal G}$
(by Lemma~\ref{not:reach:trap:lemma}).
\item
$s_i\in\xcut{\complementof{{\mathcal X}_\gamma}\cap\complementof{{\mathcal Z}_\gamma}}$.
We know that $s_{i+1}=f_1(s_i)$.
Since $f_1\inF^{x}_\emptyset({\mathcal G}\ominus{\mathcal X}_\gamma\ominus{\mathcal Z}_\gamma)$ it follows that
$s_{i+1}\in\complementof{{\mathcal X}_\gamma}\cap\complementof{{\mathcal Z}_\gamma}$, and in particular
$s_{i+1}\in\complementof{{\mathcal X}_\gamma}$.
\item
$s_i\in\xcut{\complementof{{\mathcal X}_\gamma}\cap\colorset{{\mathcal Z}_\gamma}< n}$.
We know that $s_{i+1}={\it force}^x({\mathcal G}\ominus{\mathcal X}_{\gamma},\colorset{{\mathcal Z}_\gamma}=n) (s_i)$.
The result follows by the fact that
${\it force}^x({\mathcal G}\ominus{\mathcal X}_{\gamma},\colorset{{\mathcal Z}_\gamma}=n)$ is a strategy
in ${\mathcal G}\ominus{\mathcal X}_\gamma$.
\item
$s_i\in\xcut{\complementof{{\mathcal X}_\gamma}\cap\colorset{{\mathcal Z}_\gamma}=n}$.
We have $s_{i+1}\in
\postof{\mathcal G}{s_i}\cap\complementof{{\mathcal X}_\gamma}$,
and in particular $s_{i+1}\in\complementof{{\mathcal X}_\gamma}$.
\end{itemize}
We now prove the main claim.
This is where we need the assumption of finite attractor and
finite-memory strategies.
Let us again consider a run
$\rho\in{\it Run}{{\mathcal G},s,\strat^x_c,\strat^{1-x}}$.
We show that $\rho$ is a.s.\ winning for Player~$x$
with respect to $\mbox{$x$-}\parity$ in ${\mathcal G}$.
Let $\strat^{1-x}$ be induced by a memory structure
$\memstrat{}=\tuple{\memory,\memconf_0,\memtrans,\memmem}$.
Let ${\mathcal T}$ be the transition system induced
by ${\mathcal G}$, $\strat^x_c$, and $\strat^{1-x}$.
As explained in Section~\ref{prels:section},
$\rho$ will a.s.\ visit
a configuration $\tuple{s_A,m}\inB$
for some BSCC $B$ in ${\mathcal T}$.
Since there exists a finite attractor,
each state that occurs in $B$ will a.s.\
be visited infinitely often by $\rho$.
Let ${n_{\text{max}}}$ be the maximal color occurring among
the states of $B$.
There are two possible cases:
\begin{itemize}
\item ${n_{\text{max}}}=n$.
Since each state in ${\mathcal G}$ has color at most $n$ and $x=n\bmod 2$,
the maximal color visited infinitely often has parity $x$, and Player~$x$ will a.s.\ win.
\item ${n_{\text{max}}}<n$.
This implies that
$\setcomp{s_B}{\tuple{s_B,m}\inB}
\subseteq\complementof{\mathcal Z}_\gamma$:
otherwise, since a BSCC is closed under steps consistent with the fixed
strategies and $\strat^x_c$ forces a visit to color $n$ from
${\mathcal Z}_\gamma$ w.p.p., $B$ would contain a state of color $n$,
contradicting ${n_{\text{max}}}<n$.
Hence Player~$x$ uses the strategy
$f_1$ to win the game in ${\mathcal G}\ominus{\mathcal X}_\gamma\ominus{\mathcal Z}_\gamma$ w.p.p.
Then, either
(i) ${n_{\text{max}}}\bmod 2=x$ in which case
all states inside $B$ are almost surely winning for
Player~$x$; or
(ii) ${n_{\text{max}}}\bmod 2={1-x}$ in which case
all states inside $B$ are almost surely losing for Player~$x$.
The result follows from the fact that case (ii) gives a contradiction
since all states in
${\mathcal G}\ominus{\mathcal X}_\gamma\ominus{\mathcal Z}_\gamma$
(including those in $B$) are winning for Player~$x$ w.p.p.
\end{itemize}
\parg{Construction of $\strat^{1-x}_c$.}
We define a strategy $\strat^{1-x}_c$ such that, for all $\alpha$,
the following inclusion holds:
${\mathcal X}_\alpha\subseteq{\mathcal Y}_\alpha\subseteq\winset^{1-x}(\strat^{1-x}_c,F^x_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$(1-x)$-}\parity}^{>0})$.
The result then follows from the definition of ${\mathcal C}_n({\mathcal G})$.
The inclusion ${\mathcal X}_\alpha\subseteq{\mathcal Y}_\alpha$ holds by the definition of ${\mathcal Y}_\alpha$.
For any state $s\in\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})}$,
we define $\strat^{1-x}_c(s)$ as follows.
Let $\alpha$ be the smallest ordinal such that $s\in{\mathcal Y}_\alpha$.
Such an $\alpha$ exists by the well-ordering of ordinals
and since $\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})}=\bigcup_{\beta\in\mathbb O}{\mathcal X}_\beta=\bigcup_{\beta\in\mathbb O}{\mathcal Y}_\beta$.
Now there are two cases:
\begin{itemize}
\item
$s\in{\mathcal X}_\alpha\setminus\bigcup_{\beta<\alpha}{\mathcal Y}_\beta$.
%
Define
$\strat^{1-x}_c(s):=
f_1(s):=
{\it force}^{1-x}({\mathcal G},\bigcup_{\beta<\alpha}{\mathcal Y}_\beta)(s)$.
%
\item
$s\in{\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)$.
%
By the induction hypothesis (on $n$),
there is a memoryless strategy
$f_2\inF^{{1-x}}_\emptyset({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)$ of Player~${1-x}$ such that
$s\in
\winset^{1-x}(f_2,F^x_{\it finite}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha))
({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha,{\mbox{$(1-x)$-}\parity}^{=1} )$.
%
Define $\strat^{1-x}_c(s):=f_2(s)$.
\end{itemize}
\parg{Correctness of $\strat^{1-x}_c$.}
Let $\strat^x\inF^x_{\it finite}({\mathcal G})$ be a finite-memory strategy for
Player~$x$.
We now use induction on $\alpha$ to show that
${\mathcal P}_{{\mathcal G},s,\strat^{1-x}_c,\strat^x}(\mbox{$(1-x)$-}\parity )>0$
for any state $s\in{\mathcal Y}_\alpha$.
There are three cases:\looseness=-1
\begin{enumerate}
\item
If $s\in\bigcup_{\beta<\alpha}{\mathcal Y}_\beta$,
then $s \in {\mathcal Y}_\beta$ for some $\beta < \alpha$
and the result follows
by the induction hypothesis on $\beta$.
%
\item
If $s\in{\mathcal X}_\alpha \setminus \bigcup_{\beta<\alpha}{\mathcal Y}_\beta$, then we know that
Player~${1-x}$ can use $f_1$ to force the game w.p.p.\ to
$\bigcup_{\beta<\alpha}{\mathcal Y}_\beta$
from which she wins w.p.p.
%
\item
If $s\in{\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha)$, then
Player~${1-x}$ uses $f_2$.
%
There are now two sub-cases:
%
either (i) there is a run from $s$
consistent with $\strat^x$ and $\strat^{1-x}_c$
that reaches ${\mathcal X}_\alpha$;
or (ii) there is no such run.
In sub-case (i), ${\mathcal X}_\alpha$ is reached w.p.p., and then,
by cases~1 and~2, Player~${1-x}$ wins w.p.p.\looseness=-1
In sub-case (ii), all runs stay forever outside ${\mathcal X}_\alpha$.
%
So the game is in effect played on ${\mathcal G}\ominus{\mathcal X}_\alpha$.
%
Notice then that any run from $s$ that is consistent
with $\strat^x$ and $\strat^{1-x}_c$
stays forever in ${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$.
%
The reason is that (by Lemma~\ref{not:reach:trap:lemma})
$\gcomplementof{{\mathcal G}\ominus{\mathcal X}_\alpha}{\mathcal Z}_\alpha$
is an $x$-trap in
${\mathcal G}\ominus{\mathcal X}_\alpha$.
%
Since all runs remain inside ${\mathcal G}\ominus{\mathcal X}_\alpha\ominus{\mathcal Z}_\alpha$,
Player~${1-x}$ wins w.p.p.\ (even a.s.) wrt.\ $\mbox{$(1-x)$-}\parity$ using $f_2$.
\qedhere
\end{enumerate}
\end{proof}
The following theorem follows immediately from the previous lemmas.
\begin{thm}
Stochastic parity games with almost sure winning conditions on
infinite graphs are memoryless determined, provided there exists a
finite attractor and the players are restricted to finite-memory
strategies.
\end{thm}
\parg{Remark.}
We can compute both the a.s.\ winning set and the w.p.p.\ winning set
for both players as follows.
Let ${n_{\text{max}}}$ be the maximal color occurring in the game.
Then:
\begin{itemize}
\item Player~$x$ wins a.s.\ in ${\mathcal C}_{{n_{\text{max}}}}({\mathcal G})$ and w.p.p.\ in $\gcomplementof{\mathcal G}{\mathcal C}_{{n_{\text{max}}}+1}({\mathcal G})$;
\item Player~${1-x}$ wins a.s.\ in ${\mathcal C}_{{n_{\text{max}}}+1}({\mathcal G})$ and w.p.p.\ in $\gcomplementof{\mathcal G}{\mathcal C}_{{n_{\text{max}}}}({\mathcal G})$.
\end{itemize}
\section{Application to lossy channel systems}
\label{sec:application:sglcs}
\subsection{Lossy channel systems}
\label{sglcs:section}
A \emph{lossy channel system (LCS)}
is a finite-state machine
equipped with a finite number of unbounded fifo channels (queues) \cite{AbJo:lossy}.
The system is \emph{lossy} in the sense that, before and after a transition, an
arbitrary number of messages may be lost from the channels.
We consider \emph{stochastic game-LCS (SG-LCS)}: each individual %
message is lost independently with probability $\lambda$
in every step, where
$\lambda >0$ is a parameter of the system.
The set of control states is
partitioned into states belonging to Player~$0$ and~$1$.
The player who owns the current control state chooses an enabled
outgoing transition.
Formally, a SG-LCS of rank $n$ is a tuple
${\mathcal L}=\tuple{{\tt S},\lcsstates^\pz,\lcsstates^\po,{\tt C},{\tt M},{\tt T},\lossp,\coloring}$ where ${\tt S}$ is a
finite set of \emph{control states} partitioned into control states
$\lcsstates^\pz,\lcsstates^\po$ of Player~$0$
and~$1$;
${\tt C}$ is a finite set of \emph{channels}, ${\tt M}$ is a finite
set called the \emph{message alphabet}, ${\tt T}$ is a set of
\emph{transitions}, $0<\lambda<1$ is the \emph{loss rate}, and
${\mathtt{Col}}:{\tt S}\to\{0,\dots,n\}$ is the {\it coloring} function.
Each transition ${\tt t}\in{\tt T}$ is of the form
${\tt s}\transitionx{{\tt op}}{\tt s}'$, where
${\tt s},{\tt s}'\in{\tt S}$ and ${\tt op}$ is one of
the following three forms:
${\tt c}!{\tt m}$ (send message ${\tt m}\in{\tt M}$ in channel
${\tt c}\in{\tt C}$), ${\tt c}?{\tt m}$ (receive message ${\tt m}$ from
channel ${\tt c}$), or ${\tt nop}$ (do not modify the channels).
The SG-LCS ${\mathcal L}$ induces a game
${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$, where
$S={\tt S}\times({\tt M}^*)^{\tt C}\times\{0,1\}$.
That is, each state in the game (also called a \emph{configuration})
consists of a control state, a function
that assigns a finite word over the message alphabet to each channel,
and one of the symbols 0 or 1.
States where the last symbol is 0 are random:
$\states^R={\tt S}\times({\tt M}^*)^{\tt C}\times\{0\}$.
The other states belong to a player according to the control state:
${\states^\px}=\lcsstates^\px\times({\tt M}^*)^{\tt C}\times\{1\}$.
Transitions out of states of the form
$s=({\tt s},{\tt x},1)$ model transitions in
${\tt T}$ leaving control state ${\tt s}$.
On the other hand, transitions leaving configurations of the form
$s=({\tt s},{\tt x},0)$ model message losses.
More precisely, transitions are defined as follows:\looseness=-1
\begin{itemize}
\item
If $s=({\tt s},{\tt x},1),
s'=({\tt s}',{\tt x}',0)\inS$, then
we have $s\transitionx{}s'$ iff
${\tt s}\transitionx{{\tt op}}{\tt s}'$ is a transition in ${\tt T}$ and
(i)
if ${\tt op} = {\tt nop}$, then ${\tt x}={\tt x}'$;
(ii)
if ${\tt op} = {\tt c}!{\tt m}$, then ${\tt x}({\tt c}) = w$ and
${\tt x}' = {\tt x}[{\tt c} \mapsto w \cdot {\tt m}]$;
(iii)
if ${\tt op} = {\tt c}?{\tt m}$, then ${\tt x}({\tt c}) = {\tt m}\cdot w$ and
${\tt x}' = {\tt x}[{\tt c} \mapsto w]$,
where the notation ${\tt x}[{\tt c} \mapsto w]$ represents the channel assignment which is the same as ${\tt x}$
except that it maps ${\tt c}$ to the word $w \in {\tt M}^*$.
\item
To model message losses, we introduce the subword ordering $\preceq$
on words: $x\preceq y$ iff $x$ is a word obtained by removing zero or
more messages from arbitrary positions of $y$.
This is extended to channel contents
${\tt x},{\tt x}'\in({\tt M}^*)^{\tt C}$ by
${\tt x}\preceq{\tt x}'$ iff
${\tt x}({\tt c})\preceq{\tt x}'({\tt c})$ for all
channels ${\tt c}\in{\tt C}$, and to configurations
$s=({\tt s},{\tt x},i),s'=({\tt s}',{\tt x}',i')\inS$
by $s\preceqs'$ iff ${\tt s}={\tt s}'$,
${\tt x}\preceq{\tt x}'$, and $i=i'$.
For any $s=({\tt s},{\tt x},0)$ and any ${\tt x}'\preceq{\tt x}$, there is a transition
$s{\longrightarrow}({\tt s},{\tt x}',1)$.
The probability of random transitions is given by
$P(({\tt s},{\tt x},0),({\tt s},{\tt x}',1)) =
a\cdot\lambda^{b-c}\cdot(1-\lambda)^c$, where $a$ is the number of ways to
obtain ${\tt x}'$ by losing messages in ${\tt x}$, $b$ is
the total number of messages in all channels of ${\tt x}$, and $c$ is the
total number of messages in all channels of ${\tt x}'$
(see \cite{Parosh:etal:attractor:IC} for details).\looseness=-1
\end{itemize}
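As a sanity check of this formula, consider the following worked example
of ours (not part of the original development): a single channel holding
${\tt a}{\tt a}$, so that $b=2$.
Losing exactly one copy of ${\tt a}$ can happen in $a=2$ ways and leaves
$c=1$ message, giving probability $2\lambda(1-\lambda)$; keeping both
messages has probability $(1-\lambda)^2$, and losing both has probability
$\lambda^2$.
These probabilities indeed sum up to one:
\[
(1-\lambda)^2+2\lambda(1-\lambda)+\lambda^2
=\bigl((1-\lambda)+\lambda\bigr)^2=1 .
\]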
\noindent
Every configuration of the form $({\tt s},{\tt x},0)$ has at least one
successor, namely $({\tt s},{\tt x},1)$.
If a configuration $({\tt s},{\tt x},1)$ does not have successors
according to the rules above, then we add a transition
$({\tt s},{\tt x},1){\longrightarrow}({\tt s},{\tt x},0)$,
to ensure that the induced game is sink-free.
Finally, for a configuration $s=({\tt s},{\tt x},i)$, we define
${\mathtt{Col}}(s):={\mathtt{Col}}({\tt s})$.
Notice that the graph of the game is bipartite, in the sense that
a configuration in $\states^R$ has only
transitions to configurations in $\zocutS$,
and vice versa.
We say that a set of channel contents $\tt X \subseteq ({\tt M}^*)^{\tt C}$
is \emph{regular} if it is a finite union of sets of the form $\tt Y = \prod_{{\tt c} \in {\tt C}} \tt Y({\tt c})$
where $\tt Y({\tt c})$ is a regular subset of ${\tt M}^*$ for every ${\tt c} \in {\tt C}$
(this coincides with the notion of recognisable subset of $({\tt M}^*)^{\tt C}$;
cf. \cite{Berstel}).
We extend the notion of regularity to a set of configurations $P \subseteq S$
by saying that $P$ is \emph{regular} iff, for every control state ${\tt s} \in {\tt S}$ and $i \in \{0,1\}$,
there exists a regular set of channel contents $\tt X_{{\tt s}, i} \subseteq ({\tt M}^*)^{\tt C}$
s.t. $P = \setcomp {({\tt s}, {\tt x}, i)} {{\tt s} \in {\tt S}, i \in \{0,1\}, {\tt x} \in \tt X_{{\tt s}, i}}$.
In the qualitative {\it parity game problem} for SG-LCS, we
want to characterize the sets of configurations
where Player~$x$ can force the \mbox{$x$-}\parity{} condition to hold a.s.,
for both players.
\subsection{From scheme to algorithm}
\label{algorithm:section}
We transform the scheme of Section~\ref{parity:section}
into an algorithm for deciding the a.s.\ parity game problem for SG-LCS.
Consider an SG-LCS ${\mathcal L}=\tuple{{\tt S},\lcsstates^\pz,\lcsstates^\po,{\tt C},{\tt M},{\tt T},\lossp,\coloring}$ and
the induced game ${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$
of some rank $n$.
Furthermore, assume that the players
are restricted to finite-memory strategies.
We show the following.
\begin{thm}
\label{thm:memoryless:determinacy}
The sets of winning configurations for Players~$0$~and~$1$
are effectively computable as regular sets of configurations.
Furthermore, from each configuration, memoryless strategies suffice for the winning player.
\end{thm}
In the statement of the theorem, ``effectively'' means that
a finite description of the regular sets of winning configurations is computable.
We give the proof in several steps.
First, we show that the game induced by an SG-LCS contains a finite
attractor (Lemma~\ref{fattractors:lemma}).
Then, we show that the scheme in
Section~\ref{reachability:section} for computing winning configurations
wrt.\ reachability objectives
is guaranteed to terminate (Lemma~\ref{reachable:termination:lemma}).
Furthermore, we show that the scheme in
Section~\ref{parity:section} for computing winning configurations
wrt. a.s.\ parity objectives is guaranteed to terminate
(Lemma~\ref{parity:termination:lemma}).
Notice that
Lemmas~\ref{reachable:termination:lemma}~and~\ref{parity:termination:lemma}
imply that for SG-LCS our transfinite constructions
stabilize below $\omega$ (the first infinite ordinal).
Finally, we show that each step in the above two schemes
can be performed using standard operations on regular languages
(Lemmas~\ref{reachability:compuability:lemma}~and~\ref{parity:compuability:lemma}).
\parg{Finite attractor.}
In \cite{Parosh:etal:attractor:IC} it was shown that
any Markov chain induced
by a Probabilistic LCS contains a finite attractor.
The proof can be carried over in a straightforward manner
to the current setting.
More precisely, the finite attractor is given by
$A={\tt S}\times\set{\pmb{\emptyword}}\times\{0,1\}$
where $\pmb{\emptyword}({\tt c})=\varepsilon$ for each ${\tt c}\in{\tt C}$.
In other words, $A$ is given by
the set of configurations in which all channels are empty.
The proof relies on the observation that if the number of messages
in some channel is sufficiently large, it is more likely that the number of
messages decreases than that it increases in the next step.
This gives the following.
\begin{lem}
\label{fattractors:lemma}
${\mathcal G}$ contains a finite attractor.
\end{lem}
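The drift behind this lemma can be illustrated numerically; the toy
Python simulation below (ours, not part of the proof) sends one message
at every step, after which each of the $k$ messages currently in the
channel is lost independently with probability $\lambda$, so the expected
length after a step is roughly $(k+1)(1-\lambda)<k$ for large $k$, and
the empty contents are revisited.
\begin{verbatim}
import random

# Toy simulation of the drift towards empty channels: one message is
# appended per step (the worst case), then every message is lost
# independently with probability lam.
def steps_until_empty(lam, start_len, rng):
    k, steps = start_len, 0
    while k > 0:
        k += 1                                         # send a message
        k = sum(rng.random() > lam for _ in range(k))  # lossy step
        steps += 1
    return steps

if __name__ == "__main__":
    rng = random.Random(0)
    runs = [steps_until_empty(0.5, 50, rng) for _ in range(100)]
    print(sum(runs) / len(runs))  # empties quickly despite the sends
\end{verbatim}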
\parg{Termination of Reachability Scheme.}
For a set of configurations $Q\subseteqS$,
we define the {\it upward closure} of $Q$ by
$\ucofQ:=\setcomp{s}{\existss'\inQ.\,s'\preceqs}$.
A set $U\subseteqQ\subseteqS$
is said to be {\it $Q$-upward-closed}
(or {\it $Q$-u.c.} for short) if
$(\ucof{U})\capQ=U$.
We say that $U$ is {\it upward closed} if it is
$S$-u.c.
\newcommand{j}{j}
\begin{lem}
\label{higman:lemma}
%
If $Q_0\subseteq Q_1\subseteq\cdots$,
and for all $i$ it holds that
$Q_i\subseteq Q$ and $Q_i$ is $Q$-u.c., then there is a $j\in\mathbb N$
such that $Q_i=Q_j$ for all $i\geq j$.
\end{lem}
\begin{proof}
By Higman's lemma \cite{higman:divisibility}, there is a $j\in\mathbb N$ s.t.
$\ucof{Q_i}=\ucof{Q_j}$ for all $i\geq j$.
%
Hence, $\ucof{Q_i}\cap Q=\ucof{Q_j}\cap Q$
for all $i\geq j$.
%
Since all $Q_i$ are $Q$-u.c.,
$\ucof{Q_i}\cap Q=Q_i$
for all $i\geq j$.
%
So $Q_i=Q_j$ for all $i\geq j$.
\end{proof}
Now, we can show termination of the reachability scheme.
\begin{lem}
\label{reachable:termination:lemma}
%
There exists a finite $j\in\mathbb N$ such
that ${\mathcal R\,\,\!\!}_i={\mathcal R\,\,\!\!}_j$ for all $i\geqj$.
%
\end{lem}
\begin{proof}
First, we show that $\rcut{{\mathcal R\,\,\!\!}_i\setminus{\tt Target}}$ is
$(\gcomplementof{\mathcal G}{\tt Target})$-u.c. for all $i\in\mathbb N$.
We use induction on $i$.
For $i=0$ the result is trivial since
${\mathcal R\,\,\!\!}_i\setminus{\tt Target}=\emptyset$.
For $i>0$, suppose that
$s=({\tt s},{\tt x},0)\in\rcut{{\mathcal R\,\,\!\!}_i}\setminus{\tt Target}$.
This means that
$s{\longrightarrow}({\tt s},{\tt x}',1)$
for some ${\tt x}'\preceq{\tt x}$ with $({\tt s},{\tt x}',1)\in{\mathcal R\,\,\!\!}_{i-1}$,
and hence also
$s'{\longrightarrow}({\tt s},{\tt x}',1)$, and thus $s'\in\rcut{{\mathcal R\,\,\!\!}_i}$,
for all $s'$ s.t. $s\preceqs'$.
By Lemma~\ref{higman:lemma},
there is a $j'\in\mathbb N$
such that $\rcut{{\mathcal R\,\,\!\!}_i}\setminus{\tt Target}=\rcut{{\mathcal R\,\,\!\!}_{j'}}\setminus{\tt Target}$
for all $i\geq j'$.
Since ${\mathcal R\,\,\!\!}_i\supseteq{\tt Target}$ for all $i\geq 0$ it follows that
$\rcut{{\mathcal R\,\,\!\!}_i}=\rcut{{\mathcal R\,\,\!\!}_{j'}}$
for all $i\geq j'$.
Since the graph of ${\mathcal G}$ is bipartite
(as explained in Section~\ref{sglcs:section}),
$\xcut{\preof{\mathcal G}{{\mathcal R\,\,\!\!}_i}}=\xcut{\preof{\mathcal G}{\rcut{{\mathcal R\,\,\!\!}_i}}}$
and
$\ycut{\dualpreof{\mathcal G}{{\mathcal R\,\,\!\!}_i}}=\ycut{\dualpreof{\mathcal G}{\rcut{{\mathcal R\,\,\!\!}_i}}}$.
Since $\rcut{{\mathcal R\,\,\!\!}_i}=\rcut{{\mathcal R\,\,\!\!}_{j'}}$ for all $i\geqj'$,
we have
$\xcut{\preof{\mathcal G}{{\mathcal R\,\,\!\!}_i}}=\xcut{\preof{\mathcal G}{\rcut{\mathcal R\,\,\!\!}_{j'}}}\subseteq{\mathcal R\,\,\!\!}_{j'+1}$ and
$\ycut{\dualpreof{\mathcal G}{{\mathcal R\,\,\!\!}_i}}=\ycut{\dualpreof{\mathcal G}{\rcut{\mathcal R\,\,\!\!}_{j'}}}\subseteq{\mathcal R\,\,\!\!}_{j'+1}$.
It then follows that ${\mathcal R\,\,\!\!}_i={\mathcal R\,\,\!\!}_{j}$ for all $i\geqj:=j'+1$.
\end{proof}
\parg{Termination of Parity Scheme.}
We prove that the scheme from Section~\ref{parity:section}
terminates under the condition that the reachability sets are computable
and that there exists a finite attractor.
This suffices since, by the part above, the reachability scheme terminates,
thus yielding computability of the reachability set.
However, here we prove termination of the parity scheme with no further assumption on the reachability sets other than their computability.
We first prove two immediate auxiliary lemmas.
\begin{lem}
\label{closable:attractor:lemma}
A closable set intersects every attractor.
\end{lem}
\begin{proof}
In any closable set, the players can choose strategies that force
the game to remain in the set surely.
The lemma now follows since an attractor is visited almost surely
by any run, and this would be impossible if the attractor did not
have any element in the set.
\end{proof}
\begin{lem}
\label{cset:trap:lemma}
${\mathcal C}_n({\mathcal G})$ is a $({1-x})$-trap.
\end{lem}
\begin{proof}
${\mathcal C}_0({\mathcal G})$ is trivially a $({1-x})$-trap.
For $n\geq 1$, the result follows immediately from the definition of ${\mathcal C}_n({\mathcal G})$ in Eq.~\ref{eq:cset:def}
as the complement of a force set (by Lemma~\ref{not:reach:trap:lemma}).
\end{proof}
\begin{lem}
\label{parity:termination:lemma}
There is a finite $j\in\mathbb N$ such that ${\mathcal X}_i={\mathcal X}_j$ for all
$i\geqj$.
\end{lem}
\begin{proof}
We will prove the claim by showing that
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$ in the definition of
${\mathcal Y}_i$ contains an element from the attractor, and that the
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$ sets constructed in
different steps $i$ are disjoint.
First, ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$ is an $x$-trap
by Lemma~\ref{cset:trap:lemma}.
Hence it is closable, and therefore
Lemma~\ref{closable:attractor:lemma} implies that it contains an
element from the attractor.
Second, by the definition of the $\ominus$ operator, ${\mathcal X}_i$ and
${\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i$ are disjoint.
Since
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)\subseteq S\setminus{\mathcal X}_i\setminus{\mathcal Z}_i$,
it follows that ${\mathcal Y}_i$ is the \emph{disjoint} union of ${\mathcal X}_i$ and
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$.
Then, the definition of ${\mathcal X}_i$ implies that
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)\subseteq{\mathcal Y}_i\setminus\bigcup_{j<i}{\mathcal Y}_j$.
Hence, if $j\neq i$, ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$ and
${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_j\ominus{\mathcal Z}_j)$ are disjoint.
Since all ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$ sets are
disjoint, and each of them contains at least one element of the
attractor, and the attractor is finite, the algorithm terminates in
at most $|A|$ steps.
\end{proof}
\parg{Computability.}
Regular languages of configurations are effectively closed under the operations of
upward-closure, predecessor, set-theoretic union, intersection, and complement \cite{AbdullaBouajjaniDorso:JLC:2008}.
For completeness, we show these properties below.
\begin{lem}
\label{lem:upward-closure:regular}
If $P$ is a regular set of configurations, then its upward-closure $\ucof P$ is effectively regular.
\end{lem}
\begin{proof}
A regular set $P$ of configurations is by definition of the form
\[
P = \setcomp {({\tt s}, {\tt x}, i)} {{\tt s} \in {\tt S}, i \in \{0,1\}, {\tt x} \in \tt X_{{\tt s}, i}}
\]
where the $\tt X_{{\tt s}, i}$'s are regular sets of channel contents.
It thus suffices to show that
$\ucof {\tt X} := \setcomp{{\tt x}}{\exists{\tt x}'\in\tt X.\,{\tt x}'\preceq{\tt x}}$
is an effectively regular set of channel contents
when $\tt X$ is a regular set of channel contents.
By definition, $\tt X$ is a finite union of sets of the form
$\tt Y = \prod_{{\tt c} \in {\tt C}} \tt Y({\tt c})$ with $\tt Y({\tt c})$ regular for every ${\tt c} \in {\tt C}$.
Then $\ucof {\tt X}$ is the union of the sets $\ucof {\tt Y}$,
where, for every ${\tt c} \in {\tt C}$,
a finite automaton recognizing $\ucof {\tt Y}({\tt c})$ is obtained from a finite automaton recognizing $\tt Y({\tt c})$
by adding, on every state thereof, a self-loop labeled with each message in ${\tt M}$.
\end{proof}
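The automaton construction in this proof is easily made concrete; the
following Python sketch (ours, over a naive NFA representation) adds the
self-loops and checks membership in the resulting upward closure.
\begin{verbatim}
# Sketch of the construction above: an NFA for the upward closure of
# L(A) is obtained by adding, on every state, a self-loop for every
# message. An NFA is (states, alphabet, delta, init, finals), where
# delta: (state, symbol) -> set of states.

def upward_closure(states, alphabet, delta, init, finals):
    up = {(q, m): set(ts) for (q, m), ts in delta.items()}
    for q in states:
        for m in alphabet:
            up.setdefault((q, m), set()).add(q)   # self-loop on m
    return states, alphabet, up, init, finals

def accepts(states, alphabet, delta, init, finals, word):
    current = {init}
    for m in word:
        current = {t for q in current for t in delta.get((q, m), ())}
    return bool(current & finals)

if __name__ == "__main__":
    # A recognizes exactly {"ab"}; its upward closure consists of all
    # supersequences of "ab" (e.g. "bab", but not "bb").
    A = ({0, 1, 2}, {"a", "b"},
         {(0, "a"): {1}, (1, "b"): {2}}, 0, {2})
    up = upward_closure(*A)
    print(accepts(*up, "bab"), accepts(*up, "bb"))   # True False
\end{verbatim}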
\begin{lem}
\label{lem:boolean_op:regular}
If $P, Q$ are regular sets of configurations,
then $P \cup Q$, $P \cap Q$, and $S \setminus P$
are effectively regular sets of configurations.
\end{lem}
\begin{proof}
The proof is very similar to the one in the previous lemma,
by exploiting the fact that regular languages are closed under the operations of union, intersection, and complement.
\end{proof}
\begin{lem}
\label{lem:pre:regular}
If $P$ is a regular set of configurations,
then $\preof{\mathcal G} P$ is an effectively regular set of configurations.
\end{lem}
\begin{proof}
Let $P$ be a regular set of configurations.
By a case analysis on which transition is taken, we can write
%
\begin{align*}
\preof{\mathcal G} P = \bigcup_{{\tt t} \in {\tt T}} \preof{\mathcal G} {P, {\tt t}} \cup \preRof{\mathcal G} P
\end{align*}
%
where
\begin{align*}
\preof{\mathcal G} {P, {\tt s}\transitionx{\tt nop}{\tt s}'}
&:= \setcomp {({\tt s}, {\tt x}, 1)} {({\tt s}', {\tt x}, 0) \in P} \\
\preof{\mathcal G} {P, {\tt s}\transitionx{{\tt c}!{\tt m}}{\tt s}'}
&:= \setcomp {({\tt s}, {\tt x}, 1)} {({\tt s}', {\tt x}', 0) \in P.\,
{\tt x}'({\tt c}) = w \cdot {\tt m},
{\tt x} = {\tt x}'[{\tt c} \mapsto w]} \\
\preof{\mathcal G} {P, {\tt s}\transitionx{{\tt c}?{\tt m}}{\tt s}'}
&:= \setcomp {({\tt s}, {\tt x}, 1)} {({\tt s}', {\tt x}', 0) \in P.\,
{\tt x} = {\tt x}'[{\tt c} \mapsto {\tt m} \cdot {\tt x}'({\tt c})]} \\
\preRof{\mathcal G} P
&:= \setcomp {({\tt s}, {\tt x}, 0)} {({\tt s}, {\tt x}', 1) \in P.\,
{\tt x}' \preceq {\tt x}} = \ucof {\setcomp {({\tt s}, {\tt x}', 0)} {({\tt s}, {\tt x}', 1) \in P}}
\end{align*}
%
Then, $\preof{\mathcal G} {P, {\tt s}\transitionx{\tt nop}{\tt s}'}$ is clearly effectively regular;
$\preof{\mathcal G} {P, {\tt s}\transitionx{{\tt c}!{\tt m}}{\tt s}'}$ is effectively regular,
because regular languages are effectively closed under (right) quotients;
$\preof{\mathcal G} {P, {\tt s}\transitionx{{\tt c}?{\tt m}}{\tt s}'}$ is effectively regular,
because regular languages are effectively closed under (left) concatenation with single symbols;
and $\preRof{\mathcal G} P$ is effectively regular by Lemma~\ref{lem:upward-closure:regular}.
\end{proof}
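The two regularity-preserving operations invoked in this proof are also
straightforward on automata; the Python sketch below (ours, one NFA per
channel, using the same naive representation as in the previous sketch)
implements left concatenation with a single symbol (receive-predecessors)
and the right quotient by a single symbol (send-predecessors).
\begin{verbatim}
# Sketch of the operations used above, on NFAs given as
# (states, delta, init, finals) with delta: (state, symbol) -> set.

def prepend_symbol(states, delta, init, finals, m):
    """L -> m.L (receive-predecessor: the channel held m in front)."""
    new_init = max(states) + 1
    d = {k: set(v) for k, v in delta.items()}
    d[(new_init, m)] = {init}
    return states | {new_init}, d, new_init, finals

def right_quotient(states, delta, init, finals, m):
    """L -> {w : w.m in L} (send-predecessor: undo appending m)."""
    new_finals = {q for q in states
                  if delta.get((q, m), set()) & finals}
    return states, dict(delta), init, new_finals

def accepts(states, delta, init, finals, word):
    cur = {init}
    for a in word:
        cur = {t for q in cur for t in delta.get((q, a), ())}
    return bool(cur & finals)

if __name__ == "__main__":
    A = ({0, 1, 2}, {(0, "a"): {1}, (1, "b"): {2}}, 0, {2})  # {"ab"}
    print(accepts(*prepend_symbol(*A, "m"), "mab"))   # True
    print(accepts(*right_quotient(*A, "b"), "a"))     # True
\end{verbatim}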
The lemmas above show that all operations
used in computing ${\it Force}^x({\mathcal G},{\tt Target})$ effectively preserve regularity.
Thus we obtain the following lemma.
\begin{lem}
\label{reachability:compuability:lemma}
If ${\tt Target}$ is regular, then
${\it Force}^x({\mathcal G},{\tt Target})$ is effectively regular.
\end{lem}
\begin{lem}
\label{parity:compuability:lemma}
For each $n$, ${\mathcal C}_n({\mathcal G})$ is
effectively regular.
\end{lem}
\begin{proof}
The set $S$ is regular, and hence
${\mathcal C}_0({\mathcal G})=S$
is effectively regular.
The result for $n>0$ follows from
Lemma~\ref{reachability:compuability:lemma} and from the fact
that the rest of the operations used to build ${\mathcal C}_n({\mathcal G})$
are those of set complement and union.
\end{proof}
\subsection{Construction of regular winning strategies}
\label{sec:strategy:construction}
In this section,
we show that the memoryless winning strategies constructed in Theorem~\ref{thm:memoryless:determinacy}
can be finitely represented as a (finite) list of rules with regular guards on the channel contents.
This representation can be easily turned into a more low-level one, e.g.,
a finite automaton with output that reads the channel contents and outputs the rule to be played next,
but for ease of presentation we have chosen the more high-level description.
\parg{Preliminaries.}
Let ${\mathcal L}=\tuple{{\tt S},\lcsstates^\pz,\lcsstates^\po,{\tt C},{\tt M},{\tt T},\lossp,\coloring}$ be a SG-LCS.
A \emph{(memoryless) regular SG-LCS strategy} ${\tt f}$ for Player~$x$ is a finite list of guarded rules
$\{ {\tt s}_i, X_i \transitionx{{\tt op}_i} {\tt s}'_i \}_{i=1}^k$,
where the \emph{guard} $X_i\subseteq ({\tt M}^*)^{\tt C}$ is a regular
set of channel contents and
${\tt s}_i\transitionx{{\tt op}_i}{\tt s}'_i$ is a transition in
${\tt T}$ s.t. ${\tt s}_i\in\lcsstates^\px$ and:
\begin{itemize}
\item If ${\tt op}_i = {\tt c}?{\tt m}$, every ${\tt x}\in X_i$ has ${\tt m}$ as the first symbol of ${\tt x}({\tt c})$.
\item Guards for the same control state are disjoint; i.e., for each
$i, j$, if ${\tt s}_i={\tt s}_j$ then $X_i\cap X_j=\emptyset$.
\end{itemize}
\noindent
The \emph{domain} of a regular SG-LCS strategy ${\tt f}$ is
\begin{align*}
\domof{{\tt f}} = \setcomp{({\tt s}, {\tt x})}
{\textrm{ there exists a guarded rule } {\tt s}, X \transitionx{{\tt op}} {\tt s}' \in {\tt f} \textrm{ s.t. } {\tt x} \in X}
\end{align*}
Intuitively, the rule $({\tt s}_i,X_i\transitionx{{\tt op}_i}{\tt s}'_i)$ should be applied from control state ${\tt s}_i$ if the channel contents belong to the guard $X_i$.
Formally, let ${\mathcal G}=\tuple{S,\states^0,\states^1,\states^R,{\longrightarrow},P,{\mathtt{Col}}}$ be the game induced by ${\mathcal L}$.
The (partial, memoryless) \emph{induced strategy} $\induced{\tt f}$ of a regular SG-LCS strategy ${\tt f}$ is defined,
for every $({\tt s},{\tt x})\in\domof{{\tt f}}$,
as $\induced{\tt f}({\tt s}, {\tt x}, 1) = ({\tt s}'_i, {\tt x}', 0)$,
where ${\tt s}_i, X_i \transitionx{{\tt op}_i} {\tt s}'_i$ is the unique guarded rule in ${\tt f}$ such that ${\tt s}_i={\tt s}$ and ${\tt x} \in X_i$,
and ${\tt x}'$ is the channel contents obtained by applying ${\tt op}_i$ to ${\tt x}$, i.e., the unique ${\tt x}'$ s.t.
$({\tt s}, {\tt x}, 1) {\longrightarrow} ({\tt s}'_i, {\tt x}', 0)$ via this transition in the game ${\mathcal G}$.
If $({\tt s},{\tt x})\not\in\domof{{\tt f}}$, then $\induced{\tt f}({\tt s},{\tt x},1)=\bot$.
Given two regular SG-LCS strategies ${\tt f}_0, {\tt f}_1$ with disjoint domains,
their \emph{union} ${\tt f}_0 \cup {\tt f}_1$ is the regular SG-LCS strategy obtained by concatenating the lists of guarded rules of ${\tt f}_0$ and ${\tt f}_1$.
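Operationally, applying such a strategy amounts to scanning the rule list
for the unique guard containing the current channel contents; the Python
sketch below (ours; guards are modeled as plain predicates standing in
for regular sets, and all names are illustrative) makes this concrete.
\begin{verbatim}
# Sketch: a regular SG-LCS strategy as a list of guarded rules
# (src, guard, op, dst), with op one of ("nop",), ("send", c, m),
# ("recv", c, m), and guard a predicate on the channel contents.

def apply_op(contents, op):
    contents = dict(contents)
    if op[0] == "send":
        _, c, m = op
        contents[c] = contents[c] + m
    elif op[0] == "recv":
        _, c, m = op
        assert contents[c].startswith(m)   # guaranteed by the guard
        contents[c] = contents[c][len(m):]
    return contents

def induced(rules, state, contents):
    """The induced strategy: the unique applicable rule, or None."""
    for src, guard, op, dst in rules:
        if src == state and guard(contents):
            return dst, apply_op(contents, op)
    return None                            # outside the domain

if __name__ == "__main__":
    # One rule: in control state "s", if channel "c" starts with "a",
    # receive that "a" and move to control state "t".
    rules = [("s", lambda x: x["c"].startswith("a"),
              ("recv", "c", "a"), "t")]
    print(induced(rules, "s", {"c": "ab"}))   # ('t', {'c': 'b'})
\end{verbatim}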
Given two sets of configurations $Q, Q' \subseteq S$,
a \emph{selection function} from $Q$ to $Q'$ is any function $f : Q \to Q'$ s.t.,
for every $({\tt s}, {\tt x}) \in Q$,
\begin{align*}
f({\tt s}, {\tt x}) \in \left(\postof{\mathcal G}{{\tt s}, {\tt x}}\cap Q'\right)
\end{align*}
In other words, a selection function picks a legal successor in $Q'$ for every configuration in $Q$.
\parg{Construction.}
The rest of this section is devoted to the construction of regular winning strategies for both players,
as summarised by the following theorem.
\begin{thm}
\label{thm:regular:strategies}
Memoryless winning strategies for both players are effectively computable as regular SG-LCS strategies.
\end{thm}
We begin by showing that, if the set of selection functions is non-empty,
then there are simple selection functions induced by \emph{regular} SG-LCS strategies.
\begin{lem}
\label{lemma:select:regular}
Let $Q, Q' \subseteq S$ be two regular sets of configurations.
If there exists a selection function from $Q$ to $Q'$,
then there exists a regular SG-LCS strategy ${\tt f}$
s.t. $\induced{\tt f}$ is a selection function from $Q$ to $Q'$.\looseness=-1
\end{lem}
\begin{proof}
Let $f$ be a selection function from $Q$ to $Q'$;
in particular, the set $\postof{\mathcal G}{{\tt s}, {\tt x}}\cap Q'$ is non-empty
for each $({\tt s}, {\tt x}) \in Q$.
%
Let ${\tt T} = \set{{\tt s}_0 \transitionx{{\tt op}_0} {\tt s}_0', \dots, {\tt s}_k \transitionx{{\tt op}_k} {\tt s}_k'}$
be the finitely many transitions of ${\mathcal L}$.
For every $i \in \set{0, \dots, k}$,
let $P_i$ be the set of predecessors of $Q'$ in $Q$ via transition ${\tt s}_i \transitionx{{\tt op}_i} {\tt s}_i'$, i.e.,
%
\begin{align*}
P_i = \preof{\mathcal G} {Q', {\tt s}_i \transitionx{{\tt op}_i} {\tt s}_i'} \cap Q
= \setcomp {({\tt s}_i, {\tt x}) \in Q} { \textrm{there exists } ({\tt s}_i', {\tt x}') \in Q' \textrm{ s.t. } ({\tt s}_i, {\tt x}) \transitionx{{\tt op}_i} ({\tt s}_i', {\tt x}') }
\end{align*}
%
Since $Q, Q'$ are regular,
$\preof{\mathcal G} {Q', {\tt s}_i \transitionx{{\tt op}_i} {\tt s}_i'}$ is regular (cf. Lemma~\ref{lem:pre:regular}),
and thus $P_i$ is regular too.
%
Consider the sequence of (regular) sets $Q_0 = P_0$, and, for $0 < i \leq k$, $Q_i = P_i \setminus \bigcup_{0 \leq j < i} Q_j$,
and let $Q_{i_0}, \dots, Q_{i_h}$ be the subsequence of non-empty sets.
Then, $\set{Q_{i_0}, \dots, Q_{i_h}}$ is a (regular) partition of $Q$:
The sets are disjoint by definition, and each $({\tt s}, {\tt x}) \in Q$
belongs to some $Q_{i_j}$ since $\postof{\mathcal G}{{\tt s}, {\tt x}}\cap Q'$ is non-empty.
Let $\set{X_{i_0}, \dots, X_{i_h}} \subseteq 2^{({\tt M}^*)^{\tt C}}$ be the regular sets of channel contents
s.t., for $0 \leq j \leq h$,
$Q_{i_j}$ is of the form $\setcomp{({\tt s}_{i_j}, {\tt x})}{{\tt x} \in X_{i_j}}$.
%
Let ${\tt f}$ be the following regular SG-LCS strategy:
%
\begin{align}
\{ {\tt s}_{i_0}, X_{i_0} \transitionx{{\tt op}_{i_0}} {\tt s}'_{i_0}, \dots, {\tt s}_{i_h}, X_{i_h} \transitionx{{\tt op}_{i_h}} {\tt s}'_{i_h} \}
\end{align}
%
By definition, $\induced{\tt f}$ is a selection function from $Q$ to $Q'$.
\end{proof}
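The disjointification step of this proof is the essential trick; the
small Python sketch below (ours; the per-transition predecessor sets are
regular in the paper but plain finite sets here) turns overlapping
predecessor sets into disjoint guards, one rule per non-empty block.
\begin{verbatim}
# Sketch: given per-transition predecessor sets P_0,...,P_k of Q'
# inside Q, build the partition Q_i = P_i \ (Q_0 u ... u Q_{i-1}) and
# emit one guarded rule per non-empty block.

def regularize_selection(pre_sets):
    """pre_sets: list of (transition, P_i); returns guarded rules."""
    rules, covered = [], set()
    for trans, p in pre_sets:
        block = set(p) - covered
        if block:                  # guard = membership in this block
            rules.append((trans, block))
            covered |= block
    return rules

if __name__ == "__main__":
    pre = [("t0", {1, 2}), ("t1", {2, 3}), ("t2", {3})]
    print(regularize_selection(pre))
    # [('t0', {1, 2}), ('t1', {3})]: disjoint guards covering {1,2,3}
\end{verbatim}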
In the next lemma, we show that regular SG-LCS strategies suffice to keep the game in regular traps.
\begin{lem}
\label{lemma:trap:certainly:regular}
If $Q$ is a $({1-x})$-trap and regular, then there exists
a regular SG-LCS strategy ${\tt f}$ for Player~$x$
such that
$Q\subseteq\vinset^x(\induced{\tt f},F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\alwaysQ)$.
\end{lem}
\begin{proof}
By Lemma~\ref{trap:certainly:lemma},
there exists a memoryless strategy $\strat^x$ for Player~$x$ with the required property.
Moreover, by inspecting the proof of the lemma,
we can see that $\strat^x$ is defined as
$\strat^x(s)=\selectfrom{\postof{\mathcal G}{s}\capQ}$
for every configuration $s \in \xcutQ$,
i.e., $\strat^x$ is a selection function from $\xcutQ$ to $Q$,
and, in fact, any such selection function can be taken.
By Lemma~\ref{lemma:select:regular},
there exists a regular SG-LCS strategy ${\tt f}$ s.t.
the induced strategy $\induced {\tt f}$ is a selection function from $\xcutQ$ to $Q$.
\end{proof}
The following lemma shows that there are regular SG-LCS strategies for the reachability and safety objective
(cf. Lemma~\ref{reachability:correct:lemma}).
\begin{lem}
\label{lemma:reachability:regular:strategy}
Let ${\tt Target} \subseteq S$ be a regular set of configurations.
There exist regular SG-LCS strategies ${\tt force}^x({\mathcal G},{\tt Target})$ for Player~$x$
and ${\tt avoid}^{1-x}({\mathcal G},{\tt Target})$ for Player~${1-x}$~s.t.
\begin{align*}
{\it Force}^x({\mathcal G},{\tt Target})&\subseteq
\winset^x(\induced{{\tt force}^x({\mathcal G},{\tt Target})},F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\Diamond{\tt Target}^{>0}) \\
{\it Avoid}^{1-x}({\mathcal G},{\tt Target})&\subseteq
\vinset^{1-x}(\induced{{\tt avoid}^{1-x}({\mathcal G},{\tt Target})},F_{\it all}^x({\mathcal G}))({\mathcal G},\Box(\gcomplementof{\mathcal G}{\tt Target}))
\end{align*}
\end{lem}
\begin{proof}
We first show a regular SG-LCS strategy for Player~$x$ for the reachability objective.
Consider the sequence of sets ${\mathcal R\,\,\!\!}_0, {\mathcal R\,\,\!\!}_1, \dots$ constructed in Section~\ref{reachability:section}.
By Lemma~\ref{reachable:termination:lemma}, there exists $j \in \mathbb N$ s.t. $\forall i > j$, ${\mathcal R\,\,\!\!}_i = {\mathcal R\,\,\!\!}_j$.
Moreover, since ${\mathcal R\,\,\!\!}_i$ is built starting from the regular set ${\tt Target}$
and according to regularity-preserving operations
(union, predecessor, and complement; cf. Lemmas~\ref{lem:boolean_op:regular} and \ref{lem:pre:regular}),
${\mathcal R\,\,\!\!}_i$ is regular for every $0 \leq i \leq j$.
%
Consider the sequence of regular sets $R_0 = {\mathcal R\,\,\!\!}_0$ and $R_i = {\mathcal R\,\,\!\!}_i \setminus {\mathcal R\,\,\!\!}_{i-1}$
for every $0 < i \leq j$.
Recall the definition of ${\it force}^x({\mathcal G},{\tt Target})$ in the proof of Lemma~\ref{reachability:correct:lemma}:
For every $0<i\leq j$,
${\it force}^x({\mathcal G},{\tt Target})$ was uniformly defined on $R_i$ as
%
\begin{align*}
{\it force}^x({\mathcal G},{\tt Target})(s) = \selectfrom{\postof{\mathcal G}{s}\cap{\mathcal R\,\,\!\!}_{i - 1}}.
\end{align*}
%
Therefore, there exists a selection function from $R_i$ to ${\mathcal R\,\,\!\!}_{i-1}$, for every $0 < i \leq j$.
Since the $R_i$'s and ${\mathcal R\,\,\!\!}_i$'s are regular, by Lemma~\ref{lemma:select:regular},
there exists a regular SG-LCS strategy $f_i$ with domain $R_i$ inducing such a selection function.
Since the $R_i$'s are disjoint, and since any selection function is correct,
take as ${\tt force}^x({\mathcal G},{\tt Target})$
the union strategy $f_1 \cup \cdots \cup f_j$.
Since the actual choice of the selection function is irrelevant, we conclude that
\[
{\it Force}^x({\mathcal G},{\tt Target})\subseteq
\winset^x(\induced{{\tt force}^x({\mathcal G},{\tt Target})},F_{\it all}^{{1-x}}({\mathcal G}))({\mathcal G},\Diamond{\tt Target}^{>0})
\]
We conclude the proof by providing the required regular SG-LCS strategy for Player~${1-x}$ for the safety objective.
By Lemma~\ref{not:reach:trap:lemma}, ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})$ is an $x$-trap.
Since ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})$ is regular,
by Lemma~\ref{lemma:trap:certainly:regular} there exists a regular SG-LCS strategy ${\tt avoid}^{1-x}({\mathcal G},{\tt Target})$
such that ${\it Avoid}^{1-x}({\mathcal G},{\tt Target})\subseteq
\vinset^{1-x}(\induced{{\tt avoid}^{1-x}({\mathcal G},{\tt Target})},F_{\it all}^x({\mathcal G}))({\mathcal G},\Box(\gcomplementof{\mathcal G}{\tt Target}))$.
\end{proof}
To conclude the proof of Theorem~\ref{thm:regular:strategies},
we show that regular SG-LCS strategies suffice for the parity objective
(cf. Lemma~\ref{cn:infinite:termination:lemma}).
\begin{lem}
\label{lemma:parity:regular:strategy}
There are regular SG-LCS strategies
$\lcsstrat^x_c$ for Player~$x$ and $\lcsstrat^{1-x}_c$ for Player~${1-x}$ such that
%
\begin{align*}
{\mathcal C}_n({\mathcal G})\;&\subseteq\;\winset^x(\induced\lcsstrat^x_c,F^{{1-x}}_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$x$-}\parity}^{=1} ) \\
\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})}\;&\subseteq\;\winset^{1-x}(\induced\lcsstrat^{1-x}_c,F^x_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$(1-x)$-}\parity}^{>0} )
\end{align*}
\end{lem}
\begin{proof}
We define regular SG-LCS strategies $\lcsstrat^x_c$ for Player~$x$ and $\lcsstrat^{1-x}_c$ for Player~${1-x}$
by induction on $n \geq 1$.
%
By inspecting the proof of Lemma~\ref{cn:infinite:termination:lemma},
we note that winning strategies for both players are constructed according to a case analysis on disjoint regular domains,
for which winning regular SG-LCS strategies exist either by induction hypothesis,
or by Lemma~\ref{lemma:reachability:regular:strategy} (for reachability).
%
%
Recall that, by Lemma~\ref{parity:termination:lemma},
there exists $j \in \mathbb N$ s.t. ${\mathcal X}_i = {\mathcal X}_j$ for every $i > j$.
Moreover, all the sets ${\mathcal X}_i, {\mathcal Y}_i, {\mathcal Z}_i$ involved in the construction are regular for every $0 \leq i \leq j$
since they are constructed starting from regular sets and according to regularity-preserving operations
(boolean operations, cf. Lemma~\ref{lem:boolean_op:regular};
force-sets, cf. Lemma~\ref{lemma:reachability:regular:strategy}).
\parg{Construction of $\lcsstrat^x_c$.}
%
Define the two regular sets of configurations $\complementof{{\mathcal X}_j}:=\gcomplementof{\mathcal G}{{\mathcal X}_j}$ and
$\complementof{{\mathcal Z}_j}:=\gcomplementof{\mathcal G}{{\mathcal Z}_j}$.
%
By definition, ${\mathcal C}_n({\mathcal G})=\complementof{{\mathcal X}_j}$.
%
Following Lemma~\ref{cn:infinite:termination:lemma},
we define $\lcsstrat^x_c(s)$ depending on which of the following three sets,
which together partition $\complementof{{\mathcal X}_j}$, contains $s$:
%
\begin{align*}
\set{\complementof{{\mathcal X}_j}\cap\complementof{{\mathcal Z}_j},
\ \ \complementof{{\mathcal X}_j}\cap\colorset{{\mathcal Z}_j}<n,
\ \ \complementof{{\mathcal X}_j}\cap\colorset{{\mathcal Z}_j}=n}
\end{align*}
%
In the first case, note that ${\mathcal G}\ominus{\mathcal X}_j\ominus{\mathcal Z}_j$ does not contain any configurations of color $\geq n$ (cf.\ Lemma~\ref{cn:infinite:termination:lemma}).
Thus, by the induction hypothesis, there is a regular SG-LCS strategy ${\tt f}_1$
for Player~$x$ in ${\mathcal G}\ominus{\mathcal X}_j\ominus{\mathcal Z}_j$ such that the
induced strategy has domain $\complementof{{\mathcal X}_j}\cap\complementof{{\mathcal Z}_j}$.
%
In the second case, let ${\tt f}_2$ be the regular SG-LCS strategy
${\tt force}^x({\mathcal G}\ominus{\mathcal X}_j,\colorset{{\mathcal Z}_j}=n)$, for which
the induced strategy has domain $\complementof{{\mathcal X}_j}\cap\colorset{{\mathcal Z}_j}<n$
(it exists by Lemma~\ref{lemma:reachability:regular:strategy}).
%
Finally, in the third case, the strategy $s \mapsto \selectfrom{\postof{\mathcal G}{s}\cap\complementof{{\mathcal X}_j}}$
witnesses the existence of a selection function
from $\complementof{{\mathcal X}_j}\cap\colorset{{\mathcal Z}_j}=n$ to $\complementof{{\mathcal X}_j}$.
Let ${\tt f}_3$ be a regular SG-LCS strategy
inducing a selection function from
$\complementof{{\mathcal X}_j}\cap\colorset{{\mathcal Z}_j}=n$ to
$\complementof{{\mathcal X}_j}$
(it exists by Lemma~\ref{lemma:select:regular}).
%
Then, $\lcsstrat^x_c$ is defined as the union of the three previously constructed strategies:\looseness=-1
%
\begin{align*}
\lcsstrat^x_c := {\tt f}_1 \cup {\tt f}_2 \cup {\tt f}_3
\end{align*}
%
Since the actual choice of selection function is irrelevant,
$\lcsstrat^x_c$ induces a correct strategy by the same arguments as in the proof of Lemma~\ref{cn:infinite:termination:lemma},
i.e., ${\mathcal C}_n({\mathcal G})\;\subseteq\;\winset^x(\induced\lcsstrat^x_c,F^{{1-x}}_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$x$-}\parity}^{=1})$.
\parg{Construction of $\lcsstrat^{1-x}_c$.}
Recall that $\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})} = {\mathcal Y}_j = {\mathcal X}_j$, and,
for every $0 \leq i \leq j$,
${\mathcal Y}_i = {\mathcal X}_i\cup{\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$.
For every $1 \leq i \leq j$, let ${\tt f}_i^1$ be the regular SG-LCS strategy ${\tt force}^{1-x}({\mathcal G},{\mathcal Y}_{i-1})$
with domain ${\mathcal X}_i \setminus {\mathcal Y}_{i-1}$
(it exists by Lemma~\ref{lemma:reachability:regular:strategy}).
%
By the induction hypothesis, there is also a regular SG-LCS strategy ${\tt f}_i^2$ such that the induced strategy has domain ${\mathcal C}_{n-1}({\mathcal G}\ominus{\mathcal X}_i\ominus{\mathcal Z}_i)$,
which is winning a.s. for Player~${1-x}$ on this domain.
Then, $\lcsstrat^{1-x}_c$ is defined as
%
\begin{align*}
\lcsstrat^{1-x}_c := {\tt f}_1^1 \cup {\tt f}_1^2 \cup \cdots \cup {\tt f}_j^1 \cup {\tt f}_j^2
\end{align*}
%
By reasoning as in the proof of Lemma~\ref{cn:infinite:termination:lemma},
$\lcsstrat^{1-x}_c$ induces a correct strategy, i.e.,
$\gcomplementof{\mathcal G}{{\mathcal C}_n({\mathcal G})}\;\subseteq\;\winset^{1-x}(\induced\lcsstrat^{1-x}_c,F^x_{\it finite}({\mathcal G}))({\mathcal G},{\mbox{$(1-x)$-}\parity}^{>0})$.
\end{proof}
\section{Conclusions and Discussion}
\label{conclusions:section}
We have presented a scheme for solving stochastic games with a.s.\ and w.p.p.\ parity winning
conditions under the two requirements that
(i) the game contains a finite
attractor and
(ii) both players are restricted to finite-memory strategies.
We have shown that this class of games is memoryless determined.
The method is instantiated to prove decidability
of a.s.\ and w.p.p.\ parity games induced by lossy channel systems.
\input{fig_no_finite_attractor}
The two above requirements are both necessary for
our method.
To see why our scheme fails if the game lacks a {\bf finite attractor},
consider the game in Figure~\ref{req:figure}
(a variant of the Gambler's ruin problem).
All states are random, i.e., $\states^0=\states^1=\emptyset$,
and ${\mathtt{Col}}(s_0)=1$ and ${\mathtt{Col}}(s_i)=0$ when $i>0$.
The probability to go right from
any state is $0.7$ and the probability to go left (or to make a
self-loop in $s_0$) is $0.3$.
This game does not have any finite attractor.
It can be shown that
the probability to reach $s_0$ infinitely often is 0 for all initial
states. However, our construction will classify all states as winning
for Player 1.
More precisely, the construction of ${\mathcal C}_1({\mathcal G})$ converges
after one iteration, with ${\mathcal Z}_\alpha=S$ and ${\mathcal X}_\alpha={\mathcal Y}_\alpha=\emptyset$ for all $\alpha$, and
${\mathcal C}_1({\mathcal G})=S$.
Intuitively, the problem is that even if the force-set of $\set{s_0}$
(which is the entire set of states) is visited infinitely many times,
the probability of visiting $\set{s_0}$ infinitely often is still zero,
since the probability of returning to $\set{s_0}$ gets smaller and
smaller. Such behavior is impossible in a game graph that contains
a finite attractor.
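To make this quantitative, one can run the standard gambler's-ruin computation (not carried out above): for $i \geq 1$, the probability $\rho$ that the walk ever moves one step to the left satisfies
\begin{align*}
\rho = 0.3 + 0.7\,\rho^2, \qquad \text{with minimal solution } \rho = \tfrac{3}{7},
\end{align*}
so the probability of returning to $s_0$ when the play is at $s_0$ is $0.3 + 0.7\,\rho = \tfrac{3}{5} < 1$. The number of visits to $s_0$ is therefore dominated by a geometric random variable, which confirms that $s_0$ is visited infinitely often with probability $0$.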
Our scheme also fails when the players are not both restricted to {\bf finite-memory strategies}.
Solving a game under a finite-memory restriction is a
different problem from when arbitrary strategies are allowed (not a sub-problem).
In fact, it was shown in \cite{BBS:ACM2007} that
for arbitrary strategies, the problem is undecidable.
We show two simple examples of stochastic games on LCSs where the two problems yield different results (see also \cite{BBS:ACM2007}).
In one case, we show that infinite memory is more powerful for Player 1 with a w.p.p.\ objective
(cf. Figure~\ref{fig:infinite_memory_wpp_example}),
while in the other case infinite memory helps w.r.t. an a.s.\ objective
(cf. Figure~\ref{fig:infinite_memory_as_example}).
In both cases, Player 0 does not play in the game, thus the memory allowed to her is irrelevant.
\begin{figure}
\subfloat[W.p.p. winning condition] {
\label{fig:infinite_memory_wpp_example}
$\qquad$
\begin{tikzpicture}
\node[opponent-node] (s0) {$p:0$};
\node[opponent-node] (s1) [right of = s0, node distance=1.8cm] {$q:0$};
\node[opponent-node] (s2) [right of = s1, node distance=2cm] {$r:1$};
\draw[transition-edge] (s0) to [out=60, in=120, loop] node [above] {$c!1$} (s0);
\draw[transition-edge] (s0) to node [above] {${\tt nop}$} (s1);
\draw[transition-edge] (s1) to [out=60, in=120, loop] node [above] {${\tt nop}$} (s1);
\draw[transition-edge] (s1) to node [above] {$c?1$} (s2);
\draw[transition-edge] (s2) to [bend left, out=30, in=150] node [below] {${\tt nop}$} (s0);
\end{tikzpicture}
$\qquad$
}
$\quad$
\subfloat[A.s. winning condition] {
\label{fig:infinite_memory_as_example}
$\qquad$
\begin{tikzpicture}
\node[opponent-node] (s0) {$0$};
\node[opponent-node] (s1) [right of = s0, node distance=1.6cm] {$1$};
\node[opponent-node] (s2) [right of = s1, node distance=1.6cm] {$2$};
\draw[transition-edge] (s0) to [out=60, in=120, loop] node [above] {$c!1$} (s0);
\draw[transition-edge] (s0) to [bend left] node [above] {${\tt nop}$} (s1);
\draw[transition-edge] (s1) to [bend left] node [below] {$c?1$} (s0);
\draw[transition-edge] (s1) to node [above] {${\tt nop}$} (s2);
\draw[transition-edge] (s2) to [bend left, out=90, in=90] node [below] {${\tt nop}$} (s0);
\end{tikzpicture}
$\qquad$
}
\caption{Infinite memory helps Player 1.}
\end{figure}
First, we show that infinite memory is more powerful for w.p.p.\ objectives.
In Figure~\ref{fig:infinite_memory_wpp_example}, Player~1 plays on control states $p$, $q$, and $r$.
Player 1's objective is to visit state $r$ infinitely often w.p.p.
To ensure this, from state $p$ Player~1 pumps up the channel to a sufficiently large size $k$
(which can be done a.s. for any $k$ given enough time),
and then she goes to the risk state $q$. If each message can be lost independently with probability $\frac 1 2$,
the probability that all messages are lost, and thus that Player 1 is stuck forever in $q$, is $2^{-k}$.
Otherwise, with probability $1 - 2^{-k}$ Player 1 can visit $r$ once,
and then go back to $p$.
The strategy of Player 1 is to realize an infinite sequence $k_0 < k_1 < \cdots$
s.t. the probability of visiting state $r$ infinitely often,
which is $\prod_{i = 0}^\infty (1 - 2^{-k_i})$,
can be made strictly positive.
Clearly, if Player 1 has infinite memory, then she can realize such a sequence by distinguishing between different visits to control state $p$ with the same channel contents.
On the other hand, if Player 1 is restricted to finite memory, then either the game eventually stays forever in $p$ (which is losing),
or the infinite sequence $k_0, k_1, \dots$ is upper-bounded by some finite $n$,
which makes the infinite product above equal to $0$.
In both cases, Player 1 loses if she has only finite memory.
Notice that Player~1 wins not only w.p.p., but even limit-sure in this example.
In other words, for every $\varepsilon > 0$ there is an infinite-memory strategy
s.t. the parity objective is satisfied with probability $\geq 1 - \varepsilon$.
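For concreteness, here is one admissible choice of the sequence (the text above does not fix one): taking $k_i = i + m$ for some $m \geq 2$, the Weierstrass inequality $\prod_i (1-x_i) \geq 1 - \sum_i x_i$ (valid for $x_i \in [0,1]$) gives
\begin{align*}
\prod_{i = 0}^\infty \big(1 - 2^{-(i+m)}\big) \;\geq\; 1 - \sum_{i = 0}^\infty 2^{-(i+m)} \;=\; 1 - 2^{-(m-1)},
\end{align*}
which is positive for every $m \geq 2$ and tends to $1$ as $m \rightarrow \infty$; this witnesses both the w.p.p.\ and the limit-sure claims, under the idealization that each pumping phase succeeds a.s.\ as described above.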
We don't know whether there are examples where a similar phenomenon can be reproduced under finite-memory/memoryless strategies.
We now show that infinite memory is more powerful for a.s.\ objectives.
An example similar to the previous case can be given for the a.s. winning mode with a 3-color parity condition.
In Figure~\ref{fig:infinite_memory_as_example}, Player 1 controls states $0$, $1$, and $2$,
whose color equals their name.
Thus, the objective of Player 1 is to a.s. visit state $1$ infinitely often and state $2$ only finitely often.
The strategy is similar to that in the previous example:
Player 1 tries to pump up the channel in state $0$,
and then she goes to the risk state $1$.
From here, with low probability all messages are lost, and the penalty is to visit state $2$ once.
Otherwise, the game can go back directly to state $0$ without visiting state $2$.
In both cases, the game restarts afresh from state $0$.
An analysis as in the previous example shows that, if Player 1 is restricted to finite memory,
then the probability of visiting state $2$ from state $1$ can be bounded from below.
This implies that, whenever state $1$ is visited infinitely often, then so is state $2$ a.s.,
and so Player 1 is losing.
On the other hand, there is an infinite-memory strategy for Player 1
s.t. the probability of visiting state $2$ for $n$ times goes to $0$ as $n$ goes to infinity,
which implies that the probability of visiting state $2$ only finitely often is 1.
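The dichotomy can be made explicit via the Borel--Cantelli lemmas, under the simplifying assumption (implicit in the sketch above) that the $n$-th excursion from state $0$ incurs the penalty with probability $2^{-k_n}$, independently across excursions, where $k_n$ is the channel size pumped up before the $n$-th visit to state $1$. With finite memory the $k_n$ are bounded by some $K$, so each excursion is penalized with probability at least $2^{-K}$ and, by the second Borel--Cantelli lemma, state $2$ is visited infinitely often a.s. With infinite memory, choosing for instance $k_n = n+2$ yields
\begin{align*}
\sum_{n = 0}^\infty 2^{-k_n} = \sum_{n = 0}^\infty 2^{-(n+2)} = \tfrac{1}{2} < \infty,
\end{align*}
so by the first Borel--Cantelli lemma only finitely many penalties occur a.s.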
As future work, we will consider extending our framework
to (fragments of) probabilistic extensions of other models
such as Petri nets and noisy Turing machines
\cite{Parosh:etal:MC:infinite:journal}.
\bibliographystyle{alpha}
\subsection{Three-queue system}
\label{sec: 3q}
This section presents the heavy traffic distribution for a simpler system which we call the Three-queue system. The dynamics of the Three-queue system is similar to that of the Input-queued switch.
We provide the mathematical model for the Three-queue system considered in this paper in Section \ref{sec: 3q_model}. We also present the result about the state-space collapse of the Three-queue system onto a two-dimensional subspace in the same section. Finally, the results related to the Three-queue system are presented in Section \ref{sec: 3q_results}.
\subsubsection{Model of the Three-queue system}
\label{sec: 3q_model}
We consider a simplification of the $2\times 2$ Input-queued switch (consisting of four queues) obtained by setting the arrival rate of the fourth queue to zero, i.e., $\lambda_4 = 0$. We use the same model as in Section \ref{sec: switch_model} with the modification that $q_4(t)=0$ for all values of $t$. Thus, the queue length vector is given by $\mathbf{q}(t) = \big(q_1(t), q_2(t), q_3(t) \big)$ and the arrival vector is given by $\mathbf{a}(t) = \big( a_1(t),a_2(t),a_3(t)\big) $ with $\mathbb{E}[\mathbf{a}(t)] = \boldsymbol{\lambda}$ and Var$(\mathbf{a}(t)) = \boldsymbol{\sigma}^2$, where $\boldsymbol \sigma^2$ is a $3\times 3$ diagonal matrix. The two possible schedules for the Three-queue system are $(1,0,0)$ and $(0,1,1)$. Further, without loss of generality, we assume that the schedule $(1,0,0)$ is chosen only if $q_1(t)>0$, because if $q_1(t)=0$, choosing the schedule $(1,0,0)$ does not provide any service; that is, the first queue is not scheduled for service unless it has jobs waiting. This implies that the unused service for the first queue is always zero, i.e., $u_1(t)=0$ for all $t\geq 0$. Now, the capacity region and the corresponding boundary are given by
\begin{align*}
\mathcal{C} = \Big\{ \boldsymbol \lambda \in \mathbb{R}^3_+ : \lambda_1+ \lambda_2 <1, \lambda_1 + \lambda_3 <1\Big\}, && \mathcal{F} = \Big\{ \boldsymbol \nu \in \mathbb{R}^3_+ : \nu_1+ \nu_2 =1, \nu_1+ \nu_3 =1\Big\}
\end{align*}
We assume that $\lambda_i >0$ for every $i$, as otherwise the system can be simplified further. Thus, there exist a $\boldsymbol \nu \in \mathcal{F}$ and a \textit{heavy traffic parameter} $\epsilon \in (0,1)$ such that $\boldsymbol \lambda = (1-\epsilon)\boldsymbol \nu$. The parameter $\epsilon$ is a measure of the distance of the arrival rate vector from the boundary $\mathcal{F}$. The system approaches heavy traffic when the heavy traffic parameter $\epsilon$ tends to $0$.
Moreover, as $\lambda_i > 0$ for all $i$, we have $\nu_{\min} \triangleq \min_{i} \nu_{i} >0$.
The state space collapse for the Three-queue system says that the three-dimensional state vector can be closely approximated by a two-dimensional workload process in heavy traffic.
Consider the subspace $\mathcal{S} \subseteq \mathbb{R}^3$ given by,
\begin{equation*}
\mathcal{S} = \Big\{ \mathbf{y} \in \mathbb{R}^3 : y_1 = y_2 + y_3 \Big\} = \Big\{ \mathbf{y} \in \mathbb{R}^{3} : \exists \mathbf{w} \in \mathbb{R}^{2} \ s.t. \ \mathbf{y} =\mathbf B \mathbf w \Big\}.
\end{equation*}
where $\mathbf B = \begin{bmatrix}
1 & 1 & 0\\
1 & 0 & 1
\end{bmatrix}^T$. Now, for any vector $\mathbf{x} \in \mathbb{C}^3$, we define $\mathbf{x}_{\|}$ as the projection to the subspace $\mathcal{S}$ and $\mathbf x_{\perp} = \mathbf{x} - \mathbf{x}_{\|}$.
Note that unlike Input-queued switch, for Three-queue system, the lower dimensional representation of $\mathbf x_{\|}$ is unique, i.e., there is a unique $\mathbf w$ such that $\mathbf x_{\|} = \mathbf B \mathbf w $.
\begin{definition}
\label{def: 3q_ssc}
For the Three-queue system as defined in Section \ref{sec: 3q_model}
operating under a given scheduling algorithm, we say that the algorithm achieves \textit{state space collapse}, if for every $\theta \in \mathbb{R}$, there exists $\epsilon( \theta) >0$ such that for every $0< \epsilon \leq \epsilon( \theta)$, the steady state queue length vector satisfies,
\begin{equation*}
\mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp} \|}] < C^\star< \infty,
\end{equation*}
where $C^\star$ is a constant that does not depend on $\epsilon$, $\mathbf{q}_{\perp} = \mathbf{q} - \mathbf{q}_{\|} $, and the expectation is taken under the steady-state distribution.
Consequently, for any scheduling policy that achieves state space collapse, we have that for every $ \theta \in \mathbb{R}$, $\lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp} \|}] < \infty.$ Furthermore, there exists a $C_r$ independent of $\epsilon$ such that,
\begin{equation}
\label{eq: 3q_bound}
\mathbb{E} \Big [\|\mathbf{q}_{\perp }\|^r \Big] \leq C_r \quad \forall r \geq 1.
\end{equation}
\end{definition}
For the Three-queue system, MaxWeight scheduling chooses the schedule $(1,0,0)$ if $q_1(t)> q_2(t)+q_3(t)$, and otherwise it chooses the schedule $(0,1,1)$. As for the Input-queued switch, MaxWeight scheduling achieves state space collapse for the Three-queue system according to Definition \ref{def: 3q_ssc}. The proof follows along similar lines as that in \cite[Proposition 2]{maguluri2016heavy}.
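For readers who wish to experiment with the dynamics, the following minimal simulation sketch (our own illustration, not part of the analysis) implements the Three-queue system under MaxWeight. The function name \texttt{simulate} and the choice of Bernoulli arrivals are ours; the analysis only requires bounded arrivals with mean $\boldsymbol \lambda$ and variance $\boldsymbol \sigma^2$, and Bernoulli arrivals need not satisfy the variance condition of Theorem \ref{thm: 3q_dist}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate(eps, nu=(0.5, 0.5, 0.5), T=200_000):
    """Three-queue system under MaxWeight with Bernoulli arrivals.

    nu must lie on the boundary F (nu1 + nu2 = nu1 + nu3 = 1);
    arrival rates are lam = (1 - eps) * nu.
    Returns the final queue length vector q = (q1, q2, q3).
    """
    lam = (1.0 - eps) * np.asarray(nu)
    q = np.zeros(3)
    for _ in range(T):
        a = (rng.random(3) < lam).astype(float)
        # MaxWeight: serve queue 1 alone iff q1 > q2 + q3,
        # otherwise serve queues 2 and 3 together.
        if q[0] > q[1] + q[2]:
            s = np.array([1.0, 0.0, 0.0])
        else:
            s = np.array([0.0, 1.0, 1.0])
        # Lindley recursion; the clipping realizes the unused service u.
        q = np.maximum(q + a - s, 0.0)
    return q

# As eps decreases, q1 stays close to q2 + q3, in line with the
# state space collapse onto the subspace S.
print(simulate(0.05))
\end{verbatim}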
\subsubsection{Results for Three-queue system}
\label{sec: 3q_results}
In this section, we present the results related to the heavy traffic distribution of the Three-queue system under general variance condition. Theorem \ref{theo: 3q_functional_eq} presents the functional equation that the heavy traffic distribution of the Three-queue system satisfies. Theorem \ref{thm: 3q_dist} provides the heavy traffic distribution for the Three-queue system under a condition on the variance of the arrival process.
\begin{theorem}
\label{theo: 3q_functional_eq}
Consider the Three-queue system as defined in Section \ref{sec: 3q_model} operating under a scheduling algorithm that achieves state space collapse according to Definition \ref{def: 3q_ssc}, and let $\Theta = \{\boldsymbol \theta \in \mathbb{C}^{3} : \boldsymbol \theta \in \mathcal{S}, \ Re(\mathbf B^T \boldsymbol \theta) \leq \mathbf 0_{2}\}$.
Then, for all $\boldsymbol \theta \in \Theta$, the limiting scaled queue length satisfies,
\begin{equation}
\label{eq: 3q_functional_eq}
\left( -\frac{1}{2}\langle \boldsymbol{\theta}, \mathbf{1}_{3} \rangle + \frac{1}{2} \langle \boldsymbol{\theta} , \boldsymbol\sigma^2 \boldsymbol{\theta} \rangle \right) L(\boldsymbol{\theta}) + \theta_2 M_2(\boldsymbol\theta)+\theta_3 M_3(\boldsymbol\theta) = 0.
\end{equation}
where
\begin{align*}
L(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}], && M_2(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_2e^{\epsilon (\theta_2 + 2 \theta_3)q_3}], && M_3(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_3e^{\epsilon (2\theta_2 + \theta_3)q_2}],
\end{align*}
where the expectation is taken under steady state distribution.
\end{theorem}
Theorem \ref{theo: 3q_functional_eq} gives a characterization of the functional equation for the Three-queue system. The functional equation presented in Theorem \ref{theo: 3q_functional_eq} can be derived by performing the Lyapunov drift analysis on the complex exponential function and then doing a second-order approximation in terms of the heavy traffic parameter $\epsilon$. Note that $M_1(\boldsymbol \theta) =0$ as $u_1 =0$ by our convention. We provide a brief outline of the proof in Section \ref{sec: 3q_outline} and the complete proof is provided in Appendix \ref{app: 3q_functional_eq}. Next, we provide a uniqueness result that says that there is a unique solution to the functional equation provided in Eq. \eqref{eq: 3q_functional_eq}.
\begin{lemma}
\label{lem: 3q_uniqueness}
Consider the Three-queue system as defined in Section \ref{sec: 3q_model} operating under a scheduling algorithm that achieves state space collapse according to Definition \ref{def: 3q_ssc}. Then, there is a unique solution to the functional equation given in Eq. \eqref{eq: 3q_functional_eq} that is a valid Laplace transform of a joint distribution of two variables.
\end{lemma}
We use Lemma \ref{lem: 3q_uniqueness} to ensure that the functional equation for the Three-queue system given in Eq. \eqref{eq: 3q_functional_eq} has a unique solution. The complete proof of Lemma \ref{lem: 3q_uniqueness} is provided in Appendix \ref{app: 3q_uniqueness}, and a brief outline of the proof is provided in Section \ref{sec: 3q_outline}. This is in contrast with Conjecture \ref{lem: switch_uniqueness}: even though the uniqueness of the solution of the functional equation for the switch is not known, for simpler systems like the Three-queue system we are able to prove the uniqueness of the solution.
\begin{theorem}
\label{thm: 3q_dist}
Consider the Three-queue system as defined in Section \ref{sec: 3q_model} operating under a scheduling algorithm that achieves state space collapse. Suppose the variance vector $\boldsymbol \sigma^2$ satisfies the condition $2\sigma_1^2 = \sigma_2^2 + \sigma_3^2 $. Then, the heavy traffic steady state queue length vector is given by
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \epsilon \mathbf{q} = (\Upsilon_1+\Upsilon_2,\Upsilon_1,\Upsilon_2)
\end{equation*}
where $\Upsilon_1$ and $\Upsilon_2$ are independent exponential random variables of variance $\frac{3\sigma_2^2 + \sigma_3^2}{4}$ and $\frac{\sigma_2^2 + 3\sigma_3^2}{4}$ respectively.
\end{theorem}
Theorem \ref{thm: 3q_dist} says that the limiting distribution of the three-dimensional state vector of the Three-queue system can be represented by using two independent exponential random variables as long as the variances satisfy the condition $2\sigma_1^2 = \sigma_2^2 + \sigma_3^2 $. The proof of Theorem \ref{thm: 3q_dist} uses the results provided in Theorem \ref{theo: 3q_functional_eq} and Lemma \ref{lem: 3q_uniqueness}. The mathematical details regarding the proof of Theorem \ref{thm: 3q_dist} are provided in Appendix \ref{app: 3q_dist}.
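For a concrete instance (our own numerical illustration): if $\boldsymbol \sigma^2 = \mathbf I_3$, the condition $2\sigma_1^2 = \sigma_2^2 + \sigma_3^2$ holds, and Theorem \ref{thm: 3q_dist} specializes to
\begin{align*}
\lim_{\epsilon \rightarrow 0} \epsilon \mathbf{q} = (\Upsilon_1+\Upsilon_2,\Upsilon_1,\Upsilon_2), \qquad \text{Var}(\Upsilon_1) = \tfrac{3+1}{4} = 1 = \text{Var}(\Upsilon_2),
\end{align*}
i.e., $\Upsilon_1$ and $\Upsilon_2$ are i.i.d.\ exponential random variables with mean $1$.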
\begin{corollary}
\label{cor: 3q_max}
For the Three-queue system as defined in Section \ref{sec: 3q_model}, MaxWeight scheduling satisfies the functional equation given in Theorem \ref{theo: 3q_functional_eq} and the heavy traffic distribution in Theorem \ref{thm: 3q_dist}.
\end{corollary}
As mentioned in Section \ref{sec: 3q_model}, MaxWeight scheduling achieves state space collapse according to Definition \ref{def: 3q_ssc}. Now, Corollary \ref{cor: 3q_max} is a direct application of Theorem \ref{theo: 3q_functional_eq} and Theorem \ref{thm: 3q_dist}.
\subsubsection{Comparison with Input-queued switch}
The Three-queue system is a simpler queueing system than the Input-queued switch, but the two are analogous in basic structure. Even so, there are some distinctions in terms of the behavior of these two systems. The first difference between these two systems is the state space collapse result. For the Three-queue system, the state space collapse occurs to a two-dimensional subspace, and the two-dimensional representation of the projection of the queue length vector onto the corresponding subspace is unique. In contrast, for the Input-queued switch of size $n$, the state space collapse occurs to a subspace of dimension $2n-1$, and the projection of the queue length vector onto its corresponding subspace does not have a unique representation. This non-unique representation in the case of the Input-queued switch is, intuitively, the reason behind the additional $\Tilde{\Upsilon}$ term for the Input-queued switch as seen in Theorem \ref{thm: switch_dist}.
Another difference is in terms of the uniqueness of the solution of the functional equation of the two systems. The functional equation for the Three-queue system involves only two variables, and so it has a unique solution, as shown by Lemma \ref{lem: 3q_uniqueness}. In contrast, the functional equation for the Input-queued switch involves more than two variables, and currently we do not have a proof of the uniqueness of its solution. Finally, the third difference between these two systems is in terms of the condition on the variances of the arrival process under which the heavy traffic distribution is shown to be represented by using independent exponential random variables. For the Three-queue system, the variance condition in Theorem \ref{thm: 3q_dist} (i.e., $2\sigma_1^2 = \sigma_2^2 + \sigma_3^2$) is more general than the symmetric variance condition (i.e., $\boldsymbol \sigma^2 = \sigma^2 \mathbf I_{n^2}$) considered in Theorem \ref{thm: switch_dist}.
\section{Proofs for Three-queue system}
\label{app: 3q}
\subsection{Properties of the projection}
\label{app: 3q_projection}
\begin{lemma}
\label{lem: 3q_projection}
We define some matrices as follows:
\begin{align*}
\mathbf B = \begin{bmatrix}
1 & 1\\
1 & 0\\
0 & 1
\end{bmatrix}, &&
\mathbf D = \mathbf B^T\mathbf B = \begin{bmatrix}
2 & 1\\
1 & 2
\end{bmatrix}, &&
\mathbf D^{-1} = \frac{1}{3} \begin{bmatrix}
2 & -1 \\
-1 & 2
\end{bmatrix}, &&
\mathbf A = \mathbf{B}(\mathbf{B}^T\mathbf{B})^{-1}\mathbf{B}^T= \frac{1}{3} \begin{bmatrix}
2 & 1 & 1\\
1 & 2 & -1\\
1&-1&2
\end{bmatrix}.
\end{align*}
Let $\mathbf x \in \mathbb{C}^3$ and suppose $\mathbf x_{\|}$ denotes the projection of $\mathbf{x}$ onto the space $\mathcal{S}$, where $\mathcal{S}$ is the space spanned by the columns of $\mathbf B$. And $\mathbf x_{\perp} = \mathbf x - \mathbf x_{\|}$.
\begin{enumerate}
\item For any $\boldsymbol \phi \in \mathbb{C}^2$, the vector $\boldsymbol \theta = \mathbf B \boldsymbol \phi$ lies in the space $\mathcal{S}$. Also, for any $\mathbf x \in \mathbb{C}^3$,
\begin{equation*}
\langle \boldsymbol \theta, \mathbf{x} \rangle = \langle \boldsymbol \theta, \mathbf{x}_{\|} \rangle = \langle \mathbf{d}_1, \boldsymbol \phi \rangle x_{\| 2} + \langle \mathbf{d}_2, \boldsymbol \phi \rangle x_{\| 3},
\end{equation*}
where $\mathbf{d}_1$ and $\mathbf{d}_2$ are columns of $\mathbf D $.
\item The closed form expression for $\mathbf x_{\|}$ is given by $\mathbf x_{\|} = \mathbf A \mathbf{x}$. And the perpendicular component $\mathbf x_\perp$ is
\begin{equation*}
\mathbf x_{\perp} = \frac{1}{3}(x_2+x_3 -x_1)\begin{bmatrix}
-1\\1\\1
\end{bmatrix}.
\end{equation*}
\end{enumerate}
\end{lemma}
Lemma \ref{lem: 3q_projection} provides the properties of the projection of any vector onto the space $\mathcal{S}$. By the definition of the matrix $\mathbf{A}$, we have the relations $\mathbf{B}^T \mathbf{A} = \mathbf{B}^T$ and $\mathbf{A}\mathbf{B} = \mathbf{B}$. The proof of Lemma \ref{lem: 3q_projection} follows by a simple application of linear algebra; the mathematical details are provided below.
\begin{proof}[Proof of Lemma \ref{lem: 3q_projection}]
Proof of part 1 simply follows by the structure of the subspace $\mathcal{S}$. Observe that as $\boldsymbol \theta \in \mathcal{S}$, we have that $\langle \boldsymbol \theta, \mathbf{x}_\perp \rangle = 0$. This gives us that
\begin{align*}
\langle \boldsymbol \theta, \mathbf{x} \rangle = \langle \boldsymbol \theta, \mathbf{x}_{\|} \rangle &= \theta_1(x_{\| 2}+x_{\| 3})+\theta_2 x_{\| 2} +\theta_3 x_{\| 3} \\
& = (\phi_1+\phi_2)(x_{\| 2}+x_{\| 3})+\phi_1 x_{\| 2} +\phi_2 x_{\| 3}\\
& = (2\phi_1+\phi_2)x_{\| 2}+(\phi_1+2\phi_2)x_{\| 3 }\\
& = \langle \mathbf{d}_1, \boldsymbol \phi \rangle x_{\| 2} + \langle \mathbf{d}_2, \boldsymbol \phi \rangle x_{\| 3}.
\end{align*}
This completes the proof of part 1. The proof of part 2 follows simply by using the theory of projection onto a linear subspace.
\end{proof}
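As a quick numerical sanity check of Lemma \ref{lem: 3q_projection} (our own example): for $\mathbf x = (5,2,1)$, we have $x_2+x_3-x_1 = -2$, so
\begin{align*}
\mathbf x_{\perp} = -\frac{2}{3}\begin{bmatrix} -1\\1\\1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2\\-2\\-2 \end{bmatrix}, \qquad
\mathbf x_{\|} = \mathbf x - \mathbf x_{\perp} = \frac{1}{3}\begin{bmatrix} 13\\8\\5 \end{bmatrix} = \mathbf B \begin{bmatrix} 8/3\\5/3 \end{bmatrix},
\end{align*}
and indeed $x_{\| 1} = x_{\| 2} + x_{\| 3}$, so $\mathbf x_{\|} \in \mathcal{S}$.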
\subsection{Required Lemma}
\label{app: 3q_mgf_equivalence}
Before presenting the proof of the results for Three-queue system, we present a necessary Lemma as given below.
\begin{lemma}
\label{lem: 3q_mgf_equivalence}
Consider the Three-queue system as defined in Section \ref{sec: 3q_model} operating under a scheduling policy that achieves state space collapse according to Definition \ref{def: 3q_ssc}. For any $\tilde{\boldsymbol \theta} \in \mathbb{C}^3$, let $\boldsymbol \theta$ be its projection onto the space $\mathcal{S}$, let $\boldsymbol \phi \in \mathbb C^2$ be the unique vector such that $\boldsymbol \theta = \mathbf B \boldsymbol \phi$, and suppose $\boldsymbol \theta \in \boldsymbol \Theta $. Then we have that
\begin{enumerate}
\item \begin{align*}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E} \big[ e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle} \big] \big| < \infty, && \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \big|\mathbb{E}[u_2 e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle}] \big|<\infty, &&\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \big|\mathbb{E}[u_3 e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle}] \big| < \infty.
\end{align*}
The results hold even after replacing $\Tilde{\boldsymbol \theta} $ by $\boldsymbol \theta$.
\item \begin{equation*}
\lim_{\epsilon \rightarrow 0}\mathbb{E}[e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)}],
\end{equation*}
\item \begin{equation*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E}[ u_2e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\mathbb{E}[u_2 e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0}\frac{1}{\epsilon} \mathbb{E}[u_2 e^{\epsilon \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3}],
\end{equation*}
\item \begin{equation*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E}[ u_3e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\mathbb{E}[u_3 e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0}\frac{1}{\epsilon} \mathbb{E}[u_3 e^{\epsilon \langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2}],
\end{equation*}
\end{enumerate}
where all the expectation are taken under the steady state distribution.
\end{lemma}
According to Lemma \ref{lem: 3q_mgf_equivalence}, in order to characterize the heavy traffic steady state distribution of the Three-queue system, we just need to consider the set of $\boldsymbol \theta$ that lie in $\mathcal{S}$. This is a consequence of the state space collapse of the Three-queue system onto the subspace $\mathcal{S}$.
\begin{proof}[Proof of Lemma \ref{lem: 3q_mgf_equivalence}]
\begin{itemize}
\item[(1)]
As $\mathcal{S}$ is a linear subspace, write $\tilde{\boldsymbol \theta} = \boldsymbol \theta +\boldsymbol \theta_{\perp}$. Then,
\begin{align}
\label{eq: 3q_theta_relation}
\langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle &= \langle \boldsymbol \theta , \mathbf{q} \rangle + \langle \boldsymbol \theta_{\perp} , \mathbf{q} \rangle \nonumber\\
& \stackrel{(a)}{=} \langle \boldsymbol \theta , \mathbf{q} \rangle + \langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\perp} \rangle \nonumber\\
&\stackrel{(b)}{=} (\phi_1 +\phi_2)q_1 + \phi_1 q_2 + \phi_2 q_3 + \langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\perp} \rangle\nonumber\\
& = (2\phi_1 +\phi_2)q_2 +(\phi_1 +2\phi_2) q_3 + (\phi_1 +\phi_2) (q_1 - q_2 - q_3) + \langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\perp} \rangle\nonumber\\
& \stackrel{(c)}{=}\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3 - \langle 3(\phi_1 +\phi_2)\mathbf{1}_3, \mathbf{q}_{\perp} \rangle+ \langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\perp} \rangle\nonumber\\
& \stackrel{(d)}{=} \langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3 + \langle \boldsymbol \theta' , \mathbf{q}_{\perp} \rangle,
\end{align}
where (a) follows because $\langle \boldsymbol \theta_{\perp} , \mathbf{q}_\| \rangle =0$ by the property of projection; (b) follows by using $\boldsymbol \theta = \mathbf B \boldsymbol \phi$; (c) follows from the form of $\mathbf{q}_{\perp}$ in Lemma \ref{lem: 3q_projection}, where $\mathbf d_1$ and $\mathbf d_2$ are the columns of $\mathbf D$; and (d) follows by taking $\boldsymbol \theta' = \boldsymbol \theta_{\perp} - 3(\phi_1+\phi_2)\mathbf 1_3$.
Now, suppose $\mathbf{q}$ follows the steady state distribution of the Markov process $\{\mathbf{q}(t)\}_{t=0}^\infty$, then
\begin{align}
\label{eq: 3q_laplacelim}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E} \big[ e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle} \big] \big| & \leq \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[\big| e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3 + \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle)}\big| \big]\nonumber\\
& \stackrel{(a)}{\leq} \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[\big| e^{\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle}\big| \big]\nonumber\\
&\stackrel{(b)}{<} \infty,
\end{align}
where (a) follows by using $Re(\langle \mathbf{d}_1, \boldsymbol \phi \rangle ) \leq 0$ and $Re(\langle \mathbf{d}_2, \boldsymbol \phi \rangle ) \leq 0$ for any $\boldsymbol \phi $ such that $\mathbf B \boldsymbol \phi \in \boldsymbol \Theta$, together with $q_2, q_3 \geq 0$; and (b) follows by using Definition \ref{def: 3q_ssc}. The queue length vector evolves according to the equation,
\begin{align}
\label{eq: 3q_lindley}
\mathbf{q}(t+1) &= [\mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t)]^+ = \mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t) + \mathbf{u}(t),
\end{align}
where $\mathbf{u}(t)$ is the unused service in the time slot $t$.
Suppose $\mathbf{q}^+$ is the state of the Markov chain following the state $\mathbf{q}$, then, as the system is stable and $\mathbf{q}$ follows the steady state distribution, $\mathbf{q}^+$ also follows the steady state distribution and so,
\begin{align*}
\mathbb{E}[q_1^+ + q_2^+] - \mathbb{E}[ q_1+q_2] = 0,
\end{align*}
as the drift is zero in steady state.
Then, by using the Eq. \eqref{eq: 3q_lindley},
\begin{align}
\label{eq: 3q_unused_epsilon}
\mathbb{E}[ u_1+u_2] = \mathbb{E}[ s_1+s_2] -\lambda_1 - \lambda_2 = 1-\lambda_1 - \lambda_2 = \epsilon,
\end{align}
where the second equality follows because the chosen schedule can either be $(1,0,0)$ or $(0,1,1)$; third equality follows because $\boldsymbol\lambda = (1-\epsilon)\boldsymbol\nu$ where $\boldsymbol \nu \in \mathcal{F}$ as mentioned in Section \ref{sec: 3q_model}. By using a similar argument, we can show that $\mathbb{E}[ u_1+u_3] = \epsilon$. Now,
\begin{align}
\label{eq: 3q_laplace_boundarylim}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \big|\mathbb{E}[u_2 e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] \big| &\leq \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ u_2\big| e^{\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle}\big| \big]\nonumber\\
& \stackrel{(a)}{\leq } \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ u_2 e^{\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| } \big]\nonumber\\
& \stackrel{(b)}{\leq } \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [ u_2 ] + \|\boldsymbol \theta'\| \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ \epsilon u_2 \| \mathbf{q}_{\perp}\| e^{\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| } \big]\nonumber\\
&\stackrel{(c)}{\leq } 1 + \|\boldsymbol \theta'\| \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp}\| e^{\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| } \big]\nonumber\\
&\stackrel{(d)}{\leq } 1 + \|\boldsymbol \theta'\| \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp}\|^2 \big]^{\frac{1}{2}} \mathbb{E} \big[ e^{2\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| }\big]^{\frac{1}{2}}\nonumber\\
& \stackrel{(e)}{< } \infty,
\end{align}
where (a) follows as by Cauchy-Schwarz inequality, $| \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle| \leq \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp} \|$; (b) follows by using $e^x \leq 1+ xe^x$ for all $x\geq 0$; (c) follows as $\mathbb{E} [ u_2 ] \leq \epsilon$ and also $u_2 \leq 1$; (d) follows by using the Cauchy-Schwarz inequality again; and finally, (e) follows as by using Definition \ref{def: 3q_ssc},
\begin{align*}
\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp}\|^2 \big] < \infty, && \lim_{\epsilon\rightarrow 0}\mathbb{E} \big[ e^{2\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| }\big] < \infty.
\end{align*}
Note that the argument in Eq. \eqref{eq: 3q_laplace_boundarylim} works if we replace $u_2$ by $u_3$.
Also, by using the relation presented in Eq. \eqref{eq: 3q_theta_relation}, the arguments in Eq. \eqref{eq: 3q_laplacelim} and Eq. \eqref{eq: 3q_laplace_boundarylim} hold after replacing $\Tilde{\boldsymbol \theta}$ with $\boldsymbol \theta$ and $\boldsymbol \theta'$ with $-3(\phi_1 +\phi_2)\mathbf{1}_3$, which completes the result in part 1.
\item[(2)] From Eq. \eqref{eq: 3q_theta_relation}, $\langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle = \langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3 + \langle \boldsymbol \theta' , \mathbf{q}_{\perp} \rangle$. Then,
\begin{align*}
\left| \mathbb{E}[e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)}] \right| &\leq \mathbb{E}\big[\left|e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)} \right| \big| \big( 1- e^{\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle} \big) \big|\big] \nonumber\\
&\stackrel{(a)}{\leq} \mathbb{E}\bigg[\left| 1- e^{\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle} \right| \bigg]\nonumber \allowdisplaybreaks \\
& \stackrel{(b)}{\leq} \mathbb{E}\bigg[ |\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle| e^{\epsilon | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|} \bigg] \nonumber \allowdisplaybreaks \\
& \stackrel{(c)}{\leq} \mathbb{E}\big[ |\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|} \bigg]^{\frac{1}{2}} \nonumber \allowdisplaybreaks \\
& \stackrel{(d)}{\leq} \epsilon \| \boldsymbol \theta' \| \mathbb{E}\big[ \| \mathbf{q}_{\perp} \|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\|} \bigg]^{\frac{1}{2}}\nonumber
\end{align*}
where (a) follows by using $Re(\langle \mathbf{d}_1, \boldsymbol \phi \rangle ) \leq 0$ and $Re(\langle \mathbf{d}_2, \boldsymbol \phi \rangle ) \leq 0$ as $\mathbf B \boldsymbol \phi \in \boldsymbol \Theta $; (b) holds because $|e^x-1| \leq |x|e^{|x|}$ for any $x\in \mathbb{C}$; (c) and (d) hold by using the Cauchy-Schwarz inequality. Now, by the arguments presented in Eq. \eqref{eq: 3q_laplace_boundarylim}, $\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp}\|^2 \big]^{\frac{1}{2}} \mathbb{E} \big[ e^{2\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| }\big]^{\frac{1}{2}} < \infty$, and so,
\begin{equation*}
\lim_{\epsilon\rightarrow 0} \left| \mathbb{E}[e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)}] \right| = \lim_{\epsilon\rightarrow 0} \epsilon \| \boldsymbol \theta' \| \mathbb{E}\big[ \| \mathbf{q}_{\perp} \|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\|} \bigg]^{\frac{1}{2}} = 0.
\end{equation*}
Note that the same argument holds after replacing $\Tilde{\boldsymbol \theta}$ with $\boldsymbol \theta$ and $\boldsymbol \theta'$ with $-3(\phi_1 +\phi_2)\mathbf{1}_3$, which gives us the result
\begin{equation*}
\lim_{\epsilon\rightarrow 0} \left| \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] - \mathbb{E}[e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)}] \right| = 0.
\end{equation*}
This completes the proof of part 2.
\item[(3)] Using similar arguments as in part 2,
\begin{align*}
\frac{1}{\epsilon}\left| \mathbb{E}[u_2e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[u_2e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)}] \right|
&\leq \frac{1}{\epsilon} \mathbb{E}\bigg[ u_2 |\epsilon \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle| e^{\epsilon | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|} \bigg] \nonumber \allowdisplaybreaks \\
& = \mathbb{E}\bigg[ u_2 | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle| e^{\epsilon | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|} \bigg] \nonumber \allowdisplaybreaks \\
& \leq \mathbb{E}[u_2^{2}]^{\frac{1}{2}} \mathbb{E}\bigg[ | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|^2 e^{2\epsilon | \langle \boldsymbol \theta', \mathbf{q}_{\perp} \rangle|} \bigg]^{\frac{1}{2}} \nonumber \allowdisplaybreaks \\
& \leq \sqrt{\epsilon} \| \boldsymbol \theta' \| \mathbb{E}\big[ \| \mathbf{q}_{\perp} \|^{4} \big]^{\frac{1}{4}} \mathbb{E}\bigg[ e^{4\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\|} \bigg]^{\frac{1}{4}},
\end{align*}
where last two inequalities follow by Cauchy-Schwarz inequality and using $\mathbb{E}[u_2^{2}]=\mathbb{E}[u_2]\leq \epsilon$ as shown in Eq. \eqref{eq: 3q_unused_epsilon}. Now, by using Definition \ref{def: 3q_ssc},
\begin{align*}
\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp}\|^4 \big] < \infty, && \lim_{\epsilon\rightarrow 0}\mathbb{E} \big[ e^{4\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\| }\big] < \infty.
\end{align*}
and also, $u_2 = 1$ only if $q_2 = 0$, and so,
\begin{equation*}
u_2e^{\epsilon (\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2 + \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3)} = u_2e^{\epsilon \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3}.
\end{equation*}
Combining these with the above argument gives us,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\left| \mathbb{E}[u_2e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[u_2e^{\epsilon \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3}] \right| = \lim_{\epsilon \rightarrow 0} \sqrt{\epsilon} \| \boldsymbol \theta' \| \mathbb{E}\big[ \| \mathbf{q}_{\perp} \|^{4} \big]^{\frac{1}{4}} \mathbb{E}\bigg[ e^{4\epsilon \|\boldsymbol \theta'\|\| \mathbf{q}_{\perp}\|} \bigg]^{\frac{1}{4}} = 0.
\end{align*}
Note that the same argument holds after replacing $\Tilde{\boldsymbol \theta}$ with $\boldsymbol \theta$ and $\boldsymbol \theta'$ with $-3(\phi_1 +\phi_2)\mathbf{1}_3$, which gives us the result
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\left| \mathbb{E}[u_2e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] - \mathbb{E}[u_2e^{\epsilon \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3}] \right| = 0.
\end{align*}
This completes the proof of part 3. The proof of part 4 follows along the exact same lines as that of part 3.
\end{itemize}
\end{proof}
\subsection{Proof of Theorem \ref{theo: 3q_functional_eq}}
\label{app: 3q_functional_eq}
\begin{proof}[Proof of Theorem \ref{theo: 3q_functional_eq}]
As mentioned in the theorem, $\boldsymbol \theta \in \boldsymbol \Theta$ and suppose $\boldsymbol \phi \in \mathbb C^2$ such that $\boldsymbol \theta = \mathbf B \boldsymbol \phi$. With slight abuse of notation, we are using $\boldsymbol \theta$ and $\boldsymbol \phi$ interchangeably. Using Lemma \ref{lem: 3q_mgf_equivalence} presented in Appendix \ref{app: 3q_mgf_equivalence}, we get that, $|L(\boldsymbol \phi)| < \infty$, $|M_2(\boldsymbol \phi)| < \infty$ and $|M_3(\boldsymbol \phi)| < \infty$ for all $\mathbf B \boldsymbol \phi \in \Theta$. Suppose $\mathbf{q}$ follows the steady state distribution and $\mathbf{q}^+$ is the state of the Markov chain following the state $\mathbf{q}$, then, as the system is stable, $\mathbf{q}^+$ also follows the steady state distribution.
Now,
\begin{align*}
e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) = e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( -\epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle + \frac{\epsilon^2}{2} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^2 + \sum_{k = 3}^\infty \frac{(-\epsilon)^k}{k!} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^k \Big),
\end{align*}
where the equality follows from the Taylor expansion of the complex exponential function. For the second term, by using the Cauchy-Schwarz inequality and the fact that the $u_i$'s are binary variables, we get that,
\begin{align*}
\lim_{\epsilon \rightarrow 0 } \Big| \mathbb{E} \Big[ \langle \boldsymbol{\theta}, \mathbf{u} \rangle^2 e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big] \Big|
&\leq \|\boldsymbol{\theta}\|^2 \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ \| \mathbf{u} \|^2 \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] \\
& =\|\boldsymbol{\theta}\|^2 \sum_{i=1}^3 \lim_{\epsilon \rightarrow 0} \mathbb{E} [ u_i \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| ]\\
& \leq \|\boldsymbol{\theta}\|^2 \sum_{i=1}^3 \lim_{\epsilon \rightarrow 0} \mathbb{E} [ u_i]^{\frac{1}{2}} \mathbb{E} [ \big| e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| ]^{\frac{1}{2}}\\
&= 0.
\end{align*}
where
the last equality follows because $\lim_{\epsilon \rightarrow 0} \mathbb{E} [ \big| e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| ]$ exists by similar arguments as in Eq. \eqref{eq: 3q_laplacelim} and $\lim_{\epsilon \rightarrow 0} \mathbb{E} [ u_i] = 0$ for all $i$ by using Eq. \eqref{eq: 3q_unused_epsilon}. Now, as the $u_i$'s are Bernoulli random variables, $\|\mathbf{u}\|^2 \leq 3$, and thus,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \Big| \sum_{k = 3}^\infty \frac{(-1)^{k}\epsilon^{k-2}}{k!} \mathbb{E} \Big[ \langle \boldsymbol{\theta}, \mathbf{u} \rangle^k e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \Big] \Big|
&\leq \lim_{\epsilon \rightarrow 0} \sum_{k = 3}^\infty \frac{\epsilon^{k-2}}{k!} \mathbb{E} \Big[ \|\boldsymbol{\theta}\|^{k} \|\mathbf{u}\|^{k} \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] \\
&\leq \lim_{\epsilon \rightarrow 0} \sum_{k = 3}^\infty \frac{\epsilon^{k-2}}{k!} \|\boldsymbol{\theta}\|^{k} 3^{k} \mathbb{E} \Big[ \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] \\
&\leq \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] \sum_{k = 3}^\infty \frac{\epsilon^{k-2}}{k!} \|\boldsymbol{\theta}\|^{k} 3^{k} \\
& \stackrel{(a)}{=} \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] \times \lim_{\epsilon \rightarrow 0} \sum_{k = 3}^\infty \frac{\epsilon^{k-2}}{k!} \|\boldsymbol{\theta}\|^{k} 3^{k} \\
& \stackrel{(b)}{=} 0,
\end{align*}
where (a) and (b) hold by using that the first term, $\lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \big| \Big] =\lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ \big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle} \big| \Big]$, exists by using Lemma \ref{lem: 3q_mgf_equivalence} presented in Appendix \ref{app: 3q_mgf_equivalence}, and also $\lim_{\epsilon \rightarrow 0} \sum_{k = 3}^\infty \frac{\epsilon^{k-2}}{k!} \|\boldsymbol{\theta}\|^{k} 3^{k} $ exists and is equal to zero.
This gives us that
\begin{equation}
\label{eq: 3q_lhs_u}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Bigg[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( -\epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle + \frac{\epsilon^2}{2} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^2 + \sum_{k = 3}^\infty \frac{(-\epsilon)^k}{k!} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^k \Big) \Bigg] = - \Bigg \langle \boldsymbol{\theta}, \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \Big[ \mathbf{u} e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \Big] \Bigg \rangle.
\end{equation}
Thus, for any $i$,
\begin{align}
\label{eq: 3q_plus_remove}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \Big| \mathbb{E} \Big[ u_i \Big( e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} - e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle }\Big) \Big] \Big|
&\leq \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_i^2]^{\frac{1}{2}} \mathbb{E} \Big[ \Big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle} \Big( e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a} - \mathbf{s} +\mathbf{u} \rangle } - 1 \Big)\Big|^2 \Big]^{\frac{1}{2}} \nonumber\\
& \stackrel{(a)}{\leq } \lim_{\epsilon \rightarrow 0} \frac{1}{\sqrt{\epsilon}} \mathbb{E} \Big[ \big|e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle}\big| \Big| e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a} - \mathbf{s} +\mathbf{u} \rangle } - 1 \Big|^2 \Big]^{\frac{1}{2}} \allowdisplaybreaks\nonumber\\
& \leq \lim_{\epsilon \rightarrow 0} \frac{1}{\sqrt{\epsilon}} \mathbb{E} \Big[ \big|e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle}\big| \Big( e^{ \epsilon |\langle \boldsymbol{\theta}, \mathbf{a} - \mathbf{s} +\mathbf{u} \rangle| } - 1 \Big)^2 \Big]^{\frac{1}{2}} \allowdisplaybreaks\nonumber\\
& \stackrel{(b)}{\leq } \lim_{\epsilon \rightarrow 0} \frac{1}{\sqrt{\epsilon}} \mathbb{E} \Big[ \big|e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle}\big| \Big( e^{ 3\epsilon \| \boldsymbol{\theta}\| a_{\max} } - 1 \Big)^2 \Big]^{\frac{1}{2}} \allowdisplaybreaks\nonumber\\
& \leq \lim_{\epsilon \rightarrow 0} \frac{1}{\sqrt{\epsilon}}\Big( e^{ 3\epsilon \| \boldsymbol{\theta}\| a_{\max} } - 1 \Big) \mathbb{E} \Big[ \big|e^{ 2\epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle}\big| \Big]^{\frac{1}{2}} \allowdisplaybreaks\nonumber\\
& =0,
\end{align}
where (a) follows using $\mathbb{E}[u_i^{2}]=\mathbb{E}[u_i ]\leq \epsilon$ as shown in Eq. \eqref{eq: 3q_unused_epsilon}; and (b) follows as $|a_i -s_i +u_i| \leq a_{\max}$ for all $i\in \{1,2,3\}$. Combining this with Eq. \eqref{eq: 3q_lhs_u},
\begin{equation}
\label{eq: 3q_marginal_m}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Big[e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big] = - \Bigg \langle \boldsymbol{\theta}, \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \Big[ \mathbf{u} e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle} \Big] \Bigg \rangle = - \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\phi}) \rangle,
\end{equation}
where the last equality follows by using Lemma \ref{lem: 3q_mgf_equivalence} presented in Appendix \ref{app: 3q_mgf_equivalence}.
By using the equation $\mathbf{q}^+ = \mathbf{q} + \mathbf{a} - \mathbf{s} +\mathbf{u}$,
\begin{align}
\label{eq: 3q_fun_eq_theo_lhs}
\mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big]
& = \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle}\Big] \allowdisplaybreaks \nonumber \\
& \stackrel{(a)}{=} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle}\Big] \allowdisplaybreaks \nonumber \\
& \stackrel{(b)}{=} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \Bigg( \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - 1\Bigg) \allowdisplaybreaks \nonumber \\
& \stackrel{(c)}{=} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \Bigg( \mathbb{E} \Big[ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle + \frac{\epsilon^2}{2} \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^2 + \sum_{k=3}^\infty \frac{\epsilon^k}{k!} \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^k \Big] \Bigg),
\end{align}
where (a) follows as the arrivals are independent of the queue length vector and $\langle \boldsymbol \theta,\mathbf{s}\rangle = (\phi_1+\phi_2)s_1 +\phi_1 s_2 + \phi_2 s_3 = \langle \boldsymbol \phi, \mathbf{1}_2\rangle$ for both the schedules $(1,0,0)$ and $(0,1,1)$, so that $\langle \boldsymbol \theta,\mathbf{s}\rangle$ is also independent of $\mathbf{q}$; (b) holds as $\mathbf{q}$ follows the steady state distribution, and then $\mathbf{q}^{+}$ also follows the steady state distribution; and (c) follows from the Taylor expansion of the complex exponential function. Similar to $\langle \boldsymbol \theta,\mathbf{s}\rangle$, we have
\begin{equation*}
\mathbb{E}[\langle \boldsymbol \theta,\mathbf{a}\rangle] = \langle \boldsymbol \theta,\boldsymbol \lambda\rangle = (1-\epsilon) \langle \boldsymbol \theta,\boldsymbol \nu\rangle = (1-\epsilon) \langle \boldsymbol \phi, \mathbf{1}_2\rangle,
\end{equation*}
where the last equality holds because $\boldsymbol \nu\in \mathcal{F}$. Thus,
\begin{align*}
\mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle ] = - \epsilon \langle \boldsymbol \phi, \mathbf{1}_2 \rangle.
\end{align*}
This gives us that $\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle ] = - \langle \boldsymbol \phi, \mathbf{1}_2 \rangle$. Also,
\begin{align}
\label{eq: 3q_func_eq_variance}
\mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle^2 ] &= \text{Var} \big(\langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle\big) + \big(\mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle ] \big)^2 \nonumber \allowdisplaybreaks\\
& \stackrel{(a)}{=} \text{Var}\big(\langle \boldsymbol \theta , \mathbf{a} \rangle\big) + \epsilon^2\langle \boldsymbol \phi, \mathbf{1}_2 \rangle^2\nonumber \allowdisplaybreaks\\
& = \boldsymbol \theta^T \boldsymbol \sigma^2 \boldsymbol \theta + \epsilon^2 \langle \boldsymbol \phi, \mathbf{1}_2 \rangle^2 \nonumber\\
& = \boldsymbol \phi^T \mathbf{B}^T \boldsymbol \sigma^2 \mathbf{B} \boldsymbol \phi + \epsilon^2 \langle \boldsymbol \phi, \mathbf{1}_2 \rangle^2 \nonumber\allowdisplaybreaks\\
& \stackrel{(b)}{=} \langle \boldsymbol \phi, \boldsymbol \Gamma \boldsymbol \phi \rangle + \epsilon^2 \langle \boldsymbol \phi, \mathbf{1}_2 \rangle^2,
\end{align}
where (a) follows because $\langle \boldsymbol \theta , \mathbf{s}\rangle$ is constant and (b) follows by taking $\boldsymbol \Gamma = \mathbf{B}^T\boldsymbol \sigma^2 \mathbf{B}$. Thus,
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle^2 ] = \langle \boldsymbol \phi, \boldsymbol \Gamma \boldsymbol \phi \rangle.
\end{equation*}
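Both the schedule invariance of $\langle \boldsymbol\theta, \mathbf s\rangle$ used in step (a) of Eq. \eqref{eq: 3q_fun_eq_theo_lhs} and the quadratic form $\langle \boldsymbol\phi, \boldsymbol\Gamma\boldsymbol\phi\rangle$ above can be machine-checked. The following is a minimal \texttt{sympy} sketch (a sanity check, not part of the formal argument; the matrix $\mathbf B$ is inferred from the relations $\theta_1 = \phi_1+\phi_2$, $\theta_2=\phi_1$, $\theta_3=\phi_2$, and the diagonal covariance matrix is the one displayed later in Eq. \eqref{eq: 3q_provefunctionalterm1}):
\begin{verbatim}
import sympy as sp

phi1, phi2, s2, s3 = sp.symbols('phi1 phi2 s2 s3')  # s2, s3 stand for sigma_2^2, sigma_3^2
phiv = sp.Matrix([phi1, phi2])
B = sp.Matrix([[1, 1], [1, 0], [0, 1]])             # theta = B * phi
theta = B * phiv
# schedule invariance: <theta, s> = phi1 + phi2 for both schedules
for s in (sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 1])):
    assert sp.expand((theta.T * s)[0] - (phi1 + phi2)) == 0
Sigma = sp.diag((s2 + s3) / 2, s2, s3)              # diagonal covariance of the arrivals
Gamma = B.T * Sigma * B
print(sp.expand((phiv.T * Gamma * phiv)[0]))        # matches Eq. (3q_provefunctionalterm1) after factoring
\end{verbatim}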
Also, as arrivals are bounded by $a_{\max}$, $|a_i -s_i| \leq a_{\max} $ and so $\|\mathbf{a }- \mathbf{s } \| \leq 3a_{\max}$. Then,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \Big| \sum_{k=3}^\infty \frac{\epsilon^{k-1}}{k!} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^k ] \Big| & \leq \lim_{\epsilon \rightarrow 0} \sum_{k=3}^\infty \frac{\epsilon^{k-2}}{k!} \| \boldsymbol{\theta}\|^{k} \mathbb{E} [ \|\mathbf{a }- \mathbf{s } \|^k ] \leq \lim_{\epsilon \rightarrow 0} \sum_{k=3}^\infty \frac{\epsilon^{k-2}}{k!} \| \boldsymbol{\theta}\|^{k} 3^k a_{\max}^k = 0.
\end{align*}
Using the above arguments and the fact that $\lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big]$ exists by Lemma \ref{lem: 3q_mgf_equivalence} presented in Appendix \ref{app: 3q_mgf_equivalence}, we get that for any $\boldsymbol \phi \in \Phi$,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big]
& = \Big( -\langle \boldsymbol{\phi}, \mathbf{1}_2 \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle \Big) \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \\
& = \left( -\langle \boldsymbol{\phi}, \mathbf{1}_2 \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle \right) L(\boldsymbol{\phi}),
\end{align*}
where $L(\boldsymbol{\phi})$ is the Laplace transform of the heavy traffic distribution of the queue length, i.e., $L(\boldsymbol{\phi}) = \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big]$.
Combining this with Eq. \eqref{eq: 3q_marginal_m}, for any $\boldsymbol \phi \in \Phi$,
\begin{equation*}
\left( -\langle \boldsymbol{\phi},\mathbf{1}_2 \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle \right) L(\boldsymbol{\phi}) + \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\phi}) \rangle = 0
\end{equation*}
where
\begin{align*}
L(\boldsymbol \phi) = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] && M_2(\boldsymbol \phi) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_2e^{\epsilon \langle \mathbf{d}_2, \boldsymbol \phi \rangle q_3}] && M_3(\boldsymbol \phi) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_3e^{\epsilon\langle \mathbf{d}_1, \boldsymbol \phi \rangle q_2}].
\end{align*}
Note that $M_1(\boldsymbol\phi)=0$ because $u_1=0$ by the definition of the service process.
Finally, as $\boldsymbol \theta = \mathbf B \boldsymbol \phi$, we get that $\theta_2 = \phi_1$ and $\theta_3 = \phi_2$. Thus, $ \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\phi}) \rangle = \phi_1 M_2(\boldsymbol\phi)+\phi_2 M_3(\boldsymbol\phi)$. This gives us the functional equation in Eq. \eqref{eq: 3q_functional_eq}.
\end{proof}
\subsection{Proof of Lemma \ref{lem: 3q_uniqueness}}
\label{app: 3q_uniqueness}
\begin{proof}[Proof of Lemma \ref{lem: 3q_uniqueness}]
In order to prove Lemma \ref{lem: 3q_uniqueness}, we are going to use Lemma \ref{lem: functional_uniqueness}. We already know that the heavy traffic distribution exists.
Next, we apply a linear transformation to the variable $\boldsymbol \phi$ so that the Laplace transforms $M_2(\cdot)$ and $M_3(\cdot)$ each depend on only one variable. We pick $\psi_1 = \langle \mathbf{d}_1, \boldsymbol \phi \rangle$ and $\psi_2 = \langle \mathbf{d}_2, \boldsymbol \phi \rangle$, i.e., $\boldsymbol \psi = (\psi_1,\psi_2) = \mathbf D \boldsymbol \phi$. Thus, $\boldsymbol \phi = \mathbf D^{-1} \boldsymbol \psi $, i.e., $\phi_1 = \frac{1}{3} (2\psi_1 - \psi_2)$ and $\phi_2 =\frac{1}{3} ( 2\psi_2 - \psi_1)$. Then, $M_2(\boldsymbol \phi) = M_2(\psi_2)$ and $M_3(\boldsymbol \phi) = M_3(\psi_1)$.
The functional equation can be rewritten as,
\begin{equation*}
\left( - \frac{1}{3} \langle \boldsymbol{\psi},\mathbf{1}_2 \rangle + \frac{1}{2} \langle \boldsymbol{\psi} , \tilde{\boldsymbol \Gamma} \boldsymbol{\psi} \rangle \right) L(\boldsymbol{\psi}) +
\frac{1}{3} (2\psi_1 - \psi_2)M_2(\psi_2) +\frac{1}{3} (2\psi_2 - \psi_1)M_3(\psi_1) = 0,
\end{equation*}
where $\tilde{\boldsymbol \Gamma} =\mathbf D^{-1} \mathbf{B}^T\boldsymbol \sigma^2 \mathbf{B} \mathbf D^{-1} $. As the functional equation in Eq. \eqref{eq: 3q_functional_eq} holds for any $\boldsymbol \phi \in \Phi$, the rewritten functional equation above holds for any $\boldsymbol \psi$ such that $Re(\boldsymbol \psi) \leq \mathbf{0}_2$. Now, by using Lemma \ref{lem: 3q_mgf_equivalence} presented in Appendix \ref{app: 3q_mgf_equivalence},
\begin{align*}
L(\boldsymbol \psi) = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon (\psi_1 q_2 + \psi_2 q_3)}],
&& M_2(\psi_2) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_2e^{\epsilon \psi_2 q_3}] && M_3(\psi_1) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_3e^{\epsilon\psi_1 q_2}].
\end{align*}
Now, note that $M_2(\psi_2)$ can be rewritten as
\begin{equation*}
M_2(\psi_2) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_2\mathbf{1}_{\{q_2=0\}} e^{\epsilon \psi_2 q_3}].
\end{equation*}
This holds because if $u_2=1$, then there is unused service, which implies that $q_2 =0$ and so $u_2 = u_2\mathbf{1}_{\{q_2=0\}}$. Thus, $M_2(\psi_2)$ is the Laplace transform of a boundary measure that is restricted to the axis $q_2=0$. Similarly, $ M_3(\psi_1)$ is the Laplace transform of a boundary measure restricted to the axis $q_3=0$.
This matches the form we have in Eq. \eqref{eq: functional}. In this case, the reflection matrix $\mathbf R$ matches $\mathbf{D}^{-1}$, i.e., $\mathbf{R} = \mathbf{D}^{-1}$, and the interior drift is $\boldsymbol\alpha = -\frac{1}{3} \mathbf{1}_2$. It is now easy to observe that the required conditions in Lemma \ref{lem: functional_uniqueness} are satisfied. This gives us that the functional equation for the three-queue system has a unique solution.
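The change of variables above is easy to verify mechanically as well; a minimal \texttt{sympy} sketch (assuming, as above, $\mathbf D = \mathbf B^T \mathbf B$ with rows $(2,1)$ and $(1,2)$):
\begin{verbatim}
import sympy as sp

psi1, psi2 = sp.symbols('psi1 psi2')
D = sp.Matrix([[2, 1], [1, 2]])                 # D = B^T B for the three-queue system
phi = D.inv() * sp.Matrix([psi1, psi2])         # phi = D^{-1} psi
print(sp.simplify(phi[0] - (2*psi1 - psi2)/3))  # 0
print(sp.simplify(phi[1] - (2*psi2 - psi1)/3))  # 0
\end{verbatim}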
\end{proof}
\subsection{Proof of Theorem \ref{thm: 3q_dist}}
\label{app: 3q_dist}
\begin{proof}[Proof of Theorem \ref{thm: 3q_dist}]
Pick any $\Tilde{\boldsymbol \theta} \in \mathbb{C}^3 $ whose projection $\boldsymbol \theta$ onto the space $\mathcal{S}$ satisfies $\boldsymbol \theta = \mathbf B \boldsymbol \phi$ for some $\boldsymbol \phi \in \Phi$; then from Lemma \ref{lem: 3q_mgf_equivalence},
\begin{equation*}
\lim_{\epsilon \rightarrow 0}\mathbb{E}[e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] = L(\boldsymbol \phi).
\end{equation*}
This implies that it is enough to characterize the Laplace transform of the queue length vector for $\boldsymbol \theta \in \mathcal{S}$. Now, from here onwards we pick $\boldsymbol \theta \in \mathcal{S}$.
Pick $\boldsymbol \phi $ such that $\boldsymbol \theta = \mathbf{B}\boldsymbol\phi \in \boldsymbol \Theta$. The Laplace transform of the considered distribution is given by
\begin{align*}
\mathbb{E}[e^{\theta_1 (\Upsilon_1+\Upsilon_2)+\theta_2 \Upsilon_1 + \theta_3 \Upsilon_2 }] &= \mathbb{E}[e^{(2\theta_2 + \theta_3) \Upsilon_1+(\theta_2 + 2\theta_3)\Upsilon_2}] \\
&= \frac{1}{\bigg( 1- (2\theta_2 + \theta_3) \frac{3\sigma_2^2 + \sigma_3^2}{8}\bigg)\bigg( 1- (\theta_2 + 2\theta_3)\frac{\sigma_2^2 + 3\sigma_3^2}{8}\bigg)}.
\end{align*}
Now pick
\begin{align}
\label{eq: 3q_laplacesolution}
L(\boldsymbol \phi) = \frac{1}{\bigg( 1- \langle \mathbf{d}_1, \boldsymbol \phi \rangle \frac{3\sigma_2^2 + \sigma_3^2}{8}\bigg)\bigg( 1- \langle \mathbf{d}_2, \boldsymbol \phi \rangle\frac{\sigma_2^2 + 3\sigma_3^2}{8}\bigg)},\nonumber\\ M_2 (\boldsymbol \phi) = \frac{1}{1- \langle \mathbf{d}_2, \boldsymbol \phi \rangle\frac{\sigma_2^2 + 3\sigma_3^2}{8}}, && M_3 (\boldsymbol \phi) = \frac{1}{1- \langle \mathbf{d}_1, \boldsymbol \phi \rangle\frac{3\sigma_2^2 + \sigma_3^2}{8}}.
\end{align}
For this to satisfy the functional equation given in Eq. \eqref{eq: 3q_functional_eq}, we need,
\begin{equation*}
\left( -\langle \boldsymbol{\phi},\mathbf{1}_2 \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle \right) + \phi_1 \bigg( 1- \langle \mathbf{d}_1, \boldsymbol \phi \rangle\frac{3\sigma_2^2 + \sigma_3^2}{8}\bigg)+\phi_2 \bigg( 1- \langle \mathbf{d}_2, \boldsymbol \phi \rangle\frac{\sigma_2^2 + 3\sigma_3^2}{8}\bigg) = 0,
\end{equation*}
which can be simplified to the condition,
\begin{equation}
\label{eq: 3q_provefunctional}
4 \langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle - \phi_1 \langle \mathbf{d}_1, \boldsymbol \phi \rangle(3\sigma_2^2 + \sigma_3^2)-\phi_2 \langle \mathbf{d}_2, \boldsymbol \phi \rangle(\sigma_2^2 + 3\sigma_3^2) = 0.
\end{equation}
For the first term,
\begin{align}
\label{eq: 3q_provefunctionalterm1}
\langle \boldsymbol{\phi} , \boldsymbol \Gamma \boldsymbol{\phi} \rangle = \boldsymbol{\phi}^T\mathbf{B}^T \boldsymbol \sigma^2 \mathbf{B} \boldsymbol{\phi} &= \begin{bmatrix}
\phi_1 + \phi_2& \phi_1 & \phi_2
\end{bmatrix}
\begin{bmatrix}
\frac{\sigma_2^2 +\sigma_3^2}{2} & 0 & 0\\
0& \sigma_2^2 & 0\\
0 & 0 & \sigma_3^2
\end{bmatrix}
\begin{bmatrix}
\phi_1+ \phi_2 \\ \phi_1 \\ \phi_2
\end{bmatrix} \nonumber \allowdisplaybreaks\\
&=\phi_1^2 \Big( \frac{3\sigma_2^2+\sigma_3^2}{2} \Big) +\phi_2^2 \Big( \frac{\sigma_2^2+3\sigma_3^2}{2} \Big) + \phi_1\phi_2 (\sigma_2^2+\sigma_3^2).
\end{align}
Next,
\begin{align}
\label{eq: 3q_provefunctionalterm2}
\phi_1 \langle \mathbf{d}_1, \boldsymbol \phi \rangle(3\sigma_2^2 + \sigma_3^2)+\phi_2 \langle \mathbf{d}_2, \boldsymbol \phi \rangle(\sigma_2^2 + 3\sigma_3^2) &= \phi_1 (2\phi_1 + \phi_2)(3\sigma_2^2 + \sigma_3^2)+\phi_2 (\phi_1 + 2\phi_2)(\sigma_2^2 + 3\sigma_3^2)\nonumber\\
& = 2 \phi_1^2 ( 3\sigma_2^2+\sigma_3^2) +2\phi_2^2( \sigma_2^2+3\sigma_3^2) + 4\phi_1\phi_2 (\sigma_2^2+\sigma_3^2).
\end{align}
From Eq. \eqref{eq: 3q_provefunctionalterm1} and Eq. \eqref{eq: 3q_provefunctionalterm2}, we can easily observe that Eq. \eqref{eq: 3q_provefunctional} is satisfied. Thus, $L(\boldsymbol \phi)$, $M_2(\boldsymbol \phi)$ and $M_3(\boldsymbol \phi)$ given in Eq. \eqref{eq: 3q_laplacesolution} solve the functional equation given in Eq. \eqref{eq: 3q_functional_eq}. From Lemma \ref{lem: 3q_uniqueness}, we get that the solution given by Eq. \eqref{eq: 3q_laplacesolution} is the unique solution, and so $L(\boldsymbol \phi)$ in Eq. \eqref{eq: 3q_laplacesolution} gives the Laplace transform of the heavy traffic joint distribution. This completes the proof.
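As an independent sanity check (outside the formal proof), the polynomial identity in Eq. \eqref{eq: 3q_provefunctional} can also be verified symbolically; a minimal \texttt{sympy} sketch:
\begin{verbatim}
import sympy as sp

phi1, phi2, s2, s3 = sp.symbols('phi1 phi2 s2 s3')  # s2, s3 stand for sigma_2^2, sigma_3^2
phiv = sp.Matrix([phi1, phi2])
B = sp.Matrix([[1, 1], [1, 0], [0, 1]])
Gamma = B.T * sp.diag((s2 + s3) / 2, s2, s3) * B    # Gamma = B^T sigma^2 B
quad = (phiv.T * Gamma * phiv)[0]
d1, d2 = sp.Matrix([2, 1]), sp.Matrix([1, 2])       # columns of D = B^T B
expr = (4 * quad
        - phi1 * (d1.T * phiv)[0] * (3*s2 + s3)
        - phi2 * (d2.T * phiv)[0] * (s2 + 3*s3))
print(sp.expand(expr))                              # 0, i.e., Eq. (3q_provefunctional) holds
\end{verbatim}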
\end{proof}
\section{Proof of Results for Input-queued switch}
\subsection{Properties of the projection}
\label{sec: switch_projection}
\begin{lemma}
\label{lem: switch_projection}
Let $\mathbf B \in \{0,1\}^{n^2\times2n}$ be such that for any $1\leq i,j\leq n$,
\begin{equation*}
B_{i+n(j-1),i} =B_{i+n(j-1),n+j} = 1,
\end{equation*}
and all other elements are zero.
Let $\mathbf x \in \mathbb{C}^{n^2}$ and suppose $\mathbf x_{\|}$ denotes the projection of $\mathbf{x}$ onto the space $\mathcal{S}$ where
\begin{equation*}
\mathcal{S} = \Big\{ \mathbf{x} \in \mathbb{R}^{n^2} : \exists \mathbf{w} \in \mathbb{R}^{2n} \ s.t. \ \mathbf x = \mathbf
B \mathbf w\Big\}.
\end{equation*}
Also, let $\mathbf x_{\perp} = \mathbf x - \mathbf x_{\|}$.
Further, define $\mathbf{D} = \mathbf{B}^T\mathbf{B}$ and $\mathbf A = \mathbf{B}(\mathbf{B}^T\mathbf{B})^{-1}\mathbf{B}^T$. Then,
\begin{enumerate}
\item The matrix $\mathbf B$ satisfies
\begin{align*}
\mathbf{B}\begin{bmatrix} \mathbf{1}_n \\\mathbf{0}_n \end{bmatrix} = \mathbf{B}\begin{bmatrix} \mathbf{0}_n \\ \mathbf{1}_n \end{bmatrix} = \mathbf{1}_{n^2}, && \mathbf{B}^T \mathbf{1}_{n^2} = n\mathbf{1}_{2n}.
\end{align*}
This also gives us that
\begin{align*}
\sum_{i=1}^n \mathbf d_i =\mathbf{B}^T\mathbf{B}\begin{bmatrix} \mathbf{1}_n \\\mathbf{0}_n \end{bmatrix}= n\mathbf 1_{2n}, &&\sum_{j=1}^n \mathbf d_{n+j} =\mathbf{B}^T\mathbf{B}\begin{bmatrix} \mathbf{0}_n \\\mathbf{1}_n \end{bmatrix}= n\mathbf 1_{2n}, &&
\sum_{i=1}^{2n}\mathbf d_i = 2n \mathbf 1_{2n},
\end{align*}
where $\{\mathbf d_1, \dots , \mathbf d_{2n}\}$ are columns of $ \mathbf{D}$.
\item The closed form expression for $\mathbf x_{\|}$ is given by $\mathbf x_{\|} = \mathbf A \mathbf{x}$.
\item For any $\boldsymbol \phi \in \mathbb{C}^{2n}$, the vector $\boldsymbol \theta = \mathbf B \boldsymbol \phi$ lies in the space $\mathcal{S}$. Also, for any $\mathbf x \in \mathbb{C}^{n^2}$, suppose $\mathbf w \in \mathbb{C}^{2n}$ is such that $\mathbf x_{\|} =\mathbf B\mathbf{w}$. Then,
\begin{equation*}
\langle \boldsymbol \theta, \mathbf{x} \rangle = \langle \boldsymbol \theta, \mathbf{x}_{\|} \rangle = \boldsymbol \phi^T\mathbf{B}^T\mathbf{B} \mathbf w = \sum_{i=1}^{2n} \langle \boldsymbol \phi , \mathbf d_i \rangle w_i.
\end{equation*}
\end{enumerate}
\end{lemma}
Lemma \ref{lem: switch_projection} provides the properties of the projection of any vector onto the space $\mathcal{S}$. By the definition of the matrix $\mathbf{A}$ we have the relation that $\mathbf{B}^T \mathbf{A} = \mathbf{B}^T$ and $\mathbf{A}\mathbf{B} = \mathbf{B}$.
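The identities in part 1, including the column sums of $\mathbf D$, can be spot-checked numerically for a small $n$; a minimal \texttt{numpy} sketch (a sanity check only):
\begin{verbatim}
import numpy as np

n = 3
B = np.zeros((n * n, 2 * n))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        B[(i - 1) + n * (j - 1), i - 1] = 1      # B_{i+n(j-1), i} = 1
        B[(i - 1) + n * (j - 1), n + j - 1] = 1  # B_{i+n(j-1), n+j} = 1
D = B.T @ B
assert np.allclose(B @ np.r_[np.ones(n), np.zeros(n)], np.ones(n * n))
assert np.allclose(B.T @ np.ones(n * n), n * np.ones(2 * n))
assert np.allclose(D[:, :n].sum(axis=1), n * np.ones(2 * n))  # sum_i d_i
assert np.allclose(D[:, n:].sum(axis=1), n * np.ones(2 * n))  # sum_j d_{n+j}
assert np.allclose(D.sum(axis=1), 2 * n * np.ones(2 * n))     # sum of all columns
\end{verbatim}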
\begin{proof}[Proof of Lemma \ref{lem: switch_projection}]
Part 1 of Lemma \ref{lem: switch_projection} follows directly from the structure of the matrix $\mathbf B$. Part 2 follows as $\mathbf A$ is the projection matrix onto the column space of $\mathbf B$. For part 3, note that $\mathbf x_{\perp}$ is perpendicular to the subspace $\mathcal S$. Then, as $\boldsymbol \theta \in \mathcal{S}$, $\langle \boldsymbol \theta, \mathbf{x}_{\perp} \rangle =0$ and so,
\begin{align*}
\langle \boldsymbol \theta, \mathbf{x} \rangle &= \langle \boldsymbol \theta, \mathbf{x} - \mathbf{x}_\perp \rangle =\langle \boldsymbol \theta, \mathbf{x}_\| \rangle = \boldsymbol \phi^T\mathbf{B}^T\mathbf{B} \mathbf w = \sum_{i=1}^{2n} \langle \boldsymbol \phi , \mathbf d_i \rangle w_i,
\end{align*}
where the third equality follows as $\boldsymbol \theta = \mathbf B \boldsymbol \phi$ and $\mathbf x_{\|} =\mathbf B\mathbf{w}$; and the last equality follows by using the definition of $\mathbf D$.
\end{proof}
\subsection{Required Lemma}
\label{app: switch_mgf_equivalence}
Before presenting the results for the Input-queued switch, we present a required lemma, given below. For ease of notation, we consider $\boldsymbol \Phi = \{\boldsymbol \phi\in \mathbb C^{2n}: Re(\langle \mathbf d_i, \boldsymbol \phi \rangle) \leq 0, \ \forall 1\leq i\leq 2n\}$, where $\mathbf d_i$'s are the columns of the matrix $\mathbf D = \mathbf B^T \mathbf B$. Then, for any $\boldsymbol\phi \in \boldsymbol\Phi$, $\boldsymbol\theta = \mathbf B \boldsymbol \phi \in \boldsymbol \Theta$.
\begin{lemma}
\label{lem: switch_mgf_equivalence}
Consider the Input-queued switch system as defined in Section \ref{sec: switch_model} operating under a scheduling policy that achieves state space collapse according to Definition \ref{def: switch_ssc}. For any $\tilde{\boldsymbol \theta} \in \mathbb{C}^{n^2}$, let $\boldsymbol \theta$ be its projection onto the space $\mathcal{S}$ and suppose $\exists \boldsymbol \phi \in \boldsymbol \Phi$ such that $\boldsymbol \theta = \mathbf B \boldsymbol \phi $. Then we have that
\begin{enumerate}
\item \begin{align*}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E} \big[ e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle} \big] \big| < \infty, && \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \big|\mathbb{E}[u_k e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle}] \big|<\infty, \ \ \ \ \forall k \in \{1,2,\dots,n^2\}
\end{align*}
The results hold even after replacing $\Tilde{\boldsymbol \theta} $ by $\boldsymbol \theta$.
\item Suppose $\mathbf r \in \mathbb R^{2n}_{+}$ is such that $\mathbf{q}_{\| \mathcal{ K}} = \mathbf B \mathbf r$. Then,
\begin{equation*}
\lim_{\epsilon \rightarrow 0}\mathbb{E}[e^{\epsilon\langle \Tilde{\boldsymbol \theta}, \mathbf{q} \rangle }] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \sum_{l=1}^{2n} \langle \boldsymbol \phi,\mathbf d_l \rangle r_l}],
\end{equation*}
\item For all $k =i+n(j-1)$ such that $i,j\in \{1,2,\dots,n\}$, \begin{equation*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E}[ u_k e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\mathbb{E}[u_k e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}]
= \lim_{\epsilon \rightarrow 0}\frac{1}{\epsilon} \mathbb{E}[u_k e^{\epsilon \sum_{l=1, l\neq i,l\neq n+j}^{2n} \langle \boldsymbol \phi,\mathbf d_l \rangle r_l}],
\end{equation*}
\end{enumerate}
where all the expectations are taken under the steady state distribution.
\end{lemma}
Part 1 of Lemma \ref{lem: switch_mgf_equivalence} says that the Laplace transform of the heavy traffic distribution $(L(\boldsymbol \phi), \mathbf M(\boldsymbol \phi))$ exists. This is necessary to establish the functional equation given in Eq. \eqref{eq: switch_functional_eq}. Parts 2 and 3 of Lemma \ref{lem: switch_mgf_equivalence} say that the Laplace transform of the heavy traffic distribution depends only on the limiting distribution of the projection of the state vector $\mathbf{q}$ onto the cone $\mathcal{K}$.
\begin{proof}[Proof of Lemma \ref{lem: switch_mgf_equivalence}]
\begin{itemize}
\item[(1)]
As $\mathcal{S}$ is a linear subspace, we can write $\tilde{\boldsymbol \theta} = \boldsymbol \theta +\boldsymbol \theta_{\perp}$, where $\boldsymbol \theta_{\perp}$ is orthogonal to $\mathcal{S}$. Also, suppose $\mathbf r \in \mathbb R^{2n}_{+}$ is such that $\mathbf{q}_{\| \mathcal{ K}} = \mathbf B \mathbf r$. Then,
\begin{align}
\label{eq: switch_theta_relation}
\langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle &= \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\|\mathcal K} \rangle + \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle \nonumber\\
&= \langle \boldsymbol \theta , \mathbf{q}_{\|\mathcal K} \rangle + \langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\|\mathcal K} \rangle+ \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle \nonumber\\
&\stackrel{(a)}{=} \langle \boldsymbol \theta , \mathbf{q}_{\|\mathcal K} \rangle + \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle \nonumber\\
& \stackrel{(b)}{=} \sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i + \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle,
\end{align}
where (a) follows because $\mathbf{q}_{\|\mathcal K} \in \mathcal{ K} \subset \mathcal S$ and $\boldsymbol \theta_{\perp}$ is orthogonal to the subspace $\mathcal{S}$ and so, $\langle \boldsymbol \theta_{\perp} , \mathbf{q}_{\|\mathcal K} \rangle =0$; and (b) follows by using $\mathbf{q}_{\| \mathcal{ K}} = \mathbf B \mathbf r$ and $\boldsymbol \theta = \mathbf B \boldsymbol \phi$.
Now, suppose $\mathbf{q}$ follows the steady state distribution of the Markov process $\{\mathbf{q}(t)\}_{t=0}^\infty$, then
\begin{align}
\label{eq: switch_laplacelim}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E} \big[ e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q} \rangle} \big] \big| & = \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[\big| e^{\epsilon (\sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i + \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle)}\big| \big]\nonumber\\
& \stackrel{(a)}{\leq} \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[\big| e^{\epsilon \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle}\big| \big]\nonumber\\
& \stackrel{(b)}{\leq} \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ e^{\epsilon \| \Tilde{\boldsymbol \theta} \|\| \mathbf{q}_{\perp \mathcal K} \|} \big]\nonumber\\
&\stackrel{(c)}{<} \infty,
\end{align}
where (a) follows by using $Re(\langle \mathbf{d}_i, \boldsymbol \phi \rangle ) \leq 0, \forall i\in \{1,2,\dots 2n\} $ for any $\boldsymbol \phi \in \Phi $; (b) follows by the Cauchy-Schwarz inequality; and (c) follows by using Definition \ref{def: switch_ssc}. The queue length vector evolves according to the equation,
\begin{align}
\label{eq: switch_lindley}
\mathbf{q}(t+1) &= [\mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t)]^+ = \mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t) + \mathbf{u}(t),
\end{align}
where $\mathbf{u}(t)$ is the unused service in the time slot $t$.
Suppose $\mathbf{q}^+$ is the state of the Markov chain following the state $\mathbf{q}$, then, as the system is stable and $\mathbf{q}$ follows the steady state distribution, $\mathbf{q}^+$ also follows the steady state distribution and so, for any $i \in \{1,2,\dots ,n\}$,
\begin{align*}
\mathbb{E}\big[\sum_{j=1}^n q_{i + n(j-1)}^+ \big] - \mathbb{E}\big[ \sum_{j=1}^n q_{i + n(j-1)}\big] = 0,
\end{align*}
as the drift is zero in steady state.
Then, by using the Eq. \eqref{eq: switch_lindley}, for any $i \in \{1,2,\dots ,n\}$,
\begin{align}
\label{eq: switch_unused_epsilon}
\mathbb{E}\big[ \sum_{j=1}^n u_{i + n(j-1)}\big] = \mathbb{E}\big[ \sum_{j=1}^n s_{i + n(j-1)}\big] -\sum_{j=1}^n \lambda_{i + n(j-1)} = 1-\sum_{j=1}^n \lambda_{i + n(j-1)}= \epsilon,
\end{align}
where the second equality follows because $\sum_{j=1}^n s_{i + n(j-1)}=1$ for any schedule in $\mathcal X$; and the third equality follows because $\boldsymbol\lambda = (1-\epsilon)\boldsymbol\nu$ where $\boldsymbol \nu \in \mathcal{F}$ as mentioned in Section \ref{sec: switch_model}. By using a similar argument, we can show that $\mathbb{E}\big[ \sum_{i=1}^n u_{i + n(j-1)}\big] = \epsilon, \forall j \in \{1,2,\dots ,n\}$. Now, for any $k\in\{1,2,\dots,n^2\}$,
\begin{align}
\label{eq: switch_laplace_boundarylim}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \big|\mathbb{E}[u_k e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] \big| &\leq \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ u_k\big| e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle}\big| \big]\nonumber\\
& \stackrel{(a)}{\leq } \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ u_k e^{\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| } \big]\nonumber\\
& \stackrel{(b)}{\leq } \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [ u_k ] + \|\Tilde{\boldsymbol \theta}\| \lim_{\epsilon\rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \big[ \epsilon u_k \| \mathbf{q}_{\perp \mathcal K}\| e^{\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| } \big]\nonumber\\
&\stackrel{(c)}{\leq } 1 + \|\Tilde{\boldsymbol \theta}\| \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp \mathcal K}\| e^{\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| } \big]\nonumber\\
&\stackrel{(d)}{\leq } 1 + \|\Tilde{\boldsymbol \theta}\| \lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp \mathcal K}\|^2 \big]^{\frac{1}{2}} \mathbb{E} \big[ e^{2\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| }\big]^{\frac{1}{2}}\nonumber\\
& \stackrel{(e)}{< } \infty,
\end{align}
where (a) and (d) follow by the Cauchy-Schwarz inequality, $| \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle| \leq \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K} \|$; (b) follows by using $e^x \leq 1+ xe^x$ for all $x\geq 0$; (c) follows as, for any $k\in\{1,2,\dots,n^2\}$, $\mathbb{E} [ u_k ] \leq \epsilon$ and $u_k \leq 1$; and finally, (e) follows as, by using Definition \ref{def: switch_ssc},
\begin{align*}
\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp \mathcal K}\|^2 \big] < \infty, && \lim_{\epsilon\rightarrow 0}\mathbb{E} \big[ e^{2\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| }\big] < \infty.
\end{align*}
Also, as the arguments in Eq. \eqref{eq: switch_laplacelim} and Eq. \eqref{eq: switch_laplace_boundarylim} hold for any $\Tilde{\boldsymbol \theta} \in \mathbb C^{n^2}$, they also hold if we replace $\Tilde{\boldsymbol \theta}$ by $\boldsymbol \theta \in \mathcal S$.
\item[(2)] From Eq. \eqref{eq: switch_theta_relation}, $\langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle = \sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i + \langle \Tilde{\boldsymbol \theta} , \mathbf{q}_{\perp \mathcal K} \rangle$. Then,
\begin{align*}
\left| \mathbb{E}[e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[e^{\epsilon \sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i }] \right|
&= \mathbb{E}\big[\left|e^{\epsilon \sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i } \right| \big| \big( 1- e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle} \big) \big|\big] \nonumber\\
&\stackrel{(a)}{\leq} \mathbb{E}\bigg[\left| 1- e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle} \right| \bigg]\nonumber \allowdisplaybreaks \\
& \stackrel{(b)}{\leq} \mathbb{E}\bigg[ |\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle| e^{\epsilon | \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle|} \bigg] \nonumber \allowdisplaybreaks \\
& \stackrel{(c)}{\leq} \mathbb{E}\big[ |\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon | \langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle|} \bigg]^{\frac{1}{2}} \nonumber \allowdisplaybreaks \\
& \stackrel{(d)}{\leq} \epsilon \| \Tilde{\boldsymbol \theta} \| \mathbb{E}\big[ \| \mathbf{q}_{\perp \mathcal K} \|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\|} \bigg]^{\frac{1}{2}}\nonumber
\end{align*}
where (a) follows by using $Re(\langle \mathbf{d}_i, \boldsymbol \phi \rangle ) \leq 0$ for all $i \in \{1,2,\dots,2n\}$ for any $\boldsymbol \phi \in \Phi $; (b) holds because $|e^x-1| \leq |x|e^{|x|}$ for any $x\in \mathbb{C}$; and (c) and (d) hold by using the Cauchy-Schwarz inequality. Now, by the arguments presented in Eq. \eqref{eq: switch_laplace_boundarylim}, $\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp \mathcal K}\|^2 \big]^{\frac{1}{2}} \mathbb{E} \big[ e^{2\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\| }\big]^{\frac{1}{2}} < \infty$, and so,
\begin{equation*}
\lim_{\epsilon\rightarrow 0} \left| \mathbb{E}[e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] - \mathbb{E}[e^{\epsilon \sum_{i=1}^{2n} \langle \boldsymbol \phi,\mathbf d_i \rangle r_i }] \right| = \lim_{\epsilon\rightarrow 0} \epsilon \| \Tilde{\boldsymbol \theta} \| \mathbb{E}\big[ \| \mathbf{q}_{\perp \mathcal K} \|^{2} \big]^{\frac{1}{2}} \mathbb{E}\bigg[ e^{2\epsilon \|\Tilde{\boldsymbol \theta}\|\| \mathbf{q}_{\perp \mathcal K}\|} \bigg]^{\frac{1}{2}} = 0.
\end{equation*}
Note that, as the above argument holds for any $\Tilde{\boldsymbol \theta} \in \mathbb C^{n^2}$, it also holds if we replace $\Tilde{\boldsymbol \theta}$ by $\boldsymbol \theta \in \mathcal S$.
This completes the proof of part 2.
\item[(3)] Suppose $k = i+n(j-1)$ where $i,j\in\{1,2,\dots, n\}$. If $u_k = 1$, then $q_k = 0$, which implies that $q_{\perp \mathcal K, i+n(j-1)}+ r_i +r_{n+j} = 0$ since $\mathbf{q}_{\| \mathcal K} = \mathbf B \mathbf r$. Now, as $r_i \geq 0$ and $r_{n+j} \geq 0$, we get
\begin{align*}
|r_i| \leq |\mathbf{q}_{\perp \mathcal K, i+n(j-1)}|\leq \|\mathbf{q}_{\perp \mathcal K}\|, && |r_{n+j}| \leq |\mathbf{q}_{\perp \mathcal K, i+n(j-1)}|\leq \|\mathbf{q}_{\perp \mathcal K}\|.
\end{align*}
This gives us that,
\begin{align*}
|\langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle + \langle \boldsymbol \phi,\mathbf d_i \rangle r_i +\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle r_{n+j}| \leq \big( \| \Tilde{\boldsymbol \theta}\| + |\langle \boldsymbol \phi,\mathbf d_i \rangle| + |\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle| \big) \| \mathbf{q}_{\perp \mathcal K} \|.
\end{align*}
Now, denoting $\theta' = \| \Tilde{\boldsymbol \theta}\| + |\langle \boldsymbol \phi,\mathbf d_i \rangle| + |\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle|$,
\begin{align*}
\frac{1}{\epsilon}\Big| \mathbb{E}[u_k e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] & - \mathbb{E}[u_k e^{\epsilon \sum_{l=1, l\neq i,l\neq n+j}^{2n} \langle \boldsymbol \phi,\mathbf d_l \rangle r_l}] \Big| \\
&=\frac{1}{\epsilon}\mathbb{E}\Big[u_k \left|e^{\epsilon \sum_{l=1,l\neq i,l\neq n+j}^{2n} \langle \boldsymbol \phi,\mathbf d_l \rangle r_l } \right| \big| \big( 1- e^{\epsilon (\langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle + \langle \boldsymbol \phi,\mathbf d_i \rangle r_i +\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle r_{n+j})} \big) \big|\Big] \nonumber \allowdisplaybreaks\\
&\stackrel{(a)}{\leq} \frac{1}{\epsilon}\mathbb{E}\Big[u_k \big| 1- e^{\epsilon (\langle \Tilde{\boldsymbol \theta}, \mathbf{q}_{\perp \mathcal K} \rangle + \langle \boldsymbol \phi,\mathbf d_i \rangle r_i +\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle r_{n+j})} \big|\Big] \nonumber \allowdisplaybreaks\\
&\stackrel{(b)}{\leq} \frac{1}{\epsilon}\mathbb{E}\Big[u_k \big( e^{\epsilon \theta' \|\mathbf{q}_{\perp \mathcal K}\|} -1 \big)\Big] \nonumber \allowdisplaybreaks\\
& \stackrel{(c)}{\leq} \mathbb{E}\bigg[ u_k \theta' \|\mathbf{q}_{\perp \mathcal K}\| e^{\epsilon \theta' \|\mathbf{q}_{\perp \mathcal K}\|} \bigg] \nonumber \allowdisplaybreaks \\
& \leq \mathbb{E}[u_k^{2}]^{\frac{1}{2}} \mathbb{E}\bigg[ (\theta' \|\mathbf{q}_{\perp \mathcal K}\|)^2 e^{2\epsilon \theta' \|\mathbf{q}_{\perp \mathcal K}\|} \bigg]^{\frac{1}{2}} \nonumber \allowdisplaybreaks \\
& \leq \sqrt{\epsilon} \theta' \mathbb{E}\big[ \| \mathbf{q}_{\perp \mathcal K} \|^{4} \big]^{\frac{1}{4}} \mathbb{E}\bigg[ e^{4\epsilon \theta'\| \mathbf{q}_{\perp \mathcal K}\|} \bigg]^{\frac{1}{4}},
\end{align*}
where (a) follows by using $Re(\langle \mathbf{d}_i, \boldsymbol \phi \rangle ) \leq 0$ for all $i \in \{1,2,\dots,2n\}$ for any $\boldsymbol \phi \in \Phi $; (b) follows as $\| \Tilde{\boldsymbol \theta}\| + |\langle \boldsymbol \phi,\mathbf d_i \rangle| + |\langle \boldsymbol \phi,\mathbf d_{n+j} \rangle| = \theta'$; (c) holds because $|e^x-1| \leq |x|e^{|x|}$ for any $x\in \mathbb{C}$; and the last two inequalities follow by the Cauchy-Schwarz inequality and by using $\mathbb{E}[u_k^{2}]=\mathbb{E}[u_k]\leq \epsilon$ as shown in Eq. \eqref{eq: switch_unused_epsilon}. Now, by using Definition \ref{def: switch_ssc},
\begin{align*}
\lim_{\epsilon\rightarrow 0} \mathbb{E} \big[ \| \mathbf{q}_{\perp \mathcal K}\|^4 \big] < \infty, && \lim_{\epsilon\rightarrow 0}\mathbb{E} \big[ e^{4\epsilon \theta'\| \mathbf{q}_{\perp \mathcal K}\| }\big] < \infty.
\end{align*}
Combining these with the above argument gives us,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon}\Big| \mathbb{E}[u_k e^{\epsilon \langle \tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] & - \mathbb{E}[u_k e^{\epsilon \sum_{l=1, l\neq i,l\neq n+j}^{2n} \langle \boldsymbol \phi,\mathbf d_l \rangle r_l}] \Big| = \lim_{\epsilon \rightarrow 0} \sqrt{\epsilon} \theta' \mathbb{E}\big[ \| \mathbf{q}_{\perp \mathcal K} \|^{4} \big]^{\frac{1}{4}} \mathbb{E}\bigg[ e^{4\epsilon \theta'\| \mathbf{q}_{\perp \mathcal K}\|} \bigg]^{\frac{1}{4}} = 0.
\end{align*}
Note that the same argument holds after replacing $\Tilde{\boldsymbol \theta}$ with $\boldsymbol \theta$.
\end{itemize}
\end{proof}
\subsection{Proof of Theorem \ref{thm: switch_functional_eq}}
\label{app: switch_functional_eq}
\begin{proof}[Proof of Theorem \ref{thm: switch_functional_eq}]
For ease of notation, we consider $\boldsymbol \Phi = \{\boldsymbol \phi\in \mathbb C^{2n}: Re(\langle \mathbf d_i, \boldsymbol \phi \rangle) \leq 0, \ \forall 1\leq i\leq 2n\}$, where $\mathbf d_i$'s are the columns of the matrix $\mathbf D = \mathbf B^T \mathbf B$. Then, for any $\boldsymbol\phi \in \boldsymbol\Phi$, $\boldsymbol\theta = \mathbf B \boldsymbol \phi \in \boldsymbol \Theta$. Also, with a slight abuse of notation, we use $\boldsymbol \theta$ and $\boldsymbol \phi$ interchangeably. Using Lemma \ref{lem: switch_mgf_equivalence} presented in Appendix \ref{app: switch_mgf_equivalence}, we get that $|L(\boldsymbol \phi)| < \infty$ and $|M_k(\boldsymbol \phi)| < \infty, \forall k\in \{1,2,\dots,n^2\}$, for all $\boldsymbol \phi \in \Phi$. Suppose $\mathbf{q}$ follows the steady state distribution and $\mathbf{q}^+$ is the state of the Markov chain following the state $\mathbf{q}$; then, as the system is stable, $\mathbf{q}^+$ also follows the steady state distribution.
Then,
\begin{align}
\label{eq: switch_mfunction}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Bigg[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Bigg] &=\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Bigg[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( -\epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle + \frac{\epsilon^2}{2} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^2 + \sum_{k = 3}^\infty \frac{\epsilon^k}{k!} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^k \Big) \Bigg]\nonumber\allowdisplaybreaks\\
&\stackrel{(a)}{=} - \Bigg \langle \boldsymbol{\theta}, \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \Big[ \mathbf{u} e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle} \Big] \Bigg \rangle \nonumber\allowdisplaybreaks\\
&\stackrel{(b)}{=} - \Bigg \langle \boldsymbol{\theta}, \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} \Big[ \mathbf{u} e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle} \Big] \Bigg \rangle \nonumber\allowdisplaybreaks\\
&= - \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\phi }) \rangle,
\end{align}
where (a) and (b) follow along the exact same lines as Eq. \eqref{eq: 3q_lhs_u} and Eq. \eqref{eq: 3q_plus_remove} in the proof of Theorem \ref{theo: 3q_functional_eq}. Now, by using a similar argument as in Eq. \eqref{eq: 3q_fun_eq_theo_lhs},
\begin{align*}
\mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big]
= \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \Bigg( \mathbb{E} \Big[ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle + \frac{\epsilon^2}{2} \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^2 + \sum_{k=3}^\infty \frac{\epsilon^k}{k!} \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^k \Big] \Bigg).
\end{align*}
By the definition of the subspace $\mathcal{S}$, for any schedule in $\mathcal{X}$, $\langle \boldsymbol \theta,\mathbf{s}\rangle = \langle \boldsymbol \phi, \mathbf 1_{2n} \rangle$. Similarly,
\begin{equation*}
\mathbb{E}[\langle \boldsymbol \theta,\mathbf{a}\rangle] = \langle \boldsymbol \theta,\boldsymbol \lambda \rangle = (1-\epsilon) \langle \boldsymbol \theta,\boldsymbol \nu \rangle = (1-\epsilon) \langle \boldsymbol \phi, \mathbf 1_{2n} \rangle,
\end{equation*}
where the last equality holds because $\boldsymbol \nu\in \mathcal{F}$. Thus,
\begin{align*}
\mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle ]
& = - \epsilon \langle \boldsymbol \phi, \mathbf{1}_{2n} \rangle.
\end{align*}
This gives us that $\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle ] = - \langle \boldsymbol \phi, \mathbf{1}_{2n} \rangle$. Also, by making similar argument as in Eq. \eqref{eq: 3q_func_eq_variance},
\begin{align*}
\mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle^2 ] &= \langle \boldsymbol \phi, \boldsymbol \Gamma \boldsymbol \phi \rangle + \epsilon^2 \langle \boldsymbol \phi, \mathbf{1}_{2n} \rangle^2,
\end{align*}
where $\boldsymbol \Gamma = \mathbf{B}^T\boldsymbol \sigma^2 \mathbf{B}$. Thus,
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle^2 ] = \langle \boldsymbol \phi, \boldsymbol \Gamma \boldsymbol \phi \rangle.
\end{equation*}
Now, the arrival and the service processes are bounded, i.e., $a_{i+n(j-1)} \leq a_{\max} $ for all $i,j \in \{1,\dots,n\}$, and so
\begin{align*}
\lim_{\epsilon \rightarrow 0} \Big| \sum_{k=3}^\infty \frac{\epsilon^{k-1}}{k!} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^k ] \Big|& \leq \lim_{\epsilon \rightarrow 0} \sum_{k=3}^\infty \frac{\epsilon^{k-2}}{k!} \| \boldsymbol{\theta}\|^{k} \mathbb{E} [ \|\mathbf{a }- \mathbf{s } \|^k ] \\
& \leq \lim_{\epsilon \rightarrow 0} \sum_{k=3}^\infty \frac{\epsilon^{k-2}}{k!} \| \boldsymbol{\theta}\|^{k} n^{2k}a_{\max}^k = 0.
\end{align*}
Using the above arguments, we get that
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big]
&= \Big( -\langle \boldsymbol{\phi}, \mathbf{1}_{2n} \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol\Gamma \boldsymbol{\phi} \rangle \Big) \lim_{\epsilon \rightarrow 0} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \\
&= \left( -\langle \boldsymbol{\phi}, \mathbf{1}_{2n} \rangle + \frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol\Gamma \boldsymbol{\phi} \rangle \right) L(\boldsymbol{\phi}),
\end{align*}
Combining this with Eq. \eqref{eq: switch_mfunction} gives us the functional equation as in Eq. \eqref{eq: switch_functional_eq}.
\end{proof}
\subsection{Proof of Theorem \ref{thm: switch_dist}}
\label{app: switch_dist}
\begin{proof}[Proof for Theorem \ref{thm: switch_dist}]
For ease of notation, we consider $\boldsymbol \Phi = \{\boldsymbol \phi\in \mathbb C^{2n}: Re(\langle \mathbf d_i, \boldsymbol \phi \rangle) \leq 0, \ \forall 1\leq i\leq 2n\}$, where $\mathbf d_i$'s are the columns of the matrix $\mathbf D = \mathbf B^T \mathbf B$. Then, for any $\boldsymbol\phi \in \boldsymbol\Phi$, $\boldsymbol\theta = \mathbf B \boldsymbol \phi \in \boldsymbol \Theta$. Also, with a slight abuse of notation, we use $\boldsymbol \theta$ and $\boldsymbol \phi$ interchangeably. Using Lemma \ref{lem: switch_projection}, and as $\boldsymbol \theta = \mathbf{B} \boldsymbol \phi$,
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \epsilon \langle \boldsymbol \theta, \mathbf{q}\rangle = \sum_{i=1}^{2n} \langle \boldsymbol \phi , \mathbf d_i \rangle (\Upsilon_i - \Tilde{\Upsilon} ),
\end{equation*}
where $\Tilde{\Upsilon} = \min_{1\leq k\leq 2n} \Upsilon_k $.
For any $i$ and $j \neq i$, due to the memoryless property of exponential random variables, $\{\Upsilon_{j} - \Upsilon_{i} | \Tilde{\Upsilon} = \Upsilon_{i}\}$ is an exponential random variable with mean $\frac{\sigma^2}{2}$. Also, for any distinct $j,k \neq i$, $\{\Upsilon_{j} - \Upsilon_{i} | \Tilde{\Upsilon} = \Upsilon_{i}\}$ and $\{\Upsilon_{k} - \Upsilon_{i} | \Tilde{\Upsilon} = \Upsilon_{i}\}$ are independent of each other. And as $\{\Upsilon_1,\dots,\Upsilon_{2n}\}$ are independent and identically distributed, $\mathbb{P}(\Tilde{\Upsilon} = \Upsilon_{i}) = \frac{1}{2n}$ for any $i$. Thus, the Laplace transform of the considered distribution is given by
\begin{align*}
\mathbb{E}[ e^{\sum_{j=1}^{2n} \langle \boldsymbol \phi , \mathbf d_j \rangle (\Upsilon_j - \Tilde{\Upsilon} )}]
& = \sum_{i=1}^{2n} \mathbb{P}(\Tilde{\Upsilon} = \Upsilon_{i}) \mathbb{E}[ e^{\sum_{j=1}^{2n} \langle \boldsymbol \phi , \mathbf d_j \rangle (\Upsilon_j - \Tilde{\Upsilon} )} | \Tilde{\Upsilon} = \Upsilon_{i}] \allowdisplaybreaks\\
& = \sum_{i=1}^{2n} \mathbb{P}(\Tilde{\Upsilon} = \Upsilon_{i}) \prod_{j\neq i} \mathbb{E}[ e^{\langle \boldsymbol \phi , \mathbf d_j \rangle (\Upsilon_j - \Tilde{\Upsilon} )} | \Tilde{\Upsilon} = \Upsilon_{i}]\allowdisplaybreaks\\
& = \sum_{i=1}^{2n} \frac{1}{2n} \times \frac{ 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2} }{ \prod_{j}\big( 1- \langle \boldsymbol \phi , \mathbf d_j \rangle \frac{\sigma^2}{2}\big)}\\
& = \frac{ 1 - \langle \boldsymbol \phi , \mathbf 1_{2n}\rangle \frac{\sigma^2}{2}\ }{\prod_{j}\big( 1- \langle \boldsymbol \phi , \mathbf d_j \rangle \frac{\sigma^2}{2}\big)}.
\end{align*}
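This closed form lends itself to a quick Monte Carlo spot-check (a rough numerical sanity check, not part of the proof; $\sigma^2/2$ is the common mean of the exponentials, and $\boldsymbol\phi$ is chosen componentwise negative so that $\langle \mathbf d_j,\boldsymbol\phi\rangle \leq 0$ for all $j$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, sig2 = 2, 1.3                           # sig2 plays the role of sigma^2
B = np.zeros((n * n, 2 * n))
for i in range(n):
    for j in range(n):
        B[i + n * j, i] = B[i + n * j, n + j] = 1
D = B.T @ B
phi = -np.array([0.30, 0.10, 0.20, 0.40])  # <d_j, phi> <= 0 for all j
U = rng.exponential(sig2 / 2, size=(400_000, 2 * n))
diff = U - U.min(axis=1, keepdims=True)    # Upsilon_j - min_k Upsilon_k
mc = np.exp(diff @ (D @ phi)).mean()
closed = (1 - phi.sum() * sig2 / 2) / np.prod(1 - (D @ phi) * sig2 / 2)
print(mc, closed)                          # agree up to Monte Carlo error
\end{verbatim}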
Thus, for any $i,j \in \{1,\dots,n\}$,
\begin{align}
\label{eq: switch_laplacesolution}
L(\boldsymbol \phi) = \frac{ 1 - \langle \boldsymbol \phi , \mathbf 1_{2n}\rangle \frac{\sigma^2}{2}\ }{\prod_{j}\big( 1- \langle \boldsymbol \phi , \mathbf d_j \rangle \frac{\sigma^2}{2}\big)}, && M_{i+n(j-1)}(\boldsymbol \phi)= \frac{\big( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\big)\times\big( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\big)}{n\prod_{k}\big( 1- \langle \boldsymbol \phi , \mathbf d_k \rangle \frac{\sigma^2}{2}\big)}.
\end{align}
Now, for this to satisfy the functional equation given in Eq. \eqref{eq: switch_functional_eq}, we need,
\begin{align}
\label{eq: switch_provefunctional}
\Big( -\langle \boldsymbol{\phi}, \mathbf{1}_{2n} \rangle + &\frac{1}{2} \langle \boldsymbol{\phi} , \boldsymbol\Gamma \boldsymbol{\phi} \rangle \Big) \left(1 - \langle \boldsymbol \phi , \mathbf 1_{2n}\rangle \frac{\sigma^2}{2}\right) \nonumber\\
&= - \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n \theta_{i+n(j-1)} \times\left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\times\left( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\right).
\end{align}
Under the symmetric variance condition $\boldsymbol \sigma^2 = \sigma^2 \mathbf{I}_{n^2}$, we have
\begin{align}
\label{eq: switch_lhs}
\langle \boldsymbol{\phi} , \boldsymbol\Gamma \boldsymbol{\phi} \rangle = \boldsymbol{\phi}^T \mathbf{B}^T \boldsymbol \sigma^2 \mathbf{B}\boldsymbol{\phi} = \sigma^2\boldsymbol{\phi}^T \mathbf{B}^T \mathbf{I}_{n^2} \mathbf{B}\boldsymbol{\phi} = \sigma^2\boldsymbol{\phi}^T \mathbf{B}^T \mathbf{B}\boldsymbol{\phi} = \sigma^2 \langle\boldsymbol{\theta}, \boldsymbol{\theta} \rangle.
\end{align}
The RHS in the Eq. \eqref{eq: switch_provefunctional} can be simplified as follows,
\begin{align*}
\frac{1}{n} \sum_{i=1}^n\sum_{j=1}^n &\theta_{i+n(j-1)} \times \left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right) \times\left( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\right)\\
&= \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n (\phi_{i} +\phi_{n+j}) \times \left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\times\left( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\right)\allowdisplaybreaks\\
& = \frac{1}{n}\sum_{i=1}^n \phi_{i}\left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\sum_{j=1}^n\left( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\right)\allowdisplaybreaks\\
& \quad \quad+\frac{1}{n}\sum_{j=1}^n \phi_{n+j}\left( 1- \langle \boldsymbol \phi , \mathbf d_{n+j} \rangle \frac{\sigma^2}{2}\right)\sum_{i=1}^n\left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\allowdisplaybreaks\\
& = \sum_{i=1}^{n} \phi_{i}\left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\left( 1- \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle \frac{\sigma^2}{2}\right)+ \sum_{j=n+1}^{2n} \phi_{j}\left( 1- \langle \boldsymbol \phi , \mathbf d_j \rangle \frac{\sigma^2}{2}\right)\left( 1- \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle\frac{\sigma^2}{2}\right)\allowdisplaybreaks\\
& = \left( 1- \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle \frac{\sigma^2}{2}\right)\sum_{i=1}^{2n} \phi_{i}\left( 1- \langle \boldsymbol \phi , \mathbf d_i \rangle \frac{\sigma^2}{2}\right)\allowdisplaybreaks\\
& = \left( 1- \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle \frac{\sigma^2}{2}\right) \left( \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle -\frac{\sigma^2}{2} \sum_{i=1}^{2n} \langle \boldsymbol \phi , \mathbf d_i \rangle \phi_i \right)\allowdisplaybreaks\\
& = \left( 1- \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle \frac{\sigma^2}{2}\right) \left( \langle \boldsymbol \phi , \mathbf 1_{2n} \rangle -\frac{\sigma^2}{2} \langle \boldsymbol \theta, \boldsymbol \theta \rangle\right).
\end{align*}
Combining the above with Eq. \eqref{eq: switch_lhs} gives us that Eq. \eqref{eq: switch_provefunctional} is satisfied. This shows that $L(\boldsymbol \phi) $ and $\mathbf M (\boldsymbol \phi)$ given in Eq. \eqref{eq: switch_laplacesolution} solve the functional equation. Now, under the assumption that Conjecture \ref{lem: switch_uniqueness} holds true, we get that Eq. \eqref{eq: switch_laplacesolution} gives the unique solution to the functional equation given by Eq. \eqref{eq: switch_functional_eq}. This completes the proof.
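As with the three-queue system, the identity in Eq. \eqref{eq: switch_provefunctional} can also be verified symbolically for a small $n$; a minimal \texttt{sympy} sketch for $n=2$ (again a sanity check, not a substitute for the algebra above):
\begin{verbatim}
import sympy as sp

n = 2
phi = sp.Matrix(sp.symbols('phi1:5'))  # phi1..phi4
s = sp.symbols('s')                    # s stands for sigma^2
B = sp.zeros(n * n, 2 * n)
for i in range(n):
    for j in range(n):
        B[i + n * j, i] = B[i + n * j, n + j] = 1
theta = B * phi
D = B.T * B
one = sum(phi)                         # <phi, 1_{2n}>
lhs = (-one + s / 2 * (theta.T * theta)[0]) * (1 - one * s / 2)
rhs = sum(theta[i + n * j]
          * (1 - (D[:, i].T * phi)[0] * s / 2)
          * (1 - (D[:, n + j].T * phi)[0] * s / 2)
          for i in range(n) for j in range(n))
print(sp.expand(lhs + rhs / n))        # 0, i.e., Eq. (switch_provefunctional) holds
\end{verbatim}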
\end{proof}
\section{Proofs for $\mathcal{N}$-system }
\label{app: n_sys}
\subsection{Proof of state space collapse for MaxWeight}
\label{app: n_sys_ssc}
\begin{proof}
The proof follows by using $V(\mathbf{q}) = (q_2-q_1)\mathbf{1}_{\{q_2>q_1\}}$ as the Lyapunov test function and applying \cite[Lemma 10]{Weina_bandwidth}. Suppose $V(\mathbf{q}) \geq 2$, then $q_2 \geq q_1 +2$ and so there are three possible transitions for the queue length vector:
\begin{itemize}
\item First transition is due to an arrival to the first queue which increases $q_1$ by $1$. The rate of this transition is $\lambda_1$.
\item Second transition is due to an arrival to the second queue which increases $q_2$ by $1$ and the transition rate is $\lambda_2$.
\item Third transition is because of service to the second queue. Note that as $q_2 >q_1$, only the second queue is served, and it is served by both the servers. In this case, $q_2$ decreases by $1$ and the rate of this transition is $\mu_1 + \mu_2$.
\end{itemize}
Also, note that after any of these three transitions, the second queue length remains greater than the first. Using the above three cases, the drift of the Lyapunov function $V(\mathbf{q})$ when $V(\mathbf{q}) \geq 2$ is given by,
\begin{align*}
\Delta V(\mathbf{q}) = \lambda_1 \times (-1) + \lambda_2 \times 1 + (\mu_1 + \mu_2) \times (-1) \stackrel{(a)}{=} -2\mu_1 + \epsilon (2\mu_1 - \gamma \mu_1 -\gamma \mu_2) \stackrel{(b)}{\leq} -\mu_1
\end{align*}
where (a) follows by using Eq. \eqref{eq: n_sys_arrival_vector}, and (b) follows whenever $2\mu_1 - \gamma \mu_1 -\gamma \mu_2 \leq 0$, or $2\mu_1 - \gamma \mu_1 -\gamma \mu_2> 0$ and $\epsilon \leq \frac{\mu_1}{2\mu_1 - \gamma \mu_1 -\gamma \mu_2}$. This fulfils the first requirement in \cite[Lemma 10]{Weina_bandwidth}. For the second condition, note that any transition changes the length of exactly one queue by exactly $1$ (either an increase or a decrease), so for any transition, $V(\mathbf{q})$ can change by at most $1$. Finally, the third condition in \cite[Lemma 10]{Weina_bandwidth} is satisfied because all the transition rates are finite. Then, from the result of \cite[Lemma 10]{Weina_bandwidth}, we get that MaxWeight achieves state space collapse according to Definition \ref{def: n_sys_ssc}.
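The arithmetic in step (a) can also be checked mechanically. A minimal \texttt{sympy} sketch (a sanity check only), using the arrival rates $\lambda_1 = \mu_1(1-\epsilon)$ and $\lambda_2 = (1-\gamma\epsilon)\mu_2 + \epsilon\mu_1(1-\gamma)$ from Eq. \eqref{eq: n_sys_arrival_vector}:
\begin{verbatim}
import sympy as sp

mu1, mu2, g, eps = sp.symbols('mu1 mu2 gamma epsilon', positive=True)
lam1 = mu1 * (1 - eps)
lam2 = (1 - g * eps) * mu2 + eps * mu1 * (1 - g)
drift = -lam1 + lam2 - (mu1 + mu2)
target = -2 * mu1 + eps * (2 * mu1 - g * mu1 - g * mu2)
print(sp.expand(drift - target))   # 0, confirming step (a)
\end{verbatim}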
\end{proof}
\subsection{Required Lemma}
\begin{lemma}
\label{lem: nsys_mgf_equivalence}
Consider the $\mathcal{N}$-system as defined in Section \ref{sec: n_sys_model} operating under a scheduling policy that achieves state space collapse according to Definition \ref{def: n_sys_ssc}. Then, for any $\boldsymbol \phi \in \boldsymbol \Phi$, where $\boldsymbol \Phi = \{\boldsymbol \phi \in\mathbb{C}^2: Re(\phi_1) \leq 0, Re(\phi_1 + \phi_2 )\leq 0\}$,
\begin{enumerate}
\item \begin{align*}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}]\big| < \infty, && \lim_{\epsilon \rightarrow 0} \big| \mathbb{E}[e^{\epsilon(\phi_1+ \phi_2)q_2} |q_1 \leq q_2] \big|<\infty, &&\lim_{\epsilon \rightarrow 0}\big| \mathbb{E}[e^{\epsilon \phi_1 q_1}|q_2 =0] \big| < \infty.
\end{align*}
\item \begin{equation*}
\lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon(\phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 )}],
\end{equation*}
\end{enumerate}
where all the expectations are taken under the steady state distribution.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: nsys_mgf_equivalence}]
As $q_{\perp} = (q_2-q_1)\mathbf{1}_{\{q_1 < q_2\}}$, we have
\begin{align*}
\phi_{1} q_1 + \phi_2 q_2 &= \phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 + \phi_1 (q_1-q_2)\mathbf{1}_{\{q_1 < q_2\}}\\
& = \phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 - \phi_1 q_{\perp},
\end{align*}
where the last equality follows from the definition of $q_{\perp}$. This gives us that,
\begin{align*}
\lim_{\epsilon\rightarrow 0} \big| \mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}] \big| &\leq \lim_{\epsilon\rightarrow 0} \mathbb{E}[|e^{\epsilon(\phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 - \phi_1 q_{\perp})}|] \\
&\stackrel{(a)}{\leq} \lim_{\epsilon\rightarrow 0} \mathbb{E}[|e^{-\epsilon\phi_1 q_{\perp}}|]\\
&\stackrel{(b)}{<} \infty,
\end{align*}
where (a) holds as $Re(\phi_1)\leq 0$ and $Re(\phi_1 + \phi_2) \leq 0$, and (b) holds by using Definition \ref{def: n_sys_ssc}. Similarly, as $Re(\phi_1)\leq 0$ and $Re(\phi_1 + \phi_2) \leq 0$, we have $|e^{\epsilon(\phi_1+ \phi_2)q_2}|\leq 1$ and $|e^{\epsilon \phi_1 q_1}| \leq 1$, and so,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \big| \mathbb{E}[e^{\epsilon(\phi_1+ \phi_2)q_2} |q_1 \leq q_2] \big|<\infty, &&\lim_{\epsilon \rightarrow 0}\big| \mathbb{E}[e^{\epsilon \phi_1 q_1}|q_2 =0] \big| < \infty.
\end{align*}
This completes the proof of part 1. Now, for part 2,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \big|\mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}] -& \mathbb{E}[e^{\epsilon(\phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 )}]\big| \allowdisplaybreaks\\
&= \lim_{\epsilon \rightarrow 0} \big| \mathbb{E}[e^{\epsilon(\phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 )} \big(e^{-\epsilon\phi_1 q_{\perp}} -1 \big)]\big|\allowdisplaybreaks\\
& \stackrel{(a)}{\leq}\lim_{\epsilon \rightarrow 0} \mathbb{E}[\big| e^{-\epsilon\phi_1 q_{\perp}} -1 \big| ]\allowdisplaybreaks\\
& \stackrel{(b)}{\leq}\lim_{\epsilon \rightarrow 0} \mathbb{E}[\epsilon |\phi_1|| q_{\perp}| e^{\epsilon|\phi_1| |q_{\perp}|} ]\allowdisplaybreaks\\
& \stackrel{(c)}{\leq}\lim_{\epsilon \rightarrow 0} \epsilon |\phi_1| \mathbb{E}[| q_{\perp}|^2]^{\frac{1}{2}} \mathbb{E}[e^{2\epsilon|\phi_1| |q_{\perp}|} ]^{\frac{1}{2}},
\end{align*}
where (a) holds as $Re(\phi_1)\leq 0$ and $Re(\phi_1 + \phi_2) \leq 0$; (b) holds because $|e^x-1| \leq |x|e^{|x|}$ for any $x\in \mathbb{C}$; and (c) holds by using the Cauchy-Schwarz inequality. Now, as the scheduling policy achieves state space collapse according to Definition \ref{def: n_sys_ssc}, $\lim_{\epsilon \rightarrow 0} \mathbb{E}[| q_{\perp}|^2]^{\frac{1}{2}} \mathbb{E}[e^{2\epsilon|\phi_1| |q_{\perp}|} ]^{\frac{1}{2}} < \infty$. This gives us that,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \big|\mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}] - \mathbb{E}[e^{\epsilon(\phi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + (\phi_1 + \phi_2)q_2 )}]\big| = 0.
\end{align*}
This completes the proof of part 2.
\end{proof}
\subsection{Proof of Theorem \ref{theo: n_sys_mgf_eq}}
\label{app: n_sys_mgf_eq}
\begin{proof}[Proof of Theorem \ref{theo: n_sys_mgf_eq}]
In order to prove Eq. \eqref{eq: n_sys_mgf_eq} in Theorem \ref{theo: n_sys_mgf_eq}, we first need that the Laplace transform of the heavy traffic distribution exists, i.e., that the absolute values of $L(\boldsymbol \phi)$, $M_1(\boldsymbol \phi)$ and $M_2(\boldsymbol \phi)$ are finite.
Using Lemma \ref{lem: nsys_mgf_equivalence}, we get that, $|L(\boldsymbol \phi)| < \infty$ for all $\boldsymbol \phi \in \Phi$. Similarly, $|M_1(\boldsymbol \phi)| < \infty$ and $|M_2(\boldsymbol \phi)| < \infty$ for all $\boldsymbol \phi \in \Phi$.
Now, consider the exponential Lyapunov function
\begin{equation*}
V(q_1,q_2) = e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}.
\end{equation*}
The drift of the function $V(q_1,q_2) $ is given by,
\begin{align}
\label{eq: n_sys_thm1_drift}
\Delta V(q_1,q_2) = e^{\epsilon(\phi_1 q_1 + \phi_2 q_2)}& \big[ \lambda_1 (e^{\epsilon\phi_1 }-1) + \lambda_2 (e^{\epsilon\phi_2 }-1)+ \mu_1(e^{-\epsilon\phi_1 }-1) \mathbf{1}_{\{q_1 > q_2,q_1>0\}} \nonumber\\
& \ +\mu_2(e^{-\epsilon\phi_2 }-1) \mathbf{1}_{\{q_1 > q_2,q_2>0\}}
+ (\mu_2+\mu_1)(e^{-\epsilon\phi_2 }-1) \mathbf{1}_{\{q_1 \leq q_2,q_2 >0\}} \big].
\end{align}
In the above equation, the first two terms correspond to the drift due to arrivals in each of the queues; the third and the fourth terms correspond to the service for each queue when $q_1>q_2$; and the last term corresponds to the service when $q_1 \leq q_2$, in which case only the second queue is served. We have also used the fact that a queue can be served only when its queue length is greater than zero. The expected drift of a well-defined Lyapunov function is zero in steady state. So, $\mathbb{E}[\Delta V(q_1,q_2)] = 0$ for all $\boldsymbol \phi \in \Phi$, where the expectation is taken under the steady state distribution.
By putting $\phi_2 = 0$ in Eq. \eqref{eq: n_sys_thm1_drift}, we get that
\begin{align*}
\Delta V(q_1,q_2) = e^{\epsilon\phi_1 q_1} \big[ \lambda_1 (e^{\epsilon\phi_1 }-1) + \mu_1(e^{-\epsilon\phi_1 }-1) \mathbf{1}_{\{q_1 > q_2\}} \big].
\end{align*}
We can now use $\mathbb{E}[\Delta V(q_1,q_2)] = 0$ in steady state to get
\begin{align*}
\lambda_1 \mathbb{E}[ e^{\epsilon\phi_1 q_1} ] &= \mu_1 e^{-\epsilon\phi_1} \mathbb{E}[ e^{\epsilon\phi_1 q_1}\mathbf{1}_{\{q_1 > q_2\}} ].
\end{align*}
Now, by putting $\phi_1 =0$ in the above equation, we get that
\begin{align}
\label{eq: n_sys_prob_surface1}
\mathbb{P}(q_1 \leq q_2) = 1-\frac{\lambda_1}{ \mu_1} = \epsilon.
\end{align}
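Since Eq. \eqref{eq: n_sys_prob_surface1} holds exactly in steady state, it also lends itself to a quick simulation check. A minimal Gillespie-style sketch (a rough numerical illustration only; the service rules follow the indicators in Eq. \eqref{eq: n_sys_thm1_drift}, and the arrival rates come from Eq. \eqref{eq: n_sys_arrival_vector}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, gamma, eps = 1.0, 0.8, 1.0, 0.05
lam1 = mu1 * (1 - eps)
lam2 = (1 - gamma * eps) * mu2 + eps * mu1 * (1 - gamma)
q1 = q2 = 0
t_le = t_tot = 0.0
for _ in range(500_000):
    if q1 > q2:   # each server works on its own queue
        r = np.array([lam1, lam2, mu1 * (q1 > 0), mu2 * (q2 > 0)], float)
    else:         # both servers work on the second queue
        r = np.array([lam1, lam2, 0.0, (mu1 + mu2) * (q2 > 0)], float)
    dt = rng.exponential(1 / r.sum())
    t_tot += dt
    t_le += dt * (q1 <= q2)
    k = rng.choice(4, p=r / r.sum())
    q1 += (k == 0) - (k == 2)
    q2 += (k == 1) - (k == 3)
print(t_le / t_tot, eps)   # long-run fraction of time with q1 <= q2, vs epsilon
\end{verbatim}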
After some manipulation, we can rewrite the Eq. \eqref{eq: n_sys_thm1_drift} as
\begin{align*}
\Delta V(q_1,q_2) = e^{\epsilon(\phi_1 q_1 + \phi_2 q_2)}& \big[ \lambda_1 (e^{\epsilon\phi_1 }-1) + \lambda_2 (e^{\epsilon\phi_2 }-1)+ \mu_1(e^{-\epsilon\phi_1 }-1) +\mu_2(e^{-\epsilon\phi_2 }-1)\\
& \ \ + \mu_1(e^{-\epsilon\phi_2 }-e^{-\epsilon\phi_1 })\mathbf{1}_{\{q_1 \leq q_2\}}\big]\\
& \ \ - \mu_2e^{\epsilon\phi_1 q_1 }(e^{-\epsilon\phi_2 }-1) \mathbf{1}_{\{q_2 =0\}} - \mu_1(e^{-\epsilon\phi_2 }-1) \mathbf{1}_{\{q_1=q_2 =0\}},
\end{align*}
where we have used that $e^{\epsilon(\phi_1 q_1+\phi_2 q_2) }\mathbf{1}_{\{q_2 =0\}}= e^{\epsilon\phi_1 q_1 }\mathbf{1}_{\{q_2 =0\}}$ and $e^{\epsilon(\phi_1 q_1+\phi_2 q_2) }\mathbf{1}_{\{q_1=q_2 =0\}}= \mathbf{1}_{\{q_1=q_2 =0\}}$.
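For completeness, the indicator identities behind this manipulation (implicit in the original derivation) are
\begin{align*}
\mathbf{1}_{\{q_1>q_2,\, q_1>0\}} &= 1-\mathbf{1}_{\{q_1\leq q_2\}},\\
\mathbf{1}_{\{q_1>q_2,\, q_2>0\}} &= 1-\mathbf{1}_{\{q_2=0\}}-\mathbf{1}_{\{q_1\leq q_2\}}+\mathbf{1}_{\{q_1=q_2=0\}},\\
\mathbf{1}_{\{q_1\leq q_2,\, q_2>0\}} &= \mathbf{1}_{\{q_1\leq q_2\}}-\mathbf{1}_{\{q_1=q_2=0\}},
\end{align*}
where the first identity uses that $q_1 > q_2 \geq 0$ already forces $q_1 > 0$, and the last two use that $q_1 \leq q_2 = 0$ is the same event as $q_1 = q_2 = 0$.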
By taking expectation,
\begin{align}
\label{eq: n_sys_funcprove_termall}
\mathbb{E}[\Delta V(q_1,q_2)] =\big[& \lambda_1 (e^{\epsilon\phi_1 }-1) + \lambda_2 (e^{\epsilon\phi_2 }-1)+ \mu_1(e^{-\epsilon\phi_1 }-1) +\mu_2(e^{-\epsilon\phi_2 }-1)\big] \mathbb{E}[e^{\epsilon(\phi_1 q_1 + \phi_2 q_2)}]\nonumber \\
& +\mu_1(e^{-\epsilon\phi_2 }-e^{-\epsilon\phi_1 }) \mathbb{P}(q_1\leq q_2) \mathbb{E}[e^{\epsilon(\phi_1 q_1 + \phi_2 q_2)}| q_1\leq q_2]\nonumber\\
& - \mu_2 (e^{-\epsilon\phi_2 }-1)\mathbb{P}(q_2=0)\mathbb{E}[e^{\epsilon\phi_1 q_1 }| q_2 = 0] - \mu_1(e^{-\epsilon\phi_2 }-1) \mathbb{P}(q_1=q_2 =0).
\end{align}
Next, by putting $\phi_1 = 0$ in the above equation,
\begin{equation*}
(\mu_2 - \lambda_2e^{\epsilon \phi_2})\mathbb{E}[e^{\epsilon\phi_2 q_2}] + \mu_1 \mathbb{P}(q_1\leq q_2) \mathbb{E}[e^{\epsilon\phi_2 q_2}| q_1\leq q_2] - \mu_2 \mathbb{P}(q_2=0) - \mu_1 \mathbb{P}(q_1=q_2 =0) = 0.
\end{equation*}
And now by putting $\phi_2 =0$,
\begin{align*}
& \mu_2 - \lambda_2 +\mu_1 \mathbb{P}(q_1\leq q_2) - \mu_2 \mathbb{P}(q_2=0) - \mu_1 \mathbb{P}(q_1=q_2 =0) = 0.
\end{align*}
We can simplify the above equation by putting $\mathbb{P}(q_1\leq q_2) = 1- \frac{\lambda_1}{\mu_1}$ from Eq. \eqref{eq: n_sys_prob_surface1}, to get
\begin{align}
\label{eq: n_sys_prob_surface2}
\mu_2 \mathbb{P}(q_2=0) + \mu_1 \mathbb{P}(q_1=q_2 =0) = \mu_1 + \mu_2 - \lambda_1- \lambda_2 = \gamma \epsilon(\mu_1 + \mu_2).
\end{align}
Now, we can carry out the heavy traffic approximation, where we use the Taylor expansion of the complex exponential function up to the second order. For the first term,
\begin{align}
\label{eq: n_sys_funcprove_term1}
& \lambda_1 (e^{\epsilon\phi_1 }-1) + \lambda_2 (e^{\epsilon\phi_2 }-1)+ \mu_1(e^{-\epsilon\phi_1 }-1) +\mu_2(e^{-\epsilon\phi_2 }-1)\nonumber\\
& \stackrel{(a)}{=} \mu_1(1-\epsilon) \big(\epsilon\phi_1+ \epsilon^2 \frac{\phi_1^2}{2}\big) + \big((1-\gamma\epsilon) \mu_2 + \epsilon \mu_1 (1-\gamma)\big)\big(\epsilon\phi_2+ \epsilon^2 \frac{\phi_2^2}{2}\big) \nonumber\\
& \quad \quad + \mu_1 \big(-\epsilon\phi_1+ \epsilon^2 \frac{\phi_1^2}{2}\big) + \mu_2 \big(-\epsilon\phi_2+ \epsilon^2 \frac{\phi_2^2}{2}\big) + o(\epsilon^2)\nonumber\\
& = \epsilon^2 \big[\mu_1 (-\phi_1+\phi_1^2) + \mu_2 (-\gamma\phi_2+\phi_2^2)+ \phi_2\mu_1(1-\gamma) \big] +o(\epsilon^2),
\end{align}
where (a) follows by using the values of $\lambda_1$ and $\lambda_2$ given in Eq. \eqref{eq: n_sys_arrival_vector}. For the second term,
\begin{align*}
\Big| \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} \big( e^{\epsilon (\phi_1 q_1 +\phi_2 q_2)} - e^{\epsilon (\phi_1 + \phi_2)q_2}\big) \Big] \Big| &= \Big| \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} e^{\epsilon (\phi_1 + \phi_2)q_2} \big( e^{-\epsilon \phi_1(q_2-q_1)} - 1 \big)\Big] \Big| \allowdisplaybreaks\\
&\stackrel{(a)}{\leq} \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} \Big| e^{-\epsilon \phi_1(q_2-q_1)} - 1 \Big|\Big] \allowdisplaybreaks\\
& \stackrel{(b)}{=} \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} \big| e^{-\epsilon \phi_1(q_2-q_1)\mathbf{1}_{\{q_1< q_2\}} } - 1 \big|\Big] \allowdisplaybreaks\\
& \stackrel{(c)}{\leq}\mathbb{E} [ \mathbf{1}_{\{q_1\leq q_2\}}]^{\frac{1}{2}} \mathbb{E} \Big[ \big| e^{-\epsilon\phi_1 q_{\perp}} - 1 \big|^2\Big]^{\frac{1}{2}}\allowdisplaybreaks\\
& \stackrel{(d)}{\leq} \sqrt{\epsilon} \mathbb{E} \Big[ \big| e^{\epsilon |\phi_1 |q_{\perp}} - 1 \big|^2\Big]^{\frac{1}{2}}\allowdisplaybreaks\\
&\stackrel{(e)}{\leq} \epsilon^{\frac{3}{2}}|\phi_1 | \mathbb{E} \Big[ q_{\perp}^2 e^{2\epsilon |\phi_1|q_{\perp}} \Big]^{\frac{1}{2}}\allowdisplaybreaks\\
&\stackrel{(f)}{\leq} \epsilon^{\frac{3}{2}}|\phi_1| \mathbb{E} [ q_{\perp}^4]^{\frac{1}{4}}\mathbb{E} \Big[ e^{4\epsilon |\phi_1 |q_{\perp}} \Big]^{\frac{1}{4}},
\end{align*}
where (a) follows as $Re(\phi_1 +\phi_2)\leq 0$ and so $|e^{\epsilon (\phi_1 + \phi_2)q_2}| \leq 1$; (b) follows as the term inside the expectation is zero when $q_1 = q_2$; (c) uses the notation $q_\perp = (q_2-q_1)\mathbf{1}_{\{q_1< q_2\}}$ and the Cauchy-Schwarz inequality; (d) follows by Eq. \eqref{eq: n_sys_prob_surface1}; (e) uses the fact that $|e^x -1|\leq |x|e^{|x|}$ for all $x\in \mathbb{C}$; and finally (f) follows by the Cauchy-Schwarz inequality.
Now, from Definition \ref{def: n_sys_ssc}, we know that $\mathbb{E} [ q_{\perp}^4] < \infty$ and $\mathbb{E} \Big[ e^{4\epsilon |\phi_1|q_{\perp}} \Big] < \infty$. This gives us that
\begin{equation*}
\mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} e^{\epsilon (\phi_1 q_1 +\phi_2 q_2)} \Big] = \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} e^{\epsilon (\phi_1 + \phi_2)q_2} \Big] +o(\epsilon).
\end{equation*}
And so, by using $(e^{-\epsilon\phi_2} -e^{-\epsilon\phi_1}) = -\epsilon(\phi_2 - \phi_1) + o(\epsilon)$ and $\mathbb{P}(q_1\leq q_2 ) = \epsilon$ from Eq. \eqref{eq: n_sys_prob_surface1},
\begin{align}
\label{eq: n_sys_funcprove_term2}
\mu_1 (e^{-\epsilon\phi_2} -e^{-\epsilon\phi_1}) \mathbb{E} \Big[ \mathbf{1}_{\{q_1\leq q_2\}} e^{\epsilon (\phi_1 q_1 +\phi_2 q_2)} \Big] &= \epsilon^2 \mu_1 (\phi_1-\phi_2) \mathbb{E} \Big[ e^{\epsilon (\phi_1 + \phi_2)q_2} |q_1\leq q_2 \Big] +o(\epsilon^2).
\end{align}
Finally,
\begin{align}
\label{eq: n_sys_funcprove_term3}
\mu_2 & (e^{-\epsilon\phi_2 }-1)\mathbb{P}(q_2=0)\mathbb{E}[e^{\epsilon\phi_1 q_1 }| q_2 = 0] + \mu_1(e^{-\epsilon\phi_2 }-1) \mathbb{P}(q_1=q_2 =0)\nonumber \\
& = -\epsilon^2 \bigg[ \mu_2 \phi_2 \frac{\mathbb{P}(q_2=0)}{\epsilon} \mathbb{E}[e^{\epsilon\phi_1 q_1 }| q_2 = 0] + \mu_1 \phi_2 \frac{\mathbb{P}(q_1=q_2=0)}{\epsilon} \bigg] + o(\epsilon^2).
\end{align}
Now, substituting Eq. \eqref{eq: n_sys_funcprove_term1}, \eqref{eq: n_sys_funcprove_term2} and \eqref{eq: n_sys_funcprove_term3} into Eq. \eqref{eq: n_sys_funcprove_termall} and taking $\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E}[\Delta V(q_1,q_2)] = 0$, where the expectation is taken under the steady state distribution, we get that for any $\boldsymbol \phi \in \Phi$,
\begin{equation}
\label{eq: nsys_mgf_withp2}
L(\boldsymbol \phi) \big[ \mu_2 (-\gamma\phi_2 +\phi_2^2)+\phi_2\mu_1(1-\gamma)+ \mu_1 (-\phi_1+\phi_1^2)\big] +\mu_1(\phi_1-\phi_2) M_1(\boldsymbol \phi) +\mu_2\phi_2 p_1 M_2(\boldsymbol \phi) + \mu_1 \phi_2 p_2 =0,
\end{equation}
where $L(\boldsymbol \phi), M_1(\boldsymbol \phi) $ and $M_2(\boldsymbol \phi) $ are as defined in Theorem \ref{theo: n_sys_mgf_eq} and
\begin{align*}
p_1 = \lim_{\epsilon\rightarrow 0} \frac{\mathbb{P}(q_2=0)}{\epsilon}, && p_2 = \lim_{\epsilon\rightarrow 0} \frac{\mathbb{P}( q_1=q_2=0)}{\epsilon}.
\end{align*}
Now, we claim that $p_2 =0$. This can be seen as follows. By putting $\phi_1 = 0$ in Eq. \eqref{eq: nsys_mgf_withp2},
\begin{equation*}
L(\boldsymbol \phi) \big[ \mu_2 (-\gamma\phi_2 +\phi_2^2)+\phi_2\mu_1(1-\gamma)\big] -\mu_1\phi_2 M_1(\boldsymbol \phi) +\mu_2\phi_2 p_1 M_2(\boldsymbol \phi) + \mu_1 \phi_2 p_2 =0.
\end{equation*}
As the above equation is true for any $\phi_2$ such that $Re(\phi_2)\leq 0$, dividing throughout by $\phi_2$ (for $\phi_2 \neq 0$) gives,
\begin{equation*}
L(\boldsymbol \phi) \big[ \mu_2 (-\gamma +\phi_2)+\mu_1(1-\gamma)\big] -\mu_1 M_1(\boldsymbol \phi) +\mu_2 p_1 M_2(\boldsymbol \phi) + \mu_1 p_2 =0.
\end{equation*}
Now, by the properties of the exponential function, as $Re(\phi_2) \rightarrow -\infty$, we have $L(\boldsymbol\phi) \rightarrow 0$, $M_1(\boldsymbol \phi)\rightarrow 0$, $M_2(\boldsymbol \phi)\rightarrow 0$ and finally $\phi_2L(\boldsymbol\phi) \rightarrow 0$. Using this in the previous equation, we get $p_2 =0$. Substituting this in Eq. \eqref{eq: n_sys_prob_surface2} (after dividing by $\epsilon$ and taking the limit) gives us that $p_1 = \frac{\gamma(\mu_1+\mu_2)}{\mu_2}$. Substituting the value of $p_1$ back in Eq. \eqref{eq: nsys_mgf_withp2} completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem: n_sys_uniqueness}}
\label{app: n_sys_uniqueness}
\begin{proof}[Proof of Lemma \ref{lem: n_sys_uniqueness}]
In order to prove Lemma \ref{lem: n_sys_uniqueness}, we are going to use Lemma \ref{lem: functional_uniqueness}. We already know that the heavy traffic distribution of the scaled queue length vector, i.e., the distribution of $\lim_{\epsilon \rightarrow 0} \epsilon \mathbf{q}$, exists. Next, we do a linear transformation of the variable $\boldsymbol \phi$ so that the Laplace transforms $M_1(\cdot)$ and $M_2(\cdot)$ each depend on only one variable. We pick $\psi_1 = \phi_1$ and $\psi_2 = \phi_1 +\phi_2$, i.e., $\boldsymbol \psi = (\psi_1,\psi_2) = (\phi_1,\phi_1 +\phi_2)$, which implies that $\boldsymbol \phi = (\psi_1,\psi_2-\psi_1) $. Then, with a slight abuse of notation, $M_1(\boldsymbol \phi) = M_1(\psi_2)$ and $M_2(\boldsymbol \phi) = M_2(\psi_1)$.
The functional equation can be rewritten as,
\begin{align*}
\big((\mu_1+\mu_2) \psi_1^2 +\mu_2 \psi_2^2 -2\mu_2 \psi_1\psi_2 +& \psi_1(\gamma(\mu_1+\mu_2) - 2\mu_1) + \psi_2(\mu_1 - \gamma(\mu_1+\mu_2)) \big) L(\boldsymbol{\psi})\\
&+ \mu_1(2\psi_1 -\psi_2) M_1(\psi_2) +\gamma(\mu_1+\mu_2)(\psi_2 - \psi_1)M_2(\psi_1) = 0.
\end{align*}
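As a quick check of this substitution (spelled out here; it is not part of the original argument), the quadratic terms transform as
\begin{equation*}
\mu_1\phi_1^2 + \mu_2\phi_2^2 = \mu_1\psi_1^2 + \mu_2(\psi_2-\psi_1)^2 = (\mu_1+\mu_2)\psi_1^2 +\mu_2 \psi_2^2 -2\mu_2 \psi_1\psi_2,
\end{equation*}
which matches the coefficient of $L(\boldsymbol{\psi})$ above; the linear terms and the coefficients of $M_1$ and $M_2$ follow in the same way.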
As the functional equation in Eq. \eqref{eq: n_sys_mgf_eq} holds for any $\boldsymbol \phi \in \Phi$, the rewritten functional equation above holds for any $\boldsymbol \psi$ such that $Re(\boldsymbol \psi) \leq \mathbf{0}_2$.
Also, as shown in the proof of Theorem \ref{theo: n_sys_mgf_eq},
\begin{align*}
\phi_{1} q_1 + \phi_2 q_2 = \psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2 - \psi_1 q_{\perp}, \qquad \text{where } q_{\perp} = (q_2-q_1)\mathbf{1}_{\{q_1< q_2\}}.
\end{align*}
By using the state space collapse,
\begin{align*}
L(\boldsymbol \phi) = \lim_{\epsilon\rightarrow 0} \mathbb{E}\big[ e^{\epsilon (\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2)}\big].
\end{align*}
This can be seen as follows:
\begin{align*}
\lim_{\epsilon \rightarrow 0} \Big|\mathbb{E}[ & e^{\epsilon( \phi_{1} q_1 + \phi_2 q_2)}] - \mathbb{E}\big[ e^{\epsilon (\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2)}\big]\Big|\\
& \leq\lim_{\epsilon \rightarrow 0} \mathbb{E}\big[ \big| e^{\epsilon (\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2)}\big| \big|e^{-\epsilon\psi_1 q_\perp}- 1\big|\big]\\
&\stackrel{(a)}{\leq} \lim_{\epsilon \rightarrow 0} \mathbb{E}\big[ \big|e^{-\epsilon\psi_1 q_\perp}- 1\big|\big]\\
&\stackrel{(b)}{\leq}\lim_{\epsilon \rightarrow 0} \epsilon |\psi_1| \mathbb{E}\big[q_\perp^2\big]^{\frac{1}{2}} \mathbb{E}\big[e^{2\epsilon|\psi_1| q_\perp}\big]^{\frac{1}{2}}\\
& \stackrel{(c)}{=} 0,
\end{align*}
where (a) holds as $\boldsymbol \phi\in \Phi$ and so $ \big| e^{\epsilon (\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2)}\big| \leq 1$; (b) holds as $|e^x-1|\leq |x|e^{|x|}$ for any $x\in \mathbb{C}$, followed by the Cauchy-Schwarz inequality; and (c) holds by using Definition \ref{def: n_sys_ssc}. Now, $M_1(\boldsymbol \phi)$ is the Laplace transform of $\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2$ under the condition that $(q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} =0$, which is the same as the condition $q_1 \leq q_2$. Similarly, $M_2(\boldsymbol \phi)$ is the Laplace transform under the condition that $q_2 =0$, in which case $\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2$ reduces to $\psi_1 q_1$. Thus, $M_1(\boldsymbol \phi)$ and $M_2(\boldsymbol \phi)$ define the Laplace transforms of the boundary measures of $\psi_1 (q_1-q_2)\mathbf{1}_{\{q_1\geq q_2\}} + \psi_2q_2$. Now, this matches the form we have in Eq. \eqref{eq: functional}. In this case, the reflection matrix $\mathbf R$ and the interior drift $\boldsymbol \alpha$ are
\begin{align*}
\mathbf{R} = \begin{bmatrix}
2\mu_1 & - \gamma (\mu_1 + \mu_2)\\
-\mu_1 & \gamma (\mu_1 + \mu_2)
\end{bmatrix}, && \boldsymbol \alpha = \begin{bmatrix}
\gamma (\mu_1 + \mu_2)-2\mu_1\\
\mu_1 - \gamma (\mu_1 + \mu_2)
\end{bmatrix}.
\end{align*}
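As a sanity check (added here; not part of the original argument), note that $\mathbf{R}$ is nonsingular:
\begin{equation*}
\det \mathbf{R} = 2\mu_1 \gamma(\mu_1+\mu_2) - \mu_1\gamma(\mu_1+\mu_2) = \gamma \mu_1 (\mu_1+\mu_2) > 0,
\end{equation*}
since $\gamma, \mu_1, \mu_2 > 0$.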
Now we can observe that the required conditions in Lemma \ref{lem: functional_uniqueness} are satisfied. This gives us that the functional equation for the $\mathcal{N}$-system has a unique solution.
\end{proof}
\subsection{Proof of Theorem \ref{theo: n_sys_distribution}}
\label{app: n_sys_distribution}
\begin{proof}[Proof of Theorem \ref{theo: n_sys_distribution}]
We know that the Laplace transform of a distribution uniquely defines the distribution.
For the considered distribution, the claim that $\epsilon q_1 \rightarrow \Upsilon_1+ \Upsilon_2$ and $\epsilon q_2 \rightarrow \Upsilon_2$ as $\epsilon \rightarrow 0$ is equivalent to saying that for all $\boldsymbol \phi \in \Phi$,
\begin{equation*}
\lim_{\epsilon\rightarrow 0} \mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}] = \mathbb{E}[e^{\phi_1 \Upsilon_1+(\phi_1+\phi_2) \Upsilon_2}] = \frac{1}{(1-\phi_1)\big(1- \frac{\phi_1+\phi_2}{2\gamma}\big)}.
\end{equation*}
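For completeness (this step is implicit above), the product form follows from the independence of $\Upsilon_1$ and $\Upsilon_2$ together with the Laplace transform of an exponential random variable:
\begin{equation*}
\mathbb{E}[e^{\phi_1 \Upsilon_1}] = \frac{1}{1-\phi_1}, \qquad \mathbb{E}[e^{(\phi_1+\phi_2) \Upsilon_2}] = \frac{1}{1- \frac{\phi_1+\phi_2}{2\gamma}},
\end{equation*}
both of which are finite for $\boldsymbol \phi \in \Phi$, since $\Upsilon_1$ and $\Upsilon_2$ have means $1$ and $\frac{1}{2\gamma}$ respectively.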
Under the condition $\mu_1 = \mu_2$, the functional equation given in Eq. \eqref{eq: n_sys_mgf_eq} becomes,
\begin{align}
L(\boldsymbol \phi) \big[ (1-2\gamma)\phi_2 & +\phi_2^2 -\phi_1+\phi_1^2\big] +(\phi_1-\phi_2) M_1(\boldsymbol \phi) +2\gamma\phi_2 M_2(\boldsymbol \phi) =0.
\end{align}
Now, if we choose,
\begin{align*}
L(\boldsymbol \phi) = \frac{1}{(1-\phi_1)\big(1- \frac{\phi_1+\phi_2}{2\gamma}\big)},&&
M_1(\boldsymbol \phi) = \frac{1}{1- \frac{\phi_1+\phi_2}{2\gamma}}, &&
M_2(\boldsymbol \phi) = \frac{1}{1-\phi_1},
\end{align*}
then,
\begin{align}
L(\boldsymbol \phi) &\big[ (1-2\gamma)\phi_2 +\phi_2^2 -\phi_1+\phi_1^2\big] +(\phi_1-\phi_2) M_1(\boldsymbol \phi) +2\gamma\phi_2 M_2(\boldsymbol \phi) \nonumber\\
&= \frac{(1-2\gamma)\phi_2 +\phi_2^2 -\phi_1+\phi_1^2}{(1-\phi_1)\big(1- \frac{\phi_1+\phi_2}{2\gamma}\big)}+\frac{\phi_1-\phi_2}{1- \frac{\phi_1+\phi_2}{2\gamma}} + \frac{2\gamma\phi_2}{1-\phi_1}\nonumber\allowdisplaybreaks\\
& = \frac{1}{(1-\phi_1)\big(1- \frac{\phi_1+\phi_2}{2\gamma}\big)} \left[ (1-2\gamma)\phi_2 +\phi_2^2 -\phi_1+\phi_1^2 + (1-\phi_1)(\phi_1-\phi_2) + 2\gamma\phi_2\left(1- \frac{\phi_1+\phi_2}{2\gamma}\right)\right]\nonumber\allowdisplaybreaks\\
& = 0.
\end{align}
Thus, for the chosen solution $(L(\boldsymbol \phi) , M_1(\boldsymbol \phi), M_2(\boldsymbol \phi))$, the functional equation in Eq. \eqref{eq: n_sys_mgf_eq} is satisfied. From Lemma \ref{lem: n_sys_uniqueness}, we know that there is a unique solution to Eq. \eqref{eq: n_sys_mgf_eq}. Thus, this gives the distribution of the heavy traffic steady-state queue length.
\end{proof}
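Although not needed for the proof, the cancellation above can also be verified symbolically. The following is a minimal illustrative sketch (ours, not part of the paper) using the sympy library:
\begin{verbatim}
# Symbolic check (illustrative): the guessed transforms satisfy the
# functional equation of the N-system when mu_1 = mu_2.
import sympy as sp

p1, p2, g = sp.symbols('phi1 phi2 gamma')   # phi_1, phi_2 and gamma

L  = 1 / ((1 - p1) * (1 - (p1 + p2) / (2 * g)))
M1 = 1 / (1 - (p1 + p2) / (2 * g))
M2 = 1 / (1 - p1)

expr = L * ((1 - 2 * g) * p2 + p2**2 - p1 + p1**2) \
       + (p1 - p2) * M1 + 2 * g * p2 * M2

assert sp.simplify(expr) == 0               # reduces identically to zero
\end{verbatim}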
\end{appendix}
\section{Discussion \& Future work}
\label{sec: discussion}
In this paper, we looked at queueing systems that do not satisfy the CRP condition and developed a technique using the Laplace transform to specify the heavy traffic distribution. The idea is to use a complex exponential as the test function and set its drift to zero in steady-state. If the system satisfies the CRP condition, then this analysis gives an explicit closed-form expression for the Laplace transform of the heavy traffic distribution. For a non-CRP system, the same analysis gives an implicit equation which is termed the functional equation of the system. For the considered systems, we characterized their heavy traffic distribution using the functional equation and provided the solution to the functional equation under some specific conditions on the system parameters. Before concluding this paper, we present a few small remarks and future directions for this work.
\subsubsection{Uniqueness of functional equation for Input-queued switch} Proving the uniqueness of the solution to the functional equation for the Input-queued switch has turned out to be a difficult task. The ideas presented in \cite{franceschi2019integral} are not enough to prove the statement given by Conjecture \ref{lem: switch_uniqueness}. One idea is to look at extensions of the Carleman boundary value problem and then attempt a technique similar to the one presented in \cite{franceschi2019integral}. Completing the proof of Conjecture \ref{lem: switch_uniqueness} is crucial to extending the results presented in this paper to more general SPNs.
\subsubsection{Heavy traffic distribution under general variance condition} For all three queueing systems considered in this paper, we have shown that the heavy traffic distribution of the steady-state scaled queue length vector can be represented by independent exponential random variables under some specific conditions on the variance of the arrival process. The authors in \cite{franceschi2019integral} use the theory of the Carleman boundary value problem to solve for the heavy traffic distribution when the corresponding functional equation consists of two variables, and provide the solution as a complicated Cauchy integral which is hard to interpret. Finding the heavy traffic distribution under more general variance conditions is still an open problem.
\section{Introduction}
Stochastic Processing Networks (SPNs) \cite{williams_survey_SPN} are ubiquitous in engineering, with applications in manufacturing, telecommunications, transportation, computer systems, etc. A general stochastic processing network consists of jobs or packets that compete for service by limited resources. SPNs are in general modeled using a set of interacting queues. Key performance metrics of interest in such systems are delay and queue length. In general, it is not possible to obtain an exact expression for the steady-state behavior of such queues. Therefore, SPNs are studied in various asymptotic regimes. In this paper, we consider the heavy-traffic regime, where the system is loaded close to its capacity. The queue length in this case usually blows up to infinity at a rate of $1/\epsilon$, where $\epsilon$ is the heavy-traffic parameter denoting the gap between the arrival rate and the system capacity. Therefore, the objective of interest is typically the asymptotic behavior of the queue length, scaled by $\epsilon$.
Heavy-traffic analysis took root in the work of Kingman \cite{kingman1962_brownian}, who showed that the scaled queue length of a single server queue converges to an exponential random variable in heavy traffic. This was done using diffusion limit approximation and studying the limiting reflected Brownian motion process.
Since then, a variety of SPNs have been studied in heavy traffic. A key phenomenon in the heavy-traffic regime is that the multi-dimensional queue-length vector typically collapses to a lower-dimensional subspace. This is called the \textit{State Space Collapse} (SSC), and it simplifies the analysis of an SPN. When the so-called Complete Resource Pooling (CRP) condition is satisfied, various SPNs exhibit an SSC to a one-dimensional subspace, i.e., a line. In this case, the SPN behaves like a single server queue in heavy traffic, and the limiting distribution of scaled queue lengths can be represented using a single exponential random variable. CRP intuitively means that there is a single bottleneck in the system leading to heavy traffic. A popular example of such a system is the load-balancing system under an algorithm such as join-the-shortest-queue \cite{foschini1978basic}.
However, several SPNs that arise in practice do not satisfy the CRP condition, and the SSC occurs to a multi-dimensional subspace. Except in special cases, the classical diffusion limit approach fails in this setting.
Recent work \cite{maguluri2016heavy,Hurtado-gen-switch-SIGMETRICS} developed the drift method and used it to characterize the mean (weighted) sum of the queue lengths in such systems under great generality. However, it was shown in \cite{Hurtado_gen-switch_arxiv} that the drift method is insufficient to even obtain the individual mean queue lengths. Going beyond the mean queue lengths, the key question we focus on in this paper is: \textit{What is the heavy traffic \textbf{joint distribution} of queue lengths in an SPN when the CRP condition is not satisfied?} We answer this question in this paper by studying two systems that have served as representatives of non-CRP systems in the literature.
We do this by developing a novel transform method for non-CRP systems. The transform method was first developed in \cite{hurtado2020transform} with the goal of overcoming the limitations of the drift method. The key idea in the transform method is to work with exponential Lyapunov functions, which enables one to work with Laplace or Fourier transforms. However, \cite{hurtado2020transform} was limited to CRP systems. A major methodological contribution of this paper is to extend the transform method to non-CRP systems, and
use it to study two systems that have served as representatives of non-CRP systems in the literature.
\subsection{Main Contribution}
The main contributions of this paper are the following.
\subsubsection{Transform method for non-CRP systems}
Based on the transform method for CRP systems in \cite{hurtado2020transform}, we use a complex exponential as the test function for non-CRP systems. For CRP systems, when the drift of this test function is set to zero in steady-state, one obtains an exact expression for the Fourier transform of the limiting distribution, as SSC occurs to a line. Based on this limiting transform, one immediately concludes convergence in distribution to an exponential random variable. For non-CRP systems, when the complex exponential is used as the test function, after using the multidimensional SSC, we obtain an \textit{implicit functional equation} in the Laplace transform of the limiting distribution. A major challenge in non-CRP systems is in solving this implicit functional equation. When the SSC is into a two-dimensional subspace, such functional equations have been solved in the literature \cite{franceschi2019integral}, using the Carleman boundary value problem \cite{litvinchuk1970generalized}. We adapt these results to obtain the limiting distribution under two-dimensional SSC.
\subsubsection{Input-queued switch}
The Input-queued switch, which also models data center networks, is a discrete-time queueing system that has served as a representative of non-CRP systems in the literature. Historically, developments on the input-queued switch have served as guide posts for the study of more general SPNs. In Section \ref{sec: switch}, we consider the input-queued switch with $n$ ports and $n^2$ queues operating under a class of scheduling policies that satisfy SSC (e.g., MaxWeight scheduling). We obtain the implicit functional equation for the transform of the limiting queue-length vector. Solving this functional equation is a major challenge. In particular, the key difficulty is in establishing uniqueness of its solution. We identify one solution of this functional equation and \textit{conjecture} that this solution is unique.
Our solution for the heavy-traffic joint distribution of the queue lengths in a switch involves a non-linear transformation of $2n$ iid exponential random variables.
The mean of the sum of the queue lengths under the proposed joint distribution matches the known result in the literature \cite{maguluri2016heavy}.
After that, in Section \ref{sec: 3q}, we consider a special case consisting of just three queues, which we call the Three-queue system. The dynamics of the Three-queue system are similar to those of an Input-queued switch, although it has only three queues. For the Three-queue system, the three-dimensional queue vector collapses to a two-dimensional subspace in heavy traffic. We characterize the heavy-traffic queue-length vector in terms of linear combinations of two independent exponential random variables. For the Three-queue system, we prove that the uniqueness conjecture holds, i.e., the functional equation has a unique solution. This is in contrast with the Input-queued switch, for which the uniqueness of the solution remains a conjecture.
\subsubsection{$\mathcal{N}$-system }
The $\mathcal{N}$-system is a two-server parallel server system operating in continuous time under Poisson arrivals and exponential service times.
It is one of the simplest parallel server systems that preserves much of the complexity of more general models, and so it has been extensively studied, albeit only under CRP. We study it under the MaxWeight policy when the CRP condition is not satisfied. In this case, the two-dimensional state of the system collapses to a two-dimensional cone (and thus, there is no dimensionality reduction). We present the heavy traffic joint distribution of the steady-state scaled queue length vector of the $\mathcal{N}$-system in terms of two independent and exponentially distributed random variables. The details of our results for the $\mathcal{N}$-system are presented in Section \ref{sec: n_sys}.
\subsection{Outline of our method}
The analysis presented in this paper is based on the transform method for heavy-traffic analysis that was first developed in \cite{hurtado2020transform}. It is a variant of the drift method, where a complex exponential is chosen as a Lyapunov test function, and its drift is set to zero in steady-state. This leads to working with the Laplace or Fourier transform of the stationary distribution in the heavy-traffic limit. When the CRP condition is satisfied, one first establishes a one-dimensional SSC. Using this SSC result and setting the drift of the test function to zero, one obtains an exact expression for the transform of the limiting stationary distribution (i.e., the moment-generating function). By identifying the limiting MGF with that of an exponential random variable, one concludes convergence in distribution to the exponential. In this paper, we extend this framework to non-CRP systems.
After first establishing SSC, our framework then proceeds in two steps. The first step is to use a complex exponential as the Lyapunov function and equate its expected drift to zero in steady-state.
After that, we use the second-order approximation of the complex exponential in terms of the heavy traffic parameter to get the functional equation that characterizes the heavy traffic distribution of the scaled queue length vector. Here we make use of the SSC result.
To be more specific, due to the SSC, the number of variables in the functional equation matches the dimension of the subspace into which SSC occurs.
The second step is to solve the derived functional equation to get the Laplace transform of the heavy traffic distribution of the steady-state scaled queue length vector. Solving the functional equation, in general, is not easy. Under some specific conditions on the parameters involved in the functional equation, one can guess the solution and check whether or not it satisfies the functional equation. If it does, then the guess gives the Laplace transform of the heavy traffic distribution. A crucial step in solving the functional equation is to show that it has a unique solution. This ensures that the guessed solution is the only solution of the functional equation. In this paper, we use the results presented in \cite{franceschi2019integral} to show that if the queueing system has a functional equation in two variables (for example, the $\mathcal{N}$-system and the Three-queue system), then there is a unique solution to the functional equation. For a system whose functional equation has more than two variables, we conjecture that the functional equation has a unique solution.
\subsection{Related Work}
Using the diffusion limit to study the behaviour of a queueing system in heavy traffic was first introduced by Kingman \cite{kingman1962_brownian}, who studied a single server queue. Using state space collapse to study heavy traffic optimality was first introduced in \cite{foschini1978basic}, where the authors studied the performance of the Join-the-shortest-queue policy in a multi-server system. This method was successfully applied to several queueing systems that satisfy the CRP condition \cite{harrison1998heavy, harrison1987brownian, Williams_CRP, stolyar2004maxweight, gamarnik2006validity}. The idea has also been used to study some non-CRP systems, e.g., bandwidth sharing networks \cite{Weina_bandwidth, kang2009state, zwart_bandwidth_diffusion, yeyaobandwidth2012}. A major drawback of the diffusion limit method is that it involves a certain interchange of limits which is hard to establish.
The idea behind diffusion limits \cite{gurvich2014diffusion, Williams_state_space, rei_state_space} is to show a \textit{process level convergence} of the scaled queue length vector to a Reflected Brownian Motion (RBM) \cite{harrison_2013_book, morters2010brownian, uhlenbeck1930theory}. Due to state space collapse, the corresponding RBM lives in a lower dimensional subspace compared to the original state space of the queueing system. The next step is to study the stationary distribution of the obtained RBM process.
The stationary distribution of an RBM can be characterized using the Basic Adjoint Relationship (BAR) \cite{dai2011nonnegativity}. Solving the BAR to obtain the stationary distribution is hard in general. But under the skew-symmetry condition \cite{harrison1987multidimensional, williams1987reflected, harrison1987brownian}, one can solve the BAR to show that the stationary distribution of the RBM is given by a product form of exponentials. A few papers \cite{franceschi2019integral, harrison1978diffusion} attempt to solve the BAR even when the skew-symmetry condition is not satisfied, while others \cite{dai2011reflecting, Franceschi2017asymptotic} use the BAR to study the tail behaviour of the stationary distribution of the RBM. Numerical methods to solve the BAR and obtain the stationary distribution are presented in \cite{dai1991steady, dai1992reflected}.
In addition to the diffusion limit method, there are three different \textit{direct methods} to study the heavy traffic behaviour of a queueing system. A major advantage of these direct methods over the diffusion limit method is that they do not require the interchange of limits. The first direct method, called the \textit{drift method}, uses the idea of choosing a test function and equating its expected drift to zero in steady-state. The drift method was introduced in \cite{atilla} to study the moments of weighted queue lengths of a multi-server system. The analysis in \cite{atilla} is an extension of results presented in \cite{kingman}. A common choice of test functions in the drift method is polynomial test functions, which can be used to obtain bounds on the moments of queue lengths. However, for non-CRP systems, the drift method with polynomial test functions is not enough to obtain bounds on the higher moments of queue lengths \cite{Hurtado-gen-switch-SIGMETRICS}. The transform method \cite{hurtado2020transform} is an extension of the drift method in which an exponential test function is used. The second is the \textit{BAR method} \cite{braverman_BAR}, which studies a continuous time system under general arrivals and services by using exponential functions to get a handle on the jumps. The third method is \textit{Stein's method} \cite{gurvich2014diffusion, braverman2017stein}, which focuses on studying the rate of convergence to the diffusion limit. Among the direct methods, the primary focus of the BAR method and Stein's method has been systems that satisfy the CRP condition, while only the drift method has been used to study non-CRP systems. In this paper, we extend the transform method by using a complex exponential as the test function to study two well-known non-CRP systems, i.e., the $\mathcal{N}$-system and the Input-queued switch.
A general model for a parallel server system (including the $\mathcal{N}$-system) is provided in \cite{rubino2009dynamic}. The Brownian control problem for parallel server systems is presented in \cite{harlop_state_space}, where a linear program in terms of arrival rates and mean service times was presented to define the heavy traffic regime for this system and articulate the condition for complete resource pooling. In \cite{belwil_state_space}, the authors studied a Brownian control problem for the $\mathcal{N}$-system under the CRP condition. They proposed a threshold control policy which is asymptotically optimal in the heavy traffic limit. Similar work for the $\mathcal{N}$-system with reneging is presented in \cite{tezcan2010dynamic}, which shows that under certain conditions on the service speed, a $c\mu$-type greedy policy is asymptotically optimal in heavy traffic. The focus of most of the existing literature on the $\mathcal{N}$-system is minimizing the cost under the CRP condition. More recently, the mean delay of parallel server systems was studied under the non-CRP condition \cite{Hurtado-gen-switch-SIGMETRICS} with MaxWeight as the scheduling algorithm. Ours is the first work that studies the heavy traffic distribution of the $\mathcal{N}$-system under the non-CRP condition.
The Input-queued switch is one of the most popular queueing systems that does not satisfy the CRP condition and, as mentioned in \cite{shah2012optimal, williams_survey_SPN}, it serves as a guiding principle for the design and analysis of scheduling algorithms in general SPNs. The papers \cite{mckeown1995scheduling, mckeown96walrand, 665071} study the performance and throughput optimality of different scheduling algorithms (including MaxWeight) for the Input-queued switch. The holding cost for a generalized switch model under the CRP condition with MaxWeight as the scheduling algorithm was studied in \cite{stolyar2004maxweight}. The mean delay of the Input-queued switch operating under MaxWeight scheduling in heavy traffic was studied using the drift method in \cite{maguluri2016heavy}, with some extensions provided in \cite{QUESTA_switch,Hurtado-gen-switch-SIGMETRICS,jhunjhunwala2021low}. The diffusion approximation for an Input-queued switch of size $n$ under MaxWeight scheduling was presented in \cite{kang2012diffusion}, where the authors showed the process level convergence of a $(2n-1)$-dimensional workload process to a semimartingale-RBM.
\subsection{Basic Notations}
We use $\mathbb R$ to denote the set of real numbers and $\mathbb{C}$ to denote the set of complex numbers. Also, we use $\mathbb R_+$ to denote the set of positive real numbers. Similarly, $\mathbb R^d$ and $\mathbb C^d$ denote the set of $d$-dimensional real and complex vectors, respectively. For any complex vector $x\in \mathbb C^d$, $Re(x)$ and $Im(x)$ denote the real part and imaginary part of $x$, respectively. For any vector $\mathbf{x}$, we use $x_i$ to denote the $i^{th}$ element of $\mathbf{x}$. The inner product of two vectors $\mathbf{x}$ and $\mathbf{y}$ is defined as $\langle \mathbf{x},\mathbf{y}\rangle = \mathbf{x}^T \Bar{\mathbf{y}}$, where $\Bar{\mathbf{y}}$ is the complex conjugate of $\mathbf{y}$. If the vectors $\mathbf{x}$ and $\mathbf{y}$ are both real vectors, then $\langle \mathbf{x},\mathbf{y}\rangle$ just represents the dot product of two vectors. The function $|\mathbf{x}| = \sqrt{\langle \mathbf{x} , \mathbf{x}\rangle }$ denotes the absolute value of $\mathbf{x}$. Further, $\|\cdot\|$ denotes the $L_2$-norm of real vector in $\mathbb R^d$. For any set $A$, $\mathbf{1}_A$ denotes the indicator random variable for set $A$. For any positive natural number $d$, $\mathbf{1}_d$ and $\mathbf{0}_d$ denotes the vector of all ones and vector of all zeros of size $d$ respectively, and $\mathbf{I}_{d}$ denotes the identity matrix of size $d$. For any given system, $\mathbb{E}[\cdot]$ denotes the expectation under the steady state distribution of the corresponding system.
For any queueing system, suppose $\mathbf{q}$ is the queue length vector that follows the steady-state distribution; then we call $\epsilon \mathbf{q}$ the steady-state scaled queue length vector, where $\epsilon$ is the heavy traffic parameter that captures the distance of the arrival rate vector from the boundary of the capacity region. Note that the steady-state distribution itself depends on $\epsilon$, although we have omitted it from the notation $\mathbf{q}$ for convenience. We use the term \textit{heavy traffic distribution} to denote the limiting distribution of $\epsilon \mathbf{q}$ as $\epsilon\rightarrow 0$. Under the condition that the Laplace transform of the heavy traffic distribution exists for a given $\boldsymbol \theta$, it is given by $\lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta , \mathbf{q}\rangle}]$.
\section{$\mathcal{N}$-System}
\label{sec: n_sys}
We study a parallel server system consisting of two queues (each representing a different job class) and two servers. This system is commonly known as the $\mathcal{N}$-system. The first server can only serve the first job class, while the second server can serve both job classes. We assume that the service time of any job is server-dependent and not class-dependent, i.e., the mean service time depends only on the server that serves the job.
The $\mathcal{N}$-system has been heavily studied under the CRP condition, where only one of the two queues is loaded close to capacity. This assumption leads to a much simpler mathematical analysis. In our paper, we study the $\mathcal{N}$-system in a regime in which the CRP condition is not satisfied, i.e., both queues are simultaneously loaded close to capacity. The $\mathcal{N}$-system is one of the simplest systems for which the CRP condition is not satisfied.
In Section \ref{sec: n_sys_model}, we provide the basic model for the $\mathcal{N}$-system operating in a continuous-time fashion. Section \ref{sec: nsys_ssc} presents the notion of state space collapse for the $\mathcal{N}$-system. In Section \ref{sec: n_sys_dist}, we provide the functional equation that characterizes the heavy traffic steady-state distribution of the $\mathcal{N}$-system under general service rates, along with the claim that this functional equation has a unique solution, and the heavy traffic distribution itself under the condition that the service rates of the two servers are symmetric.
\subsection{Model for $\mathcal{N}$-system }
\label{sec: n_sys_model}
We consider a continuous time $\mathcal{N}$-system with two queues given by $q_1$ and $q_2$, each denoting a different job class. With slight abuse of notation, the corresponding queue lengths at time $t$ are denoted by $q_1(t)$ and $q_2(t)$. The arrival processes of jobs for $q_1$ and $q_2$ are independent Poisson processes with rates $\lambda_1$ and $\lambda_2$, respectively. There are two servers in the system, denoted by $S_1$ and $S_2$.
The jobs in $q_1$ can be served only by $S_1$, while the jobs in $q_2$ are more flexible and can be processed by both $S_1$ and $S_2$. Thus, server $S_1$ can serve jobs from either of the queues, but it cannot serve both queues simultaneously. As a result, there are two possible service configurations. The first one is that $S_1$ serves $q_1$ and $S_2$ serves $q_2$, the second configuration being that $S_1$ and $S_2$ both serve $q_2$. The processing times of jobs on the servers are exponentially distributed with parameters $\mu_1$ and $\mu_2$ and are independent of the job being served. The model for the $\mathcal{N}$-system is pictorially depicted in Fig. \ref{n_system}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[>=latex]
\draw (0,0) -- ++(2cm,0) -- ++(0,-0.5cm) -- ++(-2cm,0);
\draw (0,1) -- ++(2cm,0) -- ++(0,-0.5cm) -- ++(-2cm,0);
\draw (4,-0.25cm) circle [radius=0.25cm];
\draw (4,0.75cm) circle [radius=0.25cm];
\draw[->] (2.25,-0.25) -- +(35pt,0);
\draw[->] (2.25,-0.15) -- +(35pt,0.8);
\draw[<-] (0,-0.25) -- +(-20pt,0) node[left] {$\lambda_2 $};
\node at (4,-0.25cm) {$\mu_2$};
\draw[->] (2.25,0.75) -- +(35pt,0);
\draw[<-] (0,0.75) -- +(-20pt,0) node[left] {$\lambda_1 $};
\node at (4,0.75cm) {$\mu_1$};
\end{tikzpicture}
\caption{The model for $\mathcal{N}$-system}
\label{n_system}
\end{figure}
A scheduling policy is an underlying rule by which the system chooses the service configuration to use at any time. In this paper, we only consider scheduling algorithms for which the process $\{\mathbf{q}(t)\}_{t=0}^\infty$ forms an irreducible and aperiodic Markov chain, where $\mathbf{q}(t) = (q_1(t),q_2(t))$. The scheduling policy is said to be \textit{stable} if the Markov chain $\{\mathbf{q}(t)\}_{t=0}^\infty$ is positive recurrent. The \textit{capacity region} $\mathcal{C}$ is the set of arrival rate vectors $\boldsymbol \lambda = (\lambda_1,\lambda_2)$ for which there exists a scheduling policy such that the $\mathcal{N}$-system is stable. The capacity region of the $\mathcal{N}$-system is given by
\begin{equation*}
\mathcal{C} = \{\boldsymbol \lambda \in \mathbb{R}^2_+ : \lambda_1 + \lambda_2 < \mu_1 + \mu_2, \lambda_1< \mu_1 \}.
\end{equation*}
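For instance (an illustrative special case, not in the original), with $\mu_1 = \mu_2 = 1$ the capacity region is
\begin{equation*}
\mathcal{C} = \{\boldsymbol \lambda \in \mathbb{R}^2_+ : \lambda_1 + \lambda_2 < 2, \lambda_1< 1 \},
\end{equation*}
and the non-CRP point studied below is the corner $(\mu_1,\mu_2) = (1,1)$ of this region.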
We define the boundary of $\mathcal C$ to be $\mathcal F = \mathcal{F}_1 \cup \mathcal{F}_2 \cup \mathcal{F}_3 $, where
\begin{align*}
\mathcal{F}_1 &= \{\boldsymbol \nu \in \mathbb{R}^2_+ : \nu_1 + \nu_2 = \mu_1 + \mu_2, \nu_1< \mu_1 \},\\
\mathcal{F}_2 &= \{\boldsymbol \nu \in \mathbb{R}^2_+ : \nu_1 + \nu_2 < \mu_1 + \mu_2, \nu_1= \mu_1 \}, \\
\mathcal{F}_3 &= \{\boldsymbol \nu \in \mathbb{R}^2_+ : \nu_1 = \mu_1 , \nu_2= \mu_2 \}.
\end{align*}
The system operates in heavy traffic if the arrival rate $\boldsymbol \lambda$ is very close to the boundary $\mathcal{F}$ of the capacity region. The $\mathcal{N}$-system satisfies the CRP condition when the arrival rate vector $\boldsymbol \lambda$ approaches $\mathcal{F}_1 $ or $\mathcal{F}_2$. On the other hand, when $\boldsymbol \lambda$ approaches the boundary $\mathcal{F}_3$, i.e., the arrival vector converges to the single point $(\mu_1,\mu_2)$, the CRP condition is not satisfied. For any $\boldsymbol \nu \in \mathcal{F}$, we assume that $\boldsymbol \lambda$ converges to the point $\boldsymbol \nu$ according to the trajectory,
\begin{align}
\label{eq: n_sys_arrival_vector}
\lambda_1 = (1-\epsilon)\nu_1, && \lambda_1 + \lambda_2 =(1-\gamma \epsilon)( \nu_1 + \nu_2),
\end{align}
where $\epsilon$ is the heavy traffic parameter. In this section, we assume that the arrival vector $\boldsymbol \lambda \rightarrow (\nu_1,\nu_2)$ as $\epsilon \rightarrow 0$ according to Eq. \eqref{eq: n_sys_arrival_vector}.
The parameter $\gamma >0$ defines the direction of approach of the arrival rate vector $\boldsymbol \lambda$ to the point $(\nu_1,\nu_2)$. This gives us that $\lambda_2 = (1-\gamma\epsilon) \nu_2 + \epsilon \nu_1 (1-\gamma)$.
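As a concrete illustration (our numbers, not from the paper), take $\boldsymbol \nu = (\mu_1,\mu_2) = (1,1)$, $\gamma = 1$ and $\epsilon = 0.1$. Then Eq. \eqref{eq: n_sys_arrival_vector} gives
\begin{equation*}
\lambda_1 = 0.9, \qquad \lambda_1 + \lambda_2 = 1.8 \ \Longrightarrow \ \lambda_2 = 0.9,
\end{equation*}
so both queues are simultaneously loaded close to capacity, and the CRP condition fails.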
For the $\mathcal{N}$-system, the MaxWeight scheduling policy prefers the queue with the larger queue length, i.e., if $q_2(t) \geq q_1(t)$, then both $S_1 $ and $S_2$ serve the jobs in the queue $q_2$; otherwise $S_1$ serves $q_1$ and $S_2$ serves $q_2$.
Under MaxWeight, the queue length process $\{\mathbf{q}(t)\}_{t\geq0}$ forms a continuous time Markov chain, where $\mathbf{q}(t) = (q_1(t),q_2(t))$.
\subsection{State-space collapse for $\mathcal{N}$-system}
\label{sec: nsys_ssc}
In this section, we provide the definition of state space collapse for the $\mathcal{N}$-system. Consider the cones $\mathcal{K}_1$, $\mathcal{K}_2$ and $\mathcal{K}_3$ given by
\begin{align*}
\mathcal{K}_1 = \left\{ \mathbf{x}\in \mathbb{R}^2_+ : x_2 = x_1 \right\},&&\mathcal{K}_2 = \left\{ \mathbf{x}\in \mathbb{R}^2_+ : x_2 = 0 \right\}, &&\mathcal{K}_3 = \left\{ \mathbf{x}\in \mathbb{R}^2_+ : x_2 \leq x_1 \right\}.
\end{align*}
For any vector $\mathbf{y} \in \mathbb{R}^2_+$, let $\mathbf y_{\| \mathcal{K}_i}$ denote the projection of the vector $\mathbf y$ onto the cone $\mathcal K_i$ for $i \in \{1,2,3\}$. Then,
\begin{align*}
\mathbf y_{\| \mathcal{K}_1} = \frac{y_1+y_2}{2} \begin{bmatrix} 1\\1 \end{bmatrix}, && \mathbf y_{\| \mathcal{K}_2} = \begin{bmatrix} y_1\\0 \end{bmatrix}, && \mathbf{y}_{\| \mathcal{K}_3} = \mathbf{y} \mathbf{1}_{\{y_2 \leq y_1\}} + \frac{y_1+y_2}{2} \begin{bmatrix} 1\\1 \end{bmatrix} \mathbf{1}_{\{y_2 > y_1\}}.
\end{align*}
Now, the perpendicular component $\mathbf{y}_{\perp \mathcal{K}_i} = \mathbf y - \mathbf y_{\| \mathcal{K}_i} $ for $i\in \{1,2,3\}$ is given by
\begin{align*}
\mathbf y_{\perp \mathcal{K}_1} = \frac{y_2-y_1}{2} \begin{bmatrix} -1\\1 \end{bmatrix}, && \mathbf y_{\perp \mathcal{K}_2} = \begin{bmatrix} 0\\y_2 \end{bmatrix}, && \mathbf{y}_{\perp \mathcal{K}_3} = \frac{y_2-y_1}{2} \begin{bmatrix} -1\\1 \end{bmatrix} \mathbf{1}_{\{y_2 > y_1\}}.
\end{align*}
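For concreteness (an illustrative example we add here), take $\mathbf{y} = (1,3)$; since $y_2 > y_1$,
\begin{equation*}
\mathbf y_{\| \mathcal{K}_3} = \begin{bmatrix} 2\\2 \end{bmatrix}, \qquad \mathbf y_{\perp \mathcal{K}_3} = \begin{bmatrix} -1\\1 \end{bmatrix},
\end{equation*}
whereas any $\mathbf{y}$ with $y_2 \leq y_1$ already lies in $\mathcal{K}_3$ and has $\mathbf y_{\perp \mathcal{K}_3} = \mathbf{0}_2$.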
Note that Appendix \ref{app: n_sys} provides mathematical details mostly for the non-CRP case, i.e., when $\boldsymbol \lambda \rightarrow \boldsymbol \nu$ with $\boldsymbol\nu\in \mathcal{F}_3$. For the sake of convenience, in Appendix \ref{app: n_sys} we use the notation $\mathbf y_{\|} = \mathbf y_{\| \mathcal{K}_3} $ and $\mathbf y_{\perp} = \mathbf y_{\perp \mathcal{K}_3}$ for any vector $\mathbf y \in \mathbb R^2$.
\begin{definition}
\label{def: n_sys_ssc}
Consider the $\mathcal{N}$-system as defined in Section \ref{sec: n_sys_model} operating under a given scheduling policy. Pick any $i\in \{1,2,3\}$ and suppose $\boldsymbol \lambda \rightarrow \boldsymbol \nu$ as $\epsilon \rightarrow 0$, where $\boldsymbol \nu \in \mathcal{F}_i$.
Then, we say that the scheduling policy achieves \textit{state space collapse} if for every $\theta \geq 0$, there exist $\epsilon( \theta) >0$ and a constant $C^\star < \infty$, independent of $\epsilon$, such that for every $0< \epsilon \leq \epsilon(\theta)$,
\begin{equation*}
\mathbb{E}[e^{\epsilon \theta \|\mathbf{q}_{\perp \mathcal{K}_i} \| }] < C^\star < \infty,
\end{equation*}
where $\mathbf{q}_{\perp \mathcal{K}_i} = \mathbf{q} -\mathbf{q}_{\| \mathcal{K}_i} $ and the expectation is taken under the steady-state distribution.
As a conclusion, for any scheduling policy that achieves state space collapse, we have that for every $\theta > 0$, $\lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \theta \|\mathbf{q}_{\perp \mathcal{K}_i} \| }] < \infty.$ Further, for every non-negative integer $r$, there exists a $C_r$ independent of $\epsilon$ such that \[\mathbb{E} \big [\|\mathbf{q}_{\perp \mathcal{K}_i} \|^r \big] \leq C_r.\]
\end{definition}
From the above definition, if $\boldsymbol \lambda \rightarrow \boldsymbol \nu$ as $\epsilon \rightarrow 0$, where $\boldsymbol \nu \in \mathcal{F}_i$ for $i\in \{1,2\}$, then the state space collapse happens to a one-dimensional subspace, which is the same as saying that the system satisfies the CRP condition. But if $\boldsymbol \lambda \rightarrow \boldsymbol \nu$ as $\epsilon \rightarrow 0$, where $\boldsymbol \nu \in \mathcal{F}_3$, the state space of the queue length vector does not collapse to a lower dimensional subspace, but the size of the state space reduces. Similar to the previously considered systems, for the $\mathcal{N}$-system also, MaxWeight scheduling achieves state space collapse according to Definition \ref{def: n_sys_ssc}. The proof of state space collapse for MaxWeight when $\boldsymbol \lambda \rightarrow \boldsymbol \nu$ as $\epsilon \rightarrow 0$, for $\boldsymbol \nu \in \mathcal{F}_3$, is provided in Appendix \ref{app: n_sys_ssc}; it follows by using the result presented in \cite[Lemma 10]{Weina_bandwidth}.
\subsection{Results for $\mathcal{N}$-System}
\label{sec: n_sys_dist}
In this section, we present the functional equation (in Theorem \ref{theo: n_sys_mgf_eq}) that the heavy traffic distribution for the $\mathcal{N}$-system satisfies under the non-CRP condition. We also prove that the solution to the functional equation is unique, i.e., there is a unique distribution that satisfies the functional equation. We also provide the heavy traffic distribution (in Theorem \ref{theo: n_sys_distribution}) for $\mathcal{N}$-system under the condition that service rates are symmetric. With a slight abuse of notation, we reuse the notation $q_1$ and $q_2$ to denote the steady state queue length for the $\mathcal{N}$-system.
\begin{theorem}
\label{theo: n_sys_mgf_eq}
Consider the $\mathcal{N}$-system as defined in Section \ref{sec: n_sys_model}, operating under a scheduling policy that achieves state space collapse according to the Definition \ref{def: n_sys_ssc}. Suppose $\boldsymbol \lambda \rightarrow (\mu_1,\mu_2)$. Let
$\Phi = \{\boldsymbol \phi \in\mathbb{C}^2: Re(\phi_1) \leq 0, Re(\phi_1 + \phi_2 )\leq 0\}$.
Then, for any $\boldsymbol \phi \in \Phi$,
\begin{align}
\label{eq: n_sys_mgf_eq}
L(\boldsymbol \phi) \big[ \mu_2 (-\gamma\phi_2 & +\phi_2^2)+\phi_2\mu_1(1-\gamma) \nonumber\\
& + \mu_1 (-\phi_1+\phi_1^2)\big] +\mu_1(\phi_1-\phi_2) M_1(\boldsymbol \phi) +\gamma(\mu_1 +\mu_2)\phi_2 M_2(\boldsymbol \phi) =0,
\end{align}
where
\begin{align*}
L(\boldsymbol \phi) = \lim_{\epsilon\rightarrow 0} \mathbb{E}[e^{\epsilon(\phi_1 q_1+\phi_2 q_2)}], && M_1(\boldsymbol \phi) = \lim_{\epsilon\rightarrow 0} \mathbb{E}[e^{\epsilon(\phi_1+ \phi_2)q_2} |q_1 \leq q_2], && M_2(\boldsymbol \phi) =\lim_{\epsilon\rightarrow 0} \mathbb{E}[e^{\epsilon \phi_1 q_1}|q_2 =0].
\end{align*}
\end{theorem}
Theorem \ref{theo: n_sys_mgf_eq} provides the functional equation for the heavy traffic distribution of the $\mathcal{N}$-system under the non-CRP condition, i.e., when the arrival rate vector $\boldsymbol \lambda$ approaches the point $(\mu_1,\mu_2)$ along a direction with
direction parameter $\gamma$. Note that Theorem \ref{theo: n_sys_mgf_eq} holds for any values of $\mu_1$ and $\mu_2$, while in Theorem \ref{theo: n_sys_distribution} case (c), we take $\mu_1 = \mu_2$. This is only because we can solve Eq. \eqref{eq: n_sys_mgf_eq} under the condition $\mu_1 = \mu_2$. Finding an analytic solution of Eq. \eqref{eq: n_sys_mgf_eq} when $\mu_1\neq \mu_2$ is in general quite hard. One approach is presented in \cite{franceschi2019integral}, where the solution can be represented as a complicated Cauchy integral by solving a properly defined boundary value problem.
The idea behind the proof of Theorem \ref{theo: n_sys_mgf_eq} is that for any stable scheduling policy, the expected drift of a well-defined Lyapunov function is zero in steady-state. For our analysis, we choose a complex exponential as the Lyapunov function. As shown in the proof of Theorem \ref{theo: n_sys_mgf_eq}, $\boldsymbol \phi$ is chosen in such a way that the exponential Lyapunov function for the $\mathcal{N}$-system is well defined. By equating the drift of the complex exponential function to zero in steady state and then using a second-order approximation in terms of the heavy traffic parameter $\epsilon$, we obtain the functional equation (i.e., Eq. \eqref{eq: n_sys_mgf_eq}) for the $\mathcal{N}$-system as $\epsilon \rightarrow 0$. The complete proof of Theorem \ref{theo: n_sys_mgf_eq} is provided in Appendix \ref{app: n_sys_mgf_eq}. Next, we claim that there is a unique distribution that solves the functional equation.
\begin{lemma}
\label{lem: n_sys_uniqueness}
Consider the $\mathcal{N}$-system as defined in Section \ref{sec: n_sys_model}, operating under a scheduling policy that achieves state space collapse according to the Definition \ref{def: n_sys_ssc}. Then, there is a unique solution $(L(\boldsymbol \phi) , M_1(\boldsymbol \phi), M_2(\boldsymbol \phi))$ to the functional equation given in Eq. \eqref{eq: n_sys_mgf_eq} that is a valid Laplace transform.
\end{lemma}
A crucial step to solve the functional equation given in Theorem \ref{theo: n_sys_mgf_eq} is to ensure that it has a unique solution. Then, one can just guess the solution and check that it satisfies the functional equation. Using this idea, we solve the functional equation under the condition $\mu_1=\mu_2$ to derive the result presented in Theorem \ref{theo: n_sys_distribution}.
The proof of Lemma \ref{lem: n_sys_uniqueness} follows by using the results presented in \cite{franceschi2019integral} and the complete proof is presented in Appendix \ref{app: n_sys_uniqueness}.
\begin{theorem}
\label{theo: n_sys_distribution}
Consider the $\mathcal{N}$-system defined in Section \ref{sec: n_sys_model}, operating under a scheduling policy that achieves state space collapse according to the Definition \ref{def: n_sys_ssc}. Then,
\begin{itemize}
\item[(a)] If $\boldsymbol \lambda \rightarrow \boldsymbol\nu$ for some $\boldsymbol \nu \in \mathcal{F}_1$, the heavy traffic distribution is given by,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \epsilon (q_1,q_2) = (\Upsilon_2,\Upsilon_2),
\end{align*}
\item[(b)] If $\boldsymbol \lambda \rightarrow \boldsymbol\nu$ for some $\boldsymbol \nu \in \mathcal{F}_2$, the heavy traffic distribution is given by,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \epsilon (q_1,q_2) = (\Upsilon_1,0),
\end{align*}
\item[(c)] If $\boldsymbol \lambda \rightarrow \boldsymbol\nu $ for $ \boldsymbol \nu \in \mathcal{F}_3$, i.e., $\boldsymbol \nu = (\mu_1,\mu_2)$, and if $\mu_1 = \mu_2$, the heavy traffic distribution is given by,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \epsilon (q_1,q_2) = (\Upsilon_1+\Upsilon_2,\Upsilon_2),
\end{align*}
where $\Upsilon_1$ and $\Upsilon_2$ are independent exponential random variables with means $1$ and $\frac{1}{2\gamma}$, respectively.
\end{itemize}
\end{theorem}
Theorem \ref{theo: n_sys_distribution} provides the heavy traffic distribution for the $\mathcal{N}$-system. For parts (a) and (b), the $\mathcal{N}$-system satisfies the CRP condition and the analysis for these cases is very similar to the analysis presented in \cite{hurtado2020transform}. Our major focus in Theorem \ref{theo: n_sys_distribution} is part (c), which does not satisfy the CRP condition as both queues are in heavy traffic. Part (c) of Theorem \ref{theo: n_sys_distribution} says that, if both queues are in heavy traffic and the service rates are symmetric, the heavy traffic distribution of the $\mathcal{N}$-system is represented using two independent exponential random variables.
Note that the boundary sets $\mathcal{F}_1$ and $\mathcal{F}_2$ are line segments and the set $\mathcal{F}_3$ is a common end point of the line segments $\mathcal{F}_1$ and $\mathcal{F}_2$. Thus, intuitively, if the system approaches the boundary $\mathcal{F}_3$ in heavy traffic, it displays a behaviour which is a combination of the behaviour it displays on the other two boundaries $\mathcal{F}_1$ and $\mathcal{F}_2$.
The proof of Theorem \ref{theo: n_sys_distribution} parts (a) and (b) follows from the analysis provided in \cite{hurtado2020transform}, as in parts (a) and (b) the system satisfies the CRP condition. For the non-CRP case, i.e., Theorem \ref{theo: n_sys_distribution} part (c),
the proof uses the functional equation provided in Theorem \ref{theo: n_sys_mgf_eq}. Complete proof of Theorem \ref{theo: n_sys_distribution} part (c) is given in Appendix \ref{app: n_sys_distribution}.
\begin{corollary}
\label{cor: n_sys_max}
Consider the $\mathcal{N}$-system defined in Section \ref{sec: n_sys_model}, operating under the MaxWeight scheduling policy. Then, the heavy traffic distribution of the corresponding queue length vector satisfies the result mentioned in Theorem \ref{theo: n_sys_mgf_eq} and Theorem \ref{theo: n_sys_distribution}.
\end{corollary}
As mentioned in Section \ref{sec: nsys_ssc}, MaxWeight scheduling achieves state space collapse according to the Definition \ref{def: n_sys_ssc}. Thus, Corollary \ref{cor: n_sys_max} is a direct application of Theorem \ref{theo: n_sys_mgf_eq} and Theorem \ref{theo: n_sys_distribution}.
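To complement these results, the prediction of Theorem \ref{theo: n_sys_distribution} part (c) can also be checked numerically. The following is a minimal illustrative simulation sketch (ours, not part of the paper; the parameter values are arbitrary) of the $\mathcal{N}$-system under MaxWeight with $\mu_1 = \mu_2$, comparing the time-averaged scaled queue lengths against the heavy traffic predictions $\mathbb{E}[\epsilon q_1] \approx 1 + \frac{1}{2\gamma}$ and $\mathbb{E}[\epsilon q_2] \approx \frac{1}{2\gamma}$ for small $\epsilon$:
\begin{verbatim}
# Illustrative Gillespie simulation (not from the paper) of the N-system
# CTMC under MaxWeight; for small eps the time averages should be close
# to the heavy traffic predictions of Theorem part (c).
import random

mu1 = mu2 = 1.0
gamma, eps = 1.0, 0.05
lam1 = (1 - eps) * mu1                        # arrival trajectory
lam2 = (1 - gamma * eps) * (mu1 + mu2) - lam1

rng = random.Random(1)
q1 = q2 = 0
t = a1 = a2 = 0.0
while t < 2e5:
    rates = [(lam1, (1, 0)), (lam2, (0, 1))]  # arrivals to q1 and q2
    if q1 > q2:                               # S1 serves q1, S2 serves q2
        rates.append((mu1, (-1, 0)))
        if q2 > 0:
            rates.append((mu2, (0, -1)))
    elif q2 > 0:                              # both servers serve q2
        rates.append((mu1 + mu2, (0, -1)))
    total = sum(r for r, _ in rates)
    dt = rng.expovariate(total)
    a1 += q1 * dt; a2 += q2 * dt; t += dt
    u, acc = rng.uniform(0, total), 0.0
    for r, (d1, d2) in rates:                 # sample the next transition
        acc += r
        if u <= acc:
            q1 += d1; q2 += d2
            break

print(eps * a1 / t, 1 + 1 / (2 * gamma))      # scaled E[q1] vs prediction
print(eps * a2 / t, 1 / (2 * gamma))          # scaled E[q2] vs prediction
\end{verbatim}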
\section{Outline of Proof for Three-queue system}
\label{sec: 3q_outline}
In this section, we present a brief outline of the proof of the results presented in Section \ref{sec: 3q_results}. The complete proof of all the results is provided in Appendix \ref{app: 3q}. Note that in this section, we use the notation presented in Section \ref{sec: 3q_model}.
The arguments presented in this section convey much of the mathematical complexity required to prove the results mentioned in the previous sections while omitting most of the technical details.
\begin{proof}[Proof outline for Theorem \ref{theo: 3q_functional_eq}]
The first step is to show that the Laplace transform of the queue length vector exists for any $\boldsymbol \theta \in \Theta $, which follows by using the fact that $Re(\mathbf B^T \boldsymbol \theta) \leq \mathbf 0_{3}$. The details are provided in Lemma \ref{lem: 3q_mgf_equivalence} in Appendix \ref{app: 3q_mgf_equivalence}. Next, we use the definition of the complex exponential function to get,
\begin{align*}
e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) = e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( -\epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle + \sum_{k = 2}^\infty \frac{\epsilon^k}{k!} \langle \boldsymbol{\theta}, \mathbf{u} \rangle^k \Big).
\end{align*}
The next step is to prove that the higher order terms on the RHS of the above equation go to $0$ as $\epsilon\rightarrow 0$. After eliminating the higher order terms as $\epsilon \rightarrow 0$ and doing some further technical manipulation, we get
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Big[e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big] = - \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\theta}) \rangle.
\end{equation*}
Next, we have the equivalence,
\begin{align}
\mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big] \nonumber
& = \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle}\Big] \allowdisplaybreaks \nonumber \\
& = \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle}\Big] \allowdisplaybreaks \nonumber \\
& = \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \Bigg( \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle } \Big] - 1\Bigg) \allowdisplaybreaks \nonumber \\
&= \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q} \rangle } \Big] \Bigg( \mathbb{E} [ \epsilon \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle] + \frac{\epsilon^2}{2} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^2] + \sum_{k=3}^\infty \frac{\epsilon^k}{k!} \mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle^k] \Bigg).
\end{align}
The above equality uses three key ideas: first, that $\mathbf{q}^+ = \mathbf{q} +\mathbf{a}- \mathbf{s} +\mathbf{u} $; second, that the steady-state distributions of $\mathbf{q}$ and $\mathbf{q}^{+}$ (the state following $\mathbf{q}$) are the same; and third, the Taylor expansion of the complex exponential function. Now, by simple calculations,
\begin{align*}
\epsilon\mathbb{E} [ \langle \boldsymbol{\theta}, \mathbf{a }- \mathbf{s } \rangle ] = - \frac{\epsilon^2}{2} \langle \boldsymbol \theta, \mathbf{1}_3 \rangle, && \frac{\epsilon^2}{2}\mathbb{E}[ \langle \boldsymbol \theta , \mathbf{a }- \mathbf{s } \rangle^2 ] = \frac{\epsilon^2}{2} \langle \boldsymbol \theta, \boldsymbol \sigma^2 \boldsymbol \theta \rangle + \frac{\epsilon^4}{8}\langle \boldsymbol \theta, \mathbf{1}_3 \rangle^2.
\end{align*}
As the higher order terms can be eliminated as $\epsilon \rightarrow 0$, for any $\boldsymbol \theta \in \boldsymbol \Theta$,
\begin{align*}
\lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^2} \mathbb{E} \Big[ e^{ \epsilon \langle \boldsymbol{\theta}, \mathbf{q}^+ \rangle } \Big( e^{- \epsilon \langle \boldsymbol{\theta}, \mathbf{u} \rangle} -1 \Big) \Big]
= \left( -\frac{1}{2}\langle \boldsymbol{\theta}, \mathbf{1}_{3} \rangle + \frac{1}{2} \langle \boldsymbol{\theta} , \boldsymbol\sigma^2 \boldsymbol{\theta} \rangle \right) L(\boldsymbol{\theta}).
\end{align*}
Combining the above results gives us that for any $\boldsymbol \theta \in \boldsymbol \Theta$,
\begin{equation}
\left( -\frac{1}{2}\langle \boldsymbol{\theta}, \mathbf{1}_{3} \rangle + \frac{1}{2} \langle \boldsymbol{\theta} , \boldsymbol\sigma^2 \boldsymbol{\theta} \rangle \right) L(\boldsymbol{\theta}) + \theta_2 M_2(\boldsymbol\theta)+\theta_3 M_3(\boldsymbol\theta) = 0,
\end{equation}
where
\begin{align*}
L(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}], && M_2(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_2e^{\epsilon (\theta_2+2\theta_3)q_3}], && M_3(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_3e^{\epsilon (2\theta_2+\theta_3)q_2}].
\end{align*}
Note that $M_1(\boldsymbol\theta)=0$ because $u_1=0$ by the definition of the service process.
This gives us the functional equation in Eq. \eqref{eq: 3q_functional_eq}.
\end{proof}
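As an illustrative aside, the stationarity identity used above (that $\mathbf{q}$ and $\mathbf{q}^{+}$ share the same steady-state law) can be checked numerically on a toy single-server queue. The following Python sketch (all parameter values are arbitrary choices for illustration, not part of the analysis) estimates both sides of the identity by simulation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, theta, T = 0.8, -0.3, 500_000    # arrival rate, test point theta < 0, horizon
q, acc_q, acc_qp = 0, 0.0, 0.0
for t in range(T):
    a = rng.binomial(1, lam)          # Bernoulli(lam) arrivals, unit service
    qp = max(q + a - 1, 0)            # q^+ = [q + a - s]^+
    if t > T // 10:                   # crude burn-in
        acc_q  += np.exp(theta * q)   # accumulates E[e^{theta q}]
        acc_qp += np.exp(theta * qp)  # accumulates E[e^{theta q^+}]
    q = qp
print(acc_q / acc_qp)                 # close to 1: q and q^+ share one law
\end{verbatim}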
Next, we present the uniqueness result for a functional equation with two variables.
\begin{lemma}
\label{lem: functional_uniqueness}
Consider a functional equation of the form
\begin{equation}
\label{eq: functional}
\gamma(\boldsymbol \theta) \Phi (\boldsymbol \theta) + \gamma_1(\boldsymbol \theta) \Phi_1 (\theta_2) +\gamma_2(\boldsymbol \theta) \Phi_2 (\theta_1) = 0
\end{equation}
for all $\boldsymbol \theta \in \mathbb{C}^2$ such that $Re(\boldsymbol \theta ) \leq \mathbf 0$, where $\Phi (\boldsymbol \theta), \Phi_1 (\theta_2)$ and $\Phi_2 (\theta_1)$ are the Laplace transforms of unknown probability distributions $\pi(x_1,x_2), \nu_1(x_2)$ and $\nu_2(x_1)$, respectively, where the support of $\pi(x_1,x_2)$ is the two-dimensional positive orthant and $\nu_1(x_2)$ and $\nu_2(x_1)$ are boundary measures that have support on the axes. The coefficient functions are given by
\begin{align*}
\gamma(\boldsymbol \theta) &= \alpha_1 \theta_1 + \alpha_2 \theta_2 + \frac{1}{2} (\sigma_{11} \theta_1^2 + 2\sigma_{12} \theta_1\theta_2 + \sigma_{22}\theta_2^2), &&\gamma_1 (\boldsymbol \theta) = r_{11} \theta_1 + r_{21}\theta_2, && \gamma_2 (\boldsymbol \theta) = r_{12} \theta_1 + r_{22}\theta_2.
\end{align*}
Suppose the following conditions are satisfied,
\begin{align}
r_{11} > 0, && r_{22} >0, && r_{11}r_{22} -r_{12}r_{21} >0, && r_{22}\alpha_1 -r_{12}\alpha_{2} <0, && r_{11}\alpha_2 -r_{21}\alpha_{1} <0.
\end{align}
Then, there is a unique solution $(\Phi (\boldsymbol \theta), \Phi_1 (\theta_2)$, $\Phi_2 (\theta_1))$ to the functional equation given in Eq. \eqref{eq: functional}.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem: functional_uniqueness}]
The proof of Lemma \ref{lem: functional_uniqueness} uses the result provided in \cite[Theorem 1]{franceschi2019integral}. Consider an SRBM \cite{williams1995semimartingale}\cite[Equation 1]{franceschi2019integral} with drift in the quarter plane $\mathbb{R}_+^2$. Suppose the parameters of the SRBM are as follows: $\Sigma$ is the covariance matrix of the Brownian motion, $\boldsymbol \alpha$ denotes the interior drift, and $\mathbf R$ is the reflection matrix, given by
\begin{align*}
\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_{12}\\ \sigma_{21} & \sigma_{22} \end{bmatrix}, && \boldsymbol \alpha = \begin{bmatrix} \alpha_1\\ \alpha_2 \end{bmatrix}, && \mathbf R = \begin{bmatrix} r_{11} & r_{12}\\ r_{21} & r_{22} \end{bmatrix}.
\end{align*}
Then, for this SRBM, the functional equation presented in \cite{franceschi2019integral} matches the functional equation given in Eq. \eqref{eq: functional}. Thus, from the result provided in \cite[Theorem 1]{franceschi2019integral}, we know that there is a unique solution to the functional equation given in Eq. \eqref{eq: functional}. The result in \cite[Theorem 1]{franceschi2019integral} provides the unique solution to the functional equation in terms of Cauchy integrals. For the proof of Lemma \ref{lem: functional_uniqueness}, we only use the fact that the solution is unique.
\end{proof}
\begin{proof}[Proof outline for Lemma \ref{lem: 3q_uniqueness}]
The proof of Lemma \ref{lem: 3q_uniqueness} follows by using Lemma \ref{lem: functional_uniqueness}. The idea is to substitute $\boldsymbol \theta$ using the relation $\boldsymbol \psi = (\psi_1,\psi_2) = \mathbf B^T \boldsymbol \theta$. Now, by using the fact that $\boldsymbol \theta \in \mathcal S$, we have $\theta_1 = \theta_2+\theta_3$, and so $\psi_1 = 2\theta_2 + \theta_3$ and $\psi_2 = \theta_2 + 2\theta_3$. Then, we can rewrite the functional equation as
\begin{equation*}
\left( - \frac{1}{3} \langle \boldsymbol{\psi},\mathbf{1} \rangle + \frac{1}{2} \langle \boldsymbol{\psi} , \tilde{\boldsymbol \Gamma} \boldsymbol{\psi} \rangle \right) L(\boldsymbol{\psi}) +
\frac{1}{3} (2\psi_1 - \psi_2)M_2(\psi_2) +\frac{1}{3} (2\psi_2 - \psi_1)M_3(\psi_1) = 0,
\end{equation*}
where $\tilde{\boldsymbol \Gamma} =\mathbf D^{-1} \mathbf{B}^T\boldsymbol \sigma^2 \mathbf{B} \mathbf D^{-1} $ and $\mathbf D = \mathbf B^T\mathbf B = \begin{bmatrix}
2 & 1\\
1 & 2
\end{bmatrix} $.
Now this functional equation holds for any $\boldsymbol \psi$ such that $Re(\boldsymbol \psi) \leq \mathbf{0}_2$. The above functional equation has the same form as the functional equation in Lemma \ref{lem: functional_uniqueness} by taking the reflection matrix $\mathbf{R} = \mathbf{D}^{-1}$ and the interior drift $\boldsymbol\alpha = -\frac{1}{3} \mathbf{1}_2$. Now it is easy to observe that the required conditions in Lemma \ref{lem: functional_uniqueness} are satisfied and so, the functional equation for the Three-queue system has a unique solution.
\end{proof}
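A quick numerical check of these conditions is given below (a Python sketch; the particular matrix $\mathbf B$ is one concrete choice consistent with the map $\boldsymbol\psi = \mathbf B^T\boldsymbol\theta = (2\theta_2+\theta_3,\theta_2+2\theta_3)$ on $\mathcal S$ used above).
\begin{verbatim}
import numpy as np

B = np.array([[1, 1], [1, 0], [0, 1]], dtype=float)  # B^T th = (2th2+th3, th2+2th3) on S
D = B.T @ B                                          # [[2, 1], [1, 2]]
R = np.linalg.inv(D)                                 # reflection matrix R = D^{-1}
a = -np.ones(2) / 3                                  # interior drift alpha
ok = [R[0, 0] > 0, R[1, 1] > 0,
      R[0, 0] * R[1, 1] - R[0, 1] * R[1, 0] > 0,
      R[1, 1] * a[0] - R[0, 1] * a[1] < 0,
      R[0, 0] * a[1] - R[1, 0] * a[0] < 0]
print(D, R, all(ok), sep="\n")                       # all five conditions hold
\end{verbatim}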
\begin{proof}[Proof outline for Theorem \ref{thm: 3q_dist}]
If we pick any $\Tilde{\boldsymbol \theta} \in \mathbb{C}^3 $ such that $\boldsymbol \theta$ is the projection of $\Tilde{\boldsymbol \theta}$ onto the space $\mathcal{S}$ and $\boldsymbol \theta \in \boldsymbol \Theta$, then by using the state space collapse result for the Three-queue system, we can prove that
\begin{equation*}
\lim_{\epsilon \rightarrow 0}\mathbb{E}[e^{\epsilon \langle \Tilde{\boldsymbol \theta}, \mathbf{q} \rangle}] = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}] = L(\boldsymbol \theta).
\end{equation*}
The proof for this is provided in Lemma \ref{lem: 3q_mgf_equivalence} in Appendix \ref{app: 3q_mgf_equivalence}.
This implies that it is sufficient to characterize the Laplace transform of the queue length vector for $\boldsymbol \theta \in \mathcal{S}$. So, $\theta_1 = \theta_2 + \theta_3$. The Laplace transform of the proposed limiting distribution is given by
\begin{align*}
\mathbb{E}[e^{\theta_1 (\Upsilon_1+\Upsilon_2)+\theta_2 \Upsilon_1 + \theta_3 \Upsilon_2 }] &= \mathbb{E}[e^{(2\theta_2 + \theta_3) \Upsilon_1+(\theta_2 + 2\theta_3)\Upsilon_2}] \\
&= \frac{1}{\bigg( 1- (2\theta_2 + \theta_3) \frac{3\sigma_2^2 + \sigma_3^2}{8}\bigg)\bigg( 1- (\theta_2 + 2\theta_3)\frac{\sigma_2^2 + 3\sigma_3^2}{8}\bigg)}.
\end{align*}
Now pick
\begin{align}
L(\boldsymbol \theta) = \frac{1}{\bigg( 1- (2\theta_2 + \theta_3) \frac{3\sigma_2^2 + \sigma_3^2}{8}\bigg)\bigg( 1- (\theta_2 + 2\theta_3)\frac{\sigma_2^2 + 3\sigma_3^2}{8}\bigg)},\nonumber\\ M_2 (\boldsymbol \theta) = \frac{1}{1- (\theta_2 + 2\theta_3)\frac{\sigma_2^2 + 3\sigma_3^2}{8}}, && M_3 (\boldsymbol \theta) = \frac{1}{1- (2\theta_2 + \theta_3)\frac{3\sigma_2^2 + \sigma_3^2}{8}}.
\end{align}
By some algebraic manipulation, we can show that this satisfies the functional equation given in Eq. \eqref{eq: 3q_functional_eq} under the condition $2\sigma^2_1 = \sigma_2^2 +\sigma_3^2$ (note, in particular, that $M_2$ and $M_3$ above are functions of $\theta_2+2\theta_3$ and $2\theta_2+\theta_3$, respectively, consistent with their definitions). Moreover, from Lemma \ref{lem: 3q_uniqueness}, as the solution to the functional equation is unique, this has to be the only solution of the functional equation.
\end{proof}
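The algebraic manipulation mentioned above can be delegated to a computer algebra system. The following sympy sketch substitutes the candidate $(L, M_2, M_3)$ into the functional equation, imposes $\theta_1=\theta_2+\theta_3$ and $2\sigma_1^2=\sigma_2^2+\sigma_3^2$, and confirms that the residual cancels.
\begin{verbatim}
import sympy as sp

t2, t3, s2, s3 = sp.symbols('theta2 theta3 sigma2 sigma3', positive=True)
t1 = t2 + t3                          # theta in S: theta1 = theta2 + theta3
s1sq = (s2**2 + s3**2) / 2            # condition 2 sigma1^2 = sigma2^2 + sigma3^2
psi1, psi2 = 2*t2 + t3, t2 + 2*t3
A, B = (3*s2**2 + s3**2) / 8, (s2**2 + 3*s3**2) / 8
L  = 1 / ((1 - psi1*A) * (1 - psi2*B))
M2 = 1 / (1 - psi2*B)                 # a function of theta2 + 2 theta3
M3 = 1 / (1 - psi1*A)                 # a function of 2 theta2 + theta3
gamma = -(t1 + t2 + t3)/2 + (s1sq*t1**2 + s2**2*t2**2 + s3**2*t3**2)/2
print(sp.cancel(gamma*L + t2*M2 + t3*M3))   # 0: the candidate solves the equation
\end{verbatim}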
\section{Input-queued switch}
\label{sec: switch}
In this section, we provide our results on the heavy traffic distribution of the Input-queued switch.
In Section \ref{sec: switch_model}, we present the model for a switch and in Section \ref{sec: switch_ssc}, we provide the state space collapse result for the switch system. The results regarding the heavy traffic distribution of the Input-queued switch are presented in Section \ref{sec: switch_results}. The results for the Input-queued switch hold under the assumption that a certain conjecture holds (see Conjecture \ref{lem: switch_uniqueness}). In Section \ref{sec: 3q}, we present a simpler system, which we call the Three-queue system, for which the conjecture holds.
\subsection{Preliminaries for Input-queued switch}
In this section, we present the model (in Section \ref{sec: switch_model}) and the SSC result (in Section \ref{sec: switch_ssc}) for the switch system.
\subsubsection{Model for Input-queued switch}
\label{sec: switch_model}
An Input-queued switch (or just switch) is a device that exchanges data from one channel to another in a data center. A switch of size $n$ consists of $n$ input ports and $n$ output ports. The message packets flow from input ports to output ports in a time-slotted manner.
For time slot $t$, we denote by $a_{i+n(j-1)}(t)$ the number of packets that arrive at input $i$ to be sent to output port $j$. As there are $n^2$ such input-output pairs, the arrivals in any time slot can be represented by an $n^2$-dimensional vector $\mathbf{a}(t)$. The architecture of the device does not allow all the packets to be transferred in one go, which leads to a queue build-up at the inputs. We use $q_{i+n(j-1)}(t)$ (or $\mathbf{q}(t)$ in vector notation) to denote the backlog of packets that need to be transferred to output $j$ from input $i$. We assume that the arrivals are i.i.d. with respect to $t$ and that the distribution of the arrivals has bounded support (i.e., for all $(i,j)$ and $t$, $a_{i+n(j-1)}(t)\leq a_{\max}$). The mean arrival rate vector is given by $\mathbb E[\mathbf{a}(t)] = \boldsymbol \lambda$ and let $\boldsymbol \sigma^2$ be the covariance matrix of the arrival vector $\mathbf{a}(t)$. The independence of the arrivals across the input-output pairs gives us that the covariance matrix $\boldsymbol\sigma^2$ is a diagonal matrix. Also, under the symmetric variance condition, all the variances are equal and then $\boldsymbol \sigma^2 =\sigma^2 \mathbf{I}_{n^2} $, where $\mathbf{I}_{n^2}$ is the identity matrix of size $n^2$.
The bottlenecks in the system do not allow the transfer of all the packets in the queues simultaneously. Every port can send or receive at most one packet in any time slot, i.e., any input port can send at most one packet in a given time slot. Similarly, any output port can receive at most one packet in a given time slot. The packet transfer can happen only among the connected input-output pairs in that time slot. Therefore, the switch system can be modeled as a complete bipartite graph with $2n $ nodes, where $q_{i+n(j-1) }(t)$ denotes the weight of the edge $(i,j)$.
A \textit{schedule} denoted by $\mathbf{s}(t) \in \{0,1\}^{n^2}$ gives the set of input-output pairs that are connected in time slot $t$. The element $s_{i+n(j-1)}(t) =1$ if and only if the pair $(i,j)$ is connected in time slot $t$. Using the idea of the complete bipartite graph, a schedule corresponds to a perfect matching between input and output nodes.
It follows that the set of possible schedules $\mathcal{X}$ is given by
\begin{equation*}
\mathcal{X} = \left\{ \mathbf{s} \in \{0,1\}^{n^2} : \sum_{i=1}^n s_{i+n(j-1)} = 1 \ \forall j, \sum_{j=1}^n s_{i+n(j-1)} = 1 \ \forall i \right\}.
\end{equation*}
The queue length evolution process is given by
\begin{align*}
\mathbf{q}(t+1) &= [\mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t)]^+= \mathbf{q}(t) + \mathbf{a}(t) - \mathbf{s}(t) + \mathbf{u}(t),
\end{align*}
where the operation $[\cdot]^+ = \max(0,\cdot)$ in the above equation is used because the queue length cannot be negative. The term $\mathbf{u}(t)$ is the unused service, which arises because it might happen that there is a connection between an input-output pair but there is no packet available to be transferred. Thus, for any $i,j \in \{1,2,\dots n\}$, $u_{i+n(j-1)}(t) =1$ if and only if $s_{i+n(j-1)}(t) =1, a_{i+n(j-1)}(t) =0$ and $q_{i+n(j-1)}(t) =0$. This gives us that $q_{i+n(j-1)}(t+1)u_{i+n(j-1)}(t) = 0 $ for all $(i,j)$, which in vector notation is given by $\langle \mathbf{q}(t+1),\mathbf{u}(t) \rangle = 0$.
A scheduling algorithm is then the policy that chooses the schedule in each time slot.
We define the \textit{weight} of the schedule as the sum of the queue lengths that are being served in any time slot. A popular scheduling algorithm for switch system is \textit{MaxWeight} scheduling which chooses the schedule with maximum weight, i.e.,
\begin{equation*}
\mathbf{s}(t) \in \arg\max_{\mathbf{s} \in \mathcal{X}} \langle \mathbf{q}(t) , \mathbf{s} \rangle = \arg\max_{\mathbf{s} \in \mathcal{X}} \sum_{i=1}^n\sum_{j=1}^n s_{i+n(j-1)} \times q_{i+n(j-1)}(t).
\end{equation*}
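Computationally, finding a MaxWeight schedule is an instance of the assignment (maximum-weight perfect matching) problem. A minimal Python sketch, not part of the paper's development, using SciPy's assignment routine on a made-up queue length matrix:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 3
Q = rng.integers(0, 10, size=(n, n))     # queue lengths q_{ij} as a matrix
row, col = linear_sum_assignment(Q, maximize=True)  # max-weight matching
S = np.zeros((n, n), dtype=int)
S[row, col] = 1                           # the MaxWeight schedule
print(Q, S, (Q * S).sum(), sep="\n")      # schedule and its weight
\end{verbatim}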
In this paper, we only consider scheduling algorithms for which the process $\{\mathbf{q}(t)\}_{t=0}^\infty$ forms an irreducible and aperiodic Markov chain. The stability of the system in this scenario means that the queue lengths do not grow to infinity. More formally, we define a system to be stable if the Markov chain $\{\mathbf{q}(t)\}_{t=0}^\infty$ is positive recurrent. The term \textit{capacity region} is used to denote the set of arrival rate vectors for which there exists a scheduling policy under which the system is stable.
The capacity region of the switch system is given by
\begin{equation*}
\mathcal C = \Big \{ \mathbf{\boldsymbol \lambda} \in \mathbb{R}_+^{n^2 } : \sum_{i=1}^n \lambda_{i+n(j-1)} < 1 \ \forall j, \ \sum_{j=1}^n \lambda_{i+n(j-1)} < 1 \ \forall i \Big \}.
\end{equation*}
It has been proved in prior literature that MaxWeight scheduling is \textit{throughput optimal} \cite{665071}, i.e., the corresponding Markov chain is stable for any arrival rate vector in $\mathcal{C}$.
To prove that the Markov chain is stable, one can use the Foster-Lyapunov Theorem by showing that the expected drift of a suitably chosen Lyapunov function is negative as shown in \cite{maguluri2016heavy, 665071}. The two requirements for using the Foster-Lyapunov Theorem, i.e., irreducibility and aperiodicity of the Markov chain can be obtained by using the arguments presented in \cite[Exercise 4.2]{srikant2014communication}.
Let $\mathcal{F}$ denote the part of boundary of the capacity region given by the convex hull of $\mathcal X$, i.e.,
\begin{equation*}
\mathcal F = \Big \{ \boldsymbol \nu \in \mathbb{R}_+^{n^2 } : \sum_{i=1}^n \nu_{i+n(j-1)} = 1 \ \forall j, \sum_{j=1}^n \nu_{i+n(j-1)} =1 \ \forall i \Big \}.
\end{equation*}
A switch system is in heavy traffic when the arrival rate vector $\boldsymbol \lambda$ approaches the boundary $\mathcal{F}$. For simplicity, we are going to assume that the arrival rate vector approaches the boundary through a straight line, i.e., there exist a vector $\boldsymbol \nu \in \mathcal{F}$ and a heavy traffic parameter $\epsilon \in (0,1)$ such that $\boldsymbol\lambda = (1-\epsilon)\boldsymbol \nu$. Further, we assume that none of the arrival rates is zero, which leads to the conclusion that $\nu_{\min} = \min_{ij}\nu_{i+n(j-1)} > 0$. This is also called the \textit{Completely Saturated Case} \cite{maguluri2016heavy}. Next, we look at the SSC result for the switch system.
\subsubsection{State space collapse for Input-queued switch}
\label{sec: switch_ssc}
In this section, we present some of the existing results that are necessary for the analysis of the heavy traffic distribution of the switch. Before presenting the definition of state space collapse for the switch, we present some required geometry.
Let $\mathbf B \in \{0,1\}^{n^2\times2n}$ be such that for any $1\leq i,j\leq n$,
\begin{equation*}
B_{i+n(j-1),i} =B_{i+n(j-1),n+j} = 1.
\end{equation*}
Consider the subspace $\mathcal{S} \subseteq \mathbb{R}^{n^2}$ given by,
\begin{equation*}
\mathcal{S} = \Big\{ \mathbf{y} \in \mathbb{R}^{n^2} : \exists \mathbf{w} \in \mathbb{R}^{2n} \ s.t. \ y_{i+n(j-1)} = w_{i} + w_{n+j} \Big\} = \Big\{ \mathbf{y} \in \mathbb{R}^{n^2} : \exists \mathbf{w} \in \mathbb{R}^{2n} \ s.t. \ \mathbf{y} =\mathbf B \mathbf w \Big\}.
\end{equation*}
We say that a vector $\mathbf x \in \mathbb{C}^{n^2}$ lies in the space $\mathcal{S}$ if $Re(\mathbf{x}) \in \mathcal{S}$ and $Im(\mathbf{x}) \in \mathcal{S}$. Suppose that for any vector $\mathbf x \in \mathbb{C}^{n^2}$, $\mathbf x_{\|}$ denotes the projection of $\mathbf{x}$ onto the space $\mathcal{S}$ and $\mathbf x_{\perp} = \mathbf x - \mathbf x_{\|}$. As $\mathbf x_{\|} \in \mathcal{S}$, $\exists \mathbf{w} \in \mathbb{R}^{2n}$ such that $\mathbf x_{\|} =\mathbf B \mathbf w$. In this case, we call $\mathbf w$ the lower dimensional (or $2n$-dimensional) representation of $\mathbf x_{\|}$. Note that the vector $\mathbf w$ need not be unique. A possible candidate is $\mathbf w = \mathbf{B}^{+} \mathbf x$, where $\mathbf{B}^{+}$ denotes the Moore--Penrose pseudo-inverse of $\mathbf B$. Indeed, for any $\mathbf w$ such that $\mathbf{x}_{\|} = \mathbf{B}\mathbf{w}$, the vector $\mathbf{w}' = \mathbf{w} - w\begin{bmatrix} \mathbf{1}_n \\-\mathbf{1}_n \end{bmatrix}$ also satisfies $\mathbf{x}_{\|} = \mathbf{B}\mathbf{w}'$ for any $w\in \mathbb R$, since $\mathbf{B}\begin{bmatrix} \mathbf{1}_n \\-\mathbf{1}_n \end{bmatrix} = \mathbf{0}_{n^2}$ by the structure of the matrix $\mathbf B$ (in particular, $\mathbf{B}^T\mathbf{B}$ is singular). This is because, even though we use $2n$ elements to represent a vector in $\mathcal{S}$, the dimension of $\mathcal{S}$ is $(2n-1)$, as we can fix one of the elements of $\mathbf w$ to $0$. Also, suppose the cone $\mathcal K \subset \mathcal S$ is given by,
\begin{align*}
\mathcal{K} &= \Big\{ \mathbf{y} \in \mathbb{R}^{n^2} : \exists \mathbf{r} \in \mathbb{R}^{2n}_+ \ s.t. \ y_{i+n(j-1)} = r_{i} + r_{n+j} \Big\} = \Big\{ \mathbf{y} \in \mathbb{R}^{n^2} : \exists \mathbf{r} \in \mathbb{R}^{2n}_+ \ s.t. \ \mathbf{y} =\mathbf B \mathbf r \Big\}.
\end{align*}
For any vector $\mathbf x\in \mathbb R^{n^2}$, we use $\mathbf x_{\| \mathcal K}$ to denote the projection onto the cone $\mathcal{K}$, and $\mathbf x_{\perp \mathcal K} = \mathbf x - \mathbf x_{\| \mathcal K}$. Note that as $\mathcal{K} \subset \mathcal{S}$, we get that $\|\mathbf x_{\perp}\| \leq \|\mathbf x_{\perp \mathcal K}\|$. Similar to the projection onto the space $\mathcal{S}$, the lower dimensional representation of the projection onto the cone $\mathcal{K}$ is also not unique, i.e., there can be multiple $\mathbf r$ such that $\mathbf x_{\| \mathcal K} = \mathbf B \mathbf r$. One way to ensure that the lower dimensional representation is unique is to enforce the extra condition that the smallest element of the vector $\mathbf r$ is zero, i.e., $\min_{1\leq i\leq 2n}r_i =0$.
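Both projections are easy to compute numerically: the projection onto $\mathcal S$ is a least-squares problem, while the projection onto $\mathcal K$ is a non-negative least-squares problem. A Python sketch (for $n=3$, with a random test vector chosen only for illustration):
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

n = 3
B = np.zeros((n*n, 2*n))
for i in range(n):                        # 0-based version of the index i+n(j-1)
    for j in range(n):
        B[i + n*j, i] = B[i + n*j, n + j] = 1.0
print(np.linalg.matrix_rank(B))           # 2n - 1 = 5: dimension of S

x = np.random.default_rng(0).uniform(0, 10, n*n)
x_par = B @ np.linalg.lstsq(B, x, rcond=None)[0]  # projection onto S
r, _ = nnls(B, x)                                 # min ||B r - x||, r >= 0
x_parK = B @ r                                    # projection onto the cone K
print(np.linalg.norm(x - x_par) <= np.linalg.norm(x - x_parK))  # True
\end{verbatim}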
\begin{definition}
\label{def: switch_ssc}
For the Input-queued switch as defined in Section \ref{sec: switch_model}
operating under a given scheduling algorithm, we say that the algorithm achieves \textit{state space collapse}, if for every $\theta \in \mathbb{R}$, there exists $\epsilon(\theta) >0$ such that for every $0< \epsilon \leq \epsilon(\theta)$, the steady state queue length vector satisfies,
\begin{equation*}
\mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp}\|}] \leq \mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp \mathcal K}\|}] < C^\star< \infty,
\end{equation*}
where $\mathbf{q}_{\|}$ and $\mathbf{q}_{\| \mathcal{K}}$ are the projections of $\mathbf{q}$ onto the subspace $\mathcal S$ and the cone $\mathcal K$, respectively. Also, $\mathbf{q}_{\perp} = \mathbf{q} - \mathbf{q}_{\|} $ and $\mathbf{q}_{\perp \mathcal{K}} = \mathbf{q} - \mathbf{q}_{\| \mathcal{K}}$. The expectation $\mathbb E[\cdot]$ is taken under the steady-state distribution.
Consequently, for any scheduling policy that achieves state space collapse, for every $\theta \in \mathbb R$, \[\lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp}\|}] \leq \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \theta \| \mathbf{q}_{\perp \mathcal K}\|}] < \infty.\] Furthermore, there exists a $C_r$ independent of $\epsilon$ such that,
\begin{equation}
\label{eq: switch_qperp_bound}
\mathbb{E} \Big [\|\mathbf{q}_{\perp }\|^r \Big] \leq \mathbb{E} \Big [\|\mathbf{q}_{\perp \mathcal K }\|^r \Big] \leq C_r \quad \forall r \geq 1.
\end{equation}
\end{definition}
According to Definition \ref{def: switch_ssc}, for any scheduling algorithm that achieves state space collapse, the moments of $\mathbf{q}_{\perp}$ are bounded irrespective of the heavy traffic parameter $\epsilon$. We know from \cite[Proposition 1]{maguluri2016heavy} that, in heavy traffic, the queue length scales at least at the rate of $\Omega(1/\epsilon)$.
This shows that, in heavy traffic, $\mathbf{q}_{\perp}$ is insignificant compared to $\mathbf{q}$ and so $\mathbf{q}$ is nearly equal to its projection $\mathbf{q}_{\|}$.
For the Input-queued switch, MaxWeight scheduling achieves state space collapse according to the Definition \ref{def: switch_ssc}.
The proof of this is provided in \cite[Proposition 2]{maguluri2016heavy}, which in turn uses the result given in \cite{hajek1982hitting}.
In \cite{jhunjhunwala2021low}, the authors prove that several algorithms other than MaxWeight also achieve state space collapse according to Definition \ref{def: switch_ssc}.
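State space collapse can also be observed empirically. The Python sketch below simulates a $2\times 2$ switch under MaxWeight for a few values of $\epsilon$ (Bernoulli arrivals with $\boldsymbol\nu = \frac{1}{n}\mathbf 1$; horizon and burn-in lengths are arbitrary choices): the average of $\|\mathbf q_\perp\|$ remains bounded while the average of $\|\mathbf q\|$ grows roughly as $1/\epsilon$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2
B = np.zeros((n*n, 2*n))
for i in range(n):
    for j in range(n):
        B[i + n*j, i] = B[i + n*j, n + j] = 1.0
P = B @ np.linalg.pinv(B)                  # orthogonal projector onto S
M = [np.array([1., 0., 0., 1.]),           # the two perfect matchings of a
     np.array([0., 1., 1., 0.])]           # 2x2 switch, in vector form
for eps in [0.2, 0.1, 0.05]:
    lam = (1 - eps) * np.full(n*n, 1/n)    # completely saturated arrivals
    q, perp, tot, T = np.zeros(n*n), 0.0, 0.0, 200_000
    for t in range(T):
        a = rng.binomial(1, lam)
        s = max(M, key=lambda m: m @ q)    # MaxWeight schedule
        q = np.maximum(q + a - s, 0.0)
        if t >= T // 2:                    # crude burn-in
            perp += np.linalg.norm(q - P @ q)
            tot += np.linalg.norm(q)
    print(eps, perp / (T / 2), tot / (T / 2))
\end{verbatim}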
\subsection{Results for Input-queued switch}
\label{sec: switch_results}
In this section, we present our results for the Input-queued switch. Theorem \ref{thm: switch_functional_eq} provides a functional equation that characterizes the heavy traffic distribution of the scaled queue length vector for Input-queued switch. Under the assumption that Conjecture \ref{lem: switch_uniqueness} holds true, Theorem \ref{thm: switch_dist} provides the solution of the functional equation given in Theorem \ref{thm: switch_functional_eq} under the symmetric variance condition.
\begin{theorem}
\label{thm: switch_functional_eq}
Consider the Input-queued switch as defined in Section \ref{sec: switch_model}
operating under a given scheduling algorithm that achieves state space collapse according to Definition \ref{def: switch_ssc}. Let $\Theta = \{\boldsymbol \theta \in \mathbb{C}^{n^2} : \boldsymbol \theta \in \mathcal{S}, \ Re(\mathbf B^T \boldsymbol \theta) \leq \mathbf 0_{2n}\}$.
Then, for any $\boldsymbol\theta \in \boldsymbol \Theta$, the heavy traffic scaled queue length vector satisfies,
\begin{equation}
\label{eq: switch_functional_eq}
\left( -\frac{1}{n}\langle \boldsymbol{\theta}, \mathbf{1}_{n^2} \rangle + \frac{1}{2} \langle \boldsymbol{\theta} , \boldsymbol\sigma^2 \boldsymbol{\theta} \rangle \right) L(\boldsymbol{\theta}) = - \langle \boldsymbol{\theta}, \mathbf{M}(\boldsymbol{\theta}) \rangle,
\end{equation}
where
\begin{align*}
L(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \mathbb{E}[e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle}], && M_{k}(\boldsymbol \theta) = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon} \mathbb{E} [u_{ k}e^{\epsilon \langle \boldsymbol \theta, \mathbf{q} \rangle }], \ \ \forall k \in \{1,2,\dots,n^2\}.
\end{align*}
\end{theorem}
Theorem \ref{thm: switch_functional_eq} gives a characterization of the functional equation for the input-queued switch. The functional equation is a mathematical relationship between the term $L(\boldsymbol \theta)$, which is the Laplace transform of the limiting heavy traffic distribution, and the terms $M_k(\boldsymbol{\theta})$, which intuitively denote the Laplace transform under the condition $q_k=0$ (as $u_k =1$ implies $q_k =0$).
In order to establish the functional equation presented in Theorem \ref{thm: switch_functional_eq}, the first step is to use the complex exponential as the Lyapunov function and equate its expected drift to zero in steady-state.
After that, we use the second-order approximation of the complex exponential in terms of the heavy traffic parameter $\epsilon$ and eliminate the higher order terms to get the functional equation. Here, SSC plays a key role in the technical analysis. To be more specific, due to the SSC, we only need to consider the set of $\boldsymbol \theta$ for which $Re(\boldsymbol \theta), Im(\boldsymbol \theta) \in \mathcal S$. As a consequence, we only have to consider $\mathbf{q}_{\|}$ (as $\langle \boldsymbol \theta, \mathbf{q} \rangle = \langle \boldsymbol \theta, \mathbf{q}_{\|} \rangle$ for any $\boldsymbol \theta\in \mathcal S$). This allows us to perform the technical analysis to characterize the heavy traffic distribution using the functional equation.
Also, the condition $Re(\mathbf B^T \boldsymbol \theta) \leq \mathbf 0_{2n}$ is chosen to ensure that the limiting Laplace transforms $L(\boldsymbol \theta)$ and $\mathbf M(\boldsymbol \theta)$ exist. More mathematical details on the existence of the Laplace transforms are provided in Lemma \ref{lem: switch_mgf_equivalence} in Appendix \ref{app: switch_mgf_equivalence}.
The complete proof for Theorem \ref{thm: switch_functional_eq} is presented in Appendix \ref{app: switch_functional_eq}.
After establishing the functional equation, the next step is to solve the derived functional equation to get the Laplace transform of the heavy traffic distribution of the steady-state scaled queue length vector. Solving the functional equation, in general, is not easy. One way to solve the functional equation is to
guess the solution and check whether it satisfies the functional equation or not. If it does, then the solution gives one possible candidate for the Laplace transform of the heavy traffic distribution. The next crucial step is to show that the functional equation has a unique solution. This ensures that the guessed solution is the only solution for the functional equation. For the Input-queued switch, we conjecture that the functional equation has a unique solution, as stated below.
\begin{conjecture}
\label{lem: switch_uniqueness}
Consider the Input-queued switch as defined in Section \ref{sec: switch_model}
operating under a given scheduling algorithm that achieves state space collapse according to Definition \ref{def: switch_ssc}. Then, there is a unique solution $(L(\boldsymbol \theta),\mathbf M(\boldsymbol\theta))$ to the functional equation given in Eq. \eqref{eq: switch_functional_eq} that is a valid Laplace transform of a probability distribution.
\end{conjecture}
A major challenge in solving the implicit functional equation given in Eq. \eqref{eq: switch_functional_eq} is proving that the functional equation has a unique solution. For simpler systems such as the Three-queue system or the $\mathcal{N}$-system (presented later in the paper), where the SSC occurs onto a two-dimensional subspace, we can prove that the corresponding functional equation has a unique solution using the Carleman boundary value problem \cite{litvinchuk1970generalized}. Extending that result to a functional equation with more than two variables turns out to be highly non-trivial. Next, we present Theorem \ref{thm: switch_dist}, which assumes that Conjecture \ref{lem: switch_uniqueness} holds, to solve the functional equation under the symmetric variance condition.
\begin{theorem}
\label{thm: switch_dist}
Consider the Input-queued switch as defined in Section \ref{sec: switch_model}
operating under a given scheduling algorithm that achieves state space collapse according to Definition \ref{def: switch_ssc}. Assume Conjecture \ref{lem: switch_uniqueness} holds. Suppose the variance vector $\boldsymbol \sigma^2$ is symmetric, i.e. $\boldsymbol \sigma^2 =\sigma^2 \mathbf{I}_{n^2} $. Then, the heavy traffic steady state queue length vector is given by
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \epsilon q_{i+n(j-1)} = \Upsilon_i + \Upsilon_{n+j} - 2 \Tilde{\Upsilon}, \ \ \forall i,j \in \{1,\dots,n\},
\end{equation*}
where $\{\Upsilon_1,\dots,\Upsilon_{2n}\}$ are independent exponential random variables with mean $\frac{\sigma^2}{2}$ and $\Tilde{\Upsilon} =\displaystyle \min_{1\leq k\leq 2n} \Upsilon_k$. In vector notation, this is given by
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \epsilon \mathbf{q} = \mathbf B ( \boldsymbol \Upsilon - \Tilde{\Upsilon} \mathbf 1_{2n}),
\end{equation*}
where $\boldsymbol \Upsilon $ is the vector $ \boldsymbol \Upsilon = (\Upsilon_1,\dots,\Upsilon_{2n})$.
\end{theorem}
Theorem \ref{thm: switch_dist} gives the complete characterization of the heavy traffic distribution of the Input-queued switch under the symmetric variance condition, where the heavy traffic distribution is represented using $2n$ independent exponential random variables. This suggests that the heavy traffic distribution of the switch is light-tailed even when the symmetric variance condition is not satisfied.
The idea behind the proof is to show that the Laplace transform of the limiting distribution provided in Theorem \ref{thm: switch_dist} is a solution of the functional equation (given in Theorem \ref{thm: switch_functional_eq}) when the variances of the arrival process are symmetric. Under the assumption that the functional equation has a unique solution (claimed by Conjecture \ref{lem: switch_uniqueness}), the solution provided in Theorem \ref{thm: switch_dist} is the unique solution for the heavy traffic distribution of the Input-queued switch under the symmetric variance condition. The mathematical proof of Theorem \ref{thm: switch_dist} is provided in Appendix \ref{app: switch_dist}. Recall that for the projection onto the cone $\mathcal{ K}$, we can ensure that the lower dimensional representation $\mathbf r$ is unique by adding the extra constraint that $\min_{1\leq i\leq 2n} r_i =0$. Such a condition is satisfied by the vector $\boldsymbol \Upsilon - \Tilde{\Upsilon} \mathbf 1_{2n}$. This provides an intuitive reason behind the appearance of the term $\Tilde{\Upsilon} $.
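Since the limiting law is fully explicit, it can also be sampled directly. The following Python sketch draws from $\mathbf B(\boldsymbol\Upsilon - \Tilde{\Upsilon}\mathbf 1_{2n})$ for a $2\times 2$ switch with $\sigma^2 = 1$ (an arbitrary choice) and reports the empirical mean of the scaled queue lengths.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, N = 2, 1.0, 200_000
U = rng.exponential(sigma2 / 2, size=(N, 2*n))  # Upsilon_1, ..., Upsilon_{2n}
W = U - U.min(axis=1, keepdims=True)            # Upsilon - tilde(Upsilon) 1
B = np.zeros((n*n, 2*n))
for i in range(n):
    for j in range(n):
        B[i + n*j, i] = B[i + n*j, n + j] = 1.0
Q = W @ B.T                                     # samples of lim eps q
print(Q.mean(axis=0))                           # empirical heavy-traffic means
\end{verbatim}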
\begin{corollary}
\label{cor: switch_max}
For the Input-queued switch as defined in Section \ref{sec: switch_model}, MaxWeight scheduling satisfies the functional equation given in Theorem \ref{thm: switch_functional_eq} and the heavy traffic distribution in Theorem \ref{thm: switch_dist}.
\end{corollary}
As mentioned in Section \ref{sec: switch_ssc}, MaxWeight scheduling achieves SSC according to Definition \ref{def: switch_ssc}. Now, Corollary \ref{cor: switch_max} is a direct application of Theorem \ref{thm: switch_functional_eq} and Theorem \ref{thm: switch_dist}. Similarly, the class of algorithms presented in \cite{jhunjhunwala2021low} also satisfies the results presented in Theorem \ref{thm: switch_functional_eq} and Theorem \ref{thm: switch_dist}.
Next, we reiterate the results of Theorem \ref{thm: switch_dist} for a $2\times 2$ switch to provide more clarity. With a slight abuse of notation, we use the matrix notation $q_{ij} = q_{i+n(j-1)}$; the queue length matrix and its corresponding projection are given by,
\begin{align*}
\mathbf{q} = \begin{bmatrix}
q_{11} & q_{12}\\
q_{21} & q_{22}
\end{bmatrix},
&& \mathbf{q}_{\|} = \begin{bmatrix}
w_1 & w_1\\
w_2 &w_2
\end{bmatrix} + \begin{bmatrix}
w_3 & w_4\\
w_3 &w_4
\end{bmatrix}.
\end{align*}
From the above representation, we note that $\mathbf{q}_{\|}$ is represented as the sum of two matrices: in the first matrix, the elements within each row are equal, while in the second matrix, the elements within each column are equal. Note that, even though the representation of $\mathbf{q}_{\|}$ involves four elements, $\mathbf{q}_{\|}$ lies in a three-dimensional subspace. Now, according to Theorem \ref{thm: switch_dist}, in heavy traffic and under the symmetric variance condition,
\begin{equation*}
\lim_{\epsilon \rightarrow 0} \epsilon\mathbf{q} = \begin{bmatrix}
\Upsilon_1 & \Upsilon_1\\
\Upsilon_2 &\Upsilon_2
\end{bmatrix} + \begin{bmatrix}
\Upsilon_3 & \Upsilon_4\\
\Upsilon_3 &\Upsilon_4
\end{bmatrix} - \begin{bmatrix}
\Tilde{\Upsilon} & \Tilde{\Upsilon}\\
\Tilde{\Upsilon} &\Tilde{\Upsilon}
\end{bmatrix},
\end{equation*}
where $\Upsilon_1,\Upsilon_2,\Upsilon_3$ and $\Upsilon_4$ are exponential random variables with mean $\sigma^2/2$ and $\Tilde{\Upsilon} = \min\{\Upsilon_1,\Upsilon_2,\Upsilon_3,\Upsilon_4\}$. By SSC, we know that $\epsilon\mathbf{q}_{\perp} \approx 0$ and so $\epsilon \mathbf{q} \approx \epsilon \mathbf{q}_{\|}$ in heavy traffic. This means that we can think of $\epsilon w_{i}$ as converging to $\Upsilon_i - \Tilde{\Upsilon}$ in distribution. Next, we look at a simpler system, which we call the Three-queue system.
\section*{Appendix}
\subsection*{On use of the Riemann zeta function in relativistic cosmology}
It is well-known (see e.g. \cite{mak}, page 209, (8.210), (8.211)) that the essence of relativistic cosmology lies in the so-called Friedmann
equations:
\begin{equation*}
\kappa c^2\rho=\frac{3}{R^2}\left(k c^2+R^{\prime 2}\right) , \tag{10}
\end{equation*}
\begin{equation*}
\kappa p = -\frac{2R''}{R}-\frac{R^{\prime 2}}{R^2}-\frac{kc^2}{R^2} , \tag{11}
\end{equation*}
(we do not consider the cosmological constant), where: $\kappa$ is the Newton gravitational constant, $c$ is the speed of light in vacuum,
$\rho(t)$ is the mass density, $p(t)$ is the pressure, $k=-1,0,1$, and $R(t)$ is the "radius"\ of the Universe; the prime denotes the derivative with respect to time. \\
We have a pair of equations for a triple of unknown functions $\rho(t),p(t),R(t)$. We have to add one more equation; usually it is the so-called
\emph{state equation} of the form:
\begin{equation*}
F(\rho,p)= 0 . \tag{12}
\end{equation*}
Often used state equation reads
\begin{equation*}
p=0 , \tag{13}
\end{equation*}
that describes a universe filled with matter with vanishing pressure. \\
One also studies a universe in which
\begin{equation*}
p=\frac{\rho c^2}{3} , \tag{14}
\end{equation*}
(see e.g. \cite{ll}, pages 387, 388). This corresponds to the case of maximal pressure $p$ at given matter density $\rho$. \\
Let us remark that for physical reasons\footnote{Let us remark that this work was written before 1974.} we require
\begin{equation*}
\rho>0, \quad p\geq 0 . \tag{15}
\end{equation*}
From the mathematical point of view, one constructs the models of the Universe in the following way: equation (12) is postulated and subsequently the pair
of equations (10), (11) is solved for the two remaining unknowns. \\
We will construct (supposing the Riemann conjecture holds true) an infinite set of models of the Universe by prescribing the radius $R$ as a
function of time; the conditions (15) will be fulfilled in some intervals of time. \\
We suppose:
\begin{equation*}
R(t)=|Z(t)|, \quad k=+1 , \tag{16}
\end{equation*}
this means the radius $R$ of the Universe is related to the function $\zeta(1/2+it)$ and spherical geometry is assumed. \\
In this case using formulae (1) and (10), (11) we have
\begin{equation*}
\frac{\kappa c^2}{3}\rho(t)=\frac{c^2}{Z^2(t)}+\left\{\frac{Z'(t)}{Z(t)}\right\}^2 , \tag{17}
\end{equation*}
\begin{equation*}
\frac{\kappa}{2}p(t)=\sum_\gamma \frac{1}{(t-\gamma)^2}-\frac{3}{2}\left\{\frac{Z'(t)}{Z(t)}\right\}^2-\frac{c^2}{2}\frac{1}{Z^2(t)}+
\mathcal{O}\left(\frac{1}{t}\right). \tag{18}
\end{equation*}
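For a concrete feel of formula (17), the right-hand side can be evaluated numerically with the Riemann--Siegel $Z$ function. A Python/mpmath sketch (units with $c=\kappa=1$ are chosen purely for illustration):
\begin{verbatim}
import mpmath as mp

def rho(t):                       # (kappa c^2/3) rho = c^2/Z^2 + (Z'/Z)^2, c = 1
    Z = mp.siegelz(t)             # Riemann-Siegel Z(t)
    dZ = mp.diff(mp.siegelz, t)   # numerical derivative Z'(t)
    return 3 * (1 / Z**2 + (dZ / Z)**2)

for t in [20, 25, 30]:
    print(t, rho(t))              # positive for every t > 0, as claimed
\end{verbatim}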
We will find the intervals of time $t$ in which the conditions (15) are fulfilled. \\
The first one of them - $\rho(t)>0$ - holds true for every $t>0$. \\
We will find the intervals in which also $p(t)\geq 0$ holds true. \\
On the one hand, using the Littlewood bound (see \cite{titch}, page 223),
\begin{displaymath}
\gamma''-\gamma'<\frac{A}{\ln\ln\ln \gamma'} ,
\end{displaymath}
we have
\begin{equation*}
\sum_\gamma \frac{1}{(t-\gamma)^2}>\frac{1}{(\gamma''-\gamma')^2}>A_1(\ln\ln\ln t_0)^2 . \tag{19}
\end{equation*}
On the other hand, following the Littlewood--Titchmarsh $\Omega$-theorem, we have that there is a subsequence $\{\tilde{t}_0\}$ of the
sequence $\{ t_0\}$ such that
\begin{equation*}
|Z(\tilde{t}_0)|>A_2\exp(\ln^\beta \tilde{t}_0), \quad \beta<\frac{1}{2} . \tag{20}
\end{equation*}
Let us remark that
\begin{equation*}
Z'(\tilde{t}_0)=0 . \tag{21}
\end{equation*}
Using (19), (20) and (21), we have that there is a sequence $\{ \delta(\tilde{t}_0)\}$, $\delta(\tilde{t}_0)>0$, such that:
\begin{equation*}
\tilde{\gamma}'<\tilde{t}_0-\delta(\tilde{t}_0)<\tilde{t}_0+\delta(\tilde{t}_0)<\tilde{\gamma}^{\prime\prime} , \tag{22}
\end{equation*}
\begin{equation*}
p(t)\geq 0, \quad t\in [\tilde{t}_0-\delta(\tilde{t}_0),\tilde{t}_0+\delta(\tilde{t}_0)],
\end{equation*}
hold true. Finally, this defines the above-mentioned infinite set of models of the Universe.
\section*{Nomenclature}
The main notation used throughout the text is stated below for quick reference. Matrices are defined in bold and upper-case, vectors are indicated in bold and lower-case, superscript ${\cdot}^{\prime}$ means \textit{observed}, and symbol $\widehat{\cdot}$ refers to an \textit{estimated} parameter. Other symbols are defined as required.
\subsection*{Sets and Indices}
\begin{ldescription}{$xxxxx$}
\item [$\mathcal{B}$] Set of blocks, indexed by $b = 1 \ldots n_B$.
\item [$\mathcal{D}$] Set of days, indexed by $d$.
\item [$\mathcal{H}$] Set of hours, indexed by $h = 1 \ldots n_H$.
\item [$\Omega^{p/a/i}$] Set of physical- and technical-related parameters for the prototype/aggregate/individual building, respectively.
\item [$\Phi^{p/a}$] Set of model-related parameters for the prototype/aggregate building, respectively.
\item [$i$] Index for individual building.
\item [$r$] Index for regressor.
\end{ldescription}
\subsection*{Parameters}
\begin{ldescription}{$xxxxx$}
\item [$a_1$] Energy dissipation rate.
\item [$a_2$] Parameter defining the product of $\eta \cdot R$.
\item [$C$] Thermal capacitance.
\item [$n_B$] Number of power blocks.
\item [$n_H$] Number of hours in a day.
\item [$P$] Rated power of a thermostatically-controlled load.
\item [$R$] Thermal resistance.
\item [$UA$] Heat transfer coefficient between the room air and the ambient.
\item [$\iota$] Parameter used in the regularization technique.
\item [$\eta$] Coefficient of performance of a thermostatically-controlled load.
\item [$\theta_{0, d}$] Initial indoor air temperature in day $d$.
\item [$\theta^r$] User-specified temperature set-point.
\item [$\hbar$] Heterogeneity factor.
\item [$\delta$] Discretization period.
\item [$\Delta$] Half of the temperature deadband.
\item [$\boldsymbol{c^s}$] Vector of penalty costs for slack variables, where the $h$th component is $c^s_h$.
\item [$\boldsymbol{\lambda}_{d}$] Electricity price in day $d$, where the $h$th component is $\lambda_{h, d}$.
\item [$\boldsymbol{\theta}_{d}^{\boldsymbol{amb}}$] Outdoor air temperature in day $d$, where the $h$th component is $\theta^{amb}_{h, d}$.
\item [$\boldsymbol{A}, \boldsymbol{B}$] Matrices associated with the matricial form of the building's discrete dynamics.
\item [$\boldsymbol{Z}_d$] Matrix of regressors in day $d$, where each component is expressed as $z_{h,r,d}$ at hour $h$ and for regressor $r$.
\item [$\boldsymbol{\Lambda}$] The inverse of matrix $\boldsymbol{A}$.
\end{ldescription}
\subsection*{Other symbols}
We remark that most of these symbols, except for the acronyms MAE and RMSE, take on the role of parameters in the forecasting model, while acting as variables in the (bilevel) inverse optimization problem.
\begin{ldescription}{$xxxxx$}
\item [$\beta$] Scaling factor of the homothetic transformation.
\item [$\boldsymbol{\nu}_b$] Intercept for the marginal utility of block $b$, where the $h$th component is $\nu_{b,h}$.
\item [$\text{MAE}$] Mean absolute error.
\item [$\text{RMSE}$] Root mean square error.
\item [$\boldsymbol{c}^{p/a}_d$] Vector representation of the building's initial conditions in day $d$ for the prototype/aggregate building, where the $h$th component is $c^{p/a}_{h, d}$.
\item [$\overline{\boldsymbol{e}}_{b,d}$] Length of the power block $b$ in day $d$, where the $h$th component is $\overline{e}_{h,b,d}$.
\item [$\boldsymbol{p}^{p/a}_{d}$] Power of the thermostatically-controlled load in day $d$ for the prototype/aggregate building, where the $h$th component is $p^{p/a}_{h, d}$.
\item [$\boldsymbol{m}_{b,d}$] Marginal utility of block $b$ and day $d$, where the $h$th component is $m_{b,h,d}$.
\item [$\boldsymbol{p}^{a}_{b,d}$] Power of the thermostatically-controlled load in block $b$ and day $d$ for the aggregate building, where the $h$th component is $p^{a}_{h,b,d}$.
\item [$\underline{\boldsymbol{p}}^{p/a}_{d}$] Minimum power of the thermostatically-controlled load in day $d$ for the prototype/aggregate building, where the $h$th component is $\underline{p}^{p/a}_{h, d}$.
\item [$\overline{\boldsymbol{p}}^{p/a}_{d}$] Maximum power of the thermostatically-controlled load in day $d$ for the prototype/aggregate building, where the $h$th component is $\overline{p}^{p/a}_{h, d}$.
\item [$\boldsymbol{s}^a_{d}$] Slack variable for the temperature-related constraints of the aggregate building in day $d$, where the $h$th component is $s^a_{h, d}$.
\item [$\boldsymbol{t}^{p/a}_d$] Vector representation of the component of the building's discrete dynamics associated with the ambient temperature in day $d$ for the prototype/aggregate building, where the $h$th component is $t^{p/a}_{h, d}$.
\item [$\boldsymbol{\theta}^{p/a}_{d}$] Indoor air temperature in day $d$ for the prototype/aggregate building, where the $h$th component is $\theta^{p/a}_{h, d}$.
\item [$\underline{\boldsymbol{\theta}}^{p/a}_{d}$] Minimum indoor air temperature in day $d$ for the prototype/aggregate building, where the $h$th component is $\underline{\theta}^{p/a}_{h, d}$.
\item [$\overline{\boldsymbol{\theta}}^{p/a}_{d}$] Maximum indoor air temperature in day $d$ for the prototype/aggregate building, where the $h$th component is $\overline{\theta}^{p/a}_{h, d}$.
\item [$\boldsymbol{\tau}$] Translation factor of the homothetic transformation, where the $h$th component is $\tau_h$.
\item [$\boldsymbol{\rho}$] Vector of coefficients relative to the affine dependence of marginal utility on regressors, where the $r$th component is $\rho_r$.
\end{ldescription}
\section{Introduction}
\label{sec:introduction}
Distributed energy resources (DERs), such as distributed generators, electric vehicles, energy batteries or demand response programs, are constantly growing every year and play a crucial role in the provision of multiple benefits to the power system \cite{pinson2014benefits}. In this paper, we focus on a recently popular DER, namely, an ensemble of smart buildings \cite{junker2018characterizing}. This pool of buildings may efficiently utilize its thermal capacity while keeping the indoor air temperature at user-defined comfort levels in order to provide some degree of flexibility to the power system, by shifting load in time or reducing the peak demand. In addition, this flexibility may allow its participation in a day-ahead electricity market or could even be viewed as a non-wire alternative to capacity expansion. However, as with any load in the electricity system, its prediction is key to fully exploit the benefits that it can bring to the power system operation and planning \cite{shahidehpour2003market}.
Recently, machine learning has become a popular forecasting technique in many scientific fields such as agriculture \cite{liakos2018machine}, medicine \cite{shkolyar2019augmented}, and even power systems \cite{ibrahim2020machine}. Ibrahim \emph{et al.} \cite{ibrahim2020machine} give a thorough overview of machine learning techniques applied for electricity load and price prediction, renewable generation forecasting, fault detection, failure analysis, among other applications for smart grids. Lago \emph{et al.} \cite{lago2018forecasting} compare the performance of various black-box models including traditional statistical techniques and four deep learning approaches to predict day-ahead electricity prices. This study demonstrates that machine learning techniques outperform commonly used statistical models for forecasting day-ahead electricity prices. However, as pointed out in \cite{lago2018forecasting}, machine learning techniques require the tuning of model-specific hyperparameters (e.g. number of layers, dropout, activation function, among others).
This paper is focused on price-responsive load forecasting, which has also been studied in the technical literature by using a plethora of black-box models \cite{corradi2012controlling, jeong2020short, yun2008rbf}. For instance, authors in \cite{corradi2012controlling} propose the use of statistical models such as auto-regressive models with exogenous inputs (also known as ARX) to forecast the dynamics of the price-elasticity of price-responsive consumers. They demonstrate the applicability of the model by using data from price-responsive heating systems from the Olympic Peninsula project \cite{hammerstrom2008pacific}. Authors in \cite{jeong2020short} apply a logistic mixture vector auto-regressive model to forecast daily electric curves of buildings with a pattern history. According to their results, the proposed statistical model outperforms other machine learning techniques (e.g. support vector machines) for the analyzed case studies. Reference \cite{yun2008rbf} demonstrates how neural networks and fuzzy systems can be combined to predict real-time electrical loads. However, black-box models are purely data-driven and, as a consequence, the physics of the load to be predicted are ignored. In other words, these kinds of models would neglect the technical and physical constraints governing DERs, e.g. thermostatically-controlled loads or electric vehicles. Hence, black-box models lack interpretability and explainability \cite{pintelas2020grey, arendt2018comparative}. For this reason, in this work, we advocate for grey-box models, which typically rely on a reduced physical model that is estimated or complemented with data, this way featuring a balance between interpretability and scalability.
Grey-box models have been put forward in \cite{lu2015modeling} and \cite{Saez-Gallego2018} to predict the price-response of a pool of buildings. Reference \cite{lu2015modeling} takes into account the physical model of buildings to forecast their energy consumption; however, its application is limited to predicting individual buildings' loads instead of forecasting the aggregate load of a pool of buildings. Recently, \cite{Saez-Gallego2018} proposes a novel inverse optimization (IO) approach to statistically forecast the aggregate load of a pool of price-responsive buildings in an hour-ahead setting. In that paper, the authors characterize the response of the load to the price by means of an optimization problem. The limitations of the model proposed in \cite{Saez-Gallego2018} are threefold: (i) the methodology is based on heuristics, (ii) the optimization models are tailored to single-step forecasts, and thus their use for multi-step forecasting is inappropriate, and, as a consequence, (iii) the building thermal dynamics are disregarded in the forecasting process. In this paper, we also apply IO to forecast the aggregate response to the electricity price of a pool of buildings. However, unlike \cite{Saez-Gallego2018}, our goal is to predict it in a \emph{day-ahead} framework while also incorporating the building thermal dynamics into the optimization process. To do that, we put forward a novel IO approach that makes use of a homothetic transformation to conveniently reduce the complexity of the target forecasting model.
The goal of an IO problem is to infer the optimization model parameters given a set of observed decision variables or measurements collected by an observer. Recent advances on IO can be found in \cite{aswani2018, esfahani2018data, bertsimas2015data,chan2020inverse,ghobadi2020inferring}, and references therein. Aswani \emph{et al.} \cite{aswani2018} devise a statistically consistent IO methodology based on the transformation of a bilevel problem into a single-level equivalent when data is corrupted by noise. The authors show the performance of the proposed IO methodology to estimate cost vector parameters compared to existing heuristics with synthetic and empirical data. In a more general mathematical context, Esfahani \emph{et al.} \cite{esfahani2018data} prescribe a data-driven approach based on distributionally robust IO to tackle observed measurements with imperfect information. From a different angle, Bertsimas \emph{et al.} \cite{bertsimas2015data} apply IO to equilibrium models by using an inverse variational inequality formulation and demonstrate its predictive performance in two applications related to demand and congestion function estimation. Recently, authors in \cite{chan2020inverse} estimate the feasible region of linear and robust linear optimization problems by using IO with noise-free data, whereas authors in \cite{ghobadi2020inferring} impute a linear feasible region with a general IO methodology so that some noise-free observations become feasible and others become optimal provided that the cost vector is known.
IO is used in various applications in the technical literature \cite{zhou2011designing,ruiz2013,risanger2020inverse,saez2016data,lu2018data,contreras2018tractable,kovacs2020inverse}. The authors in \cite{zhou2011designing} apply IO in the context of generation expansion planning to find an effective incentive policy and those in \cite{ruiz2013} estimate rival marginal offer prices for a strategic producer in a network-constrained day-ahead market by using IO. The benefits and barriers of an inverse equilibrium problem are discussed in \cite{risanger2020inverse} for two applications of a Nash-Cournot game between power producers in electricity markets. In this inverse equilibrium problem, the aim is to estimate all parameters related to the original equilibrium model.
Within the context of smart buildings, IO has also been applied to characterize price-responsive consumers in \cite{saez2016data,lu2018data}. More specifically, bilevel programming is used in \cite{saez2016data} to construct an IO framework whereby the market bid parameters of a pool of price-responsive households (e.g., step-wise marginal utility functions, maximum load pick-up and drop-off rates, and maximum and minimum power
consumption bounds) are inferred. Although \cite{saez2016data} accounts for a refined model of the aggregate load of the households by including the ramping rates, it still neglects the inertia governing the households' thermal consumption. IO is also applied in \cite{lu2018data} to estimate the demand response characteristics of price-responsive consumers, as similarly done in \cite{saez2016data}, and thus sharing the same shortcoming. The authors in \cite{contreras2018tractable} describe a data-driven method to empirically estimate a robust feasible region of a pool of buildings. Authors in \cite{kovacs2020inverse} infer the parameters of electricity consumer models with multiple deferrable loads from historic data by using IO. The proposed IO model is reformulated to a quadratically constrained quadratic program. The resulting model is solved by means of successive linear programming. Although the results are promising due to the prediction performance for single loads, the convergence properties of the successive linear programming approach are arguable, according to the authors. However, the thermal dynamics of the buildings are once again ignored in the estimation procedure in these works \cite{saez2016data,lu2018data,contreras2018tractable,kovacs2020inverse}.
One of the main contributions of this paper is the application of a geometric approach, i.e. we resort to the concept of \textit{homothety}, to characterize the price-response of the ensemble of buildings for forecasting purposes. A homothety is a spatial transformation of an affine space. Hence, we assume that the feasible region of a pool of buildings can be represented as a homothet of a chosen \textit{prototype} building by means of a dilation factor and a translation vector, namely the homothetic parameters. The homothetic representation of an aggregate of buildings has been first proposed in \cite{zhao2017geometric}. Specifically, they put forward the modeling of the aggregate flexibility of a pool of thermostatically-controlled loads by using a geometric approach. The thrust of that paper was to derive sufficient and necessary virtual battery models that can be approximated by homothets of a virtual battery prototype. The authors demonstrated the benefits of such a homothetic representation in the provision of flexibility for regulation services. To the best of our knowledge, this is the first time that a homothetic representation of an aggregate load has been applied for forecasting purposes. Consequently, we only rely on the estimation of the homothetic parameters to shape the aggregate feasible region of the pool, thus considerably reducing the computational complexity of the estimation algorithm and avoiding an undesirable overfitting. This work contributes to the technical literature as follows:
\begin{itemize}
\item From a modeling perspective, we propose a novel day-ahead forecasting technique for a pool of buildings via homothetic inverse optimization. The aggregate price-response is characterized by a set of marginal utility curves and a homothet of a \textit{prototype} building (see the illustrative sketch after this list). As novel distinctive features compared to the work in \cite{Saez-Gallego2018}, this geometric approach endogenously accounts for the aggregate building thermal dynamics and allows us to solely rely on the estimation of two homothetic parameters and a set of marginal utility curves. We then apply IO to infer them through a given forecasting problem\footnote{Also known as forward problem in the jargon of inverse optimization.} \textit{mimicking} the price-response of the pool. Our approach, therefore, drastically reduces the complexity of the price-response model to be statistically estimated, while still capturing the thermal dynamics of the ensemble of buildings, unlike the works in \cite{Saez-Gallego2018,saez2016data,lu2018data,contreras2018tractable,kovacs2020inverse}.
\item The application of IO gives rise to a bilevel programming problem. We then propose the transformation of this bilevel program into a regularized nonlinear model which can be readily solved by nonlinear commercial solvers. To avoid meaningless local optimal solutions, we initialize this regularized model with the solution given by an efficient heuristic procedure.
\item The proposed forecasting technique has been compared with existing methodologies emphasizing its benefits for different degrees of heterogeneity among buildings.
\end{itemize}
The contribution of this paper is focused on the forecasting of the aggregate load of an ensemble of buildings by using a geometric approach, which is quite interpretable, in a day-ahead setting. Apart from the forecast of the aggregate price-response, the proposed approach allows for the derivation of a bidding curve that can be used in day-ahead electricity markets. Besides, the geometric interpretation of the ensemble of smart buildings by means of a prototype building may be used for accurately representing their power trajectory in operation problems at distribution level. Obviously, some deviations could be observed in intra-day applications. However, the proposed approach may be easily adapted for shorter time periods by adequately adjusting the length of the optimization horizon.
The paper is outlined as follows.
Section \ref{sec:derivation_fwm} provides the derivation of the feasible region for a pool of buildings by using a homothet of a building prototype. Section \ref{sec:inverse_opt_methodology} presents the proposed IO methodology based on a bilevel program. Section \ref{sec:comparison_methodologies} describes the forecasting methodologies used to benchmark our proposal. Section \ref{sec:case_study} provides insightful results. Conclusions are duly drawn in Section \ref{sec:conclusion}. Finally, \ref{sec:two_step_estimation} provides the mathematical formulations for the proposed two-step heuristic estimation procedure used for initialization of the proposed single-level nonlinear program.
\section{Derivation of the Forecasting Model}
\label{sec:derivation_fwm}
In Section \ref{sec:building_prototype}, we first present the feasible region of a prototype building which can be representative of the ensemble of buildings. Subsequently, in Section \ref{sec:aggregate_building_model}, we provide the feasible region of an aggregate building which is built upon the prototype building by using the concept of \emph{homothet}. Finally, in Section \ref{sec:forecasting_model}, we derive the forecasting model.
\subsection{Building Prototype}
\label{sec:building_prototype}
We consider that the prototype building is the one representing the \textit{average} behavior of those in the pool. To do that, we model the prototype building as a single thermostatically-controlled load characterized by a thermal resistance, $R = 1/UA$, where $UA$ is the heat transfer coefficient between the room air and the ambient air, and the thermal capacitance of the room air, $C$. In addition, we assume that the building is equipped with a cooling system with a rated power $P$ and a coefficient of performance $\eta$. Bearing in mind both the temperature comfort bounds set by the building's occupants and the technical power limits of the cooling device, the feasible region of the prototype building for $n_H$ time periods within a day can be mathematically expressed as:
\begin{subequations}
\label{feasible_region_prototype}
\begin{align}
& \theta_h^p = a_1 \theta_{h-1}^p + (1-a_1)\left[ \theta^{amb}_{h} - a_2 p_{h}^p\right], \quad \forall h \in \mathcal{H} \label{fr_one_1} \\
& \underline{\theta}_h^p \leq \theta_h^p \leq \overline{\theta}_h^p, \quad \forall h \in \mathcal{H} \label{fr_one_2} \\
& \underline{p}_h^p \leq p_{h}^p \leq \overline{p}_h^p, \quad \forall h \in \mathcal{H}, \label{fr_one_3}
\end{align}
\end{subequations}
\noindent where $\underline{\theta}_h^p = \theta^{r} - \Delta$ and $\overline{\theta}_h^p = \theta^{r} + \Delta$, with $\theta^{r}$ the user-specified temperature set-point and $\Delta$ half of the temperature deadband; and $a_1 = 1 - \dfrac{\delta}{RC}$ and $a_2 = \eta R$, with $\delta$ being the discretization period. Besides, $\underline{p}_h^p = 0$ and $\overline{p}_h^p = P$. To sum up, the set of physical and technical parameters of the prototype building is $\Omega^p = \{R, C, \theta^{r}, \Delta, \eta, \theta_0, P\}$. These values are assumed to be the average of the values for the same set of parameters corresponding to each building of the ensemble. In this section, we have dropped index $d$ for the sake of clarity. Note that, in equation \eqref{fr_one_1} and throughout the paper, we assume a linear system for modeling the building thermal dynamics of thermostatically-controlled loads as opposed to a nonlinear switching model. This assumption allows us to simplify the analysis of the aggregate behavior of an ensemble of buildings and, when the population of buildings is large, the aggregate power demand of the nonlinear switching models can be accurately approximated by a linear system model \cite{zhao2017geometric}.
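For illustration, the recursion \eqref{fr_one_1} and the bounds \eqref{fr_one_2}--\eqref{fr_one_3} can be checked numerically. The following Python sketch is not part of the estimation methodology; the parameter values mirror the prototype building of the case study, while the ambient-temperature and cooling profiles are hypothetical.
\begin{verbatim}
import numpy as np

# Minimal sketch of the prototype thermal model; the ambient temperature
# and the cooling profile p are assumed (hypothetical) inputs.
R, C, eta, P = 2.0, 10.0, 2.5, 5.4      # degC/kW, kWh/degC, -, kW
theta_r, Delta, theta_0 = 20.0, 1.0, 22.5
delta = 1.0                             # discretization period (h)
a1, a2 = 1.0 - delta / (R * C), eta * R

n_H = 24
theta_amb = 25.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_H))
p = np.full(n_H, 2.0)                   # hypothetical cooling power (kW)

theta, prev = np.empty(n_H), theta_0
for h in range(n_H):                    # thermal dynamics recursion
    theta[h] = a1 * prev + (1.0 - a1) * (theta_amb[h] - a2 * p[h])
    prev = theta[h]

# comfort and power bounds
feasible = (np.all((theta_r - Delta <= theta) & (theta <= theta_r + Delta))
            and np.all((0.0 <= p) & (p <= P)))
\end{verbatim}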
Conveniently and following the notation in \cite{zhao2017geometric}, we can recast the thermal model \eqref{feasible_region_prototype} in matrix form:
\begin{subequations}
\label{feasible_region_prototype_matricial}
\begin{align}
& \underline{\boldsymbol{p}}^p \leq \boldsymbol{p}^p \leq \overline{\boldsymbol{p}}^p \label{vb_1} \\
& \underline{\boldsymbol{\theta}}^p \leq \boldsymbol{\Lambda}\boldsymbol{B}\boldsymbol{p}^p + \boldsymbol{\Lambda}\boldsymbol{c}^p + \boldsymbol{\Lambda}\boldsymbol{t}^p \leq \overline{\boldsymbol{\theta}}^p , \label{vb_2}
\end{align}
\end{subequations}
\noindent where $\boldsymbol{\Lambda}$ is the inverse of $\boldsymbol{A}$; $\boldsymbol{A} = \boldsymbol{I}_{n_H} + diag(-a_1; -1)$, wherein $\boldsymbol{I}_{n_H}$ is the identity matrix of dimension $n_H$ and $diag(-a_1; -1)$ is a matrix of dimension $n_H$ with values $-a_1$ on the lower subdiagonal; $\boldsymbol{B} = - a_2 \left( 1-a_1\right) \boldsymbol{I}_{n_H}$; $\boldsymbol{c}^p$ is the vector of initial conditions $[a_1\theta_0, 0, ..., 0]^T$, where superscript $T$ denotes the transpose operator; and $\boldsymbol{t}^p$ is the vector related to the ambient temperature, i.e. $\left( 1-a_1\right)\boldsymbol{\theta}^{amb}$. We denote the set of model-related parameters for the building prototype as $\Phi^p = \{\boldsymbol{c}^p, \underline{\boldsymbol{p}}^p, \overline{\boldsymbol{p}}^p, \boldsymbol{t}^p, \underline{\boldsymbol{\theta}}^p, \overline{\boldsymbol{\theta}}^p\}$.
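As a sanity check, the matrices in \eqref{feasible_region_prototype_matricial} can be assembled directly from the scalar parameters. The sketch below reuses the variables of the previous snippet and verifies that the matrix form reproduces the recursion \eqref{fr_one_1}.
\begin{verbatim}
import numpy as np

# Build A, Lambda = A^{-1}, B, c^p and t^p, then check the matrix form
# against the time-domain loop of the previous sketch.
A = np.eye(n_H) - a1 * np.eye(n_H, k=-1)    # I + diag(-a1; -1)
Lam = np.linalg.inv(A)                      # Lambda
B = -a2 * (1.0 - a1) * np.eye(n_H)
c_p = np.zeros(n_H); c_p[0] = a1 * theta_0  # initial-condition vector
t_p = (1.0 - a1) * theta_amb                # ambient-temperature vector

theta_vec = Lam @ (B @ p + c_p + t_p)
assert np.allclose(theta_vec, theta)        # same indoor temperatures
\end{verbatim}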
\subsection{Aggregate Building Model}
\label{sec:aggregate_building_model}
We can approximate the feasible region of the aggregation of buildings, for each day $d$, as another thermal building model, that is,
\begin{subequations}
\label{aggregate_building_model}
\begin{align}
& \underline{\boldsymbol{p}}^a_d \leq \boldsymbol{p}^a_d \leq \overline{\boldsymbol{p}}^a_d \label{const1_hvb} \\
& \underline{\boldsymbol{\theta}}^a_d \leq \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_d + \boldsymbol{\Lambda} \boldsymbol{c}^a_d + \boldsymbol{\Lambda}\boldsymbol{t}^a_d \leq \overline{\boldsymbol{\theta}}^a_d. \label{const2_hvb}
\end{align}
\end{subequations}
However, the set of model-related parameters of the pool of buildings associated with \eqref{aggregate_building_model}, i.e., $\Phi^a = \{\boldsymbol{c}^a_d, \underline{\boldsymbol{p}}^a_d, \overline{\boldsymbol{p}}^a_d, \boldsymbol{t}^a_d, \underline{\boldsymbol{\theta}}^a_d, \overline{\boldsymbol{\theta}}^a_d\}$, is unknown. One possibility would be to infer all these parameters from observations of the aggregate power of the pool of buildings. However, this would most likely be a lost cause (due to unobservability issues), lead to overfitting, and render the estimation algorithm unstable. To overcome such difficulty, we assume that the aggregate feasible region is a \textit{homothet} of the prototype building, i.e., the power trajectory of the aggregate of buildings for each day $d$ can be expressed in terms of the power trajectory of the prototype building as follows:
\begin{align}
\boldsymbol{p}^a_d = \beta \boldsymbol{p}^p_d + \boldsymbol{\tau}, \label{definition_homothet}
\end{align}
\noindent where $\beta > 0$ is a scaling factor, and $\boldsymbol{\tau}$ is a vector of translation factors. Expression \eqref{definition_homothet} is the formal definition of a homothet for the aggregate power in $\mathbb{R}^{n_H}$, i.e. vectors $\boldsymbol{p}^a_d$, $\boldsymbol{p}^p_d$, and $\boldsymbol{\tau}$ contain $n_H$ components.
By using the definition of homothet \eqref{definition_homothet} and the prototype feasible region \eqref{feasible_region_prototype_matricial}, we can recast the feasible region of the aggregation in terms of the homothetic parameters $\beta$ and $\boldsymbol{\tau}$:
\begin{subequations}
\label{homothetic_feasible_region}
\begin{align}
& \beta \underline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} \leq \boldsymbol{p}^a_d \leq \beta \overline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} \label{const1_hvb2} \\
& \beta \underline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} \leq \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_d + \boldsymbol{\Lambda} \beta \left(\boldsymbol{c}^p_d \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{t}^p_d\right) \leq \beta \overline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau}. \label{const2_hvb2}
\end{align}
\end{subequations}
The feasible region of the homothetic aggregate building \eqref{homothetic_feasible_region} depends entirely on the homothetic parameters and the set of model-related parameters of the prototype building $\Phi^p$ (which are given), thus dramatically reducing the complexity of the model to be estimated and avoiding the undesirable overfitting effect. To the best of the authors' knowledge, this is the first time in the literature that such a geometric approach is used to drastically simplify the task of forecasting the price-responsive aggregate power of a pool of buildings via IO, as explained below.
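For given homothetic parameters, the region \eqref{homothetic_feasible_region} is readily evaluated from the prototype data. The sketch below, which reuses the arrays of the previous snippets, tests whether a candidate aggregate trajectory belongs to the region; the values of $\beta$ and $\boldsymbol{\tau}$ are hypothetical, since in practice they are estimated via IO.
\begin{verbatim}
import numpy as np

# Membership test for the homothetic feasible region; beta and tau are
# assumed values, to be replaced by the IO estimates.
beta, tau = 80.0, np.full(n_H, 5.0)

p_lo = beta * 0.0 + tau                 # beta * p_min^p + tau
p_hi = beta * P + tau                   # beta * p_max^p + tau
th_lo = beta * (theta_r - Delta) + Lam @ B @ tau
th_hi = beta * (theta_r + Delta) + Lam @ B @ tau

def in_region(p_a):
    mid = Lam @ B @ p_a + beta * (Lam @ (c_p + t_p))
    return (np.all((p_lo <= p_a) & (p_a <= p_hi))
            and np.all((th_lo <= mid) & (mid <= th_hi)))
\end{verbatim}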
\subsection{Forecasting Model}
\label{sec:forecasting_model}
Let us assume that the feasible region of the ensemble of buildings is a homothetic representation of a prototype building and that the utility function of the pool is a step-wise price function with $n_B$ blocks. Under these assumptions and given the electricity prices, the forecasting model for each day $d$ can be mathematically expressed as:
\begin{subequations}
\label{forecasting_model}
\begin{align}
&\max_{\boldsymbol{p}^a_{b, d}, \boldsymbol{s}^a_d} \quad \sum_{b \in \mathcal{B}}\bigl(\boldsymbol{m}_{b, d}^T - \boldsymbol{\lambda}^T_d \bigr) \boldsymbol{p}^{a}_{b, d} - \boldsymbol{c}^{s, T} \boldsymbol{s}^{a}_d \label{fo_fwp} \\
& \text{subject to:} \notag\\
& \beta \underline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} \leq \sum_{b \in \mathcal{B}} \boldsymbol{p}^a_{b, d} \leq \beta\overline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} : (\underline{\boldsymbol{\epsilon}}_d, \overline{\boldsymbol{\epsilon}}_d) \label{const1_fwp} \\
& \beta \underline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} - \boldsymbol{s}^a_d \leq \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b, d} \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{\Lambda} \beta \left(\boldsymbol{c}^p_d \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{t}^p_d\right) : (\underline{\boldsymbol{\kappa}}_d) \label{const2_fwp} \\
& \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b, d} \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{\Lambda} \beta \left(\boldsymbol{c}^p_d \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{t}^p_d\right) \leq \beta \overline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} + \boldsymbol{s}^a_d : (\overline{\boldsymbol{\kappa}}_d) \label{const3_fwp} \\
& 0 \leq \boldsymbol{p}^a_{b, d} \leq \overline{\boldsymbol{e}}_{b, d}: (\underline{\boldsymbol{\phi}}_{b, d}, \overline{\boldsymbol{\phi}}_{b, d}), \quad \forall b \in \mathcal{B} \label{const4_fwp}\\
& \boldsymbol{s}^a_d \geq 0: (\boldsymbol{\varphi}_d), \label{const5_fwp}
\end{align}
\end{subequations}
\noindent where $\boldsymbol{m}_{b, d}$, $\boldsymbol{\lambda}_d$, and $\boldsymbol{c}^s$ are the vectors of marginal utilities, electricity prices, and penalty costs. The dual variables are shown in parentheses after a colon next to the corresponding constraints.
The objective function \eqref{fo_fwp} aims to maximize the welfare of the pool of buildings while minimizing the slack variables associated with the evolution of the building thermal dynamics. Constraints \eqref{const1_fwp}--\eqref{const3_fwp} are almost identical to the homothetic representation of the aggregate feasible region \eqref{homothetic_feasible_region}. Without loss of generality, we have incorporated some degree of flexibility into the forecasting model by: (i) modeling step-wise marginal utility functions to adequately learn the price-response of the pool of buildings, and (ii) including the slack variable in the temperature-related constraints \eqref{const2_fwp}--\eqref{const3_fwp} to capture the infeasibilities that the (approximate) modeling of the building thermal dynamics may cause. Constraints \eqref{const4_fwp} impose lower and upper bounds on the aggregate power per block $b$, where $\overline{\boldsymbol{e}}_{b, d}$ is the width of power block $b$ in day $d$. Finally, constraints \eqref{const5_fwp} set the non-negative character of the slack variables.
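For concreteness, a minimal Pyomo sketch of the linear program \eqref{forecasting_model} for one day follows; the function name and the data layout (plain arrays, with $\boldsymbol{B}$ diagonal) are ours, and any LP solver such as CPLEX can be attached to the model.
\begin{verbatim}
from pyomo.environ import (ConcreteModel, RangeSet, Var, Objective,
                           Constraint, NonNegativeReals, maximize)

# Sketch of the forecasting LP (one day); all inputs are numpy arrays
# or scalars following the notation of the text.
def forecasting_lp(n_H, n_B, m, lam, cs, beta, tau,
                   p_lo, p_hi, th_lo, th_hi, Lam, B, c_p, t_p):
    md = ConcreteModel()
    md.H, md.Blk = RangeSet(0, n_H - 1), RangeSet(0, n_B - 1)
    md.p = Var(md.Blk, md.H, within=NonNegativeReals)  # p^a_{b,d}
    md.s = Var(md.H, within=NonNegativeReals)          # slacks s^a_d
    e_bar = [(beta * p_hi[h] + tau[h]) / n_B for h in range(n_H)]

    md.obj = Objective(expr=sum((m[b][h] - lam[h]) * md.p[b, h]
                                for b in md.Blk for h in md.H)
                            - sum(cs[h] * md.s[h] for h in md.H),
                       sense=maximize)

    def tot(h):          # aggregate power at hour h
        return sum(md.p[b, h] for b in md.Blk)

    def temp(h):         # h-th row of Lam B p + beta Lam (c + t)
        return (sum(Lam[h, k] * B[k, k] * tot(k) for k in range(n_H))
                + beta * sum(Lam[h, k] * (c_p[k] + t_p[k])
                             for k in range(n_H)))

    def lbt(h):          # h-th row of Lam B tau
        return sum(Lam[h, k] * B[k, k] * tau[k] for k in range(n_H))

    md.pl = Constraint(md.H, rule=lambda md, h:
                       beta * p_lo[h] + tau[h] <= tot(h))
    md.pu = Constraint(md.H, rule=lambda md, h:
                       tot(h) <= beta * p_hi[h] + tau[h])
    md.tl = Constraint(md.H, rule=lambda md, h:
                       beta * th_lo[h] + lbt(h) - md.s[h] <= temp(h))
    md.tu = Constraint(md.H, rule=lambda md, h:
                       temp(h) <= beta * th_hi[h] + lbt(h) + md.s[h])
    md.eb = Constraint(md.Blk, md.H,
                       rule=lambda md, b, h: md.p[b, h] <= e_bar[h])
    return md
\end{verbatim}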
As previously mentioned, the feasible region of the ensemble is parameterized in terms of the homothetic parameters $\beta$ and $\boldsymbol{\tau}$. Therefore, in this problem, the vector of marginal utilities $\boldsymbol{m}_{b, d}$, as well as the homothetic parameters $\beta$ and $\boldsymbol{\tau}$ are parameters to be estimated through IO, as explained in Section \ref{sec:inverse_opt_methodology}.
\section{Inverse Optimization Methodology}
\label{sec:inverse_opt_methodology}
In this section, we describe the proposed IO methodology to infer the parameters $\boldsymbol{m}_{b, d}$, $\beta$, and $\boldsymbol{\tau}$ of the forecasting model \eqref{forecasting_model}. This methodology is based on bilevel programming, which has been widely used in the technical literature to model hierarchical optimization problems \cite{deSouza2020optimal,cornelusse2019community,fernandez2015bilevel}. First, we present our bilevel program and its transformation into a parametric (or regularized) nonlinear single-level equivalent program that can be solved by commercial solvers.
Then, we thoroughly explain the steps of the proposed approach.
\begin{figure}[h]
\centering
\begin{tikzpicture}
[node distance = 2.4cm, auto,font=\footnotesize,
every node/.style={node distance=2.8cm},
force/.style={rectangle, draw, fill=black!1, inner sep=12pt, text width=6cm, text badly centered, minimum height=1.5cm, minimum width=8.9cm, font=\bfseries\footnotesize\sffamily}]
\node [force] (UL) {Upper-level problem \eqref{fo_bilevel}--\eqref{const2_bilevel} \\ (Minimize the MAE of the aggregate power)};
\node [force, below of=UL] (LL) {Lower-level problems \eqref{const3_bilevel} for each day $d$ \\ (Forecasting model)};
\draw [->,thick] (-1,-0.80) -- (-1,-2.0) node [midway, left] {$\boldsymbol{m}_{b,d},\beta, \boldsymbol{\tau}$} ;
\draw [->,thick] (1.7,-2.0) -- (1.7,-0.8) node [midway, right] {$\boldsymbol{p}_{b,d}^{a}$} ;
\end{tikzpicture}
\caption{A conceptual diagram of the interfaces of the bilevel problem.}
\label{fig:sketch}
\end{figure}
\subsection{Bilevel Problem}
\label{subsec:bilevel_problem}
The bilevel problem consists of two optimization levels, as depicted in Fig. \ref{fig:sketch}. In the upper-level problem, we seek to minimize the mean absolute error (MAE) of the aggregate power of the ensemble of buildings. This level provides the marginal utilities $\boldsymbol{m}_{b,d}$ as well as the homothetic parameters $\beta$ and $\boldsymbol{\tau}$ needed to build the homothetic representation in the lower-level problem. In contrast, in the lower level, we solve the maximization of the welfare of the pool of buildings and the minimization of the violations related to the building thermal dynamics for each day of the training set. In turn, the lower level passes the values of the optimal aggregate power on to the upper-level problem.
Let us denote the vector of observed aggregate power in day $d$ as $\boldsymbol{p}^{a^{\prime}}_{d}$. Therefore, the bilevel problem can be formulated as follows:
\begin{subequations}
\label{bilevel_formulation}
\begin{align}
&\min_{\boldsymbol{m}_{b,d},\boldsymbol{p}_{b,d}^{a},\boldsymbol{s}_{d}^{a},\beta, \boldsymbol{\tau}, \boldsymbol{\nu}_b, \boldsymbol{\rho}} \quad \sum_{d \in \mathcal{D}} \Bigl|\Bigl| \sum_{b \in \mathcal{B}} \boldsymbol{p}_{b, d}^{a} - \boldsymbol{p}_{d}^{a^{\prime}}\Bigr|\Bigr|_1 \label{fo_bilevel} \\
& \text{subject to:} \notag\\
& \boldsymbol{m}_{b, d} = \boldsymbol{\nu}_b + \boldsymbol{Z}_{d}\boldsymbol{\rho} , \quad \forall b \in \mathcal{B}, d \in \mathcal{D} \label{const1_bilevel} \\
& \boldsymbol{\nu}_b \geq \boldsymbol{\nu}_{b+1}, \quad \forall b < n_B \label{const2_bilevel}\\
& \text{Lower-Level Problem \eqref{forecasting_model},} \quad \forall d \in \mathcal{D}. \label{const3_bilevel}
\end{align}
\end{subequations}
On the one hand, the upper-level problem \eqref{fo_bilevel}-\eqref{const2_bilevel} minimizes the absolute error of the estimated aggregate power of the pool with respect to the observed one, as given by \eqref{fo_bilevel}. Constraints \eqref{const1_bilevel} impose linear regression functions, with $\boldsymbol{\nu}_b$ and $\boldsymbol{\rho}$ as the coefficients to be estimated, so that the marginal utilities are related to the regressors. We assume that vector $\boldsymbol{\nu}_b$ is time invariant, i.e. all components $\nu_{b, h}$ are identical. Constraints \eqref{const2_bilevel} set the marginal utilities to be monotonically non-increasing, as commonly done in electricity markets \cite{omie}. The lower-level problems \eqref{const3_bilevel} are essentially the forecasting problem \eqref{forecasting_model} for each day $d$. These lower levels are solely parameterized in terms of the marginal utilities $\boldsymbol{m}_{b, d}$, as well as the homothetic parameters $\beta$ and $\boldsymbol{\tau}$, and are thus linear programs.
Therefore, we can apply the Karush-Kuhn-Tucker necessary optimality conditions to the lower level and apply the regularization described in \cite{scholtes2001convergence, pineda2018efficiently} to transform the original bilevel model \eqref{bilevel_formulation} into the following nonlinear single-level equivalent:
\begin{subequations}
\label{single_level_equivalent}
\begin{align}
&\min_{\Xi^{NRP}} \quad \sum_{d \in \mathcal{D}} \Bigl|\Bigl| \sum_{b \in \mathcal{B}} \boldsymbol{p}_{b, d}^{a} - \boldsymbol{p}_{d}^{a^{\prime}}\Bigr|\Bigr|_1 \label{fo_sle}\\
& \text{subject to:} \notag\\
& \boldsymbol{m}_{b, d} = \boldsymbol{\nu}_b + \boldsymbol{Z}_{d}\boldsymbol{\rho} , \quad \forall b \in \mathcal{B}, d \in \mathcal{D} \label{const1_sle} \\
& \boldsymbol{\nu}_b \geq \boldsymbol{\nu}_{b+1}, \quad \forall b < n_B \label{const1bis_sle}\\
& \beta \underline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} \leq \sum_{b \in \mathcal{B}} \boldsymbol{p}^a_{b, d} \leq \beta\overline{\boldsymbol{p}}^p_d + \boldsymbol{\tau}, \quad \forall d \in \mathcal{D} \label{const2_sle} \\
& \beta \underline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} - \boldsymbol{s}^a_d \leq \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b, d} \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{\Lambda} \beta \left(\boldsymbol{c}^p_d \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{t}^p_d\right), \quad \forall d \in \mathcal{D} \label{const3_sle} \\
& \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b, d} \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{\Lambda} \beta \left(\boldsymbol{c}^p_d \hspace{-0.1cm} + \hspace{-0.1cm} \boldsymbol{t}^p_d\right) \leq \beta \overline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} + \boldsymbol{s}^a_d, \quad \forall d \in \mathcal{D} \label{const4_sle} \\
& 0 \leq \boldsymbol{p}^a_{b,d} \leq \overline{\boldsymbol{e}}_{b,d}, \quad \forall b \in \mathcal{B}, d \in \mathcal{D} \label{const5_sle}\\
& \boldsymbol{s}^a_d \geq 0, \quad \forall d \in \mathcal{D} \label{const6_sle}\\
&-\underline{\boldsymbol{\epsilon}}_d + \overline{\boldsymbol{\epsilon}}_d - \boldsymbol{B}^T_d \boldsymbol{\Lambda}^T_d \underline{\boldsymbol{\kappa}}_d +\boldsymbol{B}^T_d \boldsymbol{\Lambda}^T_d \overline{\boldsymbol{\kappa}}_d - \underline{\boldsymbol{\phi}}_{b,d} + \overline{\boldsymbol{\phi}}_{b,d} \notag \\
&= \boldsymbol{m}_{b,d} - \boldsymbol{\lambda}_d, \quad \forall b \in \mathcal{B}, d \in \mathcal{D} \label{const7_sle} \\
& \underline{\boldsymbol{\kappa}}_d + \overline{\boldsymbol{\kappa}}_d + \boldsymbol{\varphi}_d= \boldsymbol{c}^s, \quad \forall d \in \mathcal{D} \label{const8_sle} \\
& \underline{\boldsymbol{\epsilon}}_d, \overline{\boldsymbol{\epsilon}}_d, \underline{\boldsymbol{\kappa}}_d, \overline{\boldsymbol{\kappa}}_d, \boldsymbol{\varphi}_{d} \geq 0, \quad \forall d \in \mathcal{D} \label{const9_sle}\\
& \underline{\boldsymbol{\phi}}_{b,d}, \overline{\boldsymbol{\phi}}_{b,d} \geq 0, \quad \forall b \in \mathcal{B}, d \in \mathcal{D} \label{const10_sle}
\end{align}
\begin{align}
& \sum_{d \in \mathcal{D}} \underline{\boldsymbol{\epsilon}}_d^T \Bigl( \sum_{b \in \mathcal{B}}\boldsymbol{p}^a_{b,d} - \beta \underline{\boldsymbol{p}}^p_d - \boldsymbol{\tau} \Bigr) + \sum_{d \in \mathcal{D}} \overline{\boldsymbol{\epsilon}}_d^T \Bigl( \beta \overline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} - \sum_{b \in \mathcal{B}}\boldsymbol{p}^a_{b,d} \Bigr) \notag\\
&+ \sum_{d \in \mathcal{D}} \underline{\boldsymbol{\kappa}}_d^T \Bigl( \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b,d} + \boldsymbol{\Lambda}_d \beta \boldsymbol{c}^p_d + \boldsymbol{\Lambda}_d \beta \boldsymbol{t}^p_d - \beta \underline{\boldsymbol{\theta}}^p_d - \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} + \boldsymbol{s}_d^a \Bigr) \notag\\
&+ \sum_{d \in \mathcal{D}} \overline{\boldsymbol{\kappa}}_d^T \Bigl( \beta \overline{\boldsymbol{\theta}}^p_d + \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{\tau} - \sum_{b \in \mathcal{B}} \boldsymbol{\Lambda} \boldsymbol{B} \boldsymbol{p}^a_{b,d} - \boldsymbol{\Lambda}_d \beta \boldsymbol{c}^p_d + \boldsymbol{\Lambda}_d \beta \boldsymbol{t}^p_d + \boldsymbol{s}_d^a \Bigr) \notag\\
&+ \sum_{d \in \mathcal{D}} \sum_{b \in \mathcal{B}} \Bigl[ \overline{\boldsymbol{\phi}}_{b,d}^T \Bigl( \overline{\boldsymbol{e}}_{b,d} - \boldsymbol{p}^a_{b,d} \Bigr) + \underline{\boldsymbol{\phi}}_{b,d}^T \boldsymbol{p}^a_{b,d} \Bigr] + \sum_{d \in \mathcal{D}} \boldsymbol{\varphi}_{d}^T \boldsymbol{s}^a_{d} \leq \iota, \label{const11_sle}
\end{align}
\end{subequations}
\noindent where the set of decision variables is $\Xi^{NRP} = \{ \boldsymbol{m}_{b,d}, \boldsymbol{p}_{b, d}^{a}, \boldsymbol{s}_d^a, \boldsymbol{\nu}_b, \boldsymbol{\rho}, \underline{\boldsymbol{\epsilon}}_d, \overline{\boldsymbol{\epsilon}}_d, \underline{\boldsymbol{\kappa}}_d, \overline{\boldsymbol{\kappa}}_d,$ $\underline{\boldsymbol{\phi}}_{b,d}, \overline{\boldsymbol{\phi}}_{b,d}, \boldsymbol{\varphi}_d, \beta, \boldsymbol{\tau} \}$. For the sake of simplicity, we assume blocks of identical length, i.e., $\overline{\boldsymbol{e}}_{b,d} = \left(\beta\overline{\boldsymbol{p}}^p_d + \boldsymbol{\tau} \right)/n_B$ for block $b$ and day $d$.
Expressions \eqref{fo_sle}--\eqref{const1bis_sle} represent the upper-level problem \eqref{fo_bilevel}--\eqref{const2_bilevel} while expressions \eqref{const2_sle}--\eqref{const11_sle} represent the regularized Karush–Kuhn–Tucker optimality conditions associated with the lower-level problems \eqref{forecasting_model}.
In problem \eqref{single_level_equivalent}, we basically relax the sum of all complementary slackness conditions in \eqref{const11_sle} by means of the parameter $\iota > 0$. When this parameter is sufficiently small, we can speed up the search for a locally optimal solution by a nonlinear solver.
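The role of $\iota$ can be pictured as a single inner product between dual values and primal constraint slacks: instead of forcing this product to zero, constraint \eqref{const11_sle} only bounds it by $\iota$. The following schematic check uses our own (hypothetical) array layout.
\begin{verbatim}
import numpy as np

# Aggregated complementary-slackness residual; 'duals' and 'slacks' are
# matching flat arrays of nonnegative dual values and primal slacks.
def complementarity_residual(duals, slacks):
    return float(np.dot(duals, slacks))

# iota = 0 recovers the exact KKT system (ill-posed for NLP solvers);
# a small iota > 0 yields a tractable regularized NLP.
\end{verbatim}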
\subsection{Steps of the Proposed Approach}
\label{subsec:steps_proposed_approach}
A common shortcoming of any nonlinear program is its sensitivity to the initial search point. In order to avoid meaningless local optima, we propose the use of an efficient heuristic procedure in which two convex programs are sequentially run to infer the marginal utilities $\boldsymbol{m}_{b, d}$ and the homothetic parameters $\beta$ and $\boldsymbol{\tau}$. The result of this procedure can be utilized as a \textit{proxy} of the IO problem \eqref{bilevel_formulation}, and thus can be used to yield more interpretable local optimal solutions from the regularized nonlinear problem \eqref{single_level_equivalent}. The proposed heuristic procedure is built upon the one put forward in \cite{Saez-Gallego2018}, but has been substantially modified to account for the building thermal dynamics. For the sake of completeness, this procedure is fully described in \ref{sec:two_step_estimation}.
Next, we list the steps of the proposed forecasting technique, which we denote as \textit{rnp} (a schematic implementation outline is sketched right after the list):
\begin{enumerate}
\item We first solve the two-step heuristic estimation process described in \ref{sec:two_step_estimation} for a training set. This gives us a suitable \textit{proxy} for the solution of the original bilevel problem \eqref{bilevel_formulation}. From this procedure, we obtain the marginal utilities $\boldsymbol{m}_{b,d}$ and the corresponding estimates $\widehat{\nu}_b$ and $\widehat{\boldsymbol{\rho}}$, as well as the homothetic parameters $\widehat{\beta}$ and $\widehat{\boldsymbol{\tau}}$.
\item We then run the forecasting model \eqref{forecasting_model} for the training set to compute the value of the in-sample aggregate power $\boldsymbol{p}^a_{b,d}$, slack variable $\boldsymbol{s}^a_d$, and the dual variables.
\item Afterwards, we run the regularized nonlinear program \eqref{single_level_equivalent} with fixed homothetic parameters $\widehat{\beta}$ and $\widehat{\boldsymbol{\tau}}$. In addition, we take the solution from the forecasting model \eqref{forecasting_model} evaluated in the training set (previous step) as initialization. Here, we essentially \textit{re-optimize} the marginal utility curves with the aim of improving the solution given by the heuristics.
\item Finally, the forecasting model \eqref{forecasting_model} is built with the homothetic parameters $\widehat{\beta}$ and $\widehat{\boldsymbol{\tau}}$ and the estimates $\widehat{\nu}_b$ and $\widehat{\boldsymbol{\rho}}$ for the marginal utility curves resulting from step 3 above.
\end{enumerate}
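Schematically, the four steps can be orchestrated as below; the function names are placeholders for the models described above, not an existing API.
\begin{verbatim}
# Schematic driver for the rnp procedure (function names are placeholders).
def rnp_estimate(train_set, iota=1e-4):
    # Step 1: two-step heuristic -> proxy solution
    m, nu, rho, beta, tau = two_step_heuristic(train_set)
    # Step 2: evaluate the forecasting LP in-sample for a warm start
    start = solve_forecasting_lp(train_set, m, beta, tau)
    # Step 3: regularized NLP with beta and tau fixed, warm-started
    nu, rho = solve_regularized_nlp(train_set, beta, tau,
                                    init=start, iota=iota)
    # Step 4: the final forecaster uses beta, tau, nu and rho
    return beta, tau, nu, rho
\end{verbatim}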
The parameters associated with the forecasting model \eqref{forecasting_model}, namely the marginal utilities as well as the homothetic parameters, can be periodically re-estimated with a new training data set in order to capture potential non-stationary patterns in the observed aggregate load.
\section{Comparison Methodologies}
\label{sec:comparison_methodologies}
We compare the forecasting capabilities of the proposed approach against four benchmarks: (i) the nonlinear problem \eqref{single_level_equivalent} with $\beta$ and $\boldsymbol{\tau}$ free, without any initialization of the decision variables, and with $\iota = 0$, denoted as \textit{np w/o init}; (ii) a simpler two-step estimation procedure taken from \cite{Saez-Gallego2018} and denoted as \textit{s2s}; (iii) an AutoRegressive Integrated Moving Average model with eXogenous variables (\textit{arimax}); and (iv) a persistence model denoted as \textit{naive}. To the best of the authors' knowledge, \emph{arimax} models stand as the state-of-the-art black-box techniques to forecast the price-response of an aggregate of buildings \cite{corradi2012controlling}, as pointed out in Section \ref{sec:introduction}.
The benefits of all models are compared by analyzing two error metrics on the aggregate power of the ensemble of buildings: the MAE and the root mean square error (RMSE) on a test data set.
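For reference, both metrics are computed on the hourly aggregate power of the test set; a minimal implementation reads:
\begin{verbatim}
import numpy as np

# Error metrics on the test-set aggregate power (p_hat: forecast,
# p_obs: observed), both hourly vectors.
def mae(p_hat, p_obs):
    return float(np.mean(np.abs(p_hat - p_obs)))

def rmse(p_hat, p_obs):
    return float(np.sqrt(np.mean((p_hat - p_obs) ** 2)))
\end{verbatim}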
The forecasting problem associated with the two-step procedure \textit{s2s} is driven by the maximization of the welfare of the pool of buildings subject solely to the power bounds. In this benchmark, the indoor temperature bounds are ignored, thus overlooking the effect of the building thermal dynamics. Furthermore, in \textit{s2s}, the marginal utilities and the power bounds are inferred by successively running two linear programs so that the RMSE is minimized in a validation data set. The interested reader is referred to \cite{Saez-Gallego2018} for a detailed description of this methodology.
The \textit{arimax} model has been implemented in Python \cite{python} via the SARIMAX class of the package \textit{statsmodels}. We have set the maximum number of iterations to 1000 and the stopping rule is based on the Akaike information criterion~(AIC).
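A sketch of such a fit is given below; the ARIMA order and the regressor matrices are placeholders, since the actual model is selected by AIC.
\begin{verbatim}
import statsmodels.api as sm

# Sketch of the arimax benchmark; p_train is the observed aggregate
# power and Z_train/Z_test hold the exogenous regressors (e.g. ambient-
# temperature lags). The order (2, 1, 1) is a placeholder.
model = sm.tsa.SARIMAX(endog=p_train, exog=Z_train, order=(2, 1, 1))
res = model.fit(maxiter=1000, disp=False)
p_fcst = res.forecast(steps=24, exog=Z_test)   # next-day forecast
\end{verbatim}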
The forecast values of the aggregate power in day $d$ are equal to the observed values in the previous day $d-1$ for the \textit{naive} model. The forecast error of this model is indicative of how hard predicting the demand of the pool of buildings is.
\section{Case Study}
\label{sec:case_study}
We aim to learn the aggregate power of a population of 100 heterogeneous buildings. We first summarize the process to synthetically generate the data set. Then, we present the input data for testing the forecast capabilities. Subsequently, we discuss the results obtained with the proposed approach \textit{rnp} and the benchmarks. Finally, we also discuss the computational complexity of the proposed approach.
\subsection{Data Generation for a Pool of Buildings}
\label{subsec:data_generation_pool}
We assume that the consumption of each building $i$ for each day $d$ is driven by the following optimization problem:
\begin{subequations}
\label{data_generation_model}
\begin{align}
& \min_{p_{h}, s_{h}, \theta_{h}} \quad \sum_{h \in \mathcal{H}} \bigl( p_{h} \lambda_h + \varrho s_{h} \bigr) \label{obj_datagen}\\
& \theta_{h} = a_1 \theta_{h-1} + (1-a_1)\left[ \theta^{amb}_{h} - a_2 p_{h}\right], \quad \forall h \in \mathcal{H} \label{const1_datagen} \\
& - s_{h} + \underline{\theta}_{h} \leq \theta_{h} \leq \overline{\theta}_{h} + s_{h}, \quad \forall h \in \mathcal{H} \label{const2_datagen} \\
& 0 \leq p_{h} \leq \overline{p}_{h}, \quad \forall h \in \mathcal{H} \label{const3_datagen} \\
& s_{h} \geq 0, \quad \forall h \in \mathcal{H}, \label{const4_datagen}
\end{align}
\end{subequations}
\noindent where $\varrho$ represents a penalty cost related to the violation of the temperature-related constraints. Each building aims to minimize its electricity and penalty costs, as in \eqref{obj_datagen}, while satisfying the building thermal dynamics \eqref{const1_datagen}, the temperature comfort bounds \eqref{const2_datagen}, and the power bounds of the cooling device \eqref{const3_datagen}. Slack variables are declared non-negative in \eqref{const4_datagen}.
The technical parameters $\Omega^p$ for the prototype building are shown in Table~\ref{tab:data_prototype}. As done in \cite{zhao2017geometric}, the model parameters $\Omega^i$ for each building $i$ of the pool are assumed to be uniformly distributed based on a factor $\hbar$ modeling the degree of heterogeneity. For instance, the samples for the thermal capacitance $C^i$ are drawn from a uniform distribution in the interval $\bigl[ \bigl(1 - \hbar\bigr) C^p , \bigl( 1 + \hbar \bigr) C^p \bigr]$, where $C^p$ is the thermal capacitance of the prototype building. The discretization step $\delta$ is assumed to be one hour and the penalty cost $\varrho$ is set to 0.01 \euro$/^\circ$C$\cdot$h for all buildings. Ambient temperature, electricity prices, and the building technical parameters $\Omega^i$ are given in \cite{Fernandez-Blanco2020}, for the sake of reproducibility.
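The sampling of the heterogeneous pool can be sketched as follows; the random seed and the dictionary layout are ours.
\begin{verbatim}
import numpy as np

# Each technical parameter of building i is drawn uniformly within
# +-hbar around the corresponding prototype value.
rng = np.random.default_rng(0)
hbar, n_bld = 0.75, 100
proto = {'C': 10.0, 'R': 2.0, 'P': 5.4, 'eta': 2.5,
         'theta_r': 20.0, 'Delta': 1.0, 'theta_0': 22.5}
pool = [{k: rng.uniform((1.0 - hbar) * v, (1.0 + hbar) * v)
         for k, v in proto.items()} for _ in range(n_bld)]
# Each building's hourly load then solves the cost-minimization LP
# above, and the observed aggregate power is the sum over the pool.
\end{verbatim}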
\begin{table}[h]
\caption{Technical Parameters $\Omega^p$ for the Prototype Building}
\label{tab:data_prototype}
\centering
\begin{tabular}{cccccccc}
\hline
$C$ (kWh$/^\circ$C) & 10 && $\eta$ & 2.5 && $\Delta$ ($^\circ$C) & 1 \\
$R$ ($^\circ$C/kW) & 2 && $\theta^r$ ($^\circ$C) & 20 && $\theta_0$ ($^\circ$C) & 22.5 \\
$P$ (kW) & 5.4 && & && & \\
\hline
\end{tabular}
\end{table}
\subsection{Input Data for Testing the Forecasting Models}
\label{subsec:data}
We run simulations for 1872 hours (78 days) by using model \eqref{data_generation_model} for two different values of the heterogeneity factor: (i) $\hbar = 0.1$ (low heterogeneity among buildings), and (ii) $\hbar = 0.75$ (high heterogeneity among buildings). To avoid undesirable border effects, we disregard the results from the first day of the simulation. The sizes for the training, validation, and test sets are 35, 35, and 7 days, respectively, in chronological order. For each case, the aggregate power and the initial indoor air temperature per day can be found in \cite{Fernandez-Blanco2020}. Table \ref{tab:statistics_aggregate_power} summarizes some statistics on the aggregate power for both cases. The higher the degree of heterogeneity among buildings is, the smoother the power curve becomes. In other words, 10\% of heterogeneity leads to load synchronization with a maximum power peak of 541.0 kW and 61.8\% of periods where the buildings' load is 0. Conversely, 75\% of heterogeneity causes a lower peak, 218.4 kW, and a more distributed load. Assuming that the ambient temperature is perfectly forecast, we consider five regressors to estimate the marginal utility curves, namely the ambient temperature at hours $h-2$, $h-1$, $h$, $h+1$, and $h+2$, which are also reported in \cite{Fernandez-Blanco2020}. Finally, we consider that $\boldsymbol{c}^s$ is large enough, i.e., $\boldsymbol{c}^s=1$ for all time periods.
\begin{table}[h]
\caption{Statistics on the Aggregate Power}
\label{tab:statistics_aggregate_power}
\centering
\begin{tabular}{ccc}
\cline{2-3}
& $\hbar = 0.1$ & $\hbar = 0.75$ \\
\hline
Maximum (kW) & 541.0 & 218.4 \\
Mean (kW) & 64.0 & 42.0 \\
Total (MW) & 118.2 & 77.6 \\
\# hours without consumption (\%) & 61.8 & 0.0 \\
\hline
\end{tabular}
\end{table}
The simulations have been performed on a Windows-based computer with four CPUs clocking at 1.8 GHz and 8 GB of RAM. For the linear programs, we use CPLEX 12.8 \cite{Cplex} under Pyomo 3.7.3 \cite{Pyomo}. For the model $rnp$, we use the nonlinear solver CONOPT \cite{conopt} connecting to the NEOS server \cite{czyzyk_et_al_1998}.
\subsection{Results}
\label{subsec:results}
We analyze the impact of the degree of heterogeneity of the pool of buildings on the forecasting capabilities of the proposed technique. Besides, for the models \textit{rnp}, \textit{np w/o init}, and \textit{s2s}, we further study the behavior of the models when considering either (i) $n_B=1$ power block or (ii) $n_B=6$ power blocks. Table \ref{tab:error_metrics} provides the forecast error metrics, namely RMSE and MAE, for all models outlined in Section~\ref{sec:comparison_methodologies} and the aforementioned cases. In this table, we highlight the best results in bold.
\begin{table}[h]
\caption{Error Metrics -- Comparison of Models}
\label{tab:error_metrics}
\centering
\begin{tabular}{cc@{\hskip3pt}c@{\hskip6pt}c@{\hskip3pt}cc@{\hskip3pt}c@{\hskip3pt}c@{\hskip3pt}c}
\hline
\multirow{3}{*}{Model} & \multicolumn{4}{c}{$\hbar = 0.1$} & \multicolumn{4}{c}{$\hbar = 0.75$} \\
\cline{2-9}
& \multicolumn{2}{c}{$n_B = 1$} & \multicolumn{2}{c}{$n_B = 6$} & \multicolumn{2}{c}{$n_B = 1$} & \multicolumn{2}{c}{$n_B = 6$} \\
\cline{2-9}
& RMSE & MAE& RMSE & MAE & RMSE & MAE& RMSE & MAE \\
\hline
\textit{rnp} & \textbf{106.7} & \textbf{52.7} & \textbf{103.7} & \textbf{52.5} & \textbf{31.3} & \textbf{22.5} & \textbf{25.3} & \textbf{16.9} \\
\textit{np w/o init} & 165.3 & 76.3 & 165.3 & 76.3 & 35.9 & 22.9 & - & - \\
\textit{s2s} & 132.8 & 87.5 & 133.0 & 88.9 & 38.4 & 24.0 & 30.3 & 26.0 \\
\textit{arimax} & 161.8 & 108.3 & 161.8 & 108.3 & 31.6 & 23.0 & 31.6 & 23.0 \\
\textit{naive} & 177.5 & 90.4 & 177.5 & 90.4 & 36.9 & 24.2 & 36.9 & 24.2 \\
\hline
\end{tabular}
\end{table}
First, we discuss the results for a heterogeneity factor $\hbar = 0.1$ when considering a single power block, i.e. $n_B=1$. In this setup, the proposed method \textit{rnp} leads to the best forecasting performance in terms of RMSE and MAE, which are reduced by 39.9\% and 41.7\% compared to the \textit{naive} model.
Solving the model \textit{np w/o init}, i.e. without fixing the homothetic parameters, without using any initialization, and with $\iota=0$, raises the RMSE and MAE by 54.9\% and 44.8\%, in that order, with respect to those obtained with the proposed model \textit{rnp}. In Fig. \ref{fig:forecast_power_methods_het01_NB1}, we represent the forecast power for all benchmarks as well as the observed power of the first day of the test set. Unlike the method \textit{np w/o init}, we can observe that \textit{rnp} closely follows the observed power curve and is able to predict the peak periods thanks to the initialization.
The nonlinear model \textit{np w/o init} converges to the local optimal solution with $\beta = 0$, which seems to be an \textit{attractive} solution due to the nature of this mathematical problem. The reason behind this outcome lies in the definition of homothet \eqref{definition_homothet}. $\beta = 0$ implies a constant objective function in the lower-level problem \eqref{const3_bilevel} and a feasible region that boils down to the singleton $\boldsymbol{p}^a_d=\boldsymbol{\tau}, \forall d \in \mathcal{D}$, according to the definitions mentioned above. Therefore, the upper level \eqref{fo_bilevel}--\eqref{const2_bilevel} basically seeks the vector $\boldsymbol{\tau}$ minimizing the MAE over the training data set.
Both models \textit{s2s} and \textit{arimax} make substantially higher forecasting errors than the proposed technique \textit{rnp}, i.e. RMSE increases by 24.5\% and 51.6\%, in that order, whereas the respective increase in MAE is 66.0\% and 105.5\%. As seen in Fig. \ref{fig:forecast_power_methods_het01_NB1}, the method \textit{arimax} may work better for smoother processes and, as expected, it overlooks the irregularities of the aggregate power pattern. Method \textit{s2s} tends to adapt to the peak periods better than other benchmarks, although it is not successful in identifying them. The reason for those poor forecasts is that both models disregard the effect of the building thermal dynamics in the forecasting process. Finally, the \textit{naive} model performs worse than any other technique in terms of RMSE on the test set; however, its performance for the first day is not as poor as for the other days of the test set (see Fig.~\ref{fig:forecast_power_methods_het01_NB1}).
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{aggregate_power_h01_NB1.pdf}}
\vspace{-0.4cm}
\caption{Forecast and observed aggregate power for the first day of the test set with a heterogeneity factor of $0.1$ and $n_B = 1$.}
\label{fig:forecast_power_methods_het01_NB1}
\end{figure}
For the case of $\hbar = 0.1$, when increasing the number of power blocks to 6, the proposed method \textit{rnp} refines the solution achieved with only one block, i.e., RMSE and MAE decrease by 2.8\% and 0.4\%, in that order. This is an indication that there is a small degree of sensitivity of the power to the price, which is captured by means of the marginal utilities in the proposed forecasting model.
Fig. \ref{fig:forecast_power_methods_het075_NB1} illustrates the forecast and observed aggregate power for all methods for the first day of the test set with $\hbar=0.75$ and $n_B = 1$. We can see that now the power forecasts of all benchmarks are more alike because the aggregate power becomes smoother over time in the high-heterogeneity case. Therefore, we resort to Table \ref{tab:error_metrics} to quantify the forecast error in terms of the error metrics on the test set.
Due to the smoothness of the aggregate power, the \textit{naive} model provides better accuracy compared to the results with a low heterogeneity factor, i.e. RMSE = 36.9 and MAE = 24.2. In terms of RMSE, the proposed technique \textit{rnp} exhibits the best forecasting performance with $n_B = 1$, which results in improvements of 15.2\%, 18.5\%, 12.8\%, and 0.9\% over the error metrics attained by the \textit{naive}, \textit{s2s}, \textit{np w/o init}, and \textit{arimax} models, in that order. However, this improvement over the \textit{naive} model (15.2\% in RMSE) is substantially lower than when the buildings are more alike, where the reduction reaches 39.9\% in RMSE.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{aggregate_power_h075_NB1.pdf}}
\vspace{-0.4cm}
\caption{Forecast and observed aggregate power for the first day of the test set with a heterogeneity factor of $0.75$ and $n_B = 1$.}
\label{fig:forecast_power_methods_het075_NB1}
\end{figure}
In addition, for the case of high heterogeneity, the forecasting performance of the proposed technique considerably improves when the number of blocks increases to $n_B=6$. Specifically, there is a reduction of 19.2\% in RMSE and 24.9\% in MAE over the solution with only one block, whereas this reduction is just 2.8\% and 0.4\%, respectively, for $\hbar = 0.1$. We show, for the first day of the test set, the power forecast provided by \textit{rnp} in the high-heterogeneity case when considering either $n_B=1$ or $n_B=6$ in Fig. \ref{fig:forecast_power_methods_het075_model_rnp}. Some forecast values are improved due to the power-price sensitivity captured by the increasing number of blocks and, therefore, they are close to their respective observed values.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.6]{aggregate_power_h075_rnp.pdf}}
\vspace{-0.4cm}
\caption{Forecast and observed aggregate power for the model \textit{rnp} for a heterogeneity factor of $0.75$ with both $n_B = 1$ and $n_B = 6$.}
\label{fig:forecast_power_methods_het075_model_rnp}
\end{figure}
A similar observation can be made for \textit{s2s}, which also captures the price-responsiveness of the pool of buildings by estimating a step-wise utility function. However, the forecasting capabilities of the model \textit{s2s}, which is the one proposed in \cite{Saez-Gallego2018}, are by far worse than those of the proposed technique due to the fact that the former ignores the building thermal dynamics in the forecasting process. Finally, the model \textit{np w/o init} is unable to find a solution and the solver CONOPT returns an evaluation error. This is probably because model~\eqref{single_level_equivalent} with $\iota=0$ is inherently ill-posed, as pointed out in \cite{scholtes2001convergence}.
\subsection{Computational Complexity}
\label{subsec:comp_complexity}
Table \ref{tab:comp_complexity} presents the computing times for solving the estimation problem, i.e. the regularized nonlinear single-level program \eqref{single_level_equivalent} for the proposed model \emph{rnp} and the model \textit{np w/o init}, for all cases analyzed. The estimates for the proposed model \emph{rnp}, i.e. when decision variables are initialized, are achieved in less than 20 min, which is acceptable for a day-ahead operational problem. We can also observe that the model \textit{np w/o init} without initialization attains a solution faster, albeit worse in terms of prediction error, than the proposed model. This is due to the convergence of \textit{np w/o init} to the \emph{attractive} local optimal solution with $\beta = 0$, as explained in the previous section. Moreover, the computational burden of the proposed \emph{rnp} is strongly related to the number of blocks of the marginal utility, which plays a major role for high heterogeneity factors. Note that the computing times for solving the forecasting linear problem \eqref{forecasting_model} are negligible.
On the other hand, the computational performance of this forecasting approach scales to real-sized aggregators with thousands of buildings for two reasons. First, models \eqref{forecasting_model} and \eqref{single_level_equivalent} are independent of the number of buildings belonging to the aggregator. Second, both models are built around a single \emph{prototype} building, whose feasible region is geometrically transformed via the homothetic parameters, also independent of the size of the pool.
\begin{table}[h]
\caption{Computing Times (s) for the Estimation Problem \eqref{single_level_equivalent} for Models \textit{rnp} and \textit{np w/o init}}
\label{tab:comp_complexity}
\centering
\begin{tabular}{cc@{\hskip3pt}c@{\hskip6pt}c@{\hskip3pt}cc@{\hskip3pt}c@{\hskip3pt}c@{\hskip3pt}c}
\hline
\multirow{2}{*}{Model} & \multicolumn{2}{c}{$\hbar = 0.1$} & \multicolumn{2}{c}{$\hbar = 0.75$} \\
\cline{2-5}
& \multicolumn{1}{c}{$n_B = 1$} & \multicolumn{1}{c}{$n_B = 6$} & \multicolumn{1}{c}{$n_B = 1$} & \multicolumn{1}{c}{$n_B = 6$} \\
\hline
\textit{rnp} & 46.3 & 933.7 & 43.6 & 555.4 \\
\textit{np w/o init} & 41.9 & 153.3 & 30.8 & - \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
This paper has proposed a novel day-ahead forecasting technique for an aggregation of smart buildings equipped with thermostatically-controlled loads. From a modeling perspective, the aggregate power of the pool of buildings is represented by using a geometric approach, i.e., its price-response is characterized by a set of marginal utility curves and a homothet of a prototype building. This intuitive representation of the aggregate allows us to account for the building thermal dynamics while drastically reducing the number of parameters to be estimated. Hence, the computational complexity of the estimation algorithm is decreased, thus avoiding the undesirable overfitting effect. From a methodological perspective, inverse optimization is applied to infer the marginal utilities and the homothetic parameters by means of bilevel programming. We can conclude that (i) accounting for the building thermal dynamics in the forecasting technique reduces the forecast error by 20--40\% compared to existing and persistence methodologies when the buildings are more alike, and that (ii) the use of an increasing number of blocks for the marginal utilities in the forecasting process substantially improves the accuracy of the proposed forecasting technique when the heterogeneity among buildings is high.
This paper prompts several avenues for future research. Further work will be devoted to improving the forecast accuracy. Three potential extensions of this research are: (i) increasing the number of geometric parameters (e.g. a rotation of the homothet), (ii) extending the geometric parameters to be regressor-dependent, and (iii) combining inverse optimization with machine learning techniques. Besides, we will explore the impact of inverse optimization under noisy data. Finally, intra-day forecasting of an aggregate load of an ensemble of buildings is also an interesting direction of future research.
\section{Introduction}
Chiral symmetry is an approximate global symmetry in
Quantum ChromoDynamics (QCD),
and the symmetry and its spontaneous breaking
is one of the key ingredients in the low-energy hadron physics.
For instance, all the hadrons can be classified
into representations of $SU(N_f)_L\times SU(N_f)_R$.
Once we fix the representations,
strong constraints are imposed on low-energy effective lagrangians
and the possible terms are uniquely determined up to overall constants.
To embody chiral symmetry in effective lagrangians,
we have two famous ways: the linear and the non-linear representations.
The non-linear representation has been well studied and successful
especially in the context of the chiral perturbation theory.
The linear representation
with scalar mesons as chiral partners of Nambu-Goldstone bosons
would be important around the chiral restoration point
at high temperature/density.
As for the realization of chiral representations in the baryon sector
in the linear representation, there are naively two
possibilities~\cite{DeTar:1988kn,Jido:1998av}.
One is the naive assignment
and the other is the so-called mirror assignment introduced
by DeTar and Kunihiro~\cite{DeTar:1988kn}.
We can find several important differences
between these two assignments
in the couplings or in the nucleon masses.
For example,
the nucleon and its parity partner belong to the same chiral multiplet
and there can exist chirally-invariant mass terms of nucleons
in the mirror assignment~\cite{DeTar:1988kn}.
Due to the mass terms, nucleons can be massive
even when the chiral condensate takes a small or vanishing value,
whereas nucleon masses are simply proportional to the chiral condensate
in the naive assignment~\cite{DeTar:1988kn,Jido:1998av},
which would be the most important difference
between the naive and mirror cases.
Such differences play crucial roles
at finite temperature/density systems
and it should be revealed directly from QCD.
In order to clarify which assignment is natural,
it would be advantageous to measure the axial charge of N(1535),
which we assume to be the chiral partner of N(940),
because the axial charges of N(940) and N(1535)
are sensitive to the chiral structure of baryons~\cite{DeTar:1988kn,Jido:1998av}
and have the same (different) signs in the naive (mirror) assignment.
In this report, we show the first unquenched lattice QCD study
of $g_A^{N^*N^*}$ as well as $g_A^{NN}$.
(For the details, see~\cite{TakahashiKunihiro}.)
We employ a $16^3\times 32$ lattice with two flavors of dynamical quarks,
generated~\cite{AliKhan:2001tx} with the renormalization-group improved
gauge action at $\beta=1.95$ and the mean field improved clover quark action
with the clover coefficient $c_{\rm SW}=1.530$.
The calculations are done with the hopping parameters,
$\kappa_{\rm sea},\kappa_{\rm val}=0.1375$, 0.1390 and 0.1400.
\section{Lattice QCD formulations and results}
N(1535) is the ground-state nucleon in the $\frac12^-$ channel.
Though a ground-state signal
can in principle be isolated using a large Euclidean time separation
between the source and the sink points in correlators,
we could suffer from signal contamination by N(1650),
which lies just 100 MeV above.
and to optimize operators, we diagonalize correlation matrices
constructed with two independent operators;
\begin{align*}
N_1(x)&\equiv \varepsilon_{\rm abc}u^a(x)(u^b(x)C\gamma_5 d^c(x)),\\
N_2(x)&\equiv \varepsilon_{\rm abc}\gamma_5 u^a(x)(u^b(x)C d^c(x)).
\end{align*}
Here, $u(x)$ and $d(x)$ are the Dirac spinors for the u- and d-quarks,
respectively, and $a,b,c$ denote the color indices.
We eliminate wraparound effects,
which could be another possible source of contamination,
by imposing the Dirichlet boundary condition in the temporal direction.
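The diagonalization amounts to a variational (generalized-eigenvalue) analysis of the $2\times 2$ correlation matrix built from $N_1$ and $N_2$. A minimal sketch, with our own (assumed) array layout, reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

# Solve C(t) v = lambda C(t0) v for the 2x2 correlation matrix C,
# stored as an array of shape (n_t, 2, 2); the eigenvectors define the
# optimized operators separating the two lowest states.
def gevp(C, t, t0):
    evals, evecs = eig(C[t], C[t0])
    order = np.argsort(evals.real)[::-1]   # largest eigenvalue first
    return evals.real[order], evecs[:, order].real
\end{verbatim}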
With the optimized operators ${\mathcal N}(x)$,
we can obtain the vector (axial) charges
$g_V$ ($g_A$) as follows.
\begin{equation}
g_V \rightarrow
\frac{
{\rm tr}
\gamma_4
\Gamma
\langle {\cal N}(t_{\rm snk})
V_4(t)
\overline{{\cal N}}(t_{\rm src})\rangle
}{
{\rm tr}
\Gamma \langle {\cal N}(t_{\rm snk})
\overline{{\cal N}}(t_{\rm src})\rangle
}
\quad\quad
(t_{\rm snk}\gg t\gg t_{\rm src})
\label{3pf1}
\end{equation}
and
\begin{equation}
g_A \rightarrow
\frac{
{\rm tr}
\gamma_5\gamma_3
\Gamma
\langle {\cal N}(t_{\rm snk})
A_3(t)
\overline{{\cal N}}(t_{\rm src})\rangle
}{
{\rm tr}\Gamma \langle {\cal N}(t_{\rm snk})
\overline{{\cal N}}(t_{\rm src})\rangle
}
\quad\quad
(t_{\rm snk}\gg t\gg t_{\rm src}),
\label{3pf2}
\end{equation}
with $\Gamma\equiv \frac{1+\gamma_4}{2}$.
Here,
$A_\mu(t)\equiv \sum_{\bf x}
\bar u(x)\gamma_\mu\gamma_5 u(x)
-\bar d(x)\gamma_\mu\gamma_5 d(x)$
and
$V_\mu(t)\equiv \sum_{\bf x}
\bar u(x)\gamma_\mu u(x)
-\bar d(x)\gamma_\mu d(x)$
are the zero-momentum projected axial and vector currents,
and the traces are taken over spinor indices.
All the unwanted quantities,
such as the normalization factors,
are all canceled out between the denominator and the numerator.
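In practice, the charges are read off from the plateau of the per-timeslice ratio of three-point to two-point functions in \eqref{3pf1} and \eqref{3pf2}. A schematic extraction, with our own conventions for the input arrays, is:
\begin{verbatim}
import numpy as np

# c3pt and c2pt are the traced, zero-momentum-projected correlators as
# functions of the current-insertion time t; 'window' selects a plateau
# region with t_src << t << t_snk.
def charge_from_ratio(c3pt, c2pt, window):
    ratio = c3pt / c2pt
    vals = ratio[window[0]:window[1]]
    return vals.mean(), vals.std(ddof=1) / np.sqrt(len(vals))
\end{verbatim}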
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.37]{vector.eps}
\includegraphics[scale=0.37]{axial.eps}
\caption{
\label{finalcharges}
The renormalized vector and axial charges of the positive- and the
negative-parity nucleons are plotted
as the function of the squared pion mass $m_\pi^2$.
The left panel shows the results of the vector charges
and the right panel the results of the axial charges.
In the left panel, the solid line is drawn at
${g}_V=1$ for reference.
In the right panel, the solid line is drawn at
${g}_A=1.26$ and the dashed line is drawn at
${g}_A=0$.
}
\end{center}
\end{figure}
The renormalization factors for bilinear operators
are determined with the constants listed in Ref.~\cite{AliKhan:2001tx}.
We plot in the left panel in Fig.~\ref{finalcharges}
the vector charges of the positive- and the negative-parity nucleons,
which should be unity.
The open (filled) symbols denote the vector charges
of the positive- (negative-) parity nucleon at each hopping parameter.
We can find about 10\% deviations from unity,
which can be considered to come from
the systematic errors in the renormalization factors.
We should then take into account at least 10\% systematic errors
in our results.
The axial charges of the positive-parity nucleon
at each hopping parameter are plotted in the right panel.
They are shown as the open symbols.
One can find the good agreement between
the lattice data and 1.26, the experimental value.
We finally show the axial charges of the negative-parity nucleon
in the right panel.
One finds at a glance that they take quite small values,
$g_A^{N^*N^*}\sim 0.2$,
and that even the sign is quark-mass dependent.
While the wavy behavior might come from
the sensitivity of $g_A^{N^*N^*}$ to quark masses,
this behavior may indicate that
$g_A^{N^*N^*}$ is rather consistent with zero.
The small $g_A^{N^*N^*}$ reflects
the interesting chiral structure of
baryons~\cite{DeTar:1988kn,Jido:1998av,Jido:1999hd,Jaffe:2006jy,Glozman:2007ek}.
The present quark masses are unfortunately so heavy
that the corresponding pion masses are 700 MeV $\sim$ 1.1 GeV.
In order to reveal the chiral structure,
much lighter u,d quarks are indispensable.
The study of the axial charge of Roper or N(1650)
as well as the inclusion of strange sea quarks could also
cast light on the low-energy chiral structure of baryons.
They are left for further study.
\section*{Acknowledgments}
All the numerical calculations were performed
with NEC SX-8 at CMC, Osaka University and at YITP, Kyoto University.
The unquenched gauge configurations
employed in our analysis
were all generated by CP-PACS collaboration~\cite{AliKhan:2001tx}.
\section*{Introduction}
After about 20 years of study of the representation theory of the three infinite dimensional finitary Lie algebras~$\mathfrak{sl}(\infty),\mathfrak{so}(\infty),\mathfrak{sp}(\infty)$, there still is no standard analogue of the Bernstein-Gelfand-Gelfand category $\mathcal{O}$ for these Lie algebras. One reason is that each of these Lie algebras has uncountably many conjugacy classes of Borel subalgebras, so potentially there are many ``categories $\mathcal{O}$". Therefore, one is faced with a selection process trying to sort through various options in constructing interesting and relevant analogues of the BGG category $\mathcal{O}$. Existing results concerning integrable $\mathfrak{g}$-modules, as well as primitive ideals in
$U(\mathfrak{g})$, for Lie algebras~$\mathfrak{g}$ as above, motivate the study of interesting analogues of category $\mathcal{O}$ for two types of Borel subalgebras. These are the {\em Dynkin Borel subalgebras}, or Borel subalgebras having ``enough simple roots", and, on the other hand, the {\em ideal Borel subalgebras} defined in \cite{PP1} and \cite{PP2}. These nonintersecting classes of Borel subalgebras are ``responsible" for different classes of representations, and naturally lead to different ``categories $\mathcal{O}$".
The case of Dynkin Borel subalgebras is considered in the recent paper \cite{Nam} (see also \cite{NamT}) where a category $\overline{\mathcal{O}}$ is defined. This category consists of all weight modules with finite dimensional weight spaces which carry a locally finite action of the entire locally nilpotent radical of a fixed Dynkin Borel subalgebra. A direct consequence of the definition of Dynkin Borel subalgebras is that Verma modules are objects of $\overline{\mathcal{O}}$. Nevertheless, $\overline{\mathcal{O}}$ is not a highest weight category due to lack of projective or injective objects. Another result is that the subcategory of $\overline{\mathcal{O}}$ consisting of integrable modules (integrable modules are direct limits of finite dimensional modules over finite dimensional subalgebras) is a semisimple category. This makes $\overline{\mathcal{O}}$ somewhat similar to the original BGG category ${\mathcal{O}}$. A concrete motivation to study versions of category $\mathcal O$ for this type of Borel subalgebras comes from the representation theory of finite dimensional Lie superalgebras. Through the concept of ``super-duality'', the category of finite dimensional modules over the general linear superalgebra $\mathfrak{gl}(m|n)$ is related to modules in (parabolic subcategories) of category $\overline{\mathcal O}$ for $\mathfrak{gl}(\infty)$, see e.g.~\cite{CLW, CWZ}. Such super-duality exists also in category $\mathcal O$ for $\mathfrak{gl}(m|n)$ and for Lie superalgebras of other types.
On the other hand, in \cite{PS} an analogue of category ${\mathcal{O}}$ is defined for an ideal Borel subalgebra. Verma modules are not objects of this category, but its subcategory of integrable modules coincides with the nonsemisimple category of tensor modules studied in \cite{DPS}. This latter category is itself an interesting highest weight category.
The current paper arose from an attempt to understand better the homological structure of the category $\overline{\mathcal{O}}$ introduced in \cite{Nam}. It turns out that it is convenient to extend Nampaisarn's category to a category $\mathbf{O}$ whose objects are weight modules which are locally finite with respect to the locally nilpotent radical of a Dynkin Borel subalgebra, but do not necessarily have finite dimensional weight spaces.
Let us give a brief description of the content of the paper. Sections 1 and 2 are of a preliminary nature. Here we recall some general facts about abelian categories and about root-reductive Lie algebras. In particular, we go over the notions of Cartan subalgebras, Borel subalgebras and Weyl groups for root-reductive Lie algebras. In Section 3 we collect some basic facts about Verma modules and dual Verma modules. This section reproves some results of \cite{NamT} and \cite{Nam} and explores some of the peculiarities of Verma modules for Borel subalgebras which are not of Dynkin type.
From Section 4 on, we only consider Dynkin Borel subalgebras and introduce the category $\mathbf{O}$. We demonstrate that $\mathbf{O}$ is a Grothendieck category, and that it decomposes as the product of indecomposable blocks described by the Weyl group orbits in the dual Cartan subalgebra $\mathfrak{h}^*$. This also reproves Nampaisarn's result about linkage in $\overline{\mathcal{O}}$. Next, we study blocks after truncation to upper finite ideals in $\mathfrak{h}^\ast$. We prove equivalence of the categories of the truncated blocks with categories of modules over certain locally finite dimensional associative algebras. We then show that any truncated category $\mathbf{O}$ is extension full in $\mathbf{O}$, and also in the category of weight modules. These results allow us to transfer certain homological questions in $\mathbf{O}$ to categories which have enough projective objects. It is an open question whether the entire category $\mathbf{O}$ is extension full in the category of weight modules, and whether the category $\overline{\mathcal{O}}$ is extension full in $\mathbf{O}$.
In Section 5, we prove that the Serre quotient category of two appropriately chosen truncations of $\mathbf{O}$ is equivalent to
$\mathcal{O}(\mathfrak{g}_n,\mathfrak{b}_n)$ for arbitrarily large $n$, where $\mathfrak{g}=\varinjlim \mathfrak{g}_n$ for finite dimensional reductive Lie algebras~$\mathfrak{g}_n$. For dominant blocks, it suffices to consider a quotient category of $\mathbf{O}$, and for antidominant blocks it suffices to consider a subcategory of $\mathbf{O}$ to establish such equivalences. Using the homological results in Section 4, this shows that the higher extensions of simple modules by Verma modules in $\mathbf{O}$ are governed by Kazhdan-Lusztig-Vogan polynomials. This was conjectured for $\overline{\mathcal O}$ in \cite{NamT}.
As another application, we show that all blocks of $\mathbf{O}$ corresponding to integral dominant regular weights are equivalent. In this section we also address the Koszulity of blocks of the category $\mathbf{O}$. We prove that truncations of $\mathbf{O}$ admit graded covers, in the sense of \cite{BGS}. In the graded setting, we show that extensions of simple modules by Verma modules (and extensions of dual Verma modules by simple modules) satisfy the Koszulity pattern. For BGG category $\mathcal O$, this property actually implies ordinary Koszulity, see \cite{ADL}. Here, we leave open the question of whether extensions of simple modules by simple modules in the graded cover of $\mathbf{O}$ also satisfy the required pattern. This is a nice question for further research.
Sections 6 and 7 are devoted to another natural structural question: Ringel duality in the category $\mathbf{O}$. In Section 6 we construct and study the semi-regular $U(\mathfrak{g})$-bimodule. For Kac-Moody algebras corresponding to a finite dimensional Cartan matrix, this bimodule was introduced in \cite{Arkhipov, Soergel}, and we extend the procedure to Kac-Moody algebras for infinite dimensional Cartan matrices, such as $\mathfrak{sl}(\infty),\mathfrak{so}(\infty)$ and $\mathfrak{sp}(\infty)$.
In Section 7 we show that the category $\mathbf{O}$, as a whole, is Ringel self-dual, by establishing a (covariant) equivalence between the category of modules with a Verma flag and the category of modules with a dual Verma flag. Since this equivalence sends the Verma module $\Delta(\lambda)$ to the dual Verma module $\nabla(-\lambda-2\rho)$, the blocks of $\mathbf{O}$ are not Ringel self-dual. In particular, dominant blocks are dual to anti-dominant blocks. The Ringel duality functor also implies existence of tilting modules in appropriate Serre quotients and determines their decomposition multiplicities.
The paper is concluded by brief appendices on certain theories to which we refer throughout the text: Serre quotient categories, Ringel duality, graded covers, and quasi-hereditary Koszul algebras.
Finally, we would like to mention some interesting related results, obtained independently at the same time by Chen and Lam in \cite{CL}. There, specific dominant blocks in {\em parabolic subcategories}, with respect to specific Levi subalgebras of finite corank, of $\bar\mathcal O$ for $\mathfrak{gl}(\infty), \mathfrak{so}(\infty), \mathfrak{sp}(\infty)$ are studied. For $\mathfrak{gl}(\infty)$, this leads to categories where the modules have finite length. In that setting, equivalences with the finite rank case are also established in \cite{CL} and used to obtain results on Koszulity.
\subsection*{Acknowledgement}
KC was supported by ARC grant DE170100623. IP was supported in part by DFG grant PE980/7-1.
\section{Preliminaries}
We fix an algebraically closed field $\Bbbk$ of characteristic zero. For any Lie algebra~$\mathfrak{k}$, the universal enveloping algebra will be denoted by~$U(\mathfrak{k})$. The restriction functor from the category of $\mathfrak{k}$-modules to the category of $\mathfrak{l}$-modules, for a subalgebra $\mathfrak{l}\subset\mathfrak{k}$, is denoted by ${\rm Res}^{\mathfrak{k}}_{\mathfrak{l}}$. We put $\mathbb{N}=\{0,1,2,\ldots\}$. If $A$ is a set, then $|A|$ denotes its cardinality. For a category $\mathcal{C}$ we will usually abbreviate $X\in{\rm{Ob}\,}\mathcal{C}$ to $X\in\mathcal{C}$.
\subsection{Abelian categories}
Let $\mathcal{C}$ be an arbitrary abelian category.
\subsubsection{Multiplicities}
We follow \cite[Definition~4.1]{Soergel} regarding multiplicities.
For~$M\in \mathcal{C}$ and simple $L\in\mathcal{C}$, the multiplicity $[M:L]\in\mathbb{N}\cup\{\infty\}$ of~$L$ in~$M$ is
$$[M:L]\;=\;\sup_{F_\bullet}|\{i\,|\,F_iM/F_{i+1}M\cong L\}|,$$
where~$F_\bullet$ ranges over all {\em finite} filtrations $0=F_pM\subset\cdots \subset F_{i+1}M\subset F_iM\subset\cdots\subset F_0M=M$.
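As a simple illustration of this definition (included only for orientation): if $M$ has a finite composition series, then $[M:L]$ is the usual Jordan--H\"older multiplicity of~$L$ in~$M$, while for~$M=\bigoplus_{n\in\mathbb{N}}L$ the finite filtrations $0\subset L\subset L^{\oplus 2}\subset\cdots\subset L^{\oplus k}\subset M$ show that~$[M:L]=\infty$.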
\subsubsection{Extensions}
For each $i\in\mathbb{N}$, we define the extension functor
$$\Ext_{\mathcal{C}}^i(-,-)\;:\;\mathcal{C}^{{\rm op}}\times\mathcal{C}\,\to\, \mathbf{A\hspace{-0.5mm}b}$$
as in \cite[III.3]{Ve}, see also \cite[Section~2.1]{CM}. For an abelian subcategory $\iota:\mathcal B\hookrightarrow \mathcal{C}$, the exact inclusion functor~$\iota$ induces group homomorphisms
\begin{equation}\label{XYi}\iota_{XY}^{i}:\;\Ext^i_{\mathcal B}(X,Y)\;\to\;\Ext^i_{\mathcal{C}}(X,Y),\quad\mbox{ for all $i\in\mathbb{N}$ and~$X,Y\in\mathcal B$.}\end{equation}
In general, these are neither epimorphisms nor monomorphisms. When all $\iota_{XY}^i$ are isomorphisms, we say that~$\mathcal B$ is {\bf extension full} in~$\mathcal{C}$.
\subsubsection{Coproducts}We denote the coproduct of a family $\{X_\alpha\}$ of objects in~$\mathcal{C}$, if it exists, by $\bigoplus_\alpha X_\alpha$. By definition, we have an isomorphism
\begin{equation}\label{eqCo}\Hom_{\mathcal{C}}\left(\bigoplus_\alpha X_\alpha,Y\right)\;\stackrel{\sim}{\to}\; \prod_\alpha\Hom_{\mathcal{C}}(X_\alpha,Y),\quad f\mapsto (f\circ\iota_\alpha)_\alpha.\end{equation}
The following lemma can be generalised substantially, but it will suffice for our purposes.
\begin{lemma}\label{LemCopr}
If for a family $\{X_\alpha\}_\alpha$ of objects in~$\mathcal{C}$ and~$Y\in\mathcal{C}$ we have $\Ext^1_{\mathcal{C}}(X_\alpha,Y)=0$ for all $\alpha$, then $\Ext^1_{\mathcal{C}}(\bigoplus_\alpha X_\alpha,Y)=0$.
\end{lemma}
\begin{proof}
Represent an element of $\Ext^1_{\mathcal{C}}(\bigoplus_\alpha X_\alpha,Y)$ as the upper row of the following diagram
$$\xymatrix{0\ar[r]& Y\ar@{=}[d]\ar[r]& M\ar[r]^f&\bigoplus_\alpha X_\alpha\ar[r]&0\\
0\ar[r]& Y\ar[r]& M_\beta\ar[r]^{f_\beta}\ar[u]^{\phi_\beta}& X_\beta\ar[r]\ar[u]^{\iota_\beta}&0.
}$$
With $M_\beta$ the pullback of $X_\beta$ in $M$, we obtain the above commuting diagram with exact rows, for every $\beta$. By assumption, there exist $g_\beta: X_\beta\to M_\beta$ with $f_\beta\circ g_\beta=1_{X_\beta}$. Equation~\eqref{eqCo} yields a morphism $g:\bigoplus_\alpha X_\alpha\to M$ such that~$g\circ\iota_\beta =\phi_\beta\circ g_\beta$. Commutativity of the diagram implies~$f\circ g\circ\iota_\beta=\iota_\beta$. Since $\beta$ is arbitrary, the isomorphism~\eqref{eqCo} implies that $f\circ g$ is the identity of $\bigoplus_\alpha X_\alpha$, and the extension defined by the upper row of the diagram splits.
\end{proof}
\subsubsection{Serre subcategories}
A non-empty full subcategory~$\mathcal B$ of~$\mathcal{C}$ is a {\bf Serre subcategory} (``thick subcategory'' in \cite{Gabriel}) if for every short exact sequence in $\mathcal{C}$
$$0\to Y_1\to X\to Y_2\to 0,$$
we have $X\in \mathcal B$ if and only if~$Y_1,Y_2\in \mathcal B$. The exact inclusion functor~$\iota:\mathcal B\hookrightarrow \mathcal{C}$ is fully faithful (meaning that all $\iota^0_{XY}$ are isomorphisms) and such that also all $\iota^1_{XY}$ are isomorphisms. In addition, it follows immediately that~$\mathcal B$ is a strictly full (full and replete) subcategory.
\subsection{Locally finite algebras}
\subsubsection{}A $\Bbbk$-algebra $A$ is {\bf locally unital} if there exists a family of mutually orthogonal idempotents~$\{e_\alpha\,|\, \alpha\in\Lambda\}$ for which we have
$$A\;=\;\bigoplus_{\alpha}e_\alpha A\;=\; \bigoplus_\alpha Ae_\alpha.$$
We denote by $A$-Mod the category of left $A$-modules $M$ which satisfy $M=\bigoplus_\alpha e_\alpha M$, or equivalently $AM=M$.
\subsubsection{} A locally unital algebra $A$ is {\bf locally finite} if $\dim_{\Bbbk}e_\alpha Ae_\beta<\infty$ for all $\alpha,\beta$. For such an algebra we have the full subcategory $A$-mod of $A$-Mod, of modules $M$ which satisfy $\dim_{\Bbbk}e_\alpha M<\infty$ for all $\alpha$. Clearly the projective modules $Ae_\alpha$ are objects of $A$-mod, although $A$-mod will generally not contain {\em enough} projective objects.
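The following standard example may help to fix ideas; here we write $E_{mn}$ for the matrix units.
\begin{ex}
Let $A$ be the algebra of $\mathbb{N}\times\mathbb{N}$-matrices over~$\Bbbk$ with only finitely many non-zero entries, with idempotents $e_n:=E_{nn}$. Then $e_nA$, resp. $Ae_n$, consists of the matrices supported in the $n$-th row, resp. the $n$-th column, so $A$ is locally unital. Since $e_mAe_n=\Bbbk E_{mn}$, the algebra $A$ is moreover locally finite.
\end{ex}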
\subsection{Partial orders} Fix a partially ordered set $(S,\preceq)$. We will denote the induced partial order on any subset of~$S$ by the same notation $\preceq$.
\subsubsection{}\label{secPaOr}
Any two elements~$\lambda,\mu\in S$ determine an {\bf interval} $\{\nu\in S\,|\,\mu\preceq\nu\preceq \lambda\}$.
An {\bf ideal} $\mathsf{K}$ is a subset of~$S$ with the property that~$\lambda\in \mathsf{K}$ and~$\mu\preceq \lambda$ implies~$\mu\in\mathsf{K}$. An ideal $\mathsf{K}$ is {\bf finitely generated} if there exists a finite subset $E\subset \mathsf{K}$ such that each $\mu\in\mathsf{K}$ satisfies $\mu\preceq\lambda$ for some $\lambda\in E$. A subset $\mathscr{J}\subset S$ is {\bf upper finite}, resp. {\bf lower finite}, if for any $\mu\in\mathscr{J}$ there are only finitely many $\lambda\in \mathscr{J}$ for which $\lambda\succeq\mu$, resp. $\lambda\preceq\mu$. A subset $\mathsf{I}\subset S$ is {\bf complete} if it is a union of intervals, {\it i.e.} if $\lambda,\mu\in \mathsf{I}$ and~$\mu\preceq\nu\preceq \lambda$ implies~$\nu\in\mathsf{I}$. A subset $\mathsf{C}$ of~$S$ is a {\bf coideal} if $\lambda\in \mathsf{C}$ and~$\mu\succeq \lambda$ imply~$\mu\in\mathsf{C}$. Clearly, the intersection of an ideal and a coideal is a complete subset. Furthermore, the coideals in~$S$ are precisely the sets $S\backslash\mathsf{I}$ for ideals $\mathsf{I}$ in~$S$.
\subsubsection{}\label{Complete2}To any complete subset $\mathsf{I}\subset S$, we associate two ideals
$$\overline{\mathsf{I}}=\{\mu\in S\,|\, \mu\preceq \lambda,\;\;\mbox{for some $\lambda\in\mathsf{I} $}\}\quad\mbox{and}\quad \mathring{\mathsf{I}}\;=\;\overline{\mathsf{I}}\;\backslash\;\mathsf{I}.$$
A pair of elements $\mu,\lambda\in S$ is called {\bf remote} if the interval $\{\nu\in S\,|\,\mu\preceq\nu\preceq \lambda\}$ has infinite cardinality. With this convention, incomparable elements are never remote.
A partial order is {\bf interval finite} if every interval is a finite set, or equivalently if no two elements are remote. For a partial order which is interval finite, all finitely generated ideals are upper finite ideals.
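We illustrate these notions in a toy example, which is a routine check from the definitions.
\begin{ex}
Take $S=\mathbb{Z}$ with the usual total order. Every interval is finite, so the order is interval finite and no two elements are remote. The non-empty proper ideals are the sets $\mathbb{Z}_{\le n}$; each of these is finitely generated, hence upper finite, whereas the ideal $\mathbb{Z}$ itself is not upper finite. The non-empty proper coideals are the complements $\mathbb{Z}_{\ge n}$.
\end{ex}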
\section{Root-reductive Lie algebras and triangular decompositions}
\subsection{Root-reductive Lie algebras}
\subsubsection{}Recall that a subalgebra $\mathfrak{k}\subset\mathfrak{l}$ of a finite dimensional Lie algebra $\mathfrak{l}$ is {\bf reductive in $\mathfrak{l}$} if the adjoint action of $\mathfrak{k}$ on $\mathfrak{l}$ is semisimple.
\label{LocRed} A Lie algebra~$\mathfrak{g}$ over $\Bbbk$ is {\bf locally reductive} if ${\mathfrak{g}}$ has a collection of nested subalgebras~$\{\widetilde{\mathfrak{g}}_n\subset\widetilde{\mathfrak{g}}_{n+1}\,|\,n\in\mathbb{N}\}$
such that
$$\mathfrak{g}\;=\;\varinjlim \widetilde{\mathfrak{g}}_n,$$
where, for each $n\in\mathbb{N}$, $\widetilde{\mathfrak{g}}_n$ is a finite dimensional reductive Lie algebra which is reductive in~$\widetilde{\mathfrak{g}}_{n+1}$.
\subsubsection{}\label{DefrrL}
Consider a locally reductive Lie algebra~$\mathfrak{g}$ as above. If, for each $n\in\mathbb{N}$, there exists a Cartan subalgebra~$\mathfrak{h}_n\subset\widetilde{\mathfrak{g}}_n$ such that $\mathfrak{h}_n\subset\mathfrak{h}_{n+1}$ and such that each root vector in~$\widetilde{\mathfrak{g}}_n$ is also a root vector in~$\widetilde{\mathfrak{g}}_{n+1}$, the Lie algebra~$\mathfrak{g}$ is called
{\bf root-reductive}.
If $\mathfrak{g}$ is root-reductive, we obtain an abelian subalgebra~$\mathfrak{h}=\varinjlim\mathfrak{h}_n$. Such subalgebras~$\mathfrak{h}\subset\mathfrak{g}$ are known as {\bf splitting maximal toral subalgebras} of~$\mathfrak{g}$. These subalgebras are also {\bf Cartan subalgebras} of $\mathfrak{g}$, according to the definition and results in \cite[Section~3]{DPS}. We will simply use the term ``Cartan subalgebra'' when referring to splitting maximal toral subalgebras.
We also introduce the subalgebras~$$\mathfrak{g}_n:=\widetilde{\mathfrak{g}}_n+\mathfrak{h}\;\subset\;\mathfrak{g}.$$
\begin{lemma} \cite[Theorem~4.1]{DPS},~\cite[Theorem~1]{DP}
For any root-reductive Lie algebra~$\mathfrak{g}$, the derived algebra~$[\mathfrak{g},\mathfrak{g}]$ is a root-reductive Lie algebra which is a countable direct sum of Lie algebras isomorphic to~$\mathfrak{sl}(\infty)$, $\mathfrak{so}(\infty)$, $\mathfrak{sp}(\infty)$ or finite dimensional simple Lie algebras.
\end{lemma}
\subsubsection{}If $\mathfrak{g}$ is a root-reductive Lie algebra with Cartan subalgebra~$\mathfrak{h}$, there is a corresponding decomposition into $\mathfrak{h}$-weight spaces
\begin{equation}\label{rootdec}\mathfrak{g}\;=\;\mathfrak{h}\oplus\bigoplus_{\alpha\in\Phi}\mathfrak{g}^{\alpha},\end{equation}
for the set of roots $\Phi\subset\mathfrak{h}^\ast$. By construction, we have $\dim_{\Bbbk}\mathfrak{g}^\alpha=1$ for each $\alpha\in \Phi$, and~$0\not\in\Phi$. We denote the subset of roots belonging to~$\mathfrak{g}_n$ as $\Phi_n$, for each $n\in\mathbb{N}$.
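The basic example, a direct check with $E_{ij}$ denoting the matrix units, is the following.
\begin{ex}
For $\mathfrak{g}=\mathfrak{gl}(\infty)=\varinjlim\mathfrak{gl}(n)$, with $\mathfrak{h}$ the subalgebra of diagonal matrices, the decomposition~\eqref{rootdec} takes the familiar form $\Phi=\{\epsilon_i-\epsilon_j\,|\,i\not=j\}$ and $\mathfrak{g}^{\epsilon_i-\epsilon_j}=\Bbbk E_{ij}$, where $\epsilon_i\in\mathfrak{h}^\ast$ evaluates a diagonal matrix at its $i$-th entry.
\end{ex}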
\subsubsection{}\label{SecWeightM}We introduce the category~$\mathbf{C}(\mathfrak{g},\mathfrak{h})$ of~$\mathfrak{g}$-modules which are
semisimple as~$\mathfrak{h}$-modules. For any $\mu\in\mathfrak{h}^\ast$ and~$M\in\mathbf{C}(\mathfrak{g},\mathfrak{h})$, we denote by~$M_\mu$ the $\mu$-weight space in~$M$. By assumption, we have $M=\bigoplus_\mu M_\mu$ for $M\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$.
For any module $M\in\mathbf{C}(\mathfrak{g},\mathfrak{h})$, we consider its support $\mathrm{supp} M\subset \mathfrak{h}^\ast$, which is the set of all weights $\mu$ such that~$M_\mu\not=0$.
The full subcategory of~$\mathbf{C}(\mathfrak{g},\mathfrak{h})$ of modules $M$ which satisfy $\dim_{\Bbbk}M_\mu<\infty$ for all $\mu\in\mathfrak{h}^\ast$, is denoted by~$\mathcal{C}(\mathfrak{g},\mathfrak{h})$. This is clearly a Serre subcategory.
There is the duality $M\mapsto M^{\circledast}$ on $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ which takes $M$ to its $\mathfrak{h}$-finite dual, {\it i.e.} to the maximal $\mathfrak{h}$-semisimple submodule of~$M^\ast=\Hom_{\Bbbk}(M,\Bbbk)$. There is also the duality $M\mapsto M^{\vee}$ which twists the action on $M^{\circledast}$ with the anti-involution $\tau:\mathfrak{g}\to\mathfrak{g}$ which maps~$\mathfrak{g}^{\alpha}$ to~$\mathfrak{g}^{-\alpha}$ for all $\alpha\in\Phi$, and acts as $-1$ on $\mathfrak{h}^\ast$. In particular, we have
\begin{equation}
\label{suppDual}
\mathrm{supp} M\;=\;\mathrm{supp} M^\vee\quad\mbox{for all $M\in\mathcal{C}(\mathfrak{g},\mathfrak{h}).$}
\end{equation}
\begin{rem}
If we apply the definition of $\mathbf{C}(\mathfrak{g},\mathfrak{h})$ to $\mathfrak{g}_n$, and then only consider modules with support belonging to a fixed coset of $\mathfrak{h}^\ast/\mathbb{Z}\Phi_n$, we automatically get an equivalence with a correspondingly defined category for $\widetilde{\mathfrak{g}}_n$. We will therefore freely use results for the finite dimensional reductive Lie algebra $\widetilde{\mathfrak{g}}_n$, for example related to category $\mathcal O$, when working over $\mathfrak{g}_n$.
\end{rem}
\subsection{Triangular decompositions} Fix a root-reductive Lie algebra~$\mathfrak{g}$ with Cartan subalgebra~$\mathfrak{h}$.
\subsubsection{}\label{DefBorel}Choose a subset $\Phi^+\subset\Phi$ such that $\Phi=\Phi^+\amalg\Phi^-$, with $\Phi^-:=-\Phi^+$, and $\alpha+\beta\in\Phi^+$ whenever $\alpha,\beta\in\Phi^+$.
Then let
$$ \mathfrak{n}^+:=\bigoplus_{\alpha\in\Phi^+}\mathfrak{g}^\alpha,\;\; \mathfrak{n}^-:=\bigoplus_{\alpha\in\Phi^-}\mathfrak{g}^{\alpha}.$$
The elements of~$\Phi^+$, resp. $\Phi^-$, which cannot be written as a sum of two other elements of $\Phi^+$, resp. $\Phi^-$, are known as {\bf simple roots}. The positive simple roots constitute the subset $\Sigma \subset\Phi^+$.
The {\bf splitting Borel subalgebras} of~$\mathfrak{g}$ are by definition precisely the subalgebras~$\mathfrak{b}:=\mathfrak{h}\oplus\mathfrak{n}^+$ obtained in the above way. (The decomposition $\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+$ is a direct sum of vector spaces, not of Lie algebras.)
Note that~$\mathfrak{b}^-=\mathfrak{h}\oplus\mathfrak{n}^-,$ the Borel subalgebra opposite to $\mathfrak{b}$ and containing $\mathfrak{h}$, is also a splitting Borel subalgebra. We will simply use the term ``Borel subalgebra'' when referring to splitting Borel subalgebras. A Borel subalgebra for $\mathfrak{g}$ leads to Borel subalgebras for $\mathfrak{g}_n$ and~$\widetilde{\mathfrak{g}}_n$:
$$\mathfrak{b}_n:=\mathfrak{g}_n\cap\mathfrak{b},\qquad \widetilde{\mathfrak{b}}_n:=\widetilde{\mathfrak{g}}_n\cap\mathfrak{b}.$$
\subsubsection{}
For each $\lambda\in\mathfrak{h}^\ast$, we have the corresponding {\bf Verma module}
\begin{equation}\label{Verma}\Delta^{\mathfrak{b}}_{\mathfrak{g}}(\lambda):=U(\mathfrak{g})\otimes_{U(\mathfrak{b})}\Bbbk_\lambda,\end{equation}
where $\Bbbk_\lambda$ is the one dimensional $\mathfrak{h}$-module of weight $\lambda$ with trivial $\mathfrak{n}^+$-action. We will omit the indices $\mathfrak{g}$ and~$\mathfrak{b}$ when it is clear which algebras are considered.
We denote by~$\Gamma^+$ the subset of~$\mathfrak{h}^\ast$, consisting of~$0$ and finite sums of elements in~$\Phi^+$. The partial order $\le$ on $\mathfrak{h}^\ast$ is defined as
$$\mu\le\lambda\;\;\Leftrightarrow\;\; \lambda-\mu\in\Gamma^+\;\;\Leftrightarrow\;\; \Delta(\lambda)_\mu\not=0.$$
We use the notation $\le_n$ for the partial order on $\mathfrak{h}^\ast$ obtained from the above procedure applied to $\Phi_n^+=\Phi_n\cap\Phi^+$.
\subsection{Parabolic subalgebras}
\subsubsection{}
For a fixed Borel subalgebra~$\mathfrak{b}$, any subalgebra~$\mathfrak{p}\subset\mathfrak{g}$ which contains $\mathfrak{b}$ is called a {\bf parabolic} subalgebra. The reductive part~$\mathfrak{l}\subset\mathfrak{p}$ is spanned by~$\mathfrak{h}$ and all root spaces $\mathfrak{g}^\alpha$ such that both $\mathfrak{g}^\alpha$ and~$\mathfrak{g}^{-\alpha}$ are in~$\mathfrak{p}$. We denote by $\Phi(\mathfrak{l})\subset\Phi$ the set of roots occurring in $\mathfrak{l}$. We have the corresponding parabolic decomposition (of vector spaces)
$$\mathfrak{g}\;=\;\mathfrak{u}^-\oplus\mathfrak{l}\oplus\mathfrak{u}^+,\qquad\mathfrak{p}=\mathfrak{l}\oplus\mathfrak{u}^+.$$
The following lemma is an easy consequence of the definitions.
\begin{lemma}\label{LemComplete}
For any $\lambda\in\mathfrak{h}^\ast$ and reductive part~$\mathfrak{l}\subset\mathfrak{g}$ of a parabolic subalgebra, the subset $\lambda+\mathbb{Z}\Phi(\mathfrak{l})\subset\mathfrak{h}^\ast$ is complete for the partial order $\le$.
\end{lemma}
\subsection{Induction and restriction}\label{IndRes}
Fix a Borel subalgebra $\mathfrak{b}$ and a parabolic subalgebra $\mathfrak{p}\subset\mathfrak{g}$ with reductive part $\mathfrak{l}$. We have the exact functor
$${\rm Ind}^{\mathfrak{g}}_{\mathfrak{l},+}\,:\; \mathfrak{l}\mbox{-Mod}\;\to\;\mathfrak{g}\mbox{-Mod,}$$
which is given by interpreting $\mathfrak{l}$-modules as $\mathfrak{p}=\mathfrak{l}\oplus \mathfrak{u}^+$-modules with trivial $\mathfrak{u}^{+}$-action, followed by ordinary induction ${\rm Ind}^{\mathfrak{g}}_{\mathfrak{p}}=U(\mathfrak{g})\otimes_{U(\mathfrak{p})}-$.
For any $\lambda\in\mathfrak{h}^\ast$, we also consider the functor
$${\rm Res}^{\mathfrak{g}}_{\mathfrak{l},\lambda}\,:\; \mathbf{C}(\mathfrak{g},\mathfrak{h})\;\to\; \mathbf{C}(\mathfrak{l},\mathfrak{h})$$
which we define as the ordinary restriction functor followed by taking the maximal direct summand with support in~$\lambda+\mathbb{Z}\Phi(\mathfrak{l})$.
\subsection{Dynkin Borel subalgebras}
Consider a root-reductive Lie algebra~$\mathfrak{g}$ with Cartan subalgebra $\mathfrak{h}$.
\begin{prop}\label{equivCond}
For a Borel subalgebra $\mathfrak{h}\subset\mathfrak{b}\subset\mathfrak{g}$, the following conditions are equivalent:
\begin{enumerate}[(i)]
\item The elements of~$\Gamma^+$ are the finite sums of elements in~$\Sigma $.
\item We can write $\mathfrak{g}=\varinjlim\widetilde{\mathfrak{g}}_n$ as in \ref{LocRed}, with the additional condition that~$\widetilde{\mathfrak{g}}_n+\mathfrak{b}$ is a (parabolic) subalgebra of~$\mathfrak{g}$ for each $n\in\mathbb{N}$.
\item The partial order $\le$ is interval finite.
\item The Lie algebra~$\mathfrak{g}$ is generated by~$\mathfrak{h}$ and the simple (positive and negative) root spaces.
\item For each $\lambda\in\mathfrak{h}^\ast$, the Verma module~$\Delta(\lambda)$ is locally $U(\mathfrak{b})$-finite.
\item For each $\lambda\in\mathfrak{h}^\ast$, the Verma module~$\Delta(\lambda)$ has finite dimensional weight spaces.
\end{enumerate}
If one of the conditions is satisfied, $\mathfrak{b}$ is called a {\bf Dynkin Borel} subalgebra.
\end{prop}
\begin{proof}
First we show that (ii) and (iv) are equivalent. Choose finite subsets $\Sigma_n\subset\Sigma $ for $n\in\mathbb{N}$, such that~$\Sigma =\cup_n\Sigma_n$ and~$\Sigma_n\subset\Sigma_{n+1}$. Choose also nested subalgebras $\{\mathfrak{h}_n\subset\mathfrak{h}_{n+1}\}$ of $\mathfrak{h}$ with $\varinjlim\mathfrak{h}_n=\mathfrak{h}$. Then we let $\widetilde{\mathfrak{g}}_n$ be the subalgebra of $\mathfrak{g}$ generated by $\mathfrak{h}_n$ and the root vectors corresponding to $\Sigma_n\sqcup -\Sigma_n$. If (iv) is satisfied, it is easy to see that~$\{\widetilde{\mathfrak{g}}_n\}$ satisfies all properties in (ii). Now assume that (ii) is satisfied. Since any $X\in \mathfrak{g}$ is contained in $\widetilde{\mathfrak{g}}_n$ for some $n$, and $\widetilde{\mathfrak{g}}_n$ is generated by $\widetilde{\mathfrak{g}}_n\cap\mathfrak{h}$ and the simple root spaces of $\mathfrak{g}$ which belong to $\widetilde{\mathfrak{g}}_n$, it follows that (iv) is satisfied.
That conditions (i) and (iv) are equivalent is clear.
Now we show that conditions (i) and (iii) are equivalent. If (i) is satisfied, then $\lambda\ge \mu$ implies that~$\lambda-\mu$ is a finite sum of simple roots. It follows that the interval between $\lambda$ and~$\mu$ is finite. On the other hand, if (i) is not satisfied, we have $\beta\in\Phi^+$ such that we can consecutively subtract elements of $\Sigma $ and always obtain an element of $\Phi^+$. It follows that the interval between $\beta$ and~$0$ has infinite cardinality.
Consider $\gamma\in\Gamma^+$. By the PBW theorem, there are finitely many ways to write $\gamma$ as a sum of elements in~$\Phi^+$ with non-negative coefficients if and only if $\dim_{\Bbbk} \Delta(\lambda)_{\lambda-\gamma}<\infty$, and it is clear that the latter condition is independent of $\lambda\in\mathfrak{h}^\ast$. It follows that conditions (i) and (vi) are equivalent.
Now assume that condition (i) is satisfied. By the above, also conditions (iii) and (vi) are satisfied. We thus have
$$\sum_{\mu\ge \lambda-\gamma}\dim_{\Bbbk}\Delta(\lambda)_{\mu}<\infty,$$
for an arbitrary $\lambda\in\mathfrak{h}^\ast$ and~$\gamma\in\Gamma^+$.
It follows that condition (v) is also satisfied, so (i) implies (v).
Now assume that (i) is not satisfied. Then there exists $\beta\in\Phi^+$ which is not a finite sum of elements of $\Sigma $. It follows from standard $\mathfrak{sl}_2$-arguments that~$X_{-\beta}v$, with $X_{-\beta}\in \mathfrak{g}^{-\beta}$ and~$v$ the highest weight vector of an arbitrary Verma module, generates an infinite dimensional $U(\mathfrak{b})$-module. Hence (v) implies (i).
\end{proof}
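We include a well-known example illustrating the dichotomy in Proposition~\ref{equivCond}.
\begin{ex}
For $\mathfrak{g}=\mathfrak{gl}(\infty)$ with $\mathfrak{h}$ the diagonal matrices as above, the splitting Borel subalgebras containing $\mathfrak{h}$ correspond to total orders $\prec$ on the index set $\mathbb{Z}_{>0}$, via $\Phi^+=\{\epsilon_i-\epsilon_j\,|\,i\prec j\}$. The natural order $1\prec 2\prec 3\prec\cdots$ yields the Borel subalgebra of upper triangular matrices; here $\Sigma=\{\epsilon_i-\epsilon_{i+1}\,|\,i\ge 1\}$ and every positive root is a finite sum of simple roots, so this Borel subalgebra is Dynkin. By contrast, the order $1\prec 3\prec 5\prec\cdots\prec 6\prec 4\prec 2$ yields a Borel subalgebra which is not Dynkin: the positive root $\epsilon_1-\epsilon_2$ is not a finite sum of simple roots. In the latter case the weights $-\epsilon_1+\epsilon_2$ and~$0$ are remote.
\end{ex}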
\subsubsection{}Consider again an arbitrary Borel subalgebra~$\mathfrak{b}$.
Following \cite[Section~6]{NamT}, we define the {\bf $\mathfrak{b}$-finite root-reductive subalgebra} as the subalgebra $\mathfrak{l}_{\mathfrak{b}}$ of~$\mathfrak{g}$ generated by~$\mathfrak{h}$ and all root spaces for simple roots, with respect to $\mathfrak{b}$. Then $\mathbb{Z}\Phi(\mathfrak{l}_{\mathfrak{b}})=\mathbb{Z}\Sigma $ and $\mathfrak{l}_{\mathfrak{b}}$ is the reductive part of the parabolic subalgebra $\mathfrak{l}_{\mathfrak{b}}+\mathfrak{b}$.
We have~$\mathfrak{l}_{\mathfrak{b}}=\mathfrak{g}$ if and only if $\mathfrak{b}$ is a Dynkin Borel subalgebra. In general, $\mathfrak{b}\cap{\mathfrak{l}_{\mathfrak{b}}}$ is a Dynkin Borel subalgebra of~$\mathfrak{l}_{\mathfrak{b}}$.
\subsection{The Weyl group} In this section, we consider a Dynkin Borel subalgebra $\mathfrak{b}\supset\mathfrak{h}$ of~$\mathfrak{g}$. By Proposition~\ref{equivCond}(ii), we can assume that $\mathfrak{g}=\varinjlim\widetilde{\mathfrak{g}}_n$ where each $\widetilde{\mathfrak{g}}_n+\mathfrak{b}$ is a (parabolic) subalgebra.
\subsubsection{}The Weyl group $W_n:=W(\widetilde{\mathfrak{g}}_n:\mathfrak{h}_n)$ is naturally a subgroup of~$W_{n+1}$. Moreover, by assumption, the simple reflections of~$W_n$ as a Coxeter group are mapped to simple reflections in~$W_{n+1}$. The infinite Coxeter group
$$W(\mathfrak{g}:\mathfrak{h})\,=\,W\,:=\,\varinjlim W_n$$ has a natural action on $\mathfrak{h}^\ast$. For any $\alpha\in \Phi^+$, we denote the corresponding reflection by~$r_\alpha\in W$.
\subsubsection{}\label{rhoshift}It can easily be checked, see e.g. \cite[Corollary~1.8]{Nam}, that there exists $\rho\in\mathfrak{h}^\ast$ such that the restriction $\rho|_{\mathfrak{h}_n^\ast}$ is the half sum of $\widetilde{\mathfrak{b}}_n$-positive roots for $\widetilde{\mathfrak{g}}_n$, for every $n\in\mathbb{N}$. The dot action of~$W$ on $\mathfrak{h}^\ast$ is given by
$$w\cdot\lambda\;=\;w(\lambda+\rho)-\rho.$$
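Concretely, for a reflection $r_\alpha$ with $\alpha\in\Phi^+$, the dot action is given by the familiar formula $r_\alpha\cdot\lambda=\lambda-\langle\lambda+\rho,\alpha^\vee\rangle\alpha$, where $\alpha^\vee$ denotes the coroot of~$\alpha$, computed in any $\widetilde{\mathfrak{g}}_n$ containing $\mathfrak{g}^{\pm\alpha}$. In particular, $w\cdot(-\rho)=-\rho$ for all $w\in W$.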
\subsubsection{} For a weight $\lambda\in\mathfrak{h}^\ast$, the {\bf integral Weyl group} $W[\lambda]$ is the subgroup of~$W$ of elements~$w\in W$ for which $w\cdot\lambda-\lambda\in\mathbb{Z}\Phi$. A weight $\lambda$ is {\bf integral} if $W=W[\lambda]$.
A weight $\lambda$ is {\bf dominant} if $w\cdot\lambda\le \lambda$, for all $w\in W[\lambda]$, and {\bf antidominant} if $w\cdot\lambda\ge \lambda$, for all $w\in W[\lambda]$. A weight is {\bf regular} if $w\cdot\lambda\not=\lambda$, for all $w\in W[\lambda]$.
The {\bf orbit} of a weight $\lambda$ is denoted by~$[\![\lambda]\!]=W[\lambda]\cdot\lambda$.
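\begin{ex}
By the above, the weight $-\rho$ is both dominant and antidominant, and it is not regular as soon as $\mathfrak{g}\not=\mathfrak{h}$, since $W[-\rho]=W$. On the other hand, the weight $0$ is integral, dominant and regular: working in any $\widetilde{\mathfrak{g}}_n$ with $e\not=w\in W_n$, one checks that $\rho-w(\rho)$ is a non-zero finite sum of positive roots, so that $w\cdot 0=w(\rho)-\rho$ is strictly smaller than $0$.
\end{ex}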
\section{Verma modules}
Consider a root-reductive Lie algebra~$\mathfrak{g}$ with Cartan subalgebra $\mathfrak{h}$ and Borel subalgebra~$\mathfrak{b}\supset\mathfrak{h}$.
\subsection{Simple and (dual) Verma modules}
Recall the {Verma module}
$$\Delta(\lambda)\;=\;U(\mathfrak{g})\otimes_{U(\mathfrak{b})}\Bbbk_\lambda$$
from equation~\eqref{Verma}. It is easy to see that $\Delta(\lambda)$ has a unique maximal submodule. The corresponding simple quotient of~$\Delta(\lambda)$ is denoted by~$L(\lambda)$.
We will typically use the notation $v_\lambda$ for a non-zero element in $1\otimes \Bbbk_\lambda\subset\Delta(\lambda)$.
The following lemma states the universal property of Verma modules.
\begin{lemma}\label{LemDeltaProj}
For $M\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$ with $M_\nu=0$ for all $\nu> \mu$, there is an isomorphism
$$\Hom_{\mathfrak{g}}(\Delta(\mu),M)\;\stackrel{\sim}{\to} \; M_\mu,\quad \alpha\mapsto \alpha(v_\mu).$$
Consequently, we have
$\dim \Hom_{\mathfrak{g}}(\Delta(\mu),M)\;=\; [M:L(\mu)].$
\end{lemma}
\begin{proof}
By adjunction, we have
$$\Hom_{\mathfrak{g}}(\Delta(\mu),M)\cong \Hom_{\mathfrak{h}}(\Bbbk_\mu, M^{\mathfrak{n}^+})\cong \Hom_{\mathfrak{h}}(\Bbbk_\mu, M)\cong M_\mu,$$
where the second isomorphism follows from the assumptions on $\mathrm{supp} M $.
\end{proof}
\begin{cor}\label{CorExt1}
For $M\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$ with $M_\nu=0$ for all $\nu> \mu$, we have
$$\Ext^1_{\mathbf{C}(\mathfrak{g},\mathfrak{h})}(\Delta(\mu),M)=0.$$
\end{cor}
\begin{proof}
It follows from Lemma~\ref{LemDeltaProj} that $\Delta(\mu)$ is projective in the Serre subcategory of $\mathbf{C}(\mathfrak{g},\mathfrak{h})$ of modules $M$ with $M_\nu=0$ for all $\nu> \mu$.
\end{proof}
\subsubsection{} If $\mathfrak{b}$ is a Dynkin Borel subalgebra, then $\Delta(\lambda)\in \mathcal{C}(\mathfrak{g},\mathfrak{h})$ by Proposition~\ref{equivCond}. In that case we introduce the {\bf dual Verma module}
$$\nabla(\lambda)\;:=\;\Delta(\lambda)^{\vee},$$ where $\vee$ is the duality on $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ of~\ref{SecWeightM}.
It follows from equation~\eqref{suppDual} that~$L(\lambda)\cong L(\lambda)^\vee$.
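Note that, dually to the fact that $\Delta(\lambda)$ has a unique maximal submodule, the module $\nabla(\lambda)$ has a unique simple submodule, namely $L(\lambda)\cong L(\lambda)^\vee$: every non-zero quotient of $\Delta(\lambda)$ surjects onto $L(\lambda)$, so by applying $\vee$ every non-zero submodule of $\nabla(\lambda)$ contains $L(\lambda)$.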
\subsection{Reduction to root-reductive subalgebras}
Consider a parabolic subalgebra $\mathfrak{p}\supset\mathfrak{b}$ with reductive part $\mathfrak{l}$.
\begin{lemma}\label{LemMult}
For $\lambda,\mu\in\mathfrak{h}^\ast$ with $\lambda-\mu\in\mathbb{Z}\Phi(\mathfrak{l})$, the following holds:
\begin{enumerate}[(i)]
\item $[\Delta(\lambda):L(\mu)]=[\Delta_{\mathfrak{l}}(\lambda):L_{\mathfrak{l}}(\mu)];$
\item $\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))\;\cong\;\Hom_{\mathfrak{l}}(\Delta_{\mathfrak{l}}(\mu),\Delta_{\mathfrak{l}}(\lambda)).$
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) follows from the observations
$${\rm Ind}^{\mathfrak{g}}_{\mathfrak{l},+}\Delta_{\mathfrak{l}}(\lambda)\cong\Delta(\lambda),\qquad {\rm Res}^{\mathfrak{g}}_{\mathfrak{l},\lambda}\Delta(\lambda)\cong\Delta_{\mathfrak{l}}(\lambda),$$
$$[{\rm Ind}^{\mathfrak{g}}_{\mathfrak{l},+}L_{\mathfrak{l}}(\mu):L(\mu)]=1\qquad\mbox{and}\qquad {\rm Res}^{\mathfrak{g}}_{\mathfrak{l},\lambda}L(\mu)\cong L_{\mathfrak{l}}(\mu).$$
By adjunction, we have
$$\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))\;\cong\; \Hom_{\mathfrak{l}}(\Delta_{\mathfrak{l}}(\mu),\Delta(\lambda)^{\mathfrak{u}^+})\;\cong\; \Hom_{\mathfrak{l}}(\Delta_{\mathfrak{l}}(\mu),\Delta_{\mathfrak{l}}(\lambda)),$$
where the last isomorphism follows from weight considerations. This proves part (ii).
\end{proof}
\subsection{Verma modules for Dynkin Borel subalgebras}
Assume that~$\mathfrak{b}$ is a Dynkin Borel subalgebra. By Proposition~\ref{equivCond}(ii), without loss of generality we may {\em assume that each $\widetilde{\mathfrak{g}}_n+\mathfrak{b}$ is a parabolic subalgebra of $\mathfrak{g}$.}
\begin{thm}\label{ThmMult}
Consider arbitrary $\lambda,\mu\in\mathfrak{h}^\ast$. For any $n\in\mathbb{N}$ such that~$\lambda-\mu \in\mathbb{Z}\Phi_n$, we have
\begin{enumerate}[(i)]
\item $[\Delta(\lambda):L(\mu)]\;=\;[\Delta_{n}(\lambda):L_{n}(\mu)];$
\item $\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))\;\cong\;\Hom_{\mathfrak{g}_n}(\Delta_n(\mu),\Delta_n(\lambda)).$
\end{enumerate}
\end{thm}
\begin{proof}
This is a special case of Lemma~\ref{LemMult}.
\end{proof}
\begin{rem}For integral regular weights, Theorem~\ref{ThmMult}(i) was obtained in \cite[Proposition~3.6]{Nam} through different methods. Our results completely determine the decomposition multiplicities of Verma modules for Dynkin Borel subalgebras in terms of the Kazhdan-Lusztig multiplicities for finite dimensional reductive Lie algebras.
Analogues of Theorem~\ref{ThmMult} for parabolic Verma modules, where the reductive subalgebra of the parabolic subalgebra has finite rank, can be proved using the same method. Analogues for specific parabolic subalgebras with reductive subalgebra of finite {\em co}rank have been proved in e.g.~\cite{CLW, CWZ}.
\end{rem}
\subsubsection{} The {\bf Bruhat order} on $\mathfrak{h}^\ast$ is the partial order $\uparrow$ generated by the relation
$$\mu\uparrow \lambda\qquad\mbox{if}\qquad \mu=r_\alpha\cdot\lambda \;\mbox{ for some }\;\alpha\in \Phi^+\quad\mbox{and}\quad \mu\le \lambda.$$
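For example, take $\alpha\in\Phi^+$ and set $n:=\langle\lambda+\rho,\alpha^\vee\rangle$. If $n\in\mathbb{Z}_{>0}$, then $r_\alpha\cdot\lambda=\lambda-n\alpha\uparrow\lambda$; if $n\in\mathbb{Z}_{<0}$, then instead $\lambda\uparrow r_\alpha\cdot\lambda$; and if $n=0$, then $r_\alpha\cdot\lambda=\lambda$.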
\begin{cor}\label{CorBGGThm}
Consider arbitrary $\lambda,\mu\in\mathfrak{h}^\ast$. For any $n\in\mathbb{N}$ such that~$\lambda-\mu \in\mathbb{Z}\Phi_n$, we have
\begin{enumerate}[(i)]
\item $[\Delta(\lambda):L(\mu)]\not=0$ if and only if $\mu\uparrow \lambda$;
\item $\dim\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))=\begin{cases}1&\mbox{if $\mu\uparrow\lambda$}\\
0&\mbox{otherwise.}\end{cases}$
\end{enumerate}
\end{cor}
\begin{proof}
This follows immediately from Theorem~\ref{ThmMult} and the BGG theorem, see~\cite[Theorem~5.1]{Humphreys} and~\cite[Theorem~4.2(b)]{Humphreys}.
\end{proof}
\begin{prop}\label{PropDN}
Let $\lambda,\mu\in\mathfrak{h}^\ast$. Then:
\begin{enumerate}[(i)]
\item $\Ext^1_{\mathbf{C}(\mathfrak{g},\mathfrak{h})}(\Delta(\lambda),\nabla(\mu))=0$;
\item $\dim_{\Bbbk}\Hom_{\mathfrak{g}}(\Delta(\lambda),\nabla(\mu))=\begin{cases}1&\mbox{if $\lambda=\mu$}\\
0&\mbox{if $\lambda\not=\mu$;}
\end{cases}$
\item $\Ext^1_{\mathbf{C}(\mathfrak{g},\mathfrak{h})}(\Delta(\lambda),\Delta(\mu))=0\quad\mbox{unless $\lambda< \mu$.}$
\end{enumerate}
\end{prop}
\begin{proof}
Part (iii) is a special case of Corollary~\ref{CorExt1}. If $\lambda\not< \mu$, part (i) follows also from Corollary~\ref{CorExt1}. If $\lambda<\mu$, part (i) follows from the previous case by applying $\vee$. Similarly, for $\lambda\not>\mu$ part (ii) follows from Lemma~\ref{LemDeltaProj}, and in the remaining cases one applies~$\vee$.
\end{proof}
\begin{rem}
Proposition~\ref{PropDN} was first obtained in \cite[Propositions~3.8 and~3.9]{Nam}.
\end{rem}
\begin{lemma}\label{LemRad}
If $[\mathfrak{g},\mathfrak{g}]$ is infinite dimensional, the radical of $\Delta(0)$ is not finitely generated.
\end{lemma}
\begin{proof}
The radical $M$ of $\Delta(0)$ is the kernel of the surjective homomorphism $\Delta(0)\twoheadrightarrow\Bbbk$. Assume that $M$ is finitely generated. Then there also exist finitely many {\em weight} vectors in $M$ which generate $M$. Since $M$ is locally $U(\mathfrak{b})$-finite, we conclude via the PBW theorem that there are finitely many weights $\{\lambda_1,\lambda_2,\ldots,\lambda_l\}$ in $\mathrm{supp} M$ such that each $\mu\in \mathrm{supp} M$ satisfies $\mu\le\lambda_i$ for some $1\le i\le l$. However, for each simple negative root $\alpha$, the only $\lambda\in\mathrm{supp} M$ for which $\alpha\le\lambda$ is $\lambda=\alpha$. If $[\mathfrak{g},\mathfrak{g}]$ is infinite dimensional, there are infinitely many such $\alpha$, and we have a contradiction.
\end{proof}
\subsection{Modules with $\Delta$-flag or $\nabla$-flag}
Assume that~$\mathfrak{b}$ is a Dynkin Borel subalgebra.
\subsubsection{}
Denote by~$\cF^{\Delta}(\mathfrak{g},\mathfrak{b})$, resp. $\cF^{\nabla}(\mathfrak{g},\mathfrak{b})$, the full subcategory of modules in~$\mathbf{C}(\mathfrak{g},\mathfrak{h})$ which admit a finite ${\Delta}$-flag, resp. ${\nabla}$-flag. By a ${\Delta}$-flag of~$M$ we mean a filtration
\begin{equation}\label{eqDflag}0=F_kM\subset F_{k-1}M\subset\cdots\subset F_1M\subset F_0M=M,\end{equation}
with $F_iM/F_{i+1}M\cong {\Delta}(\mu_i)$ for some $\mu_i\in\mathfrak{h}^\ast$, for each $0\le i<k$. By Proposition~\ref{equivCond}, the categories $\cF^\Delta$ and $\cF^\nabla$ are actually subcategories of $\mathcal{C}(\mathfrak{g},\mathfrak{h})$. The categories $\cF^\Delta$ and $\cF^\nabla$ are not abelian, but we consider them as exact categories, where the short exact sequences are precisely the short exact sequences in $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ for which every term is in $\cF^\Delta$, resp. $\cF^\nabla$.
For $M\in \cF^{\Delta}$, we denote by~$(M:{\Delta}(\lambda))$ the number of indices~$i$ for which~$F_iM/F_{i+1}M$ in \eqref{eqDflag} is isomorphic to~${\Delta}(\lambda)$. It is easy to see that $(M:{\Delta}(\lambda))$ is independent of the chosen filtration, for instance by looking at the character of the modules, or from the following lemma.
\begin{lemma}\label{LemDN}
For $M\in \cF^{\Delta}$ and~$\lambda\in\mathfrak{h}^\ast$, we have
$$(M:{\Delta}(\lambda))\;=\;\dim_{\Bbbk}\Hom_{\mathfrak{g}}(M,{\nabla}(\lambda)).$$
\end{lemma}
\begin{proof}
This follows by induction on the length of the filtration, by applying the properties in Proposition~\ref{PropDN}(i) and (ii).
\end{proof}
Here is an alternative characterisation of the categories $\cF^{\Delta}$ and~$\cF^{\nabla}$.
\begin{lemma}\label{stupidlemma}
The category~$\cF^{\Delta}$, resp. $\cF^{\nabla}$, is the full subcategory of~$\mathbf{C}(\mathfrak{g},\mathfrak{h})$ consisting of finite direct sums of modules isomorphic to~$U(\mathfrak{n}^-)$ when considered as~$U(\mathfrak{n}^-)$-modules, resp. finite direct sums of modules isomorphic to~$U(\mathfrak{n}^+)^{\circledast}$ when considered as~$U(\mathfrak{n}^+)$-modules.
\end{lemma}
\begin{proof}
It is clear that objects of~$\cF^{\Delta}$, resp. $\cF^{\nabla}$, restrict to finite direct sums of modules isomorphic to~$U(\mathfrak{n}^-)$ when considered as~$U(\mathfrak{n}^-)$-modules, resp. finite direct sums of modules isomorphic to~$U(\mathfrak{n}^+)^{\circledast}$ when considered as $U(\mathfrak{n}^+)$-modules.
Now consider $M\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$ such that~${\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}M\cong U(\mathfrak{n}^-)$. Since $M$ must be a weight module, the element $1\in U(\mathfrak{n}^-)$ corresponds to a one dimensional space of weight~$\lambda$ in~$M$ which must be annihilated by~$\mathfrak{n}^+$ and generates~$M$ as an~$\mathfrak{n}^-$-module. It follows that~$M\cong {\Delta}(\lambda)$.
Now consider $M\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$ such that~${\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}M\cong U(\mathfrak{n}^-)^{\oplus k}$ for some $k>1$. Since $M$ is a weight module, as an $\mathfrak{h}$-module $M$ is isomorphic to $\oplus_i {\Delta}(\lambda_i)$ for some weights $\lambda_1,\ldots,\lambda_k$. Without loss of generality we assume that $\lambda_1$ is maximal among these weights. This shows that there is an injective $\mathfrak{g}$-module morphism ${\Delta}(\lambda_1)\hookrightarrow M$. We can then proceed by considering $M/{\Delta}(\lambda_1)$.
\end{proof}
The above lemma has the following three immediate consequences.
\begin{cor}\label{CorAdd}
For~$M,M'\in \mathbf{C}(\mathfrak{g},\mathfrak{h})$, we have that~$M\oplus M'$ belongs to~$\cF^{\Delta}$, resp. $\cF^{\nabla}$, if and only if both $M$ and~$M'$ belong to~$\cF^{\Delta}$, resp. $\cF^{\nabla}$.
\end{cor}
\begin{cor}\label{resolv}
Consider a short exact sequence in~$\mathbf{C}(\mathfrak{g},\mathfrak{h})$
$$0\to A\to B\to C\to 0$$
with $C\in \cF^{\Delta}$. Then we have $A\in \cF^{\Delta}$ if and only if $B\in\cF^{\Delta}$.
\end{cor}
\begin{cor}\label{CorDua}
The duality functor~$\circledast$, resp. $\vee$, on $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ restricts to a contravariant equivalence of exact categories
$$\circledast: \cF^\Delta({\mathfrak{g},\mathfrak{b}})\;\tilde\to\; \cF^\nabla({\mathfrak{g},\mathfrak{b}^-}),\quad\mbox{resp.}\quad\; \vee : \cF^{\Delta}(\mathfrak{g},\mathfrak{b})\;\tilde\to\; \cF^{\nabla}(\mathfrak{g},\mathfrak{b}).$$
\end{cor}
\subsection{Verma modules for non-Dynkin Borel subalgebras}
First we determine all morphism spaces between Verma modules in terms of those for Dynkin Borel subalgebras (in Theorem~\ref{ThmMult}).
\begin{prop}\label{PropHomVerma}
Consider $\lambda,\mu\in\mathfrak{h}^\ast$.
Let $\mathfrak{l}_{\mathfrak{b}}$ be the $\mathfrak{b}$-finite root-reductive subalgebra of~$\mathfrak{g}$.
\begin{enumerate}[(i)]
\item If $\lambda$ and~$\mu$ are not remote, then
$$\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))\;\cong\;\Hom_{\mathfrak{l}_{\mathfrak{b}}}(\Delta_{\mathfrak{l}_{\mathfrak{b}}}(\mu),\Delta_{\mathfrak{l}_{\mathfrak{b}}}(\lambda)).$$
\item If $\lambda$ and~$\mu$ are remote, then $\Hom_{\mathfrak{g}}(\Delta(\mu),\Delta(\lambda))\;=\;0.$
\end{enumerate}
\end{prop}
\begin{proof}Part (i) is a special case of Lemma~\ref{LemMult}(ii).
Now we prove part (ii). We take a basis of~$\mathfrak{n}^-$ consisting of root vectors. Let us say that a root in $\Phi^+$ has {\em finite length} if it is a finite sum of simple roots, and {\em infinite length} otherwise. We extend the partial order $\le$ on $\Phi^+$ to a total order $\preceq$ such that all roots of finite length are smaller than all roots of infinite length. Then we take a PBW basis of~$U(\mathfrak{n}^-)$, where each basis element is a product of root vectors, and elements of~$\mathfrak{g}^{-\alpha}$ appear to the left of elements of~$\mathfrak{g}^{-\beta}$ if $\alpha\succ \beta$. An arbitrary weight vector of~$\Delta(\lambda)$ is then of the form
$$w=\sum_{i=1}^k u_i\otimes v=\sum_{i=1}^k u_iv_\lambda,$$
where $v\in \Bbbk_\lambda$ and each $u_i$ is a PBW basis element of~$U(\mathfrak{n}^-)$.
Let $\mu\le \lambda$ be remote from~$\lambda$ and assume that~$w$ is of weight $\mu$.
Fix a minimal positive root $\alpha$ of infinite length such that~$\mathfrak{g}^{-\alpha}$ appears in one of the $u_i$. Now, take $\beta\in\Sigma $ such that~$\alpha-\beta\in \Phi^+$. We thus have $[\mathfrak{g}^\beta,\mathfrak{g}^{-\alpha}]\not=0$, and for a non-zero $X\in \mathfrak{g}^\beta$ we consider
$$Xw=\sum_{i=1}^k [X,u_i]\otimes v.$$
Amongst other possible terms, any $[X,u_i]$ such that $\mathfrak{g}^{-\alpha}$ appears in~$u_i$, has a term (in the natural expansion of~$[X,u_i]$ based on the action of~$X$ on each factor in the product $u_i$) with a factor in~$\mathfrak{g}^{-\alpha+\beta}$ which is by construction a PBW basis element. Moreover, this basis element does not appear in other terms of~$Xw$, by minimality of $\alpha$.
It thus follows that~$X\in \mathfrak{g}^\beta\subset\mathfrak{n}^+$ acts non-trivially on $w$. Consequently, there exists no non-zero morphism from~$\Delta(\mu)$ to~$\Delta(\lambda)$.
\end{proof}
\begin{rem}
Proposition \ref{PropHomVerma}(i) was first obtained in \cite[Section~6.4]{NamT}.
\end{rem}
\section{The category~$\mathbf{O}$}\label{Sec4}
For the rest of the paper, fix a root-reductive Lie algebra~$\mathfrak{g}=\varinjlim\widetilde{\mathfrak{g}}_n$ with Dynkin Borel subalgebra~$\mathfrak{b}=\mathfrak{h}\oplus\mathfrak{n}^+$. In particular, $\mathfrak{g}$ can be a finite dimensional reductive Lie algebra. By Proposition~\ref{equivCond}(ii), without loss of generality we may {\em assume that each $\widetilde{\mathfrak{g}}_n+\mathfrak{b}$ is a (parabolic) subalgebra.}
\subsection{Definitions}
{Our} main object of study will be the following abelian category.
\begin{ddef}\label{DefO}
The category~$\mathbf{O}=\mathbf{O}(\mathfrak{g},\mathfrak{b})$ is the full subcategory of~$\mathbf{C}=\mathbf{C}(\mathfrak{g},\mathfrak{h})$ of modules~$M$ on which $\mathfrak{b}$ acts locally finitely.
\end{ddef}
It is straightforward to see that the category $\mathbf{O}$ is abelian and closed under direct limits. The reason we restrict to Dynkin Borel subalgebras is that we want to study a category containing all the Verma modules, see Proposition~\ref{equivCond}(vi). Our motivation to choose precisely the category $\mathbf{O}$ is that $\mathbf{O}$ is a Grothendieck category, see Section~\ref{GroCat}.
The simple objects in~$\mathbf{O}$ are, up to isomorphism, the simple highest weight modules~$L(\lambda)$ for $\lambda\in\mathfrak{h}^\ast$.
The categories $\cF^\Delta$ and $\cF^\nabla$ are exact subcategories of $\mathbf{O}$.
\begin{rem}
Let $\mathcal O(\mathfrak{g},\mathfrak{b})$ denote the full subcategory of $\mathbf{O}(\mathfrak{g},\mathfrak{b})$ of finitely generated modules.
In case $\mathfrak{g}$ is finite dimensional (so a reductive Lie algebra), the universal enveloping algebra $U(\mathfrak{g})$ is noetherian and the abelian category $\mathcal O(\mathfrak{g},\mathfrak{b})$ is the ordinary BGG category of \cite{BGG, Humphreys}. In this case, the relation between the categories $\mathcal O$ and~$\mathbf{O}$ is summarised in Proposition~\ref{FDprop} below. In our generality, the category $\mathcal O(\mathfrak{g},\mathfrak{b})$ need not be abelian, see Lemma~\ref{LemRad}, and it is natural to study $\mathbf{O}(\mathfrak{g},\mathfrak{b})$.
\end{rem}
\begin{rem}
In \cite{Nam}, the abelian category~$\bar{\mathcal O}(\mathfrak{g},\mathfrak{b})$ is studied, which is the full subcategory of~$\mathbf{O}(\mathfrak{g},\mathfrak{b})$ of modules with finite dimensional weight spaces. We thus have Serre subcategories
$$\xymatrix{
\mathcal{C}(\mathfrak{g},\mathfrak{h}) \ar@{^{(}->}[r]&\mathbf{C}(\mathfrak{g},\mathfrak{h})\\
\bar{\mathcal O}(\mathfrak{g},\mathfrak{b})\ar@{^{(}->}[r]\ar@{^{(}->}[u]& \mathbf{O}(\mathfrak{g},\mathfrak{b}).\ar@{^{(}->}[u]
}
$$
It follows easily from e.g. Proposition~\ref{equivCond}(vi) that $\mathcal O(\mathfrak{g},\mathfrak{b})\subset \bar{\mathcal O}(\mathfrak{g},\mathfrak{b})$. Hence the category $\bar{\mathcal O}(\mathfrak{g},\mathfrak{b})$ is another natural abelian enlargement of $\mathcal O(\mathfrak{g},\mathfrak{b})$.
\end{rem}
From now on we will leave out the references to $\mathfrak{g},\mathfrak{b}$ and $\mathfrak{h}$ in $\mathbf{O}(\mathfrak{g},\mathfrak{b})$, $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ etc.
\subsubsection{Serre subcategories by truncation}
Let $\mathsf{K}$ be any ideal in~$(\mathfrak{h}^\ast,\le)$. The Serre subcategory~${}^{\mathsf{K}}\mathbf{O}$ of~$\mathbf{O}$ is defined as the full subcategory of modules~$M$ with $\mathrm{supp} M \subset\mathsf{K}$. Clearly, we have
\begin{equation}\label{eqDtrunc}\Delta(\lambda)\in {}^{\mathsf{K}}\mathbf{O}\quad\Leftrightarrow\quad \lambda\in\mathsf{K}\quad\Leftrightarrow\quad L(\lambda)\in{}^{\mathsf{K}}\mathbf{O}.\end{equation}
Similarly, ${}^{\mathsf{K}}\bar{\mathcal O}$, respectively $\cF^\Delta[\mathsf{K}]$, is the subcategory of $\bar{\mathcal O}$, respectively of $\cF^\Delta$, of modules with support in~$\mathsf{K}$. The condition for $M\in\cF^\Delta$ to be in $\cF^\Delta[\mathsf{K}]$ is equivalently characterised as $(M:\Delta(\lambda))=0$ if $\lambda\not\in\mathsf{K}$.
\subsubsection{Upper finite ideals}A special role will be played by ideals $\mathsf{K}\subset \mathfrak{h}^\ast$ which are upper finite. We denote by $\mathcal{K}$ the set of upper finite ideals in $(\mathfrak{h}^\ast,\le)$. Then $\mathcal{K}$ is a directed set for the partial order given by inclusion.
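\begin{ex}
Since the partial order $\le$ is interval finite by Proposition~\ref{equivCond}(iii), every finitely generated ideal in $(\mathfrak{h}^\ast,\le)$ is upper finite, see~\ref{Complete2}. In particular, every principal ideal $\{\nu\in\mathfrak{h}^\ast\,|\,\nu\le\lambda\}$ belongs to~$\mathcal{K}$. On the other hand, the ideal $\mathfrak{h}^\ast$ itself is not upper finite as soon as $\mathfrak{g}\not=\mathfrak{h}$.
\end{ex}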
The following lemma is obvious from the fact that simple highest weight modules have finite dimensional weight spaces.
\begin{lemma}\label{LemKmult}
For an upper finite ideal $\mathsf{K}\in\mathcal{K}$ and~$M\in{}^{\mathsf{K}}\mathbf{O}$, we have
$$M\in{}^{\mathsf{K}}\bar{\mathcal O}\quad\;\Leftrightarrow\; \quad [M:L(\mu)]<\infty\mbox{ for all $\mu\in\mathsf{K}$.}$$
\end{lemma}
\begin{rem}
When $\mathfrak{g}$ is not finite dimensional, there exist indecomposable modules $M$ in $\mathbf{O}$ for which $\mathrm{supp} M$ is not upper finite. For instance, when $\lambda$ is integral, regular and antidominant we can consider an infinite chain
$$\lambda=\lambda^0\uparrow\lambda^1\uparrow \lambda^2\uparrow\cdots.$$
By Corollary~\ref{CorBGGThm}(ii), we have morphisms $\Delta(\lambda^i)\to\Delta(\lambda^{i+1})$, for all $i\in\mathbb{N}$. The $\mathfrak{g}$-module $M:=\varinjlim \Delta(\lambda^i)$ belongs to $\mathbf{O}$. However, since $M$ is not an object of $\overline{\mathcal O}$, this does not yet answer the question, raised in \cite[Section~5.3]{NamT}, of whether there exist indecomposable modules in~$\bar{\mathcal O}$ whose support is not upper finite.
\end{rem}
\subsection{Locally projective modules} \label{SecConstProj}
\begin{thm}\label{ThmProj} Take $\mathsf{K}\in\mathcal{K}$.
\begin{enumerate}[(i)]
\item For each $\mu\in\mathsf{K}$, there exists a module $P_{\mathsf{K}}(\mu)\in{}^{\mathsf{K}}\bar{\mathcal O}\subset {}^{\mathsf{K}}\mathbf{O}$ such that:
\begin{enumerate}[(a)]
\item $\dim_{\Bbbk}\Hom_{{}^{\mathsf{K}}\mathbf{O}}(P_{\mathsf{K}}(\mu),-)\;=\;[-:L(\mu)].$
\item $P_{\mathsf{K}}(\mu)\in \cF^{\Delta}[\mathsf{K}]$ and
$$(P_{\mathsf{K}}(\mu):\Delta(\nu))=
[\Delta(\nu):L(\mu)]\qquad\mbox{for all $\nu\in \mathsf{K}$}.$$
\item $[P_{\mathsf{K}}(\mu):L(\nu)]=0$ unless $\nu\in [\![\mu]\!]$.
\end{enumerate}
\item The category ${}^{\mathsf{K}}\mathbf{O}$ has enough projective objects. Each projective object is a direct sum of modules isomorphic to $P_{\mathsf{K}}(\mu)$ with $\mu\in\mathsf{K}$.
\end{enumerate}
\end{thm}
We precede the proof with some discussions and a lemma.
\begin{rem}\label{RemBruhat}${}$
\begin{enumerate}[(i)]
\item The existence of projective objects $P_{\mathsf{K}}(\mu)$ in ${}^{\mathsf{K}}\bar{\mathcal O}$ was first established through different methods in \cite[Section~4]{Nam}.
\item Even though $P_{\mathsf{K}}(\mu)\in{}^{\mathsf{K}}\bar{\mathcal O}$, the category ${}^{\mathsf{K}}\bar{\mathcal O}$ generally does not have enough projective objects. An example is given by considering a regular integral dominant $\lambda\in\mathfrak{h}^\ast$ and $M:=\bigoplus_{\mu\in[\![\lambda]\!]}L(\mu)$. By Corollary~\ref{CorBGGThm}(i), we have $\dim M_\nu\le \dim\Delta(\lambda)_\nu$ for all $\nu\in\mathfrak{h}^\ast$, so $M\in \bar{\mathcal O}$. On the other hand, by Theorem~\ref{ThmProj}(i)(b), a projective cover of $M$ has infinite dimensional weight spaces. This answers \cite[Open Question~4.15]{Nam} negatively.
\item It will follow a posteriori that the condition on the ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ in Theorem~\ref{ThmProj} can be weakened to demand that it be upper finite with respect to the Bruhat order $\uparrow$.
\item Given an ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ and $\mu\in\mathsf{K}$, the existence of a module $P_{\mathsf{K}}(\mu)$ as in Theorem~\ref{ThmProj}(i) can be proved under the weaker assumption that $\{\nu\in\mathsf{K}\,|\,\nu\ge \mu\}$ is finite (or even if just~$\{\nu\in\mathsf{K}\,|\, \mu\uparrow \nu\}$ is finite).
\end{enumerate}
\end{rem}
We follow the approach of \cite[Section~4]{BGG}, see also \cite{CM}. We fix $\mathsf{K}\in\mathcal{K}$ and $\mu\in \mathsf{K}$.
\subsubsection{}\label{DefQ}
We define a $U(\mathfrak{b})$-module~$V_\mu^{\mathsf{K}}$ with $\mathrm{supp} V_\mu^{\mathsf{K}}\subset \mathsf{K}$, and with presentation
\begin{equation}\label{presentation}\bigoplus_{\kappa\in S}U(\mathfrak{b})\otimes_{U(\mathfrak{h})}\Bbbk_\kappa\to U(\mathfrak{b})\otimes_{U(\mathfrak{h})}\Bbbk_\mu\to V_\mu^{\mathsf{K}}\to 0 ,\end{equation}
where~$S$ is a multiset of weights in~$\mathfrak{h}^\ast\backslash \mathsf{K}$ such that each $\kappa\in\mathfrak{h}^\ast\backslash \mathsf{K}$ appears $\dim U(\mathfrak{b})_{\kappa-\mu}$ times.
Since the set
$\{\nu\in\mathsf{K}\,|\,\nu\ge \mu\}$
is finite, $V_\mu^{\mathsf{K}}$ is finite dimensional.
We then define
$$Q_{\mathsf{K}}(\mu)\;:=\;U(\mathfrak{g})\otimes_{U(\mathfrak{b})} V_\mu^{\mathsf{K}}.$$
By construction, we have $Q_{\mathsf{K}}(\mu)\in\cF^{\Delta}[\mathsf{K}]\subset {}^{\mathsf{K}}\mathbf{O}$. The module $Q_{\mathsf{K}}(\mu)$ is generated by a vector~$v_\mu$, which we take in the image of~$\Bbbk_\mu$ under the epimorphism in \eqref{presentation}.
\begin{ex}If $\mathsf{K}=\{\nu\in\mathfrak{h}^\ast\,|\,\nu\le \mu\}$, we have $V_\mu^{\mathsf{K}}=\Bbbk_\mu$ and thus ${Q}_{\mathsf{K}}(\mu)\cong \Delta(\mu)$.
\end{ex}
\begin{lemma}\label{Qk}
The module~$Q_{\mathsf{K}}(\mu)$ is projective in~${}^{\mathsf{K}}\mathbf{O}$, and for any $M\in {}^{\mathsf{K}}{\mathbf{O}}$ we have an isomorphism
$$\Hom_{\mathfrak{g}}(Q_{\mathsf{K}}(\mu),M)\;\stackrel{\sim}{\to}\;M_{\mu},\qquad \alpha\mapsto \alpha(v_\mu).$$
\end{lemma}
\begin{proof}
Apply the exact induction functor~$U(\mathfrak{g})\otimes_{U(\mathfrak{b})}-$ to the exact sequence~\eqref{presentation}, followed by application of the left exact contravariant functor~$\Hom_{\mathfrak{g}}(-,M)$. This yields an exact sequence
$$0\to \Hom_{\mathfrak{g}}(Q_{\mathsf{K}}(\mu),M)\to \Hom_{\mathfrak{h}}(\Bbbk_\mu,M)\to \prod_{\kappa\in S}\Hom_{\mathfrak{h}}(\Bbbk_\kappa,M),$$
where we have used adjunction and equation~\eqref{eqCo}.
The right term is zero since $\mathrm{supp} M \subset \mathsf{K}$, which concludes the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmProj}]
First assume that there are no simple modules~$L$ in~${}^{\mathsf{K}}\mathbf{O}$, other than $L\cong L(\mu)$, for which~$L_\mu\not=0$. It then follows from Lemma~\ref{Qk} that~$Q_{\mathsf{K}}(\mu)$ satisfies the property of~$P_{\mathsf{K}}(\mu)$ in (i)(a). If there are other~$\nu\in\mathsf{K}$ such that~$L(\nu)_\mu\not=0$, then there are only finitely many such $\nu$. By induction we can assume that we have already constructed $P_{\mathsf{K}}(\nu)$ for all such $\nu$. It follows that all these are isomorphic to direct summands (with respective multiplicities $\dim L(\nu)_\mu$) of~$Q_{\mathsf{K}}(\mu)$.
The remaining direct summand of~$Q_{\mathsf{K}}(\mu)$ satisfies the properties of~$P_{\mathsf{K}}(\mu)$ in (i)(a).
By Corollary~\ref{CorAdd}, the module $P_{\mathsf{K}}(\mu)$ is an object of~$\cF^\Delta$.
Lemma~\ref{LemDN} implies that for any $\nu\in\mathfrak{h}^\ast$
$$(P_{\mathsf{K}}(\mu):\Delta(\nu))\;=\;\dim\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu),\nabla(\nu)).$$
If $\nu\in\mathsf{K}$, then part (i)(a) yields
$$\dim\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu),\nabla(\nu))\;=\;[\nabla(\nu):L(\mu)]\;=\; [\Delta(\nu):L(\mu)],$$
where the latter equality follows from the duality $\vee$. This concludes the proof of part (i)(b).
Part (i)(c) follows from part (i)(b) and Corollary~\ref{CorBGGThm}(i).
We consider an arbitrary module $M\in {}^{\mathsf{K}}\mathbf{O}$. It has a set of generating elements~$\{v_\alpha\}\subset M$, which we can choose to be weight vectors. Since $M\in {}^{\mathsf{K}}\mathbf{O}$, it follows that $U(\mathfrak{g})v_\alpha$ is isomorphic to a quotient of~$Q_{\mathsf{K}}(\mu_\alpha)$, where $\mu_\alpha\in\mathfrak{h}^\ast$ is the weight of~$v_\alpha$. Hence, there is an epimorphism $\bigoplus_\alpha Q_{\mathsf{K}}(\mu_\alpha)\twoheadrightarrow M$. From the universal property of coproducts, or alternatively from Lemma~\ref{LemCopr}, it follows that~$\bigoplus_\alpha Q_{\mathsf{K}}(\mu_\alpha)$ is projective. This proves part (ii).
\end{proof}
\subsection{Category $\mathbf{O}$ as a Grothendieck category}\label{GroCat}
An object $G$ in an abelian category $\mathcal{C}$ is a generator if the functor $\Hom_{\mathcal{C}}(G,-):\mathcal{C}\to\mathbf{A\hspace{-0.5mm}b}$ is faithful. Following \cite{KS}, a Grothendieck category is an abelian category which admits set valued direct sums and a generator, and in which direct limits of short exact sequences are exact. By \cite[Theorem~9.6.2]{KS}, Grothendieck categories have enough injective objects.
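As a basic example, which can be verified directly from the definitions: for a locally unital algebra $A$, the category $A$-Mod is a Grothendieck category, since direct sums and direct limits are computed on the underlying vector spaces and $\bigoplus_\alpha Ae_\alpha$ is a generator, by the isomorphisms $\Hom_A(Ae_\alpha,M)\cong e_\alpha M$.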
\begin{prop}\label{PropGroth}
The category $\mathbf{O}$ is a Grothendieck category. In particular, $\mathbf{O}$ has enough injective objects.
\end{prop}
First we prove the following lemma, which will be useful in the following sections as well.
\begin{lemma}\label{LemQO}
For each $M\in\mathbf{O}$, $\mu\in\mathfrak{h}^\ast$ and $v\in M_\mu$, there exists $\mathsf{K}\in\mathcal{K}$ such that $v$ lies in the image of a morphism $Q_{\mathsf{K}}(\mu)\to M$.
\end{lemma}
\begin{proof}
By definition, the space $U(\mathfrak{b})v$ is finite dimensional and hence, we can construct $\mathsf{K}\in\mathcal{K}$ with $\mathrm{supp} U(\mathfrak{b})v\subset \mathsf{K}$. The submodule of $M$ generated by $v$ is thus {an object of} ${}^{\mathsf{K}}\mathbf{O}${,} and the result follows from Lemma~\ref{Qk}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{PropGroth}]
It follows from the definition that $\mathbf{O}$ admits arbitrary direct sums. Direct limits of short exact sequences are exact as this is true for vector spaces. Finally, by Lemma~\ref{LemQO}, the $\mathfrak{g}$-module
$$G:=\bigoplus_{\mu\in\mathfrak{h}^\ast,\mathsf{K}\in\mathcal{K}}Q_{\mathsf{K}}(\mu)$$
is a generator in $\mathbf{O}$.
\end{proof}
\begin{cor}\label{CorDL}
For $M\in\mathbf{O}$ and $\mathsf{K}\in\mathcal{K}$, denote by ${}^{\mathsf{K}}M$ the maximal submodule of $M$ in ${}^{\mathsf{K}}\mathbf{O}$.
Then we have $M\cong \varinjlim ({}^{\mathsf{K}}M)$.
\end{cor}
\begin{proof}
We only need to prove that the canonical inclusion $\varinjlim ({}^{\mathsf{K}}M)\subset M$ is an equality. The latter follows from the fact that any weight vector in $M$ is included in a submodule of $M$ in ${}^{\mathsf{K}}\mathbf{O}$ for suitable $\mathsf{K}$; indeed, we can take that submodule to be the image of $Q_{\mathsf{K}}(\mu)$ under the morphism in Lemma~\ref{LemQO}.
\end{proof}
\subsubsection{} The analogue of Theorem~\ref{ThmProj} for injective objects holds. Concretely, we can define the injective hull $I_{\mathsf{K}}(\mu)=P_{\mathsf{K}}(\mu)^\vee$ of $L(\mu)$ in ${}^{\mathsf{K}}\mathbf{O}$. That $I_{\mathsf{K}}(\mu)$ is injective follows for instance from first defining $J_{\mathsf{K}}(\mu)=Q_{\mathsf{K}}(\mu)^\vee$. Indeed, the fact that any $M\in {}^{\mathsf{K}}\mathbf{O}$ is the union of its finitely generated submodules $\{M^\alpha\}$, where the latter belong to $\bar{\mathcal O}$, and the isomorphisms
$$\Hom_{\mathfrak{g}}(M,J_{\mathsf{K}}(\mu))\;\cong\;\varprojlim \Hom_{\mathfrak{g}}(M^\alpha,J_{\mathsf{K}}(\mu))\;\cong\;\varprojlim (M^\alpha_\mu)^\ast\;\cong\; (M_\mu)^\ast$$
show that $J_{\mathsf{K}}(\mu)$ is injective. By construction, $I_{\mathsf{K}}(\mu)$ is a direct summand of $J_{\mathsf{K}}(\mu)$.
\begin{prop}
The injective hull of $L(\mu)$ in $\mathbf{O}$ is given by $I(\mu):=\varinjlim_{\mathsf{K}\in\mathcal{K}} I_{\mathsf{K}}(\mu)$.\end{prop}
\begin{proof}
If $I$ is the injective hull of $L(\mu)$, which exists by Proposition~\ref{PropGroth}, then clearly $^{\mathsf{K}}I=I_{\mathsf{K}}(\mu)$ for all $\mathsf{K}\in\mathcal{K}$. The proposition thus follows as a special case of Corollary~\ref{CorDL}.
\end{proof}
\subsection{Blocks}
For $\lambda\in\mathfrak{h}^\ast$, let $\mathbf{O}_{[\![ \lambda]\!]}$ denote the full subcategory of modules $M$ in~$\mathbf{O}$ such that~$[M:L(\mu)]=0$ whenever $\mu\not\in[\![\lambda]\!]$. We use similar notation for $\bar{\mathcal O}$ and~$\mathcal O$.
\begin{prop}\label{PropBlocks}
{There is} an equivalence of categories
$$\prod_{[\![ \lambda]\!]}\mathbf{O}_{[\![ \lambda]\!]}\;\stackrel{\sim}{\to}\; \mathbf{O},\qquad (M_{[\![\lambda]\!]})_{[\![\lambda]\!]}\mapsto\bigoplus_{[\![\lambda]\!]}M_{[\![\lambda]\!]}.$$
\end{prop}
\begin{proof}
By definition,
$$\Hom_{\prod_{[\![ \lambda]\!]}\mathbf{O}_{[\![ \lambda]\!]}}\left((M_{[\![\nu]\!]})_{[\![\nu]\!]},(N_{[\![\mu]\!]})_{[\![\mu]\!]}\right)\;=\;\prod_{[\![\lambda]\!]}\Hom_{\mathbf{O}_{[\![\lambda]\!]}}(M_{[\![\lambda]\!]},N_{[\![\lambda]\!]}).$$
On the other hand, by \eqref{eqCo},
$$\Hom_{\mathbf{O}}(\bigoplus_{[\![\lambda]\!]}M_{[\![\lambda]\!]},\bigoplus_{[\![\mu]\!]}N_{[\![\mu]\!]})\;\cong\; \prod_{[\![\lambda]\!]}\Hom_{\mathbf{O}}(M_{[\![\lambda]\!]},\bigoplus_{[\![\mu]\!]}N_{[\![\mu]\!]})\;\cong\; \prod_{[\![\lambda]\!]}\Hom_{\mathbf{O}}(M_{[\![\lambda]\!]},N_{[\![\lambda]\!]}).$$
Hence, the functor $\prod_{[\![ \lambda]\!]}\mathbf{O}_{[\![ \lambda]\!]}\to \mathbf{O}$ is fully faithful. We denote the isomorphism closure of its image by $\mathbf{O}'$, which is a subcategory closed under taking quotients. The generator $G$ of $\mathbf{O}$ in the proof of Proposition~\ref{PropGroth} is included in $\mathbf{O}'$. This shows that $\mathbf{O}'=\mathbf{O}$.
\end{proof}
\begin{rem}
For any ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ and~$\lambda\in\mathsf{K}$, we use the notation ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$ for the full subcategory of $\mathbf{O}$ consisting of modules which are both in~${}^{\mathsf{K}}\mathbf{O}$ and~$\mathbf{O}_{[\![\lambda]\!]}$. It is clear, by equation~\eqref{eqDtrunc}, that ${}^{\mathsf{K}'}\mathbf{O}_{[\![\lambda]\!]}={}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$ for two ideals $\mathsf{K}$ and~$\mathsf{K}'$ such that~$\mathsf{K}\cap[\![\lambda]\!]=\mathsf{K}'\cap[\![\lambda]\!]$.
\end{rem}
\begin{rem}
Proposition~\ref{PropBlocks} implies in particular \cite[Theorem~3.4]{Nam}, which was obtained through a different approach. Note however that we have proper inclusions of categories
$$\bigoplus_{[\![ \lambda]\!]}\bar{\mathcal O}_{[\![ \lambda]\!]}\;\subsetneq\;\bar{\mathcal O}\;\subsetneq\; \prod_{[\![ \lambda]\!]}\bar{\mathcal O}_{[\![ \lambda]\!]}.$$
\end{rem}
\subsection{Describing algebras}
Fix an upper finite ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ and~$\lambda\in \mathsf{K}$.
\subsubsection{}\label{DefAlgA}We set ${}^{\mathsf{K}}[\![\lambda]\!]=\mathsf{K}\cap [\![\lambda]\!]$. We define the vector space
$$A^{\mathsf{K}}_{[\![\lambda]\!]}(\mathfrak{g},\mathfrak{b})=A^{\mathsf{K}}_{[\![\lambda]\!]}\;:=\; \bigoplus_{\mu,\nu\in {}^{\mathsf{K}}[\![\lambda]\!]}\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu), P_{\mathsf{K}}(\nu)),$$
which is an algebra with multiplication $fg=g\circ f$. The algebra $A^{\mathsf{K}}_{[\![\lambda]\!]}$ is then locally finite, with idempotents~$e_\nu$ given by the identity of $P_{\mathsf{K}}(\nu)$, for all $\nu\in{}^{\mathsf{K}}[\![\lambda]\!]$.
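Note that, with this convention, we have
$$e_\mu A^{\mathsf{K}}_{[\![\lambda]\!]}e_\nu\;\cong\;\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu), P_{\mathsf{K}}(\nu))\quad\mbox{for all $\mu,\nu\in {}^{\mathsf{K}}[\![\lambda]\!]$},$$
so that products of morphisms are composed in the order in which they are written from left to right.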
\begin{thm}\label{ThmAMod}
{There is} an equivalence of categories
$${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}\;\stackrel{\sim}{\to}\; A^{\mathsf{K}}_{[\![\lambda]\!]}\mbox{{\rm -Mod}},\quad M\mapsto \bigoplus_{\mu\in {}^{\mathsf{K}}[\![\lambda]\!]}\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu),M).$$
\end{thm}
\begin{proof}
We write $A=A^{\mathsf{K}}_{[\![\lambda]\!]}$ and~$\mathscr{F}=\bigoplus_{\mu}\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu),-)$.
We observe that
$$\mathscr{F}(P_{\mathsf{K}}(\nu))\;\cong\; Ae_{\nu}\quad\mbox{for all $\nu\in{}^{\mathsf{K}}[\![\lambda]\!]$.}$$
Furthermore, $\mathscr{F}$ induces an isomorphism
$$\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\nu),P_{\mathsf{K}}(\kappa))\;\stackrel{\sim}{\to}\; \Hom_A(Ae_{\nu},Ae_{\kappa})\quad\mbox{for all $\nu,\kappa\in{}^{\mathsf{K}}[\![\lambda]\!]$.}$$
It is clear that the functor~$\mathscr{F}$ preserves arbitrary coproducts. Hence, $\mathscr{F}$ restricts to an equivalence between the categories of projective objects in~${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$ and~$A$-Mod. The fact that the functor~$\mathscr{F}$ is exact then implies that $\mathscr{F}$ is an equivalence of categories between ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$ and~$A$-Mod.
\end{proof}
\begin{rem}\label{RemDomA}
{If} $\lambda\in\mathfrak{h}^\ast$ is dominant we can take $\mathsf{K}:=\{\mu\in\mathfrak{h}^\ast\,|\, \mu\le\lambda\}$, in which case we have ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}=\mathbf{O}_{[\![\lambda]\!]}$. It thus follows that~$\mathbf{O}_{[\![\lambda]\!]}$ is equivalent to the category of modules over a locally finite associative algebra.
\end{rem}
\begin{cor}\label{CorAmod}
{We have} an equivalence of categories
$${}^{\mathsf{K}}\bar{\mathcal O}_{[\![\lambda]\!]}\;\stackrel{\sim}{\to}\; A^{\mathsf{K}}_{[\![\lambda]\!]}\mbox{{\rm -mod}},\quad M\mapsto \bigoplus_{\mu\in {}^{\mathsf{K}}[\![\lambda]\!]}\Hom_{\mathfrak{g}}(P_{\mathsf{K}}(\mu),M).$$
\end{cor}
\begin{proof}
{T}he equivalence of Theorem~\ref{ThmAMod} {induces an equivalence of the respective} Serre subcategories of modules with finite multiplicities, by Lemma~\ref{LemKmult}.
\end{proof}
\begin{lemma}
For $\mathsf{K}\subset \mathsf{K}'\in\mathcal{K}$, we have an isomorphism
$$A^{\mathsf{K}'}_{[\![\lambda]\!]}/I\;\cong\; A^{\mathsf{K}}_{[\![\lambda]\!]}\quad\mbox{with}\quad I:=\sum_{\mu\in {}^{\mathsf{K}'}[\![\lambda]\!]\backslash{}^{\mathsf{K}}[\![\lambda]\!] }A^{\mathsf{K}'}_{[\![\lambda]\!]}e_\mu A^{\mathsf{K}'}_{[\![\lambda]\!]}.$$
\end{lemma}
\begin{proof}
Consider the short exact sequence
$$0\to N(\nu)\to P_{\mathsf{K}'}(\nu)\stackrel{p_\nu}{\to} P_{\mathsf{K}}(\nu)\to0,$$ for any $\nu\in\mathsf{K}$. For all $\nu,\kappa\in\mathsf{K}$,
we have
$$\Hom_{\mathbf{C}}(N(\nu),P_{\mathsf{K}}(\kappa))=0\quad\mbox{and}\quad \Ext^1_{\mathbf{C}}(P_{\mathsf{K}'}(\nu),N(\kappa))=0.$$
Studying the long exact sequences coming from the bifunctor $\Hom_{\mathbf{C}}(-,-)$, acting on short exact sequences as above, then yields an epimorphism and an isomorphism
$$\xymatrix{
\Hom_{\mathbf{C}}(P_{\mathsf{K}'}(\nu),P_{\mathsf{K}'}(\kappa))\ar@{->>}[rr]^{p_{\kappa}\circ-}&&\Hom_{\mathbf{C}}(P_{\mathsf{K}'}(\nu),P_{\mathsf{K}}(\kappa))&&\ar[ll]^{-\circ p_\nu}_{\sim}\Hom_{\mathbf{C}}(P_{\mathsf{K}}(\nu),P_{\mathsf{K}}(\kappa)).
\\
}$$
Composing the epimorphism and the inverse of the isomorphism {gives} the epimorphism
$$\Hom_{\mathbf{C}}(P_{\mathsf{K}'}(\nu),P_{\mathsf{K}'}(\kappa))\twoheadrightarrow \Hom_{\mathbf{C}}(P_{\mathsf{K}}(\nu),P_{\mathsf{K}}(\kappa)),\quad \alpha\mapsto \phi,\quad\mbox{when $p_\kappa\circ\alpha=\phi\circ p_\nu$}.$$
This clearly yields an algebra {epi}morphism ${\varepsilon\colon}A^{\mathsf{K}'}_{[\![\lambda]\!]}\twoheadrightarrow A^{\mathsf{K}}_{[\![\lambda]\!]}$.
{A} projective cover of $N(\kappa)$ is {isomorphic to} a direct sum of modules $P_{\mathsf{K}'}(\mu)$ with $\mu\in {}^{\mathsf{K}'}[\![\lambda]\!]\backslash{}^{\mathsf{K}}[\![\lambda]\!]$. Any $\alpha: P_{\mathsf{K}'}(\nu)\to P_{\mathsf{K}'}(\kappa)$ with $p_\kappa\circ\alpha=0$ thus factors through such a projective module. This shows that the ideal $I$ is the kernel of the epimorphism {$\varepsilon$}.
\end{proof}
\begin{prop}\label{FDprop}
Assume that~$\mathfrak{g}$ is finite dimensional, and hence a reductive Lie algebra.
\begin{enumerate}[(i)]
\item We have equivalences of categories
$$\mathbf{O}\;\cong\; \prod_{[\![\lambda]\!]} \mathbf{O}_{[\![\lambda]\!]}\quad\mbox{and}\quad \mathcal O\;\cong\;\bigoplus_{[\![\lambda]\!]} \mathcal O_{[\![\lambda]\!]}. $$
\item For each $\lambda\in\mathfrak{h}^\ast$, there exists a finite dimensional algebra $A$ such that
$$\mathcal O_{[\![\lambda]\!]}\;\cong\; A\mbox{{\rm -mod}}\quad\mbox{and}\quad \mathbf{O}_{[\![\lambda]\!]}\;\cong\; A\mbox{{\rm -Mod}}.$$
\item The subcategory $\mathcal O$ is extension full in~$\mathbf{O}$.
\end{enumerate}
\end{prop}
\begin{proof}
Part (i) is a special case of Proposition~\ref{PropBlocks}, see also \cite{BGG}. Part (ii) follows from Theorem~\ref{ThmAMod} and Corollary~\ref{CorAmod}, by observing that~$\bar{\mathcal O}_{[\![\lambda]\!]}=\mathcal O_{[\![\lambda]\!]}$, see e.g.~\cite[Section~5.1]{NamT}.
Part (iii) follows immediately from observing that a minimal projective resolution in~$\mathbf{O}$ of $M\in\mathcal O$ is actually in~$\mathcal O$.
\end{proof}
\subsection{Extension fullness}
\begin{thm}\label{ThmExtFull}
Let $\mathsf{K}\subset\mathfrak{h}^\ast$ be an upper finite ideal.
\begin{enumerate}[(i)]
\item The category ${}^{\mathsf{K}}\mathbf{O}$ is extension full in~$\mathbf{O}$.
\item The category ${}^{\mathsf{K}}\mathbf{O}$ is extension full in~$\mathbf{C}$.
\item For the inclusion $\iota:{}^{\mathsf{K}}\bar{\mathcal O}\hookrightarrow {}^{\mathsf{K}}\mathbf{O}$, the morphism $\iota^i_{MN}$ in \eqref{XYi} is an isomorphism for all $i\in\mathbb{N}$, if $M\in\cF^\Delta$ or $N\in \cF^{\nabla}$.
\end{enumerate}
\end{thm}
Before proving the theorem, we note the following special case.
\begin{cor}
For all~$\mu,\nu\in \mathfrak{h}^\ast$, we have
$$\Ext^i_{\mathbf{O}}(\Delta(\mu),\nabla(\nu))=0\quad\mbox{for all $i>0$.}$$
\end{cor}
\begin{proof}
Assume first that~$\nu\not>\mu$ and let $\mathsf{K}$ be the ideal generated by~$\mu$ and~$\nu$. Then $\Delta(\mu)=P_{\mathsf{K}}(\mu)${.} {Hence} $\Ext^i_{{}^{\mathsf{K}}\mathbf{O}}(\Delta(\mu),\nabla(\nu))=0${,} and the conclusion follows from Theorem~\ref{ThmExtFull}(i). If $\nu>\mu$, we can use $\nabla(\nu)=I_{\mathsf{K}}(\nu)$, or alternatively Theorem~\ref{ThmExtFull}(iii) and the duality $\vee$.
\end{proof}
We start the proof of the theorem with the following proposition.
\begin{prop}\label{PropSumD}
For $\mathsf{K}\in\mathcal{K}$ and a family $\{M_\alpha\}$ of objects in $\cF^\Delta[\mathfrak{h}^\ast\backslash \mathsf{K}]$, set $M=\bigoplus_\alpha M_\alpha$. {Then}
$$\Ext^k_{\mathbf{O}}(M,N)=0\quad\mbox{for all $N\in{}^{\mathsf{K}}\mathbf{O}$ and~$k\in\mathbb{N}$.}$$
\end{prop}
\begin{proof}
The case $k=0$ is obvious {from} equation~\eqref{eqCo}. The case $k=1$ follows from Lemma~\ref{LemCopr} and Corollary~\ref{CorExt1}. {Next,} we take $k>1$ and assume the proposition is proved for $k-1$. An element {of} $\Ext^k_{\mathbf{O}}(M,N)$ can be represented by the upper exact sequence in the diagram
$$\xymatrix{
0\ar[r]& N\ar@{=}[d]\ar[r]& E_k\ar[r]& E_{k-1}\ar[r]&\cdots\ar[r]& E_2\ar[r]& E_1\ar[r]& M\ar[r]\ar@{=}[d]& 0\\
0\ar[r]& N\ar[r]& E_k'\ar[u]\ar[r]& E_{k-1}'\ar[u]\ar[r]&\cdots\ar[r]& E_2'\ar[u]\ar[r]& \bigoplus_\alpha P^\alpha\ar[r]\ar[u]& M\ar[r]& 0.
}$$
Now $M_\alpha\in \cF^\Delta$ is generated by finitely many weight vectors, say of weights $\{\mu^\alpha_j\}_j$ in $\mathfrak{h}^\ast\backslash\mathsf{K}$, and we take a weight vector in~$E_1$ in the preimage of each such generating weight vector. By Lemma~\ref{LemQO} the submodule of $E_1$ generated by those weight vectors is {isomorphic to} a quotient of a finite direct sum $P^\alpha:=\bigoplus_j Q_{\mathsf{K}^\alpha_j}(\mu^\alpha_j)$ for upper finite ideals $\mathsf{K}^\alpha_j\ni \mu^\alpha_j$. Using pull-backs we thus arrive at the above commutative diagram with exact rows. By Corollary~\ref{resolv}, the kernel $K^\alpha$ of $P^\alpha\twoheadrightarrow M_\alpha$ is {an object of}~$\cF^\Delta$. By construction, $K^\alpha$ is {actually an object of}~$\cF^\Delta[\mathfrak{h}^\ast\backslash \mathsf{K}]$. By induction, the
{element in $\operatorname{Ext}^{k-1}_{{\bf O}}(\bigoplus_\alpha K^\alpha, N)$ represented by the sequence}
$$\xymatrix{
0\ar[r]& N\ar[r]& E_k'\ar[r]& E_{k-1}'\ar[r]&\cdots\ar[r]& E_2'\ar[r]& \bigoplus_\alpha K^\alpha \ar[r]& 0
}$$
{equals} zero. It follows that the {element in $\operatorname{Ext}^k_{{\bf O}}(M,N)$ represented by the above diagram also equals zero}.
\end{proof}
\begin{lemma}\label{CorNPM} Let $\mathsf{K}\subset\mathfrak{h}^\ast$ be an upper finite ideal and~$\mathsf{S}$ a coideal in~$\mathsf{K}$ (for instance $\mathsf{S}=\mathsf{K}$).
For any $M\in \cF^{\Delta}[\mathsf{S}]$, we have a short exact sequence in $\cF^{\Delta}[\mathsf{S}]$
$$0\to X\to \bigoplus_{\mu}P_{\mathsf{K}}(\mu)\to M\to 0.$$
\end{lemma}
\begin{proof}
This is an immediate consequence of the structure of projective objects in ${}^{\mathsf{K}}\mathbf{O}$ and {of} Corollary~\ref{resolv}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmExtFull}]
For part (i), \cite[Corollary~5]{CM} shows that it suffices to prove that $\Ext^i_{\mathbf{O}}(P,N)=0$ for all $i>0$, all $N\in {}^{\mathsf{K}}\mathbf{O}$ and all projective objects $P\in{}^{\mathsf{K}}\mathbf{O}$.
Using the main method in the proof of Proposition~\ref{PropSumD}, one shows that~$\Ext^i_{\mathbf{O}}(P,N)\not=0$ implies~$\Ext^{i-1}_{\mathbf{O}}(M,N)\not=0$ for some direct sum {$M$} of modules in~$\cF^{\Delta}[\mathfrak{h}^\ast\backslash\mathsf{K}]$. This contradicts Proposition~\ref{PropSumD}.
Part~(ii) can also be proved by an application of \cite[Corollary~5]{CM}. It follows from the construction in Section~\ref{SecConstProj} that each projective object $P$ in ${}^{\mathsf{K}}\mathbf{O}$ has a resolution by direct sums of modules $U(\mathfrak{g})\otimes_{U(\mathfrak{h})}\Bbbk_\kappa$, with $\kappa\not\in\mathsf{K}$ outside of position $0$ of the resolution. This implies that $\Ext^i_{\mathbf{C}}(P,N)=0$ for all $i>0$ and~$N\in{}^{\mathsf{K}}\mathbf{O}$.
By Lemma~\ref{CorNPM}, any module in~$\cF^{\Delta}[\mathsf{K}]$ has a projective resolution in~${}^{\mathsf{K}}\mathbf{O}$ which is actually contained in~${}^{\mathsf{K}}\bar{\mathcal O}$.
Part (iii) follows from this observation by applying the duality $\vee$ on $\mathcal{C}$.
\end{proof}
\begin{cor}\label{CornCoh}
For arbitrary $i\in\mathbb{N}$, $\mu\in\mathfrak{h}^\ast$ and $M\in\mathbf{O}$ with upper finite $\mathrm{supp} M $, we have
$$\Ext^i_{\mathbf{O}}(\Delta(\mu),M)\;\cong\;\Hom_{\mathfrak{h}}(\Bbbk_\mu,{\mathrm H}^i(\mathfrak{n}^+,M)).$$
\end{cor}
\begin{proof}
Let $\mathsf{K}$ be the ideal generated by $\mathrm{supp} M $ and $\mu$. Theorem~\ref{ThmExtFull}(i) and (ii) imply
$$\Ext^i_{\mathbf{O}}(\Delta(\mu),M)\;\cong\;\Ext^i_{{}^{\mathsf{K}}\mathbf{O}}(\Delta(\mu),M)\;\cong\;\Ext^i_{\mathbf{C}}(\Delta(\mu),M).$$
The reformulation in terms of $\mathfrak{n}^+$-Lie algebra cohomology then follows as in the finite dimensional case, see e.g. \cite[Theorem~6.15(b)]{Humphreys} or \cite[Corollary~14]{CM}.
\end{proof}
\begin{que}\label{QueExtFull}
\begin{enumerate}[(i)]
\item Is $\mathbf{O}$ extension full in~$\mathbf{C}$?
\item Is $\bar{\mathcal O}$ extension full in~$\mathbf{O}$?
\end{enumerate}
\end{que}
Note that Question~\ref{QueExtFull}(i) has a positive answer when restricting to the block $\mathbf{O}_{[\![\lambda]\!]}$, for a dominant $\lambda$, by Remark~\ref{RemDomA} and Theorem~\ref{ThmExtFull}(ii).
\subsection{Extensions in Serre quotients}For any ideal $\mathsf{L}$, we denote {by~${}_{\mathsf{L}}\mathbf{O}$} the Serre quotient $\mathbf{O}/{}^{\mathsf{L}}\mathbf{O}$, see Appendix~\ref{AppSerre}.
\begin{prop}\label{PropExtpi}
Let $\mathsf{L}\subset\mathsf{K}\subset\mathfrak{h}^\ast$ be ideals and let $\mathsf{K}$ be upper finite. For $M\in\cF^{\Delta}[\mathsf{K}\backslash \mathsf{L}]$ and~$N\in{}^{\mathsf{K}}\mathbf{O}$, we have isomorphisms
$$\pi:\Ext^i_{{}^{\mathsf{K}}\mathbf{O}}(M,N)\;\;\tilde\to\;\; \Ext^i_{{}_{\mathsf{L}}^{\mathsf{K}}\mathbf{O}}(M,N)\quad\mbox{for all $i\in\mathbb{N}$.}$$
\end{prop}
\begin{proof}
First we consider the case $i=0$. Clearly{,}~$M\in\cF^{\Delta}[\mathsf{K}\backslash \mathsf{L}]$ has no proper submodule~$M'$ such that~$M/M'$ is in~${}^{\mathsf{L}}\mathbf{O}$, hence
$$\Hom_{{}_{\mathsf{L}}\mathbf{O}}( M,N)\;=\;\varinjlim \Hom_{\mathbf{O}}(M,N/N'),$$
where~$N'$ runs over all submodules of~$N$ which are in~${}^{\mathsf{L}}\mathbf{O}$.
For such $N'$, the exact sequence
$$\Hom_{\mathbf{O}}(M,N')\to \Hom_{\mathbf{O}}(M,N)\to\Hom_{\mathbf{O}}(M,N/N')\to \Ext^1_{\mathbf{O}}(M,N')$$
has first and last term equal to zero, see Lemma~\ref{CorExt1}. Consequently
all the maps in the direct limit are isomorphisms. Hence, we find an isomorphism
$$\pi:\Hom_{\mathbf{O}}(M,N)\;\;\tilde\to\;\; \Hom_{{}_{\mathsf{L}}\mathbf{O}}(M,N).$$
Lemma~\ref{CorNPM} implies that, inside ${}^{\mathsf{K}}\mathbf{O}$, the module $M$ has a projective resolution $P_\bullet$ with $P_i\in\cF^{\Delta}[\mathsf{K}\backslash\mathsf{L}]$ for all $i\in\mathbb{N}$. By Lemma~\ref{ProjBC}, $\pi(P_\bullet)$ is a projective resolution of $M$ in ${}^{\mathsf{K}}_{\mathsf{L}}\mathbf{O}$. Since the extension groups are then calculated as ${\mathrm H}^i(\Hom(P_\bullet,N))$ in the respective categories, the conclusion follows from the above paragraph.
\end{proof}
\begin{rem}
For an upper finite ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ and an arbitrary ideal $\mathsf{L}\subset \mathsf{K}$, the category ${}^{\mathsf{K}}_{\mathsf{L}}\mathbf{O}$ has enough projective objects by Theorem~\ref{ThmProj} and Lemma~\ref{ProjBC}.
This observation extends to the case of ideals $\mathsf{L}\subset \mathsf{K}\subset \mathfrak{h}^\ast$ such that~$\mathsf{K}\backslash\mathsf{L}$ is upper finite in~$\mathfrak{h}^\ast\backslash\mathsf{L}$, using Remark~\ref{RemBruhat}(ii).
\end{rem}
\subsection{Example: $\mathfrak{gl}_{\infty}$}
For $\mathfrak{gl}_{\infty}$ we have precisely two Dynkin Borel subalgebras~$\mathfrak{b}$ and~$\mathfrak{b}'$, up to conjugacy. We consider a Cartan subalgebra $\mathfrak{h}$ contained in both {$\mathfrak{b}$ and $\mathfrak{b}'$}.
\subsubsection{}
For $\mathfrak{g}\supset\mathfrak{b}\supset\mathfrak{h}$, we choose a realisation where
$$\Phi=\{\epsilon_i-\epsilon_j\,|\, i\not=j\in\mathbb{N}\} \quad\mbox{and }\quad \Phi^+=\{\epsilon_i-\epsilon_j\,|\, i<j\}.$$
For $\mathfrak{g}\supset\mathfrak{b}'\supset\mathfrak{h}$, we choose a realisation where
$$\Phi=\{\epsilon'_i-\epsilon'_j\,|\, i\not=j\in\mathbb{Z}\} \quad\mbox{and }\quad \Phi^+=\{\epsilon'_i-\epsilon'_j\,|\, i<j\}.$$
We show that these lead to different theories in the following sense.
\begin{lemma}
There exists no equivalence of categories $\mathbf{O}_{[\![ 0]\!] }(\mathfrak{g},\mathfrak{b})\to \mathbf{O}_{[\![ 0]\!] }(\mathfrak{g},\mathfrak{b}')$, or $\bar{\mathcal O}_{[\![ 0]\!] }(\mathfrak{g},\mathfrak{b})\to \bar{\mathcal O}_{[\![ 0]\!] }(\mathfrak{g},\mathfrak{b}')$, which exchanges Verma modules.
\end{lemma}
\begin{proof} We set $W=W(\mathfrak{g}:\mathfrak{h})$. {D}enote the simple reflections with respect to $\mathfrak{b}$ by $s_i=r_{\epsilon_i-\epsilon_{i+1}}\in W$ for $i\in\mathbb{N}${,} and with respect to $\mathfrak{b}'$ by~$s_i'=r_{\epsilon'_i-\epsilon'_{i+1}}\in W$ for $i\in\mathbb{Z}$.
{Let $\uparrow$ denote} the Bruhat order corresponding to $\mathfrak{b}$, {and let $\uparrow'$ be the corresponding order for $\mathfrak{b}'$}.
By Corollary~\ref{CorBGGThm}(ii), it suffices to prove that the partially ordered sets $(W\cdot 0,\uparrow )$ and~$(W\cdot 0,\uparrow' )$ are not isomorphic.
Aiming for a contradiction, we assume that we have an isomorphism of posets $\phi :(W\cdot 0,\uparrow )\to (W\cdot 0,\uparrow' )$.
Both posets have a unique maximal element, $0$, which must be exchanged by $\phi$. The sets of elements~$\mu$ which are {immediate predecessors of} $0$ {in terms of the orders $\uparrow$ and $\uparrow'$} must also be exchanged by $\phi$. We must thus have a bijection
$$\phi: \{s_i\cdot 0,i\in\mathbb{N} \}\;\to\; \{s_i'\cdot 0,i\in\mathbb{Z} \}.$$
Now we define $c(\mu)\in\mathbb{N}$ for any of the weights $\mu$ in~$\{s_i\cdot 0,i\in\mathbb{N} \}$ as the number of other elements~$\nu$ in that set such that there exist (at least) $2$ elements in~$W\cdot0$ which are covered by both $\mu$ and~$\nu$. We find that~$c(s_0\cdot 0)=1$ because {$\nu:=s_1\cdot 0$ is the only weight such that $\mu=s_0\cdot 0$ and $\nu$} cover two weights, namely $s_0s_1\cdot 0$ and~$s_1s_0\cdot 0$. We have $c(s_i\cdot 0)=2$ for $i>0$, coming from~$s_{i-1}\cdot0$ and~$s_{i+1}\cdot0$. Similarly, the same definition { for the set $\{s'_i\cdot 0, i\in\mathbb{Z}\}$ yields} $c(s'_i\cdot 0)=2$ for all $i\in\mathbb{Z}$. This contradicts the existence of $\phi$.
\end{proof}
We conclude this section with an example of an infinite dimensional extension space in~$\mathbf{O}$ for simple modules.
\begin{lemma}
For $\mathfrak{g}=\mathfrak{gl}_{\infty}$ and both of the Dynkin Borel subalgebras {$\mathfrak{b}$ and $\mathfrak{b}'$}, we have
$$\dim_{\Bbbk}\Ext^2_{\mathbf{O}}(\Bbbk,\Bbbk)\,=\,\infty.$$
\end{lemma}
\begin{proof}
{By} Theorem~\ref{ThmExtFull}(i){,} we can equivalently calculate {$\operatorname{Ext}^2_{{}^{\mathsf{K}}\mathbf{O}}(\Bbbk,\Bbbk)$} for some $0\in\mathsf{K}\in\mathcal{K}$. Theorem~\ref{ThmExtFull}(ii) then shows we can {instead} calculate {$\operatorname{Ext}^2_{\mathbf{C}}(\Bbbk,\Bbbk)$}{, and this {is} what we will do.}
Taking the standard projective resolution of $\Bbbk$ in~$\mathbf{C}$ shows that~$\Ext^\bullet_{{\mathbf{C}}}(\Bbbk,\Bbbk)$ is the cohomology of the complex
$$0\to\Bbbk \to \Hom_{\mathfrak{h}}(\mathfrak{g}/\mathfrak{h},\Bbbk)\to \Hom_{\mathfrak{h}}(\Lambda^2{(}\mathfrak{g}/\mathfrak{h}{)},\Bbbk)\to \Hom_{\mathfrak{h}}(\Lambda^3{(}\mathfrak{g}/\mathfrak{h}{)},\Bbbk)\to\cdots.$$
Since $\Hom_{\mathfrak{h}}(\mathfrak{g}/\mathfrak{h},\Bbbk)=0$, we find the well-known properties $\Hom_{\mathbf{C}}(\Bbbk,\Bbbk)=\Bbbk$, $\Ext^1_{{\mathbf{C}}}(\Bbbk,\Bbbk)=0$ and also that~$\Ext^2_{{\mathbf{C}}}(\Bbbk,\Bbbk)$ is the kernel of $ \Hom_{\mathfrak{h}}(\Lambda^2{(}\mathfrak{g}/\mathfrak{h}{)},\Bbbk)\to \Hom_{\mathfrak{h}}(\Lambda^3{(}\mathfrak{g}/\mathfrak{h}{)},\Bbbk)$. That kernel is easily seen to be infinite dimensional.
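To make the last claim explicit, here is a sketch of the computation. A functional $\phi\in\Hom_{\mathfrak{h}}(\Lambda^2(\mathfrak{g}/\mathfrak{h}),\Bbbk)$ is determined by the values $f(i,j):=\phi(e_{ij}\wedge e_{ji})$ for $i\not=j$, with $e_{ij}$ the matrix units (so $f(j,i)=-f(i,j)$), while the zero weight space of~$\Lambda^3(\mathfrak{g}/\mathfrak{h})$ is spanned by the vectors $e_{ab}\wedge e_{bc}\wedge e_{ca}$ for distinct indices $a,b,c$. Evaluating the Chevalley-Eilenberg differential on these vectors, the kernel above is cut out by the conditions
$$f(a,b)+f(b,c)\;=\;f(a,c)\qquad\mbox{for all distinct $a,b,c$},$$
whose solutions are precisely the functions of the form $f(i,j)=g(j)-g(i)$, for an arbitrary function~$g$ on the index set; these clearly form an infinite dimensional space.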
\end{proof}
\section{Link with finite case}
\subsection{Induction and restriction functors}
\subsubsection{} For each $n\in\mathbb{N}$ and~$\lambda\in\mathfrak{h}^\ast$, we have the complete {sub}set $\Lambda_n:=[\lambda]_n=\lambda+\mathbb{Z}\Phi_n$ {of}~$\mathfrak{h}^\ast${, and} the corresponding ideals $\mathring{\Lambda}_n$ and~$\overline{\Lambda}_n$ as in \ref{Complete2}.
The exact functors of Section~\ref{IndRes} restrict to exact functors
$${\rm Ind}^{n}_+={\rm Ind}^{\mathfrak{g}}_{\mathfrak{g}_n,+}:{}^{\Lambda_n}\mathbf{O}(\mathfrak{g}_n,\mathfrak{b}_n)\to{}^{\overline{\Lambda}_n}\mathbf{O},\;\quad {\rm Res}^n_\lambda={\rm Res}^{\mathfrak{g}}_{\mathfrak{g}_n,\lambda}: {}^{\overline{\Lambda}_n}\mathbf{O}\to {}^{\Lambda_n}\mathbf{O}(\mathfrak{g}_n,\mathfrak{b}_n).$$
The following is an infinite rank version of \cite[Theorem~32]{CMZ}. We provide an alternative proof.
\begin{thm}\label{ThmEquiv}
For $n\in\mathbb{N}$ and~$\lambda\in\mathfrak{h}^\ast$, {there are} mutually inverse equivalences of abelian categories~$\Psi$ and~$\Phi$, admitting a {commutative} diagram
$$
\xymatrix{
{}^{\Lambda_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)\ar[rr]^{{\rm Ind}^n_+}\ar[drr]^{\Psi}&&{}^{\overline{\Lambda}_n}\mathbf{O}\ar[rr]^{{\rm Res}_\lambda^n}\ar[d]^{\pi}&&{}^{\Lambda_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)\\
&&{}^{\overline{\Lambda}_n}_{\mathring{\Lambda}_n}\mathbf{O}\ar[rru]^{\Phi}
}{.}
$$
Moreover, for any $\mu\in {\Lambda_n}$, {there is an isomorphism} $\Psi(\Delta_n(\mu))\cong\Delta(\mu)$ in~${}^{\overline{\Lambda}_n}_{\mathring{\Lambda}_n}\mathbf{O}$.
\end{thm}
\begin{proof}
We define $\Psi:=\pi\circ{\rm Ind}^n_+$. The existence of a functor~$\Phi$ which completes the commutative diagram follows from Lemma~\ref{factorspi} in Appendix~\ref{AppSerre}. By construction, there is an isomorphism of functors ${\rm Res}_\lambda^n\circ{\rm Ind}^n_+\cong\mathrm{id}_{{}^{\Lambda_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)}$. By commutativity of the diagram, we then have an isomorphism of functors $\Phi\circ\Psi\cong\mathrm{id}_{{}^{\Lambda_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)}$. To conclude the proof it suffices to show that~$\Phi$ is faithful.
We will write $\mathcal{A}:={}^{\overline{\Lambda}_n}_{\mathring{\Lambda}_n}\mathbf{O}$.
Consider $M,N\in {}^{\overline{\Lambda}_n}\mathbf{O}$ and~$f\in \Hom_{\mathcal{A}}(M,N)$.
By Lemma~\ref{factorspi}, we have
$\Phi(f)={\rm Res}^{n}_\lambda(g)$, for any representative $g\in \Hom_{\mathfrak{g}}(M',N/N')$ of~$f$, where~$M'\subset M$ and $N'\subset N$ satisfy $\mathrm{supp} (M/M')\subset{\mathring{\Lambda}_n}$ and $\mathrm{supp} N'\subset {\mathring{\Lambda}_n}$.
Now assume $\Phi(f)=0$, which thus
implies that~$g$ restricted to the weight spaces of~$M'$ for weights in~${\Lambda_n}$ {equals} zero. Since $g$ is a morphism of~$\mathfrak{h}$-modules{,} this means that the image of~$g$ is of the form $N''/N'$ for some $N''\supset N'$ with $\mathrm{supp} N''\subset \mathring{\Lambda}_n$. Thus {the element corresponding to $g$ }
in the direct limit defining $\Hom_{\mathcal{A}}(M,N)$ { equals zero}. {Consequently,}~$f=0$. We hence find that~$\Phi$ is indeed faithful.
\end{proof}
\begin{cor}\label{CorEqBlocks} We use the notation of Theorem~\ref{ThmEquiv}.
\begin{enumerate}[(i)]
\item For dominant $\lambda\in\mathfrak{h}^\ast$, the functor $\Psi$ restricts to an equivalence between $\mathbf{O}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)$ and the Serre quotient of $\mathbf{O}_{[\![\lambda]\!]}$ with respect to the subcategory with only non-zero multiplicities for simple modules $L(w\cdot\lambda)$ with $w\not\in W_n$.
\item For antidominant $\lambda\in\mathfrak{h}^\ast$, the functor $\Psi$ restricts to an equivalence between $\mathbf{O}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)$ and the Serre subcategory of $\mathbf{O}_{[\![\lambda]\!]}$ with only non-zero multiplicities for simple modules $L(w\cdot\lambda)$ with $w\in W_n$.
\end{enumerate}
\end{cor}
\subsubsection{}\label{ForEquivK} Consider an arbitrary set $\{\lambda_1,\lambda_2,\cdots,\lambda_k\}\subset\mathfrak{h}^\ast$ and~$n\in\mathbb{N}$ large enough such that~$\lambda_i-\lambda_j\in\mathbb{Z}\Phi_n$ for all $1\le i,j\le k$. Denote by~$\mathsf{K}$, resp. $\mathsf{K}_n$, the ideal in~$(\mathfrak{h}^\ast,\le)$, resp. $(\mathfrak{h}^\ast,\le_n)$, generated by~$\{\lambda_1,\lambda_2,\cdots,\lambda_k\}$. The set $\mathsf{K}_n$ is complete in~$(\mathfrak{h}^\ast,\le)$, so $\mathring{\mathsf{K}}_n:=\mathsf{K}\backslash \mathsf{K}_n$ is also an ideal in~$(\mathfrak{h}^\ast,\le)$.
By restricting the equivalence in Theorem~\ref{ThmEquiv}, we obtain the following corollary.
\begin{cor}\label{CorEquiv}
With notation and assumptions as in \ref{ForEquivK}, {there are} mutually inverse equivalences of abelian categories~$\Psi$ and~$\Phi$, admitting a commutative diagram
$$
\xymatrix{
{}^{\mathsf{K}_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)\ar[rr]^{{\rm Ind}^n_+}\ar[drr]^{\Psi}&&{}^{\mathsf{K}}\mathbf{O}\ar[rr]^{{\rm Res}_\lambda^n}\ar[d]^{\pi}&&{}^{\mathsf{K}_n}\mathbf{O}({\mathfrak{g}_n},\mathfrak{b}_n)\\
&&{}^{\mathsf{K}}_{\mathring{\mathsf{K}}_n}\mathbf{O}\ar[rru]^{\Phi}
}
$$
Moreover, {there is an isomorphism} $\Psi(\Delta_n(\mu))\cong\Delta(\mu)$ in~${}^{\mathsf{K}}_{\mathring{\mathsf{K}}_n}\mathbf{O}$ for {any} $\mu\in {\mathsf{K}_n}$.
\end{cor}
\begin{thm}\label{Thmidemptr}
With notation and assumptions as in \ref{ForEquivK} and~$\lambda\in\mathsf{K}_n$, consider the algebras~$A:=A^{\mathsf{K}}_{[\![\lambda]\!]}$ and~$A_n:=A^{\mathsf{K}_n}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)$ as in \ref{DefAlgA}. For the idempotent $\varepsilon_n=\sum_{\mu}e_\mu\in A$, with $\mu$ ranging over $\mathsf{K}_n\cap[\![\lambda]\!]_n$, we have an algebra isomorphism $\varepsilon_nA\varepsilon_n\cong A_n$.
\end{thm}
\begin{proof}
By Theorem~\ref{ThmAMod}, {there is} an equivalence
$${}^{\mathsf{K}_n}\mathbf{O}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)\;\cong\;A_n\mbox{-Mod}.$$
By Theorem~\ref{ThmAMod} and Lemma~\ref{LemQuoAlg}, we have equivalences
$${}^{\mathsf{K}_n}\mathbf{O}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)\;\cong\;{}^{\mathsf{K}}_{\mathring{\mathsf{K}}_n}\mathbf{O}\;\cong\;\varepsilon_n A\varepsilon_n\mbox{-Mod}.$$
By construction, both $A_n$ and $\varepsilon_nA\varepsilon_n$ are the endomorphism algebra{s} of {respective maximal} direct sum{s} of {mutually non-isomorphic} indecomposable projective objects in ${}^{\mathsf{K}_n}\mathbf{O}_{[\![\lambda]\!]_n}(\mathfrak{g}_n,\mathfrak{b}_n)$, implying that {the algebras $A_n$ and $\varepsilon_n A \varepsilon_n$} are isomorphic.
\end{proof}
\begin{cor}
For two integral dominant regular weights $\lambda,\lambda'$, we have an equivalence of categories
$$\mathbf{O}_{[\![\lambda]\!]}\;\stackrel{\sim}{\to}\;\mathbf{O}_{[\![\lambda']\!]}\quad\mbox{with}\quad L(w\cdot\lambda)\mapsto L(w\cdot\lambda'),\;\mbox{ for all $w\in W$.}$$
\end{cor}
\begin{proof}
We denote by $\mathsf{K}$, resp. $\mathsf{K}'$, the ideal in $(\mathfrak{h}^\ast,\le)$ generated by $\lambda$, resp. $\lambda'$. Set $A:=A^{\mathsf{K}}_{[\![\lambda]\!]}$ and $B:=A^{\mathsf{K}'}_{[\![\lambda']\!]}$. It follows from Theorem~\ref{Thmidemptr} and \cite[Proposition~7.8]{Humphreys} that, for all $n$, {there is} a commuting square of algebra morphisms
$$\xymatrix{
\varepsilon_n A\varepsilon_n\ar@{^{(}->}[rr]\ar[d]^{\sim}&& \varepsilon_{n+1} A\varepsilon_{n+1}\ar[d]^{\sim}\\
\varepsilon_n B\varepsilon_n\ar@{^{(}->}[rr]&& \varepsilon_{n+1}B\varepsilon_{n+1}.
}$$
We thus have $A\cong\varinjlim \varepsilon_n A\varepsilon_n\cong B${,} and the {claimed} equivalence follows.
\end{proof}
\subsection{Extensions of Verma modules}
\begin{thm}\label{ExtDL}
Consider arbitrary $\lambda,\mu\in\mathfrak{h}^\ast$ and~$i\in\mathbb{N}$. For any $n\in\mathbb{N}$ such that~$\lambda-\mu \in\mathbb{Z}\Phi_n$, {there is an isomorphism}
$$\Ext^i_{{\mathbf{O}}}(\Delta(\mu),L(\lambda))\;\cong\; \Ext^i_{\mathcal O(\mathfrak{g}_n,\mathfrak{b}_n)}(\Delta_n(\mu),L_n(\lambda)).$$
\end{thm}
\begin{proof}
Let $\mathsf{K}$ be the ideal in~$(\mathfrak{h}^\ast,\le)$ generated by~$\mu$ and~$\lambda$, and~$\mathsf{K}_n$ be the ideal in~$(\mathfrak{h}^\ast,\le_n)$ generated by~$\mu$ and~$\lambda$. By Theorem~\ref{ThmExtFull}, it suffices to prove
$$\Ext^i_{{}^{\mathsf{K}}\mathbf{O}}(\Delta(\mu),L(\lambda))\;\cong\; \Ext^i_{{}^{\mathsf{K}_n}\mathbf{O}(\mathfrak{g}_n,\mathfrak{b}_n)}(\Delta_n(\mu),L_n(\lambda)).$$
By Proposition~\ref{PropExtpi}, the left-hand side is isomorphic to $\Ext^i_{{}^{\mathsf{K}}_{\mathring{\mathsf{K}}_n}\mathbf{O}}(\Delta(\mu),L(\lambda))$. The theorem then follows from Corollary~\ref{CorEquiv}.
\end{proof}
In {the} BGG category $\mathcal O(\mathfrak{g}_n,\mathfrak{b}_n)$, the dimensions of the extension spaces $\Ext^i(\Delta_n(\mu),L_n(\lambda))$ are determined by KLV polynomials. Theorem~\ref{ExtDL} thus shows that the same is true in $\mathbf{O}$. For instance, let $\mu\in\mathfrak{h}^\ast$ be integral, regular and anti-dominant. With any unexplained notation taken from \cite[Section~8]{Humphreys}, the combination of Theorem~\ref{ExtDL} and \cite[Theorem~8.11(b)]{Humphreys} yields
$$P_{x,w}(q)\;=\;\sum_{i\ge 0}q^i\dim\Ext^{\ell(x,w)-2i}_{\mathbf{O}}(\Delta(x\cdot\mu),L(w\cdot\mu))\quad\mbox{for all $x,w\in W$},$$
{where} $P_{x,w}$ {is} the KLV polynomial corresponding to the Weyl group $W_n$, {and} $n$ {is} big enough so that $x,w\in W_n$. In \cite[Conjecture~8.17]{NamT}, this formula was conjectured for extensions in $\overline{\mathcal O}$.
\begin{cor}
Conjecture~8.17 in \cite{NamT} is true for ${\mathbf{O}}$.
\end{cor}
The original question in \cite{NamT} therefore becomes a special case of Question~\ref{QueExtFull}(ii).
\begin{lemma}
Consider arbitrary $\lambda,\mu\in\mathfrak{h}^\ast$ and~$i\in\mathbb{N}$. For any $n\in\mathbb{N}$ such that~$\lambda-\mu \in\mathbb{Z}\Phi_n$, we have {an isomorphism}
$$\Ext^i_{{\mathbf{O}}}(\Delta(\mu),\Delta(\lambda))\;\cong\; \Ext^i_{\mathcal O(\mathfrak{g}_n,\mathfrak{b}_n)}(\Delta_n(\mu),\Delta_n(\lambda)).$$
\end{lemma}
\begin{proof}
The proof of Theorem~\ref{ExtDL} applies mutatis mutandis.
\end{proof}
\subsection{Standard Koszulity}
{There exists a} notion of graded cover of an abelian category as in Definition~\ref{DefCover}{. In addition, we} refer to Appendix~\ref{AppKos} for the use of the term ``standard Koszulity''. {In what follows we frequently use results from the appendices.}
\begin{thm}[Standard Koszulity]\label{ThmKosz}
Let $\mathsf{K}$ be a finitely generated ideal in $(\mathfrak{h}^\ast,\le)$. The category ${}^{\mathsf{K}}\mathbf{O}$ admits a graded cover ${}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}$ such that simple, Verma and dual Verma modules admit graded lifts. We use the same symbol for the graded lifts and can choose the normalisation such that, for any $\mu\in\mathsf{K}$,
we have non-zero morphisms $\Delta(\mu)\to L(\mu)\to\nabla(\mu)$ (without applying shifts~$\langle k\rangle$) in ${}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}$.
Then, for all $\mu,\nu\in\mathsf{K}$, we have
$$\Ext^i_{{}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}}(\Delta(\mu),L(\nu)\langle j\rangle)\;=\;0\;=\; \Ext^i_{{}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}}(L(\mu),\nabla(\nu)\langle j\rangle),\quad\mbox{if $i\not=j$}.$$
\end{thm}
\begin{proof}
It suffices to take an arbitrary $\lambda\in\mathsf{K}$ and {consider} ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$.
Set $A:=A^{\mathsf{K}}_{[\![\lambda]\!]}$ and {recall} the equivalence
$$\mathscr{F}:\;{}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}\;\stackrel{\sim}{\to}\;A\mbox{-Mod}$$
from Theorem~\ref{ThmAMod}. For each $n\in\mathbb{N}$ large enough we define the ideal $\mathring{\mathsf{K}}_n$ in $(\mathfrak{h}^\ast,\le)$ as in \ref{ForEquivK}. We have the idempotents $\varepsilon_n\in A$ from Theorem~\ref{Thmidemptr}, {such that $A\cong\varinjlim_n A_n$ for $A_n=\varepsilon_n A\varepsilon_n$}. {According to} Proposition~\ref{PropOSK}, the algebras~$A_n$ have Koszul grading{s}. By Theorem~\ref{ThmADL}(ii), the grading on $A_n$ inherited from the one on $A_{n+1}$ via the relation $\varepsilon_n A_{n+1}\varepsilon_n=A_n$ is also Koszul. By uniqueness of Koszul gradings, see e.g.~\cite[Corollary~2.5.2]{BGS}, the gradings on the algebras~$\{A_n\}$ are thus consistent and induce a grading on $A\cong\varinjlim_n A_n$. Example~\ref{ExAg} {shows that} the category
$${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}^{\mathbb{Z}}:=A\mbox{-gMod}$$ is a graded cover of ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$.
{Next}, for $\mu\in\mathsf{K}$, we consider the $A$-module $M:=\mathscr{F}(\Delta(\mu))$. By observing that $M=\varinjlim \varepsilon_nM$ and using the fact that the $A_n$-module $\varepsilon_nM$ is uniquely gradable up to shift, it follows that $M$ admits a graded lift.
We thus have a projective resolution of $M$ in $A$-gMod. For $n$ large enough {so} that $\mu\in\mathsf{K}_n$, it follows from Lemma~\ref{CorNPM} (or Corollary~\ref{CornCoh}) that all terms in {this} complex are direct sums of modules $P_{\mathsf{K}}(\kappa)$ with $\kappa\in \mathsf{K}_n$.
The exact full functor
\begin{equation}\label{eqgMod}\varepsilon_n: A\mbox{-gMod}\to A_n\mbox{-gMod}\end{equation}
shows via the standard Koszulity of $A_n$ that $\Ext^i_{{}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}^{\mathbb{Z}}}(\Delta(\mu),L(\nu)\langle j\rangle)=0$, if $i\not=j$. The statement for dual Verma modules follows similarly.
\end{proof}
The following proposition suggests that any complete theory of Koszul {\em duality} for $\mathbf{O}$ would restrict to a duality between dominant and antidominant blocks. For $\mu\in\mathsf{K}$, $j\in\mathbb{Z}$ and
$M\in {}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}$, we set
$$[M:L(\mu)\langle j\rangle]=\dim\Hom_{{}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}}(P_{\mathsf{K}}(\mu)\langle j\rangle ,M),$$
with $P_{\mathsf{K}}(\mu)$ the projective cover of $L(\mu)\langle 0\rangle$ in ${}^{\mathsf{K}}\mathbf{O}^{\mathbb{Z}}$.
\begin{prop}
Let $\lambda,\mu\in\mathfrak{h}^\ast$ be integral and regular, with $\lambda$ dominant and $\mu$ antidominant. For all $w,x\in W$ and $j\in\mathbb{N}$, we have
\begin{enumerate}[(i)]
\item $\dim\Ext^j_{\mathbf{O}}(\Delta(w\cdot\lambda),L(x\cdot\lambda))\;=\; [\Delta(w^{-1}\cdot\mu):L(x^{-1}\cdot\mu)\langle j\rangle]$,
\item $\dim\Ext^j_{\mathbf{O}}(\Delta(w\cdot\mu),L(x\cdot\mu))\;=\; [\Delta(w^{-1}\cdot\lambda):L(x^{-1}\cdot\lambda)\langle j\rangle]$.
\end{enumerate}
\end{prop}
\begin{proof}
Take $n\in\mathbb{N}$ big enough such that $w\cdot\lambda-x\cdot\lambda\in\mathbb{Z}\Phi_n$ and the corresponding conditions for $\mu$ and $x^{-1},w^{-1}$ are satisfied. By Theorem~\ref{ExtDL}, the left-hand sides equal the corresponding dimensions in $\mathcal O(\mathfrak{g}_n,\mathfrak{b}_n)$. Choosing an appropriate finitely generated ideal $\mathsf{K}\subset\mathfrak{h}^\ast$ and using equation~\eqref{eqgMod} shows that the right-hand side can be computed in $\mathcal O^{\mathbb{Z}}(\mathfrak{g}_n,\mathfrak{b}_n)$. The result thus follows from \cite[Proposition~1.3.1]{BGS}.
\end{proof}
Despite the fact that the property in Theorem~\ref{ThmKosz} implies ordinary Koszulity in the case of finite dimensional (quasi-hereditary) algebras, see Theorem~\ref{ThmADL}(i), we still have the following open question.
\begin{que}
{Is}
$\Ext^i_{{}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}^{\mathbb{Z}}}(L(\mu),L(\nu)\langle j\rangle)\;=\;0$ {for} $i\not=j$?
\end{que}
The difficulty in answering this question lies in the fact that the indecomposable projective modules appearing in a fixed position in the projective resolution of a simple module in ${}^{\mathsf{K}}\mathbf{O}_{[\![\lambda]\!]}$ will generally form a set $\{P(\mu)\,|\,\mu\in S\}$, for some multiset of weights $S$ which is not lower finite. This already happens for instance in the projective cover of the kernel of $P_{\mathsf{K}}(0)\twoheadrightarrow L(0)$.
Another open question is whether we can construct a cover without taking a Serre subcategory of $\mathbf{O}$ via truncation.
\begin{que}
Is it possible to construct a graded cover of ${\bf O}$?
\end{que}
\section{The semiregular bimodule}
\subsection{Definitions}
\subsubsection{The group $\Gamma$}\label{defGamma} Let $S$ be a countable set.
We consider the free abelian group $\Gamma_S\in\mathbf{A\hspace{-0.5mm}b}$ with basis $S$,
$$\Gamma_S:=\bigoplus_{s\in S}\mathbb{Z} \qquad\mbox{with group homomorphism}\quad \mathsf{ht}:\Gamma_S\to \mathbb{Z}{,} \quad (a_s)_{s\in S}\mapsto\sum_{s\in S}a_s.$$
Hence $\Gamma_S$ is isomorphic either to~$\mathbb{Z}^{\oplus k}$ for some $k\in\mathbb{N}$, or to $\mathbb{Z}^{\oplus\aleph_0}$. In the following we {omit} the reference to~$S$.
For any two $\Gamma$-graded vector spaces~$V=\bigoplus_a V^{a}$ and~$W=\bigoplus_a W^{a}$, we define the $\Gamma$-graded vector space ${\mathsf{Hom}}_{\Bbbk}(V,W)$ by {setting}
$${\mathsf{Hom}}_{\Bbbk}(V,W)^a\;:=\;\{f\in \Hom_{\Bbbk}(V,W)\;|\; f(V^b)\subset W^{b+a}\;\mbox{for all $b\in\Gamma$}\}.$$
We equip the one dimensional vector space $\Bbbk$ with the trivial $\Gamma$-grading. Then ${\mathsf{Hom}}_{\Bbbk}(V,\Bbbk)$ is the subspace of~$V^\ast$ of functionals which vanish {at} all but finitely many degrees. {We write $V^{\circledast}={\mathsf{Hom}}_{\Bbbk}(V,\Bbbk)$ and} will interpret $(-)^\circledast$ as a duality functor on the category of~$\Gamma$-graded vector spaces with finite dimensional graded components.
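Note that, unravelling the definition, a homogeneous functional of degree~$a$ is simply a functional on the component $V^{-a}$, so that
$$V^{\circledast}\;=\;{\mathsf{Hom}}_{\Bbbk}(V,\Bbbk)\;\cong\;\bigoplus_{a\in\Gamma}(V^{a})^{\ast}.$$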
\subsubsection{}We will work with $\Gamma$-graded Lie algebras over $\Bbbk$, denoted by
$\mathfrak{k}=\bigoplus_{a\in\Gamma}\mathfrak{k}^a.$
Any $\Gamma$-graded Lie algebra has an {\bf associated $\mathbb{Z}$-grading} through the homomorphism $\mathsf{ht}$:
$$\mathfrak{k}\;=\;\bigoplus_{i\in\mathbb{Z}}\mathfrak{k}^{(i)},\qquad\mathfrak{k}^{(i)}=\bigoplus_{\mathsf{ht}(a)=i}\mathfrak{k}^{a}.$$
For $\Gamma$-graded $\mathfrak{k}$-modules $M,N$ we write ${\mathsf{Hom}}_{U(\mathfrak{k})}(M,N)$ for the subspace of ${\mathsf{Hom}}_{\Bbbk}(M,N)$ of $\mathfrak{k}$-linear morphisms.
\begin{ddef}\label{defg}
We say that a $\Gamma$-grading on a Lie algebra~$\mathfrak{k}$ is {\bf triangular} if
\begin{enumerate}[(i)]
\item $\mathfrak{k}^a=0$, whenever $a=(a_s)\in \Gamma$ contains both positive and negative integers;
\item $\dim_{\Bbbk}\mathfrak{k}^a<\infty$ if $\mathsf{ht}(a)<0$;
\item $\mathfrak{k}$ is generated by the subspace $\mathfrak{k}^{(1)}\oplus\mathfrak{k}^{(0)}\oplus\mathfrak{k}^{(-1)}$.
\end{enumerate}
\end{ddef}
Condition~(i) implies in particular that~$\mathfrak{k}^{(0)}=\mathfrak{k}^{{0}}$.
\begin{ex}
Definition~\ref{defg} is tailored to cover Kac-Moody algebras for arbitrary (possibly infinite) generalised Cartan matrices. The group $\Gamma$ is then to be identified with the root lattice. When the Cartan matrix is finite (and hence $\Gamma$ is finitely generated) the spaces~$\mathfrak{k}^{(i)}$ are already finite dimensional. In this case, one might as well work with the associated $\mathbb{Z}$-grading.
\end{ex}
\subsubsection{}\label{defBN} For a triangularly~$\Gamma$-graded Lie algebra~$\mathfrak{k}$, we set
$$\mathfrak{k}_{<}{:}=\bigoplus_{i<0}\mathfrak{k}^{(i)},\;\;\mathfrak{k}_{\ge}{:}=\bigoplus_{i\ge 0}\mathfrak{k}^{(i)},\;\; N{:}=U(\mathfrak{k}_{<}),\;\; B{:}=U(\mathfrak{k}_{\ge}),\mbox{ and }\; U{:}=U(\mathfrak{k}).$$
All these algebras are naturally~$\Gamma$-graded.
\subsubsection{Semi-infinite characters}\label{DefSemi} Consider a triangularly~$\Gamma$-graded Lie algebra~$\mathfrak{k}$.
Following \cite[Definition~1.1]{Soergel}, see also \cite{Arkhipov}, we call a character $\gamma:\mathfrak{k}^{{0}}\to\Bbbk$ {\bf semi-infinite for~$\mathfrak{k}$} if
$$\gamma([X,Y])\,=\,{\rm{tr}}(\ad_X\ad_Y:\mathfrak{k}^{{0}}\to\mathfrak{k}^{{0}})\quad \mbox{for all $X\in\mathfrak{k}^{(1)}$ and~$Y\in\mathfrak{k}^{(-1)}$}.$$
\subsection{Some bimodules}
Keeping notation as above, we consider a triangularly~$\Gamma$-graded Lie algebra~$\mathfrak{k}$.
\subsubsection{The bimodule~$N^{\circledast}$}\label{Nast}We have the natural $N$-bimodule structure on $N^\ast=\Hom_{\Bbbk}(N,\Bbbk)$, with $(fn)(u)=f(nu)$ and~$(nf)(u)=f(un)$, for~$f\in N^\ast$ and~$u,n\in N$.
The subspace
$$N^{\circledast}:={\mathsf{Hom}}_{\Bbbk}(N,\Bbbk)$$
clearly constitutes a sub-bimodule of~$N^\ast$.
\subsubsection{}
The $(N,B)$-bimodule structure on $N^{\circledast}\otimes_{\Bbbk}B$ is induced from the left $N$-module structure on $N^{\circledast}$ and the right module structure on $B$.
The $N$-bimodule structure on $N^{\circledast}$ and {the} $(N,U)$-bimodule {structure on $U$} yield an $(N,U)$-bimodule structure on $N^{\circledast}\otimes_{N}U$.
\subsubsection{} Now fix an arbitrary character $\gamma:\mathfrak{k}^{(0)}\to\Bbbk$ and define the one dimensional left $B$-module~$\Bbbk_\gamma$ via $\gamma$ and the surjection $\mathfrak{k}_{\ge}\twoheadrightarrow\mathfrak{k}^{(0)}$.
Then we have the $B$-bimodule~$\Bbbk_\gamma\otimes_{\Bbbk}B$, which as a left module is the tensor product of~$\Bbbk_\gamma$ and the left regular module, and whose right $B$-module structure comes from the right regular module~$B$. Next, considering $U$ as a $(B,U)$-bimodule allows us to introduce the $(U,B)$-bimodule
${\mathsf{Hom}}_B(U,\Bbbk_\gamma\otimes B)$.
\begin{lemma}\label{LemLR}We consider arbitrary elements~$n\in N$, $b,b'\in B$ and~$f\in N^{\circledast}$.
\begin{enumerate}[(i)]
\item The $(N,B)$-bimodule morphism
$$\psi:N^{\circledast}\otimes_{\Bbbk}B\,\to\,N^{\circledast}\otimes_{N}U,\qquad\psi(f\otimes b)=f\otimes b $$ is an isomorphism.
\item
The $(N,B)$-bimodule morphism
$$\phi:N^{\circledast}\otimes_{\Bbbk}B\,\to\, {\mathsf{Hom}}_{B}(U,\Bbbk_\gamma\otimes B),\quad\phi(f\otimes b)(b'n)=b'(f(n)\otimes b)$$
is an isomorphism. \end{enumerate}\end{lemma}
\begin{proof}
These are immediate applications of the PBW theorem.
\end{proof}
\subsection{The semi-regular bimodule}
We continue with assumptions and notation as above and now also {\em assume that~$\gamma:\mathfrak{k}^{(0)}\to\Bbbk$ is a semi-infinite character for~$\mathfrak{k}$.} On the space $N^{\circledast}\otimes_{\Bbbk}B$, we can define a right $U$-action through the isomorphism $\psi$ in Lemma~\ref{LemLR}(i){,} and a left $U$-action through the isomorphism $\phi$ in Lemma~\ref{LemLR}(ii).
\begin{prop}\label{PropBM}
The left and right $U$-action on $N^{\circledast}\otimes_{\Bbbk}B$ commute if $\gamma$ is a semi-infinite character.
\end{prop}
\begin{proof}
This results from the same reasoning as in the proof of \cite[Theorem~1.3]{Soergel}.
By construction, we only need to prove that the left $B$-action commutes with the right $N$-action.
For the left $B$-action it suffices to consider the action of~$\mathfrak{k}^{(0)}\oplus \mathfrak{k}^{(1)}$, by \ref{defg}(iii). For~$H\in \mathfrak{k}^{(0)}$, $f\in N^{\circledast}$ and~$b\in B$, a direct computation shows that
\begin{equation}\label{eqHact}H(\phi(f\otimes b))\;=\; -\phi(f\circ\ad_H\otimes b)+\gamma(H)\phi(f\otimes b)+\phi(f\otimes Hb).\end{equation}
Note that~$\ad_H\in \End_{\Bbbk}(N)^{0} \subset{\mathsf{End}}_{\Bbbk}(N)$, so that~$f\circ\ad_H\in N^{\circledast}$ is well-defined. That this left action commutes with the right $N$-action follows as in \cite[Theorem~1.3]{Soergel}.
Now we consider the left action of~$\mathfrak{k}^{(1)}$. By~\ref{defg}(i), $\mathfrak{k}^{(1)}$ is spanned by vectors $X\in \mathfrak{k}^{s}$ for basis elements~$s\in S\subset\Gamma$. For such $X$, by~\ref{defg}(ii) we then find that the dimension of~$[X,\mathfrak{k}_{<}]\cap\mathfrak{k}^0=[X,\mathfrak{k}^{-s}]$ is finite. We take a basis $\{H_i\}$ of this space, which allows us to define $H^i, F\in{\mathsf{End}}_{\Bbbk}(N)$ by
$$nX\;=\; Xn+\sum_i H_iH^i(n)+F(n)\;\mbox{ in $U(\mathfrak{k})$}\qquad\mbox{for all $n\in N$}.$$
A direct computation shows that
\begin{equation}\label{eqXact}X\phi(f\otimes b)\;=\;\phi(f\otimes Xb)+\phi(f\circ F\otimes b)+\sum_i\gamma(H_i)\phi(f\circ H^i\otimes b)+\sum_i\phi(f\circ H^i\otimes H_ib).\end{equation}
That this action commutes with the right $N$-action follows again from the same computation as in \cite[Theorem~1.3]{Soergel}.
\end{proof}
The resulting bimodule in Proposition~\ref{PropBM} will be denoted by~$S_{\gamma}$, and referred to as the {\bf semi-regular} bimodule.
\begin{cor}\label{CorAdH}
Consider the inclusion of~$N$-bimodules~$\iota: N^{\circledast}\hookrightarrow S_\gamma$, corresponding to~$N^{\circledast}\hookrightarrow N^{\circledast}\otimes_{\Bbbk}B=S_\gamma$.
\begin{enumerate}[(i)]
\item The adjoint action of~$H$ on the bimodule~$S_\gamma$ satisfies
$$\ad_H(\iota(f))\;=\;\gamma(H)\iota(f)-\iota(f\circ \ad_H)\quad\mbox{for~$H\in\mathfrak{k}^{(0)}$ and~$f\in N^{\circledast}$.}$$
\item The $(U,N)$-bimodule morphism
$$\xi:U\otimes_N N^{\circledast}\;\to\;S_\gamma,\quad u\otimes f\mapsto u\iota(f),$$
is an isomorphism.
\end{enumerate}
\end{cor}
\begin{proof}
Part~(i) is essentially equation~\eqref{eqHact}.
For part (ii), we will prove that the composition $\sigma:=\phi^{-1}\circ\xi$, with $\phi$ from Lemma~\ref{LemLR}(ii),
$$\sigma:\;B\otimes_{\Bbbk}N^{\circledast}\to N^{\circledast}\otimes_{\Bbbk}B,\quad b\otimes f\mapsto \phi^{-1}(b\phi(f\otimes 1)),$$
is an isomorphism.
Consider arbitrary $X_1,\ldots, X_k\in \mathfrak{k}^{(0)}\cup\mathfrak{k}^{(1)}$. Equations~\eqref{eqHact} and~\eqref{eqXact} imply that, for $f\in N^\circledast$, we have
$$\xi(X_1\cdots X_k\otimes f)\;=\; f\otimes X_1\cdots X_k\;+\;\sum g\otimes u,$$
where $\sum g\otimes u$ stands for a finite sum of elements~$g\otimes u$, with $g\in N^{\circledast}$ and~$u\in B$ such that~$u$ is either
\begin{itemize}
\item a product of strictly fewer than $k$ elements of $\mathfrak{k}^{(0)}\cup\mathfrak{k}^{(1)}$, or
\item a product of exactly $k$ elements of $\mathfrak{k}^{(0)}\cup\mathfrak{k}^{(1)}$, but with strictly more factors belonging to $\mathfrak{k}^{(0)}$ than in~$X_1\cdots X_k$.
\end{itemize}
From this, it is easy to show that~$\sigma$ must be an isomorphism.
\end{proof}
\section{Ringel duality}
Now we return to {a} root-reductive Lie algebra~$\mathfrak{g}$ as in the beginning of Section~\ref{Sec4}.
\subsection{Triangular $\Gamma$-grading and semi-infinite characters}
\subsubsection{}
Using the notation of~\ref{defGamma}, we set
$$\Gamma:=\Gamma_{\Sigma }\cong \mathbb{Z}\Sigma \cong\mathbb{Z}\Phi.$$
The root decomposition~\eqref{rootdec}, where $\mathfrak{h}=\mathfrak{g}^0$, is thus a $\Gamma$-grading.
It is easily checked that this makes $\mathfrak{g}$ a triangularly~$\Gamma$-graded Lie algebra. We then have $\mathfrak{g}_{\ge}=\mathfrak{b}$ and~$\mathfrak{g}_{<}=\mathfrak{n}^-$, and thus $B=U(\mathfrak{b})$ and~$N=U(\mathfrak{n}^-)$, for the algebras introduced in \ref{defBN}.
\subsubsection{}By the above, we can view $\Gamma$ as a subgroup of~$\mathfrak{h}^\ast$ {and write} $\sigma:\Gamma\hookrightarrow \mathfrak{h}^\ast$. In particular, this equips any $\Gamma$-graded vector space $V$ with the structure of a semisimple $\mathfrak{h}$-module, by setting $H(v)=\sigma(\gamma)(H)v$, for any $v\in V_\gamma$ and~$H\in\mathfrak{h}$.
The dual $V^{\circledast}$ of~\ref{defGamma} then corresponds to the finite dual of~$V$ as a semisimple $\mathfrak{h}$-module{, see}~\ref{SecWeightM}.
In particular, we can interpret $N^\circledast$ as in \ref{Nast} in this way by using the adjoint $\mathfrak{h}$-action.
\begin{lemma}\label{Lem2rho}The semi-infinite characters $\gamma\in\mathfrak{h}^\ast$ are those characters $\gamma:\mathfrak{h}\to\Bbbk$, for which~$\gamma(H)=2\rho(H)$ for all $H\in \mathfrak{h}\cap[\mathfrak{g},\mathfrak{g}]${, for $\rho$ as defined} in \ref{rhoshift}.
\end{lemma}
\begin{proof}
For each simple positive root $\alpha$, we consider the Chevalley generator{s}~$E_\alpha$ and~$F_\alpha$, and set $ H_\alpha:=[E_\alpha,F_\alpha]$. By applying the definition in \ref{DefSemi}, we find
$$\gamma([E_\alpha,F_\alpha])=\alpha(H_\alpha)={2\rho(H_\alpha)}.$$
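Indeed, for $H\in\mathfrak{h}$ we have $\ad_{E_\alpha}\ad_{F_\alpha}(H)=[E_\alpha,[F_\alpha,H]]=\alpha(H)H_\alpha$, a rank one operator on~$\mathfrak{h}$, so that
$${\rm tr}(\ad_{E_\alpha}\ad_{F_\alpha}:\mathfrak{h}\to\mathfrak{h})\;=\;\alpha(H_\alpha)\;=\;2\;=\;2\rho(H_\alpha),$$
where the last equality uses that~$\langle\rho,\alpha^\vee\rangle=1$ for each simple root~$\alpha$.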
The conclusion then follows, since $\mathfrak{h}\cap[\mathfrak{g},\mathfrak{g}]$ is spanned by vectors $H_\alpha$ as above, for a Dynkin Borel subalgebra.
\end{proof}
This determines all semi-infinite characters for Dynkin Borel subalgebras in case $\mathfrak{g}$ is simple.
\begin{cor}
For~$\mathfrak{g}$ equal to~$\mathfrak{sl}_\infty$, $\mathfrak{so}_\infty$ or~$\mathfrak{sp}_\infty$, the unique semi-infinite character is $2\rho$.
\end{cor}
\subsection{The AS duality functor} In this subsection, we consider the analogue of the duality functor constructed by Arkhipov and Soergel for (affine) Kac-Moody algebras in \cite{Arkhipov, Soergel}.
We set $\gamma=2\rho$, which is a semi-infinite character by Lemma~\ref{Lem2rho}, and consider the corresponding semiregular bimodule {$S:=S_{2\rho}$}.
\begin{lemma}\label{TwistVerma}
For any $\lambda\in\mathfrak{h}^\ast$, we have an isomorphism $S\otimes_U\Delta(\lambda)\,\cong\,\Delta(-\lambda-2\rho)^{\circledast}$.
\end{lemma}
\begin{proof}
Using the notation of Corollary~\ref{CorAdH} we find that~$S\otimes_U\Delta(\lambda)$ is equal to its subspace
$\iota(N^{\circledast})\otimes \Bbbk_\lambda,$
with $\Bbbk_\lambda$ the one dimensional subspace of~$\Delta(\lambda)$ of weight~$\lambda$. By Corollary~\ref{CorAdH}(i) we have for any $H\in\mathfrak{h}$, $f\in N^\circledast$ and~$v\in \Bbbk_\lambda$
$$H(\iota(f)\otimes v)\;=\; 2\rho(H)(\iota(f)\otimes v)+\iota(f)\otimes Hv-\iota(f\circ\ad_H)\otimes v.$$
Hence, {there is} an isomorphism
\begin{equation}\label{isoresh}{\rm Res}^{\mathfrak{g}}_{\mathfrak{h}}S\otimes_U\Delta(\lambda)\;\cong\; N^{\circledast}\otimes_{\Bbbk}\Bbbk_{\lambda+2\rho},\end{equation}
for the canonical adjoint $\mathfrak{h}$-action on $N^{\circledast}$.
In particular, $S\otimes_U\Delta(\lambda)$ is a weight module, so Lemma~\ref{stupidlemma} implies
$$(S\otimes_U\Delta(\lambda))^{\circledast}\cong \Delta(\mu)$$ for some $\mu\in\mathfrak{h}^\ast$. Equation~\eqref{isoresh} implies that $\mu=-\lambda-2\rho$.
\end{proof}
\begin{lemma}\label{LemFF}
The functor~$\mathscr{F}:\cF^\Delta(\mathfrak{g},\mathfrak{b})\to \cF^{\nabla}(\mathfrak{g},\mathfrak{b}^-)$, obtained by the restriction of~$S\otimes_U-$, is an equivalence of exact categories.
\end{lemma}
\begin{proof}
We start by considering the functor~$S\otimes_U-\,:\cF^\Delta(\mathfrak{g},\mathfrak{b})\to U$-Mod. Since
$${\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}S\otimes_U-\;\cong \;N^{\circledast}\otimes_N{\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}-,$$ Lemma~\ref{stupidlemma} implies that~$S\otimes_U-$ is exact. Lemma~\ref{TwistVerma} then implies that the images of objects in~$\cF^\Delta(\mathfrak{g},\mathfrak{b})$ are contained in~$ \cF^{\nabla}(\mathfrak{g},\mathfrak{b}^-)$. We denote the corresponding exact functor by~$\mathscr{F}$.
By tensor-hom adjunction, we have the right adjoint functor~$\Hom_U(S,-)$. By Corollary~\ref{CorAdH}(ii), we have an isomorphism of functors
$${\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}\circ\Hom_{U}(S,-)\;\cong\; \Hom_{N}(N^{\circledast},{\rm Res}^{\mathfrak{g}}_{\mathfrak{n}^-}-).$$
Hence, this yields an exact functor~$\mathscr{G}: \cF^{\nabla}(\mathfrak{g},\mathfrak{b}^-)\to \cF^\Delta(\mathfrak{g},\mathfrak{b})$. That the adjoint {functors} $(\mathscr{F},\mathscr{G})$ are mutually inverse follows as in \cite[Theorem~2.1]{Soergel}.
\end{proof}
We can compose the functor~$\mathscr{F}$ with the duality functor~$(-)^\circledast$ on $\mathcal{C}(\mathfrak{g},\mathfrak{h})$, {and} denote the corresponding functor by~$\mathscr{D}$.
\begin{cor}\label{CorDfunctor}
The functor~$\mathscr{D}$ yields an exact contravariant equivalence ${\cF}^{\Delta}\;\tilde\to\;{\cF}^{\Delta}$, mapping~$\Delta(\lambda)$ to~$\Delta(-\lambda-2\rho)$.
\end{cor}
\begin{proof}
This is immediate from Lemmata~\ref{LemFF}, \ref{TwistVerma} and Corollary~\ref{CorDua}.
\end{proof}
\subsection{Ringel duality and tilting modules}
We can also compose the functor~$\mathscr{F}$ with the twist by the automorphism $\tau$, or equivalently, the functor~$\mathscr{D}$ with the duality functor $\vee$ on $\mathcal{C}(\mathfrak{g},\mathfrak{h})$ of~\ref{SecWeightM}. By comparing the following proposition with Theorem~\ref{ThmRingelAlg}(iii), we can interpret
$$\mathscr{R}:={}_{\tau}S\otimes_U-\;\cong\; (-)^\vee\circ\mathscr{D}$$ as the Ringel duality functor.
\begin{prop}[Ringel self-duality of $\mathbf{O}$] \label{PropRD}
The functor~$\mathscr{R}$ yields an exact equivalence ${\cF}^{\Delta}\;\tilde\to\;{\cF}^{\nabla}$, mapping $\Delta(\lambda)$ to~$\nabla(-\lambda-2\rho)$.
\end{prop}
\begin{proof}
This is immediate from Lemmata~\ref{LemFF}, \ref{TwistVerma} and Corollary~\ref{CorDua}.
\end{proof}
\begin{rem}
By Theorem~\ref{ThmRingelAlg}(iii), we can thus state that~$\mathbf{O}_{[\![\lambda]\!]}$ is {\em Ringel dual} to~$\mathbf{O}_{[\![-\lambda-2\rho]\!]}$. \end{rem}
The following Proposition represents the combinatorial shadow of the Ringel duality between $\mathbf{O}_{[\![\lambda]\!]}$ and~$\mathbf{O}_{[\![-\lambda-2\rho]\!]}$, see Theorem~\ref{ThmRingelAlg}(iv).
\begin{prop} \label{PropRing2}
Let $\mathsf{C}\subset\mathfrak{h}^\ast$ be a lower finite coideal and~$\nu\in\mathsf{C}$. There exists a module $T_{\mathsf{C}}(\nu)\in \cF^{\nabla}[\mathsf{C}]$ such that, for all $\kappa\in\mathsf{C}$,
$$(T_{\mathsf{C}}(\nu):\nabla(\kappa))\;=\;[\Delta(-\kappa-2\rho):L(-\nu-2\rho)]\qquad\mbox{and}\qquad \Ext^1_{\mathbf{O}}(T_{\mathsf{C}}(\nu),\nabla(\kappa))\;=\;0.$$
\end{prop}
\begin{proof}
We define the upper finite ideal
$$\mathsf{K}\;:=\;\{-\mu-2\rho\,|\, \mu\in\mathsf{C}\}$$
and the module $N:=\mathscr{R}(P_{\mathsf{K}}(\lambda))$, with $\lambda:=-\nu-2\rho$. We use freely the results of Theorem~\ref{ThmProj}. By Proposition~\ref{PropRD}, we have $N\in \cF^{\nabla}[\mathsf{C}]$ with
$$(N:\nabla(\kappa))\;=\; (P_{\mathsf{K}}(\lambda):\Delta(-\kappa-2\rho))\;=\;[\Delta(-\kappa-2\rho):L(\lambda)].$$
By Proposition~\ref{PropRD}, we also have
$$\Ext^1_{\mathbf{O}}(N,\nabla(\mu))\;=\;\Ext^1_{\mathbf{O}}(P_{\mathsf{K}}(\lambda),\Delta(-\mu-2\rho))\;=\;0\quad\mbox{for all $\mu\in\mathsf{C}$}.$$
This concludes the proof.\end{proof}
\begin{ex}
For $\nu\in\mathfrak{h}^\ast$ we set $\mathsf{C}:=\{\lambda\in\mathfrak{h}^\ast\,|\, \lambda\ge\nu\}$. Then $T_{\mathsf{C}}(\nu)=\nabla(\nu)$.
\end{ex}
\begin{rem}
The vanishing of extensions in Proposition~\ref{PropRing2} implies that inside the quotient ${}_{\mathsf{L}}\mathbf{O}$, for $\mathsf{L}=\mathfrak{h}^\ast\backslash\mathsf{C}$, the module $T_{\mathsf{C}}(\nu)$ becomes a tilting module, that is a module with Verma and dual Verma flag. This follows from \cite[Theorem~4]{Ringel} and the results in Section~\ref{Sec4}.
\end{rem}
As a special case of the above remark, we have the following corollary, which also follows from Corollary~\ref{CorEqBlocks}(ii).
\begin{cor}
If $\lambda\in\mathfrak{h}^\ast$ is antidominant, we have a $\mathfrak{g}$-module $T(\nu)$ for each $\nu\in [\![\lambda]\!]$ which is in~$\cF^{\Delta}\cap\cF^{\nabla}$ and satisfies
$$(T(\nu):\nabla(\kappa))\;=\;[\Delta(-\kappa-2\rho):L(-\nu-2\rho)]\quad\mbox{for all $\kappa\in [\![\lambda]\!]$.}$$
\end{cor}
\section{Example: Analysis of a One-Dimensional Hopper}
This hopper with one-dimensional vertical motion demonstrates the applicability of the proposed stochastic analysis methodology. Its quasi-linear nature prevents generalizing the deductions to more complex dynamical systems; nevertheless, promising results are presented in the following parts.
The chosen one-dimensional hopper model is a variation of the Spring Loaded Inverted Pendulum (SLIP) template with constant forcing and damping, called F-SLIP, recently studied by \cite{Tanfener2022}. Using SLIP model variations brings the advantage of simplicity in implementation and analysis, together with applicability to many legged systems as a template \cite{Saranli2003}. Figure~\ref{fig:oneleg} depicts the dynamical model of the F-SLIP template model with one-dimensional vertical motion.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/one_leg_model.eps}
\caption{Illustration of the hopper with one-dimensional vertical motion}
\label{fig:oneleg}
\end{figure}
For the stochastic analysis, the first step is representing the system as an absorbing Markov chain. Since this system has a one-dimensional state representation, determining the Markov states is straightforward. The states of the Markov chain are obtained by discretizing the state space (in this case, only the height values) into 220 equally spaced slices between $0.4$ and $1.5$ m, and by defining an absorbing state to represent height values below $0.4$ m or above $1.5$ m. This slicing is roughly illustrated in Figure~\ref{fig:one_leg_slice}.
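A minimal sketch of this discretization step (assuming NumPy; the bin edges and absorbing-state convention follow the description above, and the function name is illustrative):
\begin{verbatim}
import numpy as np

N_SLICES = 220
edges = np.linspace(0.4, 1.5, N_SLICES + 1)   # 220 equal apex-height slices [m]

def markov_state(h):
    """Map an apex height to a Markov state index; index 0 is absorbing."""
    if h < edges[0] or h >= edges[-1]:
        return 0                              # outside [0.4, 1.5) m -> absorbed
    return int(np.searchsorted(edges, h, side="right"))   # states 1..220
\end{verbatim}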
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{figures/hopper_discrete.eps}
\caption{(on the left) Discretization of states for the one-dimensional hopper. The apex height of the top of the leg is discretized into a finite set of slices. (on the right) An example passage time observation: the passage time is observed as 7, meaning the robot leaves the predetermined region at the $7^{th}$ step.}
\label{fig:one_leg_slice}
\end{figure}
This paper covers three main approaches to estimating the state transition matrix. First, one can calculate the state transition matrix by running the aforementioned systematic experiments covering a wide range of noise values. When slicing the noise values is infeasible, the initial conditions can instead be randomly sampled, in which case the method is referred to as Monte Carlo simulation. Monte Carlo results are expected to be very close to systematic experiments if a sufficient number of experiments are conducted. Second, one can use a linearized version of the system to calculate the mean and variance of the future state-value distribution. This linearization can be conducted either analytically, if available, or numerically through the linearized system matrices (Jacobians) at the respective points. The third approach is proposed to handle nonlinear systems more efficiently without losing the information originating from nonlinearity. For cases where nonlinear behavior is dominant, the estimation method based on the unscented transformation is expected to outperform the linearization-based methods, because the unscented transformation handles nonlinearities by taking the actual system dynamics into account.
\begin{table}[!htb]
\begin{center} \caption{Comparison of different transition matrix estimation methods}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
Method &\# of experiment & $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & $\lambda_4$\\
\hline \hline
Systematic Experiments & $6.6 \cdot 10^6$ &1.0000& 0.9917& 0.8602& 0.7328\\
Monte Carlo & $3.3 \cdot 10^5$ & 1.0000& 0.9916& 0.8595& 0.7343 \\
Unscented Transform & $6.6 \cdot 10^2$ & 1.0000& 0.9877& 0.8534& 0.7252 \\
Linearization &$4.4 \cdot 10^2$& 1.0000 & 0.9877 & 0.8534 & 0.7252\\
\hline
\end{tabular}
\label{tab:BMUcomp}
\end{center}
\end{table}
In order to compare different estimation methods, we can examine the eigenvalues of the state transition matrices, which contain essential information about the Markov chain and, subsequently, the metastable system itself. From the second-largest magnitude eigenvalue, the system-wide mean first passage time is calculated and used as an indicator of stochastic stability. Table~\ref{tab:BMUcomp} shows the first four eigenvalues of the state transition matrices for an impact velocity noise variance of 0.05. The unscented-transformation- and linearization-based methods give almost identical results to each other, deviating only slightly from the Monte Carlo and systematic experiments.
The state-dependent MFPT curve is plotted in Figure~\ref{fig:stateMFPT}, using the state transition matrix estimated by the proposed method. Each initial condition has a particular state-dependent MFPT $m(s_i)$, which quantifies the relative stability of each point. Different from the rimless wheel (RW) in \cite{Byl2009}, the state-dependent MFPT curve of this system is far from flat. Therefore, the system can be inferred to be highly sensitive to initial conditions. The same conclusion can be reached by investigating the eigenvalues: $\lambda_1=1$, $\lambda_2=0.9917$, $\lambda_3= 0.8602$, $ \lambda_4=0.7328$. The value of $\lambda_3$ means that almost 14\% of the contribution to the probability function at the initial condition is lost (``forgotten'') with each successive step. Again, this was not the case for the rimless wheel in \cite{Byl2009}: the RW system has its third eigenvalue near 0.5 and forgets 50\% of the initial condition with each step. As a result, within a few steps, the initial conditions of any wheel beginning in the range of analysis have predominantly evolved into the metastable future state-value distribution, unless it fails. Analogously, the motion of the one-dimensional hopper requires more steps to converge to its metastable distribution, but it eventually does.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/state_dep_mfpt_005.eps}
\caption{State-dependent MFPTs, quantifying the relative stability of each point in state space, for an impact velocity noise variance of 0.05.}
\label{fig:stateMFPT}
\end{figure}
\section{Example: Analysis of Bipedal Walking}
We will showcase our method on a 5-link bipedal walking model, the RABBIT \cite{rabbitRef}. The simulation testbed has a controller design based on optimization of the hybrid zero dynamics (HZD) following the same steps in \cite[Chapter 6.6.2.1]{HZDBook}. As in the implementation in \cite{sovukluk_2022}, we defined the system dynamics as
\begin{equation}
\dot{x}=f(x)+g(x)u,\label{dyn}
\end{equation}
where the ten-dimensional state $x:=[q^T \enspace \Dot{q}^T]^T$ collects the configuration variables $q:=[q_1\enspace q_2\enspace q_3\enspace q_4\enspace q_5]^T$, as shown in Figure~\ref{fig:biped}, along with their velocities.
\begin{figure}[htb]
\centering
\vspace{2mm}
\includegraphics[width=0.35\textwidth]{figures/biped_q.eps}
\caption{Illustration of 5-link bipedal robot}
\label{fig:biped}
\end{figure}
HZD ensures that the relative angles $h_0(q)$ track the desired trajectory $h_d(q)$. The tracking error $y$ is defined as
\begin{equation}
y=h(q):=h_0(q)-h_d(q) \label{hzd}.
\end{equation}
Control input applied by the HZD controller takes the following form
\begin{equation}
u=(\mathcal{L}_g\mathcal{L}_fh)^{-1}(-\mathcal{L}^2_fh+\varv),\label{input}
\end{equation}
where $\mathcal{L}_g\mathcal{L}_fh$ and $\mathcal{L}_f^2h$ represent the Lie derivatives of the tracking error with respect to the system dynamics $f$ and $g$ in \eqref{dyn}. The control input is saturated in the implementation to make the simulation more realistic.
The study in \cite{HZDBook} proves that, with a simple PD controller, the solution of the closed-loop system converges to an exponentially stable periodic orbit of the hybrid zero dynamics. Therefore, we utilize a PD controller for $\varv$ to force $h$ in \eqref{hzd} to zero.
\begin{equation}
\varv= K_{D} \mathcal{L}_{f} h+ K_{P} h
\end{equation}
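For concreteness, the resulting control computation can be sketched as follows (a minimal NumPy sketch, not the exact simulation code; the Lie-derivative terms and the saturation limit \texttt{u\_max} are assumed to be supplied by the model):
\begin{verbatim}
import numpy as np

def hzd_control(h, Lf_h, Lf2_h, LgLf_h, Kp, Kd, u_max=100.0):
    """Feedback-linearizing HZD input u = (LgLf h)^{-1} (-Lf^2 h + v)."""
    v = Kd @ Lf_h + Kp @ h                   # PD outer loop; Kp, Kd diagonal 4x4
    u = np.linalg.solve(LgLf_h, -Lf2_h + v)  # invert the decoupling matrix
    return np.clip(u, -u_max, u_max)         # input saturation, as in the simulation
\end{verbatim}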
Table \ref{tab1} shows different choices for the diagonal entries of the $K_P$ and $K_D$ pairs, used later to analyze the closed-loop behavior with our proposed method.
\begin{table}[htb]
\caption{Different controller parameter pairs; the values in the table represent the diagonal entries of $K_P$ and $K_D$}
\begin{center}
\begin{tabular}{|c||c|c||}
\hline
& $K_P$ & $K_D$\\
\hline\hline
\textbf{$C_1$ }& $[60 \enspace 90\enspace90 \enspace50]$ & $[10 \enspace 20\enspace20 \enspace10]$\\ \hline
\textbf{$C_2$ }& $[ 5\enspace 5\enspace 5\enspace5]$ & $[ 5\enspace 5\enspace 5\enspace5]$\\ \hline
\textbf{$C_3$ }& $[ 40\enspace 40\enspace 40 \enspace 40]$ &0 \\ \hline
\textbf{$C_4$ }& $[ 40\enspace 40\enspace 40 \enspace 40]$ & $[ 1\enspace 1\enspace 1\enspace1]$\\ \hline
\textbf{$C_5$ }& $[10 \enspace 89\enspace83\enspace50]$& $ [5.4 \enspace21\enspace 21\enspace 9]$ \\ \hline
\hline
\end{tabular}
\label{tab1}
\end{center}
\end{table}
The first step towards the stochastic analysis of bipedal locomotion is building the reachable state space, either by Monte Carlo sampling or by meshing the space with predefined ranges \cite{Benallegue2013}. In the case of a 5-link bipedal robot, each Markov state $s_i$ can be chosen as a $10\times1$ vector containing each link's angular position and velocity. Nevertheless, specifying the Markov states for a 10D space is too complicated, and the underactuated 5-link bipedal testbed needs a different approach for stochastic analysis.
The system's controller follows a trajectory such that the unactuated link shows the desired behavior, i.e., the actuated degrees of freedom indirectly control the body angle. As shown in Figure~\ref{fig:biped}, the positions and velocities of the stance and swing legs of the walker are defined relative to the body of the system. This coordinate configuration supports the idea that the high-bandwidth actuated joints are expected to stay around their desired trajectories as long as the unactuated joint is close to its desired evolution. That is why observing the body angle provides strong information about the evolution of the other joints and the stability of the locomotion. Furthermore, for the construction of the reachable state space, the underactuated body angle $s_i=q_5^i$ and body angular velocity $s_i=q_{10}^i$ are suitable candidates on which to focus, search the vicinity of the fixed point, and define the reachable limits, assuming noise on all five velocity states. Nevertheless, this model reduction should be justified quantitatively; the next section explains the model reduction process for this 5-link bipedal system.
\subsection{Extension to the High-Dimensional Systems}
The main problem in extending this stochastic analysis to multidimensional systems stems from the requirement of meshing multidimensional state spaces. In Saglam's studies \cite{Saglam2015a}, meshing the hybrid zero dynamics (HZD) surface is presented as an alternative to finding and meshing the reachable state space for a bipedal walker operated by an HZD controller. In this way, a switching mechanism between multiple HZD controllers becomes possible to increase stability. Despite the effort to decrease the complexity of the meshing process, the issue still exists and grows with the increasing degrees of freedom and the variety of noise sources.
Linearization is a candidate method for reducing the system order in order to identify and control systems. Numerical Jacobian calculations with variable step size can be conducted as in \cite{NumMATLAB}, or other methods such as analytical linearization and forward or central difference approximations can be used, noting that the linearization method will influence the result. After linearization, by investigating the eigenvector associated with the largest magnitude eigenvalue, one can identify the state to which the system is most sensitive. The eigenvalues of the system give a picture of stability around the chosen operating point. The selected state can be used as the indicator state for the stability conditions in the stochastic stability analysis. Nevertheless, calculating the linearized system matrix with variable step size becomes more difficult as the dimension increases, and choosing a fixed step size to tackle this complexity diminishes the accuracy of the calculation.
Alternatively, this search for the most vulnerable state can also be performed with stochastic tools.
This study features PCA to reduce the number of dimensions needed to assess the legged system's stochastic stability. PCA is used as a preprocessing technique for dimensionality reduction: it aims to increase interpretability while minimizing information loss \cite{Jolliffe2016}, and it allows the use of previously collected data rather than conducting experiments dedicated to numerical simulation. A PCA biplot converts the correlations among all variables into a 2D graph and allows one to comment on the features in the dataset. The mathematical details of the method are beyond the scope of this paper; more detailed information can be found in \cite{Jolliffe2016}.
The most critical limitation of PCA is its reliance on linear models and sensitivity towards outliers. Because PCA is a linear projection, it assumes a linear relationship between features and cannot capture the nonlinear dependencies. Its goal is to find the directions (i.e., principal components) that maximize the variance in a dataset.
We employed PCA to select ``the most important state'' of a multi-dimensional legged system. In this paper, we assume the system's input-output relation is known. While having access to the system that generates the dataset, analyzing the dynamics based on the data alone might seem controversial; however, this model reduction method can be generalized to experimental legged setups when this methodological study proceeds to an actual implementation.
The bipedal walker is run for 100 steps, and a $10 \times 100$ dataset is generated. Next, using the built-in function for PCA in MATLAB, the dataset is visualized. Figure~\ref{fig:biped_pca} demonstrates the PCA biplot, scree plot, and score plot for the dataset. From the scree plot, it can be concluded that the first principal component (PC1) is enough to describe the data. Investigating the states' projections onto the principal components, one can see that the $10^{th}$ state has the largest projection on the first principal component, meaning that the motion can be roughly characterized using the $10^{th}$ state.
Both linearization followed by eigenvector analysis and PCA single out the $10^{th}$ state, because both rely on the linearity assumption for the state relations. Both produced results consistent with our intuition.
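A minimal sketch of this reduction step (assuming NumPy; \texttt{X} is the $10\times100$ matrix of states over 100 steps described above):
\begin{verbatim}
import numpy as np

def dominant_state(X):
    """Index of the state with the largest loading on PC1, plus scree fractions."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each state over the steps
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)               # scree-plot variance fractions
    pc1 = U[:, 0]                                 # loadings of the 10 states on PC1
    return int(np.argmax(np.abs(pc1))), explained
\end{verbatim}
For the walking dataset above, this procedure is expected to return index 9, i.e., the $10^{th}$ state.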
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/pca_biped_velocity_noise_1e_3.eps}
\caption{Principal Component Analysis for the 5-link bipedal system}
\label{fig:biped_pca}
\end{figure}
\subsection{Stochastic Analysis}
After choosing the Markov chain states, we need to build the state transition matrices by following the proposed methodology. Figure~\ref{fig:Estimation} supports the claim that, under Gaussian noise, the future state-value distributions (shown as histograms) also approach a Gaussian shape, and that the estimation captures the mean and variance of each of the ten states under multiple sources of noise.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.9]{figures/ten_state_estimation.eps}
\caption{Comparison of experimental results and estimation with the unscented transformation. Each subplot represents the future state-value distribution of the 5-link bipedal robot under disturbance. The histogram of experimental results is a product of $10^4$ experiments, whereas our proposed method runs 41 simulations for the estimation.}
\label{fig:Estimation}
\end{figure}
Building a state transition matrix with Monte Carlo simulations requires many repeated simulations (for Figure~\ref{fig:Estimation}, $10^4$ for one Markov state and $1.15\times10^6$ in total), each initialized at the midpoint of a Markov state of the body angular velocity. If we choose to build our matrices with Monte Carlo experiments, then, due to their random nature, no two matrices will be the same; increasing the number of trials per Markov state, for example from $10^4$ to $10^6$, leads to a lower variance among the produced matrices. In other words, Monte Carlo simulations result in different matrices for each experiment set and require significant computational power and time. Clearly, Monte Carlo experiments are infeasible for high-dimensional cases, as also stated in \cite{Benallegue2013}. Our method instead incorporates prior knowledge of the noise characteristics into the choice of initial conditions used to estimate each future state-value distribution. It requires $41$ experiments for each Markov state in the 5-link bipedal walking case, corresponding to the $2n+1$ sigma points of the augmented state.
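For comparison, estimating a single row of the transition matrix by Monte Carlo sampling can be sketched as follows (NumPy; the one-step apex-to-apex map \texttt{step} and the discretization map \texttt{markov\_state} are hypothetical placeholders):
\begin{verbatim}
import numpy as np

def mc_row(step, markov_state, s_i, n_states, n_trials=10_000,
           noise_var=1e-3, rng=None):
    """Estimate row i of T by repeated simulation from the midpoint of s_i."""
    rng = np.random.default_rng() if rng is None else rng
    row = np.zeros(n_states)
    for _ in range(n_trials):
        w = rng.normal(0.0, np.sqrt(noise_var))   # zero-mean Gaussian disturbance
        row[markov_state(step(s_i, w))] += 1.0
    return row / n_trials                         # empirical transition probabilities
\end{verbatim}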
The means and variances of the estimated PDFs are plotted in Figure~\ref{fig:compmv} in order to assess the estimation performance. Around the region where the nonlinear behavior is dominant ($[-1.12,-0.95]$ rad/s), the proposed method with the unscented transformation works better than the linearization-based method, as expected. In the linear region, both methods perform satisfactorily; however, the linearization-based method works better there. This may be caused by the asymmetrical shape of the future state-value distribution, which can be observed in Figure~\ref{fig:Estimation}. The estimation can be improved up to some level by tuning the weights in the unscented transformation. After all, the aim is to find an approach with fewer assumptions that is generalizable to highly nonlinear systems. Therefore, the unscented transformation is adopted for further investigation of the stochastic behavior of the bipedal system.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/comp_mv}
\caption{Comparison of the means and variances of the future state-value PDFs}
\label{fig:compmv}
\end{figure}
Figure~\ref{fig:UTstateTranMat} depicts the constructed state transition matrix for the body angular velocity together with the deterministic return map. It represents the future state-value distribution of the body angular velocity when all the velocity states are subject to noise with known characteristics.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.55]{figures/returnmap_with_ut.eps}
\includegraphics[scale=0.55]{figures/metas_neig_biped.eps}
\caption{(on the left) State transition matrix for the body angular velocity (i.e., the stochastic return map) visualized together with the deterministic return map of the body angular velocity.
The colorful surface plot represents the $115\times115$ state transition matrix of the body angular velocity for a zero-mean Gaussian noise with a variance of $10^{-3}$ on each state. The black line represents the deterministic return map of the body angular velocity. The controller is $C_1$ in Table \ref{tab1}. (on the right) Metastable neighborhood of state transitions for the bipedal system}
\label{fig:UTstateTranMat}
\end{figure}
The state transition matrix represents a stochastic return map and includes valuable information about the system behavior. For example, Figure~\ref{fig:UTstateTranMat} shows that the deterministic return map tends to be linear over some intervals, including around the fixed point. This means that, near the fixed point, the system can be assumed linear; under stochastic disturbance, estimation with the linearity assumption works very well there, as seen in Figure~\ref{fig:compmv}. In addition, both the deterministic and stochastic return maps indicate that the linearity assumption is not valid over a small region, so the body's behavior cannot be generalized as linear. The stochastic return map conveys the same facts as the deterministic one and adds the future state-value variance information for different initial conditions of the body angular velocity. The metastable neighborhood map in Figure~\ref{fig:UTstateTranMat} focuses on relating the probabilities of successive steps: it indicates that if the robot starts to walk from an initial condition (whose attractor is this fixed point), its states at the next steps will most likely be around the fixed point unless it fails.
In addition, we can infer our system's sensitivity to initial conditions by investigating the eigenvalues of the state transition matrix of the absorbing Markov chain: $\lambda_1=1$, $\lambda_2=0.9775$, $\lambda_3= 0.3552$, $ \lambda_4=0.2918$. The value of $\lambda_3$ means that nearly $65\%$ of the contribution to the probability function at the initial condition is lost ("forgotten") with each successive step. As noise variance increases, we observed that $\lambda_3$ decreases, which means the system tends to forget its initial condition more.
As stated before, the eigenvector associated with the second-largest magnitude eigenvalue is used to calculate the metastable distribution. The metastable distribution in Figure~\ref{fig:biped_eigenvec} indicates that if the body angular velocity starts from a random point, it will most likely end up around $[-0.9, -0.8]$ rad/s unless the robot fails.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/biped_metas_dist_and_eigvec.eps}
\caption{Metastable distribution for the bipedal system for noise variance of $10^{-3}$}
\label{fig:biped_eigenvec}
\end{figure}
The state-dependent MFPT vector in Figure~\ref{fig:state_dep_mfpt} also shows the initial conditions from which the system is more likely to maintain its locomotion under noise. The curves in Figures~\ref{fig:biped_eigenvec} and \ref{fig:state_dep_mfpt} convey the same information.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/state_dep_mfpt_biped.eps}
\caption{State-dependent MFPTs for the bipedal system for a noise variance of $10^{-3}$}
\label{fig:state_dep_mfpt}
\end{figure}
The feasibility of building the state transition matrix with the unscented transformation allows us to assess different controllers and analyze the system's stability under different noise levels. Each velocity state of the 5-link biped is subject to noise with the same variance. Figure~\ref{fig:MFPTcomparison} shows the dependence of the system-wide MFPT in \eqref{mfpt} on the noise standard deviation for the different controller parameters in Table \ref{tab1}. MFPT values over $10^{14}$ are not reliable due to MATLAB's numerical limits. From the figure, it can be deduced that the first controller $C_1$ is more stable than the other tested controllers, and that the controller $C_5$ shows the least stable behavior. Control input saturation prevents making this observation without conducting the stochastic analysis.
\begin{figure}[htb]
\centering
\includegraphics[scale=0.7]{figures/controller_comparison_input_limit.eps}
\caption{System-wide MFPT for the bipedal walker with respect to standard deviation of state noises, $\sigma$, obtained by unscented transformation method}
\label{fig:MFPTcomparison}
\end{figure}
\section{Introduction}
Analytical models do not represent real systems perfectly, because phenomena like impact and friction always introduce discrepancies into the actual implementation. In exchange for their agility and capabilities, legged robots are much more vulnerable to the stochastic effects of unknown terrains than other mobile robotic systems. They eventually encounter abnormalities such as slight differences in elevation (e.g., holes, rocks) or contaminated surfaces that drastically affect friction and impact dynamics. Accordingly, controllers should take external noises into account.
Rhythmic gaits used for legged locomotion include walking, running, galloping, trotting, and pronking. The stance dynamics of those gaits are related to the restricted three-body problem and do not admit closed-form solutions \cite{SaranliHumanoid,poincareChaos}.
Poincar\'e's return map analysis is frequently used to simplify the periodic trajectories' limit cycles \cite{Grizzle2001}.
This simplification handles the stability characteristics deterministically and overlooks the stochastic impacts of external disturbances.
Deterministic limit cycle stability analyses, like the eigenvalue analysis based on apex-to-apex numerical Jacobians \cite{Stride,Er2022}, are frequently utilized in the literature to cover the stability characteristics of legged systems.
However, those deterministic numerical methods are frequently difficult and expensive to apply, requiring a calculation around each fixed point.
Furthermore, since systems no longer have actual limit cycles under persistent disturbances, such analyses are theoretically incorrect in the presence of stochastic disturbance. For instance, the MARLO robot was able to walk in a lab setting, but it managed only a few steps before falling when tested outside, due to a slight incline in the walkway \cite{Griffin2017}. To increase stability, it is necessary to take the characterization of walking's stochastic dynamics into account during system identification and controller design. In the robotics community, this is a significant but understudied approach that is primarily handled as a robust control problem \cite{Dai2012, Griffin2017} rather than as an examination of stochastic dynamics.
Legged locomotion is characterized by the dynamic interactions between the feet and the contact surface. The term dynamic locomotion usually stands for an unbalanced walking cycle leading to a stable gait behavior, and underactuated legged systems leverage the underactuation to achieve dynamic locomotion. Analogous to the trade-off between stability and agility in control theory, there exists an essential relationship between stability and maneuverability for legged systems \cite{Aoi2016}. Under disturbance, as in many stochastic dynamical systems, legged robots exhibit long-living, locally stable behaviors up to the point where they cannot handle the external effects anymore. Once the disturbed system's states enter a region with a different attractor, the system behavior irreversibly adjusts to the new local dynamics. In simpler words, a legged robot can run for quite some time, but it will eventually fall due to external stochastic effects. Since such robots eventually leave the locally stable gait behavior, they cannot be considered ``stable'' (at least asymptotically). On the other hand, they obviously operate for long periods of time, so calling them ``unstable'' would be mistaken. A need arises for a new type of classification in control theory. In line with several studies in the locomotion community \cite{Byl2009,Byl2008a,Ankarali2014}, we believe that metastability is a well-defined candidate for describing this phenomenon.
In the legged counterpart of a metastable system, the metastable equilibrium corresponds to dynamic locomotion and the absolute minimum to falling to the ground. Hence, walking is well characterized as a metastable process. In the literature, Byl and Tedrake introduced this conceptual connection and utilized the metastability concept to quantify the stochastic stability of rimless-wheel and compass-gait walking on rough terrain \cite{Byl2008thesis, Byl2009}. They also utilized stochastic optimization to improve the overall stability of legged systems.
Byl and Tedrake's metastable limit cycle analysis methodology treats walking systems through their closed-loop return map dynamics. Following their methodology, we first represented the return maps as Markov chains. The states of this Markov chain consist of mesh points (and their vicinity) of the state space of the particular legged system. We then obtained the state transition matrices of this Markov chain from Monte Carlo simulations by integrating the system simulations from each mesh point for different values of terrain slope. Monte Carlo analysis is a common method to estimate the evolution of a probability distribution over time. The method is based on selecting a sufficient number of representative random samples and simulating them with the dynamical system model. The result is the probability density as an estimate of the system output.
This experimental future state-value distribution can be visualized as a histogram and used for assessing the estimation error. As the number of mesh points taken as initial conditions for the simulations increases, the accuracy of the future state-value distribution increases.
Referring to Byl's studies \cite{Byl2008a}, Benallegue and Laumond questioned the computational feasibility of the prior method for complex walking systems and proposed a solution for legged systems with high-dimensional states using the limit-cycle property of stable walking \cite{Benallegue2013}. The biggest problem in the former approach is its computational complexity, which stems from two aspects: simulation time and meshing methodology. First, obtaining the state transition matrix requires numerous simulations to reach accurate results. For instance, for a 5-link bipedal walking simulation, a one-step simulation takes up to 0.5 seconds in MATLAB. Running $10^6$ simulations lasts more than five days, which can be reduced to less than one day by parallel programming, but it will still be too long to run enough experiments to build a smooth stochastic return map. In addition to the infeasibility of conducting thousands of experiments, if experimental setups are in the loop, another problem arises: physical damage. The more experiments we conduct, the more likely the system is to fall due to its metastable nature, and therefore the more likely it is to get damaged. Second, for lower-dimensional (1-DOF or 2-DOF) systems with one-dimensional noise, meshing the state space is as easy as slicing a range of values or meshing a surface. However, as the number of dimensions increases, meshing a cube or a 4D structure becomes more complex, even impossible. For example, for a 5-link bipedal robot, one must discretize the entire ten-dimensional state space along with the noise space. If the noise comes from only one source, the noise space discretization is relatively trivial, whereas in the case of multiple noise sources the former method fails to present an efficient way to analyze the dynamics. For 3D walking, the required degrees of freedom increase quickly. Subsequently, Saglam and Byl introduced an improved meshing technique \cite{Saglam2014b}, but unfortunately this improvement did not break the curse of dimensionality. On the other hand, Saglam and Byl's studies extended the previous breakthrough of treating legged systems as metastable systems through a compilation of the methodology \cite{Saglam2014a}, a new meshing technique \cite{Saglam2015a}, and optimal controller designs \cite{Saglam2018}. We now propose a new methodology to further improve metastable analysis by borrowing estimation methods from stochastic tools.
Kalman filtering \cite{Kalman1960} is a well-known estimation method widely used in robotic platforms.
The Extended Kalman Filter (EKF) extends the classic Kalman Filter to nonlinear systems, where the nonlinearity is approximated using the first- or second-order derivatives. It tries to capture nonlinearity using a Taylor expansion around a local point.
On the other hand, the nonlinear extensions of the Kalman Filter consist of nonlinear propagation of probability densities. The sample-and-propagate methods can be generalized as perturbation methods, using samples as initial values, which are perturbations from the mean trajectory. Sampling transforms the continuous state domain into a discrete set of points. The EKF fails to conduct nonlinear propagation because it is based on linearization and partial derivatives instead of propagation \cite{Grewal2015}.
The Unscented Kalman Filter is a special case of sigma-point filters introduced to improve filtering performance. The unscented transformation \cite{Julier2004, Wan2006} is a powerful tool to estimate the statistics of a random variable that undergoes a nonlinear transformation \cite{Safaoui2021} and is used in many applications, ranging from sensor fusion for state estimation \cite{Choi2021} to an unscented Kalman observer \cite{Daid2021}. Moreover, in recent studies, Sieberg et al. combined an artificial neural network with confidence-level adjustment and presented a hybrid state estimation structure using the unscented transformation \cite{Sieberg2021}. We borrowed this useful stochastic tool to make informed choices of initial conditions for the stochastic analysis simulations. Eliminating the computational complexity, we can utilize the mean first passage time metric to characterize the stochastic stability of high-dimensional underactuated nonlinear systems. Additionally, unlike the previous studies \cite{Byl2009, Saglam2014b}, estimation with the unscented transformation allows us to deal with multiple sources of uncertainty in higher-dimensional systems.
Even though the unscented transformation helps estimate the future state-value distribution for nonlinear systems, it does not track the higher-order moments of the estimated distributions: many possible distributions share the same mean and variance while having distinct higher-order moments. Sample-and-propagate methods can capture the exact and unique solution only if the transformation is linear; in the presence of nonlinearities, we cannot reach the exact solution. Because there is no unique solution, assessing the estimation performance of this type of estimator is also tricky. Therefore, tuning the parameters of the filter to reach a better estimation can be done by comparing the estimation with the results of Monte Carlo experiments. This tuning procedure gives the proper parameters only for the particular nonlinear system at hand. Tuning the weights in the unscented-transformation-based estimation affects the estimated results, so one should be careful when choosing those parameters. This paper assumes that the future state-value distribution is Gaussian, so the distribution is built as a Gaussian with the estimated mean and variance. We compared the estimated means and variances with the results from Monte Carlo experiments.
This study proposes a more efficient estimation method for metastable system properties based on the unscented transformation; therefore, there is no need to conduct many experiments through Monte Carlo sampling. We implemented our method to examine the stochastic stability of a one-dimensional hopper and an idealized 5-link biped simulation with a hybrid zero dynamics controller under disturbance. The one-dimensional hopper serves as an example of a simple legged system; after observing satisfactory estimation results, we extended the methodology to a higher-dimensional system. The 5-link walker model is representative of robot walkers due to its nonlinear, underactuated, and hybrid nature.
\section{Conclusion}
\input{conclusion.tex}
\input{references}
\end{document}
\section{Methodology}
\subsection{System as an Absorbing Markov Chain}
Markov chains are stochastic models describing a sequence of possible events in which the probability of each event depends only on the previous event \cite{Gagniuc2017}. One way of performing metastable limit cycle analysis in stochastic rhythmic dynamical systems is by representing the discrete-time system dynamics as a Markov process.
In nature, we observe that behaviors including walking, running, and many other types of legged locomotion are periodic. System dynamics defined on a periodic return map allow us to discretize the system behavior by Poincar\'e maps. To build the Poincar\'e maps, we chose the apex point during the gait, where the vertical velocity of the robot is zero.
We represented the apex-to-apex dynamics as a discrete system for the stochastic stability analysis.
\begin{equation}
\begin{aligned}
\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k},\mathbf{w}_{k})
\end{aligned}
\end{equation}
where $\mathbf{x}_{k}$ and $\mathbf{w}_{k}$ represent the states and noises at time step $k$, respectively. We assume the noise values to be drawn from a Gaussian distribution with zero mean and covariance $R_\omega$.
The stochastic state transition dynamics can also be modeled with an infinite Markov chain; however, it is in general approximated as a finite-state Markov process via discretization of the state space into a finite set of states \cite{Byl2009}. State discretization allows us to compute and analyze a finite-state Markov chain model of the system. The state space is divided into $N$ pieces assigned to Markov states. The state transition matrix $\mathbf{T}_{N\times N}$ of this Markov chain collects the transition probabilities between the $N$ predefined states. The probability of transition from state $i$ to state $j$ is,
\begin{equation}
\begin{aligned}
\mathbf{T}_{ij}=\mathbb{P}(\mathbf{x}_{k+1}=s_j|\mathbf{x}_{k}=s_i)\label{nonabsorbingTranProb}.
\end{aligned}
\end{equation}
An absorbing Markov chain is a Markov chain with at least one state that is impossible to leave.
In legged locomotion, we can consider the absorbing state to collect all configurations where the robot falls \cite{Byl2009}, or we can simply assign the state variables associated with some unwanted behaviors to the absorbing state, assuming that recovery from these is impossible.
Besides, we can specify a particular region in which we would like to operate and assign all other configurations to the absorbing state. Assuming $s_1$ is the absorbing state, we can state the following,
\begin{equation}
\mathbf{T}_{11} = 1 \quad \text{and} \quad \mathbf{T}_{1j}=0 \quad \text{for} \quad j \neq 1.
\end{equation}
An absorbing Markov chain has one eigenvalue at $\lambda_1=1$, and the stationary distribution of the chain is the first eigenvector of $\mathbf{T}$ in \eqref{stateTran}, which is the first unit vector; this means the system will eventually stop at the first (absorbing) state. The second-largest magnitude eigenvalue of the matrix $\mathbf{T}$ corresponds to the largest magnitude eigenvalue of $\Bar{\mathbf{T}}$ and is related to the metastable characteristics of the system. The eigenvector associated with the largest magnitude eigenvalue of $\Bar{\mathbf{T}}$ describes the long-living (metastable) distribution of the state.
\begin{equation}
\mathbf{T}=\begin{bmatrix}\mathbf{1}_{1\times 1}&0_{1\times N-1}\\\mathbf{T}_{j1_{N-1\times 1}}&\Bar{\mathbf{T}}_{N-1\times N-1} \end{bmatrix}\label{stateTran}
\end{equation}
The state transition matrix of the Markov chain can be built via Monte Carlo simulations. According to the law of large numbers, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer as more trials are performed \cite{Dunn2012}. However, building the state transition matrix with Monte Carlo simulations requires too many trials (simulations or experiments), which is highly inconvenient for complex legged systems. Thus, we propose a method based on the unscented transformation, choosing sigma points using prior knowledge of the noise characteristics and simulating the system accordingly. As a result, the unscented transformation yields a more efficient estimation of the state transition matrices.
\subsection{State-Value Distribution Estimation}
\subsubsection{Linearization Based Estimation}
\input{linearization_method}
\subsubsection{Unscented Transformation Based Estimation}
The fundamental motivation behind using the unscented transform is that approximating a probability distribution is easier than approximating an arbitrary nonlinear function \cite{Julier2004}. Instead of approximating the system equations by linearization, we calculate sigma points and use them in the unscented transformation to directly approximate the output probability density functions. The underlying assumption of the probability distribution estimation is Gaussian noise and Gaussian output distributions. The central limit theorem states that the sampling distribution approaches a normal distribution as the sample size increases \cite{Fischer2011}; it can therefore be assumed that, under the exposure of multiple noise sources, the future state-value distributions will approach a Gaussian distribution.
The formulation steps are similar to those of the Unscented Kalman Filter \cite{Julier2004, Wan2006}. First of all, the formulation for the generalized case of nonadditive noise requires an augmented state definition $\mathbf{x}_k^{a}$ with the system states $\mathbf{x}_k$ and zero-mean noises $\mathbf{w}_{k}$,
\begin{equation}
\mathbf{x}_k^{a}=\begin{bmatrix}\mathbf{x}_k^T & \mathbf{w}_{k}^T\end{bmatrix}^T\label{augmented}.\end{equation}
Previously known nonlinear system dynamics $\mathbf{f}$ and the noise variance characteristics $\mathbf{P}_{k}$ are,
\begin{equation}
\begin{aligned}
\mathbf{x}_{k+1}=\mathbf{f}(\mathbf{x}_{k}^{a}), \quad
\mathbf{P}_{k}=\begin{bmatrix}\varepsilon&0\\0&R_w\end{bmatrix},
\end{aligned}
\end{equation}
where $\mathbf{P}_{k}$ contains the known variances as diagonal entries, and $R_w$ represents the noise variances. Since our initial states $\mathbf{x}_{k}$ are deterministic, their variance will be zero; however, for computational purposes, we specified their variance as a very small value $\varepsilon$. If we take the state variance as zero, that will cause a problem in the matrix square root step.
Sigma points represent the chosen initial conditions, so that the output of the nonlinear system at these initial conditions provides the information related to the output distribution. The sigma point set $\mathbf{X}_{k}$ in \eqref{sigmaset} contains $2n+1$ sigma points $\mathbf{x}_{k}^j$ and their associated weights $\mathbf{W}^j$, chosen so that their mean is $\mathbf{x}_{k}^{a}$ and their variance is $\mathbf{P}_{k}$, where $n$ is the dimension of the augmented state.
\begin{equation}
\begin{aligned}
\mathbf{X}_{k}&=\{ ( \mathbf{x}_{k}^j , \mathbf{W}^j) | \quad & j=0\dots 2n \}\\
\mathbf{x}_{k}^0&=\mathbf{x}_{k}^{a}, \quad& -1<\mathbf{W}^0<1 \\
\mathbf{x}_{k}^j&=\mathbf{x}_{k}^{a} + A_j, \quad& j= 1 \dots n\\
\mathbf{x}_{k}^j&=\mathbf{x}_{k}^{a} - A_{j-n}, \quad& j= n+1 \dots 2n\\\mathbf{W}^j&=\frac{1-\mathbf{W}^0}{2n}, \quad& j= 1 \dots 2n\\
A_j&=\left(\sqrt{\frac{n}{1-\mathbf{W}^0}\mathbf{P}_{k}}\right)_j
\end{aligned}\label{sigmaset}
\end{equation}
Here $A_j$ denotes the $j$th column of the matrix square root. The weight of the first sigma point, $\mathbf{W}^0$ in \eqref{sigmaset}, controls the proximity of the sigma points to their mean: if $\mathbf{W}^0 \leq 0$, the sigma points tend to be closer to the mean, whereas if $\mathbf{W}^0 > 0$ they tend to be further from it.
At the model forecast step \eqref{forecast}, the transformed points ($\mathbf{x}_{k+1}^{f,j}$) are produced by propagating each sigma point through the nonlinear system and used to compute the mean and covariance of the forecast value of $\mathbf{x}_{k+1}$.
\begin{equation}
\mathbf{x}_{k+1}^{f,j}=\mathbf{f}(\mathbf{x}_{k}^{j})\quad j= 0 \dots 2n \label{forecast}
\end{equation}
After forecasting, the estimated mean and variance are computed as
\begin{equation}
\begin{aligned}
\boldsymbol{\mu}_{k+1}&=\sum^{2n}_{j=0} \mathbf{W}^j \mathbf{x}_{k+1}^{f,j},\\
\mathbf{P}^f_{k+1}&=\sum^{2n}_{j=0} \mathbf{W}^j \{ \mathbf{x}_{k+1}^{f,j} -\boldsymbol{\mu}_{k+1} \} \{ \mathbf{x}_{k+1}^{f,j} -\boldsymbol{\mu}_{k+1} \}^T.
\end{aligned}
\end{equation}
Using the estimated output mean $\boldsymbol{\mu}_{k+1}$ and output variance $\mathbf{P}^f_{k+1}$, we can construct the estimated normal distribution ${\mathbf{X}_{output}\sim\mathcal{N}(\boldsymbol{\mu}_{k+1},\mathbf{P}^f_{k+1})}$.
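The whole estimation above can be condensed into the following sketch (NumPy; \texttt{f} is the apex-to-apex return map acting on the augmented state, and \texttt{W0} is the tunable central weight):
\begin{verbatim}
import numpy as np

def ut_estimate(f, x_aug, P, W0=0.0):
    """Propagate 2n+1 sigma points through f; return output mean and covariance."""
    n = x_aug.size
    A = np.linalg.cholesky((n / (1.0 - W0)) * P)   # matrix square root
    sigmas = np.vstack([x_aug, x_aug + A.T, x_aug - A.T])
    W = np.full(2 * n + 1, (1.0 - W0) / (2 * n))
    W[0] = W0
    Y = np.array([f(s) for s in sigmas])           # model forecast step
    mu = W @ Y
    P_f = (W[:, None] * (Y - mu)).T @ (Y - mu)     # weighted outer products
    return mu, P_f
\end{verbatim}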
\subsection{Analysis and Performance Metric}
The core purpose of this paper is to assess the system's stability in the presence of noise. By investigating the state transition matrix, we can infer the stochastic characteristics and comment on the effect of noise for each configuration. We calculate the transition probabilities of the nonabsorbing states in \eqref{nonabsorbingTranProb} as follows,
\begin{equation}
\begin{aligned}
\mathbf{T}_{ij}=&\mathbb{P}(\mathbf{x}_{k+1}=s_j|\mathbf{x}_{k}=s_i)\\=&\mathbf{F}_{\mathbf{X}_{output}}(\frac{s_{j+1}+s_j}{2})-\mathbf{F}_{\mathbf{X}_{output}}(\frac{s_{j}+s_{j-1}}{2})
\end{aligned}
\end{equation}
where $\mathbf{F}_{\mathbf{X}_{output}}$ represents the cumulative distribution function of the output distribution.
The transition probability to the absorbing state equals the total probability of not landing in any nonabsorbing state:
\begin{equation}
\begin{aligned}
\mathbf{T}_{i1}&=\mathbb{P}(\mathbf{x}_{k+1}= s_1|\mathbf{x}_{k}=s_i)\\&=1-\sum_{j=2}^{N} \mathbb{P}(\mathbf{x}_{k+1}= s_j|\mathbf{x}_{k}=s_i)
\end{aligned}
\end{equation}
Finally, setting transition probabilities from the absorbing state to zero, we complete the state transition matrix structure in \eqref{stateTran}.
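Putting the last three equations together, one row of $\mathbf{T}$ can be filled in as follows (a sketch assuming NumPy and SciPy; \texttt{edges} holds the bin boundaries, i.e., the midpoints $(s_j+s_{j+1})/2$ in the interior plus the limits of the analyzed region):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def transition_row(mu, var, edges):
    """Row of T for an estimated N(mu, var) output; entry 0 is absorbing."""
    cdf = norm.cdf(edges, loc=mu, scale=np.sqrt(var))
    p_bins = np.diff(cdf)                 # mass falling in each nonabsorbing bin
    return np.concatenate(([1.0 - p_bins.sum()], p_bins))
\end{verbatim}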
The mean first passage time is a stability metric for metastable systems and is extracted from the second-largest magnitude eigenvalue of the state transition matrix $\mathbf{T}$. The system-wide mean first passage time $M$ is defined as follows:
\begin{equation}
M=\frac{1}{1-\lambda_2}.\label{mfpt}
\end{equation}
We can assess the performance of different controllers through the closed-loop system's mean first passage time values. In addition, we can compare the closed-loop system behaviors with respect to different levels of noise variance.
State-dependent mean first passage time curves are another property worth discussing. The state-dependent MFPT vector $m$ collects the expected passage times from each state $s_i$ to the absorbing state $s_1$ and is computed as in \eqref{state_dep_mfpt}.
\begin{equation}
\begin{aligned}
m=\begin{bmatrix} 0 \\ (\mathbf{I}-\mathbf{\Bar{T}})^{-1}\mathbf{1} \end{bmatrix}
\end{aligned}\label{state_dep_mfpt}
\end{equation}
where $\mathbf{\Bar{T}}$ is the state transition matrix without its first row and first column, $\mathbf{I}$ is the identity, and $\mathbf{1}$ is a vector with all elements equal to 1.
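Both metrics follow directly from the spectrum and structure of $\mathbf{T}$, e.g. (NumPy):
\begin{verbatim}
import numpy as np

def mfpt_metrics(T):
    """System-wide MFPT M = 1/(1 - lambda_2) and state-dependent vector m."""
    lams = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    M = 1.0 / (1.0 - lams[1])             # second-largest magnitude eigenvalue
    Tbar = T[1:, 1:]                      # drop the absorbing row and column
    m_bar = np.linalg.solve(np.eye(Tbar.shape[0]) - Tbar,
                            np.ones(Tbar.shape[0]))
    return M, np.concatenate(([0.0], m_bar))
\end{verbatim}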
Finally, some metastable properties are worth discussing. The metastable neighborhood is the stochastic counterpart of the fixed point of the deterministic return map: it indicates the joint probability of two successive body angular velocity values measured just before impact. The metastable distribution is the stationary distribution over the Markov states given that the robot is not in the absorbing state; it is calculated by replacing the first element of the eigenvector associated with the second-largest magnitude eigenvalue with zero and normalizing it. The metastable neighborhood, i.e., the joint probability, can then be calculated by multiplying the state transition matrix with the metastable distribution.
\section{Introduction}
\label{S:1}
In the fields of naval architecture and ocean and marine engineering, large amounts of data are generated from sources such as ocean wave and current measurements, sea-floor mapping by AUVs, ship manoeuvring, and recorded wind speeds interacting with ships at different locations across the globe. This data potentially represents latent knowledge that can advance our understanding of, and our solutions for, these fields, but its volume and diversity limit manual analysis by human domain experts. Using machine-learning-based techniques, researchers can utilize, interpret, visualize, and analyze this data. This enables the generation of data-driven surrogate models for these physical phenomena. Such predictive models can be utilized for the estimation of wave height and of ship parameters such as container capacity and added mass coefficient.
Machine learning (ML) is a branch of Artificial Intelligence (AI) that focuses on enabling computers to infer models from data and constraints. The steps involved in developing an ML model are data preparation, feature engineering, data modelling, and model evaluation. Data preparation involves the collection of raw data, data cleaning (dealing with missing values and removing outliers), and formatting the raw data so that it can be incorporated into a machine learning model. Feature engineering is the process of converting the raw data into physics-based features that can be correlated with a quantity of interest in a particular field of engineering. One example of a quantity of interest in the field of marine engineering is the drag coefficient of an autonomous underwater vehicle (AUV), which is related to the size of the AUV, the velocity of the flow, the upcoming turbulence intensity, and various other environmental factors. Feature engineering provides adequate data for a machine learning model and can enhance the performance of the model. The next step is data modelling, that is, splitting the data into training and testing sets. In the training process, both the inputs and the quantities of interest (outputs) are provided to the model. The machine learning algorithm maps the input variables to the outputs and gives a target function, which can be used to predict the unknown parameters for other sets of input variables (the testing data).
There are mainly four different types of machine learning techniques: supervised, unsupervised, semi-supervised, and reinforcement learning. In supervised machine learning, the outputs for a given set of inputs are used for training the model, and once the model is trained, it is used for prediction. In unsupervised learning, the outputs for given inputs are unknown. Semi-supervised learning lies between supervised and unsupervised learning, where some samples may have training labels and others may not. In reinforcement learning, the machine is exposed to the environment, where it learns by optimizing its reward. Among all the machine learning techniques, the supervised learning methods are the most widely adopted in the engineering community. In supervised learning, labelled data is used for training, and problems of classification and regression are solved. Popular supervised machine learning algorithms are Linear Regression, Support Vector Machines (SVM), Neural Networks, Decision Trees, Naive Bayes, etc.
In this article we provide a detailed review of the application of ML algorithms in naval architecture and ocean and marine engineering, grouped into the following categories: wave forecasting; AUV operation and control; applications in ship research; design and reliability analysis of breakwaters; detection of damaged mooring lines; applications in propeller research; damage detection of offshore platforms; and a few other miscellaneous applications such as beach classification, condition monitoring of marine machinery systems, performance assessment of shipping operations, autonomous ship hull corrosion cleaning systems, wave energy forecasting, prediction of wind- and wave-induced load effects on floating suspension bridges, and tidal current prediction. Based on this comprehensive analysis, we point out future directions of research that may be fruitful for the application of ML to coastal and marine problems.
\section{Machine learning basics}
Machine learning is the process of finding the associations between the inputs, outputs, and parameters of a system using limited data. The learning process can be summarized as follows \citep{cherkassky2007learning}:
\begin{equation}
\begin{split}
R(w)=\int L[y, \phi(x,y,w)]p(x,y)dxdy
\end{split}
\end{equation}
In the above equation, the data $x$ and $y$ are samples from the probability distribution $p$, the structure of the ML model is defined by $\phi(x,y,w)$, and $w$ are the parameters. The learning objectives are balanced by the loss function $L$. There are three broad categories of ML algorithms: supervised, unsupervised, and semi-supervised.
\subsection{Supervised Learning}
In supervised learning, correct information is available to the ML algorithm. The data utilized for the development of the ML model are labeled data, where labels are available for the output. The unknown parameters of the ML model are determined through minimization of the cost function. Supervised learning corresponds to various regression and interpolation methods. For regression, the loss function of the ML model can be defined simply as the squared error:
\begin{equation}
\begin{split}
L[y, \phi(x,w)]=\left[y-\phi(x,w)\right]^2
\end{split}
\end{equation}
\begin{figure}
\centering
\includegraphics[height=5.4cm]{ml_cl_1.jpg}
\caption{Classification of ML algorithms into supervised, semi-supervised, unsupervised, and reinforcement learning.} \label{fig:1}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=6.2cm]{ML_basics.jpg}
\caption{The learning problem} \label{fig:lp}
\end{figure}
\subsection{Unsupervised Learning}
In unsupervised learning, features are extracted from the data by specifying certain criteria; supervision and ground-truth labels are not required. The problems involved in unsupervised learning are clustering, quantization, and dimensionality reduction. Dimensionality reduction involves methods such as proper orthogonal decomposition, autoencoders, and principal component analysis. In clustering, similar groups in the data are identified; the most common ML algorithm used for clustering is k-means \citep{hartigan1979algorithm}.
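A minimal sketch of the k-means iteration (NumPy; \texttt{X} is an $N\times d$ data matrix and \texttt{k} the number of clusters):
\begin{verbatim}
import numpy as np

def kmeans(X, k, n_iter=100, rng=None):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = np.random.default_rng() if rng is None else rng
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)         # assign each point to its nearest center
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
\end{verbatim}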
\subsection{Semisupervised Learning}
In semisupervised learning, the ML algorithm is partially supervised, either with corrective information from the environment or with limited labeled training data. Two algorithms are mainly used in semi-supervised learning: generative adversarial networks \citep{creswell2018generative} and reinforcement learning \citep{sutton2018reinforcement}.
\section{Popular Machine learning algorithms}
\label{S:2}
\subsection{Artificial Neural Network}
Artificial Neural Networks (ANNs) (fig.\ref{fig:ann}a) are machine learning systems inspired by biological neural networks. Biological neural networks (BNNs) are circuits that carry out a specific task when activated: populations of neurons interconnected by synapses. Similar to BNNs, ANNs have artificial neurons (fig.\ref{fig:ann}b). The most widely used ANN is the multilayer perceptron (MLP), which has one or more hidden layers and has applications in both regression and classification problems. The MLP has an input layer, hidden layers, and an output layer, and it correlates the inputs to the outputs. While passing through the hidden layers, the inputs are multiplied by weights. Each neuron applies an activation function to a weighted sum of its inputs, and the ultimate aim is to reduce the error at the output by optimizing the weights. In every layer of the neural network, the neuron response is given by an activation function applied to a biased weighted sum. For two consecutive layers $[k-1,k]$, the neural network operation can be expressed mathematically as follows:
\begin{equation}
\begin{split}
y_j=f_j\left(\sum_{i=1}^{n}w_{ij}x_{i}+b_{j}\right),\quad i\in[1,n]\wedge j\in[1,m]
\end{split}
\end{equation}
where $n$ and $m$ are the numbers of neurons in the $(k-1)$th and $k$th layers respectively. For neuron $j$, $y_{j}$ is the output, $x_{i}$ is the input signal from neuron $i$, $b_j$ is the bias of the neuron and $w_{ij}$ is the weight associated with the connection between $i$ and $j$.
The MLP can learn from data and optimize the weights. The learning task in deep neural networks (DNNs) is carried out by updating the weights so as to minimize the output error. DNNs employ the back-propagation algorithm in the training process, which is good for both speed and performance. There are several optimization techniques by which the weights can be calculated, such as gradient descent, quasi-Newton methods, stochastic gradient descent and adaptive moment estimation (Adam).
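The layer operation above can be transcribed almost directly into code. The sketch below is a forward pass of a small MLP in numpy, with random (untrained) weights; in practice the weights would be fitted by back-propagation with one of the optimizers mentioned above.
\begin{verbatim}
# A minimal numpy sketch of y_j = f(sum_i w_ij x_i + b_j) per layer.
import numpy as np

def layer_forward(x, W, b, activation=np.tanh):
    # Dense layer: biased weighted sum followed by the activation f.
    return activation(W @ x + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                          # 4 input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer, 8 neurons
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)   # output layer, 1 neuron

hidden = layer_forward(x, W1, b1)
output = layer_forward(hidden, W2, b2, activation=lambda z: z)
print(output)                                   # untrained prediction
\end{verbatim}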
\begin{figure}
\centering
\includegraphics[height=8cm]{nn_1.jpg}
\includegraphics[height=6cm]{neuron_1.jpg}
\caption{Structure of a neural network: a) network architecture, b) an artificial neuron.} \label{fig:ann}
\end{figure}
\subsection{Convolutional Neural Network}
The convolutional neural network (CNN) is a type of neural network mainly used in image processing. The CNN has three main layer types: the convolutional layer, the pooling layer and the fully connected layer. The schematic of a CNN is shown in fig.\ref{fig:2}. The job of the convolutional layer is to extract latent features from an image by applying convolutional filters to the input. The parameters of a CNN are a set of learnable filters, each connected only to a local region of the input volume spatially, but to its full depth. The convolution operation can be presented as follows:
\begin{equation}
\begin{split}
x^l_{i,j}=g(x^{l-1}*W^l)_{i,j}=g(\sum_m\sum_n x_{{i+m},{j+n}}^{l-1}w^l_{m,n})
\end{split}
\end{equation}
In the above equation, $x^l_{i,j}$ is the $(i,j)$th value of the $l$th layer, $w^l_{m,n}$ is the $(m,n)$th weight of the convolution filter in the $l$th layer, and $g$ is the non-linear activation function. Suppose the $(l-1)$th layer has dimensions W (width), H (height) and C (channels), and the $l$th layer convolutional filter has dimensions F (width), F (height) and C (channels); then the $l$th layer can be derived by applying the nonlinear activation function $g$ to the convolution operation. The function of the nonlinear activation is to model the nonlinear relationship between subsequent layers.
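The convolution equation above can be implemented directly, as in the single-channel numpy sketch below (no padding, unit stride, ReLU as the nonlinearity $g$); deep learning libraries provide optimized multi-channel versions of the same operation.
\begin{verbatim}
# Transcription of x^l_{i,j} = g(sum_m sum_n x^{l-1}_{i+m,j+n} w_{m,n}).
import numpy as np

def conv2d_single_channel(x, w, g=lambda z: np.maximum(z, 0.0)):
    H, W = x.shape
    F = w.shape[0]
    out = np.empty((H - F + 1, W - F + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + F, j:j + F] * w)
    return g(out)

x = np.arange(25, dtype=float).reshape(5, 5)  # toy 5x5 "image"
w = np.ones((3, 3)) / 9.0                     # 3x3 averaging filter
print(conv2d_single_channel(x, w).shape)      # (3, 3) feature map
\end{verbatim}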
\subsection{Recurrent neural network}
Recurrent neural networks (RNN) are used for processing sequential data. RNNs can also process data with much longer sequences and sequences of variable length. RNNs can be designed in different ways, as discussed in \cite{goodfellow2016deep}: a) an RNN that generates an output at every time step and has recurrent connections between the hidden units, b) an RNN that generates an output at every time step and has recurrent connectivity only from the output at one time step to the hidden units at the next time step, and c) an RNN with recurrent connectivity among the hidden units that reads an entire sequence and produces a single output.
\subsubsection{Long short-term memory network}
Long short-term memory (LSTM) networks are gated RNNs. These networks are based on the idea of generating paths through time along which gradients neither vanish nor explode. In the LSTM, the gradients can flow for long durations through self loops. By gating the weight of the self loop, the time scale of integration can be changed dynamically. The LSTM has successful applications in handwriting recognition, speech recognition, handwriting generation and machine translation.
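A minimal Keras sketch of an LSTM regressor is given below, assuming TensorFlow is available; the sequence length, feature count and layer sizes are illustrative only and the training data is synthetic.
\begin{verbatim}
# A minimal LSTM mapping sequences of 10 steps x 3 features to 1 output.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(10, 3)),  # gated recurrent cell
    tf.keras.layers.Dense(1),                       # regression output
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(100, 10, 3)  # synthetic sequences
y = np.random.rand(100, 1)
model.fit(x, y, epochs=2, verbose=0)
\end{verbatim}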
\begin{figure}
\centering
\includegraphics[height=10cm]{lstm.jpg}
\caption{Block diagram of the LSTM network cell; the cells are connected recurrently to each other \citep{goodfellow2016deep}. \label{fig:lstm}}
\end{figure}
\subsection{Random Forest}
The basic building block of a random forest is the decision tree. The structure of the decision tree is shown in fig. \ref{fig:3}a. The boxes in the decision tree represent groups of features and data. The decision tree seeks if/then/else rules for obtaining the desired output. Random forest (RF) regression combines the outputs of multiple decision trees to predict an output variable. It is an ensemble learning technique that works on the concept of bagging. In RF regression, trees are constructed using subsets of random samples drawn with replacement from the training data. The RF algorithm has the following steps (a minimal sketch is given after this list):
a) Drawing bootstrap samples, one for each tree, from the original data.
b) Growing an unpruned regression tree for each bootstrap sample; at each node, randomly sampling a subset of the predictors and choosing the best split among those variables.
c) Predicting the output for test data by averaging the predictions from all the trees.
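The scikit-learn sketch below illustrates these three steps on synthetic data; the number of trees and the feature-sampling rule are illustrative hyper-parameter choices, not recommendations.
\begin{verbatim}
# A minimal random forest regression sketch (bagging over trees).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 3))  # 3 predictor variables
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=200)

# n_estimators bootstrap trees; a random subset of predictors per split.
rf = RandomForestRegressor(n_estimators=100, max_features="sqrt",
                           random_state=0).fit(X, y)
print(rf.predict(X[:3]))  # averages of the per-tree predictions
\end{verbatim}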
\subsection{Support Vector Machine}
Support vector machines (SVM) are based on the principle of structural risk minimization and are used for recognizing subtle patterns in complex datasets. This machine learning methodology was initially developed for solving classification problems and was later extended towards solving regression problems. Support vector regression (SVR) employs kernel functions to map the initial data into a high dimensional space such that nonlinear patterns can be converted into a linear problem. The performance of the SVM regressor is largely dependent on the choice of kernel function. Four main types of kernel functions are available: linear, polynomial, sigmoid and radial basis function. The hyper-parameters of the SVM regressor must be chosen carefully for efficient performance of the model; an inappropriate choice of hyper-parameters would lead to under/over fitting. The two main hyper-parameters of the SVM regression model are the $\epsilon$-insensitive zone and the regularization parameter C. Deviations larger than $\epsilon$ are penalized through the regularization parameter C. The penalty becomes more important for higher values of C, and the SVR fits the data more closely; the penalty becomes negligible for small values of C, and the SVR gets flat. For higher values of $\epsilon$ the SVR becomes flat, and for smaller values of $\epsilon$ it fits the data.
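The scikit-learn sketch below shows where these two hyper-parameters enter an SVR fit; the data and the chosen values of C and $\epsilon$ are illustrative only.
\begin{verbatim}
# A minimal SVR sketch with an RBF kernel, showing C and epsilon.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 5, (80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
print(svr.predict(X[:3]))
\end{verbatim}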
\begin{figure}
\centering
\includegraphics[height=4.2cm]{cnn.jpg}\\
\caption{Architecture of CNN} \label{fig:2}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=6.2cm]{RF.jpg}\\
\caption{Architecture of a random forest: a) a single decision tree, b) a random forest.} \label{fig:3}
\end{figure}
\section{Application of Machine learning in the marine environment} \label{S:3}
\subsection{Wave forecasting}
Waves are generated by the complex interaction of wind with the ocean, and the process by which ocean waves are generated is not fully understood; it is extremely uncertain and complex. Information about wave heights at different locations is essential for operational activities in the ocean. Most works related to operational activities in the ocean are carried out with wave heights estimated over a period of some hours or days. Traditionally, deterministic models are used for the prediction of wave heights and wave periods \citep{sandhya2014wave}. Ocean waves can both be forecasted and hindcasted using different physics-based approaches: waves can be forecasted using meteorological conditions and hindcasted with different meteorological charts. In wave forecasting, differential equations for the wind-wave relationship and wave energy are solved numerically.
The methods using differential equations generally predict wave heights for a period of 6-72 hours. The cost of numerical simulations using differential equations is very high, the simulations are time consuming, and the numerical predictions are always associated with uncertainties. The uncertainties appear in the prediction results because of the approximations utilized in the model development.
Because of the limitations of the traditional methods of wave height prediction and the rapid development of ML methods, researchers have now started utilizing various ML algorithms \citep{mafi2017forecasting,etemad2009comparison,deka2012discrete,wang2018bp,law2020deterministic,oliveira2020impact,deshmukh2016neural} such as neural networks \citep{deo2001neural,deo1998real,sahoo2019prediction}, recurrent neural networks \citep{mandal2006ocean} and random forests for the prediction of wave heights.
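To make the regression setting of these studies concrete, the sketch below fits a small feed forward network mapping wind speed to significant wave height; the synthetic wind-wave relation used here is purely illustrative and does not reproduce any cited model.
\begin{verbatim}
# A hypothetical wave-height regression: wind speed -> significant Hs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
wind_speed = rng.uniform(2, 20, (500, 1))  # m/s, synthetic
hs = 0.02 * wind_speed.ravel() ** 2 + 0.2 * rng.normal(size=500)  # m

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(wind_speed, hs)
print(model.predict([[10.0]]))  # predicted Hs at 10 m/s wind
\end{verbatim}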
\begin{figure}
\centering
\includegraphics[height=5cm]{wh.jpg}
\includegraphics[height=5cm]{wp.jpg}
\caption{Neural network prediction of wave heights and periods \citep{deo2001neural}.} \label{fig:deo}
\end{figure}
\cite{deo2001neural} used a simple 3-layered feed forward neural network to predict the significant wave height and wave period. They considered the wind speed as input to the neural network for March 1988 to July 1988 and further from December 1988 to May 1989. The location of the data collection point was off Karwar in India. The ANN prediction of wave height is shown in fig. \ref{fig:deo}. \cite{tsai2009wave} used an ANN to predict the significant wave height, significant wave period, maximum wave height and spectral peakedness parameter. The input parameters of the neural network are: the average of the highest one-third pressure wave heights, the corresponding pressure wave period, the average pressure zero-crossing period, the maximum pressure wave height, the average pressure wave height, the root mean square pressure wave height, the average of the one-tenth highest pressure wave heights, the successive pressure wave height correlation and the pressure spectral peakedness parameter. The detailed definitions of the above parameters are available in \citep{tsai2009wave}. The ANN model was trained using data obtained from stations at depths ranging from 11 to 41 m. The predicted results were compared with results obtained from linear theory; the ANN model predicted better results than linear theory for water depths between 20 and 41 m.
\cite{rao2005hindcasting} used neural networks to estimate wave parameters from cyclone generated wind fields. They considered 11 cyclones, which crossed the southern east coast of India, in their studies. The inputs to the neural network are the difference between central and peripheral pressure, the radius of maximum wind and the speed of forward motion of the cyclone. The outputs of the neural network are wave heights and periods. They considered a feed forward neural network with the back propagation algorithm in their modelling. The NN model predictions were contrasted against other established wave hindcasting models, and very good correlation between the NN and physics-based model results was observed.
\cite{oh2018real} used wavelet and neural network hybrid models for real-time forecasting of wave heights. They developed the hybrid model by combining empirical orthogonal function analysis and wavelet analysis with the neural network, and used wave height data at different locations and meteorological data in the surrounding area for training the ANN model. Their model is useful for the prediction of wave heights where past wave height and meteorological data are available. \cite{doong2018development} used ANN based models to predict the occurrence of coastal freak waves. An actual picture of a coastal freak wave is shown in fig. \ref{fig:4}. These waves are generated by the interaction of waves with various coastal structures such as rocks. They used seven parameters (significant wave height, peak period, wind speed, wave groupiness factor, Benjamin-Feir Index (BFI), kurtosis, and wind-wave direction misalignment) for training their model. They used a single hidden layer neural network with the back propagation algorithm. The field data used in their study were collected from the Longdong data buoy (Central Weather Bureau of Taiwan). The buoy was at a location 1 km off the Longdong coast, where the water depth was 23 m. \cite{choi2020real} used deep neural networks to estimate significant wave height using raw ocean images (fig. \ref{fig:ocean_i}). A CNN based classification model was constructed with four CNN structures. Their method of wave height prediction had two steps: a) the best CNN model for ocean image processing was found among VGGNet, Inception.v3, ResNet and DenseNet; b) the model performance was improved using transfer learning and structure modification. It was observed that the VGGNet and ResNet based models with transfer learning and various feature extractors yielded good performance in significant wave height modeling.
Traditionally, storm surges are predicted using fluid dynamics methods (e.g. the finite difference method) that utilize a large number of equations, and the simulations are time consuming. \cite{rajasekaran2008support} applied support vector regression to forecast typhoon surge. The typhoon surge must be accurately predicted to avoid property loss. For the development of the ML model, the input data used are pressure, wind velocity, wind direction and the estimated astronomical tide, and the output is the storm surge level. The ML model developed with support vector regression was verified using original data collected at the Longdong station in Taiwan for the Aere typhoon. The location of Longdong harbour is shown in fig. \ref{fig:taiwan}.
\cite{malekmohamadi2011evaluating} evaluated the efficacy of support vector machines, Bayesian networks and artificial neural networks in predicting the wave height. The data required for training the models were collected at a buoy station on Lake Superior. They noticed that the ANN prediction of the wave height matched well with the observational data and was better than the predictions of the other ML models.
\begin{figure}
\centering
\includegraphics[height=8cm]{ocean_images.jpg}
\caption{Raw ocean images \citep{choi2020real}.} \label{fig:ocean_i}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=8cm]{freak.jpg}\\
\caption{An actual picture of a coastal freak wave \citep{doong2018development}.} \label{fig:4}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=9cm]{Taiwan.jpg}
\caption{Locations of Longdong harbour, Taiwan \citep{rajasekaran2008support}.} \label{fig:taiwan}
\end{figure}
\subsection{AUV operation and control}
An AUV (autonomous underwater vehicle) is a self propelled unmanned vehicle that can conduct various activities in the deepest corners of the ocean or near the free surface \citep{tyagi2006calculation}, without any human supervision. Typical applications of AUVs include sea floor mapping for the construction of offshore structures, characterization of the physical, chemical and biological properties of the ocean, oceanographic applications and many more. Since there is minimal human supervision during AUV operation, the design of an accurate control system plays a major role in the design of the AUV. These vehicles are robotic devices with their own propulsion system for navigation and an onboard computer for decision making \citep{SAHOO2019145}.
Machine learning techniques have various applications in the design of AUV control systems. \cite{zhang2020neural} used neural networks for modelling an adaptive trajectory tracking control scheme for underactuated autonomous underwater vehicles. The AUV was subjected to unknown asymmetrical actuator saturation and unknown dynamics. The AUV kinematic controller was designed by using neural network compensation and adaptive estimation techniques. The neural network was used to approximate the complex AUV hydrodynamics and the differentials of the desired tracking velocities. The stability of the NN model was tested using Lyapunov theory and the backstepping technique. \cite{zhang2018master} proposed a novel bilateral adaptive control scheme for achieving position and force coordination performance of an underwater manipulator teleoperation system under model uncertainty and external disturbance. A new nonlinear model reference adaptive impedance controller with a bound-gain-forgetting (BGF) composite adaptive law was designed for the master manipulator force tracking of the slave manipulator. They used a radial basis function neural network (RBFNN) for local approximation of the slave manipulator's position tracking. The RBFNN, based on the Ge-Lee (GL) matrix, is adopted to directly approximate each element of the slave manipulator dynamics, and a robust term with a proper update law is designed to suppress the error between the estimated model and the real model, as well as the external disturbance. \cite{gao2017sliding} proposed a hybrid visual servo (HVS) controller for underwater vehicles using dynamic inversion based sliding mode adaptive neural network control. The method was developed for tracking the HVS reference trajectory generated from a constant target pose. The dynamic uncertainties were compensated using a single layer feed forward neural network with an adaptive sliding mode controller. The proposed control system is composed of a sliding mode controller combined with a neural network compensator, employed to construct a pseudo control signal required to track a smooth reference trajectory generated by a target pose through a reference model. A dynamic inversion module was also incorporated in the control system to convert the pseudo control into actual thruster control signals by using an approximate dynamic model of the underwater vehicle. The schematic of the proposed control system architecture is shown in fig.\ref{fig:5}.
\begin{figure}
\centering
\includegraphics[height=5cm]{auv_control.jpg}\\
\caption{Schematic of the DI-SMANNC for hybrid visual servoing of an underwater vehicle \citep{gao2017sliding}.} \label{fig:5}
\end{figure}
\cite{chu2016observer} proposed an observer based adaptive neural network control approach for a class of remotely operated vehicles whose velocity and angular velocity states in the body fixed frame are unmeasured. The thruster control signal was considered as the input to the tracking control system. An adaptive state observer based on a local recurrent neural network was proposed to estimate the velocity and angular velocity states online. The adaptive learning method was also used to estimate the scale factor of the thrust model.
\subsection{Machine learning applied in ship research}
\subsubsection{Estimation of ship parameters}
\cite{ray1996neural} utilized neural networks to predict the container capacity of a container ship. They used a database of previously built ships to predict the number of containers which can be accommodated on a vessel and to design a container stowage plan. The inputs to the neural network are the length, breadth, depth, deadweight and speed of the vessel, and the output is the container capacity (number of containers).
\cite{margari2018use} used artificial neural networks to predict the resistance of hullforms. The hullforms are sixteen in number, and those were mainly designed to be used as bulk carriers and tankers by the U.S. Maritime Administration. Experimental data for the residual resistance coefficient were used to train and test a series of neural networks. They considered a feed forward multilayer perceptron in their analysis. The input vector to the neural network consists of the length to breadth ratio, the breadth to draft ratio, the block coefficient and the Froude number, and the output is the residual resistance coefficient.
\cite{cepowski2020prediction} predicted the added resistance of a ship in regular head waves. Experimental data collected from model test measurements were used to train the neural network model. The predicted added resistance has applications in the preliminary design stage. The function for the added resistance coefficient was approximated as:
\begin{equation}
\begin{split}
C_{AW}=f(LBP,B,d,CB,Fn,\lambda/LBP)
\end{split}
\end{equation}
where LBP is the length between perpendiculars, B is the breadth, d is the draught, CB is the block coefficient, Fn is the Froude number, $\lambda$ is the wave length and f is the function for the prediction of the ship added resistance coefficient.
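A hedged sketch of approximating such a function with a neural network is given below; the training matrix is random placeholder data standing in for the normalized design variables, not the model test measurements used by \cite{cepowski2020prediction}.
\begin{verbatim}
# Sketch: fit C_AW = f(LBP, B, d, CB, Fn, lambda/LBP) with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (300, 6))      # normalized design variables
c_aw = X @ rng.uniform(0.1, 1.0, 6)  # placeholder target, not physics

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000,
                     random_state=0).fit(X, c_aw)
print(model.predict(X[:2]))
\end{verbatim}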
\cite{zhang2013estimation} used support vector machines for the estimation of hydrodynamic coefficients in the mathematical models of ship manoeuvring motion from captive model test results. The towing test and pure sway test were also considered in the model testing. The comparison of the test data and SVM-predicted results suggests that the SVM is an effective method for predicting hydrodynamic coefficients from captive model tests. They also noticed that the SVM model predictions are comparatively poor for polluted test data and uncertainties in the hydrodynamic models. The efficacy of the SVM with polluted test data can be enhanced by treating the test data with de-noising approaches. \cite{xu2019hydrodynamic} used a least squares support vector machine for the estimation of the parameters of a nonlinear manoeuvring model in shallow water.
\subsubsection{Automatic ship docking}
\cite{shuai2019efficient} proposed an ANN based approach for automatic ship docking in the presence of environmental disturbances (fig. \ref{fig:ann_asd}). The data required for ANN training were obtained by having a skilled captain operate the ship using a joystick to control the ship's rudder and thrust. In the ANN based controller model, the inputs are parameters selected from data analysis (the state of the vessel and environmental information) and the outputs are the ship's propeller speed and rudder angle. The vessel's state comprises its heading and position, the speed of the vessel and the forces on the vessel in the different degrees of freedom; the environmental information comprises the force of wind in each degree of freedom, the wind speed and the wind direction. They used the gradient descent method to minimize the mean square error between the required output value and the actual output value of the neural network. Based on a sensitivity analysis, the optimal parameters for the ANN input were chosen as the relative distance between ship and dock and the heading angle. The outputs of the ANN are the rudder angle and thruster speed.
\begin{figure}
\centering
\includegraphics[height=5cm]{shipmodel.jpg}\\
\caption{Schematic of the ANN based control strategy for automated ship docking \citep{shuai2019efficient}.} \label{fig:ann_asd}
\end{figure}
\subsubsection{Ship manoeuvring simulation}
\cite{luo2014manoeuvring} used support vector regression for the manoeuvring simulation of a catamaran. Implicit models are derived for the manoeuvring motion, instead of the traditional method of calculating hydrodynamic coefficients. For the development of the SVM regressor model, data obtained from full scale trials were used. The effects of wind and current induced disturbances were also considered in the model development. The inputs to the SVM model were the rudder angle, surge speed, yaw rate and sway speed, and the outputs were the derivatives of the surge speed, sway speed and yaw rate respectively. They utilized a Gaussian kernel function in the SVM to improve the performance of the approximation.
\begin{figure}
\centering
\includegraphics[height=8cm]{k_parameter.jpg}
\caption{Kinematic parameters in ship manoeuvring simulation; detailed definitions of these parameters are available in \cite{luo2014manoeuvring}.} \label{fig:k_parameter}
\end{figure}
\subsubsection{Ship trajectory prediction}
\cite{tang2019model} used a Long Short-Term Memory (LSTM) network to model and predict the trajectories of vessels. Ground truth automatic identification system data from the Tianjin port of China were used to train and test the ML model. The inputs to the LSTM model are the geographical location, speed, heading and other status information, and the output is the status of the ship at a future moment. It was observed that the predicted trajectories are better than those of the traditional Kalman filter model. The predicted trajectory of the ship is shown in fig. \ref{fig:trajectory}. In the figure, the first 10 minutes of trajectory data were used to train the LSTM model, and the trajectory from the 10th to the 20th minute was predicted by the LSTM model.
\begin{figure}
\centering
\includegraphics[height=10cm]{ship_trajectory.jpg}
\caption{Trajectory prediction of the ship \citep{tang2019model}.} \label{fig:trajectory}
\end{figure}
\cite{volkova2021predicting} applied artificial neural networks (ANN) to predict the trajectory of a ship. AIS data were used for the ANN model development. This method of trajectory planning is mainly useful for river vessels and river-sea vessels, when the vessel is near a hydraulic structure and there are problems in obtaining satellite signals because of interference. The likelihood of collision can be decreased with the application of ML based trajectory prediction models.
\subsubsection{Calculation of wind load on ship}
The wind load on ships is an important parameter that needs to be calculated accurately, because it is directly correlated with the analysis of ship stability, maneuvering, station keeping and ship speed estimation. \cite{valvcic2016hybrid} developed a radial basis function neural network model using elliptic Fourier features of closed contours and wind load data collected for three types of ships: car carriers, container ships and offshore supply vessels. The trained neural network was employed for the prediction of non-dimensional wind load coefficients.
\subsubsection{Shaft power prediction of large merchant ships}
\cite{parkes2018physics} used neural networks to predict the shaft power of large merchant ships. The data required for training the neural network model were obtained from a data-set of 27 months of continuously monitored data, sampled every 5 minutes, from three vessels of the same design (sample data is shown in fig.\ref{fig:shaft power}, reproduced from \cite{parkes2018physics}). The data consist of vessel movements recorded over varied geographic locations and weather conditions. The variables used in training the neural network model are the GPS ship speed, ship speed through water, true wind speed, apparent wind direction, draught, heading, trim and wave height. The above mentioned parameters/features were used as inputs to the neural network model, and the output of the neural network is the shaft power. The shaft power is the product of the shaft torque and the angular velocity; from an accurate measurement of the shaft power, the engine efficiency can be calculated.
\begin{figure}
\centering
\includegraphics[height=8cm]{shaft_power.jpg}
\caption{Box-plot of shaft power vs GPS speed \citep{parkes2018physics}.} \label{fig:shaft power}
\end{figure}
\subsubsection{Prediction of fuel consumption of ship main engine}
\cite{gkerekos2019machine} predicted the fuel consumption of the main ship engine using different machine learning algorithms: support vector machines, random forest regressors, tree regressors, ensemble methods and artificial neural networks, and mainly compared the predictive capabilities of these models. Two different shipboard data-sets, collected using two different data collection strategies (noon reports and automated data logging and monitoring), were used for the model development. The ML models developed using the different algorithms were found to accurately predict the fuel consumption under different weather conditions, load conditions, sailing distances, drafts and speeds. \cite{farag2020development} used ANN and multi-regression techniques to estimate the ship power and fuel consumption. The data used for the ML model development were obtained from \cite{farag2017decision}; the data-set consists of data from two sea voyages collected at different loading conditions. \cite{gkerekos2020novel} also used deep neural networks to develop a fuel consumption model and used that model in the route optimisation process. \cite{karagiannidis2021data} used feed forward neural networks to predict the ship fuel consumption; the effect of data preprocessing on the model's predictive accuracy was mainly analyzed. \cite{yuan2021prediction} used an LSTM to model the real time fuel consumption of vessels and utilized the ML model to optimize the fuel consumption of an inland vessel using the Reduced Space Searching Algorithm.
\begin{figure}
\centering
\includegraphics[height=12cm]{fuel_ann.png}
\caption{ANN based model framework for fuel consumption prediction \citep{farag2020development}.} \label{fig:fuel_ann}
\end{figure}
\subsubsection{Ship collision avoidance}
\cite{gao2020ship} used a generative adversarial network (GAN) to generate appropriate anthropomorphic collision avoidance decisions and bypass the process of ship collision risk assessment based on the quantification of a series of functions. An LSTM cell was combined with the GAN to improve the memory capacity and the current availability of the overall system. The data required for the development of the ML model were obtained from ship encounter azimuth maps; the data-set has 12 types of ship encounter modes from automatic identification system (AIS) big data \citep{yang2021can}. The proposed ML model can be applied in intelligent collision avoidance, route planning and operational efficiency estimation.
\subsection{Design and reliability analysis of breakwaters}
Breakwaters are constructed in ports and harbors to protect an anchorage from the erosion caused by the harsh wave climate and longshore drift. Breakwaters reduce the intensity of wave action in the ocean \citep{panduranga2021surface,kaligatla2021wave,vijay2020scattering}.
\cite{kim2005neural} used artificial neural networks for the design and reliability analysis of rubble mound breakwaters. The inputs to the neural networks are chosen by taking different combinations of the variables $P$, $N_w$, $S_d$, $\zeta_m$, $cos\theta$, $h/H_s$, $SS$, $h/L_s$ and $T_s$, where $h/H_s$ is the water depth parameter, $L_s$ is the significant wave length, $H_s$ is the significant wave height, $P$ is the permeability of the breakwater, $N_w$ is the number of wave attacks, and the other parameters have their usual meanings as used in breakwater reliability analysis \citep{kim2005neural}. Based on different input combinations, they used five neural networks to model the stability of the breakwater, and the predictions of the neural networks were compared against the conventional empirical model of \cite{van1990rock}. The stability model of \cite{van1990rock} has the form:
\begin{equation}
\begin{split}
N_{s}=6.2P^{0.18}(\frac{S_d}{\sqrt{N_w}})^{0.2} \frac{1}{\sqrt{\zeta_m}}
\end{split}
\end{equation}
for $\zeta_m<\zeta_c$.
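This stability formula transcribes directly into code; the sketch below evaluates it for illustrative parameter values only.
\begin{verbatim}
# Stability number N_s = 6.2 P^0.18 (S_d / sqrt(N_w))^0.2 / sqrt(zeta_m),
# valid for zeta_m < zeta_c, following the formula above.
def stability_number(P, S_d, N_w, zeta_m):
    return 6.2 * P**0.18 * (S_d / N_w**0.5)**0.2 / zeta_m**0.5

# Illustrative values: permeability 0.4, damage level 2, 3000 waves.
print(stability_number(P=0.4, S_d=2.0, N_w=3000, zeta_m=3.0))
\end{verbatim}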
\cite{kim2014artificial} used an ANN to estimate the damage to breakwaters considering tidal level variations. They employed the wave height prediction neural network within a Monte Carlo simulation. The ANN was used to predict the wave height in front of the breakwater; the inputs to the ANN are the deep sea wave height and the tidal level in front of the breakwater, and the significant wave height is the output of the ANN.
\subsection{Detection of damaged mooring lines}
\cite{chung2020detection} used an ANN to detect damaged mooring lines in a tension leg platform through pattern analysis of the floater responses. They used numerical simulation data (time series data of the environment and floater responses) from charm3D for training and testing of the neural network. The environmental data are related to wave and wind, while the floater response data are related to the six degrees of freedom: surge, sway, heave, roll, pitch and yaw. They considered an ANN with five hidden layers in their modeling, with the numbers of nodes per layer as follows: (16, 100, 300, 500, 300, 100, 9), where 16 and 9 correspond to the input and output layers and the other numbers indicate the numbers of neurons in the respective hidden layers. Their ANN model was a classification network, whose job was to detect which mooring line is damaged by assigning an explicit label to it. \cite{aqdam2018health} used radial basis function neural networks to detect faults in mooring lines. The effects of uncertainties in the modeling (material, boundary and measurement uncertainties, hydrodynamic effects) were considered in developing the ML models.
\subsection{Machine learning applied in propeller research}
Propellers are used in ships and submarines to create the thrust that propels the vehicle \citep{prabhu2017fluid,kumar2017measurement,nandy2018heuristic}. The blades are designed in such a manner that their rotational movement through the fluid generates a pressure difference between the two surfaces of the blades. The propellers used in marine engineering applications are mostly screw propellers with helical blades rotating on a propeller shaft. \cite{mahmoodi2019prediction} used gene expression programming (GEP) to evaluate the hydrodynamic performance and cavitation volume of a marine propeller under various geometrical and physical conditions. CFD data of propeller thrust, torque and cavitation volume at different rake angles, pitch ratios, skew angles, advance velocity ratios and cavitation numbers were utilized in the development of the GEP model. Mathematical expressions were developed for the torque, thrust and cavitation volume in terms of the physical and geometrical parameters. \cite{shora2018using} used ANNs to predict the hydrodynamic performance and cavitation volume of propellers at different operating conditions. The data utilized in training and testing the ANN model were obtained from CFD simulations of the flow past the propeller with varied geometrical and physical parameters. The input variables to the neural network are the rake angle, skew angle, pitch ratio, advance ratio and cavitation number, and the output variables are the propeller thrust, propeller torque and cavitation volume. They generated 180 different data-sets by varying the input parameters. For the different outputs, they considered different configurations of the ANN to obtain the minimum mean squared error. They considered feed-forward, back-propagation ANNs in their simulations. Their ANN models achieved very good prediction accuracy (greater than 0.99).
\cite{kim2021study} used convolutional neural networks to study the risk of propeller cavitation erosion. The CNN model was trained using various ship model test results of cavitation characteristics. Three types of CNN were used for the ML model development: VGG, GoogleNet and ResNet.
\cite{ryan2013determining} used an ANN to model propeller induced erosion alongside quay walls in shallow water harbours and assessed the predictive capability of the ANN by contrasting its results with other regression based models. The structure of the ANN used in modeling the propeller induced erosion is shown in fig. \ref{fig:propeller}. The ANN has five inputs, one output and six hidden layers. The input parameters to the neural network are the clearance distance from the propeller tip to the bed, the propeller diameter, the distance to the quay wall, the rudder angle and the densimetric Froude number, and the output of the neural network is the depth of maximum scour at a particular time instant. The data required for the ANN model were obtained from a deep rectangular GRP-lined plywood tank, using two open propellers.
\begin{figure}
\centering
\includegraphics[height=8cm]{prop.jpg}\\
\caption{Neural network used in modeling of the propeller induced erosion \citep{ryan2013determining}.} \label{fig:propeller}
\end{figure}
\subsection{Damage detection of offshore platforms}
Offshore platforms are large structures installed in deep seas with facilities for drilling wells to explore the natural gas and petroleum that lie in the seabed. They can be damaged during their service life because of the complex marine environment and human factors. In order to ensure the safety of marine operations, structural health monitoring techniques, such as vibration based damage detection, must be employed. The vibration based damage detection method can identify the presence, location and severity of damage to structures. \cite{bao2021one} used one-dimensional convolutional networks to detect damage sensitive features automatically from an offshore platform using raw strain response data. The CNN model was validated using numerical simulations of a jacket-type offshore platform for random and regular wave excitation in different directions. Different damage locations and the noise effect were considered for finding the damage localization and damage severity. The feature extraction capability of the CNN was enhanced using a data pre-processing procedure based on convolution and deconvolution for noisy data. The CNN model developed was tested for three different cases: an offshore platform subjected to a sinusoidal excitation, a white noise excitation and an impulse excitation. Basically, the model of \cite{bao2021one} is an extension of the work of \cite{abdeljaber2017real}, in which a CNN was used to detect the damage of a grandstand simulator at Qatar University.
\subsection{Miscellaneous applications in the marine environment}
\subsubsection{Beach classification}
\cite{lopez2015morphological} applied SVM and ANN to classify nine different types of beaches, mainly micro-tidal sand and gravel beaches. The beach types are a) sand and gravel beaches, b) sand and gravel separated beaches, c) gravel and sand separated beaches, d) gravel and sand beaches, e) pure gravel beaches, f) supported sand beaches, g) open sand beaches, h) bi-supported sand beaches, and i) enclosed beaches. The ML model results were compared with the results of discriminant analysis. The 14 variables used in the classification model development are: modality, $D_{50}$, $D_{10}$, $D_{90}$, source, breaking wave height perpendicular to the beach, frequency, direction associated with the wave height perpendicular to the beach, profile length, profile type, slope of the berm, distance to the source and Posidonia depth. The above mentioned terms have their usual definitions, which are available in \cite{lopez2015morphological}. The ML models developed with the above 14 variables were optimized by varying the number of neurons in the hidden layers, and the SVM was modelled with different kernels: linear, polynomial, radial basis function and sigmoid.
\begin{figure}
\centering
\includegraphics[height=12cm]{beaches.jpg}\\
\caption{Types of sandy beaches. (A) Open sand beach. (B) Supported sand beach. (C) Bi-supported sand beach. (D) Enclosed sand beach \citep{lopez2015morphological}.} \label{fig:beaches}
\end{figure}
\subsubsection{Condition monitoring of marine machinery system}
Condition based maintenance is an advanced data-driven method of machine maintenance, in which historical data collected by the shipboard monitoring system are utilized by an intelligent analysis tool to guide the planned maintenance. This makes machine maintenance work more scientific, systematic and planned. \cite{tan2020comparative} used one-class classification technology, which needs only one class of samples to train the model. Six different classifiers were used in the modeling of the condition monitoring system: the one-class support vector machine, support vector data description, global k-nearest neighbors, local outlier factor, isolation forest, and angle-based outlier detection. In the development of the ML models, a data-set of a marine gas turbine propulsion system was used.
\begin{figure}
\centering
\includegraphics[height=6cm]{svm_classifier.jpg}\\
\caption{Schematic of the support vector machine classification. A non-linear mapping was used to map the observations into a higher dimensional space \cite{pagoropoulos2017applying}.} \label{fig:svm}
\end{figure}
\subsubsection{Performance assessment of shipping operations}
Energy efficient operations can lead to reduced fuel consumption and a reduction in environmental pollution. Improvements in energy use efficiency can be achieved both by technical upgrades and through behavioural changes of the on board crew members. \cite{pagoropoulos2017applying} used multi-class support vector machines to identify the presence of energy efficient operations. The support vector machine utilized a radial basis function kernel, which facilitates the adaptive modeling of the interface between the classes and thus significantly improves classification performance, as shown in fig.\ref{fig:svm}. The data required for developing the ML model were obtained through discussions with senior officers and technical superintendents (mainly, the positive and negative patterns of energy efficient operations were identified). The main source of data utilized in the model development was noon reports \citep{poulsen2016logic}; based on the noon reports, the energy consumption data were divided per consumer and covered the auxiliary machine parts used for the generation of electricity, boilers, the main engine, pumps and incinerators.
\subsubsection{Autonomous ship hull corrosion cleaning system}
Ships can be operated smoothly when their hulls are cleaned in shipyards. \cite{le2021reinforcement} used an autonomous system based on reinforcement learning \citep{sutton2018reinforcement} to remove the corrosion of a ship by water blasting. The cleaning of ships by autonomous robotic systems ensures reduced consumption of water, time and energy in contrast to manual cleaning. \cite{le2021reinforcement} developed a water blasting framework for a novel robot platform, which can navigate smoothly on a vertical plane and is designed with the adhesion mechanism of a permanent magnet. In order to ensure the shortest travel distance and time, and so save resources used in the cleaning process, the complete way-point path planning is modeled as a classic travelling salesman problem. The level of corroded area remaining after water blasting was assessed using deep convolutional neural networks. A detailed discussion of the operation strategy of the robot is available in \cite{le2021reinforcement}. The block diagram of the closed loop optimal automated water blasting is shown in fig.\ref{fig:robot}.
\begin{figure}
\centering
\includegraphics[height=4cm]{robot.jpg}
\caption{The block diagram of close loop optimal automated water blasting \cite{le2021reinforcement}.} \label{fig:robot}
\end{figure}
\subsubsection{Wave energy forecasting}
Wave energy is a promising source of renewable energy. \cite{bento2021ocean} employed artificial neural networks to predict the wave energy flux and other wave parameters. The neural network model was optimized using moth-flame optimization, and the proposed model was assessed using 13 different data-sets collected from locations across the Atlantic, the Pacific coast and the Gulf of Mexico. \cite{mousavi2021providing} used an LSTM to forecast the power generation of a wave energy converter. An effective forecast of wave energy will lead to a reduced cost of investment in the construction of the device, and it is also essential for the operation and management of electric power. They used both experimental and numerical data for training and testing of the model: the experimental data were taken from \cite{he2020coherence}, and the numerical data were obtained from numerical simulations using the Flow-3D software. In their study, an ML model was developed to correlate the wave height with the generated electric power. \cite{vieira2021novel} developed a novel, time efficient approach to calibrate VARANS-VOF models for the simulation of wave interaction with porous structures using artificial neural networks. These methods are useful for reducing the time consumed in predicting wave parameters with traditional fluid dynamics methods.
\subsubsection{Prediction of wind and wave induced load effects on floating suspension bridges}
\cite{xu2020efficient} used ANN and SVM to predict the long-term extreme load effects on floating suspension bridges. They used the ML models as surrogate models, in conjunction with Monte Carlo based methods, for faster prediction of the loads. For their case study, a 3-span suspension bridge with 2 floating pylons under combined wind and wave actions was used, as shown in fig.\ref{fig:bridge}. In the new Monte Carlo framework, the implicit limit-state function was replaced by the surrogate models based on ANN and SVM. It was noticed that the ML based Monte Carlo method requires less computational effort and predicts more accurate results.
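The sketch below illustrates the surrogate-based Monte Carlo idea in miniature: an inexpensive regressor is trained on a few evaluations of an expensive response function and is then sampled heavily to estimate an extreme quantile. The quadratic "response" is a stand-in, not the structural model of \cite{xu2020efficient}.
\begin{verbatim}
# Surrogate-assisted Monte Carlo estimation of an extreme load effect.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_load_effect(x):  # placeholder for a structural analysis
    return x[:, 0] ** 2 + 0.5 * x[:, 1]

rng = np.random.default_rng(6)
X_train = rng.normal(size=(200, 2))  # e.g. wind and wave parameters
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                         random_state=0)
surrogate.fit(X_train, expensive_load_effect(X_train))

X_mc = rng.normal(size=(100000, 2))  # cheap Monte Carlo sampling
loads = surrogate.predict(X_mc)
print(np.quantile(loads, 0.999))     # extreme load-effect estimate
\end{verbatim}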
\begin{figure}
\centering
\includegraphics[height=4cm]{bridge.jpg}
\caption{A finite element model of 3-span suspension bridge with 2 floating pylons \citep{xu2020efficient}.} \label{fig:bridge}
\end{figure}
\subsubsection{Tidal current prediction}
The traditional method of tidal current prediction employs computer applications with classical harmonic analysis, which are computationally expensive for real time predictions. \cite{sarkar2018prediction} used Bayesian machine learning (Gaussian processes) for the prediction of tidal currents. The use of ML based techniques enables the uncertainties and the complexity of the problem to be represented in the modeling basis. The data used in the development of the ML algorithm were collected from the National Oceanic and Atmospheric Administration (NOAA). The tidal current observation sites are Southampton Shoal Channel, Old Port Tampa, Sunshine Sky Bridge and Martinez-AMORCO Pier; the data obtained from these sites were used for long-term predictions. A Gaussian process with a periodic kernel function was found to be suitable for the modeling problem concerned, because of its harmonic nature. \cite{immas2021real} utilized deep learning models, namely a Long Short-Term Memory (LSTM) recurrent neural network and a Transformer, to develop tools for in situ prediction of ocean currents. The data utilized for model development were also obtained from NOAA. In addition to the speed, they also predicted the direction of the ocean currents. It was noticed that these predictions of ocean currents are more accurate than the predictions of harmonic methods \citep{immas2021real}. \cite{sumangala2020coastal} used an ANN to model the currents of the Bay of Bengal, mainly improving the velocity predictions with the ANN model.
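A hedged scikit-learn sketch of the periodic-kernel Gaussian process idea is given below; the sinusoidal "tidal current" is synthetic, and the kernel settings are illustrative rather than those of \cite{sarkar2018prediction}.
\begin{verbatim}
# Gaussian process regression with a periodic kernel for a tidal signal.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

t = np.linspace(0, 4, 200)[:, None]  # time in days, synthetic
current = (np.sin(2 * np.pi * t / 0.5).ravel()
           + 0.05 * np.random.default_rng(7).normal(size=200))

kernel = (ExpSineSquared(length_scale=1.0, periodicity=0.5)
          + WhiteKernel(noise_level=0.01))
gp = GaussianProcessRegressor(kernel=kernel).fit(t, current)
mean, std = gp.predict(np.array([[4.1]]), return_std=True)
print(mean, std)  # prediction with an uncertainty estimate
\end{verbatim}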
\subsection{Application of ML in CFD}
The field of naval architecture, ocean and marine engineering often employs CFD as a tool for the modeling and prediction of flow past ships and underwater vehicles \citep{panda2021numerical,mitra2019effects,mitra2020experimental}, visualization of flow past propellers \citep{prabhu2017fluid}, analysis of wave induced loads on offshore structures, simulations of waves and currents, and simulations of wave energy converters \citep{mohapatra2021performance1, mohapatra2021performance2,mohapatra2020hydrodynamic}. The commercial software used in CFD analysis includes ANSYS FLUENT \citep{fluent2011fluent}, STAR-CCM+ \citep{cd2017star} and SHIPFLOW \citep{larsson1989shipflow}. CFD is cheaper in comparison to experimental methods and can be applied for flow prediction in larger and more complex domains, such as modeling flow past large ships and predicting the complex flow past propellers. The basic building blocks of such CFD tools are turbulence models. In the literature, ML techniques are applied either for faster flow prediction using surrogate models or for developing turbulence closure models using large data-sets from direct numerical simulations (DNS) or experiments.
In this section, we will provide a detailed overview of application of ML algorithms in CFD and turbulence modeling. The Naval Architects and CFD engineers working with different research problems may employ such advanced techniques for modeling and prediction of the complex oceanic flow fields.
\subsubsection{Fluid flow field prediction using reduced order models}
\cite{sekar2019fast} used deep neural networks for flow prediction over airfoils. They used a deep convolutional neural network to extract features from the different shapes of the airfoil and utilized those shape related features, along with the Reynolds number and angle of attack, as inputs to the DNN model. The outputs of the DNN are the pressure and velocity components around the airfoil. The data required for training the DNN model were obtained from CFD simulations. They predicted the flow field at a much faster speed (150 times) in comparison to traditional CFD methods, and the results are as accurate as the traditional CFD predictions. \cite{hui2020fast} utilized a CNN to predict the pressure distribution around airfoils. The data-set library was formed using numerical simulation data from deformed airfoils. The airfoil was parametrized using the signed distance function (SDF).
\begin{figure}
\centering
\includegraphics[height=8cm]{sdf.jpg}
\caption{The SDF representation of base airfoil \citep{hui2020fast}.} \label{fig:sdf}
\end{figure}
\cite{renganathan2020machine} utilized a DNN for non-intrusive model order reduction of the parametric inviscid transonic flow past an airfoil. They preserved the accuracy of the flow prediction at a significantly lower computational cost. \cite{kong2021deep} used a CNN to predict the velocity field for the flow in a scramjet isolator. The data required for training the model were obtained from numerical simulations of the flow at different Mach numbers and back pressures. The CNN has multiple reconstruction and feature extraction modules. A mapping relationship was established between the wall pressure on the isolator and the velocity field in the isolator. \cite{kong2021data} used a DL approach for super-resolution flow field reconstruction of a scramjet isolator. In contrast to the work of \cite{kong2021deep}, \cite{kong2021data} used experimental data-sets for the development of the DL model. They used both single-path and multi-path network models based on CNNs for the flow field reconstruction. It was noticed that the multi-path CNNs have better predictive accuracy in comparison to the single-path CNNs. \cite{lee2021analysis} used a CNN for predicting unsteady three-dimensional volume wake flow fields. The ML model was trained with past information of flow velocity and pressure. They also performed different analyses to find structural similarities among feature maps, in order to reduce the number of feature maps containing redundant flow structures. Such a reduction process can decrease the size of the neural network without affecting the prediction performance. \cite{hasegawa2020machine} developed a reduced order model (ROM) for unsteady flow prediction by combining a CNN and an LSTM, which are trained in a sequential manner. The CNN model was trained using DNS data obtained from numerical simulations with 80 bluff bodies and tested on 20 bluff bodies. They also tested the ML-ROM model for unseen bluff bodies, and the predicted results were quite satisfactory; this shows the universality of ML-ROM based models. \cite{nakamura2021convolutional} modeled three dimensional complex flow using a ROM consisting of a CNN and an LSTM. The function of the CNN was to map high dimensional flow fields into a low-dimensional latent space, and the LSTM was used to predict the temporal evolution of the latent vectors. The data required for training the ROM were obtained from DNS. \cite{leer2021fast} utilized an MLP in conjunction with a radial-logarithmic filter mask (RLF) to develop a universally applicable ML concept for fast flow field estimation for various geometry types. The function of the RLF is to provide information about the geometry in a compressed form to the MLP. They applied the new concept to both internal and external flows, such as airfoils and car shapes. The ML model was developed with data generated from CFD simulations for different geometries and was tested on different unknown geometries.
\cite{sun2020surrogate} proposed a physics constrained deep learning (DL) approach for surrogate modeling of fluid flows. The surrogate model was developed without relying on CFD simulation data. They proposed a structured DNN to enforce the initial and boundary conditions, and the Navier-Stokes (NS) equations were incorporated into the loss of the DNN during training. \cite{kashefi2021point} proposed a novel deep learning approach (the PointNet architecture) for flow prediction in irregular geometries, where the flow field is a function of the size and shape of the bluff body or of the shape of the domain. The grid vertices (spatial positions) of the mesh in the CFD domain were taken as the input to the DL model, and the corresponding flow parameters at those points were considered as the output. The PointNet architecture learns the non-linear relationship between the inputs and outputs. The PointNet model was trained on incompressible laminar steady flow past a cylinder and, to test its generalizability, it was used for the prediction of flow around multiple objects and airfoils. The ML model predictions were found to be satisfactory and accurate.
\begin{figure}
\centering
\includegraphics[height=4cm]{pointcloud.jpg}
\caption{Sample input and output data for the point-net \citep{kashefi2021point}.} \label{fig:pointcloud}
\end{figure}
\subsubsection{Turbulence modeling with ML}
Turbulent flows are a class of fluid flows characterised by the manifestation of spatio-temporal chaos, hyper-sensitivity to perturbations, and increased rates of diffusion, heat and mass transfer. As has been observed, turbulence is the norm and not the exception in nature \citep{moin1997tackling}. In an engineering context the ability to predict the evolution of turbulence is of critical importance \citep{pope2000}. Because of this chaotic nature, exact numerical prediction of turbulent flows is impracticable. Most engineering CFD studies rely on turbulence models to account for turbulence, and the accuracy of CFD simulations is largely dependent on the choice of turbulence model.
The simplest class of turbulence models is the mixing length based models, also referred to as one-equation models \citep{speziale1991analytical}. While these are simple and computationally inexpensive, they require extensive input parameters for each case. Many of these parameters cannot be determined without reliable CFD simulations, leading to a paradoxical situation; because of this, one-equation models are not regularly used in engineering applications. The second category of turbulence models is the eddy viscosity based models, which include popular models like the $k-\epsilon$ and $k-\omega$ models. These use the concept of an eddy viscosity to form a constitutive equation between the Reynolds stresses and the instantaneous mean gradients. Eddy viscosity based models are universal, robust and computationally inexpensive, but they have significant limitations in flows with separation and re-attachment \citep{speziale1990,mishra2019estimating}, flows around inhomogeneities like walls \citep{pope2000}, flows with moderate to high degrees of anisotropy \citep{pope2000, mishra2019linear}, etc. The final category of turbulence models is the Reynolds stress models, which utilize the Reynolds stress transport equations to formulate individual transport equations for the components of the Reynolds stress tensor. These include the Launder-Reece-Rodi (LRR) model \citep{lrr}, the Speziale-Sarkar-Gatski (SSG) model \citep{ssg,ssmodel}, the Mishra-Girimaji model \citep{mishra2, mishra6}, etc. Reynolds stress models offer higher accuracy and robustness at a higher computational cost. However, Reynolds stress models have limitations as well, especially in flows with a strong influence of rotational effects \citep{mishra1}, significant streamline curvature \citep{mishra3}, etc. We observe that while there are many different turbulence models, they all have significant limitations that affect their accuracy, applicability and robustness. All the above mentioned models are developed by calibrating the model coefficients with respect to experimental or direct numerical simulation (DNS) data. With the increase in computational facilities and data storage capacity, and the advancements in ML techniques, the current emphasis of turbulence researchers has shifted towards the development of turbulence models using different ML algorithms \citep{jimenez2018machine} such as random forests, gradient boosted trees and deep neural networks. In this section, a detailed discussion of turbulence models developed with ML approaches is provided.
\cite{duraisamy2015new} proposed new methods in turbulence and transition modeling using neural networks and Gaussian processes. They developed improved functional forms using ML and applied those functional forms to predict the flow field. \cite{singh2017machine} used neural networks to develop model augmentations for the Spalart-Allmaras model; the model forms were constructed and incorporated into the CFD solver. \cite{ling2016reynolds} used deep neural networks to develop a model for the Reynolds stress anisotropy tensor using high fidelity DNS data. They proposed a new neural network (the Tensor Basis Neural Network) that can accommodate invariance properties (Galilean invariance) in the modeling basis. The neural network structure was optimized using Bayesian optimization \citep{snoek2012practical}. A significant improvement in flow predictions was noticed when the ML model predictions were compared against the baseline RANS models. A data-driven Physics Informed Machine Learning (PIML) approach was proposed by \cite{wang2017physics}, in which the discrepancies in the RANS modeled Reynolds stresses were reconstructed. They used random forests for the modeling of the discrepancies with respect to DNS data, and the model developed was used to predict the flow field for other flow cases which were not used for model development (a schematic sketch of this idea is given below). \cite{wu2018physics} proposed a systematic approach for choosing the input feature variables used in turbulence modeling. They considered the strain rate tensor, rotation rate tensor, pressure gradient, TKE gradient, wall distance based Reynolds number, turbulence intensity and the ratio of turbulent time-scale to mean strain time-scale as the input features to model the discrepancy between the RANS modeled and true Reynolds stresses. They used gradients of the flow features in place of the actual values to ensure Galilean invariance \citep{pope2000}. Finally, the predictive capability of the ML model was tested against square duct and periodic hill flows.
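The sketch below gives a schematic, much-simplified version of this discrepancy-modeling idea: a random forest maps mean-flow features to the difference between "true" and RANS-modeled stresses. All arrays are synthetic placeholders, not turbulence data.
\begin{verbatim}
# Schematic Reynolds-stress discrepancy model with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
features = rng.normal(size=(1000, 7))  # e.g. strain, rotation, gradients
tau_rans = rng.normal(size=1000)       # RANS-modeled stress component
tau_dns = tau_rans + 0.3 * np.tanh(features[:, 0])  # surrogate "truth"

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, tau_dns - tau_rans)  # learn the discrepancy

# Corrected stress: RANS prediction plus the learned discrepancy.
tau_corrected = tau_rans[:5] + model.predict(features[:5])
print(tau_corrected)
\end{verbatim}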
\cite{kaandorp2020data} proposed the Tensor Basis Random Forest (TBRF), a novel machine learning algorithm to predict the Reynolds stress anisotropy tensor (RSAT). The use of a tensor basis ensures Galilean invariance in the prediction of the RSAT. The TBRF was trained on various flow cases of DNS/LES data and tested on unseen flows. The predicted values of the RSAT were finally employed in a RANS solver for flow prediction in unseen flows. \cite{zhu2021turbulence} developed a surrogate turbulence model for the prediction of flow past airfoils at high Reynolds numbers. Rather than using DNS or LES data, they utilized results from numerical simulations with the Spalart-Allmaras (SA) model to train the DNN. The model was trained with six different free-stream conditions for the NACA0012 airfoil.
\cite{panda2021modelling} proposed a data-driven model for the pressure strain correlation of turbulence \citep{panda2018representation,panda2020review} using the Reynolds stress anisotropy, dissipation, turbulence kinetic energy and the strain rate as inputs to the DNN. They used DNS data of channel flow at different friction Reynolds numbers to develop the DNN-based model, and the model was tested on flows at different Reynolds numbers as well as on a fully unknown test case of Couette flow in channels. The model predictions were also contrasted against other established pressure strain correlation models. \cite{panda2021evaluation} compared the predictive capabilities of random forests, gradient boosted trees and DNNs in Reynolds stress transport modeling, focusing mainly on the modeling of the pressure strain correlation of turbulence. Using Bayesian optimization they recommended optimal hyper-parameters for the DNN.
\cite{wang2018investigations} proposed ML-based subgrid-scale (SGS) models using DNNs and RFs for LES. They considered 30 flow variables as inputs to the ML model, analysed the feature importance of the input variables, and found that the filtered velocity and the second derivative of the input velocity have the largest influence on the output variable. The newly proposed ANN-based SGS model has a correlation coefficient of 0.7 and was found to be more accurate than the Smagorinsky model and the dynamic Smagorinsky model for flow predictions in isotropic turbulence. \cite{yuan2020deconvolutional} used a deconvolutional artificial neural network (DANN) for modeling the SGS stress. The input features for the DANN are the filtered velocities at different spatial points. It was observed that the DANN model predicted the SGS stress more accurately than the approximate deconvolution method and the velocity gradient model, with a correlation coefficient of $0.99$. \cite{xie2020artificial} developed ANN-based algebraic models for the SGS stress in LES of turbulence at Taylor Reynolds numbers ranging from 180 to 250. They constructed the coefficients of the non-linear algebraic model using the ANN. It was shown that the ANN-based non-linear algebraic model predicted the SGS stress more accurately in a priori tests.
\section{Challenges and Priority Research Directions}
Machine learning applications are having significant successes and impact across different problems in marine engineering, ocean engineering and naval architecture. Nonetheless, we need to ensure a higher degree of trust in ML models, enable adequate verification and validation (V\&V) of data-driven models, and combine the strengths of data-driven algorithms with the decades of purely physics based understanding that has been developed in these fields \citep{hu2020physics}.
An important step is to include physics and domain knowledge in machine learning models \citep{baker2019workshop}. This can be done at various levels, for example in the choice of the model algorithm and its hyperparameters, in the features that are input to the algorithm, or by directly appending physics constraints to the loss functions used in optimization. As an illustration, we can enforce mass and momentum conservation as constraints by appending additional losses that penalize the violation of these constraints in the loss functions of turbulence models generated by deep neural networks. This would ensure more physical data-driven models. It would also reduce the space of functions that the optimizer has to query and lead to better models that require less training data.
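As a minimal sketch of this idea (assuming a PyTorch setting in which a hypothetical \texttt{model} maps 2-D coordinates to a velocity field $(u,v)$, with an illustrative penalty weight \texttt{lam}; none of these names come from the works reviewed here), a continuity penalty can be appended to the usual data loss as follows:
\begin{verbatim}
import torch

def physics_informed_loss(model, x, y_true, lam=0.1):
    # Data-fit term plus a soft penalty on violating incompressible
    # continuity (du/dx + dv/dy = 0); lam trades off the two terms.
    x = x.clone().requires_grad_(True)
    uv = model(x)                                  # columns: (u, v)
    data_loss = torch.mean((uv - y_true) ** 2)
    du = torch.autograd.grad(uv[:, 0].sum(), x, create_graph=True)[0][:, 0]
    dv = torch.autograd.grad(uv[:, 1].sum(), x, create_graph=True)[0][:, 1]
    continuity = torch.mean((du + dv) ** 2)
    return data_loss + lam * continuity
\end{verbatim}
Momentum conservation can be penalized in the same fashion, at the cost of further automatic differentiation through the network.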
Another important step is generating measures of interpretability and explainability for ML models \citep{zaman2020scientific}. In the physics based models that are commonplace in marine and ocean engineering, every term and expression has physical meaning and represents different physics based interactions and processes. Machine learning models do not allow such understanding and rationalization. This lack of transparency leads to an understandable lack of trust in ML models from scientists and engineers working in marine and ocean applications, and it also precludes meaningful model validation. Verification and validation are essential steps to be executed before the deployment of any physics based model in marine and ocean engineering, and should be so for data-driven models as well. Model verification for ML models can be, and is, carried out using test datasets to estimate the generalization error. For model validation we need to be able to unearth the model's reasoning for its predictions. Due to the black box nature of algorithms like deep neural networks, this is not possible yet. Hence there is a critical need to interpret and explain the reasoning of trained ML models before they can start to replace traditional physics and empiricism based models.
A final need is to ensure robustness in the performance of ML models \citep{hegde2020quantifying}. Traditional models in marine and ocean applications have been based on physics. Physical laws such as symmetries and the conservation of mass, momentum and species are universal and extend to all ranges of parameter space. Machine learning models, in contrast, are restricted to the range of feature space where training data was available for their optimization; in regions of feature space far from the training data, ML models can make extremely poor predictions. For their general application there is a need to guarantee robustness in model performance.
\section{Concluding remarks}
In this article, we have provided a detailed review of the application of machine learning algorithms to ocean engineering, naval architecture and marine engineering. The different machine learning algorithms are discussed in detail. The ML applications in the marine environment were classified into several categories such as wave forecasting, AUV operation and control, ship research, design and reliability analysis of breakwaters, and applications in propeller research. The features used as inputs in modeling different marine processes and parameters were discussed in detail, along with the sources of data utilized in model development and the algorithms used to optimize the ML models. A detailed overview of the application of ML in CFD and turbulence modeling was also presented. Based on this comprehensive review and analysis we point out future directions of research that may be fruitful for the application of ML to ocean and marine engineering as well as to problems in naval architecture. This review will provide an avenue for marine engineers and naval architects to learn the basics of ML models and their applications in these fields.
\newpage
\bibliographystyle{elsarticle-harv}\biboptions{authoryear}
\subsection*{Binary strings with weight $\leq k$}
Recall the weight of a binary string is the number of 1s it contains.
Let ${\bf S}$ be the set of binary strings of length $n$ having weight less than or equal to some $k$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either decrease or maintain the weight.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Binary strings $\leq \gamma$}
Let ${\bf S}$ be the set of binary strings of length $n$ with each string lexicographically less than or equal to some fixed string $\gamma$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either make the resulting string lexicographically smaller or produce the same string.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Binary strings with $\leq k$ inversions}
Recall that an \emph{inversion} with respect to $0^*1^*$ in a binary string $\alpha = b_1 b_2 \cdots b_n$ is any $b_i = 1$ and $b_j = 0$ such that $i < j$.
For example when $\alpha = 100101$, it has $4$ inversions: $(b_1, b_2), (b_1, b_3), (b_1, b_5), (b_4, b_5)$.
Let ${\bf S}$ be the set of binary strings of length $n$ with less than or equal to $k$ inversions with respect to $0^*1^*$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either decrease or maintain the number of inversions.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Binary strings with $\leq k$ transpositions}
Recall that the number of \emph{transpositions} of a binary string $\alpha = b_1 b_2 \cdots b_n$ with respect to $0^*1^*$ is the minimum number of $swap(i, j)$ operations required to change $\alpha$ into the form $0^*1^*$.
For example, the number of transpositions of the string $100101$ is $1$.
Let ${\bf S}$ be the set of binary strings of length $n$ with less than or equal to $k$ transpositions with respect to $0^*1^*$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either decrease or maintain the number of transpositions.
Thus, ${\bf S}$ is a flip-swap language.
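Both statistics used in this and the previous subsection are easy to compute directly; a minimal Python sketch (illustrative only) is:
\begin{verbatim}
def inversions(alpha):
    # Count pairs b_i = 1, b_j = 0 with i < j (re: the form 0*1*).
    count, ones = 0, 0
    for b in alpha:
        if b == '1':
            ones += 1
        else:
            count += ones   # every 1 seen so far inverts with this 0
    return count

def transpositions(alpha):
    # Minimum swaps to reach 0*1*: count the 1s sitting where the
    # sorted string 0^(n-w) 1^w has 0s.
    target = ''.join(sorted(alpha))
    return sum(a == '1' and t == '0' for a, t in zip(alpha, target))
\end{verbatim}
On the example $100101$ these return the $4$ inversions and the single transposition noted above.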
\subsection*{Binary strings $< \text{or} \leq$ their reversal}
Let ${\bf S}$ be the set of binary strings of length $n$ with each string lexicographically smaller than their reversal.
Observe that ${\bf S}$ satisfies the swap-first property as the swap-first operation either produces the same string, or makes the resulting string lexicographically smaller while making its reversal lexicographically larger.
Furthermore, ${\bf S} \cup \{0^n\}$ satisfies the flip-first property as the flip-first operation complements the most significant bit of $\alpha$ but the least significant bit of its reversal when $w(\alpha) > 1$; or otherwise produces the string $0^n$ when $w(\alpha) = 1$.
Thus, ${\bf S}$ is a flip-swap language.
The proof for the set of binary strings of length $n$ with each string lexicographically smaller than or equal to their reversal is similar to the proof for ${\bf S}$.
Equivalence class of strings under reversal has also been called neckties~\cite{Savage:1997:SCG:273590.273592}.
\subsection*{Binary strings $< \text{or} \leq$ their complemented reversal}\label{subsec:complement}
Let ${\bf S}$ be the set of binary strings of length $n$ with each string lexicographically smaller than (or equal to) its complemented reversal.
Observe that ${\bf S}$ satisfies the flip-first property as the flip-first operation makes the resulting string lexicographically smaller while making its complemented reversal lexicographically larger.
Furthermore, ${\bf S} $ satisfies the swap-first property as the swap-first operation either produces the same string, or complements the most significant bit of $\alpha$ and also a $1$ of its complemented reversal.
Thus, the resulting string must also be less than its complemented reversal.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Binary strings with forbidden $10^t$}\label{subsec:forbidden}
Let ${\bf S}$ be the set of binary strings of length $n$ without the substring $10^t$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations do not create the substring $10^t$.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Binary strings with forbidden prefix $1\gamma$}\label{subsec:forbidden-prefix}
Let ${\bf S}$ be the set of binary strings of length $n$ without the prefix $1\gamma$.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either create a string with the prefix $0$ or produce the same string.
Thus, ${\bf S}$ is a flip-swap language.
\subsection*{Lyndon words}
Let ${\bf L}(n)$ denote the set of Lyndon words of length $n$.
Since ${\bf N}(n)$ is a flip-swap language and ${\bf L}(n) \cup \{0^n\} \subseteq {\bf N}(n)$,
it suffices to show that applying the flip-first or the swap-first operation on a Lyndon word either yields an aperiodic string or the string $0^n$.
Clearly ${\bf L}(n) \cup \{0^n\}$ satisfies the two closure properties of a flip-swap language when $\alpha \in \{0^n, 0^{n-1}1\}$. Thus in the remainder of the proof, $\alpha \notin \{0^n, 0^{n-1}1\}$.
We first prove by contradiction that ${\bf L}(n) \cup \{0^n\}$ satisfies the flip-first closure property.
Let $\alpha = 0^j 1 b_{j+2} b_{j+3}\cdots b_n$ be a string in ${\bf L}(n) \cup \{0^n\}$.
Suppose that ${\bf L}(n) \cup \{0^n\}$ does not satisfy the flip-first closure property and $\FLIP{\alpha}{\ell_\alpha}$ is periodic.
Thus $\FLIP{\alpha}{\ell_\alpha} = (0^{j+1} \beta)^t$ for some string $\beta$ and $t\geq 2$.
Observe that $\alpha = 0^{j}1 \beta (0^{j+1} \beta)^{t-1}$ which is clearly not a Lyndon word, a contradiction.
Therefore ${\bf L}(n) \cup \{0^n\}$ satisfies the flip-first closure property.
Then similarly we prove by contradiction that ${\bf L}(n) \cup \{0^n\}$ satisfies the swap-first property.
If $b_{j+2} = 1$, then applying the swap-first operation on $\alpha$ produces the same Lyndon word.
Thus in the remainder of the proof, $b_{j+2} = 0$.
Suppose that ${\bf L}(n) \cup \{0^n\}$ does not satisfy the swap-first closure property such that $\alpha \in {\bf L}(n) \cup \{0^n\}$ but $\SWAP{\alpha}{\ell_\alpha}{\ell_\alpha+1}$ is periodic.
Thus $\SWAP{\alpha}{\ell_\alpha}{\ell_\alpha +1} = (0^{j+1}1 \beta)^t$ for some string $\beta$ and $t \geq 2$.
Thus $\alpha$ contains the prefix $0^{j}1$ but also the substring $0^{j+1}1$ in its suffix which is clearly not a Lyndon word, a contradiction.
Thus, ${\bf L}(n)$ is a flip-swap language.
In~\cite{vaj}, Vajnovszki proved that the BRGC order induces a cyclic $2$-Gray code for the set of Lyndon words of length $n$.
\subsection*{Prenecklaces}
Recall that a string $\alpha$ is a \emph{prenecklace} if it is a prefix of some necklace.
In Section~\ref{sec:flip-swap} we prove that applying the flip-first or the swap-first operation on a necklace yields a necklace.
Thus by the definition of prenecklace, applying the flip-first or the swap-first operation on a prenecklace also creates a string that is a prefix of a necklace.
Thus, the set of prenecklaces of length $n$ is a flip-swap language.
\subsection*{Pseudo-necklaces}\label{subsec:pseudonecklaces}
Recall that a \emph{block} with respect to $0^*1^*$ is a maximal substring of the form $0^*1^*$.
Each block $B_i$ with respect to $0^*1^*$ can be represented by two integers $(s_i, t_i)$ corresponding to the number of $0$s and $1$s respectively. For example, the string $\alpha = 000110100011001$ can be represented by $B_4B_3B_2B_1 = (3, 2)(1, 1)(3, 2)(2, 1)$.
A block $B_i = (s_i, t_i)$ is said to be \emph{lexicographically smaller} than a block $B_j = (s_j, t_j)$ (denoted by $B_i < B_j$) if $s_i < s_j$ or $s_i = s_j$ with $t_i < t_j$.
A string $\alpha = b_1b_2 \cdots b_n = B_b B_{b-1} \cdots B_1$ is a \emph{pseudo-necklace} with respect to $0^*1^*$ if $B_b \leq B_i$ for all $1 \leq i < b$.
Observe that the set of pseudo-necklaces of length $n$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations do not make the first block $B_b$ lexicographically larger, while the remaining blocks either remain the same or become lexicographically larger.
Thus, the set of pseudo-necklaces of length $n$ is a flip-swap language.
In~\cite{neck-sww}, the authors proved that the BRGC order induces a cyclic $2$-Gray code for the set of pseudo-necklaces of length $n$.
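A direct Python sketch of the block decomposition and the membership test (an illustration of the definitions above, not an optimized routine):
\begin{verbatim}
import re

def blocks(alpha):
    # Maximal 0*1* blocks, leftmost (B_b) first, as (#0s, #1s) pairs.
    return [(m.group().count('0'), m.group().count('1'))
            for m in re.finditer(r'0*1+|0+', alpha)]

def is_pseudo_necklace(alpha):
    # Tuple comparison matches the block order: s first, then t.
    bs = blocks(alpha)
    return all(bs[0] <= b for b in bs[1:])
\end{verbatim}
For instance, blocks('000110100011001') returns [(3,2), (1,1), (3,2), (2,1)], matching the decomposition given above.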
\begin{comment}
\subsection*{Prefix normal words}
A binary string $\alpha$ is \emph{prefix normal} with respect to $0$ (also known as $0$-prefix normal word) if no substring of $\alpha$ has more $0$s than its prefix of the same length.
For example, the string 001010010111011 is a $0$-prefix normal word but the string 001010010011011 is not because it has a substring of length $5$ with four $0$s while the prefix of length $5$ has only three $0$s.
Observe that the set of $0$-prefix normal words of length $n$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either increases or maintain the number of $0$s in its prefix.
Thus, the set of $0$-prefix normal words of length $n$ is a flip-swap language.
A binary string $\alpha$ is {prefix normal} with respect to $1$ (also known as $1$-prefix normal word) if no substring of $\alpha$ has more $1$s than its prefix of the same length.
Similarly, the set of $1$-prefix normal words of length $n$ is a flip-swap language with respect to $0$.
\end{comment}
\subsection*{Left factors of $k$-ary Dyck words}
Recall that a \emph{$k$-ary Dyck word} is a binary string of length $n = tk$ with $t$ copies of $1$ and $t(k - 1)$ copies of $0$ such that every prefix has at most $k - 1$ copies of $0$ for every $1$.
It is well-known that $k$-ary Dyck words are in one-to-one correspondence with $k$-ary trees with $t$ internal nodes.
When $k = 2$, Dyck words are counted by the Catalan numbers and are equivalent to \emph{balanced parentheses}.
As an example, $110100$ is a $2$-ary Dyck word and is also a balanced parentheses string, while $100110$ is neither a $2$-ary Dyck word nor a balanced parentheses string because its prefix of length three contains more $0$s than $1$s.
$k$-ary Dyck words and balanced parentheses strings are well studied and have many applications, including trees and stack-sortable permutations~\cite{Bultena:1998:EAW:306049.306064,Ruskey199068,Ruskey:2008:GBP:1379361.1379382,Vajnovszki:2006:LTG:1219189.1219688}.
The set of $k$-ary Dyck words of length $n$ is not a flip-swap language with respect to $0$ since $110100$ is a $2$-ary Dyck word but $111100$ is not.
The set of length $n$ prefixes of $k$-ary Dyck words is, however, a flip-swap language with respect to $0$.
This set is also called \emph{left factors of $k$-ary Dyck words}.
Let ${\bf S}$ be the set of left factors of $k$-ary Dyck words.
Observe that ${\bf S}$ satisfies the two closure properties of a flip-swap language with respect to $0$ as the flip-first and swap-first operations do not increase the number of $0$s in any prefix.
Thus, ${\bf S}$ is a flip-swap language with respect to $0$.
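The membership test is a single left-to-right scan; a minimal Python sketch (illustrative only):
\begin{verbatim}
def is_dyck_left_factor(alpha, k):
    # Every prefix must have at most (k-1) copies of 0 for every 1.
    zeros = ones = 0
    for b in alpha:
        zeros += (b == '0')
        ones += (b == '1')
        if zeros > (k - 1) * ones:
            return False
    return True
\end{verbatim}
With $k=2$ this accepts $110100$ and rejects $100110$, as in the example above.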
\section{Introduction} \label{sec:intro}
Combinatorial generation studies the efficient generation of each instance of a combinatorial object, such as the $n!$ permutations of $\{1,2,\ldots,n\}$ or the $\frac{1}{n+1}\binom{2n}{n}$ binary trees with $n$ nodes.
The research area is fundamental to computer science and it has been covered by textbooks such as \emph{Combinatorial Algorithms for Computers and Calculators} by Nijenhuis and Wilf \cite{WilfBook}, \emph{Concrete Mathematics: A Foundation for Computer Science} by Graham, Knuth, and Patashnik~\cite{concrete}, and \emph{The Art of Computer Programming, Volume 4A, Combinatorial Algorithms} by Knuth~\cite{opac-b1101743}.
In fact, Knuth's section on \emph{Generating Basic Combinatorial Patterns} is over 450 pages.
The subject is important to every day programmers, and Arndt's \emph{Matters Computational: Ideas, Algorithms, Source Code} is an excellent practical resource \cite{MattersComputational}.
A primary consideration is listing the instances of a combinatorial object so that consecutive instances differ by a specified \emph{closeness condition}.
Lists of this type are called \emph{Gray codes}.
This terminology is due to the eponymous \emph{binary reflected Gray code} (\emph{BRGC}) by Frank Gray, which orders the $2^n$ binary strings of length $n$ so that consecutive strings differ in one bit.
The BRGC was patented for a pulse code communication system in 1953~\cite{gray-pulse-code-communication-1953}.
For example, the order for $n=4$ is
\begin{equation} \label{eq:BRGC4}
\begin{aligned}
0000, 1000, 1100, 0100, 0110, 1110, 1010, 0010, \\ 0011, 1011, 1111, 0111, 0101, 1101, 1001, 0001.
\end{aligned}
\end{equation}
Variations that reverse the entire order or the individual strings are also commonly used in practice and in the literature.
We note that the order in \eqref{eq:BRGC4} is \emph{cyclic} because the last and first strings also differ by the closeness condition, and this property holds for all $n$.
One challenge facing combinatorial generation is its relative surplus of breadth and lack of depth\footnote{This is not to say that combinatorial generation is always easy.
For example, the `middle levels' conjecture was confirmed by M\"{u}tze \cite{middleLMS} after 30 years and effort by hundreds of researchers.}.
For example, \cite{MattersComputational}, \cite{opac-b1101743}, and \cite{WilfBook} have separate subsections for different combinatorial objects, and the majority of the Gray codes are developed from first principles.
Thus, it is important to encourage simple frameworks that can be applied to a variety of combinatorial objects.
Previous work in this direction includes the following:
\begin{enumerate}
\item the ECO framework developed by Bacchelli, Barcucci, Grazzini, and Pergola~\cite{Bacchelli2004} that generates Gray codes for a variety of combinatorial objects such as Dyck words in constant amortized time per instance;
\item the twisted lexico computation tree by Takaoka~\cite{DBLP:conf/isaac/Takaoka99} that generates Gray codes for multiple combinatorial objects in constant amortized time per instance;
\item loopless algorithms developed by Walsh~\cite{DBLP:conf/dmtcs/Walsh03} to generate Gray codes for multiple combinatorial objects, which extend algorithms initially given by Ehrlich in~\cite{Ehrlich:1973:LAG:321765.321781};
\item greedy algorithms observed by Williams~\cite{GreedyWADS} that provide a uniform understanding for many previous published results;
\item the reflectable language framework by Li and Sawada~\cite{Li2009296} for generating Gray codes of $k$-ary strings, restricted growth strings, and $k$-ary trees with $n$ nodes;
\item the bubble language framework developed by Ruskey, Sawada and Williams~\cite{Ruskey2012155} that provides algorithms to generate shift Gray codes for fixed-weight necklaces and Lyndon words, $k$-ary Dyck words, and representations of interval graphs;
\item the permutation language framework developed by Hartung, Hoang, M\"{u}tze and Williams~\cite{10.5555/3381089.3381163} that provides algorithms to generate Gray codes for a variety of combinatorial objects based on encoding them as permutations.
\end{enumerate}
We focus on an approach that is arguably simpler than all of the above:
Start with a known Gray code and then \emph{filter} or \emph{induce} the list based on a subset of interest.
In other words, the subset is listed in the relative order given by a larger Gray code, and the resulting order is a \emph{sublist (Gray code)} with respect to it.
Historically, the first sublist Gray code appears to be the \emph{revolving door} Gray code for combinations \cite{Wilf}.
A \emph{combination} is a length $n$ binary string with \emph{weight} (i.e. number of ones) $k$.
The Gray code is created by filtering the BRGC, as shown below for $n=4$ and $k=2$ (cf. \eqref{eq:BRGC4})
\begin{equation} \label{eq:RevolvingDoor42}
\begin{aligned}
\cancel{0000}, \cancel{1000}, 1100, \cancel{0100}, 0110, \cancel{1110}, 1010, \cancel{0010}, \\ 0011, \cancel{1011}, \cancel{1111}, \cancel{0111}, 0101, \cancel{1101}, 1001, \cancel{0001}.
\end{aligned}
\end{equation}
This order is a \emph{transposition Gray code} as consecutive strings differ by transposing two bits\footnote{When each string is viewed as the incidence vector of a $k$-subset of $\{1,2,\ldots,n\}$, then consecutive $k$-subsets change via a ``revolving door'' (i.e. one value enters and one value exits).}.
It can be generated \emph{directly} (i.e. without filtering) by an efficient algorithm~\cite{Wilf}.
Transposition Gray codes are a special case of \emph{2-Gray codes} where consecutive strings differ by flipping (i.e. complementing) at most two bits.
Vajnovszki~\cite{vaj} proved that necklaces and Lyndon words form a cyclic $2$-Gray code in BRGC order, and efficient algorithms can generate these sublist Gray codes directly \cite{neck-sww}.
Our goal is to expand upon the known languages that are 2-Gray codes in BRGC order, and which can be efficiently generated.
To do this, we introduce a new class of languages.
A \emph{flip-swap language} (with respect to 1) is a set ${\bf S}$ of length $n$ binary strings such that ${\bf S} \cup \{0^n\}$ is closed under two operations (when applicable):
(1) Flip the leftmost $1$; and
(2) Swap the leftmost $1$ with the bit to its right.
A flip-swap language with respect to $0$ is defined similarly.
Flip-swap languages encode a wide variety of combinatorial objects.
\begin{theorem} \label{thm:main1}
The following sets of length $n$ binary strings are flip-swap languages:
\begin{tabular}{p{0.51\textwidth}p{0.49\textwidth}}
\small
{\bf Flip-Swap languages (with respect to $1$)}
\squishlisttwo
\item[i.] all strings
\item[ii.] strings with weight $\leq k$
\item[iii.] strings $\leq \gamma$
\item[iv.] strings with $\leq k$ inversions re: $0^*1^*$
\item[v.] strings with $\leq k$ transpositions re: $0^*1^*$
\item[vi.] strings $<$ their reversal
\item[vii.] strings $\leq$ their reversal (neckties)
\item[viii.] strings $<$ their complemented reversal
\item[ix.] strings $\leq$ their complemented reversal
\item[x.] strings with forbidden $10^t$
\item[xi.] strings with forbidden prefix $1\gamma$
\item[xii.] $0$-prefix normal words
\item[xiii.] necklaces (smallest rotation)
\item[xiv.] Lyndon words
\item[xv.] prenecklaces (smallest rotation)
\item[xvi.] pseudo-necklaces with respect to $0^*1^*$
\item[xvii.] left factors of $k$-ary Dyck words
\item[xviii.] feasible solutions to 0-1 knapsack problems
\squishend
&
\small
{\bf Flip-Swap languages (with respect to~$0$)}
\squishlisttwo
\item all strings
\item strings with weight $\geq k$
\item strings $\geq \gamma$
\item strings with $\leq k$ inversions re: $1^*0^*$
\item strings with $\leq k$ transpositions re: $1^*0^*$
\item strings $>$ their reversal
\item strings $\geq$ their reversal
\item strings $>$ their complemented reversal
\item strings $\geq$ their complemented reversal
\item strings with forbidden $01^t$
\item strings with forbidden prefix $0\gamma$
\item $1$-prefix normal words
\item necklaces (largest rotation)
\item aperiodic necklaces (largest rotation)
\item prenecklaces (largest rotation)
\item pseudo-necklaces with respect to $1^*0^*$
\squishend
\end{tabular}
\end{theorem}
Our second result is that every flip-swap language forms a cyclic $2$-Gray code when listed in BRGC order.
This generalizes the previous sublist BRGC results \cite{neck-sww,vaj}.
\begin{theorem} \label{thm:main2}
When a flip-swap language $\mathbf{S}$ is listed in BRGC order the resulting listing is a 2-Gray code. If $\mathbf{S}$ includes $0^n$ then the listing is cyclic.
\end{theorem}
Our third result is a generic successor rule, which efficiently computes the next string in the $2$-Gray code of a flip-swap language, so long as a fast membership test is given.
\begin{theorem} \label{thm:main3}
The languages in Theorem~\ref{thm:main1} can be generated in $O(n)$-amortized time per string, with the exception of prefix normal words which require $O(n^{1.864})$-time.
\end{theorem}
In Section~\ref{sec:BRGC}, we formally define our version of the BRGC.
In Section~\ref{sec:flip-swap}, we prove Theorem~\ref{thm:main1}, and define the flip-swap partially ordered set.
In Section~\ref{sec:main-result}, we give our generic successor rule and prove Theorem~\ref{thm:main2}.
In Section~\ref{sec:gen_n}, we present a generic generation algorithm that lists out each string of a flip-swap language, and we prove Theorem \ref{thm:main3}.
\section{The Binary Reflected Gray Code} \label{sec:BRGC}
Let $\BINARY{n}$ denote the set of length $n$ binary strings.
Let $BRGC(n)$ denote the listing of $\BINARY{n}$ in BRGC order.
Let $\overline{BRGC}(n)$ denote the listing $BRGC(n)$ in reverse order.
Then $BRGC(n)$ can be defined recursively as follows, where $\mathcal{L} \cdot x$ denotes the listing $\mathcal{L}$ with the character $x$ appended to the end of each string:
\begin{equation*}
BRGC(n) =
\begin{cases}
0,1 & \text{if $n=1$}; \\
BRGC(n-1) \cdot 0, \ \overline{BRGC}(n-1) \cdot 1 & \text{if $n >1$.}
\end{cases}
\end{equation*}
For example, $BRGC(2) = 00, 10, 11, 01$ and $\overline{BRGC}(2) = 01, 11, 10, 00$, {thus}
\begin{center}
$BRGC(3) = 00{\bf 0}, 10{\bf 0}, 11{\bf 0}, 01{\bf 0}, 01{\bf 1}, 11{\bf 1}, 10{\bf 1}, 00{\bf 1}$.
\end{center}
This definition of BRGC order is the same as the one used by Vajnovszki~\cite{vaj}.
When the strings are read from right-to-left, we obtain the classic definition of BRGC order~\cite{gray-pulse-code-communication-1953}. For flip-swap languages with respect to 0, we interchange the roles of the 0s and 1s; however, for our discussions we will focus on flip-swap languages with respect to 1.
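For readers who wish to experiment, a minimal Python sketch of this recursion (an illustration only, not the generation algorithm of Section~\ref{sec:gen_n}) is:
\begin{verbatim}
def brgc(n):
    # B(n) in the BRGC order used here: the new bit is appended
    # on the right, and the second half is reflected.
    if n == 1:
        return ['0', '1']
    prev = brgc(n - 1)
    return [s + '0' for s in prev] + [s + '1' for s in reversed(prev)]
\end{verbatim}
Here brgc(3) reproduces the listing displayed above.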
Table \ref{table:brgc} illustrates $BRGC(4)$ and six flip-swap languages listed in Theorem~\ref{thm:main1}.
\begin{table}[t]
\captionsetup{format=hang}
\centering
\begin{subfigure}[b]{0.5\textwidth}
\centering
\scriptsize
\begin{tabular}{ | c | c | c | c | c | c | c | } \hline
$n=4$ & \ all \ & necklaces & $0$-PNW & $\leq 1001$ & $k \leq 2$ & neckties \\
BRGC & i. & xiii. & xii. & iii. & ii. & vii. \\
\hline
0000 & \checkmark &\checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
1000 & \checkmark & & & \checkmark &\checkmark & \\
1100 & \checkmark & & & &\checkmark & \\
0100 & \checkmark & & & \checkmark & \checkmark & \\
0110 & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\
1110 & \checkmark & & & & & \\
1010 & \checkmark & & & & \checkmark & \\
0010 & \checkmark & & \checkmark & \checkmark & \checkmark & \checkmark \\
0011 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
1011 & \checkmark & & & & & \checkmark \\
1111 & \checkmark & \checkmark & & & & \checkmark \\
0111 & \checkmark & \checkmark & \checkmark & \checkmark & & \checkmark \\
0101 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
1101 & \checkmark & & & & & \\
1001 & \checkmark & & & \checkmark & \checkmark & \checkmark \\
0001 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\hline
\end{tabular}
\caption{String membership in 6 flip-swap languages.}
\label{table:brgc_checkmarks}
\end{subfigure}
\begin{subfigure}[b]{0.494\textwidth}
\centering
\scriptsize
\begin{tabular}{ c c c c c c }
i. & xiii. & xii. & iii. & ii. & vii. \\[-0.7em]
\ \scalebox{0.75}[1.0]{\raisebox{-0.2em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{GrayCode4-crop.pdf}}} \ &
\ \scalebox{0.75}[1.0]{\raisebox{-0.2em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{necklaces-crop.pdf}}} \ &
\ \scalebox{0.75}[1.0]{\raisebox{-0.2em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{0pnw-crop.pdf}}} \ &
\ \scalebox{0.75}[1.0]{\raisebox{-0.3em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{leq1001-crop.pdf}}} \ &
\ \scalebox{0.75}[1.0]{\raisebox{-0.3em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{weight12-crop.pdf}}} \ &
\ \scalebox{0.75}[1.0]{\raisebox{-0.3em}{\includegraphics[angle=270,scale=2.00,trim=1 1 1 1]{neckties-crop.pdf}}}
\end{tabular}
\caption{Visualizing the 2-Gray codes in (a).}
\label{table:brgc_visual}
\end{subfigure}
\caption{Flip-swap languages ordered as sublists of the binary reflected Gray code.
Theorem \ref{thm:main1} covers each language, so the resulting orders are 2-Gray codes.}
\label{table:brgc}
\end{table}
\section{Flip-swap languages} \label{sec:flip-swap}
In this section, we formalize some of the non-obvious flip-swap languages stated in Theorem~\ref{thm:main1}. Then we prove Theorem~\ref{thm:main1} for a subset of the listed languages including necklaces, prefix normal words, and feasible solutions to 0-1 knapsack problems. The remaining languages are handled in the Appendix.
Consider a binary string $\alpha = b_1b_2\cdots b_n$.
The \emph{weight} of $\alpha$ is the number of 1s it contains.
An \emph{inversion} in $\alpha$ with respect to $0^*1^*$ is an index pair $(i,j)$ such that $i<j$ and $b_i = 1$ and $b_j = 0$.
The number of \emph{transpositions} of $\alpha$ with respect to another binary string $\beta$ of length $n$ is the minimum number of adjacent transpositions required to transform $\alpha$ to $\beta$.
A \emph{necklace} is the lexicographically smallest (largest) string in an equivalence class under rotation.
An \emph{aperiodic necklace} is a necklace that cannot be written in the form $\beta^j$ for some $j < n$. A \emph{Lyndon word} is an aperiodic necklace when using the lexicographically smallest string as the representative.
A \emph{prenecklace} is a prefix of a necklace.
A \emph{block} with respect to $0^*1^*$ is a maximal substring of the form $0^*1^*$.
A string $\alpha = b_1b_2 \cdots b_n = B_b B_{b-1} \cdots B_1$ is a \emph{pseudo-necklace} with respect to $0^*1^*$ if $B_b \leq B_i$ for all $1 \leq i < b$.
A \emph{$k$-ary Dyck word} is a binary string of length $n = tk$ with $t$ copies of $1$ and $t(k - 1)$ copies of $0$ such that every prefix has at most $k - 1$ copies of $0$ for every $1$.
The set of length $n$ prefixes of $k$-ary Dyck words is called \emph{left factors of $k$-ary Dyck words}.
Let $\FLIP{\alpha}{i}$ be the string obtained by complementing $b_i$.
Let $\SWAP{\alpha}{i}{j}$ be the string obtained by swapping $b_i$ and $b_j$.
When the context is clear we use $flip(i)$ and $swap(i, j)$ instead of $\FLIP{\alpha}{i}$ and $\SWAP{\alpha}{i}{j}$.
Also, let $\ell_0(\alpha)$ denote the position of the leftmost $0$ of $\alpha$ or $n+1$ if no such position exists.
Similarly, let $\ell_1(\alpha)$ denote the position of the leftmost $1$ of $\alpha$ or $n+1$ if no such position exists; when the context is clear we abbreviate $\ell_1(\alpha)$ as $\ell_\alpha$.
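To make the notation concrete, the following small Python sketch of these primitives (1-indexed, as in the text) is used by the later illustrative code:
\begin{verbatim}
def flip(alpha, i):
    # Complement bit b_i of the bitstring alpha (1-indexed).
    return alpha[:i-1] + ('1' if alpha[i-1] == '0' else '0') + alpha[i:]

def swap(alpha, i, j):
    # Exchange bits b_i and b_j (1-indexed).
    b = list(alpha)
    b[i-1], b[j-1] = b[j-1], b[i-1]
    return ''.join(b)

def ell(alpha, c='1'):
    # Position of the leftmost c (1-indexed), or n+1 if c is absent.
    i = alpha.find(c)
    return i + 1 if i >= 0 else len(alpha) + 1
\end{verbatim}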
We now prove that binary strings, necklaces, prefix normal words, and feasible solutions to 0-1 knapsack problems are flip-swap languages with respect to 1. %
{\bf Binary strings}: Obviously the set ${\bf B}(n)$ satisfies the two closure properties of a flip-swap language and thus is a flip-swap language.
In fact, the BRGC order induces a cyclic $1$-Gray code for ${\bf B}(n)$~\cite{opac-b1101743,ruskey}.
{\bf Necklaces}:
Let ${\bf N}(n)$ be the set of necklaces of length $n$ and $\alpha = 0^j 1 b_{j+2} b_{j+3}\cdots b_n$ be a necklace in ${\bf N}(n)$.
By the definition of necklace, it is easy to see that $\FLIP{\alpha}{\ell_\alpha} = 0^{j+1} b_{j+2} b_{j+3}\cdots b_n \in {\bf N}(n)$ and thus ${\bf N}(n)$ satisfies the flip-first property.
For the swap-first operation, observe that if $\alpha \neq 0^{n-1}1$ and $b_{j+2} = 1$, then the swap-first operation produces the same necklace.
Otherwise if $\alpha \neq 0^{n-1}1$ and $b_{j+2} = 0$, then the swap-first operation produces the string $0^{j+1} 1 b_{j+3} b_{j+4}\cdots b_n$ which is clearly a necklace.
Thus, the set of necklaces is a flip-swap language.
\begin{comment}
{\bf Lyndon words}:
Let ${\bf L}(n)$ denote the set of Lyndon words of length $n$.
Since ${\bf N}(n)$ is a flip-swap language and ${\bf L}(n) \cup \{0^n\} \subseteq {\bf N}(n)$,
it suffices to show that applying the flip-first or the swap-first operation on a Lyndon word either yields an aperiodic string or the string $0^n$.
Clearly ${\bf L}(n) \cup \{0^n\}$ satisfies the two closure properties of a flip-swap language when $\alpha \in \{0^n, 0^{n-1}1\}$. Thus in the remaining of the proof, $\alpha \notin \{0^n, 0^{n-1}1\}$.
We first prove by contradiction that ${\bf L}(n) \cup \{0^n\}$ satisfies the flip-first closure property.
Let $\alpha = 0^j 1 b_{j+2} b_{j+3}\cdots b_n$ be a string in ${\bf L}(n) \cup \{0^n\}$.
Suppose that ${\bf L}(n) \cup \{0^n\}$ does not satisfy the flip-first closure property and $\FLIP{\alpha}{\ell_\alpha}$ is periodic.
Thus $\FLIP{\alpha}{\ell_\alpha} = (0^{j+1} \beta)^t$ for some string $\beta$ and $t\geq 2$.
Observe that $\alpha = 0^{j}1 \beta (0^{j+1} \beta)^{t-1}$ which is clearly not a Lyndon word, a contradiction.
Therefore ${\bf L}(n) \cup \{0^n\}$ satisfies the flip-first closure property.
Then similarly we prove by contradiction that ${\bf L}(n) \cup \{0^n\}$ satisfies the swap-first property.
If $b_{j+2} = 1$, then applying the swap-first operation on $\alpha$ produces the same Lyndon word.
Thus in the remaining of the proof, $b_{j+2} = 0$.
Suppose that ${\bf L}(n) \cup \{0^n\}$ does not satisfy the swap-first closure property such that $\alpha \in {\bf L}(n) \cup \{0^n\}$ but $\SWAP{\alpha}{\ell_\alpha}{\ell_\alpha+1}$ is periodic.
Thus $\SWAP{\alpha}{\ell_\alpha}{\ell_\alpha +1} = (0^{j+1}1 \beta)^t$ for some string $\beta$ and $t \geq 2$.
Thus $\alpha$ contains the prefix $0^{j}1$ but also the substring $0^{j+1}1$ in its suffix which is clearly not a Lyndon word, a contradiction.
Thus, ${\bf L}(n)$ is a flip-swap language.
{\bf Pseudo-necklaces}:
A \emph{block} with respect to $0^*1^*$ is a maximal substring of the form $0^*1^*$.
Each block $B_i$ with respect to $0^*1^*$ can be represented by two integers $(s_i, t_i)$ corresponding to the number of $0$s and $1$s respectively. For example, the string $\alpha = 000110100011001$ can be represented by $B_4B_3B_2B_1 = (3, 2)(1, 1)(3, 2)(2, 1)$.
A block $B_i = (s_i, t_i)$ is said to be \emph{lexicographically smaller} than a block $B_j = (s_j, t_j)$ (denoted by $B_i < B_j$) if $s_i < s_j$ or $s_i = s_j$ with $t_i < t_j$.
A string $\alpha = b_1b_2 \cdots b_n = B_b B_{b-1} \cdots B_1$ is a \emph{pseudo-necklace} with respect to $0^*1^*$ if $B_b \leq B_i$ for all $1 \leq i < b$.
Observe that the set of pseudo-necklaces of length $n$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations do not make the first block $B_b$ lexicographically larger, while the remaining blocks either remain the same or become lexicographically larger.
Thus, the set of pseudo-necklaces of length $n$ is a flip-swap language.
In~\cite{neck-sww}, the authors proved that the BRGC order induces a cyclic $2$-Gray code for the set of pseudo-necklaces of length $n$.
\end{comment}
{\bf Prefix normal words}:
A binary string $\alpha$ is \emph{prefix normal} with respect to $0$ (also known as $0$-prefix normal word) if no substring of $\alpha$ has more $0$s than its prefix of the same length.
For example, the string 001010010111011 is a $0$-prefix normal word but the string 001010010011011 is not because it has a substring of length $5$ with four $0$s while the prefix of length $5$ has only three $0$s.
Observe that the set of $0$-prefix normal words of length $n$ satisfies the two closure properties of a flip-swap language as the flip-first and swap-first operations either increases or maintain the number of $0$s in its prefix.
Thus, the set of $0$-prefix normal words of length $n$ is a flip-swap language.
{\bf Feasible solutions to $0$-$1$ knapsack problems}:
The input to a \emph{$0$-$1$ knapsack problem} is a knapsack capacity $W$, and a set of $n$ items each of which has a non-negative weight $w_i \geq 0$ and a value $v_i$.
A subset of items is \emph{feasible} if the total weight of the items in the subset is less than or equal to the capacity $W$.
Typically, the goal of the problem is to find a feasible subset with the maximum value, or to decide if a feasible subset exists with value $\geq c$.
Given the input to a $0$-$1$ knapsack problem, we reorder the items by non-increasing weight.
That is, $w_i \geq w_{i+1}$ for $1 \leq i \leq n-1$.
Notice that the incidence vectors of feasible subsets are now a flip-swap language.
More specifically, flipping any $1$ to $0$ cannot increase the total weight of the selected items, and neither can swapping a $1$ with the bit to its right, which exchanges an item for one of equal or smaller weight.
Hence, the language satisfies the flip-first and the swap-first closure properties and is a flip-swap language.
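As a small illustrative sketch (with hypothetical weights, not taken from the text), the membership test is just a feasibility check once the weights are sorted:
\begin{verbatim}
def knapsack_membership(weights, W):
    # weights must be sorted in non-increasing order (w_i >= w_{i+1})
    # so that the feasible incidence vectors form a flip-swap language.
    def in_S(alpha):
        return sum(w for w, b in zip(weights, alpha) if b == '1') <= W
    return in_S

in_S = knapsack_membership([5, 4, 2, 1], W=6)
assert in_S('0110') and not in_S('1100')   # 4+2 <= 6, but 5+4 > 6
\end{verbatim}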
\subsection{Flip-Swap poset} \label{sec:poset}
In this section we introduce a poset whose ideals correspond to a flip-swap language which includes the string $0^n$.
Let $\alpha = b_1 b_2 \cdots b_n $ be a length $n$ binary string. We define $\tau(\alpha)$ as follows:
\begin{subnumcases}{\tau(\alpha) = }
\alpha & if $\alpha = 0^n$, \nonumber \\
\FLIP{\alpha}{\ell_\alpha} & if $\alpha \ne 0^n$ and ($\ell_\alpha = n$ or $b_{\ell_\alpha + 1} = 1$) \ \ \ \ \ \ \hfill (flip-first), \nonumber \\
\SWAP{\alpha}{\ell_\alpha}{\ell_\alpha+1} & otherwise\hfill (swap-first). \nonumber
\end{subnumcases}
Let $\tau^t(\alpha)$ denote the string that results from applying the $\tau$ operation $t$ times to $\alpha$.
We define the binary relation $<_R$ on ${\bf B}(n)$ to be the transitive closure of the cover relation $\tau$, that is $\beta <_R \alpha$ if $\beta \ne \alpha$ and $\beta = \tau^t(\alpha)$ for some $t > 0$.
It is easy to see that the binary relation $<_R$ is irreflexive, anti-symmetric and transitive. Thus $<_R$ is a strict partial order.
The relation $<_R$ on binary strings defines our flip-swap poset.
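Operationally, $\tau$ is straightforward to compute; a minimal Python sketch of the definition above:
\begin{verbatim}
def tau(alpha):
    # Cover relation: flip-first or swap-first on the leftmost 1.
    b = list(alpha)
    if '1' not in b:
        return alpha                      # tau(0^n) = 0^n
    l = b.index('1')                      # 0-indexed leftmost 1
    if l == len(b) - 1 or b[l + 1] == '1':
        b[l] = '0'                        # flip-first
    else:
        b[l], b[l + 1] = '0', '1'         # swap-first
    return ''.join(b)
\end{verbatim}
For example, $\tau(0110) = 0010$ (flip-first) and $\tau(0100) = 0010$ (swap-first); these are exactly the covers appearing in Figure~\ref{fig:poset}.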
\begin{definition}
The \emph{flip-swap poset} $\mathcal{P}(n)$ is a strict poset with ${\bf B}(n)$ as the ground set and $<_R$ as the strict partial order.
\end{definition}
Figure~\ref{fig:poset} shows the Hasse diagram of $\mathcal{P}(4)$ with the ideal for binary strings of length $4$ that are lexicographically smaller or equal to $1001$ in bold.
Observe that $\mathcal{P}(n)$ is always a tree with $0^n$ as the unique minimum element, and that its ideals are the subtrees that contain this minimum.
\begin{lemma}\label{lem:2BRGC-poset}
A set $ {\bf S} $ over ${\bf B}(n)$ that includes $0^n$ is a flip-swap language if and only if $ {\bf S} $ is an ideal of $\mathcal{P}(n)$.
\end{lemma}
\begin{proof}
Let $ {\bf S} $ be a flip-swap language over ${\bf B}(n)$ and $\alpha$ be a string in $ {\bf S}$.
Since $ {\bf S} $ is a flip-swap language, $ {\bf S} $ satisfies the flip-first and swap-first properties and thus $\tau(\alpha)$ is a string in $ {\bf S} $.
Therefore every string $\gamma <_R \alpha$ is in $ {\bf S} $
and hence ${\bf S} $ is an ideal of $\mathcal{P}(n)$.
The other direction is similar.
\end{proof}
If {\bf S} is a set of binary strings and $\gamma$ is a binary string, then the \emph{quotient} of {\bf S} and $\gamma$ is ${\bf S}/\gamma = \{\alpha \ | \ \alpha \gamma \in {\bf S}\}$.
\begin{figure}[t]
\captionsetup{format=hang}
\centering
\begin{subfigure}[b]{0.54\textwidth}
\centering
\includegraphics[height=1.1in]{PosetP4.pdf}
\caption{The flip-swap poset $\mathcal{P}(4)$.}
\label{fig:poset_P4}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[height=1.1in]{PosetP4_leq1001.pdf}
\caption{An ideal of $\mathcal{P}(4)$.}
\label{fig:poset_leq}
\end{subfigure}
\caption{Flip-swap languages are the ideals of the flip-swap poset.
The ideal in (b) contains the $4$-bit binary strings that are $\leq 1001$ with respect to lexicographic order.}
\label{fig:poset}
\end{figure}
\begin{lemma}\label{lem:closure}
If ${\bf S}_1$ and ${\bf S}_2$ are flip-swap languages and $\gamma$ is a binary string, then ${\bf S}_1 \cap {\bf S}_2$, ${\bf S}_1 \cup {\bf S}_2$ and ${\bf S}_1 / \gamma$ are flip-swap languages.
\end{lemma}
\begin{proof}
Let ${\bf S}_1$ and ${\bf S}_2$ be two flip-swap languages and let $\gamma$ be a binary string.
The intersection and union of ideals of any poset are also ideals of that poset, so ${\bf S}_1 \cap {\bf S}_2$ and ${\bf S}_1 \cup {\bf S}_2$ are flip-swap languages.
Now suppose $\alpha \in {\bf S}_1 / \gamma$ for some non-empty $\gamma$, and let $j = |\alpha|$. This means that $\alpha \gamma \in {\bf S}_1$.
Consider three cases depending on $\ell_{\alpha\gamma}$.
If $\ell_{\alpha\gamma} < j$, then clearly $\tau(\alpha \gamma) = \tau(\alpha) \gamma$. From Lemma~\ref{lem:2BRGC-poset}, $\tau(\alpha) \gamma \in {\bf S}_1$ and thus $\tau(\alpha) \in {\bf S}_1 / \gamma$.
If $\ell_{\alpha\gamma} = j$, then $\alpha = 0^{j-1}1$ and $\tau(\alpha) = 0^j$. Since ${\bf S}_1$ is a flip-swap language,
$0^j\gamma \in {\bf S}_1$, which again implies that $\tau(\alpha) \in {\bf S}_1 / \gamma$. If $\ell_{\alpha\gamma} > j$, then
$\alpha = 0^j$ and $\tau(\alpha) = \alpha$. In each case we have shown that $\tau(\alpha) \in {\bf S}_1 / \gamma$, and thus ${\bf S}_1 / \gamma$ is a flip-swap language by Lemma~\ref{lem:2BRGC-poset}.
\end{proof}
\begin{corollary}\label{lem:union-intersection-quotient}
Flip-swap languages are closed under union, intersection, and quotient.
\end{corollary}
\begin{proof}
Let ${\bf S}_A$ and ${\bf S}_B$ be flip-swap languages and $\gamma$ be a binary string.
Since ${\bf S}_A$ and ${\bf S}_B$ can be represented by ideals of the flip-swap poset, possibly excluding $0^n$, by Lemma~\ref{lem:closure} the sets ${\bf S}_A \cap {\bf S}_B$, ${\bf S}_A \cup {\bf S}_B$ and ${\bf S}_A/\gamma$ are flip-swap languages.
\end{proof}
\begin{lemma}\label{lem:prefix}
If $\alpha \gamma$ is a binary string in a flip-swap language ${\bf S}$, then $0^{|\alpha|} \gamma \in {\bf S}$.
\end{lemma}
\begin{proof}
This result follows from the flip-first property of flip-swap languages.
\end{proof}
\section{A generic successor rule for flip-swap languages}\label{sec:main-result}
Consider any flip-swap language ${\bf S}$ that includes the string $0^n$.
Let $\mathcal{BRGC}({\bf S})$ denote the listing of $\bf S$ in BRGC order. Given a string $\alpha \in \mathbf{S}$,
we define a generic \emph{successor rule} that computes the string following $\alpha$ in the cyclic listing $\mathcal{BRGC}({\bf S})$.
Let $\alpha = b_1 b_2 \cdots b_n$ be a string in ${\bf S}$.
Let $t_\alpha$ be the leftmost position such that $\FLIP{\alpha}{t_\alpha} \in {\bf S}$; when $|{\bf S}|>1$, such a $t_\alpha$ exists since ${\bf S}$ satisfies the flip-first property.
Recall that $\ell_\alpha$ is defined to be the position of the leftmost $1$ of $\alpha$ (or $|\alpha|+1$ if no such position exists).
Notice that $t_\alpha \leq \ell_\alpha$ when $|{\bf S}|>1$ since ${\bf S}$ is a flip-swap language.
\begin{table}[t]
\begin{center} \footnotesize
\begin{tabular}{c @{\hskip 0.2in} c @{\hskip 0.2in} c @{\hskip 0.2in} c @{\hskip 0.2in} c @{\hskip 0.2in} c}
Necklaces &Parity of $w(\alpha)$ & $t_\alpha$ & $\ell_\alpha$ & Successor& Case\\ \hline
000000 & even &$6$ & & $flip2(5, 6)$ &(\ref{f3}) \\
000011 & even& $3$ & & $flip2(2, 3)$ &(\ref{f3}) \\
011011 & even&$2$& & $flip(2)$ &(\ref{f2})\\
001011 & odd && $3$ & $flip(4)$ &(\ref{f5}) \\
001111 & even&$2$& &$flip2(1, 2)$&(\ref{f3}) \\
111111 & even&$1$& &$flip(1)$ &(\ref{f2})\\
011111& odd && $2$ &$flip(3)$ &(\ref{f5}) \\
010111 & even&$3$& &$flip(2)$ & (\ref{f2})\\
000111& odd && $4$ &$flip(5)$ &(\ref{f5}) \\
000101 & even &$2$& &$flip(2)$ &(\ref{f2})\\
010101& odd && $2$ & $flip2(2, 3)$&(\ref{f4}) \\
001101& odd && $3$ & $flip(4)$&(\ref{f5}) \\
001001& even&$3$& &$flip(3)$ &(\ref{f2})\\
000001& odd&& &$flip(6)$ &(\ref{f1})
\end{tabular}
\end{center}
\captionsetup{format=hang}
\caption{The necklaces of length 6 induced by successive applications of the function $f$, starting from $000000$.
The sixth column lists the rule of $f$ that applies to each necklace to obtain the next necklace.
}\label{table:brgc11}
\vspace{-2em}
\end{table}
Let $\FLIPTWO{\alpha}{i}{j}$ be the string obtained by complementing both $b_i$ and $b_j$.
When the context is clear we use $flip2(i, j)$ instead of $\FLIPTWO{\alpha}{i}{j}$.
Also, let $w(\alpha)$ denote the number of $1$s of $\alpha$.
We claim that the following function $f$ computes the next string in the cyclic ordering $\mathcal{BRGC}({\bf S})$:
{\footnotesize
\begin{subnumcases}{\hspace{-2em}f(\alpha) =}
0^n & \mbox{if $\alpha = 0^{n-1}1$;}\label{f1} \\
\FLIP{\alpha}{t_\alpha} & \mbox{if $w(\alpha)$ is even and ($t_\alpha = 1$ or $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} \notin {\bf S}$);}\label{f2} \\
\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} & \mbox{if $w(\alpha)$ is even and $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} \in {\bf S}$;}\label{f3} \\
\FLIPTWO{\alpha}{\ell_\alpha}{\ell_\alpha +1} & \mbox{if $w(\alpha)$ is odd and $\FLIP{\alpha}{\ell_\alpha +1} \notin {\bf S}$;}\label{f4} \\
\FLIP{\alpha}{\ell_\alpha +1} & \mbox{if $w(\alpha)$ is odd and $\FLIP{\alpha}{\ell_\alpha +1} \in {\bf S}$.}\label{f5} \label{eq:recursive}
\end{subnumcases}}
Thus, successive applications of the function $f$ on a flip-swap language ${\bf S}$, starting with the string $0^n$, list out each string in ${\bf S}$ in BRGC order.
As an illustration of the function $f$, successive applications of this rule for the set of necklaces of length $6$, starting with the necklace $000000$, produce the listing in Table~\ref{table:brgc11}.
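To make the rule concrete, the following Python sketch implements $f$ for any membership test \texttt{in\_S} of a flip-swap language containing $0^n$; it illustrates the case analysis above rather than the optimized routine of Section~\ref{sec:gen_n}, and positions are 0-indexed in the code:
\begin{verbatim}
def successor(alpha, in_S):
    # One application of f; alpha is a bitstring in S.
    n = len(alpha)
    if alpha == '0' * (n - 1) + '1':
        return '0' * n                                 # 0^{n-1}1 -> 0^n
    def flipped(*ps):                                  # ps are 0-indexed
        b = list(alpha)
        for p in ps:
            b[p] = '1' if b[p] == '0' else '0'
        return ''.join(b)
    ell = alpha.index('1') if '1' in alpha else n      # leftmost 1
    if alpha.count('1') % 2 == 0:                      # even weight
        t = next(i for i in range(min(ell + 1, n)) if in_S(flipped(i)))
        if t == 0 or not in_S(flipped(t - 1, t)):
            return flipped(t)                          # flip(t)
        return flipped(t - 1, t)                       # flip2(t-1, t)
    if in_S(flipped(ell + 1)):
        return flipped(ell + 1)                        # flip(l+1)
    return flipped(ell, ell + 1)                       # flip2(l, l+1)
\end{verbatim}
With the simple quadratic necklace test \texttt{lambda a: all(a <= a[i:] + a[:i] for i in range(len(a)))} as \texttt{in\_S}, successive calls starting from $000000$ reproduce the listing in Table~\ref{table:brgc11}.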
\begin{restatable}{theorem}{mainclaim}
\label{thm:big}
If ${\bf S}$ is a flip-swap language including the string $0^n$ and $|{\bf S}| > 1$, then $f(\alpha)$ is the string immediately following the string $\alpha$ in ${\bf S}$ in the cyclic ordering $\mathcal{BRGC}({\bf S})$.
\end{restatable}
We will provide a detailed proof of this theorem in the next subsection. Observe that each rule in $f$ complements at most two bits and thus successive strings in ${\bf S}$ differ
by at most two bit positions. Observe that when $0^n$ is excluded from ${\bf S}$, then $\mathcal{BRGC}({\bf S})$ is still a $2$-Gray code (although not necessarily cyclic).
This proves Theorem~\ref{thm:main2}.
\subsection{Proof of Theorem~\ref{thm:big}}\label{sec:proof}
This section proves Theorem~\ref{thm:big}.
We begin with a lemma by Vajnovszki~\cite{vaj}, and a remark that follows from the fact that $0^{n-1}1$ is in a flip-swap language ${\bf S}$ when $|{\bf S}| > 1$.
\begin{lemma}\label{lem:gray}
Let $\alpha = b_1 b_2 \cdots b_n$ and $\beta$ be length $n$ binary strings such that $\alpha \ne \beta$. Let $r$ be the rightmost position in which $\alpha$ and $\beta$ differ.
Then $\alpha$ comes before $\beta$ in BRGC order (denoted by $\alpha \prec \beta$) if and only if $w(b_r b_{r+1} \cdots b_n)$ is even.
\end{lemma}
\begin{remark}\label{lem:last}
A flip-swap language ${\bf S}$ in BRGC order ends with $0^{n-1}1$ when $|{\bf S}| > 1$.
\end{remark}
Let $succ({\bf S}, \alpha)$ be the \emph{successor} of $\alpha$ in ${\bf S}$ in BRGC order (i.e. the string after $\alpha$ in the cyclic ordering $\mathcal{BRGC}({\bf S})$).
Next we provide two lemmas, and then prove Theorem~\ref{thm:big}.
\begin{lemma} \label{thm:even}
Let ${\bf S}$ be a flip-swap language with $|{\bf S}| > 1$ and $\alpha$ be a string in ${\bf S}$.
Let $t_\alpha$ be the leftmost position such that $\FLIP{\alpha}{t_\alpha} \in {\bf S}$.
If $w(\alpha)$ is even, then $t_\alpha$ is the rightmost position in which $\alpha$ and $succ({\bf S}, \alpha)$ differ.
\end{lemma}
\begin{proof}
By contradiction.
Let $\alpha = b_1 b_2 \cdots b_n$ and $\beta = succ({\bf S}, \alpha)$.
Let $r$ be the rightmost position in which $\alpha$ and $\beta$ differ with $r \neq t_\alpha$.
If $t_\alpha > r$, then $\beta$ has the suffix $1b_{r+1} b_{r+2} \cdots b_n$ since $b_r = 0$ because $r<t_\alpha \leq \ell_\alpha$.
Thus by the {flip-first} property, $0^{r-1}1b_{r+1} b_{r+2} \cdots b_n = \FLIP{\alpha}{r} \in {\bf S}$ and $r<t_\alpha$, a contradiction.
Otherwise if $t_\alpha < r$, then let $\gamma = \FLIP{\alpha}{t_\alpha}$.
Clearly $\gamma \neq \alpha$.
Now observe that $w(b_{t_\alpha} b_{t_\alpha+1} \cdots b_n)$ is even because $t_\alpha \leq \ell_\alpha$ and $w(\alpha)$ is even, and thus by Lemma~\ref{lem:gray}, $\alpha \prec \gamma$.
Also, $\gamma$ has the suffix $b_{r} b_{r+1} \cdots b_n$ and $w(b_{r} b_{r+1} \cdots b_n)$ is even because $\alpha \prec \beta$ and $r$ is the rightmost position $\alpha$ and $\beta$ differ, and thus also by Lemma~\ref{lem:gray}, $\gamma \prec \beta$.
Thus $\alpha \prec \gamma \prec \beta$, a contradiction.
Therefore $r = t_\alpha$.
\end{proof}
\begin{lemma} \label{thm:odd}
Let ${\bf S}$ be a flip-swap language with $|{\bf S}| > 1$ and $\alpha\neq 0^{n-1}1$ be a string in ${\bf S}$.
If $w(\alpha)$ is odd, then $\ell_\alpha +1$ is the rightmost position in which $\alpha$ and $succ({\bf S}, \alpha)$ differ.
\end{lemma}
\begin{proof}
Since $\alpha \neq 0^{n-1}1$ and $w(\alpha)$ is odd, $\ell_\alpha < n$.
We now prove the lemma by contradiction.
Let $\alpha = b_1 b_2 \cdots b_n$ and $\beta = succ({\bf S}, \alpha)$.
Let $r \neq \ell_\alpha + 1$ be the rightmost position in which $\alpha$ and $\beta$ differ.
If $r < \ell_\alpha + 1$, then $w(b_{r} b_{r+1} \cdots b_n)$ is odd but $\alpha \prec \beta$, a contradiction by Lemma~\ref{lem:gray}.
Otherwise if $r > \ell_\alpha + 1$, then let $\gamma = \FLIPTWO{\alpha}{\ell_\alpha}{\ell_\alpha + 1}$.
Clearly $\gamma \neq \alpha$, and by the {flip-first} and {swap-first} properties, $\gamma \in {\bf S}$.
Also, observe that $w(b_{\ell_\alpha + 1} b_{\ell_\alpha + 2} \cdots b_n)$ is even because $w(\alpha)$ is odd, and thus by Lemma~\ref{lem:gray}, $\alpha \prec \gamma$.
Further, $\gamma$ has the suffix $b_{r} b_{r+1} \cdots b_n$ and $w(b_{r} b_{r+1} \cdots b_n)$ is even because $\alpha \prec \beta$ and $r$ is the rightmost position $\alpha$ and $\beta$ differ, and thus also by Lemma~\ref{lem:gray}, $\gamma \prec \beta$.
Thus $\alpha \prec \gamma \prec \beta$, a contradiction.
Therefore $r = \ell_\alpha + 1$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:big}]
Let $\alpha = a_1 a_2 \cdots a_n$ and $\beta = succ({\bf S}, \alpha) = b_1 b_2 \cdots b_n$.
Let $t_\alpha$ be the leftmost position such that $\FLIP{\alpha}{t_\alpha} \in {\bf S}$.
First we consider the case when $\alpha = 0^{n-1}1$.
Recall that the first string in ${\bf B}(n)$ in BRGC order is $0^n$~\cite{ruskey} and $0^n$ is a string in ${\bf S}$ by Lemma~\ref{lem:prefix}.
Also, the last string in ${\bf S}$ in BRGC order is $0^{n-1}1$ by Remark~\ref{lem:last} when $|{\bf S}| > 1$.
Thus the string that appears immediately after $\alpha$ in the cyclic ordering $\mathcal{BRGC}({\bf S})$ is $f(\alpha)$ when $\alpha = 0^{n-1}1$.
In the remainder of the proof, $\alpha \neq 0^{n-1}1$ and we consider the following two cases.
\begin{description}
\item[Case 1:] $w(\alpha)$ is even:\
If $t_\alpha=1$, then clearly $\beta = \FLIP{\alpha}{t_\alpha}= f(\alpha)$.
For the remainder of the proof, $t_\alpha>1$.
Since $t_\alpha \leq \ell_\alpha$, $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha}$ has the prefix $0^{t_\alpha-2}1$.
We now consider the following two cases.
If $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} \notin {\bf S}$, then $\FLIP{\alpha}{t_\alpha}$ is the only string in ${\bf S}$ that has $t_\alpha$ as the rightmost position at which it differs from $\alpha$ and has the prefix $0^{t_\alpha-2}$.
Therefore, $\beta = \FLIP{\alpha}{t_\alpha} = f(\alpha)$.
Otherwise, $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha}$ and $\FLIP{\alpha}{t_\alpha}$ are the only strings in ${\bf S}$ that have $t_\alpha$ as the rightmost position at which they differ from $\alpha$ and have the prefix $0^{t_\alpha-2}$.
By Lemma~\ref{lem:gray}, $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} \prec \FLIP{\alpha}{t_\alpha}$ since $w(1\overline{a}_{t_\alpha} a_{t_\alpha+1} a_{t_\alpha+2} \cdots a_n)$ is even.
Thus, $\beta = \FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} = f(\alpha)$.
\item[Case 2:] $w(\alpha)$ is odd:\
By Lemma~\ref{thm:odd}, $\beta$ has the suffix $\overline{a}_{\ell_\alpha + 1} a_{\ell_\alpha + 2} a_{\ell_\alpha + 3} \cdots a_n$.
If $\FLIP{\alpha}{\ell_\alpha+1} \notin {\bf S}$, then by the {flip-first} and {swap-first} properties, $\FLIPTWO{\alpha}{\ell_\alpha}{\ell_\alpha + 1}$ is the only string in ${\bf S}$ that has $\ell_\alpha + 1$ as the rightmost position at which it differs from $\alpha$.
Thus, $\beta = \FLIPTWO{\alpha}{\ell_\alpha}{\ell_\alpha + 1} = f(\alpha)$.
Otherwise by Lemma~\ref{lem:gray}, any string $\gamma \in {\bf S}$ with the suffix $\overline{a}_{\ell_\alpha + 1} a_{\ell_\alpha + 2} a_{\ell_\alpha + 3} \cdots a_n$
and $\gamma \neq \FLIP{\alpha}{\ell_\alpha+1}$ has $\FLIP{\alpha}{\ell_\alpha +1} \prec \gamma$ because $w(1 \overline{a}_{ \ell_\alpha + 1} a_{\ell_\alpha + 2}a_{\ell_\alpha + 3} \cdots a_n)$ is even.
Thus, $\beta = \FLIP{\alpha} {\ell_\alpha + 1} = f(\alpha)$.
\end{description}
Therefore, the string immediately after $\alpha$ in the cyclic ordering $\mathcal{BRGC}({\bf S})$ is $f(\alpha)$.
\end{proof}
\section{Generation algorithm for flip-swap languages} \label{sec:gen_n}
In this section we present a generic algorithm to generate $2$-Gray codes for flip-swap languages via the function $f$.
A na\"{i}ve approach to implement $f$ is to find $t_\alpha$ by test flipping each bit in $\alpha$ to see if the result is also in the set when $w(\alpha)$ is even; or test flipping the ($\ell_\alpha+1$)-th bit of $\alpha$ to see if the result is also in the set when $w(\alpha)$ is odd.
Since $t_\alpha \leq \ell_\alpha$, we only need to examine the length $\ell_\alpha-1$ prefix of $\alpha$ to find $t_\alpha$.
Such a test can be done in $O(nm)$ time, where $O(m)$ is the time required to complete the membership test of the set under consideration.
Pseudocode of the function $f$ is given in Algorithm~\ref{alg:n2-f}.
To list out each string of a flip-swap language ${\bf S}$ in BRGC order, we can repeatedly apply the function $f$ until it reaches the starting string.
We also maintain $w(\alpha)$ and $\ell_\alpha$, which can easily be updated in $O(n)$ time for each string generated.
We also add a condition to avoid printing the string $0^n$ if $0^n$ is not a string in ${\bf S}$.
Pseudocode for this algorithm, starting with the string $0^n$, is given in Algorithm~\ref{alg:n2-decode}.
The algorithm can easily be modified to generate the corresponding counterpart of ${\bf S}$ with respect to $0$.
A simple analysis shows that the algorithm generates ${\bf S}$ in $O(nm)$-time per string.
A more thorough analysis improves this to $O(n+m)$-amortized time per string.
\begin{theorem} \label{lem:n2}
If ${\bf S}$ is a flip-swap language, then the algorithm \textit{BRGC} produces
$\mathcal{BRGC}({\bf S})$ in $O(n+m)$-amortized time per string, where $O(m)$ is the time required to complete a membership test for ${\bf S}$.
\end{theorem}
\begin{proof}
Let $\alpha = a_1 a_2 \cdots a_n$ be a string in ${\bf S}$.
Clearly $f$ can be computed in $O(n)$ time when $w(\alpha)$ is odd.
Otherwise when $w(\alpha)$ is even, the {\bf while} loop in line 5 of Algorithm~\ref{alg:n2-f} performs a membership test on each string $\beta = b_1 b_2 \cdots b_n$ in ${\bf S}$ with $b_{\ell_\alpha } b_{\ell_\alpha + 1} \cdots b_n = a_{\ell_\alpha} a_{\ell_\alpha + 1} \cdots a_n$ and $w(b_1 b_2 \cdots b_{\ell_\alpha -1}) = 1$.
Observe that each of these strings can be examined by the membership test only once; otherwise the {\bf while} loop in line 5 of Algorithm~\ref{alg:n2-f} would produce the same $t_\alpha$ twice, resulting in a duplicated string, a contradiction.
Thus, the total number of membership tests performed by the algorithm is bounded by $|{\bf S}|$, and therefore $f$ runs in $O(m)$-amortized time per string.
Finally, since the other part of the algorithm runs in $O(n)$ time per string, the algorithm \textit{BRGC} runs in $O(n+m)$-amortized time per string.
\end{proof}
The membership tests in this paper can be implemented in $O(n)$ time and $O(n)$ space; see~\cite{Booth,journals/jal/Duval83,Sawada201346} for necklaces, Lyndon words, prenecklaces and pseudo-necklaces of length $n$.
One exception is the test for prefix normal words of length $n$, which requires $O(n^{1.864})$ time and $O(n)$ space~\cite{Chan:2015:CIV:2746539.2746568}.
Together with the above theorem, this proves Theorem~\ref{thm:main3}.
Visit the Combinatorial Object Server~\cite{cos} for a C implementation of our algorithms.
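
For readers who prefer a concrete reference implementation, the following Python sketch mirrors Algorithms~\ref{alg:n2-f} and~\ref{alg:n2-decode}; the oracle interface \texttt{in\_S}, the convention $\ell_\alpha = n$ for $\alpha = 0^n$, and the na\"{i}ve quadratic necklace test supplied as an example oracle are our own illustrative choices, not part of the C implementation mentioned above.
\begin{verbatim}
# Python sketch of Algorithms 1 and 2; the oracle interface in_S,
# the convention ell = n for 0^n, and the naive necklace test are
# illustrative assumptions.
def flip(a, i):                     # complement bit i (1-indexed)
    b = list(a); b[i-1] ^= 1; return tuple(b)

def f(a, in_S):                     # successor rule f
    n = len(a)
    if sum(a) == 1 and a[-1] == 1:  # a = 0^{n-1} 1 wraps to 0^n
        return flip(a, n)
    ell = a.index(1) + 1 if 1 in a else n   # leftmost 1
    if sum(a) % 2 == 0:             # w(a) even
        t = ell
        while t > 1 and in_S(flip(a, t - 1)):
            t -= 1
        if t != 1 and in_S(flip(flip(a, t - 1), t)):
            return flip(flip(a, t - 1), t)
        return flip(a, t)
    if not in_S(flip(a, ell + 1)):  # w(a) odd
        return flip(flip(a, ell), ell + 1)
    return flip(a, ell + 1)

def is_necklace(a):                 # naive O(n^2) membership test
    s = list(a)
    return all(s <= s[i:] + s[:i] for i in range(1, len(s)))

def brgc(n, in_S=is_necklace):      # list S in BRGC order
    start = alpha = tuple([0] * n)
    while True:
        if alpha != start or in_S(start):
            print(''.join(map(str, alpha)))
        alpha = f(alpha, in_S)
        if alpha == start:
            break
\end{verbatim}
For example, \texttt{brgc(4)} prints the six binary necklaces of length four, namely $0000$, $0011$, $1111$, $0111$, $0101$, $0001$, in this order.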
\begin{algorithm}[t]
\footnotesize
\caption{Pseudocode of the implementation of the function $f$.}
\label{alg:n2-f}
\vspace{-0.6em}
\begin{algorithmic} [1]
\Statex
\Function{$f$}{$\alpha$}
\If{$\alpha = 0^{n-1}1$} \ $\alpha \gets \FLIP{\alpha}{n}$
\ElsIf{$w(\alpha)$ is even}
\State $t_\alpha \gets \ell_\alpha$
\While{$t_\alpha>1$ {\bf and} $\FLIP{\alpha}{t_\alpha-1} \in {\bf S}$} \ $t_\alpha \gets t_\alpha - 1$
\EndWhile
\If{$t_\alpha \neq 1$ {\bf and} $\FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha} \in {\bf S}$} \ $\alpha \gets \FLIPTWO{\alpha}{t_\alpha-1}{t_\alpha}$
\Else \ $\alpha \gets \FLIP{\alpha}{t_\alpha}$
\EndIf
\Else
\If{$\FLIP{\alpha}{\ell_\alpha+1} \notin {\bf S}$} \ $\alpha \gets \FLIPTWO{\alpha}{\ell_\alpha}{\ell_\alpha+1}$
\Else \ $\alpha \gets \FLIP{\alpha}{\ell_\alpha+1}$
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\footnotesize
\caption{Algorithm to list out each string of a flip-swap language ${\bf S}$ in BRGC order.}
\label{alg:n2-decode}
\vspace{-0.6em}
\begin{algorithmic} [1]
\Statex
\Procedure{\textit{BRGC}}{}
\State{$\alpha = b_1 b_2 \cdots b_n \gets 0^n$}
\Do
\If{$\alpha \neq 0^n$ {\bf or} $0^n \in {\bf S}$} \ $\textrm{Print$(\alpha)$}$ \EndIf
\State $f(\alpha)$
\State $w(\alpha) \gets 0$
\For{$i$ {\bf from} $n$ {\bf down to} $1$}
\If{$b_i = 1$} \ $w(\alpha) \gets w(\alpha) + 1$ \EndIf
\If{$b_i = 1$} \ $\ell_\alpha \gets i$ \EndIf
\EndFor
\doWhile{$\alpha \neq 0^{n}$}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{comment}
In this section we present algorithms to generate necklaces in binary reflected Gray code order by $\eta$.
Since a membership tester for necklaces can be implemented in $O(n)$ time~\cite{booth}, each necklace can be generated in $O(n^2)$ time per necklace by $\eta$.
However, by optimizing the way to compute the leftmost possible position $t$ of a necklace $\beta = b_1 b_2 \cdots b_n = 0^j 1 b_{j+2} b_{j+3} \cdots b_n $ such that $b_1 b_2 \cdots b_{t-1} \overline{b}_t b_{t+1} \cdots b_n$ is a necklace, we can improve the algorithm to run in $O(n)$ time per necklace.
The membership tester for necklaces scans a string $\beta = b_1 b_2 \cdots b_n$ and maintains a variable $p$ such that $b_1 b_2 \cdots b_p$ is the longest possible prefix of $\beta$ such that it is a necklace.
It updates $p = i$ when $b_i > b_{i-p}$, where $i$ is the position of the current bit under examination.
If there exist a position $i$ such that $b_i < b_{i-p}$, then $\beta$ is not a necklace.
If $i = n$ and $n \text{ mod } p = 0$, then $\beta$ is a necklace.
Otherwise if $i = n$ but $n \text{ mod } p \neq 0$, then $\beta$ is a prenecklace but not a necklace.
To generate each necklace in $O(n)$ time, we optimize the way to compute $t$ by modifying the membership tester for necklaces.
We maintain a variable $t'$ which stores our guess on the value $t$, and another variable $q$ which stores the position of the last seen 1.
We initialize $t' = \lceil\frac{j+1}{2}\rceil$ and $q = j+1$ since complementing any bit $b_i$ with $i < \lceil\frac{j+1}{2}\rceil$ will not create a necklace.
We then perform the membership tester for necklaces on $\gamma = a_1 a_2 \cdots a_n = b_1 b_2 \cdots b_{t'-1} \overline{b}_{t'} b_{t'+1} \cdots b_n$.
If $a_i < a_{i-p}$, then $\gamma$ is not a necklace and we update $t' = t'+1$ and $\gamma$.
We also update $p = q$ since the longest possible prefix in the latest $\gamma$ that is a necklace is $a_1 a_2 \cdots a_q$.
Now if $i=n$ and $n \text{ mod } p = 0$, then $\gamma = b_1 b_2 \cdots b_{t'-1} \overline{b}_{t'} b_{t'+1} \cdots b_n$ is a necklace. Thus, $t = t'$.
Otherwise if $i=n$ and $n \text{ mod } p \neq 0$, then $\gamma$ is a prenecklace but not a necklace.
Thus, $b_1 b_2 \cdots b_{t'} \overline{b}_{t'+1} b_{t'} \cdots b_n$ is a necklace and $t = t' + 1$.
Clearly this modified membership tester for necklaces also runs in $O(n)$ time.
\begin{lemma} \label{thm:n-order-n}
The successor rule $\eta$ can be computed in $O(n)$ time.
\end{lemma}
Pseudocode of the implementation of \textsc{FirstComplement} which computes $t$ in $O(n)$ time is given in Algorithm~\ref{alg:g-k}.
\begin{algorithm}[H]
\caption{Pseudocode of the function \textsc{FirstComplement}.}
\label{alg:g-k}
\small
\begin{algorithmic} [1]
\Function{\textsc{FirstComplement}}{$b_1 b_2 \cdots b_n$}
\State $t' \gets \lceil\frac{j+1}{2}\rceil; q \gets 1; p \gets 1; b_{t'} = 1;$
\State {\bf if }{$b_n = 0$} {\bf then} {\bf return} $n$
\For{$i$ {\bf from} $2$ {\bf to} $n$}
\If{$b_{i-p} > b_i$}
\State $t' \gets t'+1$
\State $b_{t'-1} \gets 0$; $b_{t'} \gets 1$
\State $p = q$
\EndIf
\State {\bf if }{$t' \geq j+1$} {\bf then} {\bf return} $j+1$
\State {\bf if }{$b_{i-p} < b_i$} {\bf then} $p = i$
\State {\bf if }{$b_{i} = 1$} {\bf then} $q = i$
\EndFor
\State $b_{t'} = 0$
\State {\bf if }{$n \text{ mod } p \neq 0$} {\bf then} {\bf return} $t'+1$
\State {\bf return} $t'$
\EndFunction
\end{algorithmic}
\end{algorithm}
\end{comment}
{
\footnotesize
\bibliographystyle{abbrv}
\section{Introduction}\label{S-Intro}
An important aim of current wideband Raman experiments
is to efficiently generate few-cycle pulses
\cite{Harris-S-1998prl,Sokolov-WYYH-2001prl,Hakuta-SKL-2000prl,Sali-MTHM-2004ol}.
If driven strongly enough, the two-photon Raman transition
modulates the incoming field by adding sidebands separated by the transition
frequency. Wideband fields are generated as these sidebands generate
sidebands of their own (and so on); a wide comb of frequency
components separated by the transition frequency is generated in this way.
If a scheme can be implemented that adjusts the phases of
each component appropriately,
then few- or single-cycle optical pulses can be obtained
(see e.g. \cite{Sokolov-WYYH-2001prl}).
In standard theoretical treatments of the Raman process,
the field is split into frequency components
centred on the teeth of the frequency comb.
This approach has the advantage that the components can be
modeled reasonably well with slowly varying envelopes,
but of course it has the disadvantage that one needs to keep track of
a large number of components.
In this paper,
we study an alternative approach in which the field
is treated as a single entity rather than being split into pieces.
Note that this approach is
distinct from methods based on the direct solution of
Maxwell's equations such as
FDTD (finite difference time domain)\cite{Joseph-T-1997itap}
or PSSD (pseudospectral spatial domain)\cite{Tyyrell-KN-2005jmo}.
Our single-field model is based on a second-order wave equation,
and uses a convenient choice of carrier to define a field envelope.
As we will demonstrate,
the latter technique offers significant advantages over
the traditional multi-field formalism.
To provide a context for the discussion,
we consider experiments such as those of
Sali et al. \cite{Sali-MTHM-2004ol,Sali-KNMHTM-2004draft},
where the Raman transition is driven near-resonantly by
a pair of intense pump pulses about 100fs long;
compared to the transition frequency of about 130THz,
the spectra of each pump pulse (and hence the generated sidebands)
are relatively narrow.
This means that a multi-component model is still not unreasonable,
even if numerical considerations might demand that
the arrays used to store these spectra overlap in frequency space.
However, if we were to move to shorter pump pulses,
or to a single (much shorter) pump pulse with enough bandwidth to
efficiently excite the transition,
we would reach the regime where the
teeth of the frequency comb significantly overlap.
At this point,
one would be forced not only to move from a solution based on the
Slowly-Varying Envelope Approximation (SVEA) to a more accurate model
such as the Generalized Few-Cycle Envelope Approximation (GFEA)
\cite{Kinsler-N-2003pra,Kinsler-FCPP},
but the utility of multiple field components would in any case
become questionable.
This provides the motivation for the present work,
since our model can extend the regime in
which the advantages of envelope-based methods can be utilised;
it also turns out to be more versatile,
placing fewer restrictions on the kinds of
Raman media that can be easily described.
In this paper,
we will construct a single-field model which, in most other respects,
closely parallels the approach to wideband Raman
generation adopted by Hickman et al. \cite{Hickman-PB-1986pra}.
A key feature of the single-field model is that
the coupling constants oscillate at the Raman frequency,
and it is this that impresses the sideband modulation on the
propagating field.
Since the field is now not only wide-band but
contains significant sideband components
(i.e. distinct sub-peaks, as opposed to a broad featureless background),
the field envelope is no longer slowly-varying and must therefore
be propagated using the GFEA.
This necessity can be demonstrated by comparing
the results of the single-field model with those of
a multi-field counterpart.
The paper is organized as follows:
section \ref{S-Theory} outlines the derivation of
the single-field Raman theory,
section \ref{S-Multi} shows how to reduce it to a standard multi-field version,
and section \ref{S-Application} applies the theory to
practical situations.
In section \ref{S-Discuss} we discuss some of the issues relating
to our Raman model and its numerical implementation, and
finally section \ref{S-Conclude} contains our
conclusions.
\section{Single-field Raman theory}\label{S-Theory}
We start by considering the wave function
$\psi$ of a single molecule (e.g. H$_2$) and the electric field $E$, and
write the time-dependent wave function by expanding it in terms of the
eigenfunctions in the field-free (i.e. $E=0$) case.
This means we can get the expansion
coefficients by solving an effective Schr\"odinger equation that contains
a two-photon Rabi frequency created by means of an interaction term based on a
field-dependent dipole moment. We assume a dispersionless medium and write all
equations in terms of position $z$ and retarded times $t=t_{lab}-z/c$.
Here we follow the method of Hickman, Paisner, and Bischel
\cite{Hickman-PB-1986pra} (HPB),
but we use only a single $E$ field rather than multiple components.
Note that HPB use {\em Gaussian} units, so there may appear to be
inconsistencies when comparing our formulae (in S.I.) to theirs.
We denote the known molecular eigenfunctions of the
unperturbed Hamiltonian $H_0$ as $\left| n \right>$, and their
corresponding energies $\hbar W_n$. We want to
obtain the solution to
~
\begin{eqnarray}
\left( H_0 + V \right) \psi
&=&
\imath \hbar
\frac{\partial \psi}
{\partial t}
,
\label{eqn-hamltonian-def}
\\
\textrm{with} ~~~ ~~~
V &=&
-d E
,
\label{eqn-perturbation-def}
\\
\psi &=&
\sum_n
c_n e^{-\imath W_n t} \left| n \right>
,
\label{eqn-psi-def}
\end{eqnarray}
~
where $d$ is the electronic dipole moment operator and the $c_n$ are a
set of complex probability amplitudes.
We now replace the electric
field $E$ with a carrier-envelope
description, but, unlike HPB, we use only a single
component
centred at a frequency of $\omega_0$,
rather than a set indexed by an integer $j$.
The envelope and carrier for the field is:
~
\begin{eqnarray}
E &=&
A
e^{
-\imath
\left(
\omega_0 t
-
k_0 z
\right)
}
+ \textrm{c.c.}
,
\label{eqn-single-EfromA}
\end{eqnarray}
and,
following the standard procedure
of
assuming the co-efficients $c_i$
are slowly varying, discarding terms at multiples of the carrier frequency,
and simplifying,
we eventually reach
~
\begin{eqnarray}
\imath \hbar \frac{d c_n}{dt}
&=&
-
\sum_j
c_j
\alpha_{nj}
~~ . 2 \left| A \right|^2
,
\label{eqn-single-A-DcnDt}
\\
\textrm{where} ~~~~
\alpha_{nj}
&=&
\frac{1}{ \hbar}
\exp \left[ -\imath W_{jn} t \right]
\sum_i
d_{ni}
d_{ij}
\frac{W_{ij} }
{W_{ij}^2-\omega_0^2}
.
\label{eqn-single-A-alpha}
\end{eqnarray}
The $\alpha_{nj}$ coupling parameters oscillate because,
in contrast to the HPB derivation, there is no frequency difference between
field components to cancel with the Raman transition frequency.
We now take the indices $1$ and $2$ to correspond to the two states
involved in the Raman transition of interest;
these will be the $0$ and $1$ vibrational (or perhaps rotational) levels
of the electronic ground state.
Indices $3$ and above will correspond to (quoting HPB)
``translational motion on higher electronic states''.
Since we are interested only in the Raman transition,
we specialize the above equations for the coefficients $c_n$,
calculating $c_1$ and $c_2$ only,
and assuming that the $d_{12} = \left< 1 \right| d \left| 2 \right>$
dipole moment is zero.
This means we will only be including transitions between indices $1$
and $2$ that {\em go via one of the higher states} $j \ge 3$, since we still
allow $d_{1j}, ~d_{2j} \neq 0 ~~ \forall j \ge 3$.
Further, we solve for the
coefficients for the higher states in terms of $c_1$ and $c_2$, in
an adiabatic approximation justified when $c_1$ and $c_2$ vary only slowly
compared to the exponential terms.
When converting the equations for $c_1$, $c_2$ into Bloch
equations, we make the same approximations as HPB:
keeping the energy separations
for all transitions greater than that of the $1 \leftrightarrow 2$ transition,
and ignoring all the higher vibrational (or rotational)
states.
Thus we can write
~
\begin{eqnarray}
\alpha_{12}^* - \alpha_{21}
&\approx&
0,
\\
\alpha_{12} + \alpha_{21}^*
&=&
2 \hbar f' e^{-\imath \omega_b t + \imath \delta'}
.
\label{eqn-alpha-bar}
\end{eqnarray}
Here $\omega_b$ is the Raman transition frequency,
and $\delta'$ is a phase factor that ensures
that the coupling constant $f'$ is real valued.
This $f'$ will be used to
replace $\alpha_{12}+\alpha_{21}^*$.
We also get a Stark shift term --
~
\begin{eqnarray}
\hbar g'
&=&
\alpha_{11}' - \alpha_{22}'^*
.
\label{eqn-alpha-stark}
\end{eqnarray}
We define $\rho_{12}=c_1 c_2^*$ and
$w=c_2^* c_2 - c_1^* c_1$, so that
~
\begin{eqnarray}
\frac{d \rho_{12}}{dt}
&=&
\imath
\frac{\left( \alpha_{11} - \alpha_{22}^* \right) }
{\hbar}
2 \left| A \right|^2
\rho_{12}
+ \imath
\frac{\alpha_{12} }{ \hbar}
2 \left| A \right|^2
w
,
\label{eqn-basic2pbloch-rho}
\\
\frac{dw}{dt}
&=&
+ \imath
\frac{2 \alpha_{12}^* }
{ \hbar}
2 \left| A \right|^2
\rho_{12}
-
\imath
\frac{ 2 \alpha_{12} }
{ \hbar}
2 \left| A \right|^2
\rho_{12}^*
.
\label{eqn-basic2pbloch-w}
\end{eqnarray}
Finally, we insert decay terms $\gamma_i$,
and introduce $\omega_b'=\omega_b-\Delta$.
This $\Delta$ allows for arbitrary rotations of the polarization,
$\rho_{12}
=
\rho_{12}' \exp \left( -\imath \Delta t -\imath \delta' \right)
$.
Eqns. (\ref{eqn-basic2pbloch-rho},\ref{eqn-basic2pbloch-w})
governing the response of the medium to the applied fields now become
~
\begin{eqnarray}
\partial_t \rho_{12}'
&=&
\left(
-\gamma_2
+ \imath \Delta
\right)
\rho_{12}'
\nonumber
\\
& &
~~~~ ~~~~
+
\imath g' 2 A^* A
\rho_{12}'
~
+
\imath f'
2 A^* A w
e^{ \imath \omega_b' t }
,
\label{eqn-rbpostRWA-last-rho}
\\
\partial_t w
&=&
- \gamma_1 \left( w - w_i \right)
\nonumber
\\
& &
~~~~ ~~~~
+
2 \imath f'
.
2 A^* A
\left(
\rho_{12}'
e^{ \imath \omega_b' t }
-
\rho_{12}'^*
e^{-\imath \omega_b' t }
\right)
.
\label{eqn-rbpostRWA-last-w}
\end{eqnarray}
The parameter $\Delta$ should be chosen to optimise computational accuracy by
making the dynamics as slowly-varying as possible.
For example, if the field contained two frequency components that
were slightly detuned from the Raman frequency,
we might use $\Delta$ to compensate for the resultant beating.
In general,
$\Delta$ is most useful in the multi-field model discussed in the next section.
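
To illustrate how eqns.(\ref{eqn-rbpostRWA-last-rho},\ref{eqn-rbpostRWA-last-w}) might be stepped forward in time, a minimal forward-Euler update is sketched below; the variable names are our own, and in practice a higher-order integrator would normally be preferred.
\begin{verbatim}
import numpy as np

# Forward-Euler update of the Bloch equations above; a minimal
# sketch with illustrative names (fp = f', gp = g', wbp = w_b',
# g1, g2 = gamma_1, gamma_2, wi = w_i).
def bloch_step(rho12, w, A, t, dt, wbp, fp, gp, g1, g2, Delta, wi):
    I2 = 2.0 * abs(A)**2                 # 2 A^* A
    ph = np.exp(1j * wbp * t)            # exp(+i w_b' t)
    drho = (-g2 + 1j*Delta + 1j*gp*I2) * rho12 + 1j*fp*I2*w*ph
    dw = -g1*(w - wi) + 2j*fp*I2*(rho12*ph - np.conj(rho12*ph))
    return rho12 + dt*drho, w + dt*dw
\end{verbatim}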
The complementary part, which specifies how the field responds to the
polarization of the Raman transition, is
~
\begin{eqnarray}
\partial_z A(t)
&=&
\frac{2 \imath \pi \omega_0}{c_0 n_0}
\times
\left[
1 + \frac{\imath \partial_t }{\omega_0}
\right]
\frac{ \mathscr{B}(t) }{4 \pi \epsilon_0}
\\
&=&
\imath
\frac{2 \sigma \bar{\alpha}_{12} \omega_0}{c_0 n_0 \epsilon_0 }
\times
\left[
1 + \frac{\imath \partial_t }{\omega_0}
\right]
A(t) X(t)
,
\label{eqn-single-Apropagate}
\\
X(t)
&=&
\rho_{12}'
e^{ \imath \omega_b' t }
+
\rho_{12}'^*
e^{-\imath \omega_b' t }
\label{eqn-single-X}
.
\end{eqnarray}
Here the $1 + \imath \partial_t/\omega_0$ in eqn.(\ref{eqn-single-Apropagate})
is (with $\partial_t \equiv d/dt$) the lowest-order approximation to the
GFEA few-cycle propagation corrections
\cite{Kinsler-FCPP,Kinsler-N-2003pra},
which is equivalent to the SEWA (Slowly Evolving Wave Approximation)
correction derived by Brabec and Krausz \cite{Brabec-K-1997prl}.
Although the full form is not included for reasons of brevity,
it could easily be introduced if the extra accuracy was desired;
indeed we routinely use it in our simulation codes.
It is
independent of the Raman derivation presented here, since
it is a field propagation effect. The full form of the few-cycle
prefactor (and various expansions thereof) has already been reported in
\cite{Kinsler-N-2003pra,Kinsler-FCPP}.
A detailed derivation of this single-field Raman theory can be found in
\cite{Kinsler-2006arXiv-sfwbr}.
We solve these equations numerically using a split step method,
where we treat the nonlinearity in the time domain,
and the dispersion in the frequency domain.
To include dispersion in a time domain equation like
eqn.(\ref{eqn-single-Apropagate}) requires either additional time derivatives
(as in \cite{Kinsler-FCPP,Kinsler-N-2003pra})
or a convolution over a time-response function, which is an $O(N^2)$ operation.
However,
handling dispersion in the frequency domain is both conceptually simpler
(since it simply amounts to a frequency-dependent phase evolution),
and more computationally efficient because it is an $O(N \log N)$ process.
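
A minimal sketch of one such split step is given below; the variable names, the grid conventions, and the first-order treatment of the nonlinear substep are illustrative simplifications rather than a description of our production code.
\begin{verbatim}
import numpy as np

# One split step: Raman term in the time domain, dispersion as a
# spectral phase. Illustrative names and simplifications:
#   w    : angular-frequency offsets from the carrier, e.g.
#          w = 2*np.pi*np.fft.fftfreq(len(t), t[1]-t[0])
#   k_w  : dispersion phase per unit length on that grid
#   kappa: the coupling prefactor of the field equation
def split_step(A, dz, t, rho12, wbp, kappa, w, w0, k_w):
    X = rho12*np.exp(1j*wbp*t) + np.conj(rho12)*np.exp(-1j*wbp*t)
    S = 1j * kappa * A * X               # polarization source term
    # lowest-order GFEA prefactor (1 + i d_t / w0); with our
    # envelope convention i d_t maps to the spectral offset w
    # (the sign is convention-dependent)
    S = np.fft.ifft(np.fft.fft(S) * (1.0 + w / w0))
    A = A + dz * S                       # nonlinear substep
    # linear substep: frequency-dependent phase, O(N log N)
    return np.fft.ifft(np.fft.fft(A) * np.exp(1j * k_w * dz))
\end{verbatim}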
The validity of the approximations used in deriving our Bloch equations
will obviously depend both
on the details of the chosen Raman medium and/or transition, and on the
number of Stokes and anti-Stokes sidebands we wish to describe. Since in the
experiments of \cite{Harris-S-1998prl,Sokolov-WYYH-2001prl,Hakuta-SKL-2000prl,Sali-MTHM-2004ol,Gundry-AASTKNM-2005ol} the emphasis was on a single
Raman transition, a simple Bloch model is clearly appropriate, and indeed
our approximations differ little from those of other theoretical
approaches (such as that of HPB).
\section{Multi-field Raman Theory}\label{S-Multi}
The single-field Raman model can be converted into a traditional
multi-field model as developed in e.g. HPB \cite{Hickman-PB-1986pra} or Syed,
McDonald and New
\cite{Syed-MN-2000josab} by replacing the field envelope with a sum of
multiple envelopes using carrier exponentials spaced at the Raman frequency.
When doing this, we will only get the correct multi-field form if few-cycle
(either SEWA or GFEA) corrections to the field evolution part of
the theory are applied to the effective polarization caused by the
Raman transition.
Since the single-field evolution equation (eqn.(\ref{eqn-single-Apropagate}))
uses an envelope $A$ that is based on a carrier
(see eqn.(\ref{eqn-single-EfromA})),
the single-field envelope $A$ is replaced with $A_j$'s
at frequency $\omega_j = \omega_0 + j \omega_b$
and wavevector $k_j = k(\omega_j)$.
The single-field envelope in terms of the new $A_j$'s is
~
\begin{eqnarray}
A
&=&
\sum_j
A_j
\exp
\left[
-\imath
\left(
\omega'_j t
-
k'_j z
\right)
\right]
,
\label{eqn-multienvelope}
\end{eqnarray}
where $\omega'_j = \omega_j - \omega_0$, and $k'_j = k_j - k(\omega_0)
= k_j - k_0$.
The equations
for $ \rho_{12}'$ and $w$ describing the Raman transition
result from a simple substitution
of eqn.(\ref{eqn-multienvelope})
into eqns.(\ref{eqn-rbpostRWA-last-rho}, \ref{eqn-rbpostRWA-last-w}),
followed by a rotating wave approximation (RWA)
to remove non-frequency-matched terms.
They are
~
\begin{eqnarray}
\partial_t \rho_{12}'
&\approx&
\left(
-\gamma_2 + \imath \Delta + \imath g' \sum_j 2 A_j^* A_j
\right)
\rho_{12}'
\nonumber
\\
&& ~~~~ ~~~~
+
2 \imath f' \sum_j 2 A_{j} A_{j-1}^*
. w
. e^{-\imath \Delta t }
. e^{+\imath \left( k_j-k_{j-1} \right) z }
,
\label{eqn-multi-dr}
\\
\partial_t w
&=&
- \gamma_1 \left( w - w_i \right)
\nonumber
\\
&&
+
2 \imath f'
\sum_j
\left(
2 A_j^*A_{j+1}
\rho_{12}'
e^{ \imath \omega_b' t }
-
2 A_j A_{j+1}^*
\rho_{12}'^*
e^{-\imath \omega_b' t }
\right)
.
~~~~ ~~~~
\label{eqn-multi-dw}
\end{eqnarray}
Quite a lot of physics has been removed by the RWA,
although it is a very reasonable approximation except in the very wideband limit.
For example, the effects of next-nearest neighbour field components
have been ignored,
as have all more distant field-field interactions.
In the next-nearest neighbour case, the dropped terms would impose a
rapid $\omega_b$ oscillation onto the polarization $\rho_{12}$,
which would in turn tend to impose sidebands at $\pm \omega_b$ onto
each field component.
It is reasonable to ignore such sidebands
in the narrowband limit used for most applications of a multi-field Raman
theory;
but, in principle one might extend a multi-field theory to include them
by inventing a scheme to apply the sidebands to the field component
with which they are in nearest resonance.
Extra factors of $2$ have appeared in eqns.(\ref{eqn-multi-dr},
\ref{eqn-multi-dw})
because the multi-field equations start with double summations that give
pairs of terms that can be reduced to one in the remaining single summation.
Finally, we need to insert the few-cycle correction
to the polarization term, because the ($j\ne 0$) sub-envelopes $A_j$
have an $\imath j \omega_b t$ time dependence that cannot be neglected.
The polarization correction terms are just the result of
applying the first-order correction $(\imath/\omega_0)\partial_t$ to
the $A(t)X(t)$ from eqn.(\ref{eqn-single-Apropagate}).
The $j$-th polarization correction term is then
~
\begin{eqnarray}
&&
\imath
\frac{ \sigma \omega_j' \alpha_{12}}
{2 \epsilon_0 c_0}
\left\{
\rho_{12}'
A_{j+1}
\exp\left[ +\imath (k'_{j+1} -k'_j) z - \imath \Delta t \right]
\right.
\nonumber
\\
&& ~~~~ ~~~~ ~~~~ ~~~~
\left.
+
\rho_{12}'^*
A_{j-1}
\exp\left[ +\imath (k'_{j-1} -k'_j) z + \imath \Delta t \right]
\right\}
\nonumber
\\
&& ~~~~ ~~~~ ~~~~ ~~~~
-
\imath \left( k_j - k_0 \right) A_j
,
\end{eqnarray}
and differs only from the standard polarization term in that $\omega_j'$
appears in place of $\omega_0$. The two terms can then
be straightforwardly summed, and since $\omega_j=\omega_0+\omega_j'$,
from
eqns.(\ref{eqn-single-Apropagate}, \ref{eqn-single-X},
\ref{eqn-multienvelope}), we get
~
\begin{eqnarray}
\partial_z A_j(t)
&=&
\imath
\frac{ \sigma \omega_j \alpha_{12}}
{2 \epsilon_0 c_0}
\left\{
\rho_{12}'
A_{j+1}
\exp\left[ +\imath (k'_{j+1} -k'_j) z - \imath \Delta t \right]
\right.
\nonumber
\\
&& ~~~~ ~~~~ ~~~~ ~~~~
\left.
+
\rho_{12}'^*
A_{j-1}
\exp\left[ +\imath (k'_{j-1} -k'_j) z + \imath \Delta t \right]
\right\}
\nonumber
\\
&& ~~~~ ~~~~ ~~~~ ~~~~
-
\imath \left( k_j - k_0 \right) A_j
,
\label{eqn-multi-Ajpropagate}
\end{eqnarray}
where the $\imath \Delta t$ terms arise because of our
rotation of the frame of reference of $\rho_{12}'$. The residual
$k_j - k_0$ terms result from a difference in the $k$ frame
of reference between our multi-field derivation and the standard one.
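
The nearest-neighbour structure of eqn.(\ref{eqn-multi-Ajpropagate}) is perhaps easiest to see in code. The sketch below evaluates the right-hand side for all components, with \texttt{pref} standing for the common prefactor $\sigma\,\alpha_{12}/(2\,\epsilon_0 c_0)$; the names are illustrative, and the missing neighbours of the outermost components are simply treated as zero.
\begin{verbatim}
import numpy as np

# Right-hand side of the multi-field propagation equation; A is a
# list of envelopes A_j, wj the frequencies omega_j, kp the shifted
# wavevectors k'_j, k the wavevectors k_j. Illustrative names;
# out-of-range neighbours are treated as zero.
def dA_dz(A, z, t, rho12, Delta, wj, kp, k, k0, pref):
    n, out = len(A), []
    for j in range(n):
        cpl = 0.0
        if j + 1 < n:
            cpl += rho12 * A[j+1] * np.exp(
                1j*(kp[j+1] - kp[j])*z - 1j*Delta*t)
        if j - 1 >= 0:
            cpl += np.conj(rho12) * A[j-1] * np.exp(
                1j*(kp[j-1] - kp[j])*z + 1j*Delta*t)
        out.append(1j*pref*wj[j]*cpl - 1j*(k[j] - k0)*A[j])
    return out
\end{verbatim}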
\section{Example Applications}\label{S-Application}
We now use the single-field (GFEA) model to simulate an experimental
situation.
First we compare the results to their multi-field counterparts,
demonstrating the relationships between the two methods,
and showing them to be in good agreement, as expected for the chosen
pulse lengths.
Second, we contrast our model with an (inaccurate) single-field SVEA model,
in order to highlight the role of the few-cycle propagation terms.
The bulk of the code used was the same for all simulations,
as it contains options to switch from a single to a multi-field case,
and to switch GFEA corrections on and off.
Figure \ref{F-transient-pump} shows a set of results for a pair of
pump pulses traveling through 9cm of H$_2$.
This corresponds to a simulation of an experiment where
the pulses pump the 1st vib(ro) level in molecular H$_2$ (at 4155cm$^{-1}$,
i.e. $\sim 126$THz),
as in the transient-regime experiments of
Sali et al. \cite{Sali-MTHM-2004ol,Sali-KNMHTM-2004draft}.
In these experiments, typical pulses might be 70fs and 250fs wide
at 800nm (30$\mu$J) and 600nm (120$\mu$J) respectively,
and the comb of Raman sidebands generated
are narrow and well separated.
A Cauchy-type dispersion curve for H$_2$ is incorporated into the simulations.
In our simulations,
we use the smaller widths of 17.5fs and 62.5fs, which broadens
the spectral peaks (to about 57THz and 16THz respectively)
and makes the standard multi-field approach less practical.
The figure compares three data sets --
(a) single-field GFEA simulation,
(b) multi-field simulation,
and lastly (c) single-field SVEA simulation (i.e. {\em
without} any few-cycle propagation corrections).
There is good agreement in the heights of all the spectral peaks between the
two exact simulations (single-field GFEA fig. \ref{F-transient-pump}(a) and
multi-field fig \ref{F-transient-pump}(b) ); even the
details in the wings of the first anti-Stokes peak (at about $f=0.5$)
are replicated. Those in the wings of the second
anti-Stokes peak (at about $f=0.63$) are not well replicated;
however, the features in question are about three orders of magnitude
weaker than the peaks, and the two simulations are not
equivalent because the multi-field theory does not include
next-nearest neighbour interactions.
The comparison between fig. \ref{F-transient-pump}(a,b) and the
single-field SVEA simulation fig \ref{F-transient-pump}(c) is also
instructive. Although it does reproduce the character of the single-field
GFEA spectra in many ways, the peak heights do not agree -- a fact
that is more apparent on a linear scale than a logarithmic one.
In terms of a multi-field model, we can say that without the GFEA
corrections, the prefactor of the polarization term does not pick up
its correct frequency dependence,
so the Stokes lines are artificially enhanced,
and the anti-Stokes artificially suppressed.
\begin{figure}
\includegraphics[width=49mm,angle=-90]{f01a-tr-sf-GFEA.ps}\\
\includegraphics[width=49mm,angle=-90]{f01b-tr-mf-GFEA.ps}\\
\includegraphics[width=49mm,angle=-90]{f01c-tr-sf-SVEA.ps}
\caption{
Transient Raman generation using 17.5fs and 62.5fs pump pulses as described
in the text. Here we compare three simulation results:
(a) single-field GFEA simulation,
(b) multi-field simulation, and
(c) single-field SVEA simulation.
The dashed lines help compare the relative heights of the first
Stokes and anti-Stokes peaks.
The vertical scale is in arbitrary units.
}
\label{F-transient-pump}
\end{figure}
Figure \ref{F-adiabatic-probe} shows a set of results from a single
10fs probe pulse at 397nm,
traveling through 9cm of previously polarized D$_2$.
This corresponds to the probe stage of an experiment where
the gas had been prepared using a pair of nanosecond
fields resulting in a medium polarization of $\rho_{12}=0.025\imath$ on the
2993.57cm$^{-1}$ ($\sim$ 90THz) vibrational transition,
e.g. as in the
experiments of Gundry et al. \cite{Gundry-AASTKNM-2005ol},
who use a longer
probe pulse of about 150fs.
A Cauchy-type dispersion curve is incorporated into the
simulations, but in the absence of
good dispersion data for D$_2$, we use that for H$_2$
as it should be a good match.
Note that although the polarization initial condition is fixed,
our simulations do incorporate the response of the polarization to
the probe pulses.
The main spectral peaks agree well
in the multi-field and single-field GFEA simulations,
although as before the results differ at the edges
where the intensities are very small compared to the main features.
As for the previous situation,
in the single-field SVEA simulation the Stokes and anti-Stokes lines are
artificially enhanced or suppressed.
\begin{figure}
\includegraphics[width=49mm,angle=-90]{f02a-ad-sf-GFEA.ps}\\
\includegraphics[width=49mm,angle=-90]{f02b-ad-mf-GFEA.ps}\\
\includegraphics[width=49mm,angle=-90]{f02c-ad-sf-SVEA.ps}
\caption{
10fs probe pulse incident on a medium with an initial
polarization of $\rho_{12}=0.025\imath$.
Here we compare three simulation results:
(a) single-field GFEA simulation,
(b) multi-field simulation, and
(c) single-field SVEA simulation.
The dashed lines help compare the relative heights of the first
Stokes and anti-Stokes peaks.
The vertical scale is in arbitrary units.
}
\label{F-adiabatic-probe}
\end{figure}
\section{Discussion}\label{S-Discuss}
For simple systems,
those (for example) with a single Raman transition driven by
relatively long pulses,
it will usually be most efficient to continue using a multi-field model.
Single-field simulations require very fine time-resolution,
so they are computationally expensive for
pulses with many optical cycles.
The spectral range of the numerical field is correspondingly broad,
typically covering many Stokes and anti-Stokes lines.
In more complex situations, however,
the single-field approach will outperform its multi-field counterpart.
For example,
if a Raman interaction is probed by a beam that does not lie on the frequency
comb defined by the pumping beams (e.g. as in \cite{Gundry-AASTKNM-2005ol}),
the multi-field approach will become much more complicated to implement.
It will be necessary to define separate arrays for the
pump and probe Raman ``ladders'' of Stokes and anti-Stokes lines,
an issue that we avoided in section \ref{S-Application}
by replacing the pump stage of the process with an
initial condition for the polarization.
With a single-field model,
the probe pulse and its Raman sidebands simply get superimposed on the
overall spectrum, where they will be offset from the frequency ladder defined
by the pump beams.
Another situation in which the multi-field model will run into difficulty is
where there are multiple Raman resonances.
Although the treatment in this paper has been restricted to a simple two-level
Bloch equation description of the Raman medium,
additional Bloch equations can easily be added,
even if there are coupled multi-level interactions
(as for example in \cite{Wallis-1995pra}).
It is only necessary to describe those transitions appropriately,
and to modify the polarization terms acting on the propagating field.
This procedure is considerably more difficult to handle in the
multi-field case,
which is based on field components separated by a
particular Raman transition frequency.
Additional Raman resonances complicate the theory;
not only must extra detuning factors be added to the equations,
but it is also necessary to work out which field component is nearest to
each new driving term.
With a wideband single-field model, on the other hand,
any new sidebands or resonance effects appear automatically
in the spectrum, and no special measures need to be adopted
to handle them.
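
In code, this amounts to letting each transition carry its own Bloch variables and coupling constants, and summing their contributions into the driving term $X(t)$ of eqn.(\ref{eqn-single-X}); the record layout in the sketch below is an illustrative assumption.
\begin{verbatim}
import numpy as np

# Sketch: several Raman transitions contributing to the driving
# term X(t). Each record holds that transition's rho12, w_b' and
# an overall coupling weight (illustrative layout).
def total_X(transitions, t):
    X = 0.0
    for tr in transitions:
        ph = np.exp(1j * tr['wbp'] * t)
        X = X + tr['weight'] * (tr['rho12']*ph
                                + np.conj(tr['rho12']*ph))
    return X          # multiplies A(t) in the propagation step
\end{verbatim}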
The usefulness of our single-field approach is not restricted to the
Raman interaction described in this paper.
It is not only more easily extended than the standard multi-field model
to more complex Raman media involving, e.g., multiple transitions.
It would be equally valuable for a near-degenerate optical parametric
oscillator, or indeed any system where two or more field
components start to overlap as the pump or probe
pulses get shorter.
\section{Conclusion}\label{S-Conclude}
We have considered how best to model the multi-frequency field
in wideband Raman generation experiments.
Rather than using multiple field envelopes, with one at each
Stokes or anti-Stokes frequency, we instead
use a single wideband field envelope.
This requires that the field be propagated
taking into account wideband effects,
as described by either the SEWA theory
of Brabec and Krausz \cite{Brabec-K-1997prl},
or the more general GFEA of Kinsler and New \cite{Kinsler-N-2003pra}.
Our single-field approach has three crucial advantages.
First, it includes more physics,
even compared to a multi-field approach enhanced by adding GFEA corrections
to the propagation of the field components.
Secondly, it deals effortlessly with
the complications of overlapping spectra that occur in the multi-field case.
Thirdly, it allows for extra Raman transitions, and other molecular
details to be included more easily than is possible for the multi-field
model.
All of these factors ensure that our wideband single-field model not only
extends the regime in which envelope-based methods can be utilised;
but is also more versatile
and places fewer restrictions on the kinds of
Raman media that can be easily described.
\section{Introduction}
\label{s:introd}
\label{sec:intro}
The study of the gravitational redshift, predicted for solar radiation by
\citet{Ein08}, is still an important subject in modern physics and
astrophysics \citep[e.g.,][]{Kol04,Neg05,Lae09,Choetal,Pasetal,Tur13}.
The displacement of metallic lines to the violet observed in the laboratory
in comparison with the corresponding solar lines had first been noted by
\citet{Row96} and \citet{Jew96} \citep[cf.][]{Hen93a}.
Measurements of the small gravitational redshift of solar
spectral lines are inherently difficult, because many processes in the
atmosphere of the Sun can influence the spectrum. In particular, the high
speeds of the emitting plasmas lead to line shifts due to the classical Doppler
effect \citep[cf.][]{Hen93b}.
Nevertheless, early observations confirmed Einstein's prediction in general
\citep{StJ28,BlaRod,Bra62,Bra63,Sni70} \citep[cf.][]{Hen96}.
Improved observational techniques \citep[e.g.,][]{LoPetal,Cacetal,TakUen},
have established a shift of solar lines of
\begin{eqnarray}
c_0\,\frac{\Del \lambda}{\lambda} \approx 600 {\ \mathrm m} {\ \mathrm s} ^{-1} ~,
\label{shift_600}
\end{eqnarray}
where $c_0 = 299\,792\,458 {\ \mathrm m} {\ \mathrm s} ^{-1}$ is the speed of light in vacuum
remote from any masses and $\lambda$ the wavelength of the electromagnetic
radiation.
The gravitational potential~$U$ at a distance~$r$ from a spherical body with
mass~$M$ is constrained in the weak-field approximation for non-relativistic
cases \citep[cf.][]{LanLif} by
\begin{eqnarray}
- 1 \ll \frac{U}{c^2_0} = - \frac{G_{\rm N}\,M}{c^2_0~r} \le 0 ~,
\label{potential}
\end{eqnarray}
where $G_{\rm N} = 6.67554(16) \times 10^{-11} {\ \mathrm {kg}} ^{-1} {\ \mathrm m} ^3 {\ \mathrm s} ^{-2}$
is Newton's constant of gravity \citep{Quietal}.
A definition of a reference potential in line with
Eq.~(\ref{potential}) is $U_0 = 0$ for $r = \infty$.
In an attempt to describe the physical process(es) that lead to the
gravitational redshift,
\citet{Woletal} and \citet{Mueetal} disagreed on whether the frequency
of an atomic clock is sensitive to the
gravitational potential~$U$ (according to Wolf et al.) or, as suggested by
M\"uller et al., to the local gravity field $\vec{g} = {\bf \nabla } U$.
Support for the first alternative can be found in many publications
\citep[e.g.,][]{Ein08,Lau20,Sch60,Wil74,Okuetal,SinSam}, but it is, indeed,
not obvious how an atom can locally sense the gravitational potential~$U$.
Experiments on Earth~\citep{PouReb,Craetal,KraLue,PouSni},
in space~\citep{Vesetal,BauWey} and in the Sun-Earth
system~\citep{StJ28,BlaRod,Bra62,Bra63,Sni72,LoP91,Cacetal,TakUen} have,
however, quantitatively confirmed in the static weak field approximation a
relative frequency
shift of
\begin{eqnarray}
\frac{\nu - \nu_0}{\nu_0} = \frac{\Del \nu}{\nu_0}
\approx \frac{\Del U}{c^2_0} = \frac{U - U_0}{c^2_0}~,
\label{shift}
\end{eqnarray}
where $\nu_0 = c_0/\lambda_0$ is the frequency of the radiation emitted by a
certain transition at~$U_0$ and $\nu$ the observed frequency there, if the
emission caused by the same transition had occurred at a potential~$U$.
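
As a quick numerical check of Eq.~(\ref{shift}) against Eq.~(\ref{shift_600}), the velocity equivalent of the shift for light emitted at the solar photosphere and received remote from the Sun can be evaluated with standard values for the solar mass and radius (our illustrative inputs); the result of about $636 {\ \mathrm m} {\ \mathrm s} ^{-1}$ is consistent with the magnitude of the observed shifts quoted above.
\begin{verbatim}
# Velocity equivalent of Eq. (shift) for emission at the solar
# photosphere, received remote from the Sun (U_0 = 0); standard
# input values, chosen here for illustration.
c0 = 299792458.0     # m/s
GM = 1.327e20        # m^3 s^-2, G_N times the solar mass
R  = 6.957e8         # m, solar radius
print(abs(-GM / R) / c0)   # -> about 636 m/s
\end{verbatim}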
In addition to the redshift, the deflection of light near gravitational
centres is of fundamental importance. For a close solar fly-by
\citet{Sol04} and
\citet{Ein11} obtained $0.87\hbox{$^{\prime\prime}$}$ under the assumption
that radiation would be affected in the same way as matter.
\emph{Twice} this value was then derived in the framework of the General
Theory of Relativity \citep[GTR,][]{Ein16}\footnote{It is of interest in
the context of this paper that Einstein employed Huygens' Principle in his
calculation of the deflection.},
and later by \citet{Sch60} using the equivalence principle and STR.
The high value was confirmed
during the total solar eclipse in 1919 for the first time \citep[]{Dysetal}.
This and later observations have been summarized by \citet{Mik59} and
combined to a mean value of
approximately 2\hbox{$^{\prime\prime}$}.
\section{Graviton interactions}
\label{sec:graviton}
A model of gravitational interactions based on a modified impact concept
has been proposed for massive bodies \citep[][Paper~1]{WiWiDw},
and the difficulties of the old theory proposed by
Nicolas \citet{Fat90} \citep[cf.][]{Bop29,Gag49} have been
considered in the light of the Special Theory of Relativity
\citep[STR,][]{Ein05a} and the non-local behaviour of virtual
particles \citep[cf.][]{NimSta}. The basic idea is that impacting
\emph{gravitons}\,---\,originally named \emph{quadrupoles}\,---\,with no mass
and a speed $c_0$ are absorbed by massive particles and re-emitted with
reduced energy~$T^-_{\rm G}$ according to
\begin{eqnarray}
T^-_{\rm G} = T_{\rm G}\,(1 - Y) ~,
\label{Eq:reduced_energy}
\end{eqnarray}
where $T_{\rm G}$ is the energy of a graviton in the background flux and
$0 < Y \ll 1$.
A spherically symmetric emission of a liberated graviton with a reduction
parameter~$Y$ had been assumed in Paper~1. Further studies
have, however, indicated that an anti-parallel emission with respect to the
incoming graviton is more appropriate, because conflicts with the energy and
momentum conservation principles in closed systems can be avoided by the
second choice.
Newton's law of gravitation could be explained with this model,
however, a secular mass increase of matter was a consequence
of its application. This poses the question of how the interaction of gravity
with photons can be understood, since the photon mass is in all likelihood
zero.\footnote{A zero mass of photons follows from the STR and a speed of light
in vacuum~$c_0$ constant for all frequencies. \citet{Ein05b} used
,,Lichtquant'' for a quantum of electromagnetic radiation;
the term ``photon'' was introduced by \citet{Lew26}.
With various methods the photon mass could be constrained to
$m_\nu < 10^{-49}~ {\ \mathrm {kg}} $ \citep{GolNie,Amsetal}.}
If the mass of a photon is indeed zero, the interaction process must be
different.
An initial attempt at solving that problem has been made in Paper~2
\citep{WilDwi13} and is summarized here under the assumption of an
anti-parallel re-emission, both for massive particles and photons.
A physical process will then be outlined that provides information on
the gravitational potential~$U$ at the site of a photon emission. This aspect
had not been covered in our earlier paper on the gravitational redshift
\citep{WilDwi14}.
Interactions between massive bodies have been treated in Paper~1 with an
absorption rate of \emph{half} the intrinsic de Broglie frequency~$m\,c^2_0/h$
for a mass~$m$ \citep[cf.][]{Bro23}, because \emph{two} virtual gravitons have
to be emitted for one interaction, whereas in Paper~2 it is assumed that a
photon causes a reflection with an interaction rate of $\nu = E_\nu/h$ with
Planck's constant~$h$. The momentum transfer to a photon will thus be twice
as high as to a massive body with a mass equivalent to $E_\nu/c^2_0$.
If we apply the momentum conservation principle to photon-graviton pairs in
the same way as to photons \citep[cf.][]{LanLif}, we can write after a
reflection of $\vec {p}_{\rm G}$
\begin{eqnarray}
\vec {p}_\nu + 2\,\vec {p}_{\rm G} =
\vec {p}^*_\nu
\label{Eq:momentum}
\end{eqnarray}
with $|{\vec p}_{\rm G}| = p_{\rm G} = T_{\rm G}/c_0$.
We assume, applying Eq.~(\ref{Eq:momentum}) with
$p_{\rm G} \ll p_\nu = |\vec {p}_\nu|$, that
under the influence of a gravitational centre relevant
interactions occur on opposite sides of a photon with $p_{\rm G}$ and
$p_{\rm G}\,(1 - Y)$ transferring a net momentum of $2\,Y\,p_{\rm G}$. Note,
in this context, that the Doppler effect can only operate for interactions of
photons with massive bodies \citep[cf.][]{Fer32,Som78}.
Consequently, there will be no energy change of the photon, because both
gravitons are reflected with constant energies under these
conditions, and we can write for a pair of interactions
\begin{eqnarray}
E_\nu = |\vec p_\nu|\,c = |\vec p_\nu + 2\,Y\vec p_{\rm G}|\,c' =
|\vec p'_\nu|\,c' = E'_\nu ~,
\label{Eq:photon_energy}
\end{eqnarray}
where $\vec p'_\nu$ is the photon momentum after the events. If $\vec p_\nu$
and a component of $2\,Y\vec p_{\rm G}$ are pointing in the same direction,
it is $c' < c$, the speed is reduced; an antiparallel direction leads to
$c' > c$. Note that this could, however, not result in $c' > c_0$, because
$c = c_0$ can only be attained in a region with
an isotropic distribution of gravitons with a momentum of $p_{\rm G}$,
i.e. with a gravitational potential~$U_0 = 0$.
The momentum $\vec p_\nu$ of a photon radially approaching a gravitational
centre will be treated in line with Eq.~(6) in
Sect.~2 of Paper~2 for massive bodies, however, with twice the interaction
rate (valid for photons as explained above).
Since we know from observations that the deflection
of light during a close fly-by at the Sun is very small\,--\,to simplify the
calculations, we only treat this configuration\,--\,the
momentum variation caused by the weak and static
gravitational interaction is also very small.
The momentum change rate of the photon can then be approximated by
\begin{eqnarray}
\frac{\Del {\vec p}_\nu}{\Del t_M} \approx
2\,G_{\rm N}\,M\,\frac{\uvec r}{r^2}\,\frac{p_\nu}{c_0} ~,
\label{Eq:transfer_rate}
\end{eqnarray}
where $M$ is the mass of the gravitational centre, $r = |\vec r|$ the
distance of the photon from the centre, and the position vector of
the photon is $r\,\uvec{r}$ with a unit vector $\uvec{r}$.
The small deflection angle also allows us to approximate
the actual path by a straight line and use $x \approx c_0\,t_M$
along an $x$~axis.
The normalized momentum variation along the trajectory then is
\begin{eqnarray}
-\frac{c_0}{p_\nu}\left(\frac{\Del \vec{p}_\nu}{\Del t_M}\right)_x =
\frac{c_0}{p_\nu}\,\frac{\Del p_\nu}{\Del t_M}\cos \vartheta
\approx 2\,G_{\rm N}\,M\,\frac{x}{r^3}~.
\label{Eq:x_component}
\end{eqnarray}
The corresponding component perpendicular to the trajectory is
\begin{eqnarray}
-\frac{c_0}{p_\nu}\left(\frac{\Del \vec{p}_\nu}{\Del t_M}\right)_y =
\frac{c_0}{p_\nu}\,\frac{\Del p_\nu}{\Del t_M}\,\sin \vartheta
\approx 2\,G_{\rm N}\,M\frac{R}{r^3}~,
\label{Eq:y_component}
\end{eqnarray}
where $R$ is the impact parameter of the trajectory.
Integration of Eq.~(\ref{Eq:x_component}) over $t_M$ from $-\infty$ to $x/c_0$
yields
\begin{eqnarray}
\frac{1}{p_\nu}\,[ {\ \mathrm d} \vec{p}_\nu(r)]_x \approx
\frac{2\,G_{\rm N}\,M}{c_0^2\,r} =
\frac{2\,G_{\rm N}\,M}{c_0^2\,\sqrt{R^2 + x^2}}~.
\label{Eq:x_integrated}
\end{eqnarray}
If we apply Eq.~(\ref{Eq:photon_energy}) to a photon approaching the
mass~$M$ along the $x$~axis
starting from infinity with $E_\nu = p_\nu\,c_0$, and
considering that the $y$~component in Eq.~(\ref{Eq:y_component}) is much smaller
than the $x$~component in Eq.~(\ref{Eq:x_component}) for $x \gg R$,
the photon speed~$c(r)$ as a function of $r$
can be determined from
\begin{eqnarray}
p_\nu\,c_0 \approx \{p_\nu + [ {\ \mathrm d} \vec{p}_\nu(r)]_x\}\,c(r)~.
\label{Eq:reduced_c}
\end{eqnarray}
Division by $p_\nu\,c_0$ then gives with Eq.~(\ref{Eq:x_integrated})
\begin{eqnarray}
\frac{1}{[n_{\rm G}(r)]_x} = \frac{c(r)}{c_0} \approx
1 - \frac{2\,G_{\rm N}\,M}{c_0^2~r} = 1 + \frac{2\,U(r)}{c_0^2}
\label{Eq:refraction}
\end{eqnarray}
as a good approximation of the inverse gravitational index of refraction
along the $x$~axis. The same index has been obtained albeit with different
arguments, e.g., by \citet{Booetal,YeLin}. The resulting speed of light is in
agreement with evaluations by \citet{Sch60}, for a radial
propagation\footnote{\citet{Ein12} states
explicitly that the speed at a certain location is not dependent on the
direction of the propagation.} in a central gravitational field, and
\citet{Oku00}\,---\,calculated on the basis of the standard Schwarzschild
metric. A decrease of the speed of light near the Sun, consistent with
Eq.~(\ref{Eq:refraction}), is not only supported by the predicted and
subsequently observed Shapiro delay
\citep{Sha64,Reaetal,Sha71,Kraetal,Baletal,KutZaj},
but also indirectly by the deflection of light \citep{Dysetal}.
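
Integrating Eq.~(\ref{Eq:y_component}) along the straight-line path gives a total transverse deflection of $4\,G_{\rm N}\,M/(c_0^2\,R)$; the short numerical quadrature below, with standard solar values as illustrative inputs, reproduces the familiar grazing-incidence value of about $1.75\hbox{$^{\prime\prime}$}$.
\begin{verbatim}
import numpy as np

# Deflection for a grazing solar passage: integrate the transverse
# kick of Eq. (y_component), 2 G M R / (c0^2 r^3), along x.
# Standard solar values, used here for illustration.
c0, GM, R = 299792458.0, 1.327e20, 6.957e8
x = np.linspace(-1e3 * R, 1e3 * R, 1_000_001)
kick = 2 * GM * R / (c0**2 * (R**2 + x**2)**1.5)
angle = np.trapz(kick, x)            # analytically 4 GM/(c0^2 R)
print(angle * 180 / np.pi * 3600)    # -> about 1.75 arcsec
\end{verbatim}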
\section{Gravitational redshift}
\label{sec:grav_red}
Since Einstein discussed the gravitational redshift and published conflicting
statements regarding this effect, the confusion could still not be cleared up
consistently \citep[cf., e.g.,][]{Man06,Sotetal}. In most of his publications
Einstein defined clocks as atomic clocks. Initially he assumed that the
oscillation of an atom corresponding to a spectral line might be an
intra-atomic process, the frequency of which would be determined by the atom
alone \citep{Ein08,Ein11}. \citet{Sco15} also felt that the equivalence
principle and the notion of an ideal clock running independently of
acceleration suggest that such clocks are unaffected by gravity. \citet{Ein16}
later concluded that clocks would slow down near gravitational centres thus
causing a redshift.
The question whether the gravitational redshift is caused by the emission
process (Case~a) or during the transmission phase (Case~b) is nevertheless
still a matter of recent debates. Proponents of (a) are, e.g.,
\citet{Moe57,Des90,Sch60,Craetal,Oha76,EarGly,Oku00,Okuetal} and of
(b): \citet{Hayetal,Feyetal,Str04,Fli06,Ran06,Wil06}.
There is general agreement on the observational and experimental facts and
most of the arguments are formally consistent with them, but different
physical processes or mathematical concepts are considered. In particular, it
is surprising that the same team of experimenters, albeit with different first
authors (Cranshaw et al. and Hay et al.), published different views on the
process of the Pound--Rebka--Experiment. \citet{PouSni} and \citet{Pou00}
pointed out, however,
that this experiment could not distinguish between the two options, because
the invariance of the velocity of the radiation had not been demonstrated.
\citet{Bon86} and \citet{Dic60} also left the question open. In many cases,
the confusion results from the unclear definitions of clocks and times as
detailed, for instance, by \citet{AshAll} and \citet{Oku00}.
\citet{Ein17} emphasized that for an elementary emission process not only
the energy exchange, but also the momentum transfer is of importance
\citep[cf., as well][]{Poi00,Abr03,Fer32}. Taking these considerations into
account, \citet{WilDwi14} formulated a photon emission process at
a gravitational potential~$U$ assuming that:
\begin{enumerate}
\item [(1)] The atom cannot sense the potential~$U$, in line with the
original proposal by \citet{Ein08,Ein11}, and initially emits the same
energy~$\Del E_0$ at $U > 0$ and $U_0 = 0$.
\item [(2)] It also cannot directly sense the speed of light at the location with a
potential~$U$. The initial momentum thus is $p_0 = \Del E_0/c_0$.
\item [(3)] As the local speed of light is, however, $c(U) \ne c_0$, a photon
having an energy of $\Del E_0$ and a momentum $p_0$ is not able to propagate.
The necessary adjustments of the photon energy and momentum as well as the
corresponding atomic quantities then lead in the \emph{interaction region} to
a redshift consistent with $h \nu = \Del E_0\,(1 + U/c^2_0)$ and observations.
\end{enumerate}
As outlined in Sect.~\ref{sec:graviton}, there is general
agreement in the literature that the local speed of light is
\begin{eqnarray}
c(U) \approx c_0 \left(1 + \frac{2\,U}{c^2_0}\right)
\label{Eq:local_speed}
\end{eqnarray}
in line with Eq.~(\ref{Eq:refraction}). It has, however, to be noted that
in Sect.~\ref{sec:graviton} the speed~$c(U)$ was obtained for a photon
propagating from $U_0$ to $U$, and, therefore, the physical process which
controls the speed of newly emitted photons is not established. An attempt
to do that will be made in the next section.
\section{An aether model with photons as solitons}
\label{sec:aether}
Before we suggest a specific aether model, a few statements on the aether
concept in general should be mentioned. Following \citet{MicMor} famous
experiment, \citet{Ein05a,Ein08} concluded that the concept of a light aether
as carrier of the electric and magnetic forces is not consistent with the STR.
In response to critical remarks by \citet{Wie11}, cf. \citet{Sch90} for
Wiechert's support of the aether, \citet{Lau12} wrote that the existence
of an aether is not a physical, but a philosophical problem, but later
differentiated between the physical world and its mathematical formulation.
A four-dimensional `world' is only a valuable mathematical trick; deeper
insight, which some people want to see behind it, is not involved \citep{Lau59}.
In contrast to his earlier statements, Einstein said at the end of a speech
in Leiden that according to the GTR a space without aether cannot be
conceived \citep{Ein20}; and even more detailed: Thus one could instead of
talking about `aether' as well discuss the `physical properties of space'.
In theoretical physics we cannot do without aether, i.e., a continuum endowed
with physical properties \citep{Ein24}.
\citet{Mic28} confessed at a meeting in Pa\-sa\-de\-na in the
presence of H.A. Lorentz that he clings a little to the aether; and
\citet{Dir51} wrote in a letter to Nature that there are good reasons for
postulating an {\ae}ther.
\citet{WiDwWi} proposed an impact model for the electrostatic force
based on massless \emph{dipoles}.
The vacuum is thought to be permeated by these dipoles that are, in the absence of
electromagnetic or gravitational disturbances, oriented and directed randomly,
propagating along their dipole axes with a speed of~$c_0$. There is little or
no interaction among them. Note that such electric dipoles have no mean
interaction energy, even in the classical theory \citep[see, e.g.,][]{Jac06}.
We suggest identifying the dipole distribution with an aether. This is very
similar to the conclusion of \citet{Pre75}:
\begin{enumerate} \item[ ]
``[...] first, that the normal
state of the component particles of the ether is a state of motion; second,
that this motion of the particles takes place in straight lines; and third,
that this motion takes place towards every possible direction.''
\end{enumerate}
Einstein's aether mentioned above may, however, be more related to the
gravitational interactions \citep[cf.][]{Gra01}. In this case, we have to
consider the graviton distribution as another component of the aether.
We assume that an individual dipole interacts with gravitons in the same
way as a photon does, see Eq.~(\ref{Eq:photon_energy}), i.e., according to
\begin{eqnarray}
E_{\rm D} = |\vec p_{\rm D}|\,c = |\vec p_{\rm D} + 2\,Y\vec p_{\rm G}|\,c' =
|\vec p'_{\rm D}|\,c' = E'_{\rm D} ~,
\label{Eq:dipole_energy}
\end{eqnarray}
where $E_{\rm D}$ and $\vec p_{\rm D}$ refer to the energy and momentum of a
dipole. We can then modify Eqs.~(\ref{Eq:transfer_rate}) to (\ref{Eq:reduced_c})
by changing $\nu$ to D and find that Eqs.~(\ref{Eq:refraction}) and
(\ref{Eq:local_speed}) are also valid for dipoles with a speed of $c_0$ for
$U_0 = 0$. One exception from Preston's ``ether'' is that dipoles can, according
to a modified Eq.~(\ref{Eq:y_component}), be deflected by graviton
interactions.
Considering that many suggestions have been made to describe photons as
solitons \citep[e.g.,][]{Dir27,Vig91,KamSla,Meu13,Ber13,Beretal}, we
also propose that a photon is a soliton propagating in the dipole
aether with a speed of~$c(U)$, cf., Eq.~(\ref{Eq:local_speed}), controlled by
the dipoles moving in the direction of propagation of the photon.
The dipole distribution thus determines the gravitational index of
refraction, cf. Eq.~(\ref{Eq:refraction}), and consequently the speed of
light~$c(U)$ at the potential~$U$. This solves the problem formulated
at the end of Sect.~\ref{sec:grav_red} and might be relevant for other
phenomena, such as gravitational lensing and the cosmological redshift
\citep[cf., e.g.,][]{Ell10,CheKan}.
We will further assume that the dipoles constituting a photon will have turned
the orientation of their axes to a direction perpendicular to the photon
velocity vector. This avoids any electrostatic interactions
during emission and absorption processes of photons, and will probably also be
required by their polarization effects.
\section{Discussion and Conclusion}
\label{sec:concl}
Our aim was to identify a physical process that leads to a speed~$c(U)$ of
photons controlled by the gravitational potential~$U$. This could be achieved
by postulating an aether model with moving dipoles, in which a
gravitational index of refraction $n_{\rm G}(U) = c_0/c(U)$ regulates the
emission and propagation of photons as required by energy and momentum
conservation principles. The emission process thus follows Steps~(1) to (3)
in Sect.~\ref{sec:grav_red}, where the local speed of light is given by the
gravitational index of refraction~$n_{\rm G}$. In this sense, the statement by
\citet{Mueetal} that an atom cannot detect the potential~$U$ is correct; the local
gravity field~$\vec{g}$, however, is not controlling the emission process.
A photon will be emitted by an atom with appropriate energy and momentum
values, because the local speed of light requires an adjustment of the
momentum. This occurs in the interaction region between the atom and its
environment as outlined in Step~(3) of Sect.~\ref{sec:grav_red}. A receiver
of the same type next to the emitter would not be able to determine the
potential either, because the energy and momentum restrictions apply to the
absorption process as well.
\acknowledgments
This research has made extensive use of the
Smithsonian Astrophysical Observatory (SAO)/National Aeronautics and Space
Administration (NASA) Astrophysics Data System (ADS).
Administrative support has been provided by the Max-Planck-Institute for Solar
System Research and the Indian Institute of Technology (Banaras Hindu
University).
\section{Introduction}
Online social networks (OSNs) have become
the most profitable channel for ``viral marketing'' purposes.
In this regard, a classic optimization problem in OSNs is \textit{influence maximization} (IM), i.e., to discover a set of initial influencers, or \textit{seeds}, that can maximize the spread of information through the network~\cite{Kempe:2003,Chenbook}.
In most practical scenarios, companies want to tailor their advertisement strategies in order to address only selected OSN-users as potential customers.
This is the perspective adopted in the context
of \textit{targeted IM} (e.g.,~\cite{kbtim,Lu:2015:CCC:2850578.2850581,M.thai1,M.thai2,8326536}),
which is also the focus of this work.
Besides trying to maximize the spread of information (e.g., advertising of a product), which is directly related to an a-priori specified budget, i.e., the number of seeds, a further yet less explicit issue in (targeted) IM is to maximize the ``potential'' of the selected seeds to influence, or engage, the users in the network. We believe that such a potential can be well-explained in terms of the \textit{diversity} that may characterize the seeds.
Intuitively, influencers that have different ``features'' (e.g., age, gender, socio-cultural aspects, preferences) bring unique opinions, experiences, and perspectives to bear on the influence propagation process.
As a consequence, \textit{seed users that have more different characteristics are more likely to maximize their strategies to engage the target users}.
Also, from a different view,
favoring diversity has important ethical implications in choosing the seeds as well as the target users.
Surprisingly, although diversity has been recognized as a key-enabling dimension in data science (e.g., to improve user satisfaction in content recommendation based on novelty and serendipity),
relatively few studies have considered diversity in the context of (targeted) IM problems.
%
One of the earliest attempts is provided by Bao et al.~\cite{BaoCZ13}, which extends the Independent Cascade model to account for the structural diversity of nodes' neighborhoods, however without addressing an optimization problem.
Other works have studied relations between diversity and spreading ability, but focusing on a single node in a network~\cite{HuangLCC13}.
Node diversity was first introduced into the IM task by Tang et al.~\cite{6921625}. They
consider numerical attributes reflecting user's preferences on some predefined categories (e.g., movie genres) to address a generic IM task.
In~\cite{8326536}, we originally define an IM problem that is both targeted and diversity-sensitive, which, however, only considers specific notions of diversity driven by the topology of the information diffusion graph.
{\bf Contributions.\ }
We aim to advance research on IM by introducing a targeted IM problem that accounts for side-information-based diversity of the seeds to be identified. Our contributions are summarized as follows.
\vspace{-1mm}
\begin{itemize}
\item
We propose the {\em {\bf A}ttribute-based {\bf DI}versity-sensitive {\bf T}argeted Infl{\bf{U}}ence {\bf M}aximization} problem, dubbed \myalgo.\footnote{Latin term for \textit{access}, \textit{admission}, \textit{audience}.} Our notion of diversity assumes that nodes in the network are associated with side-information in the form of a schema of categorical attributes and corresponding values.
\item
We provide different definitions of diversity that are able to reflect the variety in the amount and type of categorical values that characterize the seeds being discovered. Remarkably, we design a class of nondecreasing monotone and submodular functions for categorical diversity, which also has the nice property of allowing incremental computation of a node's marginal gain when added to the current seed set.
To the best of our knowledge, we are the first to propose a formal systematization of approaches and functions for determining node-set diversity in influence propagation and related problems in information networks.
\item
We design our solution to the \myalgo problem under the Reverse Influence Sampling (RIS) paradigm~\cite{doi:10.1137/1.9781611973402.70,Tang:2014:IMN:2588555.2593670} and recognized as the state-of-the-art approach for IM problems.
One challenge that we address is revisiting the RIS framework to deal with both the targeted nature and the diversity-awareness of the \myalgo problem.
\item
We develop the \myalgo algorithm, which
returns a $(1-1/e-\epsilon)$-approxima\-tion with at least $1-1/n^l$ probability in $O((k+l)(|\E|+|\V|) \log |\V| /\epsilon ^2)$ time, under the triggering model, a general diffusion model adopted by most existing work.
\item
We experimentally evaluate \myalgo on publicly available network datasets: three of them are used in a user-engagement context, one in community interaction, and one in recommendation.
This choice was mainly motivated by the opportunity of comparing our \myalgo with the aforementioned methods in~\cite{6921625} and~\cite{8326536}.
\end{itemize}
{\bf Plan of the paper.\ }
The rest of this paper is organized as follows.
Section~\ref{sec:related} briefly discusses related work on targeted IM and diversity-aware IM.
Section~\ref{sec:problem-statement} formalizes the information diffusion context model, the objective function, and the optimization problem under consideration.
Section~\ref{sec:diversity} presents our study on monotone and submodular diversity functions for the categorical data modeling the profiles of nodes in a network.
Section~\ref{sec:framework} describes our proposed approach and algorithm for the \myalgo problem.
Sections~\ref{sec:eval} and \ref{sec:results} contain our experimental evaluation methodology and results, respectively.
In Section~\ref{sec:conclusions}, we provide our conclusions and pointers for future research.
\section{Related work}
\label{sec:related}
The foundations of IM as an optimization problem, initially posed by Kempe et al. in their seminal work~\cite{Kempe:2003},
rely on two main findings, namely the intractability of the problem in its two sources of complexity (i.e., given the budget $k$ and a diffusion model, to discover a size-$k$ seed set that maximizes the expected spread, and to estimate the expected spread of the final activated node-set) and the possibility of designing an approximate greedy solution with theoretical guarantee provided that the activation function is nondecreasing monotone and submodular.
Upon the findings in the breakthrough work by Borgs et al.~\cite{doi:10.1137/1.9781611973402.70}, Tang et al.~\cite{Tang:2014:IMN:2588555.2593670} proposed a randomized algorithm, \algo{TIM}/\algo{TIM+}, that can perform orders of magnitude faster than the greedy one, overcoming the bottleneck in the computation of the expected spread by exploiting a \textit{reverse sampling} technique.
Since then, other methods have followed, such as IMM \cite{IMM}, BCT \cite{M.thai1}, TipTop \cite{M.thai2}.
Also, \cite{Lu:2015:CCC:2850578.2850581} generalizes the theoretical results
in~\cite{doi:10.1137/1.9781611973402.70,Tang:2014:IMN:2588555.2593670}
to any diffusion model with an equivalent
live-edge model of the diffusion graph.
%
In the following, we focus our discussion on targeted IM approaches, while for broader and more complete views on the IM topic, the interested reader can refer to recent surveys, such as \cite{lifan,annappa,Peng+18}.
{\bf Targeted influence maximization.\ }
Research on targeted IM has also gained attention in recent years.
A query processing problem for distinguishing specific users from others is considered in~\cite{LeeChung15}.
%
In the keyword-based targeted IM method proposed in~\cite{kbtim}, the target nodes are identified as those having preferences (i.e., keywords) in common with a certain advertisement.
%
In~\cite{Devotion}, the targeted IM problem is studied in the context of user engagement, whereby a node is regarded as target on the basis of its social capital.
The RIS-based BCT method is proposed in~\cite{M.thai1}, whereby
each node is associated with a cost (i.e., the effort required to engage a node as a seed), and a benefit score (i.e., the profit resulting from its involvement in the propagation).
A few studies focus on the special case of a single selected target-node~\cite{GuoZZCG13,YangHLC13,GulerVTNZYO14}. By contrast, more general targeted IM methods, like ours,
aim at maximizing the probability of activating a target set of arbitrary size
by discovering a seed set that is neither fixed nor singleton,
and that has no constraints on its topological closeness to a fixed initiator.
Other approaches incorporate information on the users' profiles into the diffusion process or into the influence probability estimation.
In~\cite{LagnierDGG13}, a family of probabilistic diffusion models is proposed to exploit vectors of features representing the content of information to be diffused and the profile of users.
In~\cite{ZhouZC14}, the independent cascade model is adapted
to accommodate user preferences, which are learned from a set of users' documents labeled with topics.
In the conformity-aware cascade model~\cite{LiBSC15}
the influence probability from node $u$ to node $v$ is computed based on a sentiment analysis approach and proportionally to the product of $u$'s influence and $v$'s conformity, where the latter refers to the inclination of a node to be influenced by others.
%
User activity, sensitivity, and affinity are considered in~\cite{Deng+16} to define node features, which are then used to adjust the influence between any two users.
A further perspective that can be regarded as related to targeted IM consists in exploiting network structures to drive the seed selection.
In~\cite{Suman+19}, a budget constraint on the cumulative cost of the seeds to be selected is divided among available communities, then seeds are selected inside each community based on some centrality measure.
Community structure is also exploited in the three-phase greedy approach proposed in~\cite{PHG}.
Yet, coreness is used in~\cite{8031449} for estimating nodes' influence and developing a simulated annealing based algorithm for IM.
Note that the aforementioned works,
besides \textit{discarding any diversity notion}, are concerned with the development of heuristics for IM while we are interested in designing a solution with approximation guarantee.
{\bf Diversity-aware influence maximization}.
Diversity notions have been considered in several research fields, such as web searching, recommendation, and information spreading (e.g.,~\cite{Santos:2015:SRD:2802186.2802187,Wu:2016:RMC:2885506.2700496,BaoCZ13,HuangLCC13}).
However, relatively little work has been devoted to integrating diversity into the objective function of IM problems.
%
Tang et al.~\cite{6921625} proposed the first study on diversity-aware IM, where a linear combination of the expected spread function and a numerical-attribute-based diversity is maximized by means of heuristic search strategies, defined upon
classic centrality measures.
In~\cite{8326536}, we formulated the topology-driven diversity-sensitive targeted IM problem, dubbed \algo{DTIM}, with an emphasis on maximizing the social engagement of a given network. The provided solution, built upon the Simpath method~\cite{Simpath}, supports only the Linear Threshold model.
It should be noted that, although the optimization problem presented in this work is similar to the one in~\cite{8326536},
here we provide different formulation and algorithmic solution than the earlier ones, since unlike \algo{DTIM} (i) \myalgo builds on state-of-the-art approximation methods for IM, and (ii) it is designed to handle different notions of attribute-based diversity.
%
In Sects.~\ref{sec:eval}--\ref{sec:results}
we present a comparative evaluation with the methods in~\cite{6921625} and~\cite{8326536}.
\section{Problem statement}
\label{sec:problem-statement}
{\bf Representation model.\ }
Given a social network graph $\G_0=\langle \V, \E \rangle$, with set of nodes $\V$ and set of edges $\E$, let $\G = \G_0(b, t) = \langle \V, \E, b, t \rangle$ be a directed weighted graph representing the \textit{information diffusion} context associated with $\G_0$, with
$b:\E \rightarrow (0,1]$
being the edge weighting function,
and $t: \V \rightarrow (0,1]$
the node weighting function.
Function $t$ determines the status of each node as \textit{target}, i.e., a node toward which the information diffusion process is directed. Given a user-specified threshold $\tau_{TS} \in [0,1]$, we define the \textit{target set} $\TargetSet$ for $\G$ as:
$$\TargetSet = \{ v \in \V | t(v)\geq \tau_{TS}\}.$$
\vspace{-1mm}
Function $b$ corresponds to the parameter of the \textit{Triggering} model~\cite{Kempe:2003}, which in line with several existing studies on IM is also adopted here as information diffusion model.
Under this model, each node chooses a random subset of its neighbors as \textit{triggers}, where the choice of triggers for a given node is independent of the choice for all other nodes. If a node $u$ is inactive at a given time and a node in its trigger set becomes active, then $u$ becomes active at the subsequent time. Notably, Triggering has an equivalent interpretation as ``reachability via live-edge paths'', such that an edge $(u, v)$ is designated as live when $v$ chooses $u$ to be in its trigger set. Therefore, $b(u,v)$ represents the probability that edge $(u, v)$ is live.
%
Linear Threshold and Independent Cascade~\cite{Kempe:2003} are special cases of Triggering with particular distributions of trigger sets.
Note also that functions $b$ and $t$ are usually defined in a data-driven fashion. We will discuss possible instances of both functions in Sect.~\ref{sec:settings}.
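To make the live-edge interpretation concrete, the following Python sketch
(with our own, hypothetical function and variable names) simulates one
diffusion under the Triggering model by sampling a live-edge instance and
collecting the nodes reachable from the seeds; averaging the cumulative score
$t(\cdot)$ of the activated target nodes over many such runs then estimates
the capital function introduced next.
\begin{verbatim}
import random
from collections import defaultdict, deque

def simulate_triggering(edges, b, seeds, rng=None):
    """One Monte Carlo diffusion under the Triggering model.

    edges: list of directed edges (u, v); b: dict mapping (u, v) to the
    live-edge probability b(u, v); seeds: initial active node set.
    Returns the set of nodes active at the end of the process.
    """
    rng = rng or random.Random(0)
    # Live-edge interpretation: edge (u, v) is live with probability
    # b(u, v), i.e., v independently picks u for its trigger set.
    live = defaultdict(list)
    for (u, v) in edges:
        if rng.random() < b[(u, v)]:
            live[u].append(v)
    # The activated nodes are exactly those reachable from the seeds
    # via live edges.
    active, frontier = set(seeds), deque(seeds)
    while frontier:
        u = frontier.popleft()
        for v in live[u]:
            if v not in active:
                active.add(v)
                frontier.append(v)
    return active
\end{verbatim}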
{\bf Objective function.\ }
The objective function of our targeted IM problem comprises two functions.
The first one, denoted as $C(\cdot)$, is determined as the cumulative amount of the scores associated with the target nodes that are activated by the seed set $S$.
Following the terminology in~\cite{8326536}, we call this function social capital, or simply \textit{capital}, which is defined as
\begin{equation}\label{eq:DC}
\olf{\ActiveSet{S}}=\sum \limits_{v \in \ActiveSet{S}\cap \TargetSet } t(v)
\end{equation}
where $\ActiveSet{S}$ denotes the set of nodes that are active at the end of the diffusion starting from $S$.
The second term in our objective function, denoted as $div(\cdot)$, is introduced to determine the \textit{diversity} of the nodes in any subset of $\V$.
As previously mentioned, our approach is to measure node diversity in terms of a-priori knowledge provided in the form of symbolic values corresponding to a predetermined set of \textit{categorical attributes}. In Section~\ref{sec:diversity}, we provide a class of diversity functions for categorical datasets.
We now formally define our proposed problem of targeted IM, {\em {\bf A}ttribute-based {\bf DI}versity-sensitive {\bf T}argeted Infl\-{\bf{U}}ence {\bf M}aximization} (\myalgo).
\begin{definition}
\label{def:tim}
{\em (}\textsc{Attribute-based Diversity-sensitive Targeted Influence Maximization}{\em )}
Given a diffusion graph $\G = \langle \V, \E, b, t \rangle$, a budget $k$, and a threshold $\tau_{TS}$, find a set $S \subseteq \V$ with $|S| \leq k$ of seed-nodes such that
\begin{equation}\label{eq:problem}
S = \operatornamewithlimits{argmax}_{S' \subseteq \mathcal{V} \ s.t. \ |S'| \leq k} \alpha \times \olf{\ActiveSet{S'}} + (1-\alpha)\times div(S')
\end{equation}
where $\alpha \in [0,1]$ is a smoothing parameter that controls the weight of capital $\olf{\cdot}$ w.r.t.\ diversity $div(\cdot)$.
\hfill~\qed
\end{definition}
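In code, the objective of Eq.~(\ref{eq:problem}) is simply a smoothed
combination of the two terms; a schematic Python fragment (with hypothetical
estimator functions, e.g., a Monte Carlo estimate of the capital term as
sketched earlier) would be:
\begin{verbatim}
def aditum_objective(S, alpha, capital_estimate, diversity):
    # Eq. (eq:problem): alpha weighs expected capital vs. seed diversity.
    return alpha * capital_estimate(S) + (1.0 - alpha) * diversity(S)
\end{verbatim}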
The problem in Def.~\ref{def:tim} preserves the NP-hard complexity of the IM problem.
However, as for the classic IM problem, if we are able to design an objective function for which
the natural \textit{diminishing property} holds, then the output of a greedy solution provides a $(1-1/e-\epsilon)$-approximation guarantee w.r.t. the optimal solution.
To this aim, we need to ensure that Eq.~(\ref{eq:problem}) is a linear combination of two monotone and submodular functions.
Here we point out that monotonicity and submodularity of the capital function $C(\cdot)$ were previously demonstrated in~\cite{8326536}. In the next section, we provide our definitions of $div(\cdot)$.
\section{Monotone and submodular diversity functions for a set of categorical tuples}
\label{sec:diversity}
We assume that nodes in the social network graph $\G_0=\langle \V, \E \rangle$ are associated with side-information in the form of symbolic values that are valid for a predetermined set of categorical attributes, or \textit{schema}, $\A = \{A_1, \ldots, A_m \}$.
For each $A \in \A$, we denote with $dom_A$ its domain, i.e., the set of admissible values known for $A$, and with $dom$ the union of attribute domains. Moreover, we define $val_A: \V \mapsto dom_A$
as a function that associates a node with a value of $A$.
For any $S \subseteq \V$, we will also use symbols $dom_A(S)$ and $dom(S)$ to denote the subset of values in $dom_A$, resp. $dom$, that are associated with nodes in $S$.
Given the schema $\A$, we will refer to the categorical tuple associated to any $v \in \V$ as the \textit{profile} of node $v$, and to the categorical dataset for all nodes in $\V$ as the \textit{profile set} of $\V$.
We will use symbol $\A[v]$ to denote the profile of $v$ and symbol $\D_S$ to denote the profile set of nodes in $S \subseteq \V$. Note that $\D_S$ is a multiset such that $\D_S = \bigcup_{v \in S} \A[v]$, and any $\A[v]$ is generally regarded as a sparse vector, as it could contain \textit{missing values} for some attributes; i.e.,
if we denote with $\bot$ a missing attribute value,
$\A[v] = \langle val_{A_1}(v) \vee \bot, \ldots, val_{A_m}(v) \vee \bot\rangle$.
Moreover, we will use symbol $|\A[v]|$ to denote the actual length of $\A[v]$ as the number of attribute values contained in the profile.
\vspace{-2mm}
\paragraph*{\bf General requirements. \ }
Given our setting of an information diffusion graph $\G = \G_0(b, t) = \langle \V, \E, b, t \rangle$ associated with $\G_0$,
here we define a class of functions $div$ that, for any $S \subseteq \V$ with associated $\D_S$, satisfy the following requirements:
\begin{itemize}
\item $div(S)$ defines a notion of diversity of nodes in $S$ w.r.t. their categorical representation given in $\D_S$;
\item $div(S)$ must be \textit{nondecreasing monotone and submodular}; hereinafter, we will use the simpler term ``monotone and submodular'';
\item for any $v \in \V \setminus S$, the marginal gain $div(S \cup \{v\}) - div(S)$ should be computed efficiently;
\item $div(S)$ should be \textit{meaningful}, in terms of ability in capturing the subtleties underlying the variety of node profiles according to their categorical attributes and values.
\end{itemize}
\vspace{-4mm}
\subsection{Challenges in defining set diversity functions}
\label{sec:negativefunctions}
\vspace{-1mm}
Before providing our definitions of diversity functions in Sects.~\ref{sec:diversity:attributeprojection}--\ref{sec:diversity:class},
here we mention some negative outcomes drawn from attempts at devising apparently simple and intuitive approaches based on \textit{attribute-wise} as well as \textit{profile-wise} functions, which turn out to be unsuitable as diversity functions for the task at hand, as they fail to satisfy one or more of the general requirements listed above.
%
Let us begin with attribute-wise functions.
Given $A \in \mathcal{A}$ and $S \subseteq \V$, one simple approach would be to compute the \textit{number of unique values} admissible for $A$ that occur in $\D_S$, normalized by the size of $S$; however, this coarse-grained function not only fails to characterize the variety of nodes in terms of repetitions of the different attribute values, but it is also not nondecreasing monotone, since it decreases when nodes with identical attribute values are added.
The desired properties of monotonicity and submodularity could be satisfied by just counting the number of unique values of the attribute in $\D_S$, however at the cost of a further loss in meaningfulness, thus yielding a useless notion of diversity.
An alternative approach would be to aggregate \textit{pairwise distances} of the node profiles w.r.t. a given attribute. For instance, we could count the (normalized) number of mismatchings over each pair of nodes in a set; however, it is easy to prove that the derived function will not be submodular in general.
Let us now extend to calculating pairwise distances of the node profiles over the entire schema. In this regard, we could consider a widely-applied measure for computing the distance between two sequences of symbols, namely the \textit{Hamming distance}. However, under various set-size-based normalization schemes, the resulting function is not submodular, or not even monotone.
Alternatively, we could consider a standard statistic for the dissimilarity of finite sample sets, namely the \textit{Jaccard distance}. This is defined as the complement of the Jaccard similarity, i.e., for any two sets, one minus the ratio between the size of their intersection and the size of their union. (In our context, a sample set corresponds to a categorical tuple, i.e., a node profile.) Again, the resulting function will not ensure submodularity.
%
The interested reader can refer to the \textbf{\em Appendix} for analytical details of the aforementioned functions and relating examples that show their unsuitability as nondecreasing monotone submodular diversity functions.
Please note that, in \textbf{\em Appendix}, we also report {\em Proofs} for the main theoretical results that will be presented next in Sections~\ref{sec:diversity:attributeprojection}--\ref{sec:diversity:class}.
\vspace{-1mm}
\subsection{Attribute-wise diversity}
\label{sec:diversity:attributeprojection}
\vspace{-1mm}
In this section, we discuss the first of our proposed diversity functions, which is \textit{attribute-wise}.
We consider a notion of diversity of nodes that builds on the variety in the amount and type of categorical values that characterize the nodes in a selected set. In particular, we consider a linear combination of the contributions the various attributes provide to the diversity of nodes in a set.
\begin{definition}
\label{def:diversity}
Given a set of categorical attributes $\mathcal{A} = \{A_1, \ldots, A_m \}$ and associated profile set $\D$ for the nodes in a graph $\G_0=\langle \V, \E \rangle$, we define the {\em attribute-wise diversity} of any set $S \subseteq \V$ as:
\begin{equation}
\label{eq:diversity}
div(S) = \sum_{j=1..m} \omega_j \ div_{A_j}(S)
\end{equation}
where $div_{A_j}(S)$ evaluates the diversity of nodes in $S$ w.r.t. attribute $A_j$,
and $\omega$'s are real-valued coefficients in $[0,1]$, which sum up to 1 over $j=1..m$.
\hfill~\qed
\end{definition}
To meet the monotonicity, submodularity, meaningfulness and efficiency requirements, we provide the following attribute-specific set diversity function.
\begin{definition}
\label{def:divA}
Given a categorical attribute $A$, with domain of values $dom_A$, and node set $S \subseteq \mathcal{V}$, we define the \textit{attribute-specific set diversity} for $S$ as:
\begin{equation}
\label{eq:divA}
div_A(S) = \sum_{a \in dom_A(S)} \sum_{i=1}^{n_a} \frac{1}{i^{\lambda}}
\end{equation}
where $n_a$ is the number of nodes in $S$ that have value $a$ for $A$, and $\lambda \geq 1$.
\hfill~\qed
\end{definition}
One nice property of the function in Eq.~(\ref{eq:divA}) is that the contribution of a node to the set diversity, i.e., \textit{the node's marginal gain}, can be determined in constant time, thus without recomputing the set diversity from scratch.
This holds based on the following fact.
\begin{fact}
The marginal gain of adding a node $v$ to $S$ is equal to
$$\sum_{j=1..m} \omega_j \sum_{a \in dom_{A_j} \ \wedge \ a \in \mathcal{A}[v]} (n_a+1)^{-\lambda},$$
where $n_a$ is the number of nodes in $S$ that have value $a$ for $A_j$, and $\lambda \geq 1$.
\end{fact}
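As an illustration, the following Python sketch (our own naming; node
profiles are given as attribute-to-value dictionaries, with missing
attributes simply omitted) evaluates Eqs.~(\ref{eq:diversity}) and
(\ref{eq:divA}) incrementally by maintaining the per-value counts $n_a$:
\begin{verbatim}
from collections import Counter

def attribute_wise_div(profiles, weights, lam=1.0):
    """Attribute-wise diversity div(S): weighted sum, over the
    attributes, of sum_a sum_{i=1..n_a} 1/i^lam. 'weights' maps each
    attribute to its coefficient omega_j (assumed to cover all
    attributes occurring in the profiles)."""
    counts = {A: Counter() for A in weights}   # value counts n_a
    total = 0.0
    for prof in profiles:                      # add one node at a time
        for A, a in prof.items():
            counts[A][a] += 1
            total += weights[A] / counts[A][a] ** lam
    return total, counts

def marginal_gain(counts, weights, profile, lam=1.0):
    """Constant-time gain of adding one node (see the fact above)."""
    return sum(weights[A] / (counts[A][a] + 1) ** lam
               for A, a in profile.items())
\end{verbatim}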
\begin{proposition}
The attribute-wise diversity function defined in Eq.~(\ref{eq:diversity}) is monotone and submodular.
\end{proposition}
\begin{lemma} \label{lemma:max_div_value}
Given a set $S$ and a categorical attribute $A$, \
consider $M_A = \max_{a \in dom_A(S)} n_a$ and
$m_A = \min_{a \in dom_A(S)} n_a$. For any $$S = \operatornamewithlimits{argmax}_{S' \subseteq \mathcal{V} \ s.t. \ |S'| \leq k} div_A(S'),$$ it holds that $M_A - m_A \leq 1$.
\end{lemma}
We also observe that the \textit{theoretical maximum} value reached by Eq.~(\ref{eq:diversity}) depends only
on the budget $k$, as provided by the following result.
\begin{proposition}
\label{def:diversity_max_possible_value}
Given the set of categorical attributes $\mathcal{A} = \{A_1, \ldots, A_m \}$, $m$ real-valued coefficients $\omega_j \in [0,1]$ ($j=1..m$), and a budget $k$, the theoretical maximum value for Eq.~(\ref{eq:diversity}) is a function of $k$, determined as ($d_j \triangleq |dom_{A_j}|$):
\begin{equation}
\label{eq:diversity_max_possible_value}
div^*[k] = \sum_{j=1}^{m} \omega_j \left(d_j \sum_{i=1}^{\lfloor k/d_j \rfloor} \frac{1}{i^{\lambda}} + \frac{k~\mathrm{mod}~d_j}{\big(1+ \lfloor k/d_j \rfloor\big)^{\lambda}} \right)
\end{equation}
\end{proposition}
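A direct Python transcription of the above bound, with the integer divisions
made explicit, may look as follows (an illustrative sketch, not part of the
\myalgo implementation):
\begin{verbatim}
def max_attribute_wise_div(k, domain_sizes, weights, lam=1.0):
    """Theoretical maximum of the attribute-wise diversity for budget k.

    domain_sizes: dict {attribute: |dom_A|}; weights: dict of omega_j.
    The optimum spreads the k nodes as evenly as possible over each
    attribute domain (cf. the lemma above).
    """
    total = 0.0
    for A, d in domain_sizes.items():
        q, r = divmod(k, d)          # q = floor(k/d), r = k mod d
        full = d * sum(1.0 / i ** lam for i in range(1, q + 1))
        total += weights[A] * (full + r / (1.0 + q) ** lam)
    return total
\end{verbatim}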
\vspace{-2mm}
\subsection{Distance-based diversity}
\label{sec:diversity:hball}
\vspace{-1mm}
In Sect.~\ref{sec:negativefunctions}, we showed that an aggregation by sum of the profile-wise Hamming distances does not generally ensure submodularity or even monotonicity.
Given the profiles of two nodes $u,v$, the \textit{Hamming distance} is defined as:
\begin{equation}\label{eq:hamming}
dist^H(u,v) = \sum_{j=1}^m \mathbbm{1}[val_{A_j}(u) \neq val_{A_j}(v)],
\end{equation}
where $\mathbbm{1}[\cdot]$ denotes the indicator function.\footnote{For any nodes $u$ and $v$, we assume that if either $u$'s or $v$'s profile is not associated with a value in the domain of $A_j$ (i.e., missing value for $A_j$), with $j=1..m$, then the indicator function will be evaluated as 1.}
To design a set-function that satisfies both
the properties of monotonicity and submodularity, we borrow the notion of \textit{Hamming ball} introduced in~\cite{hammingball}, i.e., a set of objects each having a Hamming distance from a selected object-center at most equal to a predefined threshold, or \textit{radius}.
Our definition of Hamming ball for a given node in the network takes also into account the \textit{influence range}
of the node, i.e., all the nodes reachable starting from the node at the center of the ``ball''. Formally, given $v \in \V$ and a positive integer $\xi$, we define the Hamming ball as:
\begin{equation}
\mathcal{B}_v^{\xi} = \{ u \mid u \in \mathrm{IR}(v) \wedge dist^H(u,v) \leq \xi \},
\end{equation}
where $\mathrm{IR}(v) \subseteq \V$ denotes the set of nodes $u$ for which there exists
a path connecting $v$ to $u$.
%
%
Restricting the Hamming balls to the center's influence range
is beneficial in terms of efficiency, and also sound, since only the
Hamming balls that are meaningful in an influence-spread scenario
need to be considered.
\begin{definition}
\label{def:hamming-div}
Given a set of categorical attributes $\mathcal{A} = \{A_1, \ldots, A_m \}$ and associated profile set $\D$ for the nodes in a graph $\G_0=\langle \V, \E \rangle$ and a radius $\xi$, we define the {\em Hamming-based diversity} of any $S \subseteq \V$ as:
%
\begin{equation}
\label{eq:hamming-div}
div(S) = \bigl\vert \bigcup\limits_{v \in S} B^{\xi}_v \bigr\vert
\end{equation}
\vspace{-4mm}
\hfill~\qed
\end{definition}
Intuitively, as similar nodes have overlapping Hamming balls,
by taking the union in Eq.~(\ref{eq:hamming-div}) we implicitly force
the selection of seeds whose profiles are as different as possible from
each other. In fact, this eventually leads to an extension of the covered
region, given by the union of the individual balls associated with every
selected seed.
%
Moreover, one nice effect of accounting for the influence range
in computing the Hamming balls is that we inherently favor the selection
of nodes with higher connectivity, since having a ``large'' Hamming ball also implies a large influence range, which is a particularly valuable aspect
for our problem.
The above defined function has the property of allowing an incremental computation of the marginal gain of any node.
\begin{fact}
\label{fact:hamming-inc}
The marginal gain of adding a node $u$ to $S$, with $u$ having
Hamming ball $B^{\xi}_u$, is equal to $|B^{\xi}_u \setminus B^{\xi}_S|$,
where $B^{\xi}_S = \bigcup_{v \in S} B^{\xi}_v$.
\end{fact}
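A possible Python sketch of this machinery is reported below (our own helper
names; the influence ranges $\mathrm{IR}(v)$ are assumed to be precomputed,
e.g., by a forward reachability pass over the graph):
\begin{verbatim}
def hamming(prof_u, prof_v, attributes):
    """Mismatch count over the schema; a value missing on either side
    counts as a mismatch (cf. the footnote on the indicator function)."""
    return sum(prof_u.get(A) is None or prof_v.get(A) is None
               or prof_u[A] != prof_v[A] for A in attributes)

def hamming_ball(v, influence_range, profiles, attributes, xi):
    """B_v^xi: nodes reachable from v with profile distance <= xi."""
    return {u for u in influence_range[v]
            if hamming(profiles[u], profiles[v], attributes) <= xi}

def hamming_gain(ball_u, covered):
    """Marginal gain of a node u (see the fact above): the members of
    its ball not yet covered by the selected seeds' balls."""
    return len(ball_u - covered)

# After selecting u, extend the covered region so that the diversity
# value always equals len(covered):  covered |= ball_u
\end{verbatim}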
\begin{proposition}
The Hamming-based diversity function defined in Eq.~(\ref{eq:hamming-div}) is monotone and submodular.
\end{proposition}
\vspace{-2.5mm}
\subsection{Entropy-based diversity}
\label{sec:diversity:entropy}
\vspace{-1.5mm}
Diversity for categorical data can naturally be associated with notions of heterogeneity, or variability, for discrete random variables, such as entropy and Gini-index.
Unfortunately, it is easy to note that such measures cannot be used to define a monotone submodular function of diversity as long as they are evaluated on any discrete random variable whose sample space (i.e., set of admissible values) corresponds to the categorical content of $\D_S$, for any $S \subseteq \V$.
For instance, if we describe each node-profile, resp. each attribute-value, in $\D_S$ by means of a vector whose generic entry represents the frequency of that profile, resp. attribute-value, then the entropy for the corresponding probability mass function does not even preserve monotonicity for any $T \supseteq S$.
Nonetheless, it is known that entropy is monotone and submodular if defined for a \textit{set of discrete random variables}~\cite{Fujishige78}.
Given a collection $\mathcal{X} = \{X_i\}_{i=1..|\mathcal{X}|}$ of discrete random variables, for the entropy function $H: 2^{\mathcal{X}} \mapsto [0, +\infty)$ it holds that $H(\mathcal{X}_S) \leq H(\mathcal{X}_T)$ and that $H(\mathcal{X}_S,X) - H(\mathcal{X}_S) \geq H(\mathcal{X}_T,X) - H(\mathcal{X}_T)$, with $\mathcal{X}_S \subseteq \mathcal{X}_T \subseteq \mathcal{X}$ and $X \in \mathcal{X}, X \notin \mathcal{X}_T$.
%
Hence, one question here becomes how to suitably define the variables over $\D_S$, for any $S \subseteq \V$. We next provide an intuitive definition valid in our context.
%
\begin{definition}
Given any $S \subseteq \V$, we define a set $\mathcal{X}_S = \{X_i\}_{i=1..|S|}$ of discrete random variables associated with the profiles of nodes in $S$, where for each $v_i \in S$, $X_{i}: dom \mapsto \{0,1\}$, such that $dom$ is equipped with a probability function that assigns each $a \in dom$ with its relative frequency in $\D$, and $X_i$ takes the value 1 if $a$ is contained in $\A[v_i]$, 0 otherwise.
\hfill~\qed
\end{definition}
By definition, the entropy of a set of $n$ discrete random variables is the joint entropy $H(X_1, \ldots, X_n) = \mathbb{E}[-\log P(X_1,$ $\ldots, X_n)]$.
This can be rewritten in terms of conditional entropy through a \textit{chain rule} for discrete random variables~\cite{CoverThomas2006}:
$$H(X_1, \ldots, X_n) = H(X_1) + H(X_2|X_1) + \ldots +H(X_n|X_{n-1}, \ldots, X_1).$$
That is, the entropy of a collection of random variables is the sum of the conditional entropies. In particular, given three variables,
it holds that:
%
\begin{equation*}
\begin{split}
H(X_1, X_2, X_3) & = H(X_1) + H(X_2, X_3|X_1) \\
& = H(X_1) + H(X_2|X_1) + H(X_3|X_2,X_1) \\
& = H(X_1,X_2) + H(X_3|X_2,X_1).
\end{split}
\end{equation*}
It should also be noted that a sequence of random variables can be considered as a single vector-valued random variable; therefore, the joint probability distribution $p(\mathcal{X})$ can also be seen as the probability distribution $p(\mathbf{X})$ of the random vector $\mathbf{X}=[X_1, \ldots, X_n]$. This naturally carries over to the computation of the conditional entropy of a variable given a sequence of random variables.
\begin{definition}
\label{def:entropy-div}
Given a set of categorical attributes $\mathcal{A} = \{A_1, \ldots, A_m \}$ and associated profile set $\D$ for the nodes in a graph $\G_0=\langle \V, \E \rangle$, we define the {\em entropy-based diversity} of any $S \subseteq \V$ as:
\begin{equation}
\label{eq:entropy-div}
div(S) = H(X_1, \ldots, X_{|S|}) = \sum_{i=1}^{|S|} H(X_i | \mathbf{X}^{<i}),
\end{equation}
where $\mathcal{X}_S = \{X_i\}_{i=1..|S|}$ is the set of discrete random variables corresponding to nodes in $S$,
$\mathbf{X}^{<i}$ denotes the vector of variables $X_1, \ldots, X_{i-1}$, and
\begin{equation}
\begin{split}
H(X_i|\mathbf{X}^{<i}) & = -\!\!\sum_{x \in \{0,1\}^{i-1}} p(\mathbf{X^{<i}}\!=\!x) \\
& \times \sum_{x_i \in \{0,1\}} p(x_i|\mathbf{X}^{<i}\!=\!x) \log p(x_i|\mathbf{X}^{<i}\!=\!x) \\
& = - \sum_{x \in \{0,1\}^{i-1}} p(\mathbf{X}^{<i}\!=\!x) \times H(X_i|\mathbf{X}^{<i}\!=\!x). \nonumber
\end{split}
\end{equation}
\vspace{-4mm}
\hfill~\qed
\end{definition}
In the above equation, note that the enumeration of 0-1 tuples of length $i$ is only limited to the joint variable combinations corresponding to the attribute-values occurring in $\D$, whereas for all other attribute-values $a'$ not in $\D$, the same tuple of all zeros is associated with the sum of probabilities of $a'$ in $\D$.
The following fact states that the entropy-based diversity function allows for an incremental computation of a node's marginal gain.
\begin{fact}
The marginal gain of adding a node $v$ to $S$ is equal to the conditional entropy $H(X_{|S|+1} \ |\ \mathbf{X}^{<|S|+1})$.
\end{fact}
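Since every value $a \in dom$ induces the binary pattern
$(X_1(a), \ldots, X_{|S|}(a))$, the joint entropy in
Eq.~(\ref{eq:entropy-div}) can be computed by aggregating the value
probabilities per pattern, as in the following illustrative Python sketch
(our own naming; probabilities are the relative frequencies of the values in
$\D$ and are assumed to sum to 1):
\begin{verbatim}
import math
from collections import Counter

def joint_entropy(selected_profiles, value_prob):
    """H(X_1, ..., X_n), with X_i the 0/1 indicator of node i's profile.

    selected_profiles: list of sets of attribute values (one per node);
    value_prob: dict mapping each value a in dom to its relative
    frequency in D. Entropy in bits; any base works up to a constant.
    """
    pattern_prob = Counter()
    for a, p in value_prob.items():
        # Binary pattern induced by drawing value a.
        pattern = tuple(int(a in prof) for prof in selected_profiles)
        pattern_prob[pattern] += p
    return -sum(p * math.log2(p) for p in pattern_prob.values() if p > 0)

def entropy_gain(selected_profiles, value_prob, new_profile):
    """Marginal gain of adding a node: the conditional entropy of its
    variable given the already selected ones (see the fact above)."""
    return (joint_entropy(selected_profiles + [new_profile], value_prob)
            - joint_entropy(selected_profiles, value_prob))
\end{verbatim}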
\begin{proposition}
The entropy-based diversity function defined in Eq.~(\ref{eq:entropy-div}) is monotone and submodular.
\end{proposition}
\vspace{-2mm}
\subsection{Class-based diversity}
\label{sec:diversity:class}
We now introduce a subclass of diversity functions which differs from the ones previously described in that it exploits a-priori knowledge on a grouping of the node profiles. This might be particularly relevant in scenarios where we are interested in distinguishing the nodes based on a coarser grain than their individual profiles. An available organization of the profiles into categorically-cohesive groups could reflect some predetermined equivalence classes of the profiles w.r.t. a given schema of attributes $\A$. (This in principle also includes the opportunity of defining profile groups based on the availability of a \textit{community structure} over the set of nodes in the network.)
A simple yet efficient approach to measure diversity based on the exploitation of profile groups is to cumulate the \textit{selection rewards} for choosing nodes with a profile that belongs to any given class.
\begin{definition}
\label{def:partition-div}
Given a set of categorical attributes $\mathcal{A} = \{A_1, \ldots, A_m \}$ and associated profile set $\D$ for the nodes in a graph $\G_0=\langle \V, \E \rangle$, we define the {\em class-based diversity} of any $S \subseteq \V$ as:
\begin{equation}
\label{eq:partition-div}
div(S) = \sum_{l=1..h} \mathrm{f}(\sum\limits_{v_j \in C_l \cap S} r_j)
\end{equation}
\noindent
where $\mathcal{C} = \{C_1, \ldots, C_h\}$ is a partition of $\D$ (i.e., $\bigcup_{l=1}^h C_l = \D$, and $C \cap C' = \emptyset$, for each $C,C' \in \mathcal{C}$, with $C \neq C'$),
$\mathrm{f}: \mathbb{R} \mapsto \mathbb{R}$ is any non-decreasing concave function, and $r_j >0$ is the selection reward for $v_j \in \V$.
\hfill~\qed
\end{definition}
The effect of $\mathrm{f}$ is that repeatedly selecting nodes of the
same class yields increasingly diminished gains.
In fact, since $\mathrm{f}$ is nonnegative concave and $\mathrm{f}(0)\geq 0$, $\mathrm{f}$ is also \textit{subadditive} on $\mathbb{R}^+$, i.e.,
%
$
\sum_{i} \mathrm{f}(x_i) \geq \mathrm{f}\big(\textstyle\sum_{i} x_i\big), \mbox{ for all } x_i \geq 0.
$
%
%
Therefore, adding (to the set $S$ being discovered) a node from a different class is preferable in terms of marginal gain than adding a node from an already covered class.
%
Example instances of $\mathrm{f}(x)$ are $\sqrt{x}$ and $\log(1+x)$, but any other non-decreasing concave function can in principle be adopted.
We now provide the lower bound and upper bound of Eq.~(\ref{eq:partition-div}) when the logarithmic function is adopted.
\begin{proposition}
Given a budget $k$ and $h$ classes,
the function in Eq.~(\ref{eq:partition-div}), equipped with $\mathrm{f}(x)=\log(1+x)$, with $r_j = 1, \forall v_j \in \V$, achieves the minimum value of $\log(1+k)$ when all $k$ nodes belong to the same class (i.e., 1 class covered), and the maximum value of $k$ when all $k$ nodes belong to different classes (i.e., $k$ classes covered).
\end{proposition}
Again, the above defined function enables an incremental computation of the marginal gain of any node.
\begin{fact}
The marginal gain of adding a node $v$ to $S$, with $v$ belonging to class $C_l$, is equal to $\log(1+ r/R_l)$, where $r$ is the reward of adding $v$ and $R_l$ is one plus the sum of rewards of nodes in $S$ that belong to class $C_l$.
\end{fact}
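The following Python sketch (again with hypothetical names) evaluates
Eq.~(\ref{eq:partition-div}) with $\mathrm{f}(x) = \log(1+x)$, together with
the incremental gain stated in the fact above:
\begin{verbatim}
import math

def class_div(selected, node_class, reward):
    """Class-based diversity: sum over classes of log(1 + reward mass)."""
    per_class = {}
    for v in selected:
        c = node_class[v]
        per_class[c] = per_class.get(c, 0.0) + reward[v]
    return sum(math.log1p(x) for x in per_class.values())

def class_gain(class_totals, node_class, reward, v):
    """Gain of adding v: log(1 + r / R_l), with R_l equal to one plus
    the reward mass already selected from v's class."""
    R = 1.0 + class_totals.get(node_class[v], 0.0)
    return math.log1p(reward[v] / R)
\end{verbatim}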
\begin{proposition}
The class-based diversity function defined in Eq.~(\ref{eq:partition-div}) is monotone and submodular.
\end{proposition}
\section{A RIS-based framework for the ADITUM problem}
\label{sec:framework}
We develop our framework for the \myalgo problem based on the \textbf{R}everse \textbf{I}nfluence \textbf{S}ampling (RIS) para\-digm first introduced in~\cite{doi:10.1137/1.9781611973402.70} and recognized as the state-of-the-art approach for IM problems.
%
%
The breakthrough study by Borgs et al.~\cite{doi:10.1137/1.9781611973402.70} overcomes the limitations of a greedy, Monte Carlo based approach to IM by proposing
a novel solution based on the following two concepts.
%
Given the diffusion graph $\G$ with node set $\V$ and edge set $\E$, let $G$ be an instance of $\G$ obtained by removing each edge $e \in \E$ with probability $1-b(e)$. The \textit{reverse reachable set} (RR-Set) rooted in $v$ w.r.t. $G$ contains all the nodes that can reach $v$ in $G$, i.e., the nodes reachable from $v$ when traversing edges backward.
%
A \textit{random RR-Set} is any RR-Set generated on an instance $G$,
for a node selected uniformly at random from $G$.
The key idea of the RIS framework is that the more often a node $u$ appears in random RR-Sets rooted in $v$, the higher the probability that $u$, if
selected as a seed node, will activate $v$.
The design of the RIS framework follows a \textit{two-phase schema}~\cite{doi:10.1137/1.9781611973402.70}:
(1) Generate a certain number of random RR-Sets,
and (2) Select as seeds the $k$ nodes that cover the most RR-Sets. (The latter step can be solved by using any greedy algorithm for the Maximum Coverage problem.)
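To fix ideas, a minimal Python sketch of this vanilla schema is given below
(our own naming; the live-edge sampling with independent edge probabilities
corresponds to the Independent Cascade instance of the triggering model, and
the \myalgo variant of Sect.~\ref{sec:algo:revisiting} will replace the
uniform root choice with target-weighted sampling):
\begin{verbatim}
import random
from collections import defaultdict

def random_rr_set(nodes, in_edges, b, rng):
    """One random RR-Set: reverse BFS from a uniformly chosen root,
    sampling each incoming edge (u, v) as live with probability b."""
    root = rng.choice(nodes)
    rr, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for u in in_edges[v]:
            if u not in rr and rng.random() < b[(u, v)]:
                rr.add(u)
                frontier.append(u)
    return rr

def ris_seed_selection(rr_sets, k):
    """Phase (2): greedy Maximum Coverage over the sampled RR-Sets."""
    covers = defaultdict(set)   # node -> indices of RR-Sets it covers
    for i, rr in enumerate(rr_sets):
        for u in rr:
            covers[u].add(i)
    seeds, covered = [], set()
    for _ in range(k):
        if not covers:
            break
        best = max(covers, key=lambda u: len(covers[u] - covered))
        seeds.append(best)
        covered |= covers.pop(best)
    return seeds
\end{verbatim}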
Based on RIS, Tang et al.~\cite{Tang:2014:IMN:2588555.2593670} developed the \algo{TIM} and \algo{TIM+} algorithms,
which achieve a $(1-1/e-\epsilon)$-approximation
with probability at least $1-|\V|^{-l}$ (by default, $l=1$) in time
$O((k+l)(|\E|+|\V|) \log |\V| /\epsilon ^2)$.
\algo{TIM}/\algo{TIM+} works in two major stages: \textit{parameter estimation} and \textit{seed selection}. The first stage aims at deriving a lower-bound for the maximum expected spread a size-$k$ seed set can achieve, on which the number $\theta$ of random RR-Sets to be generated in the second stage depends; the latter stage essentially coincides with the second phase
of the RIS method.\footnote{\algo{TIM+} aims to improve upon \algo{TIM} by adding an intermediate step between parameter estimation and node selection, which heuristically refines $\theta$ into a tighter lower bound of the maximum expected influence of any size-$k$ node set. Also, in~\cite{IMM}, IMM is introduced to further speed up \algo{TIM+}.}
The effectiveness of \algo{TIM}/\algo{TIM+} is explained by Lemma 2 provided in~\cite{Tang:2014:IMN:2588555.2593670}, which states that, if $\theta$ is sufficiently
large, the fraction of random RR-Sets covered by any
seed set $S$ is a good and unbiased estimator of the average node-activation probability.
\vspace{-2mm}
\subsection{Proposed approach}
\label{sec:algo:revisiting}
\vspace{-2mm}
Our proposed RIS-based framework follows the typical two-phase schema; however, it originally embeds both the targeted nature and the diversity-awareness into an influence maximization task. To accomplish this, we revise the two-phase schema as follows.
\paragraph*{\bf Parameter estimation. \ }
We want to understand how much capital can be captured from a size-$k$ seed set. Therefore, to compute the number $\theta$ of RR-Sets, we need to identify a lower-bound on the maximum capital score.
We select a node $v$ as the root of an RR-Set with
probability $p(v) \propto t(v)$. Since we are interested in the activation
of the target nodes only, we set $p(v) = \frac{t'(v)}{\mathcal{T}_{TS}}$, where $\mathcal{T}_{TS} = \sum_{u \in TS} t(u)$,
and $t'(v) = t(v)$ if $v \in TS$ and $t'(v) = 0$ otherwise.
%
We leverage the \algo{TIM+} procedures \textsl{KPTEstimation} and \textsl{RefineKPT} in order to estimate
a lower-bound
for the expected spread achieved by an optimal seed set of size $k$.
More specifically, the first procedure generates a small number of RR-Sets upon which it provides an initial approximation, which is further improved
by the second procedure.
We borrowed these procedures from \algo{TIM+} since our capital function is contingent on the activation process; thus, we still need an unbiased estimator for the spread function.
In fact, any target node will contribute in terms of capital as long as it has been activated starting from the seed set.
The lower-bound on the expected spread allows us to derive
a lower-bound on the average activation probability, from which we
compute the \textit{expected capital} score of a seed set as
\vspace{-1.5mm}
\begin{equation} \label{eq:exp_capital_estimation}
\mathbb{E}[C(S)] = \mathcal{T}_{TS} (\mathbb{E}[\mu(S)])/|\V|.
\end{equation}
%
Above, the rightmost factor is the average fraction of the
total capital score, denoted by $\mathcal{T}_{TS}$, that the seed set $S$ is able to capture.
Moreover, since every random RR-Set is rooted in a target node, the aforementioned Lemma 2~\cite{Tang:2014:IMN:2588555.2593670} ensures that
$\mathbb{E}[\mu(S)]/|\V|$ is very close
to the average activation probability of the target nodes.
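In code, the estimator of Eq.~(\ref{eq:exp_capital_estimation}) amounts to rescaling the fraction of covered RR-Sets by the total target score; a minimal sketch (variable names are ours):
\begin{verbatim}
def expected_capital(rr_sets, seeds, total_target_score):
    # The fraction of random RR-Sets hit by the seeds estimates the average
    # activation probability of the target nodes (Lemma 2 of Tang et al.);
    # rescaling by T_TS yields the expected capital estimate.
    covered = sum(1 for rr in rr_sets if rr & seeds)
    return total_target_score * covered / len(rr_sets)
\end{verbatim}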
\paragraph*{\bf Seed selection. \ }
Once all $\theta$ RR-Sets have been computed, this stage is in charge of detecting the $k$ seeds. To this end, we also need to account for a notion of set-diversity when choosing the candidate seeds.
The selection of the
best seeds is accomplished in a greedy fashion, one seed at a time. A node $v$ is associated with a linear combination of (i) the \textit{node's capital score}, obtained by summing the target scores of the roots of the RR-Sets to which $v$ belongs and that are not already covered by seeds, and (ii) the \textit{node's diversity score}, which corresponds to the node's marginal gain for the diversity function w.r.t. the current seed set.
\begin{algorithm}[t!]
\small
\caption{{\bf A}ttribute-based {\bf DI}versity-sensitive {\bf T}argeted Infl\textbf{U}ence {\bf M}aximization (\myalgo)}
\label{alg:ris-dtim}
\begin{algorithmic}[1]
\Require A diffusion graph $\mathcal{G} = \langle \mathcal{V}, \mathcal{E}, b, t \rangle$ based on triggering model $\mathcal{M}$, a budget $k$, a target selection threshold $\tau_{TS} \in [0,1]$, a smoothing
parameter $\alpha \in [0,1]$.
\Ensure Seed set $S$ of size $k$.
\vskip 0.1em
\State $TS \gets \{v \mid t(v) \geq \tau_{TS} \}$\Comment{Select the target nodes}
\State Compute $\theta$ by using \algo{TIM+} procedures \textsl{KPTEstimation} and \textsl{RefineKPT}
\State $\mathcal{R} \gets \emptyset$
\For{$i \gets 1$ \textbf{to} $\theta$}
\State $R \gets \textsl{computeRandomRRSet}(TS, \mathcal{M}, i)$
\State $\mathcal{R} \gets \mathcal{R} \cup \{ R \}$
\EndFor
\State $S \gets \textsl{buildSeedSet}(\mathcal{R},k,\alpha)$\Comment{Seed Selection stage}\\
\Return $S$
\vskip 0.2cm
\Procedure{\textsl{computeRandomRRSet}}{$TS,\mathcal{M}, id$}
\State $R \gets \emptyset$\Comment{Initialize the RR-Set}
\State Select node $r \in TS$ as root, with probability $p(r) \propto t(r)$
\State $R.id \gets id, R.root \gets r$\Comment{Associate id and root to the RR-Set}
\State Add to $R$ the nodes that can reach $r$ according to live-edge model of $\mathcal{M}$ \\
\Return $R$
\EndProcedure
\vskip 0.2cm
\Procedure{\textsl{buildSeedSet}}{$\mathcal{R}, k, \alpha$}
\State $q \gets \emptyset$\Comment{Priority queue for lazy-greedy optimization}
\For{$v \in \V$}
\State $v.pushedC \gets \sum_{R \in \mathcal{R}(v)} t(root(R))$
\State $v.pushedD \gets \textsl{marginalGainInDiversity}(v, \emptyset)$
\State $q.add(\langle (\alpha \times v.pushedC + (1-\alpha) \times v.pushedD), v, 0 \rangle)$
\EndFor
\State $S \gets \emptyset, CS \gets \emptyset$
\Repeat
\State $\langle \symbOF\_val, v, it \rangle \gets q.removeFirst()$
\If{$it = |S|$}
\State $S \gets S \cup \{v \}, CS \gets CS \cup \mathcal{R}(v)$
\Else
\For{$R \in \mathcal{R}(v) \cap CS$}
\State $v.pushedC \gets v.pushedC - t(root(R))$
\State Remove $R$ from $\mathcal{R}(v)$
\EndFor
\State $v.pushedD \gets \textsl{marginalGainInDiversity}(v, S)$
\State $q.add(\langle (\alpha \times v.pushedC + (1-\alpha) \times v.pushedD), v, |S| \rangle)$
\EndIf
\Until{$|S| = k$}\\
\Return $S$
\EndProcedure
\end{algorithmic}
\end{algorithm}
{\bf Remarks.}
The objective function we seek to maximize is a linear combination
of two main quantities: the expected capital and the diversity of the seeds.
%
Note that there is a key difference between these two measures: the former is defined globally over the whole network, while the latter is limited to the seed nodes, namely the solution itself.
Our approach hence reflects this inherent interplay between capital and diversity.
In fact, the sampling procedure in the first stage, corresponding to the parameter estimation, is driven only by the capital score --
at that point there are no seeds upon which the diversity could be assessed -- whereas the diversity aspect comes into play only during the process of seed set formation, where it drives the discovery of the seeds.
\vspace{-2mm}
\subsection{The ADITUM algorithm}
\label{sec:algo:aris}
Algorithm~\ref{alg:ris-dtim} shows the pseudocode of our implementation of \myalgo.
The algorithm starts by identifying the target nodes \sline{1}, then
it infers the number $\theta$ of RR-Sets to be computed, according to the \algo{TIM+} subroutines for the estimation and refinement of $KPT$, i.e., the mean of the expected spread of possible size-$k$ seed sets \sline{2}. The $\theta$ RR-Sets are then generated by invoking the \textsl{computeRandomRRSet} procedure and stored in $\mathcal{R}$ \lines{4}{6}.
The procedure \textsl{buildSeedSet} eventually returns the size-$k$ seed set \lines{7}{8}.
%
In the following, we provide details about the two procedures.
\vspace{1mm}
Procedure \textsl{computeRandomRRSet} starts
by sampling no\-de $r$ as the root of $R$ from a distribution of probability proportional to the target-node scores \sline{11}.
Each RR-Set is associated with an integer identifier and the root node \sline{12} --- this information is needed since the capital associated with a set is given by the target score of its root.
Finally, an instance of the influence graph $G \sim \mathcal{G}$ is computed according to the live-edge model related to $\mathcal{M}$, then all the nodes that can reach $r$ in $G$
are inserted in the RR-Set to be returned.
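A possible Python rendering of \textsl{computeRandomRRSet}, assuming the Independent Cascade model as the triggering model $\mathcal{M}$ (the adjacency map and all names are our assumptions):
\begin{verbatim}
import random

def compute_random_rr_set(targets, t, in_neighbors, prob, set_id):
    # Sample the root r in TS with probability p(r) proportional to t(r).
    total = sum(t[v] for v in targets)
    root = random.choices(targets,
                          weights=[t[v] / total for v in targets])[0]
    # Backward BFS over a live-edge instance sampled on the fly.
    rr, frontier = {root}, [root]
    while frontier:
        v = frontier.pop()
        for u in in_neighbors[v]:
            if u not in rr and random.random() < prob[(u, v)]:
                rr.add(u)
                frontier.append(u)
    return {"id": set_id, "root": root, "nodes": rr}
\end{verbatim}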
\vspace{1mm}
Procedure \textsl{buildSeedSet}
exploits a priority queue $q$, which is initialized \sline{16} to store triplets comprising: the value of the linear combination of capital and diversity, the node, and the iteration to which the value refers. The triplets are ordered by decreasing values of the capital-diversity combination.
For each node $v$, its capital score is computed by summing the target scores of all nodes that are roots of an RR-Set $v$ belongs
to \sline{18}. Moreover, $v$'s diversity score is computed as its marginal gain for the $div$ function w.r.t. the current seed set \sline{19}; in particular, since the latter is initialized as empty, the initial diversity score of $v$ equals 1 (according to Eqs.~(3--4) of the main paper).
Once all the scores are computed, the procedure starts to select the seeds, by getting at each iteration the best triplet from the queue \sline{23}: if the choice is made at an iteration $it$ equal to the number of nodes currently in the seed set \sline{24}, then $v$ is
inserted in $S$, and all sets covered by $v$ are stored in
$CS$; otherwise, the scores of $v$ need to be recomputed.
By denoting with $\mathcal{R}(v)$ the set of random RR-Sets containing $v$, $v$'s capital score is decreased by the target score of
each node $r$ that is the root of an already covered RR-Set (i.e., a set in $\mathcal{R}(v) \cap CS$) \sline{28}, and this set is also removed from $\mathcal{R}(v)$ \sline{29}.
The diversity score also needs to be recomputed; finally, the updated triplet is inserted into the priority queue \lines{30}{31}.
The procedure loop ends when the desired size $k$ is reached for the seed set \sline{32}.
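The lazy-greedy logic of \textsl{buildSeedSet} can be transcribed with a max-heap as follows (a sketch: the diversity marginal-gain function is passed in as a callable, and the dictionary layout is ours):
\begin{verbatim}
import heapq

def build_seed_set(rr_sets, k, alpha, t, marginal_gain_in_diversity):
    # rr_sets: list of dicts {"root": r, "nodes": set(...)} as sampled above.
    sets_of = {}                   # node -> indices of RR-Sets containing it
    for i, rr in enumerate(rr_sets):
        for u in rr["nodes"]:
            sets_of.setdefault(u, set()).add(i)
    heap = []                      # max-heap via negated scores
    for v, idx in sets_of.items():
        cap = sum(t[rr_sets[i]["root"]] for i in idx)
        div = marginal_gain_in_diversity(v, set())
        heapq.heappush(heap, (-(alpha * cap + (1 - alpha) * div), v, 0, cap))
    seeds, covered = set(), set()
    while len(seeds) < k:
        _, v, it, cap = heapq.heappop(heap)
        if it == len(seeds):       # score computed w.r.t. current S: take v
            seeds.add(v)
            covered |= sets_of[v]
        else:                      # stale entry: discount covered sets, re-push
            for i in sets_of[v] & covered:
                cap -= t[rr_sets[i]["root"]]
            sets_of[v] -= covered
            div = marginal_gain_in_diversity(v, seeds)
            heapq.heappush(heap, (-(alpha * cap + (1 - alpha) * div),
                                  v, len(seeds), cap))
    return seeds
\end{verbatim}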
\begin{proposition}
\myalgo runs in $O((k+l)(|\E|+|\V|)$ $\log |\V| /\epsilon ^2)$ time and returns a $(1-1/e-\epsilon)$-approximate solution
with at least $1-|\V|^{-l}$ probability.
\end{proposition}
\iffalse
{\em Proof sketch.\ }
\myalgo is developed under the RIS framework and follows the typical two-phase schema of \algo{TIM}/ \algo{TIM+} methods, i.e., parameter estimation and (seed) node selection, for which the theoretical results in the Proposition hold.
Due to the targeted nature of the problem under consideration, the expected capital must be computed in place of the expected spread; however, this only implies to choose a distribution over the roots of the RR-Sets, which depends on the target scores of the nodes in the network. Thus, the asymptotic complexity of \algo{TIM}/\algo{TIM+} is not increased.
Moreover, two major differences occur in the
seeds selection phase of
\myalgo w.r.t. \algo{TIM}/\algo{TIM+}, i.e.,
the lazy forward approach and the computation of the
marginal gain w.r.t. the diversity function.
However, both aspects do not affect the asymptotic complexity, since the former allows saving runtime only and the latter does not represent any overhead (computing a node's marginal gain is made in nearly constant time, for each of the diversity functions).
Therefore, we can conclude that \myalgo has the same asymptotic complexity of \algo{TIM}/\algo{TIM+}.
\hfill~$\blacksquare$
\fi
\section{Evaluation methodology}
\label{sec:eval}
\begin{table*}[t!]
\caption{Summary of evaluation network data.}
\label{tab:graph:properties}
\scalebox{0.84}{
\begin{tabular}{|c||c|c|c|c|c|c|c|c|}
\hline
network & \#nodes & \#edges & avg. & avg. & clust. & assort. & \#sources & \#sinks \\
& & & in-degree & path length & coeff. & & & \\
\hline\hline
FriendFeed & 493\,019 & 19\,153\,367 & 38.85 & 3.82 & 0.029 & -0.128 & 41\,953 & 292\,003 \\
\hline
GooglePlus & 107\,612 & 13\,673\,251 & 127.06 & 3.32 & 0.154 & -0.074 & 35\,341 & 22 \\
\hline
Instagram & 17\,521 & 617\,560 & 35.25 & 4.24 & 0.089 & -0.012 & 0 & 0 \\
\hline
MovieLens & 943 & 229\,677 & 243.5 & 1.87 & 0.752 & -0.323 & 1 & 1 \\
\hline
Reddit & 11\,224 & 91\,924 & 8.18 & 4.11 & 0.083 & -0.072 & 0 & 0 \\
\hline
\end{tabular}
}
\end{table*}
\subsection{Data}
\label{sec:data}
\vspace{-1.5mm}
We used five real-world OSN datasets, namely
\textit{FriendFeed}~\cite{8326536},
\textit{GooglePlus}~\cite{8326536}, \textit{Instagram}~\cite{8326536}, \textit{MovieLens}~\cite{6921625}, and
\textit{Reddit}~\cite{kumar2018community}.
Table~\ref{tab:graph:properties} shows main statistics about the evaluation networks.
%
It should be emphasized that our choice of datasets was driven by the following reasons:
\begin{itemize}
\item \textit{reproducibility}, i.e., all of the networks are publicly available;
\item \textit{diversification} of the evaluation scenarios, which include user engagement and item recommendation;
\item \textit{continuity} w.r.t. previous studies;
\item \textit{fair comparative evaluation}, i.e., we based our choice also on the competing methods included in our evaluation, so as to enable a fair comparison between them and our \myalgo.
\end{itemize}
FriendFeed, GooglePlus, and Instagram network datasets refer to OSNs previously studied in a \textit{user engagement} scenario, which has been recognized as an important case in point for demonstrating targeted IM tasks~\cite{8326536}.
For each of these networks, the meaning of any directed edge $(u,v)$ is that user $v$ is ``consuming'' information received from $u$ (e.g., $v$ likes/comments/rates a $u$'s media post).
%
No side information is originally provided with these datasets, therefore we synthetically generated the user profiles as follows:
given $m$ categorical attributes, each with $n_i$ admissible values ($i=1..m$), we associated each user with a set of values sampled from either a uniform or an exponential (with $\lambda=1$) distribution. We set $m=n_i=10$.
We used these datasets also for comparison with \algo{DTIM}.
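For reproducibility, this profile generation can be sketched as follows (our own integer coding of attribute values; the discretization of the exponential draws is an assumption):
\begin{verbatim}
import numpy as np

def generate_profiles(num_nodes, m=10, n_values=10, law="exponential", seed=0):
    # Each node gets m categorical attributes, each taking one of n_values
    # value ids, drawn from a uniform or an exponential (lambda=1) law.
    rng = np.random.default_rng(seed)
    if law == "uniform":
        return rng.integers(0, n_values, size=(num_nodes, m))
    raw = rng.exponential(scale=1.0, size=(num_nodes, m))
    return np.minimum(raw.astype(int), n_values - 1)  # clip to value range
\end{verbatim}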
\iffalse
We did not set the variance of the distribution with greater values because we observe that, as the shape of the distribution become closer to be uniform, the task of maximizing the diversity becomes gradually easier.
\fi
Originally used for \textit{movie recommendation}, MovieLens is associated with a (user, movie-genre) rating matrix storing the number of movies each user rated for each genre, at any given time over a predefined observation period.
This dataset was previously included in the evaluation of our competitor \algo{Deg-D}.
To enable \myalgo to work on MovieLens, we mapped
each genre to an attribute, with unique rating-values as corresponding attribute-values.
%
The MovieLens network was built so as to have users as nodes; a directed edge $(u,v)$ is drawn if user $u$ rated \textit{first} at least 10 movies in common with $v$ (timestamps are available in the original data).
The Reddit network represents the directed connections between two subreddits, i.e., communities on the Reddit platform. Each connection refers to a post in the source community that links to a post in the target community.
From the original network, we kept only the connections for which the source post is explicitly positive towards the target post, and finally extracted the largest strongly connected component to overcome sparsity issues.
Reddit connections are also rich in terms of numerical attributes associated with each source post, which include both lexical and sentiment information. We selected 11 attributes which appeared to be the most informative for influence propagation reasons.\footnote{We selected the \textsc{POST\_PROPERTIES} attributes corresponding to the following identifiers: 19, 20, 21, 43, 44, 45, 46, 51, 52, 53, 66. }
To generate the profile of each node (community), we grouped the posts by community and summed up the scores for each attribute; finally, the values of each attribute were discretized through a 10-quantile binning scheme.
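A pandas sketch of this aggregation-and-binning step (column names are hypothetical):
\begin{verbatim}
import pandas as pd

def build_community_profiles(posts, attr_cols, n_bins=10):
    # Group posts by source community, sum each attribute's scores, then
    # discretize every attribute via 10-quantile binning.
    agg = posts.groupby("source_community")[attr_cols].sum()
    for col in attr_cols:
        agg[col] = pd.qcut(agg[col], q=n_bins, labels=False,
                           duplicates="drop")  # guard non-unique bin edges
    return agg
\end{verbatim}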
\vspace{-2.5mm}
\subsection{Settings}
\label{sec:settings}
\vspace{-1.5mm}
We considered \myalgo instantiations with each of the definitions of diversity proposed in Sect.~\ref{sec:diversity}. Hereinafter, we will use notations $div^{(AW)}$, $div^{(H)}$, $div^{(E)}$, $div^{(C)}$ to refer to the attribute-wise, Hamming-, entropy-, and class-based definitions, respectively. When using $div^{(H)}$, we set
the radius $\xi$ of the Hamming balls within $\{1,3,5\}$.
We experimentally varied the setting of \myalgo parameters:
the seed-set size $k$, within [5..50],
the smoothing parameter $\alpha$, from 0 to 1 with step 0.1,
and the target selection threshold $\tau_{TS}$; the latter was controlled in terms of percentage of top-values from the target score distribution, thus we selected target sets corresponding to the top-$\{5\%,10\%,25\%\}$.
We used the default $\epsilon=0.1$ for the approximation-guarantee in
the parameter estimation phase.
Concerning the edge weighting function ($b$) and the node weighting function ($t$), we devised the following settings:
\vspace{-1mm}
\begin{description}
\item[(S1)]
The first setting refers to the basic, non-targeted setting adopted in~\cite{6921625}, i.e., $b(u,v)=1/n_v$, with $n_v$ the number of $v$'s in-neighbors, and $t(u)=1$, for all $u,v$ in $\V$. We used this setting for the MovieLens evaluation.
\item[(S2)]
The second setting refers to Reddit, for which the influence weights are set to be proportional to the amount of interactions between communities:
for any two nodes $u$ and $v$, $b(u,v) = P_{uv}/P_v$, where $P_{uv}$ is the number of posts of $u$ towards $v$, and $P_v$ is the total number of posts having $v$ as target (see the sketch after this list).
The node weighting function is here simply defined as the in-degree function, in order to mimic a scenario of influence targeting corresponding to communities that are highly popular in terms of post recipients.
\item[(S3)]
The third setting refers to a user engagement scenario and applies to FriendFeed, GooglePlus and Instagram, which were previously used in that context~\cite{8326536}.
User engagement is addressed as a topology-driven task for encouraging silent users, a.k.a. ``lurkers'', to return their acquired social capital through more active participation in the community life. Note that such users are effective members of an OSN who are not actively involved in tangible content production and sharing with other users in the network, but rather are information consumers.
Given this premise, in~\cite{8326536} a specific instance of targeted IM is developed such that lurkers are regarded as the target of the diffusion process. Therefore, the user engagement task becomes: Given a budget $k$, to find a set of $k$ nodes that are capable of maximizing the likelihood of ``activating'' (i.e., engaging) the target lurkers.
In this context, the two weighting functions rely on a pre-existing solution of a \textit{lurker} \textit{ranking} algorithm applied to the social graph.
The intuition is as follows (the interested reader is referred to \cite{8326536} for analytical details about these functions):
For any node $v$, the node weight $t(v)$ indicates the status of $v$ as a lurker, so that the higher the lurker ranking score of $v$, the higher $t(v)$ should be; for any edge $(u,v)$, the weight $b(u,v)$ measures how much node $u$ has contributed to $v$'s lurking score calculated by the lurker ranking algorithm, which resembles a measure of ``influence'' exerted by $u$ on $v$.
\end{description}
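As a concrete rendering of setting (S2), the following sketch derives the edge and node weights from a list of (source, target) post pairs; the input format is our assumption:
\begin{verbatim}
from collections import Counter

def reddit_weights(post_edges):
    # Setting (S2): b(u, v) = P_uv / P_v, with P_uv the number of posts of
    # u towards v and P_v the total number of posts having v as target;
    # t(v) is the in-degree of v (number of distinct in-neighbors).
    p_uv = Counter(post_edges)                    # (u, v) -> #posts
    p_v = Counter(v for _, v in post_edges)       # v -> #incoming posts
    b = {(u, v): n / p_v[v] for (u, v), n in p_uv.items()}
    t = Counter(v for (u, v) in set(post_edges))  # distinct in-neighbors
    return b, dict(t)
\end{verbatim}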
\vspace{-2mm}
\subsection{Competing methods}
\vspace{-1mm}
The closest methods to \myalgo are \algo{DTIM}~\cite{8326536} and \algo{Deg-D}~\cite{6921625}.
As previously mentioned,
\algo{DTIM} addresses targeted IM, but it considers topology-driven notions of diversity only; conversely, \algo{Deg-D} utilizes side-information-based diversity, but it assumes a numerical representation of node attributes and the addressed problem is not targeted.
We next provide details on the objective function of \algo{Deg-D} and \algo{DTIM}.
The objective function in \algo{DTIM}~\cite{8326536} shares the capital term with \myalgo, which is however combined with a diversity term defined as $\sum_{s \in S} \sum_{v \in TS} div_v(s)$, i.e., as the sum of diversity scores that each seed has in relation with each of the target nodes, where $div_v(\cdot)$ is either the global topology-driven or the local topology-driven diversity function~\cite{8326536}.
\algo{Deg-D}~\cite{6921625} follows a simple greedy scheme to maximize the objective function
$(1-\gamma) \sum_{u \in S} deg(u) + \gamma D(S)$,
where $deg(u)$ denotes the out-degree of node $u$, while $D(S)$ represents
the diversity of the set $S$, whose value is given by:
$D(S) = \sum_{m=1}^{M} f(\sum_{u \in S} \omega_{um} \times g(u))$,
where $M$ denotes a given number of types of external information,
$\gamma$ is a smoothing parameter,
$\omega_{um} \in [0,1]$ is a real-valued coefficient expressing the preference of node $u$ toward type $m$,
$f$ denotes any nondecreasing concave function (with default form set to $f(x)=\log(1+x)$),
whereas $g$ is a function defined for each node $u$, either as $g(u)=1$ or $g(u)=deg(u)$; the two different definitions of $g$ lead to the variants named \algo{Deg-DU} and \algo{Deg-DW}, respectively.
Note that, compared to $\alpha$ in \myalgo, $\gamma$ in \algo{Deg-D} has an opposite role,
therefore we set $\gamma = 1-\alpha$ in all the experiments.
%
Moreover,
\algo{Deg-D} requires a numeric vector of size $M$ to be associated with each node.
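For reference, the \algo{Deg-D} objective can be coded directly from the definition above (a sketch; \texttt{omega} maps nodes to length-$M$ preference vectors, our naming):
\begin{verbatim}
import math

def deg_d_objective(S, out_degree, omega, gamma, weighted=False):
    # (1 - gamma) * sum_{u in S} deg(u) + gamma * D(S), where
    # D(S) = sum_m f(sum_{u in S} omega[u][m] * g(u)), f(x) = log(1 + x),
    # and g(u) = deg(u) for Deg-DW or g(u) = 1 for Deg-DU.
    M = len(next(iter(omega.values())))
    g = (lambda u: out_degree[u]) if weighted else (lambda u: 1)
    D = sum(math.log(1 + sum(omega[u][m] * g(u) for u in S))
            for m in range(M))
    return (1 - gamma) * sum(out_degree[u] for u in S) + gamma * D
\end{verbatim}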
To enable a comparison with \algo{DTIM}, we integrated its global
topology-driven diversity function into our RIS-based framework, following the guidelines provided in~\cite{8326536}.
As concerns \algo{Deg-D}, we also had to account for the different (i.e., numerical) representation of side-information by \algo{Deg-D}. Thus, we devised two settings:
\begin{itemize}
\item
Integration of the \textit{uniform} and \textit{weighted} functions, i.e., \algo{Deg-DU} and \algo{Deg-DW}, resp., into our RIS-based framework, upon numerical representation of nodes' attributes;
\item
Comparison of the two methods: \myalgo upon categorical representation derived from a numerical representation of nodes' attributes vs. \algo{Deg-DU} and \algo{Deg-DW} upon normalized numerical representation.
\end{itemize}
%
\section{Experimental results}
\label{sec:results}
\paragraph*{\bf Goals.\ \ }
We pursued four main goals of experimental evaluation, around which we organize the presentation of our results.
First, we want to assess the significance of the estimation of capital produced by \myalgo (Sect.~\ref{sec:results:capital}).
Second, we want to understand the effect of the three
proposed definitions of diversity on the solutions provided by \myalgo (cf. Sect.~\ref{sec:results:diversity}).
Third, we analyze the sensitivity of \myalgo w.r.t. its various parameters and the attributes' distributions (Sect.~\ref{sec:results:seedset}).
Fourth, we comparatively evaluate \myalgo with the competing methods
\algo{DTIM} and \algo{Deg-D} (Sects.~\ref{sec:results:comparison_DTIM} and~\ref{sec:results:comparison_Tang}).
\vspace{-2mm}
\subsection{Capital estimation}
\vspace{-2mm}
\label{sec:results:capital}
To begin with, we analyzed the correctness of the RIS-based estimation of the capital captured by the seeds discovered by \myalgo, cf. Eq.~(\ref{eq:exp_capital_estimation}).
To this purpose, we compared the \myalgo capital estimation (with $\alpha=1$)
with the capital scores obtained from a Monte Carlo simulation ($10\,000$ runs).
As shown in Fig.~\ref{fig:estimation}, for top-25\% target selection and varying $k$, the two capital estimations are practically identical (i.e., the relative error is almost zero), even for higher $k$. The same holds for the other settings of target selection.
This confirms the correctness of the RIS-based estimation of capital in \myalgo.
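The Monte Carlo baseline used for this check can be sketched as a forward IC-style simulation (our rendering; $10\,000$ runs as in the experiments):
\begin{verbatim}
import random

def mc_capital(seeds, out_neighbors, prob, t, targets, runs=10_000):
    # Average, over independent live-edge instances, of the total target
    # score captured by the nodes activated from the seed set.
    total = 0.0
    for _ in range(runs):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in out_neighbors[u]:
                if v not in active and random.random() < prob[(u, v)]:
                    active.add(v)
                    frontier.append(v)
        total += sum(t[v] for v in active & targets)
    return total / runs
\end{verbatim}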
\begin{figure}[t]
\centering
\begin{tabular}{@{\hskip -2.4mm}c@{\hskip -2.3mm}c@{\hskip -2.3mm}c@{\hskip -2.3mm}c}
\includegraphics[width=.27\linewidth]{IG_risvsmc_capital_estimation_top25.pdf} &
\includegraphics[width=.27\linewidth]{FF_risvsmc_capital_estimation_top25.pdf} &
\includegraphics[width=.27\linewidth]{GPLUS_risvsmc_capital_estimation_top25.pdf} &
\includegraphics[width=.27\linewidth]{REDDIT_risvsmc_capital_estimation_top25.pdf} \\
(a) Instagram & (b) FriendFeed &
(c) GooglePlus & (d) Reddit \\
\end{tabular}
\caption{Capital estimation for seed sets obtained by \myalgo: RIS-based estimation by \myalgo vs. estimation by Monte Carlo simulations, with top-25\% target selection.}
\label{fig:estimation}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{@{\hskip -3mm}c@{\hskip 0mm}c@{\hskip -2mm}c@{\hskip -2mm}c}
\includegraphics[width=.26\linewidth, height=3cm]{IGentropy_line_alpha00_plottop25_.pdf} &
\includegraphics[width=.27\linewidth, height=3cm]{FFentropy_line_alpha00_plottop25_.pdf} &
\includegraphics[width=.27\linewidth, height=3cm]{GPLUSentropy_line_alpha00_plottop25_.pdf} &
\includegraphics[width=.26\linewidth, height=3cm]{REDDITentropy_line_alpha00_plottop25.pdf} \\
(a) Instagram & (b) FriendFeed & (c) GooglePlus & (d) Reddit \\
\end{tabular}
\caption{Entropy of the seed sets obtained by \myalgo for various diversity functions, with top-25\% target selection and
$\alpha=0$.}
\label{fig:entropy}
\end{figure}
\vspace{-1mm}
\subsection{Effect of the diversity functions}
\label{sec:results:diversity}
To understand the impact of the diversity notion on the \myalgo performance, we inspected the degree of diversification induced by each of the functions described in Sect.~\ref{sec:diversity}.
In particular, we first measured the entropy of the distribution of attribute-values associated to the profile set of seeds, i.e.,
%
%
$$Entropy(S) = -\!\!\!\!\sum_{a \in dom(S)} \frac{n_a}{\sum_{a' \in dom(S)} n_{a'}} \log\bigg(\frac{n_a}{\sum_{a' \in dom(S)} n_{a'}}\bigg),
$$
where $n_a$ denotes the number of occurrences of attribute-value $a$ in the profiles of the seeds.
Then, we multiplied the value of $Entropy(S)$ by a factor
$\zeta = (1+\log(|dom|/|dom(S)|))^{-1}$ that penalizes more for smaller fractions of attribute-values covered by the profile set of $S$.
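In code, the smoothed measure reads as follows (a sketch; profiles are tuples of attribute values, and we key each value by its attribute index to keep values of different attributes distinct, which is our assumption on $dom$):
\begin{verbatim}
import math
from collections import Counter

def smoothed_entropy(profiles, dom_size):
    # Entropy of the attribute-value distribution over the seed profiles,
    # multiplied by zeta = (1 + log(|dom| / |dom(S)|))^{-1}.
    counts = Counter((i, a) for p in profiles for i, a in enumerate(p))
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    zeta = 1.0 / (1.0 + math.log(dom_size / len(counts)))
    return zeta * h
\end{verbatim}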
Results shown in Fig.~\ref{fig:entropy} indicate that
$div^{(AW)}$ generally yields seed sets with higher
entropy than the other diversity functions --- in fact, to maximize $div^{(AW)}$, \myalgo tends to favor a uniform distribution of the attribute-values over the seed set.
Also, $div^{(AW)}$ achieves higher coverage of the attribute domains (i.e., a weaker penalization).
The second best diversity function is the entropy-based one, $div^{(E)}$, which shows trends similar to $div^{(AW)}$.
Conversely, $div^{(C)}$ and $div^{(H)}$ lead to less diversified seed sets. This is actually not surprising since the class-based notion of diversity relies on the grouping of the profiles (i.e., coarser grain than at attribute-value level) and it is maximized when all profiles in $S$ are chosen from different classes (i.e., $k\equiv h$, cf. Sect.~\ref{sec:diversity:class}), regardless of the distribution of their constituent attribute-values.
In this regard, we further investigated how the combination
of the budget $k$ and the number of classes (into which the profile set is partitioned) affects the diversity value.
Fig.~\ref{fig:surface} shows that $div^{(C)}$ increases more rapidly with the increase in the number of classes w.r.t. $k$.
Also, the Hamming-based diversity, $div^{(H)}$,
consistently behaves worse
than $div^{(AW)}$ and $div^{(E)}$, while it is comparable to $div^{(C)}$ for higher radius.
Indeed, $div^{(H)}$ strongly depends on the setting of the radius: as expected, the diversity increases for higher values of the radius $\xi$.
This is explained since
Eq.~(\ref{def:hamming-div}) increases as the union of the Hamming balls of the nodes in the seed set grows; however, setting $\xi=1$ leads to Hamming balls containing nodes that are not really
different from each other. As a consequence, Eq.~(\ref{def:hamming-div}) can be deceived, because
a huge Hamming ball may correspond to a poorly diversified seed set.
In the rest of the result presentation, we will refer to the attribute-wise diversity only. Our justification is that $div^{(AW)}$ (i) has shown effectiveness in the diversification of the seed set that is as good as or better than $div^{(E)}$, while outperforming $div^{(C)}$ and $div^{(H)}$,
(ii) allows a marginal-gain computation that is clearly more efficient than the conditional-entropy computation required by $div^{(E)}$,
and (iii) does not depend on additional a-priori knowledge, as $div^{(C)}$ does, or on parameters, as $div^{(H)}$ does.
\begin{figure}[t!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=.3\linewidth, height=3cm]{IGsurface_alpha00_toptop25.pdf} &
\includegraphics[width=.3\linewidth, height=3cm]{IGsurface_alpha05_toptop25.pdf} &
\includegraphics[width=.3\linewidth, height=3cm]{IGsurface_alpha10_toptop25.pdf} \\
(a) $\alpha=0.0$ & (b) $\alpha=0.5$ & (c) $\alpha=1.0$
\end{tabular}
\caption{\textit{Class-based} diversity on Instagram by varying the number of
classes, $k$, and $\alpha$, with top-25\% target selection.}
\label{fig:surface}
\end{figure}
\vspace{-1.5mm}
\subsection{Evaluation of identified seed sets}
\label{sec:results:seedset}
\vspace{-0.5mm}
Here we discuss how the different settings of parameters in \myalgo, particularly $\alpha$ and the attribute distributions, affect the seed identification.
\emph{\underline{Sensitivity to $\alpha$}.\ }
Heatmaps in Fig.~\ref{fig:seeds_overlap_exp_distribution} show the pairwise overlaps of seed sets, normalized by $k$, for varying $\alpha$.
Focusing first on the overlaps between the seed set corresponding to $\alpha=1$ (i.e., capital contribution only) and the ones corresponding to diversity at different degrees ($\alpha < 1$), the overlap decreases rapidly for lower $\alpha$. (This trend is less evident for Instagram because of its tighter connectivity than FriendFeed, GooglePlus and Reddit, as in fact it corresponds to the maximal strongly connected component of the original network graph~\cite{8326536}).
While in general overlaps always change for pairs of seed sets corresponding to different settings of $\alpha$, it appears that the fading of overlaps becomes more gradual on networks with stronger small-world characteristics (i.e., GooglePlus).
Moreover, results (shown in \textbf{\em Appendix}, Fig.~\ref{fig:seeds_overlap_exp_distribution-app}) obtained at top-5\% and top-10\% target selection, also confirm the variability in the seed set overlap, which is again more evident on the larger networks.
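For clarity, the statistic plotted in these heatmaps is simply:
\begin{verbatim}
def normalized_overlap(seeds_a, seeds_b, k):
    # Size of the intersection of two size-k seed sets, normalized by k.
    return len(set(seeds_a) & set(seeds_b)) / k
\end{verbatim}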
\begin{figure}[t!]
\begin{tabular}{@{\hskip -1mm}c@{\hskip -0.6mm}c@{\hskip -0.6mm}c@{\hskip -0.6mm}c}
\includegraphics[width=.26\linewidth]{IG_seeds_overlap_by_alpha_sidedtim_top25_eps_1_k50.pdf} &
\includegraphics[width=.26\linewidth]{FF_seeds_overlap_by_alpha_sidedtim_top25_eps_1_k50.pdf} &
\includegraphics[width=.26\linewidth]{GPLUS_seeds_overlap_by_alpha_sidedtim_top25_eps_1_k50.pdf} &
\includegraphics[width=.26\linewidth]{REDDIT_seeds_overlap_by_alpha_sidedtim_top25_eps_1_k50.pdf} \\
(a) Instagram & (b) FriendFeed & (c) GooglePlus & (d) Reddit \\
\end{tabular}
\caption{Normalized overlap of seed sets, for $\alpha \in [0,1]$ (with increments of $0.1$), $k=50$, top-25\% target selection, and exponential distribution of attributes (except Reddit). }
\label{fig:seeds_overlap_exp_distribution}
\end{figure}
\begin{figure*}[t!]
\centering
\begin{tabular}{ccc}
\hspace{-4mm}
\includegraphics[width=.28\linewidth, height=3cm]{IG_diversity_line_plot_top25_inset.pdf} &
\hspace{-4mm}
\includegraphics[width=.28\linewidth, height=3cm]{FF_diversity_line_plot_top25_inset.pdf} &
\hspace{-4mm}
\includegraphics[width=.28\linewidth, height=3cm]{GPLUS_diversity_line_plot_top25_inset.pdf} \\
(a) Instagram & (b) FriendFeed & (c) GooglePlus
\end{tabular}
\caption{Exponential (main) vs. uniform (inset) distribution: attribute-wise diversity of the seed set for varying $k$ and $\alpha$, top-25\% target selection, and comparison to the maximum diversity value.}
\label{fig:diversity_line_plot}
\end{figure*}
\emph{\underline{Effect of the attribute distribution.\ }}
The previous analysis refers to exponential distribution of the attributes.
We observed however that the sensitivity of \myalgo to the setting of $\alpha$ becomes much lower when a uniform distribution law is adopted.
This prompted us to investigate the reasons underlying this behavior. To this end,
we compared the diversity value associated to each seed set,
by varying $\alpha$ and distributions, with the maximum possible value
$div^*[k]$ (Eq.~\ref{eq:diversity_max_possible_value}); this is achieved when all the attribute values are equally distributed over the seeds.
Not surprisingly, looking at the insets of Fig.~\ref{fig:diversity_line_plot} that correspond to uniform distribution, we observe that the trends of seed-set diversity at varying $\alpha$ are all close to each other as well as to the maximum value. By contrast, using exponential distributions (main plots of Fig.~\ref{fig:diversity_line_plot}), it is evident that the slope of the diversity tends to decrease with higher $\alpha$, thus increasing the gap with the maximum diversity curve.
Moreover, different settings of the target selection threshold have no significant impact on the trends already observed for top-25\% (results shown in \textbf{\em Appendix}, Fig.~\ref{fig:diversity_line_plot-app}).
In the following, results correspond to exponential distribution of the attributes, unless otherwise specified.
\vspace{-3mm}
\subsection{Comparison with \algo{DTIM}}
\label{sec:results:comparison_DTIM}
\vspace{-2mm}
{\bf Stage 1:\ }
We first evaluated the integration of the topology-driven
diversity function~\cite{8326536} into our RIS-based framework.
We analyzed the normalized overlap of seed sets obtained by \myalgo and by the resulting \algo{DTIM}-based variant.
Figure~\ref{fig:seeds_overlap_diff_diversity} shows low-to-mid values of normalized overlap between the compared seed sets;
in particular, the overlap is much closer to zero for the largest networks, which are also sparser (and hence more realistic) than the Instagram network.
{\bf Stage 2:\ }
In the second stage of evaluation, we compared \myalgo and \algo{DTIM} in terms of the expected capital. In
Fig.~\ref{fig:aditum_vs_dtim}, the insets show results of a Monte Carlo simulation
(with $10\,000$ runs) for the estimation of the capital associated with the seed sets provided by each of the methods with
$\alpha=1$ (i.e., without the diversity contribution). Also, we set $\eta=10^{-4}$ for \algo{DTIM}, which means minimal path-pruning, and hence highest estimation accuracy for the competitor.
We observe that \myalgo keeps a relatively small advantage over \algo{DTIM} in terms of the estimated capital.
Nonetheless, it should be emphasized that, as expected from a comparison between a RIS-based method and a greedy method, \myalgo outperforms \algo{DTIM} in terms of running time, up to 3 orders of magnitude (e.g., in FriendFeed with $k\geq10$), and this gap becomes even more evident as both $k$ and the network size increase.
Note that, while the running time of \algo{DTIM} tends to increase linearly in $k$, for \myalgo it may even decrease with $k$: as with \algo{TIM+}, this is a result of the interplay of the main factors that determine the number of random RR-Sets.
\begin{figure}[t!]
\centering
\begin{tabular}{@{\hskip -1.1mm}c@{\hskip -0.6mm}c@{\hskip -0.6mm}c@{\hskip -0.6mm}c}
\includegraphics[width=.26\linewidth, height=2.8cm]{IG_seeds_overlap_top5_eps_1_k50_sidedtim_vs_gdtim_full.pdf} &
\includegraphics[width=.26\linewidth, height=2.8cm]{FF_seeds_overlap_top25_eps_1_k50_sidedtim_vs_gdtim_full.pdf} &
\includegraphics[width=.26\linewidth, height=2.8cm]{GPLUS_seeds_overlap_top25_eps_1_k50_sidedtim_vs_gdtim_full.pdf} &
\includegraphics[width=.26\linewidth, height=1.1in]{REDDIT_seeds_overlap_top25_eps_1_k50_sidedtim_vs_gdtim_full.pdf} \\
(a) Instagram & (b) FriendFeed & (c) GooglePlus & (d) Reddit \\
\end{tabular}
\caption{Topology-based vs. attribute-based diversity: Normalized overlap of seed sets, for selected values of $\alpha$, $k=50$, and top-25\% target selection.}
\label{fig:seeds_overlap_diff_diversity}
\vspace{-2mm}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{@{\hskip -1mm}c@{\hskip -0.6mm}c@{\hskip -0.6mm}c}
\includegraphics[width=.28\linewidth, height=3cm]{IG_aditum_vs_dtim_top25.pdf} &
\includegraphics[width=.28\linewidth, height=3cm]{FF_aditum_vs_dtim_top25.pdf} &
\includegraphics[width=.28\linewidth, height=3cm]{GPLUS_aditum_vs_dtim_top25.pdf} \\
(a) Instagram & (b) FriendFeed & (c) GooglePlus \\
\end{tabular}
\caption{\myalgo ($\epsilon=0.1$) vs. \algo{DTIM} ($\eta=10^{-4}$): Running time in seconds (main plot) and expected capital (inset) for varying $k$,
top-25\% target selection and $\alpha=1$. }
\label{fig:aditum_vs_dtim}
\vspace{-2mm}
\end{figure}
\vspace{-1mm}
\subsection{Comparison with \algo{Deg-D} diversity and attribute representation}
\label{sec:results:comparison_Tang}
\vspace{-1mm}
As concerns the comparison with \algo{Deg-D}, we again devised two stages of evaluation:
(1) comparison of seed sets produced by \myalgo and by \algo{Deg-DU}/\algo{Deg-DW}, and (2) adaptation of our RIS framework to numerical-attribute diversity used by \algo{Deg-D} (cf. Sect.~\ref{sec:eval}-Setting).
%
{\bf Stage 1:\ }
Fig.~\ref{fig:tang_seeds_overlap} shows the normalized overlaps of seed sets. Two main remarks can be drawn: first,
the overlaps between \myalgo and \algo{Deg-D} are always quite low (ranging from $0.28$ to $0.43$), and second, the setting of $\gamma$ (i.e., $1-\alpha$) has little effect on \algo{Deg-D}.
{\bf Stage 2:\ }
Fig.~\ref{fig:tang_comparison_line_plot} refers to numerical attribute representation and integration of \algo{Deg-DU} and \algo{Deg-DW} functions into our framework, denoted as \textit{RIS-U} and \textit{RIS-W}. We set $\gamma = \alpha = 0.5$ to equally balance the contributions of diversity and spread in the methods' objective function.
We observe that the seed-set diversity values are the same for the two methods in the uniform setting of the numerical-attribute diversity (i.e., \algo{Deg-DU} and \textit{RIS-U}).
Conversely, in the weighted setting, the RIS-based diversity curve is only slightly below the \algo{Deg-DW} curve.
Also, the insets show very similar expected spread (on average over 10\,000 Monte Carlo runs).
Overall, this indicates the flexibility of our RIS-based framework, which can also be properly adapted to integrate numerical-based diversity functions.
\begin{figure}[t]
\begin{minipage}[t]{0.49\linewidth}
\begin{tabular}{cc}
\hspace{-4mm}
\includegraphics[width=.52\linewidth, height=1.1in]{ML_seeds_overlap_k50_f1_uniform.pdf} &
\hspace{-9mm}
\includegraphics[width=.52\linewidth, height=1.1in]{ML_seeds_overlap_k50_f1_weighted.pdf}
\end{tabular}
\vspace{-2mm}
\caption{\myalgo vs. \algo{Deg-DU} (left) and \algo{Deg-DW} (right): Normalized overlap of seed sets, for $\gamma \in \{ 0.15, 0.5, 0.85 \}$, $k=50$, and top-100\% target selection, on MovieLens.}
\label{fig:tang_seeds_overlap}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\linewidth}
\begin{tabular}{@{\hskip -3mm}c@{\hskip -2mm}c}
\includegraphics[width=.52\linewidth]{ML_spread_diversity_line_plot_gamma050_tang_uniform_vs_ris_tang_uniform_inset_.pdf} &
\includegraphics[width=.52\linewidth]{ML_spread_diversity_line_plot_gamma050_tang_weighted_vs_ris_tang_weighted_inset_.pdf}
\end{tabular}
\vspace{-3mm}
\caption{\algo{Deg-DU} vs. \algo{RIS-U} (left) and \algo{Deg-DW} vs. \algo{RIS-W} (right): Expected spread (inset) and seed set diversity by varying $k$, for $\gamma=0.5$, on MovieLens. }
\label{fig:tang_comparison_line_plot}
\end{minipage}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We proposed a novel targeted influence maximization problem
which accounts for the diversification of the seeds according to side-information available at node level in the general form of
categorical attribute values.
We also designed a class of nondecreasing monotone and submodular functions to determine the diversity of the categorical profiles associated to seed nodes.
Our RIS-based \myalgo algorithm was compared to two IM methods, one exploiting topology-driven diversity and the other accounting for numerical-based diversity in IM.
While showing different and more flexible behavior than the competitors, \myalgo has the advantage of ensuring the RIS-typical theoretical guarantee and computational complexity under a general, side-information-based setting of node diversity.
A further strength of our diversity-sensitive framework lies in its versatility, since \myalgo can easily be extended to incorporate other definitions of node diversity.
In this regard, we plan to define diversity notions based on representation learning techniques, including network embedding methods.
\section{Introduction}
Bohmian mechanics\ (often called the de$\,$Broglie-Bohm theory) yields the same predictions as
standard quantum theory provided the configuration of a system with wave
function $\psi$ is random, with distribution given by $|\psi|^2$. This
distribution, the {\em quantum equilibrium distribution}
\cite{valentini91,durr92}, satisfies the
following natural property: If the distribution of the configuration at
some time $t_0$ is given by $|\psi_{t_0}|^2$, then the distribution of the
configuration at any other time $t$ will be given by $|\psi_t|^2$---i.e.,
with respect to the wave function it will have the same functional form at
the other time---provided, of course, that the wave function evolves
according to Schr\"odinger's equation between the two times and the
configuration evolves according to the law of motion for Bohmian mechanics.
This property was already emphasized by de$\,$Broglie\ in 1927 \cite{debroglie28} and was later
formalized and called {\em equivariance} by D\"urr {\em et al.}\
\cite{durr92}, who used it to establish the typicality of empirical
statistics given by the quantum equilibrium distribution.
The notion of equivariance is a natural generalization of that of the
stationarity of a distribution in statistical mechanics and dynamical
systems theory \cite{durr92}. Just as stationarity is regarded as a basic
requirement for a description of equilibrium in statistical mechanics, one
can regard equivariance as a basic requirement for what might be called
equilibrium in Bohmian mechanics. Of course, this equilibrium need not be a complete
equilibrium, since the wave function in general changes with time and need
not be in equilibrium---even if the configuration is. Rather, equivariance
concerns an equilibrium relative to the wave function: a quantum
equilibrium.
An interesting question which then arises is whether the quantum
equilibrium distribution $|\psi|^2$ is the unique equivariant distribution. In this paper we show that
$|\psi|^2$ is the only local functional of the wave function that is
equivariant.
The uniqueness proof is of particular value for the approach of D\"urr {\em
et al.}\ \cite{durr92,durr93,durr96,maudlin07} to explaining equilibrium
in Bohmian mechanics, an approach first advocated by Bell \cite{bell81}.
D\"urr {\em et al.}\ base their justification of the $|\psi|^2$
distribution on a ``typicality'' argument. They argue that a ``typical''
Bohmian universe yields $|\psi|^2$ probabilities as empirical
distributions. What this means is that the set of initial configurations of
the universe that yield the $|\psi|^2$ distribution is very large: it has
measure near one for the measure $P_e^\Psi$ having density $|\Psi|^2$, with
$\Psi$ the wave function of the universe. One reason $P_e^\Psi$ is invoked
is that it is equivariant.
After recalling Bohmian mechanics in Section \ref{Bohmian mechanics}, we define in
Section \ref{Equivariance} the notion of equivariance, providing some
illustrative examples. Some of these touch upon the connection between the
uniqueness of equivariant distributions for Bohmian mechanics and the
notion of the ergodicity of a dynamical system, a connection that is
developed in Sections \ref{stat} and \ref{erg}. While some familiarity with
elementary ergodic theory would be helpful for some of the discussion in
Section \ref{Equivariance}, the uniqueness results for the quantum equilibrium
distribution presented in Sections \ref{u} and \ref{?} require no such
familiarity.
\section{Bohmian mechanics}\label{Bohmian mechanics}
In Bohmian mechanics\ the state of a quantum system is given by the positions of its particles as
well as its wave function; the motion of the particles is
determined by the wave function. For a system of $N$ spinless particles the
wave function $\psi_t(q)=\psi_t(q_1, \dots, q_{3N})$ is a complex-valued
function on the
configuration space $\mathbb{R}^{3N}$, and satisfies the
non-relativistic Schr\"odinger equation
\begin{equation}
i\hbar \partial_t \psi_t(q) =H\psi_t(q)= \left(- \sum^M_{k=1} \frac{\hbar^2}{2m_k} \partial^2_{q_k} + V(q) \right) \psi_t(q)\,,
\label{1}
\end{equation}
with $M=3N$, $\partial_{q_k}=\partial /\partial {q_k}$ and where
$m_1=m_2=m_3$ is the mass of the first particle and similarly for the other
particles. The particles move in physical space $\mathbb{R}^{3}$. We denote
the actual positions of the particles by $\mathbf Q_i\in
\mathbb{R}^{3}$. Thus the actual configuration $Q$ of the system of
particles, collectively representing their $N$ actual positions, is given
by the vector $Q=(Q_1, \dots, Q_{M})=(\mathbf Q_1, \dots, \mathbf Q_N) \in
\mathbb{R}^{M}={(\mathbb{R}^{3})}^N$. (The Cartesian coordinates of the
first particle are given by $\mathbf Q_1=(Q_1,Q_2,Q_3)$ and similarly for
the other particles.) The possible trajectories $Q_t$ for the system of
particles are given by solutions to the {\em guidance equation}
\begin{equation}
\frac{dQ_{t}}{dt} = v^{\psi_t}(Q_t) \,,
\label{2}
\end{equation}
where the velocity field $v^{\psi}=(v^{\psi}_1,\dots,v^{\psi}_M)$ on $\mathbb{R}^{M}$ is given by
\begin{equation}
v^{\psi}_k(q) = \frac{\hbar}{m_k} {\textrm{Im}} \frac{\partial_{q_k} \psi(q)}{\psi(q)} \,.
\label{2.0001}
\end{equation}
We denote the flow associated to the velocity field by $q_t:
\mathbb{R}^{M}\to \mathbb{R}^{M}$.\footnote{The Bohmian dynamics, defined
by equations (1--3), is well defined on the subset of
$L^2(\mathbb{R}^{M})\times \mathbb{R}^{M}$ consisting of pairs $(\psi,q)$
with $\psi$ sufficiently smooth and $q$ such that $\psi(q)\neq0$, see
\cite{berndl96}. We shall usually ignore such details.} Thus $Q_t=q_t(q)$
is the solution to the guidance equation for which $Q_0=q$, so that\
$q_{0}(q)=q$. In this notation we have suppressed the dependence on the
wave function. We keep the initial time $t=0$ fixed throughout the paper,
and let $\psi$ usually denote the initial wave function, so that $\psi_0 =
\psi$.
\section{Equivariance}\label{Equivariance}
Suppose we have a (measure-valued) functional $P:\psi \mapsto P^{\psi}$ from (nontrivial,
i.e.\ not everywhere $0$) wave functions to probability distributions on
configuration space ${\mathbb{R}^M}$. There exist then two natural time
evolutions for $P^{\psi}$. On the one hand, with $\psi_t(q)=
e^{-iHt/\hbar}\psi(q)$ a solution to the Schr\"odinger equation with
initial wave function $\psi_{0}(q)=\psi(q)$, we have the probability
distribution $P^{\psi_t}$ for all $t \in \mathbb{R}$. On the other hand,
under the Bohm flow (\ref{2},\,\ref{2.0001}) the distribution
$P^{\psi}$ is carried to the distribution $P^{\psi}_t=P^{\psi} \circ q^{-1}_t$ at time $t$. This
means that if the initial configuration $Q_0$ is random, with distribution
$P^{\psi}$, then the distribution of the configuration $Q_t=q_t(Q_0)$ at
time $t$ is $P^{\psi}_t$.
The functional $P$ is called {\em equivariant} \cite{durr92} if
\begin{equation}
P^{\psi}_t = P^{\psi_t} \,, \qquad \text{for all } t \in \mathbb{R} \,.
\label{2.005}
\end{equation}
In other words $P$ is equivariant if $P^\psi$ retains its form as a
functional of the wave function $\psi$ when the time evolution of the
distribution is governed by the flow $q_t$ associated to the velocity field
$v^{\psi_t}$. When the equivariant functional $P$ is given by a density,
i.e., when it is of the form $P^{\psi}(dq)= p^{\psi}(q)dq$, we will also
call the density-valued functional $p:\psi \mapsto p^{\psi}(q)$ equivariant. This will of
course be so precisely when $p^{\psi}_t(q) = p^{\psi_t}(q)$ for all $t$,
with $p^{\psi}_t(q)=p^{\psi} (q^{-1}_t(q)) \left| \frac{\partial
q_t}{\partial q}(q^{-1}_t(q))\right|^{-1}$ the density for
$P^{\psi}_t$. (We will also say that the distribution $P^{\psi}$ and the
density $p^{\psi}$ are equivariant when the functionals are.)
We can also characterize equivariance as follows. Suppose $P^\psi$ is given by
the density $p^\psi$. Then the density $p(q,t)=p^{\psi}_t(q)$ satisfies the continuity equation
\begin{equation}
\partial_t p(q,t) +\sum^M_{k=1}\partial_{q_k} \left(v^{\psi_t}_k(q) p(q,t) \right) = 0 \,.
\label{2.002}
\end{equation}
Thus the functional $P$ is equivariant precisely if $\tilde
p(q,t)=p^{\psi_t}(q)$ also satisfies the continuity equation (\ref{2.002})
for all $\psi$. This follows from the uniqueness of solutions of partial
differential equations and the fact that the functions $p^{\psi_t}(q)$ and
$p^{\psi}_t(q)$ are equal at $t=0$.
Let us now give some examples. The first example is the distribution $|\psi|^2$. In the following we don't assume the wave functions to be normalized. If the distributions are given by $|\psi|^2$, then it is natural to normalize the wave functions so that they have $L^2$-norm one. But for other distributions, other normalizations might be more appropriate.
\vspace{0.5cm}
\noindent
{\bf Example 1} The {\it quantum equilibrium} functional is
$P_e(dq)=p_e(q)\,dq$ where $p_e:\psi \mapsto p^\psi_e = N_e^\psi |\psi|^2$, with
$N_e^\psi=1/\int_{\mathbb{R}^M}|\psi|^2 dq$. Obviously $p_e$, respectively
$P_e$, maps wave functions to probability densities, respectively
probability distributions. This functional is equivariant since
$p^{\psi_t}_e$ satisfies the continuity equation ({\ref{2.002}}) for all
wave functions $\psi$.
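Indeed, this is just the familiar quantum continuity equation: from the Schr\"odinger equation (\ref{1}) one finds
\[
\partial_t |\psi_t(q)|^2+\sum^M_{k=1}\partial_{q_k}\left(v^{\psi_t}_k(q)\,|\psi_t(q)|^2\right)=0\,,
\]
and since the Schr\"odinger evolution preserves the $L^2$-norm, $N_e^{\psi_t}=N_e^{\psi}$, so that $p_e^{\psi_t}=N_e^{\psi_t}|\psi_t|^2$ satisfies (\ref{2.002}) as well.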
\vspace{0.3cm}
In general, whether or not a distribution $P^\psi$ is equivariant would be
expected to depend on the potential $V$. Note, however, that the quantum
equilibrium distribution $P_e$ is equivariant for all $V$.
\vspace{0.5cm}
\noindent
{\bf Example 2} Suppose $\phi$ is a real-valued eigenstate of the
Hamiltonian $H$, for example the ground state. For this stationary state the
associated velocity field $v^{\phi_t}$ (\ref{2.0001}) vanishes, so that the
Bohm motion is trivial in this case. Thus any functional $P$ will trivially
obey (\ref{2.005}) for $\psi=\phi$.
\vspace{0.3cm}
The previous example illustrates the fact that equivariance is a property
of a mapping $\psi\mapsto P^{\psi}$; it concerns a family $\{P^{\psi}\}$ and
not merely the satisfaction of (\ref{2.005}) for a single wave function
$\psi$. Equivariance means that $P^{\psi_t}=P^{\psi}_t$ for {\em
all} wave functions $\psi$ in the Hilbert space.
We may also consider the equivariance of a functional $P$ defined on an
invariant subset of Hilbert space: Let $\mathscr{I}$ be an invariant set of
wave functions, i.e., such that $\psi \in \mathscr{I}$ if and only if
$\psi_t=e^{-iHt/\hbar}\psi \in \mathscr{I}$. We say that the functional
$\psi \mapsto P^\psi$, defined for $\psi \in \mathscr{I}$, is {\it
equivariant on} $\mathscr{I}$ if (\ref{2.005}) is obeyed by all $\psi \in
\mathscr{I}$.
We have so far not explicitly imposed any conditions on the
distribution-valued functional $P^{\psi}$ beyond equivariance. A condition
that would be natural is that the functional be projective,
i.e., that if $\psi'$ is a (non-vanishing) scalar multiple of $\psi$ then
$P^{\psi'}=P^{\psi}$, but we shall not do so. We shall, however, insist on
the following: When we speak of an equivariant functional $P^\psi$, it is
to be understood that the mapping $\psi\mapsto P^\psi$ is measurable. When
$P^{\psi}$ is given by the density $p^{\psi}$, the measurability of
$P^{\psi}$ amounts to that of $p^{\psi}(q)$ as a function of $\psi$ and
$q$. Measurability is the weakest sort of regularity condition invoked in
analysis, probability theory, and ergodic theory, much weaker than
differentiability or continuity. We do not wish to specify here precisely
what is meant by the measurability of $P^{\psi}$ (or of $p^{\psi}(q)$),
since the main result of this paper involves a much stronger condition,
that $P^{\psi}$ be suitably local. As a rule of thumb, however, we can say
the following: Any mapping $\psi\mapsto P^\psi$ given by an explicit formula
will be measurable.
In order to appreciate the importance of measurability, one should note
that when a dynamical system is analyzed, it is often necessary to consider
random initial conditions. For the Bohmian system the initial condition is
given by the quantum state $\psi$ as well as the initial configuration,
and hence one should allow for the possibility that the initial wave
function $\psi$ is random, with distribution $\mu(d\psi)$. When this is
combined with a functional $P^{\psi}(dq)$, one is naturally led to consider
the joint distribution $\mu(d\psi)P^{\psi}(dq)$ of $\psi$ and $q$, see
Section \ref{stat}. But this will be meaningful---i.e., define a genuine
probability distribution---only when $P^{\psi}$ is measurable.
Furthermore, there is a sense in which the equivariance condition
(\ref{2.005}) says that $P^\psi$ is a constant of the motion for the
Schr\"odinger evolution of wave functions: With each
$\psi\in\mathscr{H}=L^2(\mathbb{R}^{M})$ associate a ``fiber''
$\Gamma_{\psi}$, namely the set of probability distributions on
configuration space $\mathbb{R}^{M}$. The Bohm flow acting on distributions
provides a natural identification of $\Gamma_{\psi_t}$ with $\Gamma_{\psi}$
(and in fact defines a connection on the fiber bundle
$\mathscr{H}\times\Gamma = \{\,(\psi,\mu)\,|\,\psi\in\mathscr{H},
\mu\in\Gamma_{\psi}\}$). The equivariance condition (\ref{2.005}) then says
that the function $P^{\psi}$ is a constant of the Schr\"odinger motion
under this identification.
Now if a dynamical system is ergodic, there can be no nontrivial functions
(i.e., functions that are not almost everywhere equal to a constant) that
are constants of the motion. However, it is understood that only measurable
functions are to be considered; in fact, there are more or less always many
nontrivial constants of the motion that are not measurable. Any function of
the orbits of the motion will define a constant of the motion. Most such
constants of the motion will be nontrivial, and these will also fail to be
measurable when the dynamics is ergodic.
Similarly, one might expect that there will more or less always be a great
many functionals satisfying (\ref{2.005}) if measurability is not demanded,
and this is indeed the case, as we indicate in the next example. (See
Section \ref{erg} for more on equivariance and ergodicity.)
\vspace{0.5cm}
\noindent
{\bf Example 3} For any fixed $\psi$ let ${\cal
O_\psi}=\{e^{-iHt/\hbar}\psi\}\equiv\{\psi_t\}$ denote the orbit of $\psi$
under the Schr\"odinger evolution---the smallest invariant set containing
$\psi$. If ${\cal O_\psi}$ is not a periodic orbit (one such that $\psi_t =
\psi$ for some $t\neq0$), we may let $P^\psi$, for this $\psi$, be {\it
any} probability distribution on configuration space, and extend it to
${\cal O_\psi}$ via (\ref{2.005}). The resulting function $P$ is then
obviously equivariant on ${\cal O_\psi}$. If ${\cal O_\psi}$ is periodic,
let $P=P_e$ on ${\cal O_\psi}$. In this way we may obtain a great many
different functionals $P$---one for each assignment of probability
distributions to representatives of each non-periodic orbit---defined on the
union of all orbits, and hence for all $\psi$ in Hilbert space. All of them
obey (\ref{2.005}) for all $\psi$. Most of these, however, will not be
measurable, and hence should not count as equivariant functionals.
\vspace{0.3cm}
In the previous example, suppose we were to choose $P^\psi$ in an explicit
way, for example as in Example 2, on the representatives. It might seem
then, on the one hand, that we have provided in effect an explicit formula
for the functional $P$ constructed in this way, so that it would then be
measurable. On the other hand, if the Bohmian dynamics is suitably ergodic,
see Section \ref{erg}, as is likely often to be the case, $P$ (if it is
given by a density) must then agree with $P_e$ on many non-periodic orbits,
which it clearly does not. What gives? The answer is that the specification
just mentioned is much less explicit than it might at first appear to be,
since in general there is no canonical way to choose a representative for
each orbit, and the functional so constructed need not be measurable.
A flow on the line or an autonomous flow on the plane can't have strong
ergodic properties. One might thus expect the Bohm motion on the line to
also fail to have strong ergodic properties. That this is so was shown in
\cite{goldstein99}. Accordingly, since dynamical systems that are not
ergodic have many stationary distributions, one should expect there to be a
great many distributions that are
equivariant for this case.
\vspace{0.5cm}
\noindent
{\bf Example 4} Consider a Bohmian particle moving on the line. Since
trajectories can't cross, it is easy to see that the function $F(\psi, q)=
P_e^\psi((-\infty, q))$ is a constant of the motion, $F(\psi_t,
q_t(q))=F(\psi, q)$. For fixed $\psi$, $F$ is a map $\mathbb R\to [0,1]$,
and for every probability distribution $\mu$ on $(0,1)$ there is an
equivariant functional
\begin{equation}
P_{\mu}^{\psi}(B)=\mu(F(\psi,B))\,,
\end{equation}
the image of $\mu$ under $F_\psi^{-1}$, the inverse of the map $q\mapsto
F(\psi,q)$. (When $\mu$ is the Lebesgue measure, $\mu(dq)=dq$, we have that
$P_\mu=P_e$.) Perhaps the simplest way to understand this is in terms of
the change of variables $(\psi,q)\mapsto(\psi,\tilde q)$ with $\tilde
q=F(\psi, q)$. In these new coordinates the Bohmian dynamics becomes
trivial: $\psi$ evolves as usual according to Schr\"odinger's equation and
$\tilde q$ does not change under the dynamics. Thus any distribution $\mu$
for $\tilde q$ defines an equivariant functional.\footnote{Moreover, every
equivariant functional for a particle on the line corresponds a.e.\ to a
(possibly different) choice $\mu$ for each ergodic component of the
Schr\"odinger dynamics.}
\vspace{0.3cm}
A difference between the functional in Example 1 and those in (Example 3
and) Example 4 is that the former is a local functional, whereas the latter
(except for the quantum equilibrium functional) are not. We call a functional $p^{\psi}$ {\em local} if $p^{\psi}(q)$ can
be written, up to normalization, as a (sufficiently differentiable)
function of $q$, $\psi(q)$, and finitely many derivatives of $\psi$,
evaluated at $q$. That is, for a local functional $p^{\psi}$ we can write
\begin{equation}
p^{\psi}(q)=N^{\psi}g^{\psi}(q)
\label{3.001}
\end{equation}
where $N^{\psi}$ does not depend on $q$ and where
\begin{equation}
g^{\psi}(q) =g\left(q,\psi(q),\dots, \partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi(q),\dots\right)
\label{sl}
\end{equation}
depends on at most finitely many partial derivatives of $\psi$ (and is
sufficiently differentiable). We shall say that a functional of the form
(\ref{sl}) is {\em strictly local}. (A local density $p^{\psi}(q)$, because
of the normalization factor $N^{\psi}$, need not be strictly local.) We
note that a local functional that, as demanded above, is differentiable
will of course be measurable. In fact, for measurability,
continuity---indeed mere measurability of $g$---would suffice.
In the following section we will see that equivariance, together with the
requirement that the functional be local, leads uniquely to quantum
equilibrium $p_e$.
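For instance, the local (indeed, up to normalization, strictly local) functional
\[
p^{\psi}(q)=N^{\psi}|\psi(q)|^4\,,\qquad N^{\psi}=1\Big/\int_{\mathbb{R}^M}|\psi(q)|^4\,dq\,,
\]
is thereby ruled out as equivariant, even though it defines a perfectly good probability density for each $\psi$.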
\section{Uniqueness of equivariant densities}\label{u}
Let $p:\psi \mapsto p^{\psi}$ be a functional from wave functions to probability densities. We show that $p$ is uniquely given by $p_e$, with $p^{\psi}_e=N^\psi_e|\psi|^2$ as in Example 1, under the assumptions that $p^{\psi}$ is equivariant and local. The locality implies that $p^{\psi}$ can be written in the form $p^{\psi}(q)=N^{\psi}_g g^{\psi}(q)$, where $N^{\psi}_g = 1/\int_{\mathbb{R}^M} g^{\psi}(q) dq$ and where $g^{\psi}(q)$ is a strictly local functional, see (\ref{sl}). We split the proof into two parts, successively showing that:
\begin{itemize}
\item[(P1)]
$g^{\psi_t}(q)$ satisfies the equation
\begin{equation}
\partial_t g^{\psi_t}(q) + \sum^M_{k=1}\partial_{q_k} \left(v^{\psi_t}_k(q) g^{\psi_t}(q) \right) + hg^{\psi_t}(q) = 0 \,,
\label{3.00202}
\end{equation}
with $h$ a constant, i.e., independent of $q$ and the wave function.
\item[(P2)]
$p^{\psi}(q) = p^{\psi}_e(q)=N^\psi_e |\psi(q)|^2$.
\end{itemize}
We now give the proofs.\\
\vspace{-0.2cm}
\noindent
{\em Proof of} (P1): Equivariance implies that $p^{\psi_t}(q)$ satisfies the continuity equation (\ref{2.002}). Since $p^{\psi_t}(q)=N^{\psi_t}_g g^{\psi_t}(q)$ the continuity equation for $p^{\psi_t}(q)$ can be written as
\begin{equation}
\frac{1}{g^{\psi_t}(q)}\left( \partial_t g^{\psi_t}(q) + \sum^M_{k=1}\partial_{q_k} \left(v^{\psi_t}_k(q) g^{\psi_t}(q) \right) \right) = -\partial_t \ln N^{\psi_t}_g
\label{3.002}
\end{equation}
(wherever $g^{\psi_t}(q)>0$).
Let us introduce the functional $h:\psi \mapsto h^\psi$, from wave functions to the real numbers, defined by
\begin{equation}
h^{\psi_t} = \partial_t \ln N^{\psi_t}_g\,.
\label{3.002001}
\end{equation}
Since $\partial_t \ln N_g^{\psi_t}$ is independent of $q$, $h^{\psi_t}$ is well-defined as a real number. We will show that this functional is constant, i.e.\ independent of $\psi$.
First note that $\partial_t g^{\psi_t}(q)$ can be expressed as a function
of $q$ and of the variables $\partial^{m_1}_{q_1} \dots
\partial^{m_M}_{q_M}\psi_t(q)$. This is because $g^{\psi}$ is a strictly local functional and because the time derivatives of any of the variables $\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_t(q)$ can be replaced by spatial derivatives by making use of the Schr\"odinger equation. As a result we have from (\ref{3.002}) that
\begin{equation}
h^{\psi} = h \left(q,\psi(q),\dots, \partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi(q),\dots \right) \,,
\label{3.00201}
\end{equation}
so that $h^{\psi}$ is a strictly local functional.
It follows that $h^{\psi}=h^{\psi'}$ for any two wave functions $\psi$ and
$\psi'$ for which all derivatives agree at a configuration $q \in
{\mathbb{R}^M}$. But this means that for any $\psi$ and $\psi'$,
$h^{\psi}=h^{\psi'}$, since there is always a third wave function $\psi''$
such that all the derivatives of $\psi$ and $\psi''$ agree at one
configuration $q \in {\mathbb{R}^M}$ and such that all the derivatives of
$\psi'$ and $\psi''$ agree at another configuration $q' \in
{\mathbb{R}^M}$.
Thus $h^\psi$ is independent of $\psi$. We write $h^\psi=h$. The continuity equation (\ref{3.002}) then reduces to (\ref{3.00202}).
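Note for later use that, since $h$ is constant, (\ref{3.002001}) integrates to
\[
N^{\psi_t}_g=e^{ht}N^{\psi}_g\,.
\]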
\vspace{0.2cm}
\noindent
{\em Proof of} (P2): Let us introduce the functional $f^{\psi}(q) =
g^{\psi}(q)/ |\psi(q)|^2$.\footnote{$f^{\psi}(q)$ is defined on $\{q\in
\mathbb{R}^M\,|\,\psi(q)\neq0\}$. Since the Bohm flow (\ref{2},\,\ref{2.0001}) is defined
only on this set, we consider only densities on this set, i.e., for which
$g^{\psi}>0$ only on this set.} From the continuity equation for $|\psi_t(q)|^2$ and the equation (\ref{3.00202}) for $g^{\psi_t}(q)$ it follows that
\begin{equation}
\frac{df^{\psi_t}}{dt} + h f^{\psi_t} = 0\,,
\label{3.01}
\end{equation}
with
\begin{equation}
\frac{d}{dt} = \partial_t + \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k} \,.
\label{3.1}
\end{equation}
Because $f^{\psi}$ is a strictly local functional we have that
\begin{equation}
f^{\psi}(q) = f \left(q,\psi(q),\dots, \partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi(q),\dots \right) \,.
\label{3.00203}
\end{equation}
Relation ({\ref{3.01}}) can therefore be written as
\begin{eqnarray}
0&=& \frac{df^{\psi_t}}{dt} + h f^{\psi_t} \nonumber\\
&=& \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k}f + \sum_{m_1,\dots,m_M } \Bigg( \frac{d }{dt}\left( \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r} \right) \frac{ \partial f}{\partial (\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r})} \nonumber\\
&& + \frac{d }{dt}\left( \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} \right) \frac{ \partial f}{\partial (\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i})} \Bigg) + h f \,,
\label{4.1}
\end{eqnarray}
where $\psi_{t,r}$ and $\psi_{t,i}$ are respectively the real part and the imaginary part of $\psi_t$.
This expression can be rewritten by making use of the Schr\"odinger equation ({\ref{1}}), since for every variable $\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r}$ and $\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i}$ we have that
\begin{eqnarray}
\frac{d }{dt}\left( \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r} \right) &=& \partial_t \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r} + \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r} \nonumber\\
&=& - \sum^M_{k=1} \frac{\hbar}{2m_k} \partial^2_{q_k} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} + \frac{1}{\hbar} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M} \left( V\psi_{t,i} \right) \nonumber\\
&& \mbox{} + \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k}\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r}
\label{4.2}
\end{eqnarray}
and
\begin{eqnarray}
\frac{d }{dt}\left( \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} \right) &=& \partial_t \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} + \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} \nonumber\\
&=& \sum^M_{k=1} \frac{\hbar}{2m_k} \partial^2_{q_k} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,r} - \frac{1}{\hbar} \partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M} \left( V\psi_{t,r} \right) \nonumber\\
&& \mbox{} + \sum^M_{k=1} v^{\psi_t}_k \partial_{q_k}\partial^{m_1}_{q_1} \dots \partial^{m_M}_{q_M}\psi_{t,i} \,.
\label{4.201}
\end{eqnarray}
In this way the equation (\ref{4.1}) expresses a functional relation between the variables $q$ and all the real variables $\partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,r}$ and $\partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,i}$ which has to hold identically, i.e.\ for all possible values of these variables. Since all these variables can be treated as independent, we can show that the function $f$ must be a constant as follows.
First select a variable $\partial^{n_1}_{q_1} \dots
\partial^{n_M}_{q_M}\psi_{t,r}$ or $\partial^{n_1}_{q_1} \dots
\partial^{n_M}_{q_M}\psi_{t,i}$ such that $f$ depends on this variable and
such that, if $f$ depends on another variable $\partial^{{\bar n}_1}_{q_1}
\dots \partial^{{\bar n}_M}_{q_M}\psi_{t,r}$ or $\partial^{{\bar
n}_1}_{q_1} \dots \partial^{{\bar n}_M}_{q_M}\psi_{t,i}$, then ${\bar n}_1
\le n_1$. Suppose the selected variable is, say, $\partial^{n_1}_{q_1}
\dots \partial^{n_M}_{q_M}\psi_{t,r}$. Then, from ({\ref{4.1}}),
({\ref{4.2}}) and ({\ref{4.201}}) it follows that the only term in
$df^{\psi_t}/dt + hf^{\psi_t}$ that contains the variable
$\partial^{n_1+2}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,i}$ is
\begin{equation}
-\frac{\hbar}{2m_1} \partial^{n_1+2}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,i} \frac{ \partial f}{\partial (\partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,r})}\,.
\label{7}
\end{equation}
Because $\partial^{n_1+2}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,i}$ can be treated as an independent variable, the term above has to be zero. Hence
\begin{equation}
\frac{ \partial f}{\partial \left( \partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,r} \right)} = 0\,.
\label{8}
\end{equation}
But this contradicts the fact that $f$ depends on the variable
$\partial^{n_1}_{q_1} \dots \partial^{n_M}_{q_M}\psi_{t,r}$. It follows
that $f$ does not depend on any of the variables $\partial^{m_1}_{q_1}
\dots \partial^{m_M}_{q_M}\psi_{t,r}$ or $\partial^{m_1}_{q_1} \dots
\partial^{m_M}_{q_M}\psi_{t,i}$. Hence we have that $f=f(q)$.
Equation ({\ref{4.1}}) now reduces to
\begin{equation}
\sum^M_{k=1} v^{\psi_t}_k \partial_{q_k}f + h f = 0
\label{8.1}
\end{equation}
and we can use reasoning similar to the above to conclude that $\partial_{q_k}f=0$, $k=1,\dots,M$. Hence $f$ is a constant, independent of $q$, of the wave function, and of any of its derivatives. Since $g^\psi(q)=f|\psi|^2$ with $f$ now a constant and since $p^\psi$ is assumed to be a probability density we have that
\begin{equation}
p^\psi(q) = p^{\psi}_e(q)=N_e^\psi |\psi(q)|^2 \,.
\label{8.2}
\end{equation}
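Observe that, a posteriori, since $g^{\psi}=f|\psi|^2$ with $f$ constant, we have
\[
N^{\psi_t}_g=f^{-1}N^{\psi_t}_e=f^{-1}N^{\psi}_e\,,
\]
which is constant in $t$ because the Schr\"odinger evolution preserves the $L^2$-norm; hence in fact $h=0$ for a local equivariant functional.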
\section{A stronger result?}\label{?}
There is a weaker version of the locality of the functional
$p^{\psi}(q)=N^{\psi}g^{\psi}(q)$, which we shall call {\em weak locality},
that is worth considering. This requires that $g^{\psi}(q)$ be determined
by $\psi$ in a neighborhood of $q$, i.e., that if $\psi$ and $\psi'$ agree
in some neighborhood of $q$, then $g^{\psi}(q) = g^{\psi'}(q)$. This is
indeed a weaker notion of locality than used earlier, and allows in
particular for $g^{\psi}(q)$ to depend on all derivatives of $\psi$ at $q$.
It is reasonable to ask whether the uniqueness result would continue to be
valid if the equivariant functional $p^\psi$ were assumed only to be weakly
local. We believe that the answer is yes. There is an argument for this
that, while not entirely rigorous, is quite compelling. At the same time,
the argument provides some perspective on our uniqueness result. It is
this:
The Bohmian dynamics defines a flow on (a subset of) the space
$\mathscr{X}=\mathscr{H}\times\mathbb{R}^{M}$, where $\mathscr{H}=L^2(\mathbb{R}^{M})$
is the Hilbert space of the Bohmian system. We shall denote the action of
this flow by $T_t$, so that for $\eta=(\psi,q)\in\mathscr{X}$, we have that
$T_t\eta=(\psi_t,q_t(q))$. In terms of this flow, the equivariance of the
density $p^{\psi}(q)$ can be conveniently expressed as follows: Let
\begin{equation}\label{G}
G(\eta)= p^{\psi}(q)/p_e^{\psi}(q)\,.
\end{equation}
Then the equivariance of $p^{\psi}$
amounts to the requirement that $G$ be a constant of the motion for the
flow $T_t$,
\begin{equation}\label{cm}
G(T_t\eta)=G(\eta)\,.
\end{equation}
(This is an easy consequence of the fact that $p^{\psi}_e$ is equivariant.)
And uniqueness amounts to the statement that $G$ is constant on (the
relevant subset of) $\mathscr{X}$. This would be so if the flow $T_t$ were
sufficiently ergodic (see Section \ref{erg}): ergodicity means that there
are no nontrivial constants of the motion---that the only constants of the
motion are in fact functions that are almost everywhere constant, and hence
trivially constants of the motion---as would be the case if the set of
possible states $\eta$ consisted of a single trajectory. This, of course,
is impossible. Nonetheless, the ergodicity of a motion on a space means
roughly that the motion is sufficiently complicated to produce trajectories
that almost connect any two points in the space, so that functions that
don't change along a trajectory must be more or less everywhere constant.
In fact, it is easy to see that for uniqueness it is sufficient that $G$ be
constant on the subsets
$\mathscr{X}_\psi=\{(\psi,q)\in\mathscr{X}\,|\,q\in\mathbb{R}^{M}\}$ of
$\mathscr{X}$ corresponding to fixed $\psi$, and for this it is of course
sufficient that $G$ be locally constant on $\mathscr{X}_\psi$, i.e., that
every $q\in\mathbb{R}^{M}$ has a neighborhood $O_q$ such that $G$ is
constant on $\{(\psi,q')\in\mathscr{X}\,|\,q'\in O_q\}$. It is also easy to
see that for uniqueness it is sufficient that
$F(\eta)=g^\psi(q)/|\psi(q)|^2$ be constant on $\mathscr{X}$---or (locally)
constant on $\mathscr{X}_\psi$.
While $F$ is not obviously invariant under the flow $T_t$, it is clearly
{\em quasi-invariant}, which is almost as good: In terms of $F$, (\ref{cm})
becomes
\begin{equation}\label{finv}
F(T_t\eta)=e^{-ht}F(\eta)\,,
\end{equation}
for all $t\in\mathbb{R}$, where $h$ is the constant defined by
(\ref{3.002001}); this is just the integrated form of (\ref{3.01}). (That $h^{\psi}$ is constant follows from weak locality
much as it does from locality. Moreover, it seems likely on general grounds
that $h=0$, in which case $F$ would be strictly invariant.)
Now the (weak) locality of $p^{\psi}$ implies that $F$ is invariant under a
much larger set of transformations than the one-dimensional set $\{T_t\}$,
defining an action of the group $\mathbb{R}$ on $\mathscr{X}$. It implies
invariance under the action $T_\phi$ of the infinite-dimensional (additive)
group $\mathscr{N}=\{\phi\in\mathscr{H}\,|\,\phi(q') = 0\text{ in a
neighborhood of $q'=0$}\}$, where
$T_\phi\eta=T_\phi(\psi,q)=(\psi+\phi_q,q)$, with
$\phi_q(q')=\phi(q'-q)$. Thus with weak locality we have, in addition to
(\ref{finv}), that for all $\phi\in\mathscr{N}$
\begin{equation}\label{inv}
F(T_\phi\eta)=F(\eta)\,.
\end{equation}
Now while the action of $\mathbb{R}$ on $\mathscr{X}$ given by the Bohmian
flow $T_t$ may fail to be suitably ergodic, it is hard to imagine this for
the action $T_\xi,\ \xi\in \mathscr{G}$, of the group $\mathscr{G}$
generated by the actions of $\mathbb{R}$ and $\mathscr{N}$ on
$\mathscr{X}$. Indeed, it seems very likely that $\mathscr{X}$
consists of a single orbit $\{T_\xi(\psi,q)\,|\,\xi\in \mathscr{G}\}$ of
this action, and more likely still that $\mathscr{G}$ connects any two points
in any sufficiently small neighborhood of any point in $\mathscr{X}_\psi$.
If $h$ were 0 this would imply uniqueness. For general $h$ we have that
\begin{equation}\label{xinv}
F(T_\xi\eta)=e^{-ht_\xi}F(\eta)
\end{equation}
for all $\xi\in\mathscr{G}$. But what was suggested above for $\mathscr{G}$
should still be true of the subgroup
$\mathscr{G}_0=\{\xi\in\mathscr{G}\,|\, t_\xi=0\}$, under the action of which
$F$ is invariant, and this would imply uniqueness in the general case.
Indeed, consider only the transformations in $\mathscr{G}_0$ of the form
$T_{\phi_2,-t,\phi_1,t}= T_{\phi_2} T_{-t}T_{\phi_1} T_t$, with
$\phi_i\in\mathscr{N}$ and $t\in\mathbb{R}$. Since the dimension of the
set of such transformations should be regarded as roughly twice the
dimension of $\mathscr{X}$, the set obtained by
applying all such transformations to a given
point $\eta\in\mathscr{X}$---the range of the mapping
$(\phi_1,\phi_2,t)\mapsto T_{\phi_2,-t,\phi_1,\,t}\eta$---should be all of
$\mathscr{X}$, at the very least, locally.
The previous argument also suggests that for the uniqueness of the
equivariant distribution, the locality condition can be weakened further to
that of having finite range $r>0$: that $g^\psi(q)$ depend at most on the
restriction of $\psi$ to the ball $B_r$ of radius $r$ centered at $q$. (The
weak locality condition is then that of having finite range $r$ for all
$r>0$.)
\section{Equivariance and stationarity}\label{stat}
We have already indicated that an equivariant functional can be regarded as
generalizing the notion of a stationary probability distribution for a
dynamical system---one that is invariant under the time-evolution. We wish
here to tighten this connection a bit, and observe that the equivariance of
the functional $P^\psi$ is more or less equivalent to (it implies and is
almost implied by) the following: For every measure $\mu(d\psi)$ on Hilbert
space $\mathscr{H}$ that is stationary under the Schr\"odinger evolution,
the measure $\mu(d\psi)P^{\psi}(dq)$ is a stationary measure on
$\mathscr{X}=\mathscr{H}\times\mathbb{R}^M$ for the Bohmian dynamics. (The
``almost'' and ``more or less'' refer to the following: The stationarity of
$\mu(d\psi)P^{\psi}(dq)$ implies that the condition (\ref{2.005}) for
equivariance is satisfied by all $\psi$ with the possible exception of a
set of $\psi$'s with $\mu$-measure 0. If there are exceptional $\psi$'s,
$P^{\psi}(dq)$ can be changed, on a set with $\mu$-measure 0, so that it
continues to define the same measure $\mu(d\psi)P^{\psi}(dq)$ on
$\mathscr{X}$, so as to become strictly equivariant.)
A general probability measure on $\mathscr{X}$ can be regarded as of the
form $\mu(d\psi)P^{\psi}(dq)$: $\mu(d\psi)$ is the first marginal, the
distribution of the first component $\psi$ of
$\eta=(\psi,q)\in\mathscr{X}$, and $P^{\psi}(dq)$ is the conditional
distribution of the configuration $q$ given $\psi$, a probability measure on the
fiber of the product space $\mathscr{X}$ that ``lies above $\psi$''.
Consider now any measure on $\mathscr{X}$ of the form
$\mu(d\psi)P^{\psi}(dq)$, with now $\mu$ any measure on $\mathscr{H}$ and
$P^{\psi}(dq)$ a probability measure on $\mathbb{R}^M$. (Here $\mu$ need
not be a probability measure, nor even normalizable.) For this measure to
be stationary $\mu(d\psi)$ obviously must be. Suppose this is so. Then, for
stationarity, we still must have that the measure $P^{\psi}(dq)$ on the
$\psi$-fiber evolves to the correct measure on the $\psi_t$-fiber, namely
$P^{\psi_t}(dq)$ (with the possible exception of a set of $\psi$'s having
$\mu$-measure 0). But equivariance says more or less precisely that this is
so: it says that for all $\psi$, $P^{\psi_t} = {P_t^{\psi}}$, the
measure to which $P^{\psi}$ evolves.
Thus a probability measure on $\mathscr{X}$ is stationary if and only if it
is of the form $\mu(d\psi)P^{\psi}(dq)$ with $\mu$ stationary and
$P^{\psi}$ equivariant. In particular, the measure
$\mu(d\psi)P_e^{\psi}(dq)$, where $P_e$ is the quantum equilibrium
distribution, is stationary whenever $\mu(d\psi)$ is. Suppose this is
so. Consider a measure $\mu(d\psi)P^{\psi}(dq)$ having a density with
respect to $\mu(d\psi)P_e^{\psi}(dq)$. This density is given by the
function $G$ (\ref{G}) on $\mathscr{X}$. The measure
$\mu(d\psi)P^{\psi}(dq)$ will be stationary precisely if its density $G$ is
a constant of the motion, consistent with our earlier assertion that this
amounts to the equivariance of $P^{\psi}$.
\section{Uniqueness and ergodicity}\label{erg}
The ergodicity of a dynamical system, defined by a dynamics and a given
stationary probability distribution, is equivalent to the statement that
any stationary probability distribution with a density with respect to the
given one must in fact be the given one. Thus ergodicity amounts to the
uniqueness, in an appropriate sense, of a stationary measure. So a
uniqueness statement for an equivariant functional---a uniqueness statement
for quantum equilibrium---can be regarded as expressing a sort of
generalized ergodicity. We wish now to sharpen this connection by observing
that certain uniqueness statements for quantum equilibrium are more or less
equivalent to the ergodicity of certain dynamical systems. (One should bear
in mind that the ergodicity of a dynamical system is usually extremely
difficult to establish.)
The relevant dynamical systems for our purposes here are defined by the
Bohmian dynamics on $\mathscr{X}$, with this space equipped with a
stationary probability measure of the form $\mu(d\psi)P_e^{\psi}(dq)$, with
$\mu(d\psi)$ stationary under the Schr\"odinger dynamics, as described in
Section \ref{stat}. In order for this dynamical system to be ergodic, it
is of course necessary for $\mu(d\psi)$ to be an ergodic measure for the
Schr\"odinger dynamics. Suppose that this is so. Then it is easy to see
that the ergodicity of $\mu(d\psi)P_e^{\psi}(dq)$ under the Bohmian
dynamics amounts to the uniqueness of quantum equilibrium ``modulo
$\mu(d\psi)$'': $\mu(d\psi)P_e^{\psi}(dq)$ is ergodic if and only if every
equivariant density $p^{\psi}$ agrees with quantum equilibrium,
$p^{\psi}=p_e^{\psi}$, for $\mu$-a.e. $\psi$.\footnote{A genuinely
different equivariant distribution $P^\psi$ with density $p^\psi$---one
that does not agree with $P_e^{\psi}$ for $\mu$-a.e. $\psi$---would yield a
stationary probability distribution on $\mathscr{X}$ that is given by a
density with respect to the one arising from $P_e^{\psi}$ but that differs
from it, contradicting ergodicity. Conversely, by the discussion of Section
\ref{stat} and the ergodicity of $\mu$, a stationary probability
distribution on $\mathscr{X}$ that is given by a density with respect to
$\mu(d\psi)P_e^{\psi}(dq)$ must be of the form $\mu(d\psi)P^{\psi}(dq)$
with $P^{\psi}(dq)$ equivariant.}
There is, however, perhaps less in this equivalence than first meets the
eye. The set of $\psi$'s of $\mu$-measure 1 for which, as a consequence of
the ergodicity of $\mu(d\psi)P_e^{\psi}(dq)$, any equivariant density
$p^{\psi}$ must satisfy $p^{\psi}=p_e^{\psi}$ will be rather
small. The set is large only relative to the ``support'' of $\mu$, an
invariant subset $\mathscr{I_\mu}$ of $\mathscr{H}$, with $\mu$-measure 1,
defined by specified values of the constants of the Schr\"odinger motion
such as $\left<\psi|H^n|\psi\right>,\ n=0,1,2,\dots$.
For every such ``ergodic component'' $\mathscr{I_\mu}$ of the Schr\"odinger
dynamics, with $\mu(d\psi)P_e^{\psi}(dq)$ also ergodic, we have the
uniqueness of quantum equilibrium for almost all $\psi$ in
$\mathscr{I_\mu}$. Taking the totality of such ergodic components of the
Schr\"odinger dynamics, we obtain the uniqueness of quantum equilibrium for
almost all of the union of these components. In particular, if
$\mathscr{H}$ were completely decomposable into such ergodic components, we
would have the uniqueness of quantum equilibrium for almost all $\psi$ in
$\mathscr{H}$.\footnote{Such a decomposition, of all of $\mathscr{H}$,
probably never exists. For many stationary states $\psi$ the Bohm motion is
trivial, so that, with $\mu$ the uniform distribution on the orbit ${\cal
O}_\psi$ of $\psi$, which is ergodic for the Schr\"odinger dynamics,
$\mu(d\psi)P_e^{\psi}(dq)$ is not ergodic, see Example 2. And for wave
functions belonging to the spectral subspace of $\mathscr{H}$ corresponding
to the continuous spectrum the situation is even worse. For example, for a
free Hamiltonian $H$, with $V=0$, there are no ergodic components to begin
with. There are in fact, in this case, no probability measures on
$\mathscr{H}$ that are stationary under the Schr\"odinger
dynamics. (Consider the free Schr\"odinger dynamics. As time goes on the
wave function should spread, never to become narrow again. But this
conflicts with Poincar\'e recurrence, and thus implies that there is no
finite invariant measure, and in particular no stationary probability
measure.) And in this case as well, there are, presumably, equivariant
densities $p^{\psi}$ that disagree with $p_e^{\psi}$ for all $\psi$.}
Here is an example of a typical ergodic component of the Schr\"odinger
dynamics, to which the discussion of this section could be applied. Suppose
$\phi_1,\dots,\phi_n$ are eigenstates of the Hamiltonian $H$, with
corresponding eigenvalues $E_1,\dots,E_n$ that are rationally
independent. For $c_j>0,\ j=1,\dots,n$, let
$\mathscr{I}_{c_1,\dots,c_n}=\{\psi\in\mathscr{H}\,|\,\psi=\sum_{j=1}^n c_j
e^{i\theta_j}\phi_j,\ 0\leq \theta_j< 2\pi, \ j=1,\dots,n\}.$ The
Schr\"odinger dynamics on $\mathscr{I}_{c_1,\dots,c_n}$ is quasi-periodic,
with stationary probability distribution, corresponding to a uniform
distribution of the phases $\theta_j$, that is ergodic.
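Explicitly, for $\psi=\sum_{j=1}^n c_je^{i\theta_j}\phi_j$ we have
\[
\psi_t=\sum_{j=1}^n c_j\,e^{i(\theta_j-E_jt/\hbar)}\phi_j\,,
\]
so that the Schr\"odinger flow on $\mathscr{I}_{c_1,\dots,c_n}$ is the linear flow $\theta_j\mapsto\theta_j-E_jt/\hbar$ on the $n$-torus, which is ergodic with respect to the uniform distribution of the phases precisely because the $E_j$ are rationally independent.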
\section{Properties of quantum equilibrium}
The quantum equilibrium functional $P^\psi=P_e^\psi$ satisfies many natural
conditions, some of which play an important role in the analysis of a
Bohmian universe:
\begin{itemize}
\item[(i)] It is {\em universally} equivariant: it is equivariant for
all Schr\"odinger Hamiltonians $H$, of the form expressed on the right
hand side of equation (\ref{1}), i.e., for all $V$ and for all choices
$m_k$ of the masses of the particles.
\item[(ii)] It is projective: $P^{c\psi}=P^\psi$ for every
constant $c\neq0$.
\item[(iii)] It is covariant: $P_R^\psi=P^{R\psi}$ for all the usual
symmetries of non-relativistic quantum mechanics, for example for
space-translations, rotations, time-reversal, Galilean boosts, and
particle permutations. Here $P_R^\psi$ is the distribution to which
$P^\psi$ is carried by the action of $R$ on configurations.
\item[(iv)] It is {\em factorizable}. Suppose a Bohmian system is a
composite of two systems, with Hilbert space
$\mathscr{H}=\mathscr{H}_1\otimes \mathscr{H}_2$ and configuration variable
$q=(q^{(1)},q^{(2)})$. Then $P^{\psi_1\otimes\psi_2}(dq^{(1)}\times
dq^{(2)})=P^{\psi_1}(dq^{(1)})P^{\psi_2}(dq^{(2)})$; for $P_e$ this is checked explicitly after this list. (If $H = H_1\otimes
I_2\, +\, I_1\otimes H_2$, with $I_i$ the identity on $\mathscr{H}_i$, then
it follows immediately from the equivariance of $P$ for the composite
system that the $P^{\psi_i}$ are equivariant for the respective components.)
\item[(v)] More generally, it is {\em hereditary}. Consider a composite
system as in (iv), and suppose that the conditional wave function of, say,
system 1 is $\psi$ when the composite has wave function $\Psi$ and system 2
has configuration $Q^{(2)}$, i.e., that
$\psi(q^{(1)})=\Psi(q^{(1)},Q^{(2)})$. Then the conditional distribution
of the configuration of system 1, given that the configuration of system~2 is
$Q^{(2)}$, depends only on $\psi$ and not on the choice of wave function
$\Psi$ and configuration $Q^{(2)}$ that yields $\psi$: for fixed $\psi$,
$P^\Psi(dq^{(1)}\,|\,Q^{(2)})$ is independent of $\Psi$ and $Q^{(2)}$.
\end{itemize}
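For instance, factorizability (iv) holds for quantum equilibrium simply because
\[
|(\psi_1\otimes\psi_2)(q^{(1)},q^{(2)})|^2=|\psi_1(q^{(1)})|^2\,|\psi_2(q^{(2)})|^2\,,
\]
so that the density, and with it the normalization constant, factorizes.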
It remains to be seen to what extent these properties, individually or in
various combinations, uniquely characterize quantum equilibrium among
equivariant distributions. (It presumably follows, along the lines of the
discussion in Section \ref{?}, that the satisfaction of the equivariance
condition (\ref{2.005}) for all $V$'s implies uniqueness---with the
exception of the case of a single particle on a line.) Be that as it
may, it is noteworthy that locality alone, with no additional conditions
beyond equivariance, is sufficient to guarantee the uniqueness of quantum
equilibrium.
\section{Acknowledgements}
Discussions with Detlef D\"urr, Michael Kiessling, Owen Maroney, Roderich
Tumulka, Antony Valentini, Hans Westman and Nino Zangh\`i are gratefully
acknowledged. The work of S. Goldstein was supported in part by NSF Grant
DMS--0504504. Research at Perimeter Institute for Theoretical Physics is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI.
\section{Introduction}
The application of the $\theta$-exact Seiberg-Witten (SW) map~\cite{Seiberg:1999vs}
expansions~\cite{Jurco:2000fb,Madore:2000en,Jurco:2000ja,Mehen:2000vs,Szabo:2001kg,Jurco:2001my,Jurco:2001rq,Barnich:2002pb,Barnich:2003wq,Banerjee:2003vc,Banerjee:2004rs,Martin:2012aw,Martin:2015nna} represents the current state-of-the-art in the field of noncommutative (NC) gauge field theories \cite{Schupp:2008fs,Horvat:2010km,Horvat:2011bs,Horvat:2011qg,Aschieri:2012in,Aschieri:2014xka,Horvat:2013rga} and the associated particle-physics phenomenology
~\cite{
Horvat:2010sr,Horvat:2011iv,Horvat:2012vn,Wang:2012ye,Wang:2013iga}. A summation over all orders in the antisymmetric tensor $\theta^{\mu\nu}$ at tree level is automatically achieved via this approach, leading to various interesting results. More importantly, the $\theta$-exact approach allows access to nonlocal effects within the perturbative approach, most notably the quadratic UV/IR mixing~\cite{Minwalla:1999px,Hayakawa:1999yt,Hayakawa:1999zf,Matusis:2000jf} in the two-point functions at one loop~\cite{Schupp:2008fs,Horvat:2013rga}.
A particular topic related to the SW map approach to noncommutative gauge theories is the implementation of the gauge freedoms in the (inter)action. This step is generally regarded as favorable as far as the control over various divergences in the perturbative quantum corrections is concerned. It was originally implemented via a $\theta$-iterative construction~\cite{Bichl:2001cq}. Later on a $\theta$-exact substitute was suggested~\cite{Trampetic:2014dea} and subsequently generalized to a second-order expansion~\cite{Trampetic:2015zma}.
While it is natural to go beyond the first order and consider the second-order SW map
(either in the perturbative or in the $\theta$-exact approach) in perturbative
quantum field theory, the second order has in the past been much less investigated~
\cite{Alboteanu:2007bp,Schupp:2008fs}. The reason is clearly technical: The second-order SW map solution is inherently much more complicated than the first-order one and consequently requires more effort to obtain the gauge invariant action explicitly and to implement the gauge freedom. Recently, model building works based on the $\theta^2$ order SW map of non-Abelian gauge fields, in which the order-$\theta$ correction vanishes, have received more attention~
\cite{Aschieri:2012in,Dimitrijevic:2012pk,Aschieri:2014xka,Dimitrijevic:2014iwa},
yet studies of the quantum corrections are still
absent. Going from the $\theta$-expansion to the $\theta$-exact approach, the second-order SW
map adds its own additional difficulties. Two expansion solutions sharing an
identical first order exist~\cite{Mehen:2000vs,Martin:2012aw}. Each solution involves its own type of 3-products ($\star_3$ and $\star_{3'}$ as in~\cite{Trampetic:2015zma}), and while the leading orders of these two solutions with respect to $\theta$ can be shown to be connected by a gauge transformation, the full solutions are not~\cite{Trampetic:2015zma}.
The gauge freedom structure at second order in the $\theta$-exact approach is considerably more complicated than at first order, and also more complicated than its $\theta^2$-order counterpart. Besides the existence of two solutions, the field strength expansion of each solution contains distinct gauge freedom structures~\cite{Trampetic:2015zma}. Analyzing the $\theta^2$ order indicates that more gauge freedom structures show up after performing integration by parts in the action. Performing the same
procedure $\theta$-exactly runs into difficulties stemming from the noncommutativity and/or
nonassociativity of the generalized star products. This issue is much less pressing at the $e^2$ order, since the $\star_2$ product satisfies the so-called 3-cyclicity~\cite{Mylonas:2013jha}
\begin{equation}
\int f(x)\big(g(x)\star h(x)\big)=\int h(x)\big(g(x)\star f(x)\big),\;\;\int f(x)\big(g(x)\star_2 h(x)\big)=\int h(x)\big(g(x)\star_2 f(x)\big).
\label{3-cy}
\end{equation}
However, there is no 4-cyclicity in general (the $\star_2$ and $\star_{3'}$ products have some cyclicity left, while $\star_3$ has practically no cyclicity left). In this paper we will show that the proper substitute for integration by parts in the $\theta$-exact computation on Moyal space is to Fourier-transform the whole computation to momentum space, obtaining the explicit gauge-invariant action simultaneously with the verification of the Ward identity of the vertex.
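For orientation, with the conventions of Appendix A in mind, the momentum-space kernel of the $\star_2$ product is
\[
f_{\star_2}(p,q)=\frac{\sin\frac{p\theta q}{2}}{\frac{p\theta q}{2}}\,,\qquad p\theta q\equiv p_\mu\theta^{\mu\nu}q_\nu\,,
\]
which reduces to unity in the commutative limit; its square is the nonlocal factor $\sin^2\frac{p\theta k}{2}/\big(\frac{p\theta k}{2}\big)^2$ encountered below.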
Following the preliminary work in~\cite{Trampetic:2015zma}, we present in this paper, for
the first time, the full SW-map based/inspired nonlocally deformed four-photon
couplings on Moyal space, with all reasonable gauge freedom parameters included, and the quantum corrections induced by such couplings via the one-loop four-photon-tadpole diagram. We use two distinct second-order $\theta$-exact SW map expansions for the gauge field strength (which we call models (I) and (II), as dubbed recently in~\cite{Trampetic:2015zma}), with all the freedom/ambiguity/deformation parameters included. From these field strengths we then derive the corresponding actions and show that they can be expressed explicitly in terms of the commutative $\rm U(1)$ field strength $f_{\mu\nu}=\partial_\mu a_\nu-\partial_\nu a_\mu$ by working out the required integration-by-parts procedure. Our final form for both SW map based actions further indicates that the $\theta$-exact freedom parameters we suggested before~\cite{Trampetic:2014dea} do not exhaust all the $\theta$-iterative possibilities of~\cite{Bichl:2001cq}. Two additional gauge freedom/ambiguity/deformation parameters are thus introduced by hand in turn.
We then determine the four-photon coupling vertices with all possible gauge freedom/ambiguity/deformation parameters included from models I and II and write down the one-loop four-photon-tadpole integrals for these models. We find that various momentum factors in the second-order SW map expansion reduce either to unity or to the common nonlocal factor $\sin^2\frac{p\theta k}{2}/\big(\frac{p\theta k}{2}\big)^2$ in the tadpole, the same as found in the three-photon-bubble diagram studied previously~\cite{Horvat:2013rga}.
It has long been known that massless tadpole integrals all vanish for integration dimensions $D\ge 4$ under dimensional regularization~\cite{Leibbrandt:1975dj}. However, this result is modified by the nonlocal factors~\cite{Hayakawa:1999zf,Horvat:2011qg}. In order to verify this effect precisely we evaluate the tadpole integral using two different methods: The first method simply introduces a pair of identical numerators and denominators to turn the tadpole into a bubble integral, and then evaluates the tadpole using the protocol established for the bubble diagram, as we did in~\cite{Horvat:2013rga}. The second method generalizes the $n$-nested zero regulator~\cite{Leibbrandt:1975dj} for the commutative tadpole diagram. We compute the four-photon-tadpole contributions to the photon two-point function as a function of an unspecified number of integration dimensions $D$. We then specify the gauge field theory dimension $d$ by taking the limit $D\to d$, and discuss the $d=4$ case in particular. In the end, we find that both approaches reveal the same {\em purely} quadratic IR divergent result in the $D\to 4-\epsilon$ limit, verifying the soundness of our computation.
The UV/IR mixing phenomenon, reflecting the inherent nonlocality of the full theory and arising from the high-momentum region of integration in the Feynman integrals, shows up as an IR divergence when the spatial extension of the NC string of size $|\theta p|$ gets reduced to a point. A related anomaly is a nonanalytic behavior in the NC parameter $\theta$ when the limit $\theta \rightarrow 0$ is taken. In the NC gauge theory, however, a quadratic IR divergence coexists with the logarithmic divergence which matches the UV behavior. Our study indicates that the quadratic IR divergence is clearly connected with tadpole integrals. Therefore the gauge invariant four-photon interactions we found may serve as counterterms to cancel the quadratic IR divergence. For this purpose the tadpole-induced quadratic IR divergence is summed together with the corresponding contribution from the photon bubble diagram~\cite{Horvat:2013rga}. We find that while it is indeed possible to do so in both models I and II, the procedure requires fixing the first-order gauge freedom parameter~\cite{Trampetic:2014dea} to $\kappa=1$. Subsequently a third possible action (III), inspired by the structures of the first two, is introduced. It involves a SW-map inspired gauge invariance deformation in a more general way.
In this action each manifestly gauge invariant four-photon coupling term starting at $\theta^2$ order is assigned an independent freedom parameter, which is shown to be sufficient to cancel any quadratic IR divergence from the bubble diagram for the two typical $\theta$ values. (The UV divergences still require separate fine-tuning.)
This paper is structured as follows:
In the first two sections we describe two $\theta$-exact Seiberg-Witten map models up to the $e^3$ order and construct the four-photon self-interaction as a model definition. Section III is devoted to the computation, presentation, and discussion of the $\theta$-exact tadpole in $D$ and in four dimensions, while in Sec. IV we analyze the sum of the bubble and the tadpole diagram to show the elimination of IR divergences for an arbitrary $\theta$-matrix. In Sec. V we introduce a generalized model of the deformed four-photon interaction and show the elimination of all divergences from the bubble plus tadpole diagrams for a special choice of the $\theta$-matrix and freedom parameters. Sections VI and VII present the discussion and conclusion, respectively. In this article capital letters denote noncommutative objects, while small letters denote commutative ones.
\section{A model definition of the three- and the four-photon self-interactions}
\subsection{Definitions and construction of the actions}
As usual we consider the formal $\rm U_{\star}(1)$ NC gauge theory action
\begin{equation}
S=-\frac{1}{4e^2}\int F_{\mu\nu}\left(e a_\mu,\theta^{\mu\nu};\kappa,\eta,...\right)
\star F^{\mu\nu}\left(e a_\mu,\theta^{\mu\nu};\kappa,\eta,...\right),
\label{2.00}
\end{equation}
where the formal NC gauge field strength
$F_{\mu\nu}\left(e a_\mu,\theta^{\mu\nu},\kappa,\eta,...\right)$
is regarded as a composite operator built up from the commutative gauge field operator $a_\mu$ and the NC parameter $\theta^{\mu\nu}$ via the SW map procedure. The set of parameters $(\kappa,\eta,...)$ represents in principle the SW map/gauge-invariance freedoms. The commutative coupling constant $e$ is attached to the commutative gauge field operator $a_\mu$ due to the charge quantization issue~\cite{Horvat:2011qn}. As a bonus feature it also serves as the ordering parameter for the $\theta$-exact SW map expansion, i.e.
\begin{equation}
F_{\mu\nu}=e f_{\mu\nu}+F^{e^2}_{\mu\nu}+F^{e^3}_{\mu\nu}+\mathcal O\left(e^4\right)
\label{2.0}.
\end{equation}
To $e^2$ order the gauge field strength SW map $F^{e^2}_{\mu\nu}$ expansion is fairly universal
\cite{Horvat:2013rga,Trampetic:2014dea,Trampetic:2015zma}
\begin{equation}
F^{e^2}_{\mu\nu}=e^2\theta^{ij}
\Big(\kappa f_{\mu i}\star_{2}f_{\nu j}-a_i\star_2\partial_j
f_{\mu\nu}\Big).
\label{2.1}
\end{equation}
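Since the $\star_2$ product reduces to the ordinary pointwise product at lowest order in $\theta$, the $\theta$-exact expression (\ref{2.1}) reproduces at leading order, for $\kappa=1$, the familiar first-order SW map result
\[
F^{e^2}_{\mu\nu}\Big|_{\theta^1}=e^2\theta^{ij}\big(f_{\mu i}f_{\nu j}-a_i\partial_j f_{\mu\nu}\big)\,.
\]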
Note that the $\kappa$ deformation in the settings of this paper (\ref{2.1}) and in the recent works \cite{Trampetic:2014dea,Trampetic:2015zma} corresponds to ${\kappa^{-1}_g}$ of the previous $\kappa_g$ deformation of \cite{Horvat:2013rga}.\footnote{See also the discussion after Eq.~(26) in \cite{Trampetic:2014dea}. We first substitute $\kappa_g=\kappa^{-1}$, then extract from all terms in the action a factor
$\kappa^{-2}$. That factor is then absorbed as an overall rescaling of the field redefinition~\cite{Trampetic:2014dea}.} In this paper the $\kappa$-deformation is dubbed the $\kappa$-settings, in which we have the following triple-photon action:
\begin{equation}
S^{e}=-\frac{e}{2}\int\theta^{ij}f^{\mu\nu}\left(\kappa f_{\mu i}\star_2 f_{\nu j}-\frac{1}{4}f_{ij}\star_2f_{\mu\nu}\right)\,,
\label{3photon}
\end{equation}
responsible for the contribution to the photon polarization tensor arising from the photon bubble diagram \cite{Horvat:2013rga}.
The (profound) structure of the $\theta$-exact SW map of a $\rm U(1)$ gauge theory is summarized in \cite{Trampetic:2015zma}, where two distinct gauge field SW maps were found and analyzed up to the $e^3\sim a_\mu^4$ order. Expanding \eqref{2.00} up to order $a_\mu^4$ with (\ref{2.0}) gives the following general form for the four-photon interaction
\begin{equation}
S^{e^2}=-\frac{1}{4e^2}\int\,F^{e^2}_{\mu\nu}F^{e^2\,\mu\nu}+2ef^{\mu\nu}F_{\mu\nu}^{e^3},
\label{2.2}
\end{equation}
where the following two distinct solutions for the $e^3$-order gauge field strength were found and given explicitly in \cite{Trampetic:2015zma}. The first solution is obtained by solving the SW differential equation \cite{Trampetic:2015zma},
\begin{equation}
\begin{split}
F^{e^3}_{\mu\nu_{\rm (I)}}&(x)_{\kappa,\kappa_1,\kappa_2}=
\frac{e^3}{2}\theta^{ij}\theta^{kl}\bigg[\kappa_1\left(\left[f_{\mu k}f_{\nu i} f_{l j}\right]_{\star_{3'}}+\left[f_{\nu l}f_{\mu i}f_{kj}\right]_{\star_{3'}}\right)-\kappa a_i\star_2\partial_j\left(f_{\mu k}\star_2 f_{\nu l}\right)
\\&-\kappa_2\left(\left[f_{\nu l}a_i\partial_j f_{\mu k}\right]_{\star_{3'}}
+\left[f_{\mu k}a_i\partial_j f_{\nu l}\right]_{\star_{3'}}+\left[a_k\partial_l\left(f_{\mu i}f_{\nu j}\right)\right]_{\star_{3'}}-2a_i\star_2\partial_j\left(f_{\mu k}\star_2 f_{\nu l}\right)\right)
\\&+\left[a_i\partial_j a_k \partial_l f_{\mu\nu}\right]_{\star_{3'}}
+\left[\partial_l f_{\mu\nu}a_i\partial_j a_k\right]_{\star_{3'}}+\left[a_k a_i \partial_l\partial_j f_{\mu\nu}\right]_{\star_{3'}}
-\frac{1}{2}\Big(\left[a_i\partial_k a_j\partial_l f_{\mu\nu}\right]_{\star_{3'}}
+\left[\partial_l f_{\mu\nu}a_i\partial_k a_j\right]_{\star_{3'}}\Big)\bigg]\,,
\label{2.3}
\end{split}
\end{equation}
while the second one was obtained in \cite{Trampetic:2015zma} by inverting the known solution for the inverted SW map \cite{Mehen:2000vs},
\begin{equation}
\begin{split}
F^{e^3}_{\mu\nu_{\rm (II)}}&(x)_{\kappa,\kappa'_1,\kappa'_2}
=e^3\theta^{ij}\theta^{kl} \Big[\kappa'_1\left(f_{\mu i}\star_2\left(f_{jk}\star_2 f_{l\nu}\right)+f_{l\nu}\star_2\left(f_{jk}\star_2 f_{\mu i}\right)-\left[f_{\mu i}f_{jk}f_{l\nu}\right]_{\star_3}\right)
\\&-\kappa'_2\big((a_i\star_2\partial_j f_{\mu k})\star_2 f_{\nu l}+(a_i\star_2\partial_j f_{\nu l})\star_2 f_{\mu k}-[a_i\partial_j (f_{\mu k}f_{\nu l})]_{\star_3}\big)
\\&- \kappa a_i\star_2\partial_j\left(f_{\mu k}\star_2 f_{\nu l}\right)+(a_i\star_2\partial_j a_k)\star_2\partial_l f_{\mu\nu}
\\&+a_i\star_2(\partial_j a_k\star_2\partial_l f_{\mu\nu})+a_i\star_2(a_k\star_2\partial_j\partial_l f_{\mu\nu})-[a_i\partial_j a_k\partial_l f_{\mu\nu}]_{\star_{3}}
\\&-\frac{1}{2}\Big(a_i\star_2(\partial_k a_j\star_2\partial_l f_{\mu\nu})+(a_i\star_2\partial_k a_j)\star_2\partial_l f_{\mu\nu}-[a_i\partial_k a_j\partial_l f_{\mu\nu}]_{\star_3}
+[a_ia_k\partial_j\partial_l f_{\mu\nu}]_{\star_3}\Big)\Big].
\label{2.4}
\end{split}
\end{equation}
Definitions of the generalized star products and of the momentum-dependent
functions $f_{\star_2}\left(p,q\right)$, $f_{\star_3}\left(p,q,k\right)$
and $f_{\star_{3'}}\left(p,q,k\right)$ are given in Appendix A.
In both solutions we have included the following freedom parameters:
$\kappa$ from $F^{e^2}_{\mu\nu}$, while in $F^{e^3}_{\mu\nu}$ we have $(\kappa,\kappa_{1,2})$ for model I and $(\kappa,\kappa'_{1,2})$ for model II, respectively. From those field strengths we have found the following two actions at the $a_\mu^4$ order,
\begin{equation}
\begin{split}
S^{e^2}_{\rm (I)}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
+2\kappa_1f^{\mu\nu}[f_{\mu i}f_{\nu k}f_{jl}]_{\star_{3'}}
\\&
+2\kappa_2 f^{\mu\nu}(a_i\star_2\partial_j(f_{\mu k}\star_2 f_{\nu l})-[f_{\mu k} a_i\partial_j f_{\nu l}]_{\star_{3'}}-[a_i f_{\mu k}\partial_j f_{\nu l}]_{\star_{3'}})
+(a_i\star_2\partial_j f_{\mu\nu})(a_k\star_2\partial_l f^{\mu\nu})
\\&
+\frac{1}{2}f^{\mu\nu}(2[a_i\partial_j a_k \partial_l f_{\mu\nu}]_{\star_{3'}}+2[\partial_l f_{\mu\nu}a_i \partial_j a_k]_{\star_{3'}}+2[a_i a_k \partial_j\partial_l f_{\mu\nu}]_{\star_{3'}}
-[a_i\partial_k a_j\partial_l f_{\mu\nu}]_{\star_{3'}}-[\partial_l f_{\mu\nu}a_i\partial_k a_j]_{\star_{3'}}),
\end{split}
\label{2.8}
\end{equation}
and
\begin{equation}
\begin{split}
S^{e^2}_{\rm (II)}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\kappa'_1 f^{\mu\nu}\left(2f_{\mu i}\star_2(f_{jk}\star_2 f_{l\nu})-[f_{\mu i}f_{jk}f_{l\nu}]_{\star_3}\right)
-4\kappa'_2 f^{\mu\nu}((a_i\star_2\partial_j f_{\mu k})\star_2 f_{\nu l}-[a_i\partial_j f_{\mu k}f_{\nu l}]_{\star_3})
\\&+(a_i\star_2\partial_j f_{\mu\nu})(a_k\star_2\partial_l f^{\mu\nu})
+f^{\mu\nu}(2 a_i\star_2(\partial_j a_k\star_2\partial_l f_{\mu\nu})+2(a_i\star_2\partial_j a_k)\star_2\partial_l f_{\mu\nu}-2[a_i\partial_j a_k\partial_l f_{\mu\nu}]_{\star_3}
\\&-a_i\star_2(\partial_k a_j\star_2\partial_l f_{\mu\nu})-(a_i\star_2\partial_k a_j)\star_2\partial_l f_{\mu\nu}+[a_i\partial_k a_j\partial_l f_{\mu\nu}]_{\star_3} +2a_i\star_2(a_k\star_2\partial_j\partial_l f_{\mu\nu})-[a_i a_k \partial_j\partial_l f_{\mu\nu}]_{\star_3}).
\end{split}
\label{2.9}
\end{equation}
One can reduce \eqref{2.8} and \eqref{2.9} to the leading/$\theta^2$ order and perform
integration-by-part to obtain
\begin{equation}
S^{\theta^2}_{\rm (I)}=S^{\theta^2}_{\rm (II)}=-\frac{e^2}{4}\int \kappa^2 f_{\mu i}f_{\nu j}f^\mu_{\;\,\; k}f^\nu_{\;\,\; l}-\kappa f^{\mu\nu}f_{\mu i}f_{\nu j}f_{kl}+2\kappa_1 f^{\mu\nu}f_{\mu i}f_{\nu k}f_{jl}-\frac{1}{4}f^{\mu\nu}f_{\mu\nu}f_{ik}f_{jl}+\frac{1}{8}f^{\mu\nu}f_{ij}f_{kl}f_{\mu\nu}.
\label{thetasquare}
\end{equation}
We observe two crucial facts from this formula: First, two more gauge invariant
structures, $f^{\mu\nu}f_{\mu\nu}f_{ik}f_{jl}$ and $f^{\mu\nu}f_{ij}f_{kl}f_{\mu\nu}$, emerge
after the integration by parts. Second, these five terms exhaust all possible combinations of
four $\rm U(1)$ field strengths $f_{\mu\nu}$ contracted with two $\theta$'s and two
metric tensors. One naturally wonders what the $\theta$-exact completions of these
two terms are, and whether more structures emerge after a $\theta$-exact integration-by-parts
procedure is performed. This is, however, not an easy task, since the universal
integration-by-parts/cyclicity identities on Moyal space exist only for the
two-products/integrals over the product of three functions. Yet it is still possible to
convert both $\theta$-exact interactions \eqref{2.8} and \eqref{2.9} fully into terms of the
commutative field strength $f_{\mu\nu}$ by applying a series of integration-by-parts
identities that result from verifying the Ward identity on the four-photon coupling
vertex. A detailed procedure for model I is given in Appendix A; here we only list
the final result for both models I and II:
\begin{equation}
\begin{split}
S^{e^2}_{\rm (I)}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\kappa_1 f^{\mu\nu}[f_{\mu i}f_{\nu k}f_{jl}]_{\star_{3'}}+2\kappa_2 f^{\mu\nu}(a_i\star_2\partial_j(f_{\mu k}\star_2 f_{\nu l})-[f_{\mu k} a_i\partial_j f_{\nu l}]_{\star_{3'}}-[a_i f_{\mu k}\partial_j f_{\nu l}]_{\star_{3'}})
\\&-\frac{1}{4}f^{\mu\nu}\left[f_{\mu\nu}f_{ik}f_{jl}\right]_{\star_{3'}}+\frac{1}{8}\left(f^{\mu\nu}\star_2 f_{ij}\right)\left(f_{kl}\star_2 f_{\mu\nu}\right)+\frac{1}{2}\theta^{pq}f^{\mu\nu}\left[\partial_i f_{jk} f_{lp}\partial_q f_{\mu\nu}\right]_{\mathcal M_{\rm (I)}},
\end{split}
\label{2.10}
\end{equation}
and
\begin{equation}
\begin{split}
S^{e^2}_{\rm (II)}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\kappa'_1f^{\mu\nu}(2f_{\mu i}\star_2(f_{jk}\star_2 f_{l\nu})-[f_{\mu i}f_{jk}f_{l\nu}]_{\star_3})
-4\kappa'_2 f^{\mu\nu}((a_i\star_2\partial_j f_{\mu k})\star_2 f_{\nu l}-[a_i\partial_j f_{\mu k}f_{\nu l}]_{\star_3})
\\&-\frac{1}{4}f^{\mu\nu}(3f_{ik}\star_2\left(f_{jl}\star_2 f_{\mu\nu}\right)-2\left[f_{ik}f_{jl}f_{\mu\nu}\right]_{\star_3})
+\frac{1}{8}f^{\mu\nu}(2f_{ij}\star_2\left(f_{kl}\star_2 f_{\mu\nu}\right)-\left[f_{ij}f_{kl}f_{\mu\nu}\right]_{\star_3})
\\&-\frac{1}{4}\theta^{pq}\theta^{rs}f^{\mu\nu}\left[\partial_k f_{ri}\partial_j f_{lp}\partial_q\partial_s f_{\mu\nu}+\partial_i\partial_r f_{jk}\partial_s(f_{lp}\partial_q f_{\mu\nu})\right]_{\mathcal M_{\rm (II)}}.
\end{split}
\label{2.11}
\end{equation}
The products $\mathcal M_{\rm (I,II)}$ are defined via the
momentum structures $f_{\rm (I,II)}$ in Appendix A.
As stated above, the five terms of order $\theta^2$ exhaust all possible index
arrangements. Also, the $\theta^{ij}\theta^{kl}f_{ik}f_{jl}f_{\mu\nu}$ and
$\theta^{ij}\theta^{kl}f_{ij}f_{kl}f_{\mu\nu}$ terms can easily be generated via the $\theta$-iterative
procedure~\cite{Bichl:2001cq}. It is therefore reasonable to introduce two additional freedom
parameters, $(\kappa_3, \kappa_4)$ in (I) and $(\kappa'_3, \kappa'_4)$ in (II), as the $\theta$-exact completion of these two freedoms. In this way we produce the final forms of the $a_{\mu}^4$-order actions (I,II):
\begin{equation}
\begin{split}
S^{e^2}_{\rm (I)_{\kappa,\kappa_1,\kappa_2,\kappa_3,\kappa_4}}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\kappa_1f^{\mu\nu}[f_{\mu i}f_{\nu k}f_{jl}]_{\star_{3'}}+2\kappa_2 f^{\mu\nu}
(a_i\star_2\partial_j(f_{\mu k}\star_2 f_{\nu l})-[f_{\mu k} a_i\partial_j f_{\nu l}]_{\star_{3'}}-[a_i f_{\mu k}\partial_j f_{\nu l}]_{\star_{3'}})
\\&
-\frac{\kappa_3}{4}f^{\mu\nu}\left[f_{\mu\nu}f_{ik}f_{jl}\right]_{\star_{3'}}+\frac{\kappa_4}{8}\left(f^{\mu\nu}\star_2 f_{ij}\right)\left(f_{kl}\star_2 f_{\mu\nu}\right)+\frac{1}{2}\theta^{pq} f^{\mu\nu} \left[\partial_i f_{jk} f_{lp}\partial_q f_{\mu\nu}\right]_{\mathcal M_{\rm (I)}},
\end{split}
\label{2.15}
\end{equation}
and
\begin{equation}
\begin{split}
S^{e^2}_{\rm (II)_{\kappa,\kappa'_1,\kappa'_2,\kappa'_3,\kappa'_4}}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\kappa^2(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\kappa(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\kappa'_1f^{\mu\nu}(2f_{\mu i}\star_2(f_{jk}\star_2 f_{l\nu})-[f_{\mu i}f_{jk}f_{l\nu}]_{\star_3})
-4\kappa'_2f^{\mu\nu}((a_i\star_2\partial_j f_{\mu k})\star_2 f_{\nu l}-[a_i\partial_j f_{\mu k}f_{\nu l}]_{\star_3})
\\&-\frac{\kappa'_3}{4}f^{\mu\nu}\left(3f_{ik}\star_2(f_{jl}\star_2 f_{\mu\nu}\right)-2\left[f_{ik}f_{jl}f_{\mu\nu}\right]_{\star_3})
+\frac{\kappa'_4}{8}f^{\mu\nu}\left(2f_{ij}\star_2(f_{kl}\star_2 f_{\mu\nu}\right)-\left[f_{ij}f_{kl}f_{\mu\nu}\right]_{\star_3})
\\&-\frac{1}{4}\theta^{pq}\theta^{rs}f^{\mu\nu}\left[\partial_k f_{ri}\partial_j f_{lp}\partial_q\partial_s f_{\mu\nu}+\partial_i\partial_r f_{jk}\partial_s(f_{lp}\partial_q f_{\mu\nu})\right]_{\mathcal M_{\rm (II)}}.
\end{split}
\label{2.16}
\end{equation}
Note that the $\kappa$-terms are identical in both of the above actions, as they should be in the $\kappa$-settings.
\subsection{Feynman rule for the four-photon interaction}
Since the triple-photon Feynman rule from the action (\ref{3photon})
was given previously in \cite{Trampetic:2014dea}, it need not be repeated here.
From the actions \eqref{2.15} and \eqref{2.16} we read out the corresponding
four-photon interactions in momentum space, with all four
momenta $p_i$ in Fig.~\ref{fig:vertex} taken as incoming:
\begin{equation}
\begin{split}
\Gamma^{\mu_1\mu_2\mu_3\mu_4}_{\rm (I)}\left(p_1,p_2,p_3,p_4\right)=&-i\frac{e^2}{4}[(\kappa^2\Gamma_{\rm A}^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
+\kappa\Gamma_{\rm B}^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
\\&+\kappa_1\Gamma_1^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)+\kappa_2\Gamma_2^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
+\kappa_3\Gamma_3^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
\\&+\kappa_4\Gamma_4^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)+\Gamma_5^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right))
\\&+{\rm all\; S_4\; permutations\; over}\; \{p_i\}{\rm \; and}\; \{\mu_i\}{\rm \; simultaneously}]\delta\left(p_1+p_2+p_3+p_4\right),
\end{split}
\label{3.1}
\end{equation}
and
\begin{equation}
\begin{split}
\Gamma^{\mu_1\mu_2\mu_3\mu_4}_{\rm (II)}\left(p_1,p_2,p_3,p_4\right)=&-i\frac{e^2}{4}[(\kappa^2\Gamma_{\rm A}^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
+\kappa\Gamma_{\rm B}^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
\\&+\kappa'_1{\Gamma'}_1^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)+\kappa'_2{\Gamma'}_2^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
+\kappa'_3{\Gamma'}_3^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)
\\&+\kappa'_4{\Gamma'}_4^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right)+{\Gamma'}_5^{\mu_1\mu_2\mu_3\mu_4}\left(p_1,p_2,p_3,p_4\right))
\\&+{\rm all\; S_4\; permutations\; over}\; \{p_i\}{\rm \; and}\; \{\mu_i\}{\rm \; simultaneously}]\delta\left(p_1+p_2+p_3+p_4\right),
\end{split}
\label{3.2}
\end{equation}
with $\Gamma_{\rm A}$, $\Gamma_{\rm B}$, $\Gamma_i$, and $\Gamma'_i$ given in Appendix B.
\begin{figure}
\begin{center}
\includegraphics[width=6cm,height=4cm]{NC4photonvertex.eps}
\end{center}
\caption{Four-photon field vertex $\Gamma^{\mu_1\mu_2\mu_3\mu_4}(p_1,p_2,p_3,p_4)$ with all incoming momenta.}
\label{fig:vertex}
\end{figure}
\section{Four-photon interaction at one loop}
\begin{figure}
\begin{center}
\includegraphics[width=7cm,height=4cm]{NCphotonphotontadpole.eps}
\end{center}
\caption{Four-photon-tadpole contribution to the photon two-point function $T^{\mu\nu}(p)$.}
\label{fig:photontadpole}
\end{figure}
Following the derivation of the four-photon self-coupling vertices, we move on to
study the simplest perturbative quantum loop correction induced by this coupling: the one-loop
four-photon-tadpole contribution to the photon polarization tensor. We first read
off the tadpole integral from Fig.~\ref{fig:photontadpole}. After some arithmetic we find that the various nonlocal factors listed in Appendix A simplify into two cases, either unity or $f_{\star_2}^2(p,\ell)$, i.e.
\begin{equation}
\begin{split}
T^{\mu\nu}_{\rm (I,II)}(p)=&\frac{1}{2}\mu^{d-D}\int\,\frac{d^D \ell}{(2\pi)^D}
\frac{-ig_{\rho\sigma}}{\ell^2}\Gamma^{\mu\nu\rho\sigma}_{\rm (I,II)}(p,-p,\ell,-\ell)\\
=&e^2\tau^{\mu\nu}_{\rm (I,II)}\;\mu^{d-D} \int\,\frac{d^D \ell}{(2\pi)^D}\frac{\ell^2}{\ell^2}+e^2\mathcal T_{\rm (I,II)}^{\mu\nu\rho\sigma}\mu^{d-D} \int\,\frac{d^D \ell}{(2\pi)^D}\frac{\ell_\rho \ell_\sigma}{\ell^2}f_{\star_2}^2(p,\ell).
\end{split}
\label{3.3}
\end{equation}
Since the first integral in the above Eq. (\ref{3.3}) vanishes according to the dimensional regularization prescription~\cite{Leibbrandt:1975dj}, the only remaining integral is the second one. The tensors $\tau^{\mu\nu}_{\rm (I,II)}$ and ${\cal T}_{\rm (I,II)}^{\mu\nu\rho\sigma}$ are given in Appendix C.
\subsection{Tadpole integral}
Now we compute our $D$-dimensional tadpole integral
\begin{equation}
I^{\mu\nu}=\int\,\frac{d^D \ell}{(2\pi)^D}\frac{\ell^\mu \ell^\nu}{\ell^2}f_{\star_2}^2(p,\ell).
\label{3.4}
\end{equation}
We have encountered similar computations in our prior works~\cite{Horvat:2011qn,Horvat:2013rga}. Here, to ensure the consistency of our methodology, we choose to evaluate the integral in two different ways, both of which are extensions of sound methods for commutative field theories. As shown below, the two methods agree with each other in the $D\to 4$ limit, as expected.
(1) The first is the conventional method, which multiplies the tadpole integral by the identity $1=(\ell+p)^2/(\ell+p)^2$ to turn it into a bubble integral. We then parameterize the integral as in our prior works \cite{Horvat:2013rga}, which ultimately yields the following expression in $D$ dimensions
\begin{equation}
\begin{split}
&I^{\mu\nu}=g^{\mu\nu}\frac{1}{D-1}\left(4(\theta p)^{-\frac{D}{2}}\mathcal K\left[\frac{D}{2}-1;0,0\right]-\frac{4p^2}{(\theta p)^2}\left((1-D)\mathcal K\left[\frac{D}{2}-2;0,1\right]
\right.\right.\\&\left.\left.+2D\mathcal K\left[\frac{D}{2}-2;1,1\right]\right)-p^2\left((1-D)\mathcal W\left[\frac{D}{2}-1;0,1\right]-2D\mathcal W\left[\frac{D}{2}-1;1,1\right]\right)\right)
\\&+p^\mu p^\nu\left(\frac{4}{(\theta p)^2}\left((D-2)\mathcal K\left[\frac{D}{2}-2;1,0\right]+2(1-D)\mathcal K\left[\frac{D}{2}-2;1,1\right]\right)\right.\\&\left.-\left((1-D)\mathcal W\left[\frac{D}{2}-1;1,0\right]+2D\mathcal W\left[\frac{D}{2}-1;1,1\right]\right)\right)
\\&+(\theta p)^\mu(\theta p)^\nu\frac{1}{D-1}\left(-4D(\theta p)^{-1-\frac{D}{2}}\mathcal K\left[\frac{D}{2}-1;0,0\right]+\frac{4p^2}{(\theta p)^4}\left((1-D)\mathcal K\left[\frac{D}{2}-2;0,1\right]
\right.\right.\\&\left.\left.+(2D-1)\mathcal K\left[\frac{D}{2}-2;1,1\right]\right)+\frac{p^2}{(\theta p)^2}\left((1-D)\mathcal W\left[\frac{D}{2}-1;0,1\right]+2D\mathcal W\left[\frac{D}{2}-1;1,1\right]\right)\right),
\end{split}
\label{3.5}
\end{equation}
where the special function integrals are defined as follows
\begin{gather}
\mathcal{K}[\nu;a,b]=2^{\nu}(\theta p)^{-\nu}\int\limits_0^1\,dx\,x^a(1-x)^b X^{\nu} K_\nu[X],
\label{3.6}\\
\mathcal{W}[\nu;a,b]=\int\limits_0^1\,dx\,x^a(1-x)^b W_\nu[X].
\label{3.7}
\end{gather}
Here $K_\nu[X]$ is the modified Bessel function of the second kind, while
\begin{eqnarray}
W_\nu[X]=
(\theta p)^{-2\nu}\Bigg[X^{2\nu}\Gamma\left[-\nu\right] {}_1F_2\left(\frac{1}{2};\frac{3}{2},\nu+1;\frac{X^2}{4}\right)
-\frac{2^{2\nu}}{1-2\nu}\Gamma\left[\nu\right]{}_1F_2\left(\frac{1-2\nu}{2};1-\nu,\frac{3-2\nu}{2};\frac{X^2}{4}\right)\Bigg],
\label{3.8}
\end{eqnarray}
and
\begin{equation}
X=(x(1-x)p^2(\theta p)^2)^{\frac{1}{2}}.
\label{3.9}
\end{equation}
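For numerical cross-checks of the $D$-dimensional expressions, the special functions \eqref{3.6}--\eqref{3.8} can be implemented directly with the {\tt Python} library {\tt mpmath}; the sketch below is our own auxiliary aid, valid only for noninteger $\nu$ (where $\Gamma\left[-\nu\right]$ in \eqref{3.8} is finite), and is not part of the derivation.
\begin{verbatim}
import mpmath as mp

def X(x, p2, tp2):
    # Eq. (3.9): X = sqrt( x (1-x) p^2 (theta p)^2 )
    return mp.sqrt(x * (1 - x) * p2 * tp2)

def K_cal(nu, a, b, p2, tp2):
    # Eq. (3.6): 2^nu (theta p)^(-nu) int_0^1 dx x^a (1-x)^b X^nu K_nu[X]
    f = lambda x: x**a * (1 - x)**b * X(x, p2, tp2)**nu \
                  * mp.besselk(nu, X(x, p2, tp2))
    return 2**nu * mp.power(tp2, -nu / 2) * mp.quad(f, [0, 1])

def W(nu, Xval, tp2):
    # Eq. (3.8); Gamma[-nu] has poles at integer nu, noninteger nu only
    t1 = Xval**(2 * nu) * mp.gamma(-nu) \
         * mp.hyp1f2(mp.mpf(1) / 2, mp.mpf(3) / 2, nu + 1, Xval**2 / 4)
    t2 = 2**(2 * nu) / (1 - 2 * nu) * mp.gamma(nu) \
         * mp.hyp1f2((1 - 2 * nu) / 2, 1 - nu, (3 - 2 * nu) / 2, Xval**2 / 4)
    return mp.power(tp2, -nu) * (t1 - t2)
\end{verbatim}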
In the limit $D\to 4-\epsilon$, after lengthy manipulations, the tadpole integral $I^{\mu\nu}$ reduces to the following two terms
\begin{equation}
I^{\mu\nu}=\frac{1}{6\pi^2}\frac{1}{(\theta p)^4}\left(g^{\mu\nu}-4\frac{(\theta p)^\mu(\theta p)^\nu}{(\theta p)^2}\right).
\label{3.10}
\end{equation}
The integrals $\mathcal K\left[\frac{D}{2}-2;0,1\right]$ and $\mathcal K\left[\frac{D}{2}-2;1,0\right]$ become UV divergent when $D\to 2-\epsilon$. This
hard $1/\epsilon$ divergence occurs at the same place as the IR divergence when $D\to 4-\epsilon$; i.e., the UV and IR divergences now get ``mixed'' in the same term.
(2) As a cross-check, a second computation using the $n$-nested zero regulator~\cite{Leibbrandt:1975dj} is also performed. In this approach we first parameterize the integral as follows
\begin{equation}
\begin{split}
I^{\mu\nu}=
\int\,\frac{d^D \ell}{(2\pi)^D}\frac{\ell^\mu \ell^\nu}{\ell^2}f_{\star_2}^2(p,\ell)
=-2\int\limits_0^\infty d\lambda\; \lambda^2
\,\int\limits_0^{1/\lambda} dy \left(y-\frac{1}{\lambda}\right)\,
\\
\cdot\int\,\frac{d^D \ell}{(2\pi)^D}\left(\frac{\ell^2}{D}g^{\mu\nu}-\frac{y^2}{4}(\theta p)^\mu(\theta p)^\nu\right)\exp\left[-\lambda \ell^2-\frac{\lambda y^2}{4}\left(\theta p\right)^2\right].
\end{split}
\label{3.11}
\end{equation}
Next we introduce the loop momentum integral
\begin{equation}
\int\,\frac{d^D \ell}{(2\pi)^D}\exp\left[-\lambda \ell^2\right]=\left(4\pi\lambda\right)^{-\frac{D}{2}}\exp\left[-\lambda f\left(\frac{D}{2}\right)\right],
\label{3.12}
\end{equation}
where $f\left(\frac{D}{2}\right)$ is the $n$-nested zero regulator~\cite{Leibbrandt:1975dj}, which satisfies the following properties:
\begin{itemize}
\item $f\left(\frac{D}{2}\right)$ is a nonzero analytic function
\item $f\left(\frac{D}{2}\right)=0$ when $D\in Z^+$
\item $f^{(\ell)}\left(\frac{D}{2}\right)=0$ when $D\in Z^+$ and $\ell\le \ell_0\in N$
\item $\forall {\rm Re}D\not\in Z^+,\; \exists {\rm Im} D,\; {\rm Re}\left[f\left(\frac{D}{2}\right)\right]>0$.
\end{itemize}
The inclusion of the $f\left(\frac{D}{2}\right)$ regulator leads to planar modified Bessel--hypergeometric function combinations similar to those of the first approach, but with the variable $\left(f\left(\frac{D}{2}\right)(\theta p)^2\right)^{\frac{1}{2}}$ instead of $\left(x(1-x)p^2(\theta p)^2\right)^{\frac{1}{2}}$, and with no $x$ integration. In the
end, we find
\begin{equation}
I^{\mu\nu}=I\cdot\left(-\frac{g^{\mu\nu}}{D}+\frac{(\theta p)^\mu(\theta p)^\nu}{(\theta p)^2}\right),
\label{3.13}
\end{equation}
with
\begin{equation}
\begin{split}
I=(4\pi)^{-\frac{D}{2}}&\left(\frac{(\theta p)^2}{4}\right)^{-1}\left\{\left(f\left(\frac{D}{2}\right)\right)^{\frac{D}{2}-1}\Gamma\left(1-\frac{D}{2}\right)-W_{\frac{D}{2}}\left[\left(f\left(\frac{D}{2}\right)(\theta p)^2\right)^{\frac{1}{2}}\right]\right.
\\&\left.-2\left(f\left(\frac{D}{2}\right)\right)^{\frac{D}{4}-\frac{1}{2}}\left(\frac{(\theta p)^2}{4}\right)^{-\frac{1}{2}-\frac{D}{4}}K_{\frac{D}{2}-1}\left[\left(f\left(\frac{D}{2}\right)(\theta p)^2\right)^{\frac{1}{2}}\right]\right\}.
\end{split}
\label{3.14}
\end{equation}
The above scalar integral $I$ reduces to a single IR divergent term
$-\frac{2}{3\pi^2}(\theta p)^{-4}$ when $D\to 4-\epsilon$, with all other terms
being suppressed by the $f\left(\frac{D}{2}\right)$ regulator. This matches
the result (\ref{3.10}) obtained from the first method.
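This agreement is elementary to verify: inserting the scalar value $I=-\frac{2}{3\pi^2}(\theta p)^{-4}$ into the decomposition (\ref{3.13}) at $D=4$ reproduces (\ref{3.10}) exactly. A short {\tt SymPy} bookkeeping check of ours:
\begin{verbatim}
import sympy as sp

tp2 = sp.symbols('tp2', positive=True)   # tp2 = (theta p)^2
I = -2 / (3 * sp.pi**2) / tp2**2         # scalar integral in the D -> 4 limit
coeff_g  = I * sp.Rational(-1, 4)        # -g^{mu nu}/D piece of Eq. (3.13), D = 4
coeff_tt = I / tp2                       # (theta p)^mu (theta p)^nu / tp2 piece
# Eq. (3.10): (1/(6 pi^2)) tp2^{-2} [ g^{mu nu} - 4 (tp)^mu (tp)^nu / tp2 ]
assert sp.simplify(coeff_g - 1 / (6 * sp.pi**2 * tp2**2)) == 0
assert sp.simplify(coeff_tt + 4 / (6 * sp.pi**2 * tp2**3)) == 0
\end{verbatim}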
\subsection{Four-photon-tadpole contributions in the limit $D\to 4-\epsilon$}
By combining the partial tensor reduction from above and the master integral in $D=4$, we obtain the following results
in the $\kappa$-settings,
\begin{equation}
\begin{split}
T_{\rm (I,II)}^{\mu\nu}(p)&=\frac{e^2}{(4\pi)^2}\{[g^{\mu\nu}p^2-p^\mu p^\nu]T_{1_{\rm (I,II)}}(p)
+(\theta p)^\mu (\theta p)^\nu T_{2_{\rm (I,II)}}(p)
+[g^{\mu\nu}(\theta p)^2-(\theta\theta)^{\mu\nu}p^2
+ p^{\{\mu}(\theta\theta p)^{\nu\}}]T_{3_{\rm (I,II)}}(p)
\\&+[(\theta\theta)^{\mu\nu}(\theta p)^2+(\theta\theta p)^\mu(\theta\theta p)^\nu]T_{4_{\rm (I,II)}}(p)
+ (\theta p)^{\{\mu} (\theta\theta\theta p)^{\nu\}} T_{5_{\rm (I,II)}}(p)\},
\end{split}
\label{4.1T}
\end{equation}
with model I coefficients
\begin{equation}
\begin{split}
T_{1_{\rm (I)}}(p)&=\frac{4}{3}\left(\frac{\tr\theta\theta}{(\theta p)^4}+4\frac{(\theta\theta p)^2}{(\theta p)^6}\right)\big(\kappa_4-1\big),
\\
T_{2_{\rm (I)}}(p)&=\frac{16}{3}\frac{1}{(\theta p)^4} \,\big(2\kappa^2-4\kappa+6\kappa_1+2\kappa_2-2\kappa_3+\kappa_4-1\big),
\\
T_{3_{\rm (I)}}(p)&=\frac{16}{3}\frac{1}{(\theta p)^4}\big(2\kappa^2-2\kappa+\kappa_1+\kappa_2\big),
\\
T_{4_{\rm (I)}}(p)&=\frac{32}{3}\frac{p^2}{(\theta p)^6}\big(\kappa^2-2\kappa+\kappa_1+\kappa_2\big),
\\
T_{5_{\rm (I)}}(p)&=\frac{16}{3}\frac{p^2}{(\theta p)^6}\big(2\kappa-\kappa_1-\kappa_2\big),
\end{split}
\label{Tik}
\end{equation}
and model II
\begin{equation}
\begin{split}
T_{1_{\rm (II)}}(p)&=\frac{4}{3}\left(\frac{\tr\theta\theta}{(\theta p)^4}+4\frac{(\theta\theta p)^2}{(\theta p)^6}\right)\big(2\kappa'_2+\kappa'_4-3\big),
\\
T_{2_{\rm (II)}}(p)&=\frac{32}{3}\frac{1}{(\theta p)^4} \,\big(\kappa^2-2\kappa+2\kappa'_1+2\kappa'_2-\kappa'_3\big),
\\
T_{3_{\rm (II)}}(p)&=\frac{16}{3}\frac{1}{(\theta p)^4}\big(2\kappa^2-2\kappa+2\kappa'_2\big),
\\
T_{4_{\rm (II)}}(p)&=\frac{32}{3}\frac{p^2}{(\theta p)^6}\big(\kappa^2-2\kappa+2\kappa'_2\big),
\\
T_{5_{\rm (II)}}(p)&=\frac{32}{3}\frac{p^2}{(\theta p)^6}\big(\kappa-\kappa'_2\big).
\end{split}
\label{Tik'}
\end{equation}
The tensor structure remains exactly the same as for the photon bubble diagram (Fig.~\ref{fig:photonbubble}), as one would expect. However, we immediately notice the absence of UV and logarithmically divergent terms, in contrast to the photon bubble diagram results \cite{Horvat:2013rga}. In addition, the tadpole diagram produces no finite terms either. The tadpole contribution of model I can be made equal to that of model II by setting
\begin{eqnarray}
\kappa_1+\kappa_2&=&2\kappa'_2,
\nonumber\\
4\kappa_1-2\kappa_3&=&4\kappa'_1-2\kappa'_3-2\kappa'_2-\kappa'_4+3,
\nonumber\\
\kappa_4&=&2\kappa'_2+\kappa'_4-2,
\label{3.21}
\end{eqnarray}
and in particular $T^{\mu\nu}_{\rm (I)}(p)=T^{\mu\nu}_{\rm (II)}(p)$ for $\kappa_i=\kappa'_i=1, \forall i$. We also notice that $T_3$, $T_4$, and $T_5$ can only be set to zero simultaneously if $\kappa=0$. Since the value of $\kappa$ directly affects the divergences in the bubble diagram~\cite{Horvat:2013rga}, we conclude that the tadpole divergences
should be analyzed only after being summed with the bubble diagram contribution.
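The matching relations \eqref{3.21} follow from equating the bracketed polynomials of \eqref{Tik} and \eqref{Tik'}, including their relative prefactors; we cross-checked them with the short {\tt SymPy} script below (our aid, not part of the derivation).
\begin{verbatim}
import sympy as sp

k, k1, k2, k3, k4 = sp.symbols("kappa kappa1 kappa2 kappa3 kappa4")
q1, q2, q3, q4 = sp.symbols("kp1 kp2 kp3 kp4")   # the primed kappa'_i

# bracketed polynomials of (Tik)/(Tik'), weighted by their prefactors
# (16/3, 32/3, ... written as 16, 32, ... after stripping the 1/3)
TI = [k4 - 1,
      16*(2*k**2 - 4*k + 6*k1 + 2*k2 - 2*k3 + k4 - 1),
      16*(2*k**2 - 2*k + k1 + k2),
      32*(k**2 - 2*k + k1 + k2),
      16*(2*k - k1 - k2)]
TII = [2*q2 + q4 - 3,
       32*(k**2 - 2*k + 2*q1 + 2*q2 - q3),
       16*(2*k**2 - 2*k + 2*q2),
       32*(k**2 - 2*k + 2*q2),
       32*(k - q2)]

sol = sp.solve([a - b for a, b in zip(TI, TII)], [k2, k3, k4])
print(sol)   # kappa2 = 2 kp2 - kappa1,
             # kappa3 = 2 kappa1 - 2 kp1 + kp2 + kp3 + kp4/2 - 3/2,
             # kappa4 = 2 kp2 + kp4 - 2

ones = {s: 1 for s in (k1, k2, k3, k4, q1, q2, q3, q4)}
assert all(sp.expand((a - b).subs(ones)) == 0 for a, b in zip(TI, TII))
\end{verbatim}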
\section{Summing over bubble and tadpole diagrams for $D\to 4-\epsilon$}
As stated above, to complete our analysis of the photon two-point function we have to sum the tadpole (Fig.~\ref{fig:photontadpole}) (\ref{3.3}) and the bubble (Fig.~\ref{fig:photonbubble}) contributions; the latter has the same structure as (\ref{4.1T}), but with the coefficients $T_i(p)$ replaced by the $B_i(p)$ given below.
\begin{figure}
\begin{center}
\includegraphics[width=8cm,height=4.5cm]{NCphotonphotonbubble.eps}
\end{center}
\caption{Three-photon-bubble contribution to the photon two-point function $B^{\mu\nu}(p)$.}
\label{fig:photonbubble}
\end{figure}
To do that, we express the bubble contribution ${\cal B}^{\mu\nu}(p)$ in terms of the $\kappa$-settings for the gauge field strength at the $e^2$ order (\ref{2.1}), by converting each $B_i$-coefficient from the $\kappa_g$-setting~\cite{Horvat:2013rga,Trampetic:2014dea} to the $\kappa$-setting. We first substitute $\kappa_g=\kappa^{-1}$, then extract from all coefficients a factor $\kappa^{-2}$, since it is absorbed as an overall rescaling factor of the corresponding field redefinition~\cite{Trampetic:2014dea}. The remaining divergent parts of the $B_i$-coefficients are then given in (\ref{4.2}) in terms of the $\kappa$-settings, so that they match their tadpole counterparts and can be summed with them.
\begin{gather}
\begin{split}
B_1(p)&\sim+\bigg(\frac{1}{3}\big(1-3\kappa\big)^2 +\frac{1}{3}\big(1+2\kappa\big)^2\; \frac{p^2(\tr\theta\theta)}{(\theta p)^2}
+\frac{2}{3}\big(1+4\kappa+\kappa^2\big)\; \frac{p^2(\theta\theta p)^2}{(\theta p)^4}\bigg)
\left[\frac{2}{\epsilon} + \ln(\mu^2(\theta p)^2)\right]
\\&-\frac{8}{3}\big(1-\kappa\big)^2\;\frac{1}{(\theta p)^6}\bigg((\tr\theta\theta)(\theta p)^2+4(\theta\theta p)^2\bigg)\,,
\\
B_2(p)&\sim+\bigg(\frac{4}{3}\big(1-\kappa\big)^2\; \frac{p^4(\theta\theta p)^2}{(\theta p)^6}+\frac{1}{3}\big(1-2\kappa-5\kappa^2\big)\frac{p^4(\tr\theta\theta)}{(\theta p)^4}+\frac{1}{3}\big(25
-86\kappa+73\kappa^2\big)\frac{p^2}{(\theta p)^2}\bigg)\left[\frac{2}{\epsilon} + \ln(\mu^2(\theta p)^2)\right]
\\&-\frac{8}{3}\big(1-3\kappa\big)\big(3-\kappa\big)\frac{1}{(\theta p)^4}
+\frac{16}{3}(1-\kappa\big)^2\frac{p^2}{(\theta p)^8}\bigg((\tr\theta\theta)(\theta p)^2+6(\theta\theta p)^2\bigg),
\\
B_3(p)&\sim-\frac{1}{6}\big(1-2\kappa-11\kappa^2\big)\frac{p^2}{(\theta p)^2}
\left[\frac{2}{\epsilon} + \ln(\mu^2(\theta p)^2)\right]
-\frac{4}{3(\theta p)^4}\big(1-10\kappa+17\kappa^2\big),
\\
B_4(p)&\sim-\big(1+\kappa\big)^2\frac{p^4}{(\theta p)^4}\left[\frac{2}{\epsilon} + \ln(\mu^2(\theta p)^2)\right]-\frac{16p^2}{3(\theta p)^6}\big(1-6\kappa+7\kappa^2\big),
\\
B_5(p)&\sim+\frac{2}{3}\big(1+\kappa+4\kappa^2\big)\frac{p^4}{(\theta p)^4}
\left[\frac{2}{\epsilon} + \ln(\mu^2(\theta p)^2)\right]+\frac{32p^2}{3(\theta p)^6}\big(1-\kappa\big)\big(1-2\kappa\big).
\end{split}
\label{4.2}
\end{gather}
All $B_i(p)$ coefficients are computed for arbitrary $\kappa$, and $\sim$ means that we have neglected all finite terms in the above equations. We observe the presence of UV divergences as well as of quadratic UV/IR mixing in all $B_i$'s. The logarithmic IR divergences from both planar and nonplanar sources in the bubble diagram have identical coefficients and combine into a single $\ln\mu^2(\theta p)^2$ term. Finally, no single $\kappa$ value is capable of removing all of the novel divergences.
\subsection{$\theta$ independent elimination of the bubble plus tadpole IR divergences}
Since the tadpole integrals give only a quadratic IR divergence, we perform a sum
over the tadpole and the quadratically IR divergent parts of the bubble diagram~\cite{Horvat:2013rga} for an arbitrary choice of the antisymmetric tensor $\theta^{\mu\nu}$ and of the deformation freedom parameters. Working out the arithmetic, we get for both models I and II in the IR regime the sum $\Pi_{\rm(I,II)}^{\mu\nu}(p)_{\rm IR}={\cal B}^{\mu\nu}(p)_{\rm IR} + T^{\mu\nu}_{\rm (I,II)}(p)_{\rm IR}$ of the bubble and tadpole contributions to the photon polarization tensor. It again has the structure of (\ref{4.1T}), but with the coefficients $T_i(p)$ replaced by the following sums:
\begin{equation}
\Pi_{i_{\rm (I,II)}}(p)_{\rm IR}=B_i(p)_{\rm IR}+T_{i_{\rm (I,II)}}(p)_{\rm IR}, \;\forall i=1,...,5.
\label{4.5IR}
\end{equation}
Since the tadpole produces no UV, logarithmic, or finite contributions, we have
\begin{equation}
\Pi_i(p)_{\rm UV}=B_i(p)_{\rm UV}, \;\forall i=1,...,5.
\label{4.5UV}
\end{equation}
According to (\ref{4.5IR}), in the rest of this article the label $\Pi^{\mu\nu}$ always represents the sum of the bubble and tadpole contributions to the photon polarization tensor. A summation over the leading IR terms in the bubble and tadpole diagrams provides the IR results (for the overall UV/IR mixings) in model I:
\begin{gather}
\begin{split}
\Pi_{1_{\rm (I)}}(p)_{\rm IR}&\sim-\frac{4}{3}\frac{1}{(\theta p)^4}\Big(\tr\theta\theta+4\frac{(\theta\theta p)^2}{(\theta p)^2}\Big)\Big(2\big(\kappa-1\big)^2-\kappa_4+1\Big),
\\
\Pi_{2_{\rm (I)}}(p)_{\rm IR}&\sim+\frac{8}{3}\frac{1}{(\theta p)^4}\Big(\kappa^2+2\kappa+12\kappa_1+4\kappa_2-4\kappa_3+2\kappa_4-5\Big)
+\frac{16}{3}\frac{p^2}{(\theta p)^4}\Big(\frac{\tr\theta\theta}{(\theta p)^2}+6\frac{(\theta\theta p)^2}{(\theta p)^4}\Big)\Big(\kappa-1\Big)^2,
\\
\Pi_{3_{\rm (I)}}(p)_{\rm IR}&\sim-\frac{4}{3}\frac{1}{(\theta p)^4}\Big(9\kappa^2-2\kappa-4\kappa_1-4\kappa_2+1\Big),
\\
\Pi_{4_{\rm (I)}}(p)_{\rm IR}&\sim-\frac{16}{3}\frac{p^2}{(\theta p)^6}\Big(5\kappa^2-2\kappa-2\kappa_1-2\kappa_2+1\Big),
\\
\Pi_{5_{\rm (I)}}(p)_{\rm IR}&\sim+\frac{16}{3}\frac{p^2}{(\theta p)^6}\Big(4\kappa^2-4\kappa-\kappa_1-\kappa_2+2\Big),
\end{split}
\label{4.6}
\end{gather}
and in model II:
\begin{gather}
\begin{split}
\Pi_{1_{\rm (II)}}(p)_{\rm IR}&\sim-\frac{4}{3}\frac{1}{(\theta p)^4}\Big(\tr\theta\theta+4\frac{(\theta\theta p)^2}{(\theta p)^2}\Big)\Big(2\big(\kappa-1\big)^2-2\kappa'_3-\kappa'_4+3\Big),
\\
\Pi_{2_{\rm (II)}}(p)_{\rm IR}&\sim+\frac{8}{3}\frac{1}{(\theta p)^4}\Big(\kappa^2+2\kappa+8\kappa'_1+8\kappa'_2-4\kappa'_3-3\Big)
+\frac{16}{3}\frac{p^2}{(\theta p)^4}\Big(\frac{\tr\theta\theta}{(\theta p)^2}+6\frac{(\theta\theta p)^2}{(\theta p)^4}\Big)\Big(\kappa-1\Big)^2,
\\
\Pi_{3_{\rm (II)}}(p)_{\rm IR}&\sim-\frac{4}{3}\frac{1}{(\theta p)^4}\Big(9\kappa^2-2\kappa-8\kappa'_2+1\Big),
\\
\Pi_{4_{\rm (II)}}(p)_{\rm IR}&\sim-\frac{16}{3}\frac{p^2}{(\theta p)^6}\Big(5\kappa^2-2\kappa-4\kappa'_2+1\Big),
\\
\Pi_{5_{\rm (II)}}(p)_{\rm IR}&\sim+\frac{16}{3}\frac{p^2}{(\theta p)^6}\Big(4\kappa^2-4\kappa-2\kappa'_2+2\Big).
\end{split}
\label{4.11}
\end{gather}
Here we see that, at fixed $\kappa$, the IR divergences in the coefficients $\Pi_{3,4,5}$ depend only
on the combination $\kappa_1+\kappa_2$ in model I or on $\kappa'_2$ in model II. A bit more calculation shows that $\kappa=\kappa_1=\kappa_2=\kappa'_2=1$ annihilates $\Pi_{3,4,5_{\rm (I,II)}}(p)_{\rm IR}$, respectively. Furthermore, $\kappa=1$ also removes the pathological second term in $\Pi_{2_{\rm (I,II)}}(p)_{\rm IR}$. The remaining divergences in \eqref{4.6} and \eqref{4.11} can be readily removed by an appropriate choice of the remaining freedom parameters, proving
that the elimination of all IR divergences is independent of the choice of the $\theta$-matrix elements.
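These statements are easy to verify symbolically. The following {\tt SymPy} fragment (our own bookkeeping aid) reconstructs the bracketed polynomials of $\Pi_{3,4,5}$ in \eqref{4.6} and \eqref{4.11} from the IR parts of \eqref{4.2} and the tadpole coefficients \eqref{Tik}, \eqref{Tik'}, and confirms that all six vanish at $\kappa=\kappa_1=\kappa_2=\kappa'_2=1$.
\begin{verbatim}
import sympy as sp

k, k1, k2, q2 = sp.symbols("kappa kappa1 kappa2 kp2")

# bubble IR parts of Eq. (4.2) plus the tadpoles (Tik)/(Tik'), with the
# common factor 4/3 (theta p)^{-4} (times p^2/(theta p)^2 for Pi_4, Pi_5)
# stripped off
Pi3_I  = -(1 - 10*k + 17*k**2) + 4*(2*k**2 - 2*k + k1 + k2)
Pi4_I  = -4*(1 - 6*k + 7*k**2) + 8*(k**2 - 2*k + k1 + k2)
Pi5_I  =  8*(1 - k)*(1 - 2*k)  + 4*(2*k - k1 - k2)
Pi3_II = -(1 - 10*k + 17*k**2) + 4*(2*k**2 - 2*k + 2*q2)
Pi4_II = -4*(1 - 6*k + 7*k**2) + 8*(k**2 - 2*k + 2*q2)
Pi5_II =  8*(1 - k)*(1 - 2*k)  + 8*(k - q2)

# reproduce the bracketed polynomials of Eqs. (4.6) and (4.11)
assert sp.expand(Pi3_I  + (9*k**2 - 2*k - 4*k1 - 4*k2 + 1)) == 0
assert sp.expand(Pi4_I  + 4*(5*k**2 - 2*k - 2*k1 - 2*k2 + 1)) == 0
assert sp.expand(Pi5_I  - 4*(4*k**2 - 4*k - k1 - k2 + 2)) == 0
assert sp.expand(Pi3_II + (9*k**2 - 2*k - 8*q2 + 1)) == 0
assert sp.expand(Pi4_II + 4*(5*k**2 - 2*k - 4*q2 + 1)) == 0
assert sp.expand(Pi5_II - 4*(4*k**2 - 4*k - 2*q2 + 2)) == 0

# ... and all six vanish at kappa = kappa1 = kappa2 = kappa'_2 = 1
pt = {k: 1, k1: 1, k2: 1, q2: 1}
assert all(e.subs(pt) == 0 for e in
           (Pi3_I, Pi4_I, Pi5_I, Pi3_II, Pi4_II, Pi5_II))
\end{verbatim}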
\subsection{UV divergences}
So far we have seen that the quadratic IR divergence can be canceled by selecting the first-order gauge freedom parameter $\kappa=1$, for all $\theta$. On the other hand, we also know that there are various UV ($1/\epsilon$) divergences in the bubble diagram, which are precisely connected to the logarithmic divergences. Characterizing this part of the behavior requires some simplification by special $\theta^{\mu\nu}$ choices that satisfy the condition that $(\theta\theta)^{\mu\nu}$ becomes minus the identity within a suitable subspace of dimension $n\le 4$. Here we consider two important examples.
\subsubsection{The subdimension $n=2$ and $\theta^{\mu\nu}_1$ matrix}
The first choice is to set $n=2$, as already used in \cite{Horvat:2011bs}.
This choice has the potential to preserve unitarity when the NC directions are spatial,
and it is manifested in the form of the $\theta^{\mu\nu}_1$ matrix\footnote{Actually, for $d=4$ any spacelike $\theta^{ij}$ can be brought to this form by a rotation that sends the pseudovector $v_i=\epsilon_{ijk}\theta^{jk}$ to the third axis. Here $\epsilon_{ijk}$ is the totally antisymmetric tensor of the three spatial dimensions.}:
\begin{equation}
\theta^{\mu\nu}\equiv
\theta^{\mu\nu}_{1}=-\frac{1}{\Lambda_{\rm NC}^2}
\begin{pmatrix}
0&0&0&0\\
0&0&-1&0\\
0&1&0&0\\
0&0&0&0
\end{pmatrix}.
\label{nondegen1}
\end{equation}
So, for the $d=4$-dimensional NC field theory and the subspace of dimension $n=2$, noticing that $(\theta\theta)^{\mu\nu}$ is now proportional to a projector onto a two-dimensional subspace, we find
\begin{eqnarray}
-2\theta^2(\theta p)^\mu(\theta p)^\nu&=&2\Big[(\theta\theta)^{\mu\nu}(\theta p)^2+(\theta\theta p)^\mu(\theta\theta p)^\nu\Big]=(\theta p)^{\{\mu} (\theta\theta\theta p)^{\nu\}},
\label{theta1}\\
(\theta\theta\theta p)^\mu&=&-\theta^2 ({\theta p})^\mu, \:(\theta p)^2\tr\theta\theta=-2(\theta\theta p)^2=-2(\theta p)^2 \theta^2, \forall p;\; \theta^2=1/\Lambda_{\rm NC}^4,
\nonumber
\end{eqnarray}
with $\Lambda_{\rm NC}$ being the scale of noncommutativity.
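The relations \eqref{theta1} can be confirmed numerically in a few lines of {\tt NumPy}; the check below is our own aid, with the overall scale of $\theta^{\mu\nu}$ set to unity.
\begin{verbatim}
import numpy as np

th = 1.0   # overall scale of theta^{mu nu} (i.e. 1/Lambda_NC^2), set to 1
M = np.array([[0, 0, 0, 0],
              [0, 0, -1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
theta = -th * M                        # theta_1 of Eq. (nondegen1)
p = np.array([0.3, 1.1, -0.7, 0.5])    # arbitrary Euclidean momentum
tp = theta @ p
ttp = theta @ tp
tp2, ttp2 = tp @ tp, ttp @ ttp

# (theta theta theta p)^mu = -theta^2 (theta p)^mu
assert np.allclose(theta @ ttp, -th**2 * tp)
# (theta p)^2 tr(theta theta) = -2 (theta theta p)^2 = -2 (theta p)^2 theta^2
assert np.isclose(tp2 * np.trace(theta @ theta), -2 * ttp2)
assert np.isclose(ttp2, th**2 * tp2)
# -2 theta^2 (tp)^mu (tp)^nu = 2 [ (tt)^{mu nu} (tp)^2 + (ttp)^mu (ttp)^nu ]
assert np.allclose(-2 * th**2 * np.outer(tp, tp),
                   2 * ((theta @ theta) * tp2 + np.outer(ttp, ttp)))
\end{verbatim}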
In order to perform computations in this case, we need to use all of the above relations
in the general decomposition of the photon polarization tensor, which is the same for both the bubble and the tadpole contributions.
The general decomposition (\ref{4.1T}), in $n=2$ with the $\theta_1$ matrix, simplifies to
\begin{equation}
\begin{split}
\Pi^{\mu\nu}(p)\big|_{n=2}^{\theta_1}&=\frac{e^2}{(4\pi)^2}[(g^{\mu\nu}p^2 - p^\mu p^\nu)\Pi_1
\\&+(g^{\mu\nu}(\theta p)^2-(\theta\theta)^{\mu\nu}p^2
+ p^{\{\mu}(\theta\theta p)^{\nu\}})\Pi_3
+(\theta p)^\mu (\theta p)^\nu(\Pi_2-\theta^2\Pi_4-2\theta^2\Pi_5 )],
\end{split}
\label{Pi12}
\end{equation}
where (for general $\theta$ matrix) the coefficients are given in (\ref{4.6}) and/or (\ref{4.11}), respectively. \\
The UV part of (\ref{4.2}) obtained by using $\theta_1$ and/or (\ref{Pi12}) is then given below:
\begin{gather}
\begin{split}
\Pi^{\mu\nu}(p)\big|^{\theta_1}_{\rm UV}&=\frac{e^2}{48\pi^2}\bigg(\frac{2}{\epsilon}+
\ln\big(\mu^2 (\theta p)^2\big)\bigg)\bigg\{(g^{\mu\nu} p^2-p^\mu p^\nu)
\bigg[\big(3\kappa-1\big)^2-6\kappa^2\frac{p^2 \theta^2}{ (\theta p)^2}\bigg]
\\&
+\frac{1}{2}(g^{\mu\nu}(\theta p)^2-(\theta\theta)^{\mu\nu}p^2
+ p^{\{\mu}(\theta\theta p)^{\nu\}})\frac{ p^2}{(\theta p)^2}\bigg[11\kappa^2+2\kappa-1\bigg]
\\&
+\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2} p^2\bigg[
73\kappa^2-86\kappa+25 + \big( \kappa-1\big)^2\frac{p^2 \theta^2}{ (\theta p)^2}\bigg]\bigg\}.
\end{split}
\label{PiUV1}
\end{gather}
A simple inspection of the above result shows that there is no single $\kappa$ value which would simultaneously eliminate the IR and the UV plus logarithmic divergences; thus the elimination of the UV pathologies has to be treated more carefully. What is more, since in the UV regime we have $\Pi_i=B_i$ due to the absence of tadpole contributions, the elimination of pathologies goes beyond choices of the freedom parameters. However, a fine property is that the UV part of the polarization tensor (\ref{PiUV1}) behaves well enough in all limits $p\to 0, \; \theta \to 0$, taken separately and/or simultaneously. This is encouraging enough to consider that the UV plus logarithmic divergences in (\ref{PiUV1}) may be removed for any point $\kappa$ via a proper subtraction of linear combinations of dimensionless nonlocal counterterms, as noted in \cite{Horvat:2011bs},
\begin{equation}
{\cal B}_C=\xi\; \partial^2\frac{\tr\theta\theta}{(\theta \partial)^2}+\zeta\:\partial^2\frac{(\theta\theta \partial)^2}{(\theta \partial)^4},
\label{Counterterm}
\end{equation}
with $\xi,\zeta$ being free coefficients measuring the strengths of the two terms. These coefficients could be determined during the renormalization procedure, which we leave for the future as part of the NCGFT renormalization project.
\subsubsection{The subdimension $n=4$ and $\theta^{\mu\nu}_2$ matrix}
Second choice is to set $n=4$ and selects the following $\theta^{\mu\nu}_2$ matrix\footnote{This condition was used in the renormalizability studies of four-dimensional NCGFT without the SW map \cite{Blaschke:2009aw,Blaschke:2010ck,Gomis:2000zz}. Note also that this $\theta^{\mu\nu}_2$ is full rank and, thus, breaks in general the unitarity if one performs Wick rotation to the Minkowski space\-time. It contains also time-space noncommutativity which breaks causality.}
\cite{Horvat:2013rga}:
\begin{equation}
\theta^{\mu\nu}\equiv
\theta^{\mu\nu}_2=\frac{1}{\Lambda_{\rm NC}^2}
\begin{pmatrix}
0&-1&0&0\\
1&0&0&0\\
0&0&0&-1\\
0&0&1&0
\end{pmatrix}
=\frac{1}{\Lambda_{\rm NC}^2}
\begin{pmatrix}
{i\sigma_2}&0\\
0&{i\sigma_2}
\end{pmatrix}
\equiv(\sigma_2\otimes I_2)\,\frac{1}{\Lambda_{\rm NC}^2},
\label{nonudegen2}
\end{equation}
with $\sigma_2$ being the famous Pauli matrix. This choice, which we call nonunitary, induces useful relations in four-dimensional Euclidean space\-time:
\begin{eqnarray}
(\theta\theta)^{\mu\nu}=- g^{\mu\nu}\theta^2,\,(\theta\theta p)^\mu=-p^\mu\theta^2, \,(\theta\theta\theta p)^{\mu}=-({\theta p})^\mu\theta^2,\,
(\theta p)^2\tr\theta\theta=-4(\theta\theta p)^2=-4\theta^4 p^2, \forall p; \;\theta^2=1/\Lambda_{\rm NC}^4.
\label{theta2}
\end{eqnarray}
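The relations \eqref{theta2} admit the same kind of elementary numerical check (again ours, with the overall scale set to unity):
\begin{verbatim}
import numpy as np

th = 1.0                                  # overall scale, 1/Lambda_NC^2
blk = np.array([[0, -1], [1, 0]], float)  # 2x2 block of Eq. (nonudegen2)
theta = th * np.kron(np.eye(2), blk)      # block-diagonal theta_2
g = np.eye(4)                             # Euclidean metric
p = np.array([0.3, 1.1, -0.7, 0.5])
tp = theta @ p

assert np.allclose(theta @ theta, -th**2 * g)          # (tt) = -g theta^2
assert np.allclose(theta @ tp, -th**2 * p)             # (tt p) = -theta^2 p
assert np.allclose(theta @ (theta @ tp), -th**2 * tp)  # (ttt p) = -theta^2 (tp)
assert np.isclose((tp @ tp) * np.trace(theta @ theta),
                  -4 * th**4 * (p @ p))                # = -4 theta^4 p^2
\end{verbatim}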
The general tensor structure (\ref{4.1T}) then simplifies into two parts:
\begin{equation}
\begin{split}
\Pi^{\mu\nu}(p)|^{\theta_2}_{n=4}=\frac{e^2}{(4\pi)^2}\bigg[&\Big(g^{\mu\nu}p^2 - p^\mu p^\nu\Big)\Big(\Pi_1+2\frac{(\theta p)^2}{p^2}\Pi_3 -\frac{(\theta p)^4}{p^4}\Pi_4\Big)
+(\theta p)^\mu (\theta p)^\nu\Big(\Pi_2-2\frac{(\theta p)^2}{p^2}\Pi_5 \Big)\bigg].
\end{split}
\label{Pi24}
\end{equation}
The coefficients $\Pi_i$'s are given in (\ref{4.2}), (\ref{4.6}) and (\ref{4.11}), respectively.\\
The UV ($1/\epsilon$) divergent and the UV/IR mixing logarithmic terms in the total photon two-point function come entirely from the bubble diagram, as one can easily verify by combining Eqs. (\ref{4.2}), (\ref{4.5UV}), and (\ref{Pi24}),
\begin{gather}
\begin{split}
\Pi^{\mu\nu}(p)|^{\theta_2}_{\rm UV}&=\frac{e^2 p^2}{24\pi^2}\bigg(\frac{2}{\epsilon}+
\ln\big(\mu^2 (\theta p)^2\big)\bigg)\bigg[\Big(g^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\Big)\big(3\kappa-1\big)^2
+\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\frac{3}{2}\big(3\kappa-1\big)\big(9\kappa-7\big)\bigg].
\end{split}
\label{PiUV2}
\end{gather}
The above two divergences are certainly removable by selecting the special point $\kappa=1/3$~\cite{Horvat:2013rga,Trampetic:2014dea}. However, the UV plus logarithmic divergences from the bubble diagram are retained for any other $\kappa$ point in the $n=4$, $\theta_2$ case, and can only be removed by a proper subtraction of linear combinations of nonlocal counterterms similar to (\ref{Counterterm}).
\subsubsection{Photon polarization for the $\theta_2$ matrix and $\kappa=1/3$ deformation}
In this case, because of the elimination of the UV divergences (\ref{PiUV2}), we next analyze the four-dimensional IR divergences
for $\theta^{\mu\nu}=\theta_{2}^{\mu\nu}$ satisfying $(\theta\theta)^{\mu\nu}=-g^{\mu\nu} \theta^2$ in four-dimensional Euclidean spacetime. The three-photon-bubble plus four-photon-tadpole one-loop contributions to the photon two-point function, for the special choice $\theta_2$ and $\kappa=1/3$, reduce to two IR (UV/IR mixing) terms from the photon tadpole diagram. This is obtained by using the decomposition (\ref{Pi24}) with (\ref{Tik}) and (\ref{Tik'}), respectively. Adding the two finite terms from the bubble diagram \cite{Horvat:2013rga},
\begin{equation}
\Pi_{\rm(I,II)}^{\mu\nu}(p)|^{\theta_2}_{\kappa=1/3}=\frac{e^2p^2}{2\pi^2}\bigg\{\frac{7}{27}\Big(g^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\Big)-\frac{1}{2}\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\bigg\},
\label{4.16}
\end{equation}
we finally obtain the total photon polarization tensor in both models as
\begin{gather}
\begin{split}
\Pi_{\rm (I)}^{\mu\nu}(p)|^{\theta_2,\kappa=1/3}_{\rm IR}&=\frac{e^2p^2}{2\pi^2}\bigg\{\frac{7}{27}\Big(g^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\Big)\Big(1+\frac{4}{7}\frac{1}{p^2(\theta p)^{2}}\Big)
\\
&-\frac{1}{2}\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\Big(1+\frac{4}{27}\frac{1}{p^2(\theta p)^2}\big(31-72\kappa_1-36\kappa_2+18\kappa_3-9\kappa_4\big)\Big)\bigg\},
\\
\Pi_{\rm (II)}^{\mu\nu}(p)|^{\theta_2,\kappa=1/3}_{\rm IR}&=\frac{e^2p^2}{2\pi^2}\bigg\{\frac{7}{27}\Big(g^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\Big)\Big(1+\frac{4}{7}\frac{1}{p^2(\theta p)^{2}}\Big)
\\
&-\frac{1}{2}\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\Big(1+\frac{4}{27}\frac{1}{p^2(\theta p)^2}\big(22-36\kappa'_1-72\kappa'_2+18\kappa'_3\big)\Big)\bigg\}.
\end{split}
\label{4.18}
\end{gather}
The ``additional'' term $(\theta p)^{\mu}(\theta p)^{\nu}/(\theta p)^4$ receives multiple corrections from the tadpole diagram and can easily be removed by shifting the second-order gauge freedom parameters. The usual tensor structure $(g^{\mu\nu}p^2-p^{\mu}p^{\nu})$, however, depends solely on the first-order gauge freedom parameter $\kappa$, in such a way that no real $\kappa$ value can set it to zero in models I and II.\\
\noindent
At the end of this section, let us summarize what we have learned about the photon polarization tensor pathologies within models (I) and (II):\\
-- Summing the tadpole and bubble diagrams not only formally completes the one-loop quantum corrections to the photon polarization tensor, but also appears to be crucial for the elimination of the quadratic IR divergences, $\forall \theta$.\\
-- The quadratic IR divergences can be removed for arbitrary $\theta$, for $\kappa=\kappa_1=\kappa_2=\kappa'_2=1$ plus appropriate choices of the remaining freedom parameters, in both models I and II.\\
-- Unfortunately, the choice $\kappa=1$ does not remove the UV divergences for arbitrary $\theta$.\\
-- The choice $\kappa=1/3$ with fixed $\theta=\theta_2$ removes the UV but not the IR divergences, for both models I and II.\\
-- We conclude that the simultaneous elimination of all divergences from the tadpole plus bubble contributions requires a new, extended model/procedure of deformation freedom parameter selection.
\section{Extending SW map-based models}
\subsection{Generalized SW map inspired four-photon self-interaction model}
Our experience in the last two sections suggests that, in order to efficiently eliminate
the pathological divergences in the deformed one-loop photon polarization tensor, it is necessary
to modify the freedom parameter dependence at least in the four-photon interactions. Here we
propose such a modification of the SW mapped model I,\footnote{The same could also be done
with the model II action (\ref{2.16}), but we choose to discuss only one case for simplicity.}
which we call model (III), to fulfill this requirement. We start by realizing
that the $\kappa^2$ and $\kappa$ terms in (\ref{2.8}) and/or (\ref{2.10}) and/or (\ref{2.15}) may
be varied independently due to the manifestly gauge invariant structure of both terms;
by this means we can assign independent gauge-symmetry (variation) freedom parameters to all five gauge invariant structures starting at the $\theta^2$ order, as found in \eqref{thetasquare}. Proceeding with this idea further, we get the following action:
\begin{equation}
\begin{split}
S^{e^2}_{\eta_1,\eta_2,\eta_3,\eta_4,\eta_5}=&-\frac{e^2}{4}\theta^{ij}\theta^{kl}\int\,\eta_1(f_{\mu i}\star_2 f_{\nu j})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})-\eta_2(f_{ij}\star_2 f_{\mu\nu})(f^\mu_{\;\,\; k}\star_2 f^\nu_{\;\,\; l})
\\&+2\eta_3 f^{\mu\nu}[f_{\mu i}f_{\nu k}f_{jl}]_{\star_{3'}}-\frac{\eta_4}{4}f^{\mu\nu}\left[f_{\mu\nu}f_{ik}f_{jl}\right]_{\star_{3'}}
+\frac{\eta_5}{8}\left(f^{\mu\nu}\star_2 f_{ij}\right)\left(f_{kl}\star_2 f_{\mu\nu}\right)
\\&+2 f^{\mu\nu}(a_i\star_2\partial_j(f_{\mu k}\star_2 f_{\nu l})-[f_{\mu k} a_i\partial_j f_{\nu l}]_{\star_{3'}}-[a_i f_{\mu k}\partial_j f_{\nu l}]_{\star_{3'}})+\frac{1}{2}\theta^{pq}f^{\mu\nu}\left[\partial_i f_{jk} f_{lp}\partial_q f_{\mu\nu}\right]_{\mathcal M_{\rm (I)}}.
\end{split}
\label{7.1}
\end{equation}
Comparing \eqref{7.1} with \eqref{2.15} we see the following replacement rules: $\kappa^2:=\eta_1, \;\kappa:=\eta_2, \;\kappa_1:=\eta_3, \;\kappa_2:=1,\; \kappa_3:=\eta_4, \;\kappa_4:=\eta_5$. Here we did not promote $\kappa_2$ to an independent parameter, because its corresponding structure starts at the $\theta^4$ order.
The interaction (\ref{7.1}) leads to the following result for the tadpole diagram in the $\eta$-setting:
\begin{equation}
\begin{split}
T^{\mu\nu}(p)_{\eta}&=\frac{e^2}{(4\pi)^2}\{[g^{\mu\nu}p^2-p^\mu p^\nu]T_1(p)_{\eta}
+(\theta p)^\mu (\theta p)^\nu T_2(p)_{\eta}
\\&+[g^{\mu\nu}(\theta p)^2-(\theta\theta)^{\mu\nu}p^2
+ p^{\{\mu}(\theta\theta p)^{\nu\}}]T_3(p)_{\eta}
\\&+[(\theta\theta)^{\mu\nu}(\theta p)^2+(\theta\theta p)^\mu(\theta\theta p)^\nu]T_4(p)_{\eta}
+ (\theta p)^{\{\mu} (\theta\theta\theta p)^{\nu\}} T_5(p)_{\eta}\},
\end{split}
\label{4.1Teta}
\end{equation}
with $T_i(p)_{\eta}$ being
\begin{equation}
\begin{split}
T_1(p)_{\eta}&=\frac{4}{3}\left(\frac{\tr\theta\theta}{(\theta p)^4}+4\frac{(\theta\theta p)^2}{(\theta p)^6}\right)\big(\eta_5-1\big),
\\
T_2(p)_{\eta}&=\frac{16}{3}\frac{1}{(\theta p)^4} \,\big(2\eta_1-4\eta_2+6\eta_3-2\eta_4+\eta_5+1\big),
\\
T_3(p)_{\eta}&=\frac{16}{3}\frac{1}{(\theta p)^4}\big(2\eta_1-2\eta_2+\eta_3+1\big),
\\
T_4(p)_{\eta}&=\frac{32}{3}\frac{p^2}{(\theta p)^6}\big(\eta_1-2\eta_2+\eta_3+1\big),
\\
T_5(p)_{\eta}&=\frac{16}{3}\frac{p^2}{(\theta p)^6}\big(2\eta_2-\eta_3-1\big).
\end{split}
\label{Tiketa}
\end{equation}
\subsection{Bubble plus tadpole results in model (III)}
We now consider the divergences from bubble plus tadpole diagrams in model III for both $\theta_1$ and $\theta_2$ choices. As we will see below, the quadratic IR divergence cancellation can be achieved for arbitrary $\kappa$ and both choices of $\theta$, while UV cancellation is only available for $\theta_2$.
\subsubsection{The IR part for subdimension $n=2$ and the $\theta_1$ matrix in the $\eta$ setting}
Substituting \eqref{Tiketa} into \eqref{4.5IR} and then into \eqref{Pi12}, we obtain the following conditions for the quadratic IR divergence cancellation:
\begin{equation}
\begin{split}
2(1-\kappa)^2-(\eta_5-1)=0,
\\
\big(1-3\kappa\big)\big(3-\kappa\big)-2(2\eta_1-4\eta_2+6\eta_3-2\eta_4+\eta_5+1)=0,
\\
(1-10\kappa+17\kappa^2)-4(2\eta_1-2\eta_2+\eta_3+1)=0,
\\
(3\kappa^2-2\kappa+1)-2\eta_1=0,
\end{split}
\end{equation}
with solutions
\begin{equation}
\begin{split}
\eta_1=&\frac{1}{2}\big(3\kappa^2-2\kappa+1\big),\;
\eta_3=2\eta_2+\frac{1}{4}\big(5\kappa^2-2\kappa-7\big),
\\
\eta_4=&4\eta_2+\frac{1}{2}\big(11\kappa^2-4\kappa-7\big),\;
\eta_5=1+2\big(\kappa-1\big)^2,
\end{split}
\label{7.9}
\end{equation}
which produce an IR-finite photon polarization tensor for arbitrary $\kappa$, as expected.
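Substituting \eqref{7.9} back into the four cancellation conditions provides an immediate consistency check, recorded here as a short {\tt SymPy} snippet of ours:
\begin{verbatim}
import sympy as sp

k, e2 = sp.symbols("kappa eta2")
eta1 = (3*k**2 - 2*k + 1) / 2                   # Eq. (7.9)
eta3 = 2*e2 + (5*k**2 - 2*k - 7) / 4
eta4 = 4*e2 + (11*k**2 - 4*k - 7) / 2
eta5 = 1 + 2*(k - 1)**2

conds = [
    2*(1 - k)**2 - (eta5 - 1),
    (1 - 3*k)*(3 - k) - 2*(2*eta1 - 4*e2 + 6*eta3 - 2*eta4 + eta5 + 1),
    (1 - 10*k + 17*k**2) - 4*(2*eta1 - 2*e2 + eta3 + 1),
    (3*k**2 - 2*k + 1) - 2*eta1,
]
# all four conditions hold identically in kappa and eta2
assert all(sp.expand(c) == 0 for c in conds)
\end{verbatim}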
\subsubsection{Subdimension $n=4$ represented by the $\theta_2$ matrix and for the $\eta$ setting}
For NCQFTs in $d$ dimensions equal to the subspace dimension $n$, i.e., in four-dimensional Euclidean spacetime with
$n=4$, and choosing the $\theta_2$ deformation (\ref{nonudegen2}), we get the following tadpole
\begin{equation}
T^{\mu\nu}(p)_{\eta}|^{n=4,\theta_2}_{\rm IR}=\frac{e^2}{3\pi^2}\frac{1}{(\theta p)^2}\bigg[\Big(g^{\mu\nu} - \frac{p^\mu p^\nu}{p^2}\Big)\,2\eta_1
+\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\big(2\eta_1-8\eta_2+8\eta_3-2\eta_4+\eta_5+3\big)\bigg].
\label{7.4}
\end{equation}
Using the above Eq. (\ref{7.4}), together with (\ref{4.1T}) and (\ref{4.2}) for $\theta_{2}$,
we obtain the following bubble plus tadpole contributions to the photon polarization tensor in the IR regime:
\begin{gather}
\begin{split}
\Pi^{\mu\nu}(p)|^{\theta_2,\kappa,\eta_i}_{\rm IR}=&\frac{e^2}{6\pi^2}\frac{1}{(\theta p)^{2}}\bigg[\Big(g^{\mu\nu}-\frac{p^\mu p^\nu}{p^2}\Big)\Big(-3\kappa^2-2\kappa+1+4\eta_1\Big)
\\
&-\frac{(\theta p)^\mu (\theta p)^\nu}{(\theta p)^2}\Big(15\kappa^2-26\kappa+7-4\eta_1+16\eta_2-16\eta_3+4\eta_4-2\eta_5-6\Big)\bigg].
\end{split}
\label{4.17}
\end{gather}
Here, due to the $\eta$ settings, the quadratic IR tadpole contributions are clearly disentangled from the well-behaved rest of the contributions. Since the number of free parameters exceeds the number of tensor structures in the subspace $n=4$, the quadratic IR divergence cancellation surely exists for arbitrary $\kappa$.
Furthermore, an interesting choice,
\begin{equation}
\kappa=\frac{1}{3},\; \eta_1=0,\; \eta_2=\eta_3, \;2\eta_4-\eta_5=3,
\label{ketai3}
\end{equation}
eliminates not only the IR divergence but, simultaneously due to (\ref{PiUV2}), also the hard UV and logarithmic divergences.
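That this choice works can be checked in one step: inserting \eqref{ketai3} into the brackets of \eqref{4.17} and into the $(3\kappa-1)$ factors of \eqref{PiUV2} makes all four coefficients vanish. In {\tt SymPy} (our bookkeeping aid):
\begin{verbatim}
import sympy as sp

k, e2, e3, e4, e5 = sp.symbols("kappa eta2 eta3 eta4 eta5")
e1 = 0                                                 # eta_1 = 0
choice = {k: sp.Rational(1, 3), e3: e2, e5: 2*e4 - 3}  # Eq. (ketai3)

ir_g  = -3*k**2 - 2*k + 1 + 4*e1                       # brackets of Eq. (4.17)
ir_tt = (15*k**2 - 26*k + 7 - 4*e1
         + 16*e2 - 16*e3 + 4*e4 - 2*e5 - 6)
uv_g  = (3*k - 1)**2                                   # factors of Eq. (PiUV2)
uv_tt = sp.Rational(3, 2) * (3*k - 1) * (9*k - 7)

assert all(sp.simplify(x.subs(choice)) == 0
           for x in (ir_g, ir_tt, uv_g, uv_tt))
\end{verbatim}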
\section{Discussion}
Following recent progress in the $e^3$-order $\theta$-exact Seiberg-Witten map expansions \cite{Martin:2012aw,Trampetic:2015zma,Martin:2015nna}, in this paper the resulting four-photon interaction is presented through the construction of three different models (I,II,III). We study their impact on the perturbative quantization of the SW map deformed $\rm U(1)$ gauge field theory via the corresponding one-loop, four-photon-tadpole contributions to the (divergent part of the) photon polarization tensor. Note also that the same term should contribute to NC phenomenology at extreme energies, for example via tree-level NCQED contributions to the $\gamma\gamma\to\gamma\gamma$ scattering process~\cite{Hewett:2000zp}.
The quadratic IR divergence in NCQFT on Moyal space was first found, in the original version of UV/IR mixing, in the tadpole integral of the NC $\phi^4$ QFT on Moyal space~\cite{Matusis:2000jf}, where part of the originally quadratically UV divergent diagram gets UV regularized by the NC phase factor and becomes quadratically IR divergent instead. The UV divergence in the commutative $\rm U(1)$ gauge theory, on the other hand, is logarithmic under the dimensional regularization procedure, yet the IR divergence (stemming from UV/IR mixing) in the NC $\rm U_\star(1)$ gauge theory still starts at the quadratic order~\cite{Hayakawa:1999yt,Hayakawa:1999zf,Minwalla:1999px}. It was shown in our previous work on the bubble diagram~\cite{Horvat:2013rga}, with the first-order gauge freedom parameter $\kappa_g$ $(=\kappa^{-1})$, that the hard UV ($1/\epsilon$) divergence, the commutative logarithmic divergence $\ln (\mu^2/p^2)$ and the NC logarithmic divergence $\ln p^2(\theta p)^2$ share the same dependence on $\kappa_g$, while the quadratic IR divergence bears a completely different one. Thus the quadratic IR divergence may indeed have an origin separate from the ``UV/IR mixing,'' which is logarithmic, as it should be.
By evaluating the photon one-loop tadpole diagram of Fig.~\ref{fig:photontadpole} in the first two deformed $\rm U(1)$ gauge theory models, based on two distinct gauge field strengths~\cite{Trampetic:2015zma}, using NC-extended dimensional regularization techniques, we show that the NC massless tadpole integrals produce solely a quadratic IR divergence, i.e., there are no UV divergent or finite contributions from the tadpole. From this perspective, the result of this paper suggests that the quadratic IR divergence could be simply a tadpole effect, especially because it can be removed by a suitable gauge freedom selection. The two models give different tadpole contributions for general values of the SW/gauge freedom deformation parameters $\kappa$, $\kappa_i$'s and $\kappa'_i$'s, yet they can be made equal by employing certain simple algebraic relations. In particular, the tadpole integrals of the two models are equal to each other for $\kappa_i=\kappa'_i=1, \forall i$. In fact this is not surprising, as the limited number of momenta involved in the tadpole evaluation restricts the possible values of the nonlocal factors to either unity or $\sin^2\frac{p\theta k}{2}/\big(\frac{p\theta k}{2}\big)^2$, thus severely constraining the possible combinations in the tensor structure(s).
After summing the bubble and tadpole diagrams in the above three models, we study the possible choices of freedom parameters for divergence elimination. These choices are, expectedly, different in the so-called $\kappa$-setting models (I, II) and in the third, $\eta$-setting, model (III). Of importance is that $\kappa=1$ stands out as the unique quadratic IR divergence cancellation point for arbitrary $\theta$ and for both models (I, II), while the UV and log/mixing divergences can only be removed at a different point, $\kappa=\frac{1}{3}$, with the special choice $\theta=\theta_2$. For the third, $\eta$-setting, model the quadratic IR divergence removal is available for arbitrary $\kappa$ and both $\theta_{1,2}$ choices, as expected. We thus manage to obtain the full divergence cancellation at the point $(\kappa,\; \eta_1)=(\frac{1}{3},\;0)$ for the $\theta=\theta_2$ matrix, with the help of the extra freedom parameter relations $\eta_2=\eta_3$, $2\eta_4-\eta_5=3$ in the four-photon interaction (\ref{ketai3}).
From the $\theta$-dependent photon polarization tensor in the U(1) NCQFT with
the special choice $\theta_2$, we raise one more interesting question:
Does our deformation parameter set $(\kappa,\eta_i)$ run between the IR and UV divergence-eliminating points?
To be more precise, let us take the liberty of assuming that there is indeed a possibility
that the deformation parameters $\kappa$ and $\eta_i$ are energy/momentum-dependent
quantities.\footnote{That would not be strange at all, since our deformation parameters
are quantities similar to the coupling constants, sitting in the actions, of any QFT.} As
a next point, after inspecting (\ref{PiUV2}), (\ref{4.17}) and (\ref{ketai3}) we notice that, starting
with the photon polarization behavior in the deep IR regime and moving towards
the high UV regime, our divergence-eliminating freedom parameters $\kappa,\eta_i$
decrease: $\kappa=1\,\to\,\frac{1}{3},\; \eta_1=1\,\to\,0, \;2\eta_4-\eta_5=7\,\to\,3$, while $\eta_2-\eta_3=0$. Taking into account the results of the $\theta^1$ models \cite{Buric:2006wm,Latas:2007eu}, where it was explicitly shown for the NC SU(N) model \cite{Latas:2007eu} that the $\theta^1$ parameter runs with a negative $\beta$ function, we conjecture that the $\beta$ functions associated with the parameters $\kappa$, $\eta_1$, and $(2\eta_4-\eta_5)$ could be negative too. The $(\eta_2-\eta_3)$ difference should have a vanishing $\beta$ function. However, a precise computation of the above $\beta$ functions goes well beyond the scope of this paper; such considerations are therefore left for the future.
\section{Conclusion}
We introduce three different types of nonlocal four-photon interactions in a full-fledged $\theta$-exact deformed U(1) gauge field theory. The first two actions (I,II) are deformed by the two distinct $\theta$-exact gauge field strength Seiberg-Witten (SW) maps, while the third one (III) involves an SW-map-inspired gauge invariance deformation in a more general way.
The four-photon interaction induces the tadpole diagram, which results in a pure quadratic IR divergence. Based on the computation of that tadpole, we conjecture that the general origin of the quadratic IR divergences, including those from the bubble diagram, lies in tadpolelike integrals, which, although highly UV divergent, still vanish under dimensional regularization in the commutative theory~\cite{Leibbrandt:1975dj}.
In conclusion, by performing the sum over the one-loop bubble and tadpole diagrams in all
three models, we obtain, for arbitrary noncommutative tensors $\theta^{\mu\nu}$ and with
suitable choices of the freedom parameters, a photon polarization tensor free of quadratic IR
divergences. A simultaneous cancellation of the UV, quadratic IR, and logarithmic divergences
occurs only in model III, by employing the special $\theta=\theta_2$
matrix and the deformation parameter set $\kappa=\frac{1}{3},\; \eta_1=0,\; \eta_2=\eta_3, \;2\eta_4-\eta_5=3$. The most important fact is definitely the huge freedom due to the SW mapping and gauge invariance, which at the end of the day allows the elimination of all pathologies in the one-loop two-point function. This is quite inspiring, although there is still a very long way to go before one could speculate about a renormalizable U(1) theory on Moyal space. The IR improvement achieved so far seems to suggest that there should be some way
out within the SW map approach; we therefore believe that further investigation along this
line could help in reaching an accurate formulation with the SW map and
provide some answers about the renormalizability of NCQFT in general.
Besides the splitting of the polarization states driven by the breaking of Lorentz invariance caused by $\theta^{\mu \nu}$, the persistence of a finite additional term/effect (\ref{4.16}) of quantum-gravity origin may also affect the low-momentum end of the spectrum in the photon dispersion relation. The serious possibility of eliminating all pathological effects in the photon polarization tensor within NCQFT is our main motivation to continue searching for the noncommutative-geometry/quantum-gravity-inspired UV/IR mixing phenomena and for connections between NCQFT and holography, black hole horizon physics, etc. \cite{Horvat:2010km,Horvat:2011bs,Horvat:2011qg,Aschieri:2012in,Horvat:2010sr}.
\section{Acknowledgments}
The work by R.H. and J.Y. has been fully supported by the Croatian Science Foundation under Project No. IP-2014-09-9582. The work of J.T. is conducted under the European Commission and the Croatian Ministry of Science, Education and Sports Co-Financing Agreement No. 291823. In particular, J.T. acknowledges project financing by the Marie Curie FP7-PEOPLE-2011-COFUND program NEWFELPRO: Grant Agreement No. 69.
J.T. would also like to acknowledge the partial support of the Alexander von Humboldt Foundation (KRO 1028995 STP), and Max-Planck-Institute for Physics, and W. Hollik for hospitality.
J.Y. would like to acknowledge the Alexander von Humboldt Foundation and COST Action MP1405 for partially supporting his participation in the Corfu Summer Institute 2015 as well as the organizers of the Corfu Summer Institute 2015 for hospitality.
We would like to thank P. Aschieri, D. Blaschke, M. Dimitrijevi\'c \'Ciri\'c, J. Erdmenger, M. Hanada, W. Hollik, A. Ilakovac, T. Juri\'c, C. P. Martin, P. Schupp, and R. Szabo for fruitful discussions. A great deal of the computation was done by using {\sc Mathematica}~8.0 \cite{mathematica} plus the tensor algebra package xACT~\cite{xAct}. Special thanks to A. Ilakovac and D. Kekez for the computer software and hardware support.
\section{Introduction\label{s:intro}}
Many theoretical models of physics beyond the standard model (SM)~\cite{SM1,SM2,SM3} predict strong production of particles decaying into high-multiplicity final states, i.e., characterized by three or more energetic jets, leptons, or photons. Among these models are supersymmetry~\cite{SUSY1,SUSY2,SUSY3,SUSY4,SUSY5,SUSY6,SUSY7,SUSY8}, with or without $R$-parity violation~\cite{R-parity}, and models with low-scale quantum gravity~\cite{add1,add2,add3,RS1,RS2}, strong dynamics, or other nonperturbative physics phenomena. While the final states predicted in these models differ significantly in the type of particles produced, their multiplicity, and the transverse momentum imbalance, they share the common feature of a large number of energetic objects (jets, leptons, and/or photons) in the final state. The search described in this paper targets these models of beyond-the-SM (BSM) physics by looking for final states of various inclusive multiplicities featuring energetic objects. Furthermore, since such final states can be used to test a large variety of models, we provide model-independent exclusions on hypothetical signal cross sections. Considering concrete examples of such models, we interpret the results of the search explicitly in models with microscopic semiclassical black holes (BHs) and string balls (SBs), as well as in models with electroweak (EW) sphalerons. These examples are discussed in detail in the rest of this section.
\subsection{Microscopic black holes\label{s:introBH}}
In our universe, gravity is the weakest of all known forces. Indeed, the Newton constant, ${\sim}10^{-38}\GeV^{-2}$, which governs the strength of gravity, is much smaller than the Fermi constant, ${\sim}10^{-4}\GeV^{-2}$, which characterizes the strength of EW interactions. Consequently, the Planck scale $\ensuremath{{M_\mathrm{Pl}}}\xspace \sim 10^{19}\GeV$, \ie, the energy at which gravity is expected to become strong, is 17 orders of magnitude higher than the EW scale of ${\sim}100\GeV$. With the discovery of the Higgs boson~\cite{Higgs1,Higgs2,Higgs3} with a mass~\cite{H-mass,H-mass-new} at the EW scale, the large difference between the two scales poses what is known as the hierarchy problem~\cite{hierarchy}. This is because in the SM, the Higgs boson mass is not protected against quadratically divergent quantum corrections and---in the absence of fine tuning---is expected to be naturally at the largest energy scale of the theory: the Planck scale. A number of theoretical models have been proposed that attempt to solve the hierarchy problem, such as supersymmetry, technicolor~\cite{TC}, and, more recently, theoretical frameworks based on extra dimensions in space: the Arkani-Hamed, Dimopoulos, and Dvali (ADD) model~\cite{add1,add2,add3} and the Randall--Sundrum model~\cite{RS1,RS2}.
In this paper, we look for the manifestation of the ADD model that postulates the existence of $\ensuremath{{n_\mathrm{ED}}}\xspace \ge 2$ ``large'' (compared to the inverse of the EW energy scale) extra spatial dimensions, compactified on a sphere or a torus, in which only gravity can propagate. This framework allows one to elude the hierarchy problem by explaining the apparent weakness of gravity in the three-dimensional space via the suppression of the fundamentally strong gravitational interaction by the large volume of the extra space. As a result, the fundamental Planck scale, \MD, in $3+{\ensuremath{{n_\mathrm{ED}}}\xspace}$ dimensions is related to the apparent Planck scale in three dimensions via Gauss's law as: $\ensuremath{{M_\mathrm{Pl}}}\xspace^2 \sim \MD^{\ensuremath{{n_\mathrm{ED}}}\xspace+2}R^\ensuremath{{n_\mathrm{ED}}}\xspace$, where $R$ is the radius of the extra dimensions. Since \MD could be as low as a few TeV, \ie, relatively close to the EW scale, the hierarchy problem would be alleviated.
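As a numerical illustration of this relation (not part of the analysis itself), the following Python sketch inverts Gauss's law to estimate the size of the extra dimensions for a given \MD and \ensuremath{{n_\mathrm{ED}}}\xspace, ignoring factors of order unity from the compactification geometry:
\begin{verbatim}
# Illustrative sketch: invert M_Pl^2 ~ M_D^(n+2) R^n to estimate the
# radius R of the extra dimensions (natural units; O(1) factors ignored).
HBARC_M = 1.9733e-16      # 1 GeV^-1 expressed in meters
M_PL = 1.0e19             # apparent four-dimensional Planck scale (GeV)

def extra_dimension_radius_m(m_d_gev, n_ed):
    r_in_inverse_gev = (M_PL**2 / m_d_gev**(n_ed + 2))**(1.0 / n_ed)
    return r_in_inverse_gev * HBARC_M

for n_ed in (2, 4, 6):
    print(n_ed, extra_dimension_radius_m(4000.0, n_ed))  # M_D = 4 TeV
\end{verbatim}
For $\MD$ of a few TeV, this gives radii ranging from a fraction of a millimeter for $\ensuremath{{n_\mathrm{ED}}}\xspace = 2$ down to femtometer scales for $\ensuremath{{n_\mathrm{ED}}}\xspace = 6$.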
At high-energy colliders, one of the possible manifestations of the ADD model is the formation of microscopic BHs~\cite{dl,gt} with a production cross section proportional to the squared Schwarzschild radius, given as:
\begin{linenomath}
\begin{equation*}
\ensuremath{{R_\mathrm{S}}}\xspace = \frac{1}{\sqrt{\pi}\MD}\left[\frac{\ensuremath{{M_\mathrm{BH}}}\xspace}{\MD}\left(\frac{8\Gamma(\frac{\ensuremath{{n_\mathrm{ED}}}\xspace+3}{2})}{\ensuremath{{n_\mathrm{ED}}}\xspace+2}\right)\right]^{\frac{1}{\ensuremath{{n_\mathrm{ED}}}\xspace+1}},
\end{equation*}
\end{linenomath}
where $\Gamma$ is the gamma function and \ensuremath{{M_\mathrm{BH}}}\xspace is the mass of the BH. In the simplest production scenario, the cross section is given by the area of a disk of radius \ensuremath{{R_\mathrm{S}}}\xspace,
\ie, $\sigma \approx \pi \ensuremath{{R_\mathrm{S}}}\xspace^2$~\cite{dl,gt}. In more complicated production scenarios, \eg, a scenario with energy loss during the formation of the BH horizon, the cross section is modified from this ``black disk'' approximation by a factor of order one~\cite{gt}.
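As an illustration, the parton-level cross section above can be evaluated directly; the following Python sketch (illustrative only, with no convolution with parton distribution functions) computes $\ensuremath{{R_\mathrm{S}}}\xspace$ and $\sigma \approx \pi \ensuremath{{R_\mathrm{S}}}\xspace^2$ for an example parameter point:
\begin{verbatim}
# Illustrative parton-level evaluation of R_S and sigma = pi R_S^2.
from math import pi, gamma, sqrt

GEV2_TO_PB = 3.894e8   # 1 GeV^-2 in picobarns

def schwarzschild_radius(m_bh, m_d, n_ed):
    """R_S in GeV^-1 for BH mass m_bh and fundamental scale m_d (GeV)."""
    bracket = (m_bh / m_d) * 8.0 * gamma((n_ed + 3) / 2.0) / (n_ed + 2)
    return bracket**(1.0 / (n_ed + 1)) / (sqrt(pi) * m_d)

def black_disk_xsec_pb(m_bh, m_d, n_ed):
    return pi * schwarzschild_radius(m_bh, m_d, n_ed)**2 * GEV2_TO_PB

print(black_disk_xsec_pb(8000.0, 4000.0, 6))  # M_BH = 8 TeV, M_D = 4 TeV
\end{verbatim}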
As BH production is a threshold phenomenon, we search for BHs above a certain minimum
mass $\ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace \ge \MD$. In the absence of signal, we will express the results of the search as limits on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace.
In the semiclassical case (strictly valid for $\ensuremath{{M_\mathrm{BH}}}\xspace \gg \MD$), the BH quickly evaporates via Hawking radiation~\cite{Hawking} into
a large number of energetic particles, such as gluons, quarks, leptons, photons, \etc The relative abundance of various particles produced in the process of BH evaporation is expected
to follow the number of degrees of freedom per particle in the SM. Thus, about 75\% of particles produced are expected to be
quarks and gluons, because they come in three or eight color combinations, respectively. A significant amount of missing
transverse momentum may also be produced in the process of BH evaporation via production
of neutrinos, which constitute ${\sim}5\%$ of the products of a semiclassical BH decay, \PW\ and \PZ\ boson decays, heavy-flavor quark decays, gravitons, or noninteracting stable BH remnants.
If the mass of a BH is close to \MD, it is expected to exhibit quantum features, which can modify the characteristics of its decay products. For example, quantum BHs~\cite{QBH1,QBH2,QBH3} are expected to decay before they thermalize, resulting in low-multiplicity final states. Another model of semiclassical BH precursors is the SB model~\cite{sb}, which predicts the formation of a long, jagged string excitation, folded into a ``ball''. The evaporation of an SB is similar to that of a semiclassical BH, except that it takes place at a fixed Hagedorn temperature~\cite{Hagedorn}, which depends only on the string scale \ensuremath{{M_\mathrm{S}}}\xspace. The formation of an SB occurs once the mass of the object exceeds \ensuremath{{\MS/\gs}}\xspace, where \ensuremath{{g_\mathrm{S}}}\xspace is the string coupling. As the mass of the SB grows, eventually it will transform into a semiclassical BH, once its mass exceeds $\ensuremath{{\MS/\gs^2}}\xspace > \MD$.
A number of searches for both semiclassical and quantum BHs, as well as for SBs, have been performed at the CERN LHC using the Run 1 ($\sqrt{s} = 7$ and 8\TeV) and Run 2 ($\sqrt{s} = 13\TeV$) data. An extensive review of Run 1 searches can be found in Ref.~\cite{GL-Springer}. The most recent Run 2 searches for semiclassical BHs and SBs were carried out by ATLAS~\cite{ATLAS-inclusive13,Aaboud:2016ewt} and CMS~\cite{CMSBH4} using 2015 data. Results of searches for quantum BHs in Run 2 based on 2015 and 2016 data can be found in Refs.~\cite{ATLAS:2015nsi,Aad:2015ywd,Sirunyan:2017ygf,Aaboud:2017yvp,Aaboud:2017nak,Sirunyan:2018zhy}. The most stringent limits on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace set by the Run 2 searches are 9.5 and 9.0\TeV for semiclassical and quantum BHs, respectively, for $\MD = 4\TeV$~\cite{ATLAS-inclusive13,CMSBH4}. The analogous limits on the minimum SB mass depend on the choice of the string scale and coupling and are in the 6.6--9\TeV range for the parameter choices considered in Refs.~\cite{ATLAS-inclusive13,CMSBH4}.
\subsection{Sphalerons\label{s:introSph}}
The Lagrangian of the EW sector of the SM has a possible nonperturbative solution,
which includes a vacuum transition known as a ``sphaleron''. This class of solutions to gauge field theories was first proposed in 1976 by 't~Hooft~\cite{tHooft:76}. The particular sphaleron solution of the SM was first described by Klinkhamer and Manton in 1984~\cite{Manton:84}. It is also
a critical piece of EW baryogenesis theory~\cite{Trodden:99}, which explains
the matter-antimatter asymmetry of the universe by such processes. The crucial feature of the sphaleron, which allows such claims to
be made, is the violation of baryon ($B$) and lepton ($L$) numbers, while preserving $B-L$. The possibility of sphaleron transitions at hadron colliders and related phenomenology has been discussed since the late 1980s~\cite{Ringwald:1989ee}.
Within the framework of perturbative SM physics, there are 12 globally conserved currents, one for each of the 12 fundamental fermions: $J^\mu = \overline{\psi}_L \gamma^\mu \psi_L$. An anomaly breaks this conservation, in particular $\partial_\mu J^\mu = [g^2/(16 \pi^2)] \mathrm{Tr}[F_{\mu\nu} \widetilde{F}^{\mu\nu}]$. This is because the integral of this term, known as a Chern--Simons (or winding) number $\ensuremath{{N_\mathrm{CS}}}\xspace$~\cite{CSnumber}, is nonzero. The anomaly exists for each fermion doublet. This means that the lepton number changes by $3\ensuremath{{N_\mathrm{CS}}}\xspace$, since each of the three leptons produced has an absolute lepton number of 1. The baryon number also changes by $3\ensuremath{{N_\mathrm{CS}}}\xspace$ because each quark has an absolute baryon number of $1/3$ and there are three colors and three generations of quarks produced. This results in two important relations, which are essential to the phenomenology of sphalerons: $\Delta(B+L) = 6 \ensuremath{{N_\mathrm{CS}}}\xspace$ and $\Delta(B-L) = 0$. The anomalous transition can only occur if there is enough energy to overcome the potential barrier in \ensuremath{{N_\mathrm{CS}}}\xspace, whose height is fixed by the values of the
EW couplings. Assuming the state at 125\GeV to be the SM Higgs boson, the precise measurement of its mass~\cite{H-mass,H-mass-new}
allowed the determination of these couplings, giving an estimate of the energy required for the sphaleron transitions of $\ensuremath{{E_\mathrm{sph}}}\xspace \approx 9\TeV$~\cite{Manton:84,TyeWong}.
While the $\ensuremath{{E_\mathrm{sph}}}\xspace$ threshold is within the reach of the LHC, it was originally thought that the sphaleron transition probability would be significantly suppressed by a large potential barrier. However, in a recent work~\cite{TyeWong} it has been suggested that the periodic nature of the Chern--Simons potential reduces this suppression at collision energies $\sqrt{\hat s} < \ensuremath{{E_\mathrm{sph}}}\xspace$, removing it completely for $\sqrt{\hat s} \ge \ensuremath{{E_\mathrm{sph}}}\xspace$. This argument opens up the possibility of observing an EW sphaleron transition in proton-proton (\Pp\Pp) collisions at the LHC via processes such as $\cPqu + \cPqu \to \Pep \Pgmp \Pgt^+ \PAQt\, \PAQt\, \PAQb\, \PAQc\, \PAQc\, \PAQs\, \PAQd + X$. Fundamentally, the $\ensuremath{{N_\mathrm{CS}}}\xspace = +1$ ($-1$) sphaleron transitions involve 12 (anti)fermions: three (anti)leptons, one from each generation, and nine (anti)quarks, corresponding to three colors and three generations, with the total electric charge and weak isospin of zero. Nevertheless, at the LHC, we consider signatures with 14, 12, or 10 particles produced that arise from a $\cPq + \cPq' \to \cPq + \cPq' + \mathrm{sphaleron}$ process, where 0, 1, or 2 of the 12 fermions corresponding to the sphaleron transition may ``cancel'' the $\cPq$ or $\cPq'$ inherited from the initial state~\cite{Ellis2016,sphaleron-gen}. Since between zero and three of the produced particles are neutrinos, and also between zero and three are top quarks, which further decay, the actual multiplicity of the visible final-state particles may vary between 7 and 20 or more. Some of the final-state particles may also be gluons from either initial- or final-state radiation. While the large number of allowed combinations of the 12 (anti)fermions results in over a million unique transitions~\cite{TEU_Kolb}, many of the final states resulting from these transitions would look identical in a typical collider experiment, as no distinction is made between quarks of the first two generations, leading to only a few dozen phenomenologically unique transitions, determined by the charges and types of leptons and the third-generation quarks in the final state. These transitions would lead to characteristic collider signatures, which would have many energetic jets and charged leptons, as well as large missing transverse momentum due to undetected neutrinos.
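The quantum-number bookkeeping behind such transitions can be verified explicitly. The following Python sketch (with schematic particle labels, not generator code) checks that $\Delta B = \Delta L = 3\ensuremath{{N_\mathrm{CS}}}\xspace$ and $\Delta(B-L) = 0$ for the example process above:
\begin{verbatim}
# Bookkeeping check of Delta(B) = Delta(L) = 3*N_CS and Delta(B-L) = 0
# for u + u -> e+ mu+ tau+ tbar tbar bbar cbar cbar sbar dbar.
from fractions import Fraction

B = {"q": Fraction(1, 3), "qbar": Fraction(-1, 3),
     "lep": 0, "antilep": 0}
L = {"q": 0, "qbar": 0, "lep": 1, "antilep": -1}

def deltas(initial, final):
    dB = sum(B[p] for p in final) - sum(B[p] for p in initial)
    dL = sum(L[p] for p in final) - sum(L[p] for p in initial)
    return dB, dL

dB, dL = deltas(["q", "q"], 3 * ["antilep"] + 7 * ["qbar"])
print(dB, dL, dB - dL)   # -3 -3 0, i.e. an N_CS = -1 transition
\end{verbatim}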
A phenomenological reinterpretation in terms of limits on the EW sphaleron production of an ATLAS search for microscopic BHs in the multijet final states at $\sqrt{s} = 13\TeV$~\cite{ATLAS-inclusive13}, comparable to an earlier CMS analysis~\cite{CMSBH4}, was recently performed in Ref.~\cite{Ellis2016}. In the present paper, we describe the first dedicated experimental search for EW sphaleron transitions.
\section{The CMS detector and the data sample\label{sec:detector}}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity ($\eta$) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid.
In the region $\abs{\eta} < 1.74$, the HCAL cells have widths of 0.087 in pseudorapidity and 0.087 in azimuth ($\phi$). In the $\eta$--$\phi$ plane, and for $\abs{\eta} < 1.48$, the HCAL cells map onto $5 \times 5$ arrays of ECAL crystals to form calorimeter towers projecting radially outwards from close to the nominal interaction point. For $\abs{\eta} > 1.74$, the coverage of the towers increases progressively to a maximum of 0.174 in $\Delta \eta$ and $\Delta \phi$. Within each tower, the energy deposits in ECAL and HCAL cells are summed to define the calorimeter tower energies, subsequently used to provide the energies and directions of hadronic jets.
Events of interest are selected using a two-tiered trigger system~\cite{Khachatryan:2016bia}. The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\unit{kHz} within a time interval of less than 4\mus. The second level, known as the high-level trigger (HLT), consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
The analysis is based on a data sample recorded with the CMS detector in \Pp\Pp\ collisions at a center-of-mass energy of 13\TeV in 2016, corresponding to an integrated luminosity of \ensuremath{35.9\fbinv}\xspace. Since typical signal events are expected to contain multiple jets, we employ a trigger based on the \HT variable, defined as the scalar sum of the transverse momenta (\pt) of all jets in an event reconstructed at the HLT. We require $\HT > 800$--900\GeV and also use a logical OR with several single-jet triggers with \pt thresholds of 450--500\GeV. The resulting trigger selection is fully efficient for events that subsequently satisfy the offline requirements used in the analysis.
\section{Event reconstruction\label{sec:eventselection}}
The particle-flow (PF) algorithm~\cite{PFLOW} aims to reconstruct and identify each individual particle in an event with an optimized combination of information from the various elements of the CMS detector. The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
The reconstructed vertex with the largest value of summed physics-object $\pt^2$ is taken to be the primary $\Pp\Pp$ interaction vertex. The physics objects are the jets, clustered using the anti-\kt jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with the tracks assigned to the vertex as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the \pt of those jets. Events are required to have at least one reconstructed vertex within 24 (2)\unit{cm} of the nominal collision point in the direction parallel (perpendicular) to the beams.
For each event, hadronic jets are clustered from the PF candidates using the anti-\kt algorithm with a distance parameter of 0.4. The jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be within 5 to 10\% of the true momentum over the whole \pt spectrum and detector acceptance. Additional \Pp\Pp\ interactions within the same or neighboring bunch crossings (pileup) can contribute additional tracks and calorimetric energy depositions to the jet momentum. To mitigate this effect, tracks originating from pileup vertices are discarded and an offset correction is applied to correct for the remaining contributions. Jet energy corrections are derived from simulation, to bring the measured response of jets to that of particle-level jets on average. In situ measurements of the momentum balance in dijet, multijet, {\Pgg}+jet, and leptonically decaying {\PZ}+jet events are used to account for any residual differences in the jet energy scales in data and simulation~\cite{Khachatryan:2016kdb}. The jet energy resolution amounts typically to 15\% at a jet \pt of 10\GeV, 8\% at 100\GeV, and 4\% at 1\TeV. Additional selection criteria are applied to each jet to remove those potentially dominated by anomalous contributions from various subdetector components or reconstruction failures. All jets are required to have $\pt > 70\GeV$ and be within $\abs{\eta} < 5$. For the leading \pt jet in each event, the energy fraction carried by muon candidates failing the standard identification~\cite{Sirunyan:2018fpa} is required to be less than 80\%. This requirement removes events where a low-momentum muon is misreconstructed with very high momentum and misidentified as a high-energy jet. We further require the leading jet in an event to have a charged-hadron fraction of less than 0.99 if this jet is found within $\abs{\eta} <2.4$~\cite{CMS-PAS-JME-16-003}.
The missing transverse momentum, $\ptmiss$, is defined as the magnitude of the vectorial sum of transverse momenta of all PF candidates in an event. The jet energy corrections are further propagated to the \ptmiss calculation.
Details of muon reconstruction can be found in Ref.~\cite{Sirunyan:2018fpa}. The muon candidate is required to have at least one matching energy deposit in the pixel tracker and at least six deposits in the silicon strip tracker, as well as at least two track segments in the muon detector. The transverse impact parameter and the longitudinal distance of the track associated with the muon with respect to the primary vertex are required to be less than 2 and 5\unit{mm}, respectively, to reduce contamination from cosmic ray muons. The global track fit to the tracker trajectory and to the muon detector segments must have a $\chi^2$ per degree of freedom of less than 10. Muon candidates are required to have $\pt > 70\GeV$ and to be within $\abs{\eta} < 2.4$.
Details of electron and photon reconstruction can be found in Refs.~\cite{EGM-13-001} and \cite{EGM-14-001}, respectively. Electron and photon candidates are required to have $\pt > 70\GeV$ and $\abs{\eta} < 2.5$, excluding the $1.44 < \abs{\eta} < 1.57$ transition region between the ECAL barrel and endcap detectors where the reconstruction is suboptimal. We use standard identification criteria, corresponding to an average efficiency of 80\% per electron or photon. The identification criteria include a requirement that the transverse size of the electromagnetic cluster be compatible with the one expected from a genuine electron or photon, and that the ratio of the HCAL to ECAL energies be less than 0.25 (0.09) for electrons and less than 0.0396 (0.0219) for photons in the barrel (endcap). In addition, photon candidates are required to pass the conversion-safe electron veto requirements~\cite{EGM-14-001}, which disambiguate them from electron candidates.
Muons, electrons, and photons are required to be isolated from other energy deposits in the tracker and the calorimeters. The isolation $\mathcal{I}$ is defined as the ratio of the \pt sum of various types of additional PF candidates in a cone of radius $\Delta R = \sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}$ of 0.4 (muons) or 0.3 (electrons and photons), centered on the lepton or photon candidate, to the candidate's \pt. For muons, the numerator of the ratio is corrected for the contribution of neutral particles due to pileup, using one half of the \pt carried by the charged hadrons originating from pileup vertices. For electrons and photons, an average area method~\cite{rho-method}, as estimated with {\FASTJET}~\cite{Cacciari:2011ma}, is used. The isolation requirements are the same as used in an earlier 13\TeV analysis~\cite{CMSBH4}, except that for electrons we use a tighter isolation requirement of ${\cal I} < 0.07$.
To avoid double counting, we remove jets that are found within a radius of $\Delta R =0.3$ from a muon, electron, or photon, if the latter object contributes more than 80, 70, or 50\% of the jet \pt, respectively.
\section{Analysis strategy\label{s:strategy}}
We follow closely the approach for semiclassical BH searches originally developed by CMS for Run 1 analyses~\cite{CMSBH1,CMSBH2,CMSBH3} and subsequently used in the studies of early Run 2~\cite{CMSBH4} data. This approach is based on an inclusive search for BH decays to
all possible final states, dominated by the high-multiplicity multijet ones in the semiclassical BH case. This type of
analysis is less sensitive to the details of BH evaporation and the relative abundance of various particles produced, as it
considers all types of particles in the final state. We use a single discriminating variable \ensuremath{{S_\mathrm{T}}}\xspace, defined as the scalar sum of
\pt of all $N$ energetic objects in an event (which we define as jets, electrons, muons, and photons with \pt
above a given threshold), plus \ptmiss in the event, if it exceeds the same threshold:
$\ensuremath{{S_\mathrm{T}}}\xspace = \ptmiss + \sum_{i=1}^{N} \pt^i$. Accounting for \ptmiss in the \ensuremath{{S_\mathrm{T}}}\xspace variable makes \ensuremath{{S_\mathrm{T}}}\xspace a better measure of the total transverse momentum in the event carried by all the various particles. Since it is impossible to tell how many objects lead to the \ptmiss in the event, we do not consider \ptmiss values above the threshold when determining the object multiplicity.
This definition of \ensuremath{{S_\mathrm{T}}}\xspace is robust against variations in the BH evaporation model, and is also sensitive to the cases when there is large
\ptmiss due to enhanced emission of gravitons or to models in which a massive, weakly interacting remnant of a BH is formed at the terminal stage of Hawking evaporation, with a mass below \MD. It is equally applicable to sphaleron searches, given the expected energetic, high-multiplicity final states, possibly with large \ptmiss.
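In code, the \ensuremath{{S_\mathrm{T}}}\xspace and $N$ definitions reduce to a few lines; the following Python sketch (illustrative, with hypothetical input values, using the 70\GeV object threshold adopted in this analysis) makes the counting rules explicit:
\begin{verbatim}
# Illustrative implementation of the S_T and N definitions: pTmiss adds
# to S_T when above threshold but never counts toward the multiplicity.
PT_MIN = 70.0  # GeV, object threshold used in this analysis

def st_and_multiplicity(object_pts, ptmiss):
    """object_pts: pT (GeV) of the jets, electrons, muons, photons."""
    selected = [pt for pt in object_pts if pt > PT_MIN]
    st = sum(selected) + (ptmiss if ptmiss > PT_MIN else 0.0)
    return st, len(selected)

print(st_and_multiplicity([850.0, 420.0, 300.0, 95.0, 40.0], 120.0))
# -> (1785.0, 4): five objects, one below threshold, pTmiss counted
\end{verbatim}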
The \ensuremath{{S_\mathrm{T}}}\xspace distributions are then considered separately for various inclusive object multiplicities (\ie, $N \ge \ensuremath{{N^\mathrm{min}}}\xspace = 3, \dots, 11$).
The background is dominated by SM QCD multijet production and is estimated exclusively from control samples in data. The observed number of events with \ensuremath{{S_\mathrm{T}}}\xspace values above a
chosen threshold is compared with the background and signal+background predictions to either establish a signal or to set limits on the signal production. This approach does not rely on the Monte Carlo (MC) simulation of the backgrounds, and
it also has higher sensitivity than exclusive searches in specific final states, \eg, lepton+jets~\cite{ATLAS-ljets1,ATLAS-ljets2}.
The main challenge of the search is to describe the inclusive multijet background in a robust
way, as both BH and sphaleron signals correspond to a broad enhancement in the high tail of the \ensuremath{{S_\mathrm{T}}}\xspace distribution, rather than to a narrow peak.
Since these signals are expected to involve a high multiplicity
of final-state particles, one has to reliably describe the background for large jet multiplicities,
which is quite challenging theoretically as higher-order calculations that fully describe multijet
production do not exist. Thus, one cannot rely on simulation to reproduce the \ensuremath{{S_\mathrm{T}}}\xspace spectrum for large $N$ correctly.
To overcome this problem, a dedicated method of predicting the QCD multijet background directly from collision
data has been developed for the original Run 1 analysis~\cite{CMSBH1} and used in the subsequent Run 1~\cite{CMSBH2,CMSBH3} and Run 2~\cite{CMSBH4} searches. It has been found empirically, first via simulation-based studies, and then from the analysis of data at low jet
multiplicities, that the shape of the \ensuremath{{S_\mathrm{T}}}\xspace distribution for the dominant QCD multijet background
does not depend on the multiplicity of the final state, above a certain turn-on threshold.
This observation reflects the way a parton shower develops via nearly collinear emission, which
conserves \ensuremath{{S_\mathrm{T}}}\xspace. It allows one to predict the \ensuremath{{S_\mathrm{T}}}\xspace spectrum of a multijet final state using low-multiplicity
QCD events, \eg, dijet or trijet events. This ``\ensuremath{{S_\mathrm{T}}}\xspace invariance'' provides a powerful method of predicting the dominant
background for BH production by taking the \ensuremath{{S_\mathrm{T}}}\xspace shape from low-multiplicity events, for which the signal contamination
is expected to be negligible, and normalizing it to the observed spectrum at high multiplicities at the low
end of the \ensuremath{{S_\mathrm{T}}}\xspace distribution, where signal contamination is negligible even for large multiplicities of the
final-state objects. The method has been also used for other CMS searches, \eg, a search for stealth
supersymmetry~\cite{stealth} and a search for multijet resonances~\cite{EXO-13-001}.
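Schematically, the method amounts to a template rescaling; the following Python sketch (with hypothetical histogram contents, assuming spectra stored as NumPy arrays binned in \ensuremath{{S_\mathrm{T}}}\xspace) illustrates the procedure:
\begin{verbatim}
# Illustrative sketch of the ST-invariance method: take the shape from
# the low-multiplicity spectrum and rescale it to the inclusive data in
# a low-ST normalization region where signal contamination is negligible.
import numpy as np

def predict_background(template_lowmult, data_inclusive, nr_mask):
    """nr_mask: boolean array selecting the normalization-region bins."""
    scale = data_inclusive[nr_mask].sum() / template_lowmult[nr_mask].sum()
    return scale * template_lowmult

template_n3 = np.array([9000.0, 3000.0, 900.0, 250.0, 60.0, 12.0])
data_n6 = np.array([600.0, 190.0, 55.0, 18.0, 5.0, 2.0])
nr = np.array([True, True, False, False, False, False])
print(predict_background(template_n3, data_n6, nr))
\end{verbatim}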
\section{Simulated samples\label{sec:signals}}
\subsection{Black hole and string ball signal samples}
Signal simulation is performed using the {\textsc{BlackMax}}\xspace~v2.02.0~\cite{BlackMax} (semiclassical BHs) and \CHARYBDIS2~v1.003~\cite{Charybdis,Charybdis2}
(semiclassical BHs and SBs) generators. The generator settings of each model are listed in Tables \ref{table:generator-blackMax}
and \ref{table:generator-Charybdis}.
\begin{table}[hbt]
\centering
\topcaption{Generator settings used for {\textsc{BlackMax}}\xspace signal sample generation.\label{table:generator-blackMax}}
\cmsTable{
\begin{tabular}{ccccc}
Model & Choose\_a\_case & Mass\_loss\_factor & Momentum\_loss\_factor & turn\_on\_graviton\\
\hline
B1 & tensionless\_nonrotating & 0 & 0 & FALSE\\
B2 & rotating\_nonsplit & 0 & 0 & FALSE\\
B3 & rotating\_nonsplit & 0.1 & 0.1 & TRUE\\
\end{tabular}
}
\end{table}
\begin{table}[hbt]
\centering
\topcaption{Generator settings used for \CHARYBDIS2 signal sample generation.\label{table:generator-Charybdis}}
\cmsTable{
\begin{tabular}{ccccccccc}
Model & BHSPIN & MJLOST & YRCSC & NBODYAVERAGE & NBODYPHASE & NBODYVAR & RMSTAB & RMBOIL\\
\hline
C1 & TRUE & FALSE & FALSE & FALSE & TRUE & TRUE &FALSE &FALSE\\
C2 & FALSE & FALSE & FALSE & FALSE & TRUE & TRUE &FALSE &FALSE\\
C3 & TRUE & FALSE & FALSE & TRUE & FALSE& FALSE &FALSE &FALSE\\
C4 & TRUE & TRUE & TRUE & FALSE & TRUE & TRUE &FALSE &FALSE\\
C5 & TRUE & TRUE & TRUE & FALSE & FALSE& FALSE &TRUE &FALSE\\
C6 & TRUE & TRUE & TRUE & FALSE & FALSE & FALSE &FALSE &TRUE\\
\end{tabular}
}
\end{table}
For semiclassical BH signals, we explore different aspects of BH production and decay by simulating various scenarios, including
nonrotating BHs (B1, C2), rotating BHs (B2, C1), rotating BHs with mass loss (B3), and rotating BHs with Yoshino--Rychkov bounds~\cite{YR} (C4).
Models C3, C5, and C6 explore the termination phase of the BH with different object multiplicities from the BH remnant: a two-body decaying remnant (C3), a stable remnant (C5, for which additionally the generator parameter NBODY was changed from its default value of 2 to 0), and a ``boiling'' remnant (C6), where the remnant continues to evaporate until a maximum Hawking
temperature equal to \MD is reached. For each model,
the fundamental Planck scale \MD is varied within 2--9\TeV in 1\TeV steps, each with $\ensuremath{{n_\mathrm{ED}}}\xspace=2,\,4,\,6$. The minimum BH mass \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace is varied between $\MD+1\TeV$ and 11\TeV in 1\TeV steps.
For SB signals, two sets of benchmark points are generated with \CHARYBDIS2, such that different regimes of the SB production
can be explored. For a constant string coupling value $\ensuremath{{g_\mathrm{S}}}\xspace = 0.2$ the string scale \ensuremath{{M_\mathrm{S}}}\xspace is varied from 2 to 4\TeV, while at constant $\ensuremath{{M_\mathrm{S}}}\xspace=3.6\TeV$, \ensuremath{{g_\mathrm{S}}}\xspace is varied from 0.2 to 0.4. For all SB samples, $\ensuremath{{n_\mathrm{ED}}}\xspace = 6$ is used. The SB dynamics below the
first transition (\ensuremath{{\MS/\gs}}\xspace), where the SB production cross section scales with $\ensuremath{{g_\mathrm{S}}}\xspace^2/\ensuremath{{M_\mathrm{S}}}\xspace^4$, are probed with the constant $\ensuremath{{g_\mathrm{S}}}\xspace = 0.2$ and low \ensuremath{{M_\mathrm{S}}}\xspace values as well as with the constant \ensuremath{{M_\mathrm{S}}}\xspace scan. The saturation regime ($\ensuremath{{\MS/\gs}}\xspace < \ensuremath{{M_\mathrm{SB}}}\xspace < \ensuremath{{\MS/\gs^2}}\xspace$), where the SB production cross section no longer depends on \ensuremath{{g_\mathrm{S}}}\xspace, is probed
by the higher \ensuremath{{M_\mathrm{S}}}\xspace points of the constant \ensuremath{{g_\mathrm{S}}}\xspace benchmark. For each benchmark point, the scale \MD is chosen such that the cross section
at the SB--BH transition (\ensuremath{{\MS/\gs^2}}\xspace) is continuous.
For the BH and SB signal samples we use leading order (LO) MSTW2008LO~\cite{MSTW,MSTW1} parton distribution functions (PDFs).
This choice is driven by the fact that this set tends to give a conservative estimate of the signal cross section at
high masses, as checked with the modern NNPDF3.0~\cite{NNPDF} LO PDFs, with a strong coupling constant value
of 0.118 used for the central prediction, with a standard uncertainty eigenset. The MSTW2008LO PDF set was also used
in all Run 1 BH searches~\cite{CMSBH1,CMSBH2,CMSBH3} and in an earlier Run 2~\cite{CMSBH4} search, which makes the comparison with earlier results straightforward.
\subsection{Sphaleron signal samples\label{sec:sphaleron-signal}}
The electroweak sphaleron processes are generated at LO with the \textsc{BaryoGEN} v1.0 generator~\cite{sphaleron-gen}, capable of simulating various final states described in Section~\ref{s:introSph}. We simulate the sphaleron signal for three values of the transition energy $\ensuremath{{E_\mathrm{sph}}}\xspace = 8$, 9, and 10\TeV. The parton-level simulation is done with the CT10 LO PDF set~\cite{CT10}. In the process of studying various PDF sets, we found that the NNPDF3.0 set yields a significantly larger fraction of sea quarks in the kinematic region of interest than all other modern PDFs. While the uncertainty in this fraction is close to 100\%, we chose the CT10 set, for which this fraction is close to the median of the various PDF sets we studied. The PDF uncertainties discussed in Section~\ref{s:systematics} cover the variation in the signal acceptance between various PDFs due to this effect.
The typical final-state multiplicities for the $\ensuremath{{N_\mathrm{CS}}}\xspace = \pm1$ sphaleron transitions resulting in 10, 12, or 14 parton-level final states are shown in Fig.~\ref{fig:sphaleron}. The $\ensuremath{{N_\mathrm{CS}}}\xspace = 1$ transitions are dominated by 14 final-state partons, as the proton mainly consists of valence quarks, thus making the probability of cancellations small.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figure_001.pdf}
\caption{Observed final-state particle multiplicity $N$ distributions for $\ensuremath{{N_\mathrm{CS}}}\xspace=\pm 1$ sphaleron transitions resulting in 10, 12, and 14 parton-level final-state multiplicities. The relative numbers of events in the histograms are proportional to the relative probabilities of these three parton-level configurations. The peaks at positive values correspond to $\ensuremath{{N_\mathrm{CS}}}\xspace = 1$ transitions, while those at negative values correspond to $\ensuremath{{N_\mathrm{CS}}}\xspace = -1$ transitions and therefore are shifted toward lower multiplicity $N$ because of cancellations with initial-state partons.}
\label{fig:sphaleron}
\end{figure}
The cross section for sphaleron production is given by~\cite{Ellis2016}: $\sigma = \mathrm{PEF} \, \sigma_0$, where $\sigma_0 = 121$, 10.1, and 0.51\unit{fb} for $\ensuremath{{E_\mathrm{sph}}}\xspace = 8$, 9, and 10\TeV, respectively, and PEF is the pre-exponential factor, defined as the fraction of all quark-quark interactions above the sphaleron energy threshold \ensuremath{{E_\mathrm{sph}}}\xspace that undergo the sphaleron transition.
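Since the PEF enters as a simple multiplicative factor, an upper limit on the signal cross section translates directly into a PEF limit, as the following Python sketch illustrates (the input cross section limit is a hypothetical value; the $\sigma_0$ values are those quoted above):
\begin{verbatim}
# Upper limit on PEF from an upper limit on the sphaleron cross section.
SIGMA_0_FB = {8: 121.0, 9: 10.1, 10: 0.51}  # E_sph in TeV -> sigma_0 (fb)

def pef_limit(xsec_limit_fb, e_sph_tev):
    return xsec_limit_fb / SIGMA_0_FB[e_sph_tev]

print(pef_limit(0.21, 9))   # hypothetical 0.21 fb limit -> PEF < ~0.021
\end{verbatim}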
\subsection{Background samples}
In addition, we use simulated samples of {\PW}+jets, {\PZ}+jets, {\Pgg}+jets, \ttbar, and QCD multijet events for auxiliary studies. These events are generated with the \MGvATNLO v2.2.2~\cite{MadGraph} event generator at LO or next-to-LO, with the NNPDF3.0 PDF set of a matching order.
The fragmentation and hadronization of parton-level signal and background samples is done with \PYTHIA v8.205~\cite{pythia8}, using the underlying event tune CUETP8M1~\cite{CUET}. All signal and background samples are reconstructed with the detailed simulation of the CMS detector via \GEANTfour~\cite{GEANT4}. The effect of pileup interactions is simulated by superimposing simulated minimum bias events on the hard-scattering interaction, with the multiplicity distribution chosen to match the one observed in data.
\section{Background estimate\label{s:backgrounds}}
\subsection{Background composition}
The main backgrounds in the analyzed multi-object final states are QCD multijet, V+jets (where V = \PW, \PZ), \Pgg+jets, and \ttbar production, with the QCD multijet background being by far the dominant one.
Figure~\ref{fig:STdist} illustrates the relative importance of these backgrounds for the inclusive multiplicity $N \ge 3$ and 6 cases, based on simulated background samples. To reach the overall agreement with the data, all simulated backgrounds except for the QCD multijets are normalized to the most accurate theoretical predictions available, while the QCD multijet background is normalized so that the total number of background events matches that in data. While we do not use simulated backgrounds to obtain the main results in this analysis, Fig.~\ref{fig:STdist} illustrates an important point: not only is the QCD multijet background at least an order of magnitude more important than other backgrounds, for both low- and high-multiplicity cases, but also the shape of the \ensuremath{{S_\mathrm{T}}}\xspace distributions for all major backgrounds is very similar, so the method we use to estimate the multijet background, discussed below, provides an acceptable means of predicting the overall background as well.
\begin{figure}[htb]
\centering
\includegraphics[width=0.47\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.47\textwidth]{Figure_002-b.pdf}
\caption{The \ensuremath{{S_\mathrm{T}}}\xspace distribution in data for inclusive multiplicities of (left) $N \ge 3$ and (right) $N \ge 6$, compared with the normalized background prediction from simulation, illustrating the relative contributions of major backgrounds. The lower panels show the difference between the data and the simulated background prediction, divided by the statistical uncertainty in data. We note that despite an overall agreement, we do not rely on simulation for obtaining the background prediction.}
\label{fig:STdist}
\end{figure}
\subsection{Background shape determination}
The background prediction method used in the analysis follows closely that in previous similar CMS searches~\cite{CMSBH1,CMSBH2,CMSBH3,CMSBH4}.
As discussed in Section~\ref{s:strategy}, the central idea of this method is that the shape of the \ensuremath{{S_\mathrm{T}}}\xspace distribution for the dominant multijet background is invariant with respect to the final-state object multiplicity $N$. Consequently, the background shape can be extracted from low-multiplicity spectra and used to describe the background at high multiplicities. The \ensuremath{{S_\mathrm{T}}}\xspace value is preserved by the final-state radiation, which is the dominant source of extra jets beyond LO $2 \to 2$ QCD processes, as long as the additional jets are above the \pt threshold used in the definition of \ensuremath{{S_\mathrm{T}}}\xspace.
At the same time, jets from initial-state radiation (ISR) change the \ensuremath{{S_\mathrm{T}}}\xspace value, but because their \pt spectrum is steeply falling they typically contribute only a few percent to the \ensuremath{{S_\mathrm{T}}}\xspace value and change the multiplicity $N$ by just one unit, for events used in the analysis. Consequently, we extract the background shape from the $N=3$ \ensuremath{{S_\mathrm{T}}}\xspace spectrum, which already has a contribution from ISR jets, and therefore reproduces the \ensuremath{{S_\mathrm{T}}}\xspace shape at higher multiplicities better than the $N=2$ spectrum used in earlier analyses. To estimate any residual noninvariance in the \ensuremath{{S_\mathrm{T}}}\xspace distribution, the $N=4$ \ensuremath{{S_\mathrm{T}}}\xspace spectrum, normalized to the $N=3$ spectrum in terms of the total number of events, is also used as an additional component of the background shape uncertainty. Furthermore, to be less sensitive to the higher instantaneous luminosity delivered by the LHC in 2016, which resulted in a higher pileup, and to further reduce the effect of ISR, the \pt threshold for all objects was raised to 70\GeV, compared to 50\GeV used in earlier analyses. The reoptimization that has resulted in the choice of a new exclusive multiplicity to be used for the baseline QCD multijet background prediction and a higher minimum \pt threshold for the objects counted toward \ensuremath{{S_\mathrm{T}}}\xspace was based on extensive studies of MC samples and low-\ensuremath{{S_\mathrm{T}}}\xspace events in data.
In order to obtain the background template, we use a set of 16 functions employed in earlier searches for BSM physics in dijets, VV events, and multijet events at various colliders. These functions typically have an exponential or power-law behavior with \ensuremath{{S_\mathrm{T}}}\xspace, and are described by 3--5 free parameters. Some of the functions are monotonically falling with \ensuremath{{S_\mathrm{T}}}\xspace by construction; however, some of them contain polynomial terms, such that they are not constrained to have a monotonic behavior. In order to determine the background shape, we fit the $N = 3$ \ensuremath{{S_\mathrm{T}}}\xspace distribution or the $N=4$ \ensuremath{{S_\mathrm{T}}}\xspace distribution, normalized to the same total event count as the $N=3$ distribution, in the range of 2.5--4.3\TeV, where any sizable contributions from BSM physics have been ruled out by earlier versions of this analysis, with all 16 functional forms. The lowest masses of the signal models considered, which have not been excluded by the previous analysis~\cite{CMSBH4}, contribute less than 2\% to the total number of events within the fit range. Any functional form observed not to be monotonically decreasing up to $\ensuremath{{S_\mathrm{T}}}\xspace = 13\TeV$ after the fit to both multiplicities is discarded. The largest spread among all the accepted functions in the $N = 3$ and $N=4$ fits is used as an envelope of the systematic uncertainty in the background template. The use of both $N=3$ and $N=4$ distributions to construct the envelope allows one to take into account any residual \ensuremath{{S_\mathrm{T}}}\xspace noninvariance in the systematic uncertainty in the background prediction. The method shows good closure when used to predict the background distributions in simulated QCD multijet events.
The best fits (taking into account the F-test criterion~\cite{F-test} within each set of nested functions) to the $N=3$ and $N=4$ distributions in data, along with the corresponding uncertainty envelopes, are shown in the two panels of Fig.~\ref{fig:fitData}. In both cases, the best fit function is $f(x) = p_0(1-x^{1/3})^{p_1}/(x^{p_2+p_3\log^2(x)})$, where $x = \ensuremath{{S_\mathrm{T}}}\xspace/\sqrt{s} = \ensuremath{{S_\mathrm{T}}}\xspace/(13\TeV)$ and $p_i$ are the four free parameters of the fit. The envelope of the predictions at large \ensuremath{{S_\mathrm{T}}}\xspace ($\ensuremath{{S_\mathrm{T}}}\xspace > 5.5\TeV$, most relevant for the present search) is given by the fit with the following 5-parameter function: $\phi(x) = p_0(1-x)^{p_1}/(x^{p_2 + p_3\log(x)+p_4\log^2(x)})$ to the $N=4$ (upper edge of the envelope) or $N=3$ (lower edge of the envelope) distributions. For \ensuremath{{S_\mathrm{T}}}\xspace values below 5.5\TeV the envelope is built piecewise from other template functions fitted to either the $N=3$ or $N=4$ distribution.
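For reference, these two functional forms can be coded directly; the following Python sketch (covering only these two of the 16 template functions, with hypothetical input arrays in the commented-out fit call) sets them up for a least-squares fit with \texttt{scipy}:
\begin{verbatim}
# The best fit function f(x) and the envelope function phi(x) quoted in
# the text, with x = ST/sqrt(s), set up for scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def f_best(x, p0, p1, p2, p3):
    return p0 * (1.0 - x**(1.0/3.0))**p1 / x**(p2 + p3 * np.log(x)**2)

def phi_env(x, p0, p1, p2, p3, p4):
    return p0 * (1.0 - x)**p1 / x**(p2 + p3*np.log(x) + p4*np.log(x)**2)

# e.g., with st_centers (GeV) and counts from the N = 3 spectrum:
# popt, pcov = curve_fit(f_best, st_centers / 13000.0, counts,
#                        sigma=np.sqrt(counts), p0=[1e-5, 10.0, 5.0, 0.1])
\end{verbatim}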
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Figure_003-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_003-b.pdf}
\caption{The results of the fit to data with $N = 3$ (left) and $N=4$ (right), after discarding the functions that fail to monotonically
decrease up to $\ensuremath{{S_\mathrm{T}}}\xspace = 13\TeV$. The description of the best fit function and the envelope are given in the main text. A few points beyond the plotted vertical range in the ratio panels are outside the fit region and do not contribute to the fit quality.}
\label{fig:fitData}
\end{figure}
\subsection{Background normalization}
The next step in the background estimation for various inclusive multiplicities is to normalize the template and the uncertainty envelope, obtained as described above, to low-\ensuremath{{S_\mathrm{T}}}\xspace data for various inclusive multiplicities. This has to be done with care, as the \ensuremath{{S_\mathrm{T}}}\xspace invariance is only expected to be observed above a certain threshold, which depends on the inclusive multiplicity requirement. Indeed, since there is a \pt threshold on the objects whose transverse energies count toward the \ensuremath{{S_\mathrm{T}}}\xspace value, the minimum possible \ensuremath{{S_\mathrm{T}}}\xspace value depends on the number of objects in the final state, and therefore the shape invariance for an \ensuremath{{S_\mathrm{T}}}\xspace spectrum with $N \ge \ensuremath{{N^\mathrm{min}}}\xspace$ is only observed above a certain \ensuremath{{S_\mathrm{T}}}\xspace threshold, which increases with \ensuremath{{N^\mathrm{min}}}\xspace. In order to determine the minimum value of \ensuremath{{S_\mathrm{T}}}\xspace for which this invariance holds, we find a plateau in the ratio of the \ensuremath{{S_\mathrm{T}}}\xspace spectrum for each inclusive multiplicity to that for $N=3$ in simulated multijet events. The plateau for each multiplicity is found by fitting the ratio with a sigmoid function. The lower bound of the normalization region (NR) is chosen to be above the 99\% point of the corresponding sigmoid function. The upper bound of each NR is chosen to be 0.4\TeV above the corresponding lower bound to ensure sufficient event count in the NR. Since the size of the simulated QCD multijet background sample is not sufficient to reliably extract the turn-on threshold for inclusive multiplicities of $N \ge 9$--11, for these multiplicities we use the same NR as for the $N \ge 8$ distribution. A self-consistency check with the CMS data sample has shown that this procedure provides an adequate description of the data. Table \ref{tab:NF} summarizes the turn-on thresholds and the NR boundaries obtained for each inclusive multiplicity.
\begin{table}[htbp!]
\centering
\topcaption{The \ensuremath{{S_\mathrm{T}}}\xspace invariance thresholds from fits to simulated QCD multijet background spectra, normalization region definitions, and normalization scale factors in data for different inclusive multiplicities. \label{tab:NF}}
\begin{tabular}{cccc}
Multiplicity & 99\% turn-on & Normalization & Normalization \\
& point (\TeVns{}) & region (\TeVns{})& scale factor (data) \\
\hline
${\ge}3$ & $2.44 \pm 0.06$ & 2.5--2.9 & $3.437 \pm 0.025$ \\
${\ge}4$ & $2.47 \pm 0.06$ & 2.5--2.9 & $2.437 \pm 0.019$ \\
${\ge}5$ & $2.60 \pm 0.07$ & 2.7--3.1 & $1.379 \pm 0.016$ \\
${\ge}6$ & $2.75 \pm 0.11$ & 2.9--3.3 & $0.652 \pm 0.012$ \\
${\ge}7$ & $2.98 \pm 0.13$ & 3.0--3.4 & $0.516 \pm 0.015$ \\
${\ge}8$ & $3.18 \pm 0.21$ & 3.2--3.6 & $0.186 \pm 0.011$ \\
${\ge}9$ & $3.25 \pm 0.28$ & 3.2--3.6 & $0.055 \pm 0.006$ \\
${\ge}10$ & $3.02 \pm 0.26$ & 3.2--3.6 & $0.012 \pm 0.003$ \\
${\ge}11$ & $2.89 \pm 0.24$ & 3.2--3.6 & $0.002 \pm 0.001$ \\
\end{tabular}
\end{table}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.47\textwidth]{Figure_004-a.pdf}
\includegraphics[width=0.47\textwidth]{Figure_004-b.pdf}
\includegraphics[width=0.47\textwidth]{Figure_004-c.pdf}
\includegraphics[width=0.47\textwidth]{Figure_004-d.pdf}
\caption{The comparison of data and the background predictions after the normalization for inclusive multiplicities $N \ge 3, \dots, 6$ (left to right, upper to lower).
The gray band shows the background shape uncertainty alone and the red lines also include the normalization uncertainty.
The bottom panels show the difference between the data and the background prediction from the fit, divided by the overall uncertainty, which includes the statistical uncertainty of data as well as the shape and normalization uncertainties in the background prediction, added in quadrature.}
\label{fig:dataBkg3to6}
\end{figure}
The normalization scale factors are calculated as the ratio of the number of events in each NR for the inclusive multiplicities of $N\geq 3, \dots, 11$ to that for the exclusive multiplicity of $N = 3$ in data, and are listed in Table \ref{tab:NF}.
The relative scale factor uncertainties are derived from the number of events in each NR, as $1/\sqrt{N_\mathrm{NR}}$, where $N_\mathrm{NR}$ is the number of events in the corresponding NR.
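The entries in the last column of Table~\ref{tab:NF} thus follow from simple event counts; a minimal Python sketch of the calculation:
\begin{verbatim}
# Normalization scale factor: ratio of the inclusive N >= Nmin count to
# the exclusive N = 3 count in the NR, with relative uncertainty
# 1/sqrt(N_NR) from the inclusive NR event count.
from math import sqrt

def scale_factor(n_inclusive_nr, n_exclusive3_nr):
    sf = n_inclusive_nr / n_exclusive3_nr
    return sf, sf / sqrt(n_inclusive_nr)   # (value, abs. uncertainty)
\end{verbatim}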
\subsection{Comparison with data}
The results of the background prediction and their comparison with the observed data are shown in Figs.~\ref{fig:dataBkg3to6} and \ref{fig:dataBkg7to11} for inclusive multiplicities $N \ge 3,\dots, 11$. The data are consistent with the background predictions in the entire \ensuremath{{S_\mathrm{T}}}\xspace range probed, for all inclusive multiplicities.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.47\textwidth]{Figure_005-a.pdf}
\includegraphics[width=0.47\textwidth]{Figure_005-b.pdf}
\includegraphics[width=0.47\textwidth]{Figure_005-c.pdf}
\includegraphics[width=0.47\textwidth]{Figure_005-d.pdf}
\includegraphics[width=0.47\textwidth]{Figure_005-e.pdf}
\caption{The comparison of data and the background predictions after normalization for inclusive multiplicities of $ N \ge 7, \dots, 11$ (left to right, upper to lower).
The gray band shows the shape uncertainty and the red lines also include the normalization uncertainty.
The bottom panels show the difference between the data and the background prediction from the fit, divided by the overall uncertainty, which includes the statistical uncertainty of data as well as the shape and normalization uncertainties in the background prediction, added in quadrature. The $N \ge 7$ ($N \ge 8, \dots, 11$) distributions also show contributions from benchmark {\textsc{BlackMax}}\xspace B1 (sphaleron) signals added to the expected background.}
\label{fig:dataBkg7to11}
\end{figure}
\section{Systematic uncertainties\label{s:systematics}}
There are several sources of systematic uncertainty in this analysis. Since the background estimation is based on control samples in data, the only uncertainties affecting the background predictions are the modeling of the background shape via template functions and the normalization of the chosen function to data at low \ensuremath{{S_\mathrm{T}}}\xspace, as described in Section~\ref{s:backgrounds}. They are found to be 1--130\% and 0.7--50\%, depending on the values of \ensuremath{{S_\mathrm{T}}}\xspace and \ensuremath{{N^\mathrm{min}}}\xspace, respectively.
For the signal, we consider the uncertainties in the PDFs, jet energy scale (JES), and the integrated luminosity. For the PDF uncertainty, we only consider the effect on the signal acceptance, while the PDF uncertainty in the signal cross section is treated as a part of the theoretical uncertainty and therefore is not propagated in the experimental cross section limit. The uncertainty in the signal acceptance is calculated using PDF4LHC recommendations~\cite{PDF4LHC1,PDF4LHC2} based on the quadratic sum of variations from the MSTW2008 uncertainty set (${\approx}0.5\%$), as well as the variations obtained by using three different PDF sets: MSTW2008, CTEQ6.1~\cite{CTEQ}, and NNPDF2.3~\cite{NNPDF} (up to 6\% based on the difference between the default and CTEQ6.1 sets) for one of the benchmark models (nonrotating BH with $\MD = 3\TeV$, $\ensuremath{{M_\mathrm{BH}}}\xspace = 5.5\TeV$, and $\ensuremath{{n_\mathrm{ED}}}\xspace = 2$, as generated by {\textsc{BlackMax}}\xspace); the size of the effect for other benchmark points is similar. To be conservative, we assign a systematic uncertainty of 6\%
due to the choice of PDFs for all signal samples. The JES uncertainty affects the signal acceptance because of the kinematic requirements on the objects and the fraction of signal events passing a certain \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace threshold used for limit setting, as described in Section~\ref{s:Limits}. In order to account for these effects, the jet four-momenta are simultaneously shifted up or down by the JES uncertainty, which is a function of the jet \pt and
$\eta$, and the larger of the two differences with respect to the use of the nominal JES is assigned as the uncertainty. The uncertainty due to JES depends on \ensuremath{{M_\mathrm{BH}}}\xspace and varies between ${<}1$ and 5\%; we conservatively assign a constant value of 5\% as the signal acceptance uncertainty due to JES. Finally, the integrated luminosity is measured with an uncertainty of 2.5\%~\cite{LUM-17-001}. Effects of all other uncertainties on the signal acceptance are negligible.
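Procedurally, the JES variation can be summarized as follows (a schematic Python sketch; the \texttt{jes\_unc} function and the event record layout are hypothetical stand-ins for the CMS-internal tools):
\begin{verbatim}
# Schematic JES procedure: shift all jet pT coherently up/down by the
# pT- and eta-dependent uncertainty, recompute the signal acceptance,
# and take the larger deviation from nominal as the uncertainty.
def jes_acceptance_uncertainty(events, jes_unc, passes_selection):
    def acceptance(sign):
        n_pass = sum(
            passes_selection([(pt * (1.0 + sign * jes_unc(pt, eta)), eta)
                              for pt, eta in ev["jets"]], ev)
            for ev in events)
        return n_pass / len(events)
    nominal = acceptance(0)
    return max(abs(acceptance(+1) - nominal),
               abs(acceptance(-1) - nominal))
\end{verbatim}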
The values of systematic uncertainties that are used in this analysis are summarized in Table~\ref{tab:Table_Uncertainties}.
\begin{table}[htbp!]
\centering
\topcaption{Summary of systematic uncertainties in the signal acceptance and the background estimate.\label{tab:Table_Uncertainties}}
\begin{tabular}{lcc}
Uncertainty source & Effect on signal acceptance & Effect on background\\
\hline
PDF & $\pm 6\%$ & \NA \\
JES & $\pm 5\%$ & \NA \\
Integrated luminosity & $\pm 2.5\%$ & \NA \\
Shape modeling & \NA & $\pm$(1--130)\%, depending on \ensuremath{{S_\mathrm{T}}}\xspace\\
Normalization & \NA & $\pm$(0.7--50)\%, depending on \ensuremath{{N^\mathrm{min}}}\xspace \\
\end{tabular}
\end{table}
\section{Results\label{s:Limits}}
As shown in Figs.~\ref{fig:dataBkg3to6} and \ref{fig:dataBkg7to11}, there is no evidence for a statistically significant signal observed in any of the inclusive \ensuremath{{S_\mathrm{T}}}\xspace distributions. The null results of the search are interpreted in terms of model-independent limits on BSM physics in energetic, multiparticle final states, and as model-specific limits for a set of semiclassical BH and SB scenarios, as well as for EW sphalerons.
Limits are set using the \CLs method~\cite{Junk,Read,ATLAS_CMS} with log-normal priors in the likelihood to constrain the nuisance parameters near their best estimated values. We do not use an asymptotic approximation of the \CLs method~\cite{Gross}, as for most of the models the optimal search region corresponds to a very low background expectation, in which case the asymptotic approximation is known to overestimate the search sensitivity.
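To illustrate the last point, a toy-based \CLs calculation for a single counting experiment, with no nuisance parameters and the observed event count itself as the test statistic, can be written in a few lines of Python (a deliberately simplified sketch of the full procedure):
\begin{verbatim}
# Toy-based CLs for a one-bin counting experiment with expected signal
# s, expected background b, and n_obs observed events. At very low b the
# asymptotic formulae are known to overestimate the sensitivity, hence
# the use of toys.
import numpy as np

def cls_toys(s, b, n_obs, n_toys=200000, seed=7):
    rng = np.random.default_rng(seed)
    p_sb = np.mean(rng.poisson(s + b, n_toys) <= n_obs)   # CL_{s+b}
    p_b = np.mean(rng.poisson(b, n_toys) <= n_obs)        # CL_b
    return p_sb / p_b if p_b > 0 else 0.0

# A signal is excluded at 95% CL when CLs < 0.05:
print(cls_toys(5.0, 0.5, 0))   # ~0.007 -> excluded
\end{verbatim}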
\subsection{Model-independent limits}
\begin{figure}[htpb!]
\centering
\includegraphics[width=0.45\textwidth]{Figure_006-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_006-b.pdf}
\includegraphics[width=0.45\textwidth]{Figure_006-c.pdf}
\includegraphics[width=0.45\textwidth]{Figure_006-d.pdf}
\caption{Model-independent upper limits on the cross section times acceptance for four sets of inclusive multiplicity thresholds, $N \ge 3, \dots, 6$ (left to right, upper to lower). Observed (expected) limits are shown as the black solid (dotted) lines. The inner (outer) band represents the $\pm 1$ ($\pm 2$) standard deviation uncertainty in the expected limit.}
\label{fig:Limit_ModelIndependent1}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.45\textwidth]{Figure_007-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_007-b.pdf}
\includegraphics[width=0.45\textwidth]{Figure_007-c.pdf}
\includegraphics[width=0.45\textwidth]{Figure_007-d.pdf}
\includegraphics[width=0.45\textwidth]{Figure_007-e.pdf}
\caption{Model-independent upper limits on the cross section times acceptance for five sets of
inclusive multiplicity thresholds, $N \ge 7, \dots, 11$ (left to right, upper to lower).
Observed (expected) limits are shown as the black solid (dotted) lines. The inner (outer) band represents the $\pm 1$ ($\pm 2$) standard deviation uncertainty in the expected limit.}
\label{fig:Limit_ModelIndependent2}
\end{figure}
The main result of this analysis is a set of model-independent upper limits on the product of signal cross section and acceptance ($\sigma \, A$)
in inclusive $N \ge \ensuremath{{N^\mathrm{min}}}\xspace$ final states, as a function of the minimum \ensuremath{{S_\mathrm{T}}}\xspace requirement, \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace, obtained from a simple counting experiment
for $\ensuremath{{S_\mathrm{T}}}\xspace > \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace$.
These limits can then be translated into limits on the \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace in a variety of models, or on any
other signals resulting in an energetic, multi-object final state. We start with the limits for the inclusive multiplicities
$N \ge 3,4$, which can be used to constrain
models resulting in lower multiplicities of the final-state objects. Since part of the data entering these distributions are used to determine the background shape and its uncertainties, the limits are set only for \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace values above the background fit region, \ie, for $\ensuremath{{S_\mathrm{T}}}\xspace > 4.5\TeV$. For other multiplicities, the limits are shown for \ensuremath{{S_\mathrm{T}}}\xspace values above the NRs listed in Table~\ref{tab:NF}.
These limits at 95\% confidence level (\CL) are shown in Figs.~\ref{fig:Limit_ModelIndependent1} and~\ref{fig:Limit_ModelIndependent2}. When computing the limits, we use systematic uncertainties in the signal acceptance applicable to the specific models discussed in this paper, as documented
in Section~\ref{s:systematics}. It is reasonable to expect these limits to apply to a large variety of models resulting in multi-object final states dominated by jets.
The limits on the product of the cross section and acceptance approach 0.08\unit{fb} at high values of \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace.
\subsection{Model-specific limits}
To determine the optimal point of \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace and the minimum multiplicity of the final-state objects \ensuremath{{N^\mathrm{min}}}\xspace for setting an exclusion limit for a particular model, we calculate the acceptance and the expected limit on the cross section for a given model for each point of the model-independent limit curves, for all inclusive multiplicities. The optimal point of $(\ensuremath{{N^\mathrm{min}}}\xspace,\ensuremath{{S_\mathrm{T}^\text{min}}}\xspace)$ is chosen as the point that gives the lowest expected cross section limit. In most of the cases this point also maximizes the significance of an observation, for the case of a nonzero signal present in data~\cite{CMSBH4}.
An example of a model-specific limit is given in Fig.~\ref{fig:BH_optPoint} for a {\textsc{BlackMax}}\xspace benchmark point B1 (nonrotating semiclassical BH) with \MD = 4\TeV, $\ensuremath{{n_\mathrm{ED}}}\xspace = 6$, and \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace between 5 and 11\TeV. In this case, the optimal inclusive multiplicity \ensuremath{{N^\mathrm{min}}}\xspace starts at 7 for the lowest \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace value of 5\TeV, with the corresponding $\ensuremath{{S_\mathrm{T}^\text{min}}}\xspace = 5\TeV$. As \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace increases, the optimal point shifts to lower inclusive multiplicities and the corresponding \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace increases, reaching $(3,7.6\TeV)$ for $\ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace = 11\TeV$. The corresponding 95\% \CL upper limit curve and the theoretical cross section for the chosen benchmark point are shown in Fig.~\ref{fig:BH_optPoint}. The observed (expected) 95\% \CL lower limit on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace in this benchmark model can be read from this plot as the intersection of the theoretical curve with the observed (expected) 95\% \CL upper limit on the cross section, and is found to be 9.7 (9.7)\TeV.
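The optimization itself is a minimization over the grid of model-independent limit points; a minimal Python sketch (with hypothetical dictionaries keyed by the $(\ensuremath{{N^\mathrm{min}}}\xspace, \ensuremath{{S_\mathrm{T}^\text{min}}}\xspace)$ points):
\begin{verbatim}
# Pick the (Nmin, STmin) point that minimizes the expected limit on the
# model cross section, i.e. the expected sigma*A limit divided by the
# model acceptance at that point.
def optimal_point(expected_limits, acceptances):
    """Both arguments: {(nmin, stmin_tev): value} dictionaries."""
    return min(expected_limits,
               key=lambda p: expected_limits[p] / acceptances[p])
\end{verbatim}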
\begin{figure}[htb]
\centering
\includegraphics[height=0.5\textwidth]{Figure_008.pdf}
\caption{Example of a model-specific limit on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace for a semiclassical nonrotating BH model ({\textsc{BlackMax}}\xspace point B1) with $\MD = 4\TeV$ and $\ensuremath{{n_\mathrm{ED}}}\xspace = 6$, as a function of \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace. The 95\% \CL upper exclusion limit on the signal cross section for each \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace value is obtained at the optimal $(\ensuremath{{N^\mathrm{min}}}\xspace,\ensuremath{{S_\mathrm{T}^\text{min}}}\xspace)$ point, which ranges from $(7,5.0\TeV)$ for $\ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace = 5\TeV$ to $(3,7.6\TeV)$ for $\ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace = 11\TeV$.
Also shown with a dashed line are the theoretical cross sections corresponding to these optimal points. The inner (outer) band represents the $\pm 1$ ($\pm 2$) standard deviation uncertainty in the expected limit.}
\label{fig:BH_optPoint}
\end{figure}
We repeat the above procedure for all chosen benchmark scenarios of semiclassical BHs, listed in Tables~\ref{table:generator-blackMax} and \ref{table:generator-Charybdis}. The resulting observed limits on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace are shown in Figs.~\ref{fig:BlackMax_limit} and \ref{fig:Charybdis_limit}, for the {\textsc{BlackMax}}\xspace and \CHARYBDIS2 benchmarks, respectively. We also obtain similar limits on the SB mass for the set of SB model parameters we scanned. These limits are shown in Fig.~\ref{fig:SB} for a fixed string coupling $\ensuremath{{g_\mathrm{S}}}\xspace = 0.2$, as a function of the string scale \ensuremath{{M_\mathrm{S}}}\xspace (left plot), and for a fixed string scale $\ensuremath{{M_\mathrm{S}}}\xspace = 3.6\TeV$, as a function of the string coupling \ensuremath{{g_\mathrm{S}}}\xspace (right plot). The search excludes SB masses below 7.1--9.4\TeV, depending on the values of the string scale and coupling.
\begin{figure}[htb!]
\centering
\includegraphics[height=0.6\textwidth]{Figure_009.pdf}
\caption{The observed 95\% \CL lower limits on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace as a function of \MD at different $n$ for the models B1--B3 generated with {\textsc{BlackMax}}\xspace.}
\label{fig:BlackMax_limit}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[height=0.6\textwidth]{Figure_010.pdf}
\caption{The observed 95\% \CL lower limits on \ensuremath{{M_\mathrm{BH}^\text{min}}}\xspace as a function of \MD at different $n$ for the models C1--C6 generated with \CHARYBDIS2.}
\label{fig:Charybdis_limit}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.49\textwidth]{Figure_011-a.pdf}
\includegraphics[width=0.49\textwidth]{Figure_011-b.pdf}
\caption{The 95\% \CL lower limits on a string ball mass as a function of the string scale \ensuremath{{M_\mathrm{S}}}\xspace for a fixed value of the string coupling $\ensuremath{{g_\mathrm{S}}}\xspace = 0.2$ (left) and as a function of the string coupling \ensuremath{{g_\mathrm{S}}}\xspace for a fixed value of the string scale $\ensuremath{{M_\mathrm{S}}}\xspace = 3.6\TeV$ (right). The inner (outer) band represents the $\pm 1$ ($\pm 2$) standard deviation uncertainty in the expected limit. The area below the solid curve is excluded by this search.}
\label{fig:SB}
\end{figure}
For the sphaleron signal, the optimal $(\ensuremath{{N^\mathrm{min}}}\xspace,\ensuremath{{S_\mathrm{T}^\text{min}}}\xspace)$ point is also chosen by scanning for the lowest expected limit and is found
to be $(8,6.2\TeV)$ for $\ensuremath{{E_\mathrm{sph}}}\xspace = 9$ and 10\TeV, and $(9,5.6\TeV)$ for $\ensuremath{{E_\mathrm{sph}}}\xspace = 8\TeV$. The exclusion limit on the sphaleron cross section can then be converted into a limit on the PEF, defined in Section~\ref{sec:sphaleron-signal}. Following Ref.~\cite{Ellis2016}, we calculate the PEF limits for the nominal $\ensuremath{{E_\mathrm{sph}}}\xspace = 9\TeV$, as well as for the modified values of $\ensuremath{{E_\mathrm{sph}}}\xspace = 8$ and 10\TeV. The observed and expected 95\% \CL upper limits on the PEF are shown in Fig.~\ref{fig:PEF_limit}. The observed (expected) limit obtained for the nominal $\ensuremath{{E_\mathrm{sph}}}\xspace = 9\TeV$ is 0.021 (0.012), which is an order of magnitude more stringent than the limit obtained in Ref.~\cite{Ellis2016} based on the reinterpretation of the ATLAS result~\cite{ATLAS-inclusive13}.
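Since the signal cross section scales linearly with the PEF, the conversion is a simple rescaling. In sketch form (the numbers below are placeholders, not the values used in the analysis):
\begin{verbatim}
sigma_limit_fb = 0.20  # 95% CL upper limit at optimum (placeholder)
sigma_pef1_fb = 9.5    # predicted cross section, PEF = 1 (placeholder)
print(sigma_limit_fb / sigma_pef1_fb)  # upper limit on the PEF
\end{verbatim}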
\begin{figure}[htb!]
\centering
\includegraphics[width=0.6\textwidth]{Figure_012.pdf}
\caption{Observed (solid curve) and expected (dashed black curve) 95\% \CL upper limit on the pre-exponential factor PEF of the sphaleron production as a function of \ensuremath{{E_\mathrm{sph}}}\xspace. The inner (outer) band represents the $\pm 1$ ($\pm 2$) standard deviation uncertainty in the expected limit. The area above the solid curve is excluded by this search.}
\label{fig:PEF_limit}
\end{figure}
\section{Summary\label{s:summary}}
A search has been presented for generic signals of beyond the standard model physics resulting in energetic multi-object final states, such as would be produced by semiclassical black holes, string balls, and electroweak sphalerons. The search was based on proton-proton collision data at a center-of-mass energy of 13\TeV, collected with the CMS detector in 2016 and corresponding to an integrated luminosity of \ensuremath{35.9\fbinv}\xspace. The background, dominated by QCD multijet production, is determined solely from low-multiplicity samples in data. Comparing the distribution of the total transverse momentum \ensuremath{{S_\mathrm{T}}}\xspace of the final-state objects in data with that expected from the backgrounds, we set 95\% confidence level model-independent upper limits on the product of the production cross section and acceptance for such final states, as a function of the minimum \ensuremath{{S_\mathrm{T}}}\xspace for minimum final-state multiplicities between 3 and 11. These limits reach 0.08\unit{fb} at high \ensuremath{{S_\mathrm{T}}}\xspace thresholds. By calculating the acceptance values for benchmark black hole, string ball, and sphaleron signal models, we convert these model-independent limits into lower limits on the minimum semiclassical black hole mass and string ball mass. The limits extend as high as 10.1\TeV, thus improving significantly on previous results. We have also set the first experimental upper limit on the electroweak sphaleron pre-exponential factor of 0.021 for the sphaleron transition energy of 9\TeV.
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science - EOS" - be.h project n. 30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Programme and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
\section{The key scientific projects of SKA}
Some of the main puzzles of cosmology are the nature of dark matter
and dark energy, representing in total 95\% of the content of the Universe.
The dark energy is presently compatible within the uncertainties with
a cosmological constant, but it is paramount to determine with greater
precision whether its evolution with time is dynamical, which could be due
to a fifth element, a quintessence, and new physics. The tools for making
this diagnostic are the same as for many other dark energy probes, either
from the ground or in space, with Euclid: the BAO (Baryon Acoustic
Oscillations), playing the role of a standard ruler, measuring the expansion
at different redshifts, the Weak Lensing (WL), or Redshift Space Distortions
(RSD), measuring the density and amplitude of large-scale structures to
constrain the evolution of $\Omega$ and $\Lambda$. These tools will be
exploited with optical tracers, and the novelty of SKA is to use radio
tracers, and the HI-21cm line to identify galaxies. These tracers have
different biases than the optical ones, and both studies
are very complementary.
Optically, the massive galaxies are early-type gathered in galaxy clusters,
while the HI-rich galaxies are late-type in the field.
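To make the standard-ruler role of the BAO concrete, a short sketch of the BAO angular scale in a flat $\Lambda$CDM cosmology is given below; the fiducial parameter values are assumed for illustration only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0, Om, c, r_d = 67.4, 0.315, 299792.458, 147.0  # km/s/Mpc,-,km/s,Mpc

def D_C(z):                                # comoving distance [Mpc]
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + 1 - Om)
    return (c / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

theta_bao = r_d / D_C(0.6)                 # rad; D_M = D_C when flat
print(np.degrees(theta_bao))               # ~3.7 deg at z = 0.6
\end{verbatim}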
Another key project is to explore the Epoch of Reionization (EoR), likely
to extend from z=20 to 6. While present searches of galaxies and quasars at
z$>$6 already provide some clues, the intergalactic medium will be uniquely
explored by SKA with the redshifted 21cm-HI line, as it is now with
pathfinders and precursors. The large galaxy surveys made over the whole
available sky, thanks to the wide field of view, will be unique probes of
the large-scale structures, and of galaxy formation and
evolution.
Pulsars will be discovered in huge numbers with SKA, exploring the
whole Milky Way, while presently known pulsars are confined to the solar
neighborhood. Millisecond pulsars are extremely precise clocks, which can
be used to detect very long wavelength gravitational waves.
Strong gravity will be explored with pulsars and black holes.
Cosmic magnetism is another key project, and in particular the formation
of primordial magnetic fields will be tackled. Finally, the search for the
origin of life, the mapping of the protoplanetary disks, and the search
for pre-biotic molecules, will be carried out in synergy and complementarity
with ALMA at higher frequencies.
All key projects of SKA have been described in many whitepapers and
conferences \citep[e.g.,][]{Carilli2004,Carilli2015}.
\section{Cosmology and galaxies}
\subsection{Dark sector and new physics}
The state-of-the-art constraints on the dark energy and
the dark matter are obtained by combining all available data,
from the SNIa standard candles, i.e. the
Pantheon sample of 1048 SNIa between redshifts 0.01 $<$ z $<$ 2.3
\citep{Scolnic2018}, and the 207 SN sample from DES-3yr
\citep{Abbott2019}, with the BAO results from SDSS \citep{Alam2021}, and the
CMB data \citep{Planck2020}.
The equation of state of the dark energy can be written as the
pressure proportional to the density, $P = w\rho$, with $w$ negative,
and a possible variation $w(a) = w_0 + w_a (1-a)$, where $a$ is the scale
factor of the Universe, $a = 1/(1+z)$.
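For completeness, inserting this parametrization into the continuity equation $\mathrm{d}\rho/\rho = -3(1+w)\,\mathrm{d}a/a$ gives the standard dark-energy density evolution
\begin{equation}
\rho_{\rm DE}(a) = \rho_{\rm DE,0}\; a^{-3(1+w_0+w_a)}\, \exp\left[-3 w_a (1-a)\right] .
\end{equation}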
Since SNIa are difficult to observe at z$>$1, \citet{Inserra2021}
propose to use and calibrate superluminous supernovae (SLSNe), which make it
possible to go farther and faster. The first results are promising, putting
constraints in the $w_0$--$w_a$ diagram. With the enhanced precision acquired in the
recent years, some tension has grown between observations and the standard
$\Lambda$CDM model, suggesting the necessity of new physics
\citep{Smith2020}. In particular the main tension occurs between the Hubble
constant H$_0$= 73.48$\pm$1.66 km/s/Mpc
measured locally with Cepheids or other indicators
\citep{Riess2018}, and the Planck determination of 67.4$\pm$ 0.5 km/s/Mpc.
The discrepancy reaches 3.7$\sigma$.
In radioastronomy, powerful masers (H$_2$O, OH...) make it possible to observe
with VLBI the centers of external galaxies and their rotating circum-nuclear
disks; measuring the velocities through the Doppler effect and monitoring
through VLBI the gradient of maser position with velocity
results in a precise distance indicator, as shown
beautifully by the prototypical example of NGC~4258 \citep{Greenhill1995}.
SKA will measure many more masers around AGN at various redshifts,
and can give a complementary approach to the problem. Already
\citet{Pesce2020} with megamasers confirm H$_0$ = 73.9$\pm$3 km/s/Mpc.
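The geometry behind the megamaser distance can be sketched in a few lines of code; the numbers below are illustrative NGC~4258-like values, not the published disk fit.
\begin{verbatim}
v_orb = 1.1e6             # disk maser orbital speed [m/s] (assumed)
accel = 8.7e3 / 3.156e7   # systemic velocity drift [m/s^2] (assumed)
theta = 4.0e-3 / 206265.0 # angular radius of the orbit [rad] (assumed)

R = v_orb**2 / accel      # physical radius from a = v^2 / R
D = R / theta             # geometric distance
print(D / 3.086e22, "Mpc")  # ~7 Mpc, the order of the NGC 4258 value
\end{verbatim}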
The most recent BAO and RSD results, including 147 000 quasars
\citep{Ata2018}, and Ly$\alpha$ absorption surveys \citep{Bautista2017},
are compatible with a flat $\Lambda$CDM cosmology, and
perfectly compatible with the Planck cosmological parameters
\citep{Bautista2021}.
SKA-1 surveys of galaxies in the HI-21cm line will
be complementary and competitive with the optical ones from the ground
and in space (Euclid). Surveys where galaxies are detected individually will
be most useful for galaxy formation and evolution; they will detect
4 million galaxies up to z=0.2 in the all-sky survey, 2 million
galaxies up to z=0.6 in the wide field, and 0.4 million in the deep field
survey up to z=0.8 (of 50 square degrees area).
For cosmology purposes, HI intensity mapping over 30 000
square degrees, covering redshifts up to 3, will be
more competitive \citep{Maartens2015}. Weak lensing in radio surveys up
to redshift z=6 will consider a billion objects. One of the strong
advantages of SKA-1
is the much larger volumes sampled, with respect to all other probes
(Euclid, DESI, BOSS, Nancy-Grace-Roman...).
The second phase SKA-2 will surpass all.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth,clip]{combes_S07_fig1}
\caption{{\bf Left:} Radio continuum emission from ESO 137-006 detected by MeerKAT at 1030 MHz. Three collimated synchrotron threads (CSTs) between the
radio lobes are indicated.
The Sobel filter has been applied to the image, to better show these features
\citep{Ramatsoku2020}. The upper panel shows
that the galaxy is entering the Norma cluster, and its X-ray
gas atmosphere (in red).
{\bf Right:} Some radio images of nearby galaxies from the
LOFAR Two-metre Sky Survey (LoTSS) \citep{Shimwell2019}.}
\label{fig1}
\end{figure}
\subsection{Continuum and HI surveys}
Simulations have been performed of wide-field images of the
radio-continuum sky with SKA, detecting both the very numerous star-forming
galaxies, with synchrotron emission coming from supernovae, and the
stronger but less numerous radio AGN, of FRI and FRII types, the latter
being even less numerous but stronger \citep{Jackson2004}. The AGN radio jets can
be used easily as standard rods, constraining the cosmological parameters,
by themselves, and also through weak lensing.
For the all-sky survey at 1.4 GHz, in 2yrs of integration,
SKA1 will achieve 3 $\mu$Jy rms, and detect $\sim$4
galaxies per arcmin$^2$ (at more than 10$\sigma$),
\citep{Jarvis2015}. The survey will be made with an
excellent quality circular Gaussian beam from about 0.6 to 100'',
with almost uniform sky coverage of 3$\pi$ sr.
This will provide a total of 0.5 billion radio sources,
yielding weak lensing and Integrated Sachs Wolfe (WL, ISW) diagnostics.
For the wide-field (5000 deg$^2$), with 2 $\mu$Jy rms $\sim$6 galaxies
per arcmin$^2$ are expected (at more than 10$\sigma$).
For the deep-field (50deg$^2$) with 0.1 $\mu$Jy rms, $\sim$20 galaxies
per arcmin$^2$ will be detected, at more than 10$\sigma$.
Figure \ref{fig1} shows some examples of radio images from the
LOFAR Two-metre Sky Survey (LoTSS), and also how the precursor MeerKAT
has discovered new features in typical radio-jets: collimated synchrotron
threads, linking the radio lobes from the sides, in parallel to the radio jets,
in ESO 137-006. This galaxy is moving inside the wind of the intra-cluster gas,
entering the Norma cluster. The radio lobes are distorted and bent, and the
threads look like relics of earlier radio jets, from previous episodes
of ejection.
In HI-21cm surveys, SKA-1 will allow the imaging of a substantial number
of high-redshift galaxies for the first time \citep{Staveley2015}.
While the present instruments are restricted to detect HI in individual
galaxies only to the local Universe up to z=0.1, the very deep survey
will permit the detection of galaxies at z=2, and even higher for SKA-2.
A glimpse of what intensities could be detected is given by recent
stacking analyses, which detect remote galaxies only ``globally''.
With the GMRT deep (117h) field, \citet{Bera2019} have stacked 445 blue galaxies
between 0.2 $<$ z $<$ 0.4, and obtained a
detection at 7$\sigma$ of M(HI) = $5\times10^{9}$ M$_\odot$.
Stacking the continuum to derive the star formation rate, they
obtain a depletion time of $\sim$9 Gyr.
From the GAMA survey, imaged with DINGO-VLA,
\citet{Chen2021} have stacked HI cubelets on a sample of 3622 galaxies,
and obtained a clear detection, with a FWHM of 60 km/s.
\section{Reionization}
Intensity mapping is the only technique able to determine
the global quantities searched for in the EoR.
Continuum foregrounds are typically 1000 times brighter than the
expected cosmological signal. The instrumental responses to bright foregrounds,
with extended and multiple sidelobes forming a sea of confused
signals that depend on their location in the field of view, are a challenge
to understand and subtract away \citep{Santos2015}.
The foregrounds to be eliminated produce a perturbing signal,
which is not necessarily spectrally smooth
\citep{Switzer2015}.
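A toy illustration of the spectral-smoothness idea (not an actual pipeline) is to fit a low-order polynomial in log-frequency along each line of sight and keep the residual; all values below are mock numbers.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(100e6, 200e6, 256)           # channels [Hz]
foreground = 300.0 * (nu / 150e6) ** -2.7     # smooth synchrotron [K]
signal = 1e-3 * rng.standard_normal(nu.size)  # mock ~mK signal

data = foreground + signal
coeffs = np.polyfit(np.log(nu), np.log(data), deg=3)
residual = data - np.exp(np.polyval(coeffs, np.log(nu)))  # ~ signal
\end{verbatim}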
The LOFAR key project on EoR has observed more than 1000 hours
on selected clean fields, with the least possible foreground emission.
However, even though the nominal sensitivity reached would be sufficient
to detect the expected signal easily, the confusion by foregrounds has so
far prevented any conclusion. Controlling the calibration, and cleaning for the
sidelobes down to the low intensity level required is a long process.
While in 2017, only 0.5\% of data were understood and used \citep{Patil2017},
recently up to 5\% of data has been understood and cleaned,
resulting in an upper limit of the EoR signal two orders of magnitude above
the expected signal.
There still remains noise that could be due to residual emission
from foreground sources or diffuse emission far away from the phase centre,
polarization leakage, chromatic calibration errors, the ionosphere, or
low-level radiofrequency interference \citep{Mertens2020}.
\section{Pulsars, Cosmic magnetism}
\subsection{Pulsars and gravitational waves}
The large number of pulsars to be discovered by SKA, in combination
with its exceptional timing precision, will revolutionize the field of
pulsar astrophysics. SKA will provide a complete census of pulsars in
both the Galaxy and in Galactic globular clusters
\citep{Cordes2004}. In the Milky Way, about 30 000 pulsars should
be present, and 10 000 millisecond ones. Maybe 20 000 pulsars will
be detectable in the whole Galaxy (while today we know only pulsars in the
solar neighborhood).
Pulsars and compact objects will allow unique tests of the
strong field limit of relativistic gravity and the equation of state at extreme densities.
Through monitoring these pulsars, which are extremely precise clocks
(with a relative precision up to 10$^{-15}$), gravitational waves of long
wavelengths, of the order of light-years, could be detected, and pulsars
are the only way to do so.
The hope is to detect the merger of super-massive black holes, and also
the primordial waves, signature of inflation
\citep{Janssen2015}. A first preliminary detection
has been claimed with present telescopes \citep{Arzoumanian2020}.
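The smoking-gun signature searched for by pulsar timing arrays is the Hellings--Downs angular correlation between pulsar pairs, which can be evaluated directly from its standard closed form (sketch below, for distinct pulsars):
\begin{verbatim}
import numpy as np

def hellings_downs(theta_rad):
    # Expected timing-residual correlation for an isotropic
    # gravitational-wave background, angle theta between pulsars.
    x = (1.0 - np.cos(theta_rad)) / 2.0
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

for deg in (10, 60, 90, 120, 180):
    print(deg, hellings_downs(np.radians(deg)))
\end{verbatim}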
\subsection{Fast Radio Bursts}
Since their discovery by \citet{Lorimer2007}, our knowledge of
Fast Radio Bursts (FRB) has grown considerably: they have been detected in
large numbers, especially with the wide-field instrument CHIME:
540 are known, and the occurrence has been estimated at 800 per day on the
whole sky \citep{CHIME-FRB2019}.
With SKA-MID, it will be possible to detect 100 FRB/yr,
with precise localisation \citep{Keane2018}.
The nature of the FRB phenomenon is not yet clarified, although
many repeaters have been detected, and one FRB has been associated
with a Galactic magnetar, SGR 1935+2154 \citep{Bochenek2020}.
Due to their surface magnetic fields larger than 10$^{14}$ gauss,
magnetars are the source of high-energy phenomena, in which their magnetic
energy decays. A wealth of theories has been proposed to explain
the phenomena \citep{Platts2019}.
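The defining observable of an FRB is its frequency-dependent arrival time; a minimal sketch of the cold-plasma dispersion delay is given below (the dispersion measure is only roughly that of the Lorimer burst).
\begin{verbatim}
K_DM = 4.149   # ms GHz^2 cm^3 / pc

def delay_ms(dm, nu_ghz):   # delay relative to infinite frequency
    return K_DM * dm * nu_ghz ** -2

dm = 375.0                  # pc/cm^3, roughly the Lorimer burst
print(delay_ms(dm, 1.2) - delay_ms(dm, 1.5))  # sweep across band [ms]
\end{verbatim}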
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth,clip]{combes_S07_fig2}
\caption{Stacking radio and X-ray maps, around physically nearby pairs
of luminous red galaxies (LRG).
From left to right the columns are GLEAM 154 MHz,
118 MHz, 88 MHz, OVRO-LWA 73 MHz, and ROSAT combined band 0.1 -2.4 keV.
The colour bars all have
units of temperature in K, except the ROSAT maps which are in
counts per second per arcmin$^2$ $\times10^{4}$.
From \citet{Vernstrom2021}.}
\label{fig2}
\end{figure}
\subsection{Magnetic Fields}
Polarisation of radio emission and Faraday rotation have been used
intensively to determine the intensity and orientation of the magnetic
field in spiral galaxies, at various depths according
to the various wavelengths: either from the halo, or the disk,
and different distributions have been obtained with respect
to the spiral arms \citep{Kierdorf2020}. Turbulence due to star formation
in spiral arms paradoxically reduces alignment, and frequent field
reversals in the vertical direction contribute to distortions
that are not yet well understood. LOFAR has been used in combination with VLA
to determine the spectrum of the emission, separate the thermal and
non-thermal components, and measure the magnetic field strength and the
cosmic ray electron losses \citep{Mulcahy2018}.
The all-sky survey of Faraday rotation will measure the intergalactic
magnetic field, as well as fields inside galaxies. The mechanisms to
generate the field are not yet settled, ranging from inflation and phase
transitions in the early Universe to batteries amplifying the
seeds. Normally the field is frozen in the ionized gas, but
should dilute away in the expansion. When structures collapse, the field is amplified
again \citep{Johnston2015}.
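In practice, the rotation measure is obtained from the polarization angle $\chi = \chi_0 + \mathrm{RM}\,\lambda^2$; a toy fit on noiseless mock data (ignoring $n\pi$ ambiguities, with assumed input values) looks like:
\begin{verbatim}
import numpy as np

lam2 = (3.0e8 / np.linspace(1.0e9, 2.0e9, 8)) ** 2  # lambda^2 [m^2]
RM_true, chi0 = 35.0, 0.3        # rad/m^2, rad (assumed)
chi = chi0 + RM_true * lam2      # mock polarization angles

RM_fit, chi0_fit = np.polyfit(lam2, chi, 1)
print(RM_fit)                    # recovers ~35 rad/m^2
\end{verbatim}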
Searches have been done in diffuse filaments connecting clusters,
at the cosmic web (15 Mpc) scale, combining
X-ray hot gas data from eROSITA with radio data from
ASKAP/EMU Early Science \citep{Reiprich2021}.
Missing baryons are searched for by studying
the warm-hot gas in cluster outskirts and filaments.
The bridge between two clusters is detected; it may contain known galaxy
groups, but these do not account for all the emission.
There are several clumps of warm gas falling into the clusters,
compatible with what is observed in simulations.
LOFAR has also detected synchrotron emission in filaments
between merging galaxies, with possible shocks re-accelerating
the electrons \citep{Govoni2019, Botteon2020}, but these were
only on short scales. Now with GLEAM (the MWA survey), it is possible
to search for longer filaments (see Figure \ref{fig2}).
These are traced by luminous red galaxies (LRGs), which are
massive early-types residing in the centers of galaxy clusters or groups.
The first large-scale filament detection has revealed a magnetic
field of 30-60 nG, of intensity higher than previously believed,
with electrons subject to more efficient shock acceleration
\citep{Vernstrom2021}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\textwidth,clip]{combes_S07_fig3}
\caption{Cartoon of a protoplanetary disk (HD 100546) showing the regions where
CH$_3$OH is detected and the different physical and
chemical mechanisms proposed \citep{Booth2021}.}
\label{fig3}
\end{figure}
\section{Cradle for life}
ALMA has made a breakthrough in the domain of the formation of planets
and protoplanetary disks, by imaging resonant rings and gaps with superb
resolution \citep{Andrews2020}.
Disks are formed first with gas and small-size dust grains,
the latter agglomerating
progressively into mm- and cm-sized grains before becoming planetesimals.
These grains emit at longer and longer wavelengths, and SKA-1 will
be the preferred instrument to detect cm- to m-sized dust. At high
resolution with a 40 mas beam, the nearest systems will be mapped
with 4 AU resolution, sufficient to determine the snow line
\citep{Hoare2015}.
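The quoted figure is a direct consequence of the definition of the parsec (1 AU at 1 pc subtends 1 arcsec); assuming a disk at $\sim$100 pc:
\begin{verbatim}
theta_mas, d_pc = 40.0, 100.0          # beam and assumed distance
print(theta_mas * 1e-3 * d_pc, "au")   # 40 mas x 100 pc -> 4.0 au
\end{verbatim}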
Large exoplanets, of Jupiter-size, could be detected with their
magnetic fields \citep{Zarka2018}.
In synergy with ALMA, the detection of complex organic molecules (COM)
could be carried out, such as methanol CH$_3$OH, with deuterated
species CH$_2$DOH, methanethiol CH$_3$SH, formamide NH$_2$CHO, and
heavier pre-biotic molecules in Band 5, such as amino acids and sugars. In
1000 h, SKA1-mid could clearly detect $\alpha$-alanine, with a hundred
lines, for a column density of 10$^{13}$ cm$^{-2}$
\citep{Hoare2015}.
Methanol has been detected in protoplanetary disks,
and is thought to come from hydrogenation of CO on icy grains
(cf Figure \ref{fig3}). Complex organic molecules have
now been detected, which are key to the formation of amino acids and
pre-biotic molecules \citep{Booth2021}. These COM cannot form in situ,
but must come from ices formed previously in dark interstellar clouds.
\section{Conclusions}
SKA-1 will help to tackle the main puzzles of
cosmology: the nature of dark matter and dark energy,
by using the common tools (BAO, WL, RSD) with
high precision, and with tracers with different
biases than optical surveys (radio continuum, HI in galaxies).
With extragalactic masers, it will be possible to bring a
complementary constraint on the H$_0$ tension, which may be pointing to
new physics.
With redshifted HI-21cm line, SKA will have a unique contribution
to the Epoch of Reionization, and the birth of
the first galaxies. With the timing of millisecond pulsars,
SKA will make a breakthrough in probing strong gravity,
and detecting gravitational waves in the very low frequency regime
(nanoHz), to search for primordial waves.
Our knowledge of magnetogenesis will improve considerably. SKA will work in synergy with ALMA to determine the physics
of protoplanetary disks, and detect pre-biotic molecules.
All these key projects have begun to be tackled at very low
frequency with the NenuFAR pathfinder in Nan\c{c}ay, where the first
large programs are
ES1: Cosmic Dawn;
ES2: Exoplanets \& Stars;
ES3: Pulsars;
ES4: Transients;
ES5: Fast Radio Bursts;
ES6: Planetary Lightning;
ES7: Joint Jupiter studies;
ES8: Cluster of galaxies \& AGNs;
and ES9: Cluster Filament \& Cosmic Magnetism.
\bibliographystyle{aa}
\section{Introduction}
The formation of molecular hydrogen in the interstellar medium (ISM) is a topic of ongoing debate (see the recent review \cite{wakelam2017}). As molecular hydrogen is the most abundant molecule in the universe and plays a key role in many astrophysical processes, a proper understanding of the processes that lead to its formation is of great interest. A number of potential formation processes have already been explored. While gas phase routes to H$_2$ formation have been found to be inefficient, formation on interstellar dust grains has been identified as one possible mechanism \citep{oort1946,gould1963}. Another possible route to molecular hydrogen formation is on interstellar polycyclic aromatic hydrocarbons (PAHs) \citep{bauschlicher1998,hirama2004,lepage2009,mennella2012,boschman2012,thrower2012}. A variety of objects inside and outside our galaxy exhibit spectra crowded with unidentified lines. These lines can be seen in absorption in the optical (diffuse interstellar bands: DIBS) and in emission in the infrared (aromatic infrared bands: AIB). Ubiquitous interstellar AIB are now commonly considered to be carried by PAHs. These PAHs are typically at least partially hydrogenated \citep{schutte1993,bernstein1996}. Assuming DIBS to be due to PAH-based species as well, it is most likely that part of them are present in protonated form or as superhydrogenated cations, which feature the observed transitions in the visible \citep{lepage1997,snow1998,hammonds2009}.
Atomic hydrogen impacting on a PAH may undergo an attachment reaction and become attached to the molecule. Sequences of these addition reactions, with $s$ additional H atoms, of the type
\begin{equation}
[\mbox{PAH}+s\mbox{H}]+\mbox{H}\rightarrow [\mbox{PAH}+(s+1)\mbox{H}], \;\; \Delta M=+1
\label{eq:H-attachment}
\end{equation}
\noindent
are referred to as hydrogenation sequences; each step leaves the PAH in a state of superhydrogenation and increases its mass $M$ by 1. Recent experimental and theoretical research on trapped coronene radical cations C$_{24}$H$_{12}^+$ revealed that in the gas phase and at $T=300$ K, hydrogenation proceeds through a specific sequence of well-defined atomic sites. Reaction barriers and binding energies lead to odd--even oscillations in the observed superhydrogenation states, and magic numbers of particularly high intensity for the attachment of $n=$5, 11, and 17 extra H atoms \citep{boschman2012,cazaux2016}.
Superhydrogenation dramatically alters the response of neutral and ionic gas-phase PAHs in various astrochemically relevant interaction processes.
Attachment of small numbers of H atoms to coronene cations can for instance quench photoionization-induced H loss from a C$_{24}$H$_{12}^+$ precursor cation \citep{reitsma2014,reitsma2015}. For superhydrogenation of neutral pyrene molecules,
an opposite effect was observed. The C-backbone is weakened and fragmentation upon ion collisions or photoionization is increased \citep{wolf2016,gatchell2015}. Attachment of a single H atom to a PAH radical cation has a dramatic influence on its IR spectrum \citep{knorke2009} and substantially decreases the HOMO-LUMO gap \citep{pathak2008}.
\begin{figure*}
\includegraphics[width=170mm]{sketch.jpg}
\caption{Schematic representation of the reaction sequences for subsequent H (top row) or D (bottom row network) interactions with a coronene cation. H atoms are marked in blue and D atoms in red. Only a selection of possible attachment (+H, +D) and abstraction (+H-H$_2$, +D-D$_2$, +D-HD) processes are indicated. The third atom is attached to an inner edge site, as indicated by recent IR spectroscopy data \citep{schlatholter2018} and not an outer edge site as previously predicted \citep{cazaux2016}. The masses of systems that possess an odd total number of H+D atoms are given in bold letters.}
\label{fig:sketch}
\end{figure*}
In 1998 a second type of H reaction with PAH molecules was proposed \citep{bauschlicher1998}, which was experimentally confirmed in 2012 through hydrogenation experiments on supported PAH thin films by Mennella et al. \citep{mennella2012}. In these so-called direct abstraction reactions of the Eley-Rideal type
\begin{equation}
[\mbox{PAH}+s\mbox{H}]+\mbox{H}\rightarrow [\mbox{PAH}+(s-1)\mbox{H}]+\mbox{H}_2, \;\; \Delta M=-1
\label{eq:H2-abstraction}
\end{equation}
\noindent
the incoming H atom does not get bound to the PAH molecule, but rather reacts with an H atom already present in the hydrogenated PAH, to directly desorb as an H$_2$ molecule. In these experiments on solid PAH films the abstraction channel is determined to be more than an order of magnitude weaker than H attachment, with the reaction cross sections for abstraction and attachment being respectively 0.06 \AA$^2$ and 1.1 \AA$^2$ corresponding to a ratio between abstraction and attachment of $\approx 1:20$ \citep{mennella2012}.
In the following, we study H abstraction reactions on gas phase coronene cations, C$_{24}$H$_{12}^+$. It should be noted that coronene is not a major species in the interstellar medium \citep{hirama2004}, but it is used as one of the prototypical PAHs in related astrolaboratory research \citep{boschman2012,cazaux2016,rauls2008,boschman2015,jochims1994,ling1998} because it is fairly large and has a compact shape, making it relatively easy to work with, and it is commercially available in large quantities. By comparing the mass spectra obtained from C$_{24}$H$_{12}^+$ exposure to $T=300$\;K H and D beams, respectively, direct evidence for the occurrence of abstraction reactions is observed. The modeling of the measured mass spectra with a time-dependent rate equation model indicates a relative cross section for abstraction that is about an order of magnitude larger than previously thought.
\section{Experiment}
\subsection{Concept of the experiment}
The hydrogenation of coronene cations is predominantly determined by the alternating heights of the energy barriers for hydrogen addition. Even-numbered superhydrogenation states can be subject to barrierless hydrogenation, whereas hydrogen attachment to odd-numbered superhydrogenation states involves a reaction barrier. As a result, all (closed shell) odd-numbered superhydrogenation states are more stable than the (radical) even-numbered states and occur significantly more often. A typical mass spectrum consists of dominating peaks at $M=301$, 303, 305 etc. On top of that, ``magic'' stages of superhydrogenation are observed for the attachment of 5, 11, and 17 H atoms, corresponding to hydrogenation stages which have particularly high binding energies \citep{cazaux2016}.
In principle, an incoming H atom can interact with every C site in a coronene cation. As a first step, the attachment on one of the outer edge positions is energetically most favorable and thus most likely \citep{mennella2012,cazaux2016}. As a result, two H atoms are attached to a single carbon atom, which we shall refer to as the position being doubly occupied. Figure 1 illustrates the initial hydrogenation and abstraction steps for a coronene cation. In the top row, the case of H exposure is sketched. The first H attachment leads to a doubly occupied outer edge site and results in a mass increase by one unit into $M=301$. Previous experimental studies have shown that this process quickly transfers the entire C$_{24}$H$_{12}^+$ population into C$_{24}$H$_{13}^+$ \citep{boschman2012}. This implies that the probability of abstraction from C$_{24}$H$_{13}^+$ is negligibly small. Attachment of a second H atom leads to double occupation of the adjacent outer edge site and a molecular mass of $M=302$. From here on further hydrogenation competes with abstraction, if an impinging H atom impacts on a previously created doubly occupied site and undergoes an Eley-Rideal reaction \citep{mennella2012} with one of the H atoms attached to the site. This leads to the release of a neutral H$_2$ molecule which corresponds to the net loss of an H atom by the molecule (see eq. 2).
From a mass spectrometric perspective, it is important to realize that since hydrogenation and abstraction shift the mass of a superhydrogenated coronene cation by +1 and -1, respectively, it is not possible to establish whether a C$_{24}$H$_{12+n}^+$ originates from C$_{24}$H$_{12+(n-1)}^+$ via H attachment or from C$_{24}$H$_{12+(n+1)}^+$ via H abstraction. This explains why abstraction reactions remained obscured thus far in gas phase hydrogenation experiments. Furthermore, as abstraction counteracts the mass shift towards higher masses driven by hydrogenation, the rates for H addition might well be underestimated.
In order to overcome this problem and quantify the relative contribution of abstraction reactions on gas-phase coronene cations, the cations can be exposed to atomic D ($^2$H) rather than H ($^1$H). Addition and abstraction of atomic D then change the mass of the molecular precursor by +2 and -2, respectively. Except for the twice as large step size in mass, this yields a spectrum similar to that for hydrogen. The doubly occupied sites, however, are initially occupied by an H and a D atom, i.e. the abstraction reaction can also involve an H atom, leading to a mass change of only -1. For our prototypical system, coronene with an initial mass of 300, the appearance of molecular ions with odd mass numbers is a direct signature of such an abstraction. More specifically, for a coronene cation C$_{24}$H$_{(12-n)}$\,D$_m^+$ that has lost $n$ of its initial H atoms by HD abstraction and contains $m$ additional D atoms, the following generic reactions need to be considered:
\begin{equation}
\mbox{C}_{24}\,\mbox{H}_{(12-n)}\,\mbox{D}_m^+ \; +\; \mbox{D} \; \longrightarrow \left \lbrace
\begin{array}{lr}
\mbox{C}_{24}\,\mbox{H}_{(12-n)}\,\mbox{D}_{(m+1)}^+ & \Delta M=+2 \\ \\
\mbox{C}_{24}\,\mbox{H}_{(12-n)}\,\mbox{D}_{(m-1)}^+ \;+\; \mbox{D}_2 & \Delta M=-2 \\ \\
\mbox{C}_{24}\,\mbox{H}_{(12-n-1)}\,\mbox{D}_m^+ \;+\; \mbox{HD} & \Delta M=-1
\end{array}
\right.
\label{eq:HD-abstraction}
\end{equation}
\noindent
The reaction network driven by thermal D exposure described by eq.\ref{eq:HD-abstraction} is schematically illustrated in the bottom part of Figure 1. As for H, it is assumed that the singly hydrogenated cation is not undergoing significant abstraction reactions. In the figure H atoms are marked in blue and D atoms in red. From the figure and eq. \ref{eq:HD-abstraction} it is clear that for attachment and abstraction of a D atom the systems changes mass in steps of $\pm 2$ and stays in the same row. Loss of an H atom (HD abstraction) moves the molecular system one row down. The rows are characterized by the number of H atoms removed from the precursor coronene cation. The rows with odd numbers of H atoms abstracted correspond to odd-mass cations and are a direct signature of abstraction which allow us to determine both attachment and abstraction cross sections and barriers.
\subsection{Experimental implementation}
The coronene radical cations were produced by means of electrospray ionization (ESI) from a coronene solution in methanol. Admixture of AgNO$_3$ to the solution facilitates charge transfer from C$_{24}$H$_{12}$ to Ag$^+$. The C$_{24}$H$_{12}^+$ beam generated by the ESI source was then phase space compressed in a RF ion funnel and mass selected in a RF quadrupole mass filter. The ions were then trapped in a 3D RF ion trap at ambient temperature \citep{bari2011,egorov2016}. Note, that the substantial binding energy of atomic hydrogen on coronene cations (2 - 3.5 eV \citep{cazaux2016}) is deposited into the molecular system with every attachment event. At the same time, the system is subject to cooling by photon emission and fragmentation processes. The resulting internal excitation depends on the experimental conditions but only marginally on the initial temperature, as discussed elsewhere \citep{rapacioli2018}.
Molecular hydrogen or deuterium gas was dissociated in a Slevin-type discharge source operated at an RF frequency of 27 MHz \citep{hoekstra1991,boschman2012,reitsma2014}. The gas was cooled through collisions with a water-cooled sleeve and guided through Teflon tubes into the RF trap holding the C$_{24}$H$_{12}^+$ target. For both hydrogen and deuterium exposures, exposure times were set to 0.15, 0.5, 1, 3, 6, 9, 12, 15, and 40\;s in order to get broad view of the resulting superhydrogenation states. All hydrogenation and deuteration measurements were performed under otherwise identical experimental conditions.
Before the hydrogenation experiments, a reference measurement was made of the pristine coronene cation sample. The resulting mass spectrum can be found in the top panel of Figure 2. The main feature of the mass spectrum is the C$_{24}$H$_{12}^+$ precursor ion with $M=300$\;amu. A weak peak at $M=301$ is due to $^{12}$C$_{23}$$^{13}$C H$_{12}^+$, i.e., to the naturally occurring $^{13}$C isotope. In order to correct for this peak in the analysis of subsequent measurements, the ratio between the main coronene mass peak and this isotope peak was calculated. This ratio was used during calculations on all mass peaks in the subsequent measurements to correct for the presence of this isotope peak. It is of note that the isotopic contribution is much less than its natural fraction of almost 25\%, due to the mass filtering by the RF quadrupole. Tighter filtering starts to reduce the number of C$_{24}$H$_{12}^+$ cations of $M=300$ as well, thereby hampering experiments with sufficient statistics.
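As a simple consistency check (illustrative only), the quoted natural fraction follows from the binomial combination of the $\sim$1.1\% $^{13}$C abundance over the 24 carbon atoms of coronene:
\begin{verbatim}
f13 = 0.0107                  # natural 13C abundance
p = 1.0 - (1.0 - f13) ** 24   # >= one 13C among 24 carbon atoms
print(p)                      # ~0.23, i.e. "almost 25%"
\end{verbatim}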
\begin{figure*}
\includegraphics[width=14cm]{spectra.png}
\caption{Top panel: Mass spectrum of the coronene radical precursor C$_{24}$H$_{12}^+$ featuring a small contamination of $^{12}$C$_{23}$$^{13}$C$_1$H$_{12}^+$ (see text); Left column: Evolution of the superhydrogenation pattern as a function of H exposure time t$_{exp}$. Right column: Evolution of the superhydrogenation pattern as a function of D exposure time t$_{exp}$. All distributions are normalized with respect to the total peak integral.}
\label{fig:spectra}
\end{figure*}
\subsection{Results}
The left panel in fig. \ref{fig:spectra} shows the evolution of the mass spectrum with increasing H exposure time. The pressure in the RF-trap chamber was at $p=1.5\times10^{-6}$ mbar. Already after an H exposure time $t_{exp}=0.15$ s, more than half of the trapped ions are in the singly superhydrogenated state ($M=301$, +H). For $t_{exp}=1$ s, almost the entire trap content is singly superhydrogenated and a small feature due to triple superhydrogenation emerges ($M=303$, +3H). With increasing $t_{exp}$, the ratio between +H and +3H shifts towards the latter one and at $t_{exp}=6$ s, the next odd superhydrogenation appears ($M=305$, +5H). At an exposure time of $t_{exp}=9$ s, first traces of 7-fold superhydrogenation show up ($M=307$, +7H). In between the odd-mass peaks, the weak intensity at even mass numbers is mostly due to the presence of $^{13}$C in the precursor ion. The evolution towards the expected superhydrogenation pattern with its pronounced odd-even oscillation is evident. As discussed in the introduction, for atomic H exposure H abstraction remains hidden in the mass spectra as it leads to formation of molecular cations of identical masses.
The right panel of fig. \ref{fig:spectra} shows the corresponding results for D exposure at otherwise identical conditions. For the shortest exposure time of 0.15\ s, results for H and D exposure only differ in mass shift. In both cases, more than half of the C$_{24}$H$_{12}^+$ population has been transferred to the singly superhydrogenated or superdeuterated state. As discussed in the introduction, abstraction does not play a role in this step.
For longer exposure times the deuteration spectra are much more complex than the hydrogenation ones. The generic trends, and especially the additional information contained in the more complex deuterium spectra, will be illustrated for the cases of 1 and 3\ s exposure.
For exposure times longer than 1 s, HD abstraction occurs, resulting in the appearance of peaks in between the ones corresponding to the attachment of 1 and 3 D atoms at $M=302$ and $M=306$, respectively. For 1 s exposure, peaks at $M=303$ and $M=304$ are clearly visible. As illustrated in figure \ref{fig:sketch}, the production of $M=303$ requires the subsequent attachment of 2 D atoms followed by an HD abstraction event. The peak at $M=304$ requires the attachment of 3 D atoms and 2 HD abstraction events.
For $t_{exp}=3$ s, hydrogenation of the coronene cations shows singly hydrogenated species at $M=301$, as well as triply hydrogenated species at $M=303$. Deuteration of coronene cations, on the other hand, leads to the presence of singly and triply deuterated species at $M=302$ and $M=306$, respectively. However, many intermediate masses can be observed, which result from abstraction reactions, such as the masses $M=303, 304$, and 305 (+2D-H, +3D-2H, and +4D-3H, respectively). With further increase of $t_{exp}$, the hydrogenation pattern continues to evolve according to the same principle: new odd superdeuteration states appear (masses $M=310$, +5D) and subsequently become accompanied by higher mass species with higher D content and lower H content. Note that the labeling in the right panel of fig.~\ref{fig:spectra} is not complete and only features the most straightforward sequences leading to odd total numbers of hydrogen and deuterium atoms involved. For the longer exposure times of $t_{exp}=6$ s and $t_{exp}=9$ s, coronene cations with 5 and then with 7 extra deuterium atoms can be seen, as well as intermediate masses due to abstraction.
\begin{figure*}
\includegraphics[width=12cm]{deuterium_yields.jpg}
\caption{Ion yields for mass numbers $M=300$--307\;Da as a function of D exposure time. Solid data points: experimental data (red: masses resulting from pure D attachment; blue: masses that necessarily involve H abstraction). Dashed lines: simulation results obtained with abstraction cross sections for graphene. Solid lines: modified abstraction barriers (see text). The experimental error bars reflect uncertainties
from the data treatment: background subtraction, Gaussian fitting, as well as the presence of a small contamination of $^{13}$C that we could not discriminate.}
\label{fig:deuterium_yields}
\end{figure*}
To conclude, for deuterated coronene the masses of stable superhydrogenation states follow the equation:
\begin{equation}
M_{\mathrm{stable}}=300+2m-n, \qquad m-n=1,3,5,\ldots
\end{equation}
Here $m$ represents the number of D atoms attached to the molecule, and $n$ represents the number of H atoms that have been abstracted from the molecule.
For the next section it is important to realize that the composition of a stable superhydrogenation state defines the possible subsequent abstraction reactions. For instance, after HD abstraction from a deuterated site, a subsequent D attachment leaves the site doubly occupied with D atoms and accordingly, as a next step only D$_2$ abstraction is possible. In general the relative importance of D$_2$ abstraction will increase in the course of the attachment/abstraction sequence.
\section{Kinetic model}
\subsection{Assumptions}
In order to extract quantitative information on reaction cross sections and barriers from the experimental data, attachment and abstraction sequences were described with a rate-equation model.
To this end, the time-evolution of each of the D-exposure peaks in the experimental data from fig. \ref{fig:spectra} up to $M=309$ was determined by integration of Gaussian fits as a function of $t_{exp}$. The data is shown in fig.~\ref{fig:deuterium_yields} as solid circles for each mass. To be able to reach $M=309$ (+5D$-$1H) not only by D-attachment, but also by HD/D$_2$-abstraction, attachment up to $M=312$ and abstraction needed to be taken into account. The reaction barriers and cross sections for attachment and abstraction are very likely site-dependent and could not be precisely derived from our model, as this would involve too many free parameters. We chose to follow the sequence and scenario derived from experiments and DFT calculations in \citet{cazaux2016}. In that study, the barrier for addition of the first hydrogen had been estimated as $E_{attach}^{radical}=10$ meV, while the second hydrogenation had a higher barrier of $E_{attach}^{closed}=30$ meV, as it described a reaction between a closed shell cation and a radical. For the subsequent D attachment steps, we used this alternation between barriers of 10 meV and 30 meV until the 6$^{th}$ addition. For the 6$^{th}$ hydrogenation, a higher barrier of 100 meV had been determined. For abstraction processes, we used the small $E_{abstract}=10$ meV barriers determined for such reactions on graphene \citep{morisset2004}. Based on these assumptions, a chemical network capturing the different processes was established, which is reported in the appendix.
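A heavily simplified sketch of such a rate-equation chain is shown below; it tracks only the number of extra D atoms, and the flux value is an arbitrary assumption (the real network of the appendix distinguishes H and D and the individual attachment sites).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kT = 0.0259                # eV at ~300 K
phi = 2.2e16               # D flux [cm^-2 s^-1]; arbitrary assumption
sig_add, sig_abs = 0.9e-16, 0.45e-16   # cm^2 (fitted values)
E_add = np.array([0.010, 0.030, 0.010, 0.030, 0.010, 0.100])  # eV
E_abs = 0.010                                                 # eV

k_add = phi * sig_add * np.exp(-E_add / kT)  # attachment rates [1/s]
k_abs = phi * sig_abs * np.exp(-E_abs / kT)  # abstraction rate [1/s]

def rhs(t, n):             # n[i]: population with i extra D atoms
    dn = np.zeros_like(n)
    for i in range(6):     # attachment i -> i+1
        dn[i] -= k_add[i] * n[i]
        dn[i + 1] += k_add[i] * n[i]
    for i in range(2, 7):  # abstraction i -> i-1 (none from i = 1)
        dn[i] -= k_abs * n[i]
        dn[i - 1] += k_abs * n[i]
    return dn

n0 = np.zeros(7); n0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 40.0), n0, t_eval=[0.15, 1, 3, 9, 40])
\end{verbatim}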
\subsection{Results}
We then compute the evolution of the yields of the different superdeuterated species using our chemical network. Our first goal is to reproduce the experimental data using values that were previously derived in other studies. First, we used cross sections for attachment (1.1 \AA$^2$) and abstraction (0.06 \AA$^2$) similar to the ones derived for neutral coronene by \citet{mennella2012} and independent of the hydrogenation state \citep{skov2014}. In this low abstraction scenario the computations do not reproduce the even mass-numbered superdeuterated states reported in fig.~\ref{fig:deuterium_yields}, which imply the abstraction of an H atom by a D to form HD. In order to reproduce the experimentally observed yields of all species whose formation involved abstraction processes, the abstraction cross sections had to be significantly increased to at least 0.45 \AA$^2$, while the addition cross sections were set to 0.9 \AA$^2$. This implies that for coronene cations, abstraction rates are at least half of addition rates. Note that since the flux of D atoms is difficult to constrain in our experiments, the absolute cross sections derived in our model could be different. However, our model allows us to derive the ratio between abstraction and addition cross sections.
Fig.~\ref{fig:deuterium_yields} displays the model results for the yields of various superdeuterated coronene cations as a function of their mass number $M$ as dashed lines. The agreement with the experimental data is good, with the exception of the $M=303$ and $305$ cases.
Our model gives a reliable ratio of the cross sections and thus of the reaction rates. It is important to realize that the absolute cross sections are related to the respective reaction barriers for attachment and abstraction. Barriers and cross sections cannot be determined independently with our approach.
A closer look at the dashed lines in fig.~\ref{fig:deuterium_yields} reveals an overestimation of the yields for $M=303, 304$ and 305 whereas the yields for higher masses ($M=306, 307$) are underestimated. The two different attachment barriers $E_{attach}^{radical}=10$ meV ($n+m$ even, radical-radical reaction) and $E_{attach}^{closed}=30$ meV ($n+m$ odd, closed-shell-radical reaction) we used were theoretically determined for sequential hydrogenation in the absence of abstraction reactions \citep{cazaux2016}.
Abstraction reactions could induce a deviation from the hydrogenation sequence.
For instance, sequential attachment of 3 D atoms to sites 1, 2 and 3 leads to formation of C$_{24}$H$_{12}$D$_3^+$ (see fig.\ref{fig:sketch}, D exposure to 12 H, $n=0$, $M=306$). Starting from this configuration, an abstraction could remove an H from one of the neighboring doubly occupied outer-edge sites, e.g. site 2 (see fig.\ref{fig:sketch}, the molecule shifts down-left to 11 H, $n=1$, $M=305$). The result is a radical configuration that would not be formed by attachment processes only, with a single doubly occupied outer edge site 1 and a singly occupied inner edge site 3. Attachment barriers typically decrease upon structural perturbations \citep{cazaux2016}, which could result in attachment of the next D atom not only to site 2 but also to site 4. The resulting closed shell cation would have either a 1, 2, 3 conformation (with the correspondingly high attachment barrier $E_{attach}^{closed}=30$ meV) or it would have a 1, 3, 4 configuration. In the latter case, the attachment barrier could be lowered, because there are now two singly occupied outer edge sites with a perturbed molecular structure due to neighboring doubly occupied outer edge sites. Similar reasoning is not limited to an abstraction from the triply superhydrogenated configuration but applies to all configurations that have originally undergone $n\geq 2$ D attachment processes and at least one abstraction reaction.
Abstraction reactions could also reduce the barrier for subsequent attachment, because abstraction leads to extra vibrational excitation of the remaining coronene cation. For similar Eley-Rideal processes on graphite, it has been theoretically shown that most of the released energy goes into the vibrational energy of the substrate rather than into the formed H$_2$ \citep{bachellerie2007,sizun2010}. On the other hand, attachment also increases vibrational excitation and does not lead to decreasing attachment barriers.
We have tried to reflect this with our model, by lowering the attachment barriers $E_{attach}^{closed}$ from 30 meV to 10 meV for the discussed closed shell configurations ($n\geq 2$, at least one abstraction, see appendix barriers in bold face).
The resulting yields are shown in fig.~\ref{fig:deuterium_yields} as solid lines. The improved agreement between the model results and the experimental data is evident.
While our model reproduces experimental data better when attachment barriers for all configurations modified by abstraction reactions are reduced, we would like to stress that other mechanisms cannot be ruled out. The number of potential attachment sites could be different for different configurations corresponding to the same hydrogenation state. Also, abstraction reactions could lead to an increase in internal energy.
\section{Conclusions}
The main conclusion from our study is therefore the 2:1 ratio between attachment and abstraction. This implies that H abstraction from gas phase coronene cations is more than 7 times more efficient than H abstraction from neutral coronene thin films. This could have implications for H$_2$ production in the ISM. For instance, \citet{andrews2016} have recently shown that in photodissociation regions, PAHs only contribute to H$_2$ formation via photodissociation channels and not via abstraction mechanisms. However, their calculations were based on the low abstraction cross sections from \citep{mennella2012}. An increase of the abstraction cross section by one order of magnitude could make PAHs a very important route for the formation of H$_2$ in space.
\section*{Acknowledgements}
\bibliographystyle{mnras}
\section*{Acknowledgment} \label{acknowledgement}
This work was supported by KAKENHI Grant-in-Aids~(18K18768, 21K18628, 26104005, 19H05806, 19J20418, 18H05536, and 21K13942). This work is also partially supported by the joint research program of the Institute for Cosmic Ray Research~(ICRR), the University of Tokyo.
\bibliographystyle{JHEP}
\section{Introduction} \label{s1_intro}
Scintillation detectors have been one of the
most widely-used instruments
for particle detection.
Since large detectors can be built at a relatively low cost, they have been used in a wide range of applications for rare-event-search experiments, such as dark matter searches~\cite{xenon, xmass, DEAP:2019yzn}, neutrinoless double beta decay searches~\cite{kamland, CANDLES:2020iya}, and studies of neutrino interactions~\cite{borexino, juno}.
A high light yield is
one of the most important properties of these scintillation detectors to realize a low threshold and good energy resolution.
The light yield of a scintillator is known to depend on temperature, on top of its inherent light production.
Three main mechanisms, (A) the population of occupied excited levels of electrons, (B) temperature quenching, and (C) the capture of excited electrons in traps, are known to cause the temperature dependence. (A) is known to produce a light yield maximum at some temperature. (B) gives a monotonic light yield increase as the temperature decreases, while
(C) gives a monotonic light yield decrease as the temperature decreases.
As a result, there is no comprehensive way to anticipate the precise temperature dependence of a given scintillator, and each one must be evaluated separately.
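As an illustration of mechanism (B) alone, thermal quenching is often modeled as a non-radiative channel with an activation energy competing with radiative decay; a toy parametrization (all parameter values assumed, not fitted to $\mathrm{CF_4}$) is:
\begin{verbatim}
import numpy as np

kB = 8.617e-5                                 # Boltzmann const. [eV/K]

def light_yield(T, L0=1.0, A=50.0, E_q=0.10): # all parameters assumed
    return L0 / (1.0 + A * np.exp(-E_q / (kB * T)))

for T in (300.0, 250.0, 200.0, 163.0):
    print(T, light_yield(T))                  # yield rises as T drops
\end{verbatim}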
Particle detectors containing fluorine-19~($^{19}\mathrm{F}$)
are known to have advantages for
Weakly Interacting Massive Particle~(WIMP) dark matter searches~\cite{spin} as well as for (solar and supernova) neutrino detection~\cite{Barabanov:1994rln, munu}.
In particular, the spin structure of the $^{19}\mathrm{F}$ nucleus enhances the signal from spin-dependent~(SD) interactions with dark matter.
The PICO-60 experiment reported the world-best limit of $3.2 \times 10^{-41}~\mathrm{cm}^{2}$ for $25~\mathrm{GeV}/c^{2}$ WIMPs
with a superheated droplet detector~(SDD)
filled with $\mathrm{C_{3}F_{8}}$~\cite{pico}.
While leading experiments can potentially discover WIMPs at any time, it should be noted that an SDD is a threshold-type detector, which means that an energy spectrum cannot be obtained from a single measurement. Since the energy spectrum is thought to be one of the most important pieces of information for the discovery and further studies of WIMPs, it is well-justified to develop spectroscopy detectors that contain
$^{19}\mathrm{F}$.
This type of ``$^{19}\mathrm{F}$ spectroscopy detector''
can be realized in the forms of
gas time projection chambers~(TPC)~\cite{Tanimori:2003xs, Battat:2014van}, crystal scintillators~\cite{Shimizu:2005kf, Ogawa:2004fy}, and bolometers~\cite{Miuchi:2002zp, Takeda:2003km}.
Among these possibilities, we focused on the carbon tetrafluoride ($\rm CF_4$) gaseous scintillator, mainly because of the feasibility of its purification.
Large-mass detectors could be realized once $\rm CF_4$ is confirmed to work as a practical scintillator in the liquid state in terms of the intrinsic light yield and the self-absorption; this is an important future work of ours.
The properties of gas $\rm CF_4$ scintillation light, such as emission spectra~\cite{VANSPRANG197851, Pansky:1994zh, BROGGINI1995543, cf4prop, cf4prop2, cf4prop3, secondary}, and decay time~\cite{cf4decay, decaytime}, have been investigated for several decades.
The temperature dependence of its scintillation light yield has not been investigated yet, since TPC detectors with $\mathrm{CF_{4}}$ gas are ordinarily operated at room temperature.
In this work, we evaluated the scintillation properties of $\mathrm{CF_{4}}$ gas at a low temperature; the results are reported in this paper.
This paper consists of four sections, including the Introduction. In Sec.~2, we describe the experimental setup to evaluate the temperature dependence of scintillation light from $\rm CF_4$ gas and briefly explain the analysis method. In Sec.~3, we present the experimental results and discussion. The study is concluded in Sec.~4.
\section{Measurements} \label{s2_setup}
\subsection{Setup}
The detector used for this work was
constructed at Kobe University.
A photograph and a conceptual image are
shown in Figure~\ref{setup1}.
The target volume of $28\times28\times41~\rm{{mm}^3}$ is viewed by two photomultiplier tubes~(PMTs; PMT-1 and PMT-2) from two sides.
The other four faces are surrounded by copper plates of $1$~mm thickness.
To keep the thermal contact between the target gas and the copper plates as good as possible, no extra material to improve reflection was placed inside the copper plates.
An $\mathrm{^{241}Am}$ source~($37$~Bq) is placed on the external side of one of the copper plates, which has a small hole to let $\gamma$-rays~($59.5$~keV) enter the target volume. The temperature and pressure of the gas were measured by a Pt--100 thermometer~(P0K1.232.6W.B.007, IST INNOVATIVE SENSOR TECHNOLOGY) and a pressure gauge~(MPS--R33RC--NGAT, Myotoku), respectively.
PMT R8520-406 produced by Hamamatsu K.K. was selected because it was demonstrated to work at a cold temperature of $163~\mathrm{K}$~\cite{datasheet}.
The PMT-1 and PMT-2 were biased at $+700$~V and $+720$~V, respectively, to function at comparable gains. The waveforms of PMTs' signals were inverted with a signal transformer and
recorded by a DRS4 Evaluation board~\cite{drs4}.
The DRS4 Evaluation board recorded the waveforms
with a 14-bit precision at
1 GHz sampling for a range from $-100$~mV to $+900$~mV. The board has a 50~$\Omega$ termination.
The trigger was issued by a coincidence of the two PMTs with a threshold of $+95$~mV.
Figure \ref{fig:vessel} shows the whole experimental equipment.
The detector was set in a stainless cylindrical container with an inner diameter of 60~mm.
The stainless container had ICF~144 flanges on both ends and was filled with CF$_4$ gas for the measurement.
It was set in a vacuum vessel for thermal insulation.
The target volume was filled with $\mathrm{CF_{4}}$ gas~(purity grade~5N, $99.999\%$) filtered with 4A-type molecular sieves~(MS-4A).
A refrigerator, PDC08 supplied by Ulvac Cryogenics Inc.~($14$~W capacity), was connected to the copper plate via a copper thermal link.
\begin{figure}[t]
\centering
\begin{minipage}{0.48\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figs/setup1.png}
\end{minipage}
\begin{minipage}{0.48\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figs/setup2.png}
\end{minipage}
\caption{(Left) The top view of the detector. (Right) Conceptual image of the detector setup. The source of $\mathrm{^{241}Am}$ and the thermometer are mounted on the external side of the copper plate. The $\mathrm{^{241}Am}$ source is located at a slightly off-centered position. The distance to one PMT is 14.5~mm while the other is 26.5~mm. It was confirmed that this displacement has no effect on the light yield measurement.}
\label{setup1}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=12.0cm]{figs/cf4_detector2.pdf}
\caption{The schematic view of the whole experimental equipment.
The detector~(shown in Figure~\ref{setup1}) was set in a vacuum vessel. The detector was cooled down by the refrigerator through the thermal link. The target volume is connected to the vacuum pump and the $\mathrm{CF_{4}}$ gas cylinder.}
\label{fig:vessel}
\end{center}
\end{figure}
\subsection{Measurement}
The measurement was carried out as one heat cycle: starting at room temperature, cooling down to a low temperature, and then warming up back to room temperature.
The scintillation signal was continuously measured throughout the heat cycle at various temperatures.
First, the target volume was
evacuated down to $1$~Pa and then filled with $\mathrm{CF_{4}}$ gas
at room temperature. Soon after the gas pressure reached $1.0 \times 10^5$~Pa, the detector valve was closed and the bias voltages were applied to the PMTs.
The data acquisition was started at this time, which is defined as the origin of the experimental time~($t_{0}$).
\begin{table}[t]
\caption{Definition of the experimental periods. The term~(hours) is the elapsed time since the start of the measurement~($t_{0}$ in the main text).}
\label{tab:period}
\centering
\begin{tabular}{|ccc|}
\hline
Period & Hours since $t_{0}$ & Description \\
\hline
A & $\phantom{00.0}0$ -- $\phantom{0}4.05$ & Room temperature (stable)\\
B & $\phantom{0}4.05$ -- $\phantom{0}7.50$ & Cooling \\
C & $\phantom{0}7.50$ -- $14.12$ & Cold (stable) \\
D & $14.12$ -- $17.50$ & Warming \\
E & $17.50$ -- $20.00$ & Room temperature (stable)\\
\hline
\end{tabular}
\end{table}
The detector temperature and pressure
during the measurement are shown in the middle and bottom plots of Figure \ref{fig:pesum}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\hsize]{figs/cf4_plots.pdf}
\caption{The number of photoelectrons and temperature during the experiment.
The top figure shows the time evolution of the number of photoelectrons.
The middle and bottom figures show the temperature and pressure of the CF$_4$ gas, respectively.
The green and purple lines show the mean light yield during the stable periods.
}
\label{fig:pesum}
\end{center}
\end{figure}
Five ``Periods'' are defined during the measurement to describe the different conditions as summarized in Table~\ref{tab:period}.
The data was collected at room temperature for four hours as a reference and to evaluate the stability of the measurement~(Period A). We then switched on the refrigerator to start cooling the gas, and data was collected throughout this time~(Period B).
Data was collected at a cold temperature of $263$~K once the instruments had attained a steady temperature and gas pressure, at which point sufficient thermal uniformity of the gas in the target volume can be assumed~(Period C).
After about seven hours of data taking at the cold temperature, the refrigerator was turned off. A heat leak increased the temperature of the gas~(Period D).
At the end, the detector returned to the room temperature and the data was taken for another two hours~(Period E).
\begin{figure}[t]
\begin{center}
\includegraphics[width=11.0cm]{figs/cf4_example_wfs.pdf}
\caption{Typical~(orange line) and averaged~(blue line) pulses of PMT-1.
The light blue region shows the interquartile range. The horizontal dashed line shows the threshold of data acquisition.
The inset shows an example of a single-photoelectron pulse.
}
\label{fig:wf}
\end{center}
\end{figure}
The stability of the PMT gains was monitored throughout the measurement by observing single photoelectron~(SPE) dark signals.
Figure \ref{fig:wf} shows a typical pulse recorded in Period A.
A fast pulse~($\sim$20~ns full width) of scintillation light, reflecting the short decay time of $\mathrm{CF_{4}}$, is observed.
This figure shows an average pulse after excluding saturated pulses, which originated from $\alpha$~particles emitted by the source.
In this experiment, the cosmic-muon rate is 1/100 of the signal rate, and the typical energy deposit is as small as 20~keV, so this contribution is negligible.
SPE pulses from accidental dark signals were searched
for before the triggered pulse timing with a software threshold of $+5$~mV.
A typical SPE pulse is shown in the inset of Figure~\ref{fig:wf}.
The charge for each pulse was calculated by integrating the pulse for $\pm 10$~ns around the pulse peak.
Figure~\ref{fig:pe} shows the charge distribution of SPE pulses.
\begin{figure}[t]
\begin{center}
\includegraphics[width=12.0cm]{figs/pe.png}
\caption{Typical charge distribution of SPE pulses.
The distribution is fitted with a Gaussian function~(red dotted curve).
}
\label{fig:pe}
\end{center}
\end{figure}
The peaks are fitted with Gaussian functions and the mean values are used to calculate
the charges of the SPE pulses, taking account of the termination resistor~(50~$\Omega$).
Calculated SPE charges are defined as $Q_\mathrm{SPE-1}$ and $Q_\mathrm{SPE-2}$~[C] for PMT-1 and PMT-2, respectively.
Figure~\ref{fig:gain} shows the time dependence of gains calculated as $Q_\mathrm{SPE-1,2}/e$, where $e$ is the elementary charge.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figs/gain3.png}
\caption{Gains of PMT-1 and PMT-2 during the measurement. The blue points show the gains of the PMTs with their statistical uncertainties. The orange line and band show the mean and the range of its standard deviation. The gains are stable within $\pm1.5\%$, and this level of variation does not affect the observed number of photoelectrons. }
\label{fig:gain}
\end{center}
\end{figure}
It demonstrates the stability of the PMT gains throughout the measurement.
The fluctuations in PMT gains were found to be less than $\pm1.5\%$.
The non-linearities of the PMTs and the electronics were evaluated individually with an LED calibration.
It was confirmed that the dynamic range was much larger than the observed light increase, and the effect of non-linearity is negligible compared to the considered gain fluctuations.
\section{Results} \label{s3_results}
\subsection{Temperature dependence of light output} \label{sec:res:temp}
The number of photoelectrons for each observed event is calculated from the waveforms of the two PMTs.
The baseline, calculated from the initial $200$~ns of the waveform, was subtracted from the waveform, and the charge was calculated by integrating the waveform taking account of the termination resistor.
Obtained charges from PMT-1 and PMT-2 are referred to as $Q_\mathrm{PMT-1}$~[C] and $Q_\mathrm{PMT-2}$~[C], respectively.
The total numbers of observed photoelectrons were obtained as, $N_\mathrm{total} = Q_\mathrm{PMT-1}/Q_\mathrm{SPE-1} +Q_\mathrm{PMT-2}/Q_\mathrm{SPE-2}$.
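For illustration only, the charge and photoelectron reconstruction described above can be sketched in a few lines of Python (a minimal sketch assuming waveforms sampled at 1~GHz and recorded in volts; all names are illustrative and not part of the actual analysis code):
\begin{verbatim}
import numpy as np

R_TERM = 50.0         # Ohm, termination resistor of the readout
DT = 1.0e-9           # s, sampling interval at 1 GHz
N_BASELINE = 200      # samples for the baseline (initial 200 ns)

def waveform_charge(wf_volts):
    # Charge [C] of one PMT pulse: subtract the baseline
    # estimated from the first samples, then integrate V dt / R.
    baseline = np.mean(wf_volts[:N_BASELINE])
    v = wf_volts - baseline
    return np.sum(v) * DT / R_TERM

def n_total(wf1, wf2, q_spe1, q_spe2):
    # Total observed photoelectrons from the two PMTs,
    # N_total = Q_PMT1/Q_SPE1 + Q_PMT2/Q_SPE2.
    return (waveform_charge(wf1) / q_spe1
            + waveform_charge(wf2) / q_spe2)
\end{verbatim}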
Figure \ref{fig:hist} shows a typical spectrum of $N_\mathrm{total}$ by the 59.5~keV $\gamma$-rays in stable Periods (A and C).
\begin{figure}[t]
\begin{center}
\includegraphics[width=10.0cm]{figs/sp3.pdf}
\caption{Typical spectra for $\gamma$-rays~($59.5$~keV). The one at room temperature is drawn with the orange histogram and the one at the cold temperature~($263$~K) with the blue histogram.
They are fitted with Gaussian functions~(dotted lines), whose means are $211.9$~P.E. and $305.0$~P.E., respectively.
The increase of the number of emitted photoelectrons is clearly observed.}
\label{fig:hist}
\end{center}
\end{figure}
The shift of the peak position between the two temperatures is clearly observed, demonstrating the higher light yield at the cold temperature.
The peaks are fitted with Gaussian functions, and the means for Periods A and C are 211.9~P.E. and 305.0~P.E., respectively.
The top plot of Figure~\ref{fig:pesum} clearly shows the increase of the light yield of CF$_4$ at cold temperatures.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\hsize]{figs/cf4_plot2.pdf}
\caption{Measured correlation between the gas temperature and the light yield. The red circles, orange triangles, blue stars, gray crosses, and green squares correspond to the data taken in Periods A, B, C, D, and E, respectively. The arrow illustrates the time order of the measurement.
}
\label{fig:tvsl}
\end{center}
\end{figure}
Table~\ref{tab:res} summarizes the light yields, resolutions for $59.5$~keV $\gamma$-ray, and gas temperatures during Periods A, C, and E.
\begin{table}[t]
\centering
\caption{Summary of the light yield, the energy resolution, and the temperature measured in stable periods.
The resolutions are calculated for the $59.5$~keV $\gamma$-ray. The uncertainties shown in this table are statistical only.}
\label{tab:res}
\begin{tabular}{|ccccc|}
\hline
Period & Light yield~[P.E.]& Relative ratio to Period A & Resolution [\%($\sigma$)] & Gas temperature~[K] \\ \hline
A & $219.5 \pm 5.8$ & $1$ & 51.1 & $299.7 \pm 0.5$ \\
C & $309.1 \pm 4.5$ & $1.41 \pm 0.04$ & 41.6 & $262.8 \pm 0.8$ \\
E & $210.9 \pm 7.6$ & $0.96 \pm 0.04$ & 52.4 & $296.3 \pm 1.0$ \\
\hline
\end{tabular}
\end{table}
In Period A, the light yield for $59.5$~keV $\gamma$-ray is measured to be $219.5\pm5.8~\mathrm{P.E.}$ with an energy resolution of $51.1\%$~($\sigma$).
The uncertainty is calculated by quadratically summing the statistical uncertainty and the systematic uncertainty described in Section~\ref{sec_sys}.
In Period~C, the observed light yield increased to $309.1\pm4.5~\mathrm{P.E.}$
The ratio and difference between these two results are $1.41\pm0.04$ and $89.6\pm7.3~\mathrm{P.E.}$, respectively.
The energy resolution improved from $51.1\%$ to $41.6\%$.
In Period E, the light yield was measured to be $210.9\pm7.6~\mathrm{P.E.}$
The ratio and difference between Period A and E are~$0.96\pm0.04$ and~$-8.6\pm9.6~\mathrm{P.E.}$, respectively. The light yield in Period E is consistent with the one in Period A within uncertainties.
Hence, this reproducibility of light yield demonstrates that the experimental data were properly obtained.
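As an illustration of how the quoted uncertainties on the relative ratios follow from the values in Table~\ref{tab:res}, treating the statistical uncertainties of the two periods as independent gives, for Periods A and C,
\[
\sigma_{\mathrm{C/A}}=\frac{309.1}{219.5}\sqrt{\left(\frac{5.8}{219.5}\right)^{2}+\left(\frac{4.5}{309.1}\right)^{2}}\approx0.04~,
\]
consistent with the ratio $1.41\pm0.04$ quoted above.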
Figure~\ref{fig:tvsl} shows the light yield as a function of the detector temperature.
A hysteresis was observed between
Periods B and D.
This behavior indicates a delay in the thermal conduction from the copper plate to the gas.
Another possibility is that the light yield may have decreased due to a small amount of outgassing around the melting point, even after water removal by the molecular sieves.
For these reasons, we focus on the stable Periods, A, C, and E, for the discussion of the light yield.
\subsection{Systematic uncertainties} \label{sec_sys}
The observed number of photoelectrons $N_\mathrm{obs.}$ is represented as
\begin{equation}
N_\mathrm{obs.} = N_\mathrm{orig.} \times \Omega_\mathrm{eff.} \times C_\mathrm{gain} \times \varepsilon_\mathrm{QE},
\end{equation}
where $N_\mathrm{orig.}$ is the number of originally generated photons, $\Omega_\mathrm{eff.}$ is the effective solid angle of the PMTs considering reflections, $C_\mathrm{gain}$ is the PMT gain, and $\varepsilon_\mathrm{QE}$ is the quantum efficiency of the PMT.
Table~\ref{tb:sys} summarizes the systematic uncertainties for $N_\mathrm{total}$.
\begin{table}[t]
\begin{center}
\caption{The summary of systematic uncertainties for $N_\mathrm{orig.}$. The details are described in the main text.}
\label{tb:sys}
\begin{tabular}{|cc|}
\hline Item & Systematic uncertainty~[$\%$] \\
\hline
PMT gain & $\pm 1.5 $ \\
PMT Q.E. & $\pm 5.0 $ \\
Reproducibility & $\pm 4.0$ \\
\hline
Total & $\pm 6.6$ \\
\hline
\end{tabular}
\end{center}
\end{table}
In this study, the $\Omega_\mathrm{eff.}$ is invariant during the measurement while the other three parameters affect $N_\mathrm{obs}$.
The uncertainty of the PMT gain, $C_\mathrm{gain}$, is evaluated as $\pm1.5\%$ of the standard deviation of the distribution of the PMT gain shown in Figure~\ref{fig:gain}.
The uncertainty of $\varepsilon_\mathrm{QE}$ was reported in the previous study to be $\pm5\%$ at maximum~\cite{pmt_qe}.
The reproducibility is estimated at $\pm4.0\%$ from the difference between the light yields in two room temperature Periods, A and E.
These uncertainties can be judged to be independent of each other; thus the total systematic uncertainty is evaluated to be $\pm 6.6\%$.
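Explicitly, the quadratic sum of the three contributions in Table~\ref{tb:sys} reads
\[
\sqrt{(1.5)^{2}+(5.0)^{2}+(4.0)^{2}}\,\%=\sqrt{43.25}\,\%\approx6.6\%~.
\]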
This systematic uncertainty and measured statistical uncertainty are both small relative to the observed increases in $N_\mathrm{obs.}$, and we conclude that a significant increase in $N_\mathrm{orig.}$, or the light yield, has been observed at cold temperatures.
\section{Conclusion and future prospects}\label{s5_conclusion}
The temperature dependence of the light yield of the gaseous scintillator CF$_4$ was studied at $300$~K and $263$~K.
The light yield of CF$_4$ was found to increase
by $(41.0\pm4.0_{\rm stat.}\pm6.6_{\rm syst.})\%$, and the energy resolution improved, when the $\mathrm{CF}_4$ gas was cooled to $263$~K.
As for future prospects, this high light yield at cold temperatures enables detectors using $\mathrm{CF_{4}}$ gas to improve their energy resolution and to lower their threshold.
\section{Introduction}
In this paper we consider the steady Navier-Stokes equations in a half-plane
$\Omega_{+}=\left\{ \left. (x,y)\in\mathbb{R}^{2}\right\vert ~y>1\right\} $
with a drift term parallel to the boundary, a small driving force of compact
support, with zero Dirichlet boundary conditions at the boundary of the half
plane and at infinity. See \cite{Hillairet.Wittwer-Existenceofstationary2009}
and \cite{Hillairet.Wittwer-Asymptoticdescriptionof2011} for a detailed
motivation of this problem. Existence of a strong solution for this system was
proved in \cite{Hillairet.Wittwer-Existenceofstationary2009} together with a
basic bound on the decay at infinity, and the existence of weak solutions was
shown in \cite{Hillairet.Wittwer-Asymptoticdescriptionof2011}. By elliptic
regularity weak solutions are smooth, and their only possible shortcoming is
the behavior at infinity, since the boundary condition may not be satisfied
there in a pointwise sense. In
\cite{Hillairet.Wittwer-Asymptoticdescriptionof2011} it was also shown that
for small forces there is only one weak solution. This unique weak solution
therefore coincides with the strong solution and satisfies as a consequence
the boundary condition at infinity in a pointwise sense.
The aim of this paper is to provide additional information concerning the
behavior of this solution at infinity by analyzing the solution obtained in
\cite{Hillairet.Wittwer-Existenceofstationary2009} in a more stringent
functional setting. More precisely, we obtain more information on the decay
behavior of the vorticity of the flow. Bounds on vorticity as a step towards
bounds on the velocity are a classical procedure in asymptotic analysis of
fluid flows (see the seminal papers
\cite{Gilbarg.Weinberger-AsymptoticPropertiesof1974},
\cite{Gilbarg.Weinberger-Asymptoticpropertiesof1978} and
\cite{Amick-asymptoticformof1991}). In
\cite{Hillairet.Wittwer-Existenceofstationary2009} and the current work, the
equation for the vorticity is Fourier-transformed with respect to the
coordinate $x$ parallel to the wall, and then rewritten as a dynamical system
with the coordinate $y$ perpendicular to the wall playing the role of time. In
this setting information on the behavior of the vorticity at infinity is
studied by analyzing the Fourier transform at $k=0$, with $k$ the Fourier
conjugate variable of $x$. In the present work, we also control the derivative
of the Fourier transform of the vorticity, which yields more precise decay
estimates for the vorticity and the velocity field in direct space than the
ones found in \cite{Hillairet.Wittwer-Existenceofstationary2009}. Our proof is
then based on a new linear fixed point problem involving the solution obtained
in \cite{Hillairet.Wittwer-Existenceofstationary2009} and the derivative of
the vorticity with respect to $k$.
Since the original equation is elliptic, the dynamical system under
consideration contains stable and unstable modes and no spectral gap, so that
standard versions of the center manifold theorem are not sufficient to prove
existence of solutions. Functional techniques that allow to deal with such a
situation go back to \cite{Gallay-center-stablemanifoldtheorem1993} and were
adapted to the case of the Navier-Stokes equations in
\cite{Baalen-StationarySolutionsof2002} and in
\cite{Wittwer-structureofStationary2002},
\cite{Wittwer-Supplementstructureof2003}. For a general review see
\cite{Heuveline.Wittwer-ExteriorFlowsat2010}. The linearized version of the
current problem was studied in \cite{Hillairet.Wittwer-vorticityofOseen2008}.
A related problem in three dimensions was discussed in
\cite{Guo.etal-Existenceofstationary2011}.
The results of the present paper are the basis for the work described in
\cite{Boeckle.Wittwer-Asymptoticsofsolutions2011}, where we extract several
orders of an asymptotic expansion of the vorticity and the velocity field at
infinity. The asymptotic velocity field obtained this way is divergence-free
and may be used to define artificial boundary conditions of Dirichlet type
when the system of equations is restricted to a finite sub-domain to be solved
numerically. The use of asymptotic terms as artificial boundary conditions was
pioneered in \cite{Boenisch.etal-Secondorderadaptive2008} for the related
problem of an exterior flow in the whole space in two dimensions, and in
\cite{Heuveline.Wittwer-AdaptiveBoundaryConditions2010} for the case in three dimensions.
\bigskip
Let $\mathbf{x}=(x,y)$, and let $\Omega_{+}=\{\left. (x,y)\in\mathbb{R}^{2}\right\vert ~y>1\}$. The model under consideration is given by the
Navier-Stokes equations with a drift term parallel to the boundary
\begin{align}
-\partial_{x}\boldsymbol{u}\mathbf{+}\Delta\boldsymbol{u} & =\boldsymbol{F}\mathbf{+}\boldsymbol{u}\cdot\mathbf{\nabla}\boldsymbol{u}+\mathbf{\nabla}p~,\label{eq:nssteadyforce}\\
\mathbf{\nabla}\cdot\boldsymbol{u} & =0~,\label{eq:incompressibility}
\end{align}
subject to the boundary conditions
\begin{align}
\boldsymbol{u}(x,1) & =0~,\hspace{1cm}x\in\mathbb{R}~,\label{eq:b0}\\
\lim\limits_{\mathbf{x\rightarrow\infty}}\boldsymbol{u}\mathbf{(x)} & =0~.\label{eq:b1}
\end{align}
The following theorem is our main result.
\begin{theorem}
\label{thm:main}For all $\boldsymbol{F}\in C_{c}^{\infty}(\Omega_{+})$ with
$\boldsymbol{F}$ sufficiently small in a sense to be defined below, there
exist a unique vector field $\boldsymbol{u}=(u,v)$ and a function $p$
satisfying the Navier-Stokes equations (\ref{eq:nssteadyforce}),
(\ref{eq:incompressibility}) in $\Omega_{+}$ subject to the boundary
conditions (\ref{eq:b0}) and (\ref{eq:b1}). Moreover, there exists a constant
$C>0$, such that $|y^{3/2}u(x,y)|+|y^{3/2}v(x,y)|+$ $|y^{3}\omega
(x,y)|+|xy\omega(x,y)|\leq C$, for all $(x,y)\in\Omega_{+}$.
\end{theorem}
\bigskip
This theorem is a consequence of Theorem~\ref{thm:existence} which is proved
in Section \ref{sec:existencemap}. The crucial improvement with respect to
\cite{Hillairet.Wittwer-Existenceofstationary2009} is the bound on the
function $xy\omega(x,y)$.
\bigskip
The paper is organized as follows. In Section~\ref{sec:evoleq} we rewrite
(\ref{eq:nssteadyforce}) and (\ref{eq:incompressibility}) as a dynamical
system with $y$ playing the role of time, and Fourier-transform the equations
with respect to the variable $x$. Then, in Section~\ref{sec:integraleq}, we
recall the integral equations for the vorticity discussed in
\cite{Hillairet.Wittwer-Existenceofstationary2009} and complement them by the
ones for the derivative with respect to $k$. We then introduce in
Section~\ref{sec:funframe} certain well adapted Banach spaces which encode the
information concerning the decay of the functions at infinity. Finally, in
Section~\ref{sec:existencemap}, we reformulate the problem of showing the
existence of the derivative of vorticity with respect to $k$ as the fixed
point of a continuous map, based on the existence of solutions proved in
\cite{Hillairet.Wittwer-Existenceofstationary2009}. We present in
Sections~\ref{sec:convolutiondisc} and \ref{sec:bounds} the proofs of the
lemmas used in Section~\ref{sec:existencemap}. In the appendix, we recall
results from \cite{Hillairet.Wittwer-Existenceofstationary2009} which are
needed here.
\section{Reduction to an evolution equation\label{sec:evoleq}}
We recall the procedure used in
\cite{Hillairet.Wittwer-Existenceofstationary2009} to frame the Navier-Stokes
equations for the studied case as a dynamical system. Let $\boldsymbol{u}=(u,v)$ and $\boldsymbol{F}=(F_{1},F_{2})$. Then, equations
(\ref{eq:nssteadyforce}) and (\ref{eq:incompressibility}) are equivalent to
\begin{align}
\omega & =-\partial_{y}u+\partial_{x}v~,\label{eq:vorticity}\\
-\partial_{x}\omega\mathbf{+}\Delta\omega & =\partial_{x}(u\omega)+\partial_{y}(v\omega)+\partial_{x}F_{2}-\partial_{y}F_{1}~,\label{eq:vorticityNScomovingForce}\\
\partial_{x}u+\partial_{y}v & =0~. \label{eq:incompressibility2}
\end{align}
The function $\omega$ is the vorticity of the fluid. Once equations
(\ref{eq:vorticity})-(\ref{eq:incompressibility2}) are solved, the pressure
$p$ can be obtained by solving the equation
\[
\Delta p=-\mathbf{\nabla}\cdot(\boldsymbol{F}\mathbf{+}\boldsymbol{u}\cdot\mathbf{\nabla}\boldsymbol{u})
\]
in $\Omega_{+}$, subject to the Neumann boundary condition
\[
\partial_{y}p(x,1)=\partial_{y}^{2}v(x,1)~.
\]
Let
\begin{align}
q_{0} & =u\omega~,\label{eq:defq0direct}\\
q_{1} & =v\omega~, \label{eq:defq1direct}
\end{align}
and let furthermore
\begin{align}
Q_{0} & =q_{0}+F_{2}~,\label{eq:defQ0direct}\\
Q_{1} & =q_{1}-F_{1}~. \label{eq:defQ1direct}
\end{align}
We then rewrite the second order differential equation
(\ref{eq:vorticityNScomovingForce}) as a first order system
\begin{align}
\partial_{y}\omega & =\partial_{x}\eta+Q_{1}~,\label{eq:diffomegadirect}\\
\partial_{y}\eta & =-\partial_{x}\omega+\omega+Q_{0}~.
\label{eq:diffetadirect}
\end{align}
Note that, unlike the right-hand side of (\ref{eq:vorticityNScomovingForce}),
the expressions for $Q_{0}$ and $Q_{1}$ do not contain derivatives. This is
due to the fact that, in contrast to standard practice, we did not set, say,
$\partial_{y}\omega=\eta$, but we chose with (\ref{eq:diffomegadirect}) a more
sophisticated definition. The fact that the nonlinear terms in
(\ref{eq:diffomegadirect}), (\ref{eq:diffetadirect}) do not contain
derivatives simplifies the analysis of the equations considerably. An
additional trick allows us to reduce the complexity even further. Namely, we can
replace (\ref{eq:incompressibility2}) and (\ref{eq:vorticity}) with the
equations
\begin{align}
\partial_{y}\psi & =-\partial_{x}\varphi-Q_{1}~,\label{eq:diffpsidirect}\\
\partial_{y}\varphi & =\partial_{x}\psi+Q_{0}~, \label{eq:diffphidirect}
\end{align}
if we use the decomposition
\begin{align}
u & =-\eta+\varphi~,\label{eq:ansatzUdirect}\\
v & =\omega+\psi~. \label{eq:ansatzVdirect}
\end{align}
The point is that in contrast to $u$ and $v$ the functions $\psi$ and
$\varphi$ decouple on the linear level from $\omega$ and $\eta$. Since on the
linear level we have $\Delta\varphi=0$ and $\Delta\psi=0$, it will turn out
that $\varphi$ and $\psi$ have a dominant asymptotic behavior which is
harmonic when $Q_{0}$ and $Q_{1}$ are small.
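To make this explicit: Fourier transforming (\ref{eq:diffpsidirect}) and (\ref{eq:diffphidirect}) in $x$ and setting $Q_{0}=Q_{1}=0$ gives $\partial_{y}\hat{\psi}=ik\hat{\varphi}$ and $\partial_{y}\hat{\varphi}=-ik\hat{\psi}$, so that
\[
\partial_{y}^{2}\hat{\varphi}=-ik\,\partial_{y}\hat{\psi}=k^{2}\hat{\varphi}~,
\]
and the solution which remains bounded as $y\rightarrow\infty$ behaves like $e^{-|k|(y-1)}$, precisely the Fourier transform of a harmonic function decaying at infinity; the same computation applies to $\hat{\psi}$.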
Equations (\ref{eq:diffomegadirect})-(\ref{eq:diffphidirect}) are a dynamical
system with $y$ playing the role of time. We now take the Fourier transform in
the $x$-direction.
\begin{definition}
\label{def:fourier}Let $\hat{f}$ be a complex valued function on $\Omega_{+}$.
Then, we define the inverse Fourier transform $f=\mathcal{F}^{-1}[\hat{f}]$ by
the equation,
\[
f(x,y)=\mathcal{F}^{-1}[\hat{f}](x,y)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-ikx}\hat{f}(k,y)dk~,
\]
and $\hat{h}=\hat{f}\ast\hat{g}$ by
\[
\hat{h}(k,y)=(\hat{f}\ast\hat{g})(k,y)=\frac{1}{2\pi}\int_{\mathbb{R}}\hat
{f}(k-k^{\prime},y)\hat{g}(k^{\prime},y)dk^{\prime}~,
\]
whenever the integrals make sense. We note that for a function $f$ which is
smooth and of compact support in $\Omega_{+}$ we have $f=\mathcal{F}^{-1}[\hat{f}]$, where
\[
\hat{f}(k,y)=\mathcal{F}[f](k,y)=\int_{\mathbb{R}}e^{ikx}f(x,y)dx~,
\]
and that $fg=\mathcal{F}^{-1}[\hat{f}\ast\hat{g}]$.
\end{definition}
With these definitions we have in Fourier space, instead of
(\ref{eq:diffomegadirect})-(\ref{eq:diffphidirect}), the equations
\begin{align}
\partial_{y}\hat{\omega} & =-ik\hat{\eta}+\hat{Q}_{1}~,\label{eq:diffomegafourier}\\
\partial_{y}\hat{\eta} & =(ik+1)\hat{\omega}+\hat{Q}_{0}~,\label{eq:diffetafourier}\\
\partial_{y}\hat{\psi} & =ik\hat{\varphi}-\hat{Q}_{1}~,\label{eq:diffpsifourier}\\
\partial_{y}\hat{\varphi} & =-ik\hat{\psi}+\hat{Q}_{0}~.
\label{eq:diffphifourier}
\end{align}
From (\ref{eq:defQ0direct}) and (\ref{eq:defQ1direct}) we get
\begin{align}
\hat{Q}_{0} & =\hat{q}_{0}+\hat{F}_{2}~,\label{eq:defQ0fourier}\\
\hat{Q}_{1} & =\hat{q}_{1}-\hat{F}_{1}~, \label{eq:defQ1fourier}
\end{align}
from (\ref{eq:defq0direct}) and (\ref{eq:defq1direct}) we get
\begin{align}
\hat{q}_{0} & =\hat{u}\ast\hat{\omega}~,\label{eq:defq0fourier}\\
\hat{q}_{1} & =\hat{v}\ast\hat{\omega}~, \label{eq:defq1fourier}
\end{align}
and instead of (\ref{eq:ansatzUdirect}) and (\ref{eq:ansatzVdirect}) we have
the equations
\begin{align}
\hat{u} & =-\hat{\eta}+\hat{\varphi}~,\label{eq:ansatzUfourier}\\
\hat{v} & =\hat{\omega}+\hat{\psi}~. \label{eq:ansatzVfourier}
\end{align}
\section{Integral equations\label{sec:integraleq}}
We now reformulate the problem of finding a solution to
(\ref{eq:diffomegafourier})-(\ref{eq:diffphifourier}) which satisfies the
boundary conditions (\ref{eq:b0}) and (\ref{eq:b1}) in terms of a system of
integral equations. The equations for $\hat{\omega}$, $\hat{\eta}$,
$\hat{\varphi}$ and $\hat{\psi}$ are as in
\cite{Hillairet.Wittwer-Existenceofstationary2009}. In particular we recall
that
\begin{equation}
\hat{\omega}=\sum_{m=0}^{1}\sum_{n=1}^{3}\hat{\omega}_{n,m}~,
\label{eq:omegaInt}
\end{equation}
where, for $n=1,2,3$, $m=0,1$,
\begin{equation}
\hat{\omega}_{n,m}(k,t)=\check{K}_{n}(k,t-1)\int_{I_{n}}\check{f}_{n,m}(k,s-1)\hat{Q}_{m}(k,s)ds~, \label{eq:omegaIntComp}
\end{equation}
where, for $k\in\mathbb{R}\setminus\{0\}$ and $\sigma$, $\tau\geq0$
\begin{align}
\check{K}_{n}(k,\tau) & =\frac{1}{2}e^{-\kappa\tau}~,\text{ for
}n=1,2~,\label{Kn}\\
\check{K}_{3}(k,\tau) & =\frac{1}{2}(e^{\kappa\tau}-e^{-\kappa\tau})~,
\label{K3}
\end{align}
and
\begin{align}
\check{f}_{1,0}(k,\sigma) & =\frac{ik}{\kappa}e^{\kappa\sigma}-\frac{\left(
|k|+\kappa\right) ^{2}}{\kappa}e^{-\kappa\sigma}+2\left( |k|+\kappa\right)
e^{-|k|\sigma}~,\label{eq:f10}\\
\check{f}_{2,0}(k,\sigma) & =2\left( \kappa+|k|\right) \left(
e^{-|k|\sigma}-e^{-\kappa\sigma}\right) ~,\label{eq:f20}\\
\check{f}_{3,0}(k,\sigma) & =\frac{ik}{\kappa}e^{-\kappa\sigma
}~,\label{eq:f30}\\
\check{f}_{1,1}(k,\sigma) & =e^{\kappa\sigma}+\frac{\left( |k|+\kappa\right) ^{2}}{ik}e^{-\kappa\sigma}-2\frac{|k|\left( |k|+\kappa\right)}{ik}e^{-|k|\sigma}~,\label{eq:f11}\\
\check{f}_{2,1}(k,\sigma) & =2\left( \frac{|k|\left( |k|+\kappa\right)}{ik}-1\right) e^{-\kappa\sigma}-2\frac{|k|\left( |k|+\kappa\right)}{ik}e^{-|k|\sigma}~,\label{eq:f21}\\
\check{f}_{3,1}(k,\sigma) & =-e^{-\kappa\sigma}~, \label{eq:f31}
\end{align}
and where $I_{1}=[1,t]$ and $I_{2}=I_{3}=[t,\infty)$.
We introduce the integral equation for $\partial_{k}\hat{\omega}$, noting that
$\hat{\omega}$ is continuous at $k=0$ (see
\cite{Hillairet.Wittwer-Existenceofstationary2009}). From
(\ref{eq:omegaIntComp}) we get that
\begin{equation}
\partial_{k}\hat{\omega}=\sum_{m=0}^{1}\sum_{n=1}^{3}\sum_{l=1}^{3}\partial_{k}\hat{\omega}_{l,n,m}~,\label{eq:dkomegaInt}
\end{equation}
where, for $n=1,2,3$, $m=0,1$
\begin{align}
\partial_{k}\hat{\omega}_{1,n,m}(k,t) & =\partial_{k}\check{K}_{n}(k,t-1)\int_{I_{n}}\check{f}_{n,m}(k,s-1)\hat{Q}_{m}(k,s)ds~,\label{eq:dkomegaIntComp1}\\
\partial_{k}\hat{\omega}_{2,n,m}(k,t) & =\check{K}_{n}(k,t-1)\int_{I_{n}}\partial_{k}\check{f}_{n,m}(k,s-1)\hat{Q}_{m}(k,s)ds~,\label{eq:dkomegaIntComp2}\\
\partial_{k}\hat{\omega}_{3,n,m}(k,t) & =\check{K}_{n}(k,t-1)\int_{I_{n}}\check{f}_{n,m}(k,s-1)\partial_{k}\hat{Q}_{m}(k,s)ds~,\label{eq:dkomegaIntComp3}
\end{align}
where, for $k\in\mathbb{R}\setminus\{0\}$ and $\sigma$, $\tau\geq0$
\begin{align}
\partial_{k}\check{K}_{n}(k,\tau) & =\frac{1}{4}\dfrac{2k-i}{\kappa
}e^{-\kappa\tau}~,\text{ for }n=1,2~,\label{dKn}\\
\partial_{k}\check{K}_{3}(k,\tau) & =\frac{1}{4}\dfrac{2k-i}{\kappa
}(e^{\kappa\tau}+e^{-\kappa\tau})~,\label{dK3}
\end{align}
where $\check{f}_{n,m}$ is as above, where
\begin{align}
\partial_{k}\check{f}_{1,0}(k,\sigma) & =\frac{i}{2\kappa}(e^{\kappa\sigma
}+e^{-\kappa\sigma}-2e^{-|k|\sigma})-\frac{ik^{2}}{2\kappa^{3}}(e^{\kappa
\sigma}-e^{-\kappa\sigma})+\frac{2}{\kappa}\frac{k^{2}+|k|\kappa}{k}(e^{-|k|\sigma}-e^{-\kappa\sigma})\nonumber\\
& +i\frac{k^{2}+\kappa^{2}}{2\kappa^{2}}(e^{\kappa\sigma}-e^{-\kappa\sigma})\sigma+\frac{k^{2}+|k|\kappa}{k}\frac{k^{2}+\kappa^{2}}{\kappa^{2}}e^{-\kappa\sigma}\sigma-2\frac{k^{2}+|k|\kappa}{k}e^{-|k|\sigma}\sigma~,\label{eq:dkf10}\\
\partial_{k}\check{f}_{2,0}(k,\sigma) & =\frac{(|k|+\kappa)^{2}}{\kappa
k}(e^{-|k|\sigma}-e^{-\kappa\sigma})-2\frac{\kappa+|k|}{\kappa k}\left(
|k|\kappa e^{-|k|\sigma}-\frac{k^{2}+\kappa^{2}}{2}e^{-\kappa\sigma}\right)
\sigma~,\label{eq:dkf20}\\
\partial_{k}\check{f}_{3,0}(k,\sigma) & =\frac{k}{2\kappa^{3}}e^{-\kappa
\sigma}-i\frac{k^{2}+\kappa^{2}}{2\kappa^{2}}\sigma e^{-\kappa\sigma
}~,\label{eq:dkf30}\\
\partial_{k}\check{f}_{1,1}(k,\sigma) & =i\frac{\left( |k|+\kappa\right)
^{2}}{\kappa|k|}(e^{-|k|\sigma}-e^{-\kappa\sigma})\nonumber\\
& +\frac{k^{2}+\kappa^{2}}{2\kappa k}(e^{\kappa\sigma}+e^{-\kappa\sigma
})\sigma+2i\frac{k^{2}+|k|\kappa}{k^{2}}\left( \frac{k^{2}+\kappa^{2}}{2\kappa}e^{-\kappa\sigma}-|k|e^{-|k|\sigma}\right) \sigma~,\label{eq:dkf11}\\
\partial_{k}\check{f}_{2,1}(k,\sigma) & =i\frac{\left( |k|+\kappa\right)
^{2}}{\kappa|k|}(e^{-\kappa\sigma}-e^{-|k|\sigma})+i(|k|+\kappa)\frac
{k^{2}+\kappa^{2}}{k^{2}}e^{-\kappa\sigma}\sigma-2i\left( |k|+\kappa\right)
e^{-|k|\sigma}\sigma~,\label{eq:dkf21}\\
\partial_{k}\check{f}_{3,1}(k,\sigma) & =\frac{k^{2}+\kappa^{2}}{2\kappa
k}e^{-\kappa\sigma}\sigma~.\label{eq:dkf31}
\end{align}
and where the functions
\begin{align*}
\partial_{k}\hat{Q}_{0} & =\partial_{k}\hat{q}_{0}+\partial_{k}\hat{F}_{2}~,\\
\partial_{k}\hat{Q}_{1} & =\partial_{k}\hat{q}_{1}-\partial_{k}\hat{F}_{1}~,
\end{align*}
are obtained from (\ref{eq:defQ0fourier})\ and (\ref{eq:defQ1fourier}). Since
$\hat{q}_{0}$ and $\hat{q}_{1}$ are convolution products (see
(\ref{eq:defq0fourier}) and (\ref{eq:defq1fourier})), and noting that $\hat
{u}$ and $\hat{v}$ are continuous bounded functions on $\mathbb{R}$, that
$\hat{\omega}$ is continuous on $\mathbb{R}$ and differentiable on
$\mathbb{R}\setminus\{0\}$ and that $\partial_{k}\hat{\omega}$ is absolutely
integrable, we conclude (see \cite[Proposition 8.8, page 241]{Folland1999})
that $\hat{q}_{0}$ and $\hat{q}_{1}$ are continuously differentiable functions
and that
\begin{align}
\partial_{k}\hat{q}_{0} & =\hat{u}\ast\partial_{k}\hat{\omega}~,\label{eq:defdkq0}\\
\partial_{k}\hat{q}_{1} & =\hat{v}\ast\partial_{k}\hat{\omega}~.\label{eq:defdkq1}
\end{align}
This means that it is sufficient to add equation (\ref{eq:dkomegaInt}) to the
ones for $\hat{\omega}$, $\hat{\eta}$, $\hat{\varphi}$ and $\hat{\psi}$ in
order to get a set of integral equations determining also $\partial_{k}\hat{\omega}$.
\begin{remark}
\label{rem:kcheckandk}The products $\check{K}_{n}\check{f}_{n,m}$ are equal to
$K_{n}f_{n,m}$ as defined in
\cite{Hillairet.Wittwer-Existenceofstationary2009}, and we have $\check
{K}_{n=1,2}=K_{n=1,2}$, $\check{K}_{3}=\frac{ik}{\kappa}K_{3}$, $\check
{f}_{n=1,2;m}=f_{n=1,2;m}$ and $\check{f}_{3,m}=\frac{\kappa}{ik}f_{3,m}$. We
chose to rewrite the equations in the new form for convenience later on.
\end{remark}
\section{Functional framework\label{sec:funframe}}
We recall the definition of the function spaces introduced in
\cite{Hillairet.Wittwer-Existenceofstationary2009} and extend it to include
functions with a certain type of singular behavior. Let $\alpha$, $r\geq0$,
$k\in\mathbb{R}$, $t\geq1$, and let
\begin{equation}
\mu_{\alpha,r}(k,t)=\frac{1}{1+(|k|t^{r})^{\alpha}}~. \label{eq:defMu}
\end{equation}
Let furthermore
\begin{align*}
\bar{\mu}_{\alpha} & =\mu_{\alpha,1}(k,t)~,\\
\tilde{\mu}_{\alpha} & =\mu_{\alpha,2}(k,t)~.
\end{align*}
We also define
\begin{equation}
\kappa=\sqrt{k^{2}-ik}~, \label{eq:defKappa}
\end{equation}
and
\begin{equation}
\Lambda_{-}=-\operatorname{Re}(\kappa)=-\frac{1}{2}\sqrt{2\sqrt{k^{2}+k^{4}}+2k^{2}}~. \label{LM}
\end{equation}
Throughout this paper we use the inequalities
\begin{equation}
|\kappa|=(k^{2}+k^{4})^{1/4}\leq|k|^{1/2}+|k|\leq2^{3/4}|\kappa|\leq2^{3/4}(1+|k|)~. \label{bk1}
\end{equation}
We have in particular that
\begin{equation}
|k|^{\frac{1}{2}}\leq\mathrm{const.}|\kappa|~, \label{eq:sqrtKleqkappa}
\end{equation}
and that
\begin{equation}
e^{\Lambda_{-}\sigma}\leq e^{-|k|\sigma}~, \label{expbound}
\end{equation}
which will play a crucial role for small and large values of $k$, respectively.
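For completeness, these facts can be checked directly from (\ref{eq:defKappa}): with $\kappa^{2}=k^{2}-ik$ one has
\[
|\kappa|^{2}=|k^{2}-ik|=\sqrt{k^{4}+k^{2}}~,\qquad\left( \operatorname{Re}\kappa\right) ^{2}=\frac{|\kappa^{2}|+\operatorname{Re}(\kappa^{2})}{2}=\frac{\sqrt{k^{2}+k^{4}}+k^{2}}{2}\geq k^{2}~,
\]
which gives (\ref{LM}), and (\ref{expbound}) follows since $\operatorname{Re}(\kappa)\geq|k|$.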
\begin{definition}
\label{def:Bnapq}We define, for fixed $\alpha\geq0$, and $n$, $p$, $q$ $\geq
0$, $\mathcal{B}_{\alpha,p,q}^{n}$ to be the Banach space of functions
$\hat{f}\colon\mathbb{R}\setminus\{0\}\times\lbrack1,\infty)\rightarrow
\mathbb{C}$, for which $\hat{f}_{n}=\kappa^{n}\cdot\hat{f}\in C(\mathbb{R}\setminus\{0\}\times\lbrack1,\infty),\mathbb{C})$, and for which the norm
\[
\left\Vert \hat{f};\mathcal{B}_{\alpha,p,q}^{n}\right\Vert =\sup_{t\geq1}\sup_{k\in\mathbb{R}\setminus\{0\}}\frac{|\hat{f}_{n}(k,t)|}{\frac{1}{t^{p}}\bar{\mu}_{\alpha}(k,t)+\frac{1}{t^{q}}\tilde{\mu}_{\alpha}(k,t)}
\]
is finite. We use the shorthand $\mathcal{B}_{\alpha,p,q}$\ for $\mathcal{B}_{\alpha,p,q}^{0}$. Furthermore we set, for $\alpha>2$
\begin{align*}
\mathcal{D}_{\alpha-1,p,q}^{1} & =\mathcal{B}_{\alpha,p,q}^{1}\times\mathcal{B}_{\alpha-\frac{1}{2},p+\frac{1}{2},q+\frac{1}{2}}^{1}\times\mathcal{B}_{\alpha-1,p+\frac{1}{2},q+1}^{1}~,\\
\mathcal{V}_{\alpha} & =\mathcal{B}_{\alpha,\frac{5}{2},1}\times
\mathcal{B}_{\alpha,\frac{1}{2},0}\times\mathcal{B}_{\alpha,\frac{1}{2},1}~.
\end{align*}
\end{definition}
\begin{remark}
We present two elementary properties of the spaces $\mathcal{B}_{\alpha
,p,q}^{n}$, which will be routinely used without mention. Let $\alpha$, $\alpha^{\prime}\geq0$, and $p$, $p^{\prime}$, $q$, $q^{\prime}\geq0$; then
\[
\mathcal{B}_{\alpha,p,q}^{n}\cap\mathcal{B}_{\alpha^{\prime},p^{\prime},q^{\prime}}^{n}\subset\mathcal{B}_{\min\{\alpha^{\prime},\alpha\},\min\{p^{\prime},p\},\min\{q^{\prime},q\}}^{n}~.
\]
In addition we have
\[
\mathcal{B}_{\alpha,p,q}^{n}\subset\mathcal{B}_{\alpha,\min\{p,q\},\infty}^{n}~,
\]
where the space with $q=\infty$ is to be understood to contain functions for
which the norm
\[
\left\Vert \hat{f};\mathcal{B}_{\alpha,p,\infty}^{n}\right\Vert =\sup_{t\geq1}\sup_{k\in\mathbb{R}\setminus\{0\}}\frac{|\hat{f}_{n}(k,t)|}{\frac{1}{t^{p}}\bar{\mu}_{\alpha}(k,t)}
\]
is finite.
\end{remark}
\section{Existence of solutions\label{sec:existencemap}}
In \cite{Hillairet.Wittwer-Existenceofstationary2009} it was shown that one
can rewrite the integral equations as a fixed point problem, and that, for
$\boldsymbol{F}$ sufficiently small, there exist functions $\hat{\omega}$,
$\hat{u}$ and $\hat{v}$, that are solution to (\ref{eq:vorticity
)-(\ref{eq:incompressibility2}), satisfying the boundary conditions
(\ref{eq:b0}) and (\ref{eq:b1}). More precisely, we have, for $\alpha>3$
\begin{align}
\hat{\omega} & \in\mathcal{B}_{\alpha,\frac{5}{2},1}~,\label{eq:omegaspace}\\
\hat{u} & \in\mathcal{B}_{\alpha,\frac{1}{2},0}~,\label{eq:uspace}\\
\hat{v} & \in\mathcal{B}_{\alpha,\frac{1}{2},1}~,\label{eq:vspace}
\end{align}
and, for $i=0,1$
\begin{equation}
\hat{Q}_{i}\in\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}~.\label{eq:Qispace}
\end{equation}
We now show that using this solution as a starting point, we may define a
linear fixed point problem with a unique solution for $\partial_{k}\hat
{\omega}$. The structure of (\ref{eq:dkomegaInt}) is rather complicated and it
turns out to be necessary to decompose the sum into three parts which are
analyzed independently. Let $\mathbf{\hat{d}}=(\hat{d}_{1},\hat{d}_{2},\hat
{d}_{3})$ where
\[
\hat{d}_{l}=\sum_{m=0}^{1}\sum_{n=1}^{3}\partial_{k}\hat{\omega}_{l,n,m}~,
\]
then $\partial_{k}\hat{\omega}=\sum_{l=1}^{3}\hat{d}_{l}$. The function
$\hat{d}_{3}$ depends on $\partial_{k}\hat{\omega}$, but $\hat{d}_{1}$ and
$\hat{d}_{2}$ do not.
\begin{proposition}
\label{prop:d1&d2space}The functions $\hat{d}_{1}$ and $\hat{d}_{2}$ are in
$\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$.
\end{proposition}
\begin{proof}
See Sections~\ref{sec:boundsd1} and \ref{sec:boundsd2}.
\end{proof}
\bigskip
We now define the fixed point problem.
\begin{lemma}
\label{lem:P}Let $\alpha>3$, and let $\hat{u}$ and $\hat{v}$ be as in
(\ref{eq:uspace}) and (\ref{eq:vspace}) respectively. Then
\[
\begin{array}
[c]{cccc}
\mathfrak{L}_{1}~\colon & \mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1} &
\rightarrow & \mathcal{B}_{\alpha,\frac{3}{2},1}\times\mathcal{B}_{\alpha,\frac{3}{2},2}\\
& \hat{d} & \longmapsto & \left(
\begin{array}
[c]{c}
\hat{u}\ast\hat{d}\\
\hat{v}\ast\hat{d}
\end{array}
\right) ~,
\end{array}
\]
defines a continuous linear map.
\end{lemma}
\begin{proof}
The map $\mathfrak{L}_{1}$ is linear by definition of the convolution
operation. Using Corollary~\ref{corr:convsimplification} we get that the map
$\mathfrak{L}_{1}$ is bounded, since
\begin{equation}
\left\Vert \hat{u}\ast\hat{d};\mathcal{B}_{\alpha,\frac{3}{2},1}\right\Vert
\leq\mathrm{const.}\left\Vert \hat{u};\mathcal{B}_{\alpha,\frac{1}{2},0}\right\Vert \cdot\left\Vert \hat{d};\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}\right\Vert ~, \label{eq:lemQ1}
\end{equation}
and
\begin{equation}
\left\Vert \hat{v}\ast\hat{d};\mathcal{B}_{\alpha,\frac{3}{2},2}\right\Vert
\leq\mathrm{const.}\left\Vert \hat{v};\mathcal{B}_{\alpha,\frac{1}{2},1}\right\Vert \cdot\left\Vert \hat{d};\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}\right\Vert ~. \label{eq:lemQ2}
\end{equation}
\end{proof}
\begin{lemma}
\label{lem:Q}Let $\alpha>3$, $\hat{d}_{3}=\sum_{m=0}^{1}\sum_{n=1}^{3}\partial_{k}\hat{\omega}_{3,n,m}$ and let $\partial_{k}\hat{\omega}_{3,n,m}$
be given by (\ref{eq:dkomegaIntComp3}). Then, we have
\[
\begin{array}
[c]{cccc}
\mathfrak{L}_{2}~\colon & \mathcal{B}_{\alpha,\frac{3}{2},1}\times
\mathcal{B}_{\alpha,\frac{3}{2},2} & \rightarrow & \mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}\\
& \left(
\begin{array}
[c]{c}
\partial_{k}\hat{Q}_{0}\\
\partial_{k}\hat{Q}_{1}
\end{array}
\right) & \longmapsto & \hat{d}_{3}~,
\end{array}
\]
which defines a continuous linear map.
\end{lemma}
\begin{proof}
The map $\mathfrak{L}_{2}$ is linear by definition of $\hat{d}_{3}$ and is
proved to be bounded in Section~\ref{sec:boundsd3}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:main}}
Theorem~\ref{thm:main} is a consequence of the following theorem.
\begin{theorem}
[Existence]\label{thm:existence} Let $\alpha>3$, $\boldsymbol{F}=(F_{1},F_{2})\in C_{c}^{\infty}(\Omega_{+})$, and let $\boldsymbol{\hat{F}}=(\hat{F}_{1},\hat{F}_{2})$ be the Fourier transform of $\boldsymbol{F}$. If
$\Vert(\hat{F}_{2},-\hat{F}_{1});\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}\times\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}\Vert$ is sufficiently
small, then there exists a unique solution $(\hat{\omega},\hat{u},\hat{v},\mathbf{\hat{d}})$ in $\mathcal{V}_{\alpha}\times\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$.
\end{theorem}
\begin{proof}
We have the existence and uniqueness of $(\hat{\omega},\hat{u},\hat{v})\in\mathcal{V}_{\alpha}$ thanks to
\cite{Hillairet.Wittwer-Existenceofstationary2009} and
\cite{Hillairet.Wittwer-Asymptoticdescriptionof2011}. Since $\alpha>3$, we
have by Lemmas~\ref{lem:P} and \ref{lem:Q} that the map $\mathfrak{C}~\colon\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}\rightarrow\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$, $x\mapsto\mathfrak{C}[x]=\mathfrak{L}_{2}[\mathfrak{L}_{1}[\hat{d}_{1}+\hat{d}_{2}+x]+(\partial_{k}\hat{F}_{2},-\partial_{k}\hat{F}_{1})]$ is continuous. Since from
\cite{Hillairet.Wittwer-Existenceofstationary2009} we have that $\left\Vert
(\hat{\omega},\hat{u},\hat{v});\mathcal{V}_{\alpha}\right\Vert \leq
\mathrm{const.}\left\Vert (\hat{F}_{2},-\hat{F}_{1});\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}\times\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}\right\Vert $, we find with (\ref{eq:lemQ1}) and (\ref{eq:lemQ2}) that the
image of $\mathfrak{L}_{1}$ is arbitrarily small. We then have by linearity of
$\mathfrak{L}_{2}$, that $\mathfrak{C}$ has a fixed point since $\left\Vert
(\partial_{k}\hat{F}_{2},-\partial_{k}\hat{F}_{1});\mathcal{B}_{\alpha,\frac{3}{2},1}\times\mathcal{B}_{\alpha,\frac{3}{2},2}\right\Vert <\infty$.
This completes the proof of Theorem~\ref{thm:existence}.
\end{proof}
\bigskip
Theorem~\ref{thm:main} now follows by inverse Fourier transform and the decay
properties are a direct consequence of the spaces of which $\hat{u}$, $\hat
{v}$, $\hat{\omega}$ and $\partial_{k}\hat{\omega}$ are elements. Indeed, for
a function $\hat{f}\in\mathcal{B}_{\alpha,p,q}^{n}$ with $\alpha>3$, $n=0,1$
and $p$, $q$ $\geq0$, we have from the definition of the Fourier transform
that
\begin{equation}
\sup\limits_{x\in\mathbb{R}}\left\vert f(x,y)\right\vert \leq\frac{1}{2\pi
}\int_{\mathbb{R}}\left\vert \hat{f}(k,y)\right\vert
dk~,\label{eq:fouriertodirectspace}
\end{equation}
and from the definition of the function spaces that
\begin{align}
\int_{\mathbb{R}}\left\vert \hat{f}(k,t)\right\vert ~dk & \leq\left\Vert
\hat{f}_{n};\mathcal{B}_{\alpha,p,q}^{n}\right\Vert \int_{\mathbb{R}}\frac
{1}{\kappa^{n}}\left( \frac{1}{t^{p}}\bar{\mu}_{\alpha}(k,t)+\frac{1}{t^{q}}\tilde{\mu}_{\alpha}(k,t)\right) dk\nonumber\\
& \leq\mathrm{const.}\left\Vert \hat{f}_{n};\mathcal{B}_{\alpha,p,q}^{n}\right\Vert \left( \frac{1}{t^{p+(1-n)}}+\frac{1}{t^{q+2(1-n)}}\right)
\nonumber\\
& \leq\frac{\mathrm{const.}}{t^{\min\{p+(1-n),q+2(1-n)\}}}\left\Vert \hat{f}_{n};\mathcal{B}_{\alpha,p,q}^{n}\right\Vert
~.\label{eq:fourierspacebounds}
\end{align}
Combining (\ref{eq:fouriertodirectspace}) and (\ref{eq:fourierspacebounds}) we
have
\[
\sup\limits_{x\in\mathbb{R}}\left\vert f(x,y)\right\vert \leq\frac{\mathrm{const.}}{y^{\min\{p+(1-n),q+2(1-n)\}}}\left\Vert \hat{f}_{n};\mathcal{B}_{\alpha,p,q}^{n}\right\Vert ~.
\]
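For instance, applied to $\hat{\omega}\in\mathcal{B}_{\alpha,\frac{5}{2},1}$ with $n=0$ this gives the decay rate $\min\{\frac{5}{2}+1,1+2\}=3$, i.e.\ the bound $|y^{3}\omega(x,y)|\leq C$ of Theorem~\ref{thm:main}, while $\hat{u}\in\mathcal{B}_{\alpha,\frac{1}{2},0}$ and $\hat{v}\in\mathcal{B}_{\alpha,\frac{1}{2},1}$ give the rate $\min\{\frac{3}{2},2\}=\frac{3}{2}$.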
Finally, we have, using that $(\hat{\omega},\hat{u},\hat{v},\mathbf{\hat{d}})\in\mathcal{V}_{\alpha}\times\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$~, and
that
\[
|x\omega(x,y)|\leq\frac{1}{2\pi}\int_{\mathbb{R}}|\partial_{k}\hat{\omega}(k,y)|dk~,
\]
that
\begin{align*}
|y^{3/2}u(x,y)| & \leq C_{1}~,~|y^{3/2}v(x,y)|\leq C_{2}~,\\
|y^{3}\omega(x,y)| & \leq C_{3}~,~|yx\omega(x,y)|\leq C_{4}~,
\end{align*}
with $C_{i}\in\mathbb{R}$, for $i=1,\ldots,4$, which proves the bound in
Theorem~\ref{thm:main}.
\section{\label{sec:convolutiondisc}Convolution with singularities}
We first recall the convolution result from
\cite{Hillairet.Wittwer-Existenceofstationary2009}.
\begin{proposition}
[convolution]\label{prop:convHW}Let $\alpha$, $\beta>1$, and $r$, $s\geq0$ and
let $a$, $b$ be continuous functions from $\mathbb{R}_{0}\times\lbrack
1,\infty)$ to $\mathbb{C}$ satisfying the bounds
\begin{align*}
\left\vert a(k,t)\right\vert & \leq\mu_{\alpha,r}(k,t)~,\\
\left\vert b(k,t)\right\vert & \leq\mu_{\beta,s}(k,t)~.
\end{align*}
Then, the convolution $a\ast b$ is a continuous function from $\mathbb{R}\times\lbrack1,\infty)$ to $\mathbb{C}$ and we have the bound
\[
\left\vert \left( a\ast b\right) (k,t)\right\vert \leq\mathrm{const.}\left(
\frac{1}{t^{r}}\mu_{\beta,s}(k,t)+\frac{1}{t^{s}}\mu_{\alpha,r}(k,t)\right)
~,
\]
uniformly in $t\geq1$, $k\in\mathbb{R}$.
\end{proposition}
Since $\partial_{k}\hat{\omega}$ diverges like $|\kappa|^{-1}$ at $k=0$ we
need to strengthen this result.
\begin{proposition}
[convolution with $\left\vert \kappa\right\vert ^{-1}$ singularity]\label{prop:convwithroot} Let $\alpha,\tilde{\beta}>1$ and $r,\tilde{s}\geq0$,
let $a$ be as in Proposition~\ref{prop:convHW} and $\tilde{b}$ a continuous
function from $\mathbb{R}_{0}\times\lbrack1,\infty)$ to $\mathbb{C}$,
satisfying the bound
\[
\left\vert \tilde{b}(k,t)\right\vert \leq\left\vert \kappa(k)\right\vert
^{-1}\mu_{\tilde{\beta},\tilde{s}}\left( k,t\right) ~,
\]
then the convolution $a\ast\tilde{b}$ is a continuous function from
$\mathbb{R\times\lbrack}1,\infty)\rightarrow\mathbb{C}$ and we have the bound
\begin{align}
\left\vert (a\ast\tilde{b})(k,t)\right\vert & \leq\mathrm{const.}\left(
\max\left\{ \frac{1}{t^{\frac{\tilde{s}}{2}}},\frac{1}{t^{\frac{\tilde
{s}+r-\tilde{s}^{\prime}}{2}}}\right\} \mu_{\tilde{\beta},\tilde{s}^{\prime}}\left(
k,t\right) +\frac{1}{t^{\frac{\tilde{s}}{2}}}\mu_{\alpha,r}\left(
k,t\right) \right) ~,\label{eq:convwithroot}\\
\left\vert (a\ast\tilde{b})(k,t)\right\vert & \leq\mathrm{const.}\left(
\max\left\{ \frac{1}{t^{\frac{\tilde{s}}{2}}},\frac{1}{t^{r-c\tilde
{s}^{\prime}}}\right\} \mu_{\tilde{\beta}+c,\tilde{s}^{\prime}}\left(
k,t\right) +\frac{1}{t^{\frac{\tilde{s}}{2}}}\mu_{\alpha,r}\left(
k,t\right) \right) ~, \label{eq:convwithrootgainbeta
\end{align}
for $\tilde{s}^{\prime}\leq\tilde{s}$, and $c\in\left\{ \frac{1}{2},1\right\} $.
\end{proposition}
\begin{proof}
We drop the \symbol{126}~to unburden the notation. Continuity is elementary.
Since the functions $\mu_{\alpha,r}$ are even in $k$, we only consider
$k\geq0$. The proof is in two parts, one for $0\leq k\leq t^{-s^{\prime}}$ and
the other for $t^{-s^{\prime}}<k$. The first part is valid for both
(\ref{eq:convwithroot}) and (\ref{eq:convwithrootgainbeta}). For $0\leq k\leq
t^{-s^{\prime}}$, and $\alpha^{\prime}\geq0$, we have
\begin{align*}
\left\vert (a\ast b)(k,t)\right\vert & \leq\int_{\mathbb{R}}\mu_{\alpha
,r}(k^{\prime},t)|\kappa(k-k^{\prime})|^{-1}\mu_{\beta,s}(k-k^{\prime
},t)dk^{\prime}\\
& \leq\sup_{k^{\prime}\in\mathbb{R}}\left( \mu_{\alpha,r}(k^{\prime
},t)\right) \int_{\mathbb{R}}\frac{t^{\frac{s}{2}}}{|\tilde{k}|^{\frac{1}{2}}}\mu_{\beta,s}(\tilde{k},1)\frac{d\tilde{k}}{t^{s}}\\
& \leq\frac{\mathrm{const.}}{t^{\frac{s}{2}}}\leq\frac{\mathrm{const.}}{t^{\frac{s}{2}}}\mu_{\alpha^{\prime},s^{\prime}}(k,t)~,
\end{align*}
where we have used the change of variables $k-k^{\prime}=\tilde{k}/t^{s}$. For
$k>t^{-s^{\prime}}$ and $s^{\prime}\leq s$ we have
\begin{align*}
\left\vert (a\ast b)(k,t)\right\vert & \leq\int_{\mathbb{R}}\mu_{\alpha
,r}(k^{\prime},t)\frac{\mu_{\beta,s}(k-k^{\prime},t)}{|\kappa(k-k^{\prime})|}dk^{\prime}\\
& \leq\underset{:=I_{1}}{\underbrace{\int_{-\infty}^{k/2}\mu
_{\alpha,r}(k^{\prime},t)\frac{\mu_{\beta,s}(k-k^{\prime},t)}{|\kappa
(k-k^{\prime})|}dk^{\prime}}}+\underset{:=I_{2}}{\underbrace{\int
_{k/2}^{\infty}\mu_{\alpha,r}(k^{\prime},t)\frac{\mu_{\beta,s}(k-k^{\prime
},t)}{|\kappa(k-k^{\prime})|}dk^{\prime}}}~.
\end{align*}
The integral $I_{2}$ is the same for (\ref{eq:convwithroot}) and
(\ref{eq:convwithrootgainbeta})
\begin{align*}
I_{2} & =\int_{k/2}^{\infty}\mu_{\alpha,r}(k^{\prime},t)\frac{1}{|\kappa(k-k^{\prime})|}\mu_{\beta,s}(k-k^{\prime},t)dk^{\prime}\\
& \leq\mathrm{const.}~\mu_{\alpha,r}(k/2,t)\int_{\mathbb{R}}\frac{t^{\frac{s}{2}}}{|\tilde{k}|^{\frac{1}{2}}}\mu_{\beta,s}(\tilde{k},1)\frac{d\tilde{k}}{t^{s}}\\
& \leq\mathrm{const.}\frac{1}{t^{s/2}}\mu_{\alpha,r}(k,t)~,
\end{align*}
where again we have used the change of variables $k-k^{\prime}=\tilde{k}/t^{s}$. To compute the integral $I_{1}$ we use that
\begin{align*}
\mu_{\alpha,s}\left( k,t\right) & \leq\mu_{\alpha,s^{\prime}}\left(
k,t\right) ~,\\
\mu_{\alpha,s}\left( k,t\right) \cdot\mu_{\beta,s}\left( k,t\right) &
\leq\mathrm{const.}\mu_{\alpha+\beta,s}\left( k,t\right) ~,
\end{align*}
and, for $k>t^{-s^{\prime}}$
\begin{align*}
\frac{1}{t^{\frac{s^{\prime}}{2}}}\frac{1}{|\kappa(k)|} & \leq
\frac{\mathrm{const.}}{t^{\frac{s^{\prime}}{2}}|k|^{1/2}}\leq\frac{\mathrm{const.}}{2t^{\frac{s^{\prime}}{2}}|k|^{1/2}}\leq\frac{\mathrm{const.}}{1+\left( t^{s^{\prime}}|k|\right) ^{\frac{1}{2}}}\leq\mu_{\frac{1}{2},s^{\prime}}\left( k,t\right) ~,\\
\frac{1}{t^{s^{\prime}}}\frac{1}{|\kappa(k)|} & \leq\frac{\mathrm{const.}}{t^{s^{\prime}}\left( |k|^{1/2}+|k|\right) }\leq\frac{\mathrm{const.}}{t^{s^{\prime}/2}+|k|t^{s^{\prime}}}\leq\frac{\mathrm{const.}}{1+|k|t^{s^{\prime}}}\leq\mu_{1,s^{\prime}}\left( k,t\right) ~.
\end{align*}
To prove (\ref{eq:convwithroot}), we note that
\begin{align*}
I_{1}^{(\ref{eq:convwithroot})} & \leq\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}\frac{|k^{\prime}|^{\frac{1}{2}}}{|k-k^{\prime}|^{\frac{1}{2}}}\mu_{\beta,s}(k-k^{\prime},t)dk^{\prime}\\
& \leq\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}\frac{|k|^{\frac{1}{2}}}{|k-k^{\prime}|^{\frac{1}{2}}}\mu_{\beta,s}(k-k^{\prime},t)dk^{\prime}\\
& +\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}\frac{|k-k^{\prime}|^{\frac{1}{2}}}{|k-k^{\prime}|^{\frac{1}{2}}}\mu_{\beta,s}(k-k^{\prime},t)dk^{\prime}\\
& \leq\frac{|k|^{\frac{1}{2}}}{|k/2|^{\frac{1}{2}}}\mu_{\beta,s}(k/2,t)\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}dk^{\prime}\\
& +\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}\frac{1}{|k-k^{\prime}|^{\frac{1}{2}}}\frac{\mathrm{const.}}{t^{s/2}}\mu_{\beta-\frac{1}{2},s}(k-k^{\prime},t)dk^{\prime}\\
& \leq\frac{\mathrm{const.}}{|k|^{\frac{1}{2}}}\frac{1}{t^{s/2}}\mu_{\beta-\frac{1}{2},s}(k,t)\int_{-\infty}^{k/2}\frac{\mu_{\alpha,r}(k^{\prime},t)}{|k^{\prime}|^{\frac{1}{2}}}dk^{\prime}\\
& \leq\mathrm{const.}\frac{t^{\frac{s^{\prime}}{2}}}{t^{\frac{s}{2}}}\frac{1}{t^{\frac{s^{\prime}}{2}}|k|^{\frac{1}{2}}}\mu_{\beta-\frac{1}{2},s}(k,t)\frac{1}{t^{\frac{r}{2}}}\\
& \leq\mathrm{const.}\frac{t^{\frac{s^{\prime}}{2}}}{t^{\frac{s}{2}}}\mu_{\frac{1}{2},s^{\prime}}\left( k,t\right) \mu_{\beta-\frac{1}{2},s}(k,t)\frac{1}{t^{\frac{r}{2}}}\leq\frac{\mathrm{const.}}{t^{\frac{s+r-s^{\prime}}{2}}}\mu_{\beta,s^{\prime}}(k,t)~,
\end{align*}
where we have used the family of inequalities
\begin{equation}
|k|^{\rho}\mu_{\alpha,r}\left( k,t\right) \leq\mathrm{const.}\frac{1}{t^{\rho r}}\mu_{\alpha-\rho,r}\left( k,t\right) ~,\quad\forall\rho>0~.
\label{eq:ksacrificealphafort}
\end{equation}
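For completeness we note that, for $0<\rho\leq\alpha$, (\ref{eq:ksacrificealphafort}) follows from
\[
|k|^{\rho}\mu_{\alpha,r}(k,t)=\frac{1}{t^{\rho r}}\frac{(|k|t^{r})^{\rho}}{1+(|k|t^{r})^{\alpha}}\leq\frac{\mathrm{const.}}{t^{\rho r}}\frac{1}{1+(|k|t^{r})^{\alpha-\rho}}~.
\]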
Finally, to prove (\ref{eq:convwithrootgainbeta}), we note that
\begin{align*}
I_{1}^{\left( \ref{eq:convwithrootgainbeta}\right) } & \leq\int_{-\infty}^{k/2}\mu_{\alpha,r}(k^{\prime},t)\frac{1}{|\kappa(k-k^{\prime})|}\mu_{\beta,s}(k-k^{\prime},t)dk^{\prime}\\
& \leq\frac{t^{cs^{\prime}}}{t^{cs^{\prime}}}\frac{1}{|\kappa(k/2)|}\mu_{\beta,s}(k/2,t)\int_{\mathbb{R}}\mu_{\alpha,r}(k^{\prime},t)dk^{\prime}\\
& \leq\mathrm{const.}t^{cs^{\prime}}\mu_{c,s^{\prime}}(k,t)\mu_{\beta,s}(k,t)\int_{\mathbb{R}}\mu_{\alpha,r}(k^{\prime},t)dk^{\prime}\\
& \leq\mathrm{const.}\frac{1}{t^{r-cs^{\prime}}}\mu_{\beta+c,s^{\prime}}(k,t)~.
\end{align*}
Collecting the bounds on the integrals $I_{1}^{(\ref{eq:convwithroot})}$,
$I_{1}^{\left( \ref{eq:convwithrootgainbeta}\right) }$ and $I_{2}$ proves
the claim in Proposition~\ref{prop:convwithroot}.
\end{proof}
\begin{corollary}
\label{corr:convsimplification}Let $\alpha>2$ and, for $i=1,2$, $p_{i},q_{i}\geq0$. Let $f\in\mathcal{B}_{\alpha,p_{1},q_{1}}$ and $g\in
\mathcal{D}_{\alpha-1,p_{2},q_{2}}^{1}$. Let
\begin{align*}
p & =\min\{p_{1}+p_{2}+\frac{1}{2},p_{1}+q_{2}+1,q_{1}+p_{2}+\frac{1}{2}\}~,\\
q & =\min\{q_{1}+q_{2}+1,q_{1}+p_{2}+\frac{1}{2}\}~.
\end{align*}
Then $f\ast g\in\mathcal{B}_{\alpha,p,q}$, and there exists a constant $C$,
depending only on $\alpha$, such tha
\[
\Vert f\ast g;\mathcal{B}_{\alpha,p,q}\Vert\leq C~\Vert f;\mathcal{B
_{\alpha,p_{1},q_{1}}\Vert\cdot\Vert g;\mathcal{D}_{\alpha-1,p_{2},q_{2}
^{1}\Vert~.
\]
\end{corollary}
\begin{proof}
We consider the three cases $c\in\{0,\frac{1}{2},1\}$. Let $\tilde{g}$ be a function in $\mathcal{B}_{\tilde{\alpha},\tilde{p},\tilde{q}}^{1}$, with $\tilde{\alpha}=\alpha-c$, $\tilde{p},\tilde{q}\geq0$. The convolution product $f\ast\tilde{g}$ is in each case bounded by a function in $\mathcal{B}_{\alpha,p,q}^{1}$ with $p$ and $q$ given by:
\begin{itemize}
\item if $c=0$, $p=\min\{p_{1}+\tilde{p}+\frac{1}{2},p_{1}+\tilde{q}+1,q_{1}+\tilde{p}+\frac{1}{2}\}$~, $q=\min\{q_{1}+\tilde{q}+1,q_{1}+\tilde{p}+\frac{1}{2}\}$~,
\item if $c=\frac{1}{2}$, $p=\min\{p_{1}+\tilde{p}+\frac{1}{2},p_{1}+\tilde{q}+\frac{1}{2},q_{1}+\tilde{p}+\frac{1}{2}\}~$, $q=\min\{q_{1}+\tilde{q}+1,q_{1}+\tilde{p}+\frac{1}{2}\}$~,
\item if $c=1$, $p=\min\{p_{1}+\tilde{p}+0,p_{1}+\tilde{q}+0,q_{1}+\tilde{p}+\frac{1}{2}\}$~, $q=\min\{q_{1}+\tilde{q}+0,q_{1}+\tilde{p}+\frac{1}{2}\}$~.
\end{itemize}
These are consequences of Proposition~\ref{prop:convwithroot}, using equation (\ref{eq:convwithroot}) for the first case and equation (\ref{eq:convwithrootgainbeta}) for the following two cases, and choosing $s^{\prime}=1$ to bound the term $\frac{1}{t^{p_{1}}}\bar{\mu}_{\alpha}\ast\frac{1}{t^{\tilde{q}}}\tilde{\mu}_{\tilde{\alpha}}$. It is now clear that for a function in $\mathcal{D}_{\alpha-1,p_{2},q_{2}}^{1}$, the terms that yield the lowest $p$ and $q$ are covered by the $c=0$ case above, because what is lost in the bounds on the convolution due to the lower $\tilde{\alpha}$ is gained through higher values of $\tilde{p}$ and $\tilde{q}$ by definition of the space $\mathcal{D}_{\alpha-1,p_{2},q_{2}}^{1}$. This corollary allows us to streamline notations and shorten calculations throughout the paper.
\end{proof}
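To illustrate the bookkeeping in Corollary~\ref{corr:convsimplification} with exponents that actually occur in this paper (this is only an illustrative instance, using $f=\hat{Q}_{i}\in\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}$ and $g\in\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$, i.e.\ $p_{1}=\frac{7}{2}$, $q_{1}=\frac{5}{2}$, $p_{2}=\frac{3}{2}$, $q_{2}=0$), the formulas give
\[
p=\min\{\tfrac{7}{2}+\tfrac{3}{2}+\tfrac{1}{2},\tfrac{7}{2}+0+1,\tfrac{5}{2}+\tfrac{3}{2}+\tfrac{1}{2}\}=\tfrac{9}{2}~,\qquad q=\min\{\tfrac{5}{2}+0+1,\tfrac{5}{2}+\tfrac{3}{2}+\tfrac{1}{2}\}=\tfrac{7}{2}~,
\]
so that for this pairing $f\ast g\in\mathcal{B}_{\alpha,\frac{9}{2},\frac{7}{2}}$.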
\section{\label{sec:bounds}Bounds on $\mathbf{\hat{d}}$}
We present some elementary inequalities and expressions used throughout this
section. Throughout the calculations we will use, without further mention, that
for all $z\in\mathbb{C}$ with $\operatorname{Re}(z)\leq0$ and $N\in
\mathbb{N}_{0}$,
\[
\left\vert \frac{e^{z}-\sum_{n=0}^{N}\frac{1}{n!}z^{n}}{z^{N+1}}\right\vert
\leq\mathrm{const.}~,
\]
and for all $z\in\mathbb{C}$ with $\operatorname{Re}(z)>0$,
\[
\left\vert \frac{e^{z}-\sum_{n=0}^{N}\frac{1}{n!}z^{n}}{z^{N+1}}\right\vert
\leq\mathrm{const.}e^{\operatorname{Re}(z)}~.
\]
We also have that
\[
\partial_{k}\kappa=\frac{2k-i}{2\kappa}~.
\]
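(This is consistent with $\kappa^{2}=k^{2}-ik$ up to an additive constant, since then $2\kappa\,\partial_{k}\kappa=2k-i$.)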
By definition of the norm on $\mathcal{D}_{\alpha,p,q}^{1}$ we must bound
$\kappa\partial_{k}\hat{\omega}$. We thus bound all the terms $\kappa
\partial_{k}\hat{\omega}_{l,n,m}$, with $l=1,2,3$, $n=1,2,3$ and $m=0,1$ (see
definitions (\ref{eq:dkomegaInt}), (\ref{eq:dkomegaIntComp1})-(\ref{eq:dkomegaIntComp3}) and (\ref{eq:f10})-(\ref{eq:dkf31})). This
requires a good deal of book-keeping to track what happens to $\alpha$, $p$,
and $q$. Some of it may be spared when one realizes that all losses in
$\alpha$ occur when applying (\ref{eq:ksacrificealphafort}) where there are
explicit factors $|k|^{c}$ with $c=\{\frac{1}{2},1\}$, which automatically
brings forth a structure satisfying the conditions of
Corollary~\ref{corr:convsimplification}. This allows us to show that each
component $\partial_{k}\hat{\omega}_{l,n,m}$ is an element of a $\mathcal{D}_{\alpha-1,p,q}^{1}$.
From (\ref{eq:Qispace}) we obtain, for $i=0,1$,
\[
\left\vert \hat{Q}_{i}\left( k,s\right) \right\vert \leq\left\Vert \hat{Q}_{i};\mathcal{B}_{\alpha,\frac{7}{2},\frac{5}{2}}\right\Vert \left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\]
which we will use throughout without further mention. We also make use of
equations (\ref{eq:sqrtKleqkappa}) and (\ref{eq:ksacrificealphafort}) without
explicit mention throughout these proofs.
The bounds for the terms $n=2$ take advantage of the fact that, for $1\leq t<2$,
\[
\bar{\mu}_{\alpha}(k,t)\leq\mathrm{const.}~\tilde{\mu}_{\alpha}(k,t)\leq\mathrm{const.}
\]
and, for $t\geq2$ and $\alpha^{\prime}>0$,
\[
e^{\Lambda_{-}(t-1)}\mu_{\alpha,r}(k,t)\leq\mathrm{const.}~e^{\Lambda
_{-}(t-1)}\leq\mathrm{const.}~\tilde{\mu}_{\alpha^{\prime}}(k,t)~,
\]
so that the inequality
\begin{equation}
e^{\Lambda_{-}(t-1)}\mu_{\alpha,r}(k,t)\leq\mathrm{const.}~\tilde{\mu}_{\alpha}(k,t) \label{eq:mutomutilde}
\end{equation}
holds for all $t$ and $\alpha>0$.
\subsection{\label{sec:boundsd1}Bounds on $\hat{d}_{1}$}
To show that $\hat{d}_{1}=\sum_{m=0}^{1}\sum_{n=1}^{3}\partial_{k}\hat{\omega
}_{1,n,m}$ is in $\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$, which constitutes
the first part of Proposition~\ref{prop:d1&d2space}, we first need to recall a
proposition proved in \cite{Hillairet.Wittwer-Existenceofstationary2009}.
\begin{proposition}
\label{prop:fnmbounds}Let $f_{n,m}$ be as given in Section~\ref{sec:integraleq}. Then we have the bounds
\begin{align}
\left\vert f_{1,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~e^{|\Lambda_{-}|\sigma}\min\{|\Lambda_{-}|,|\Lambda_{-}|^{3}\sigma^{2}\}~,\label{eq:bndf10}\\
\left\vert f_{2,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~(\left\vert k\right\vert +\left\vert k\right\vert ^{1/2})e^{-\left\vert k\right\vert \sigma}~,\label{eq:bndf20}\\
\left\vert f_{3,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~e^{\Lambda_{-}\sigma}\min\{1,|\Lambda_{-}|^{2}\}~,\label{eq:bndf30}\\
\left\vert f_{1,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~(1+|\Lambda_{-}|)e^{|\Lambda_{-}|\sigma}\min\{1,|\Lambda_{-}|\sigma\}~,\label{eq:bndf11}\\
\left\vert f_{2,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~\left( 1+|k|\right) e^{-\left\vert k\right\vert \sigma}~,\label{eq:bndf21}\\
\left\vert f_{3,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~e^{\Lambda_{-}\sigma}\min\{1,|\Lambda_{-}|\}~,\label{eq:bndf31}
\end{align}
uniformly in $\sigma\geq0$ and $k\in\mathbb{R}_{0}$.
\end{proposition}
\bigskip
We then note that
\begin{align*}
\left\vert \kappa\partial_{k}\check{K}_{n}(k,\tau)\right\vert & =\left\vert
\frac{1}{2}\tau\kappa\frac{2k-i}{2\kappa}e^{-\kappa\tau}\right\vert
\leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}~,\text{ for }n=1,2~,\\
\left\vert \kappa\partial_{k}\check{K}_{3}(k,\tau)\right\vert & =\left\vert
\frac{1}{2}\tau\kappa\frac{2k-i}{2\kappa}(e^{\kappa\tau}+e^{-\kappa\tau
})\right\vert \leq\mathrm{const.}~\tau(1+|k|)(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau})~.
\end{align*}
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,1,0}$ uses
(\ref{eq:bndf10}) and Propositions~\ref{prop:sgL1} and \ref{prop:sgL2},
leading to
\begin{align*}
& \left\vert \kappa\partial_{k}\hat{\omega}_{1,1,0}\right\vert =\left\vert
\kappa\frac{1}{2}\partial_{k}e^{-\kappa\tau}\int_{1}^{t}\check{f}_{1,0}\left(
k,\sigma\right) \hat{Q}_{0}\left( k,s\right) ds\right\vert \\
& \leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{1}^{t}e^{|\Lambda_{-}|\sigma}\min\{|\Lambda_{-}|,|\Lambda_{-}|^{3}\sigma^{2}\}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{1}^{\frac{t+1}{2}}e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|^{3}\sigma^{2}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{\frac{t+1}{2}}^{t}e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)\left( \frac{1}{t^{\frac{5}{2}}}\bar{\mu
}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,1,0}\in\mathcal{D}_{\alpha-1,\frac{5}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,2,0}$ uses
(\ref{eq:bndf20}), Proposition~\ref{prop:sgk3} and (\ref{eq:mutomutilde}),
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{1,2,0}\right\vert & =\left\vert
\kappa\frac{1}{2}\partial_{k}e^{-\kappa\tau}\int_{t}^{\infty}\check{f}_{2,0}(k,s-1)\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}e^{|k|\tau}\int
_{t}^{\infty}(|k|^{\frac{1}{2}}+|k|)e^{-|k|\sigma}\left( \frac{1}{s^{\frac
{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha
}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}\left( \frac{1}{t^{2}}\bar{\mu}_{\alpha}+\frac{1}{t^{1}}\tilde{\mu}_{\alpha}\right) \leq\mathrm{const.}~(1+|k|)\frac{1}{t^{1}}\tilde{\mu}_{\alpha}~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,2,0}\in\mathcal{D}_{\alpha-1,\infty,1}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,3,0}$ uses
(\ref{eq:bndf30}) and Proposition~\ref{prop:sgL3}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{1,3,0}\right\vert & =\left\vert
\frac{1}{2}\kappa\partial_{k}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty
}\check{f}_{3,0}(k,s-1)\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~\tau(1+|k|)(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau
})\int_{t}^{\infty}\min\{1,|\Lambda_{-}|\}e^{\Lambda_{-}\sigma}\left(
\frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~\tau e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}(1+|\Lambda_{-}|)\min\{1,|\Lambda_{-}|\}e^{\Lambda_{-}\sigma}\left( \frac
{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu
}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{5}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,3,0}\in\mathcal{D}_{\alpha-1,\frac{5}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,1,1}$ uses
(\ref{eq:bndf11}) and Propositions~\ref{prop:sgL1} and \ref{prop:sgL2},
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{1,1,1}\right\vert & =\left\vert
\kappa\frac{1}{2}\partial_{k}e^{-\kappa\tau}\int_{1}^{t}\check{f}_{1,1}(k,s-1)\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{1}^{t}(1+|\Lambda_{-}|)e^{|\Lambda_{-}|\sigma}\min\{1,|\Lambda_{-}|\sigma\}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{1}^{t}e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|\sigma\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~\tau(1+|k|)e^{\Lambda_{-}\tau}\int_{1}^{t}|\Lambda
_{-}|e^{|\Lambda_{-}|\sigma}\min\{1,|\Lambda_{-}|\sigma\}\left( \frac
{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu
}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)\left( \tilde{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,1,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,2,1}$ uses
(\ref{eq:bndf21}), Proposition~\ref{prop:sgk3} and (\ref{eq:mutomutilde}),
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{1,2,1}\right\vert & =\left\vert
\frac{1}{2}\kappa\partial_{k}e^{-\kappa\tau}\int_{t}^{\infty}\check{f}_{2,1}(k,s-1)\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~(1+|k|)\tau e^{\Lambda_{-}\tau}\int_{t}^{\infty
}(1+|k|)e^{-|k|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}\left( \frac{1}{t^{\frac
{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha
}\right) \leq\mathrm{const.}~(1+|k|)\frac{1}{t^{\frac{1}{2}}}\tilde{\mu
}_{\alpha}~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,2,1}\in\mathcal{D}_{\alpha-1,\infty,\frac{1}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{1,3,1}$ uses
(\ref{eq:bndf31}) and Proposition~\ref{prop:sgL3}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{1,3,1}\right\vert & =\left\vert
\kappa\frac{1}{2}\partial_{k}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty
}\check{f}_{3,1}\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~\tau(1+|k|)(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau
})\int_{t}^{\infty}e^{\Lambda_{-}\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~\tau(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau})\int
_{t}^{\infty}(1+|\Lambda_{-}|)e^{\Lambda_{-}\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu
}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha
}(k,t)+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha}(k,t)\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{1,3,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},\frac{1}{2}}^{1}$.
Collecting the bounds we find that $\hat{d}_{1}\in\mathcal{D}_{\alpha
-1,\frac{3}{2},0}^{1}$~, which completes the first part of the proof of
Proposition~\ref{prop:d1&d2space}.
\subsection{\label{sec:boundsd2}Bounds on $\hat{d}_{2}$}
To show that $\hat{d}_{2}=\sum_{m=0}^{1}\sum_{n=1}^{3}\partial_{k}\hat{\omega
}_{2,n,m}$ is in $\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$, which constitutes
the second part of Proposition~\ref{prop:d1&d2space}, we first need to show
bounds on the functions $\partial_{k}\check{f}_{n,m}$.
\begin{proposition}
\label{prop:kappadkfnmbounds}Let $\partial_{k}\check{f}_{n,m}$ be as given in
Section~\ref{sec:integraleq}. Then we have the bounds
\begin{align}
\left\vert \kappa\partial_{k}\check{f}_{1,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~\min\{(1+|\Lambda_{-}|\sigma),(s+|\Lambda_{-}|)|\Lambda_{-}|^{2}\sigma\}e^{|\Lambda_{-}|\sigma}~,\label{eq:bnddkf10}\\
\left\vert \kappa\partial_{k}\check{f}_{2,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~(|k|^{\frac{1}{2}}+|k|^{2})\sigma e^{-|k|\sigma}~,\label{eq:bnddkf20}\\
\left\vert \kappa\partial_{k}\check{f}_{3,0}(k,\sigma)\right\vert & \leq\mathrm{const.}~(1+|\Lambda_{-}|\sigma)e^{\Lambda_{-}\sigma}~,\label{eq:bnddkf30}\\
\left\vert \kappa\partial_{k}\check{f}_{1,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~(1+|\Lambda_{-}|^{2})\sigma e^{|\Lambda_{-}|\sigma}~,\label{eq:bnddkf11}\\
\left\vert \kappa\partial_{k}\check{f}_{2,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~(1+|k|^{2})\sigma e^{-|k|\sigma}~,\label{eq:bnddkf21}\\
\left\vert \kappa\partial_{k}\check{f}_{3,1}(k,\sigma)\right\vert & \leq\mathrm{const.}~(1+|\Lambda_{-}|)\sigma e^{\Lambda_{-}\sigma}~,\label{eq:bnddkf31}
\end{align}
uniformly in $\sigma\geq0$ and $k\in\mathbb{R}_{0}$.
\end{proposition}
\begin{proof}
We multiply (\ref{eq:dkf10})-(\ref{eq:dkf31}) by $\kappa$ and bound the
products. The function $\kappa\partial_{k}\check{f}_{1,0}$ is bounded in two
ways. We have a straightforward bound
\[
\left\vert \kappa\partial_{k}\check{f}_{1,0}(k,\sigma)\right\vert
\leq\mathrm{const.}~(1+|\Lambda_{-}|\sigma)e^{|\Lambda_{-}|\sigma}~.
\]
Since leading terms cancel, we get
\begin{align*}
\left\vert \kappa\partial_{k}\check{f}_{1,0}(k,\sigma)\right\vert &
\leq\left\vert \frac{i}{2}\left( e^{\kappa\sigma}+e^{-\kappa\sigma
}-2e^{-|k|\sigma}\right) -\frac{ik^{2}}{2\kappa^{2}}\left( e^{\kappa\sigma
}-e^{-\kappa\sigma}\right) +2\frac{k^{2}+|k|\kappa}{k}(e^{-|k|\sigma
}-e^{-\kappa\sigma})\right\vert \\
& +\left\vert i\frac{k^{2}+\kappa^{2}}{2\kappa}\left( e^{\kappa\sigma}-e^{-\kappa\sigma}\right) \sigma+\frac{k^{2}+|k|\kappa}{k}\frac{k^{2}+\kappa^{2}}{\kappa}e^{-\kappa\sigma}\sigma-2\kappa\frac{k^{2}+|k|\kappa}{k}e^{-|k|\sigma}\sigma\right\vert \\
& \leq\mathrm{const.}~|(e^{\kappa\sigma}-1-\kappa\sigma)+(e^{-\kappa\sigma}-1+\kappa\sigma)-2(e^{-|k|\sigma}-1)|\\
& +\mathrm{const.}\left\vert \frac{k^{2}}{\kappa^{2}}\left( (e^{\kappa\sigma}-1)-(e^{-\kappa\sigma}-1)\right) \right\vert +\mathrm{const.}~|\Lambda_{-}||(e^{-|k|\sigma}-1)-(e^{-\kappa\sigma}-1)|\\
& +\mathrm{const.}\left\vert \frac{k^{2}+\kappa^{2}}{2\kappa}\left( (e^{\kappa\sigma}-1)-(e^{-\kappa\sigma}-1)\right) \sigma\right\vert \\
& +\mathrm{const.}\left\vert \frac{k^{2}+|k|\kappa}{k}\frac{k^{2}+\kappa^{2}}{\kappa}e^{-\kappa\sigma}\sigma\right\vert +\mathrm{const.}\left\vert \kappa\frac{k^{2}+|k|\kappa}{k}e^{-|k|\sigma}\sigma\right\vert \\
& \leq\mathrm{const.}~(|\Lambda_{-}|^{2}\sigma^{2}+|k|\sigma)e^{|\Lambda
_{-}|\sigma}+\mathrm{const.}~|\Lambda_{-}|^{3}\sigma e^{|\Lambda_{-}|\sigma
}+\mathrm{const.}~|\Lambda_{-}|^{2}\sigma e^{|\Lambda_{-}|\sigma}\\
& +\mathrm{const.}~|\Lambda_{-}|^{2}\sigma^{2}e^{|\Lambda_{-}|\sigma
}+\mathrm{const.}~|\Lambda_{-}|^{2}\sigma e^{|\Lambda_{-}|\sigma
}+\mathrm{const.}~|\Lambda_{-}|^{2}\sigma e^{|\Lambda_{-}|\sigma}\\
& \leq\mathrm{const.}~(s+|\Lambda_{-}|)|\Lambda_{-}|^{2}\sigma e^{|\Lambda
_{-}|\sigma}~.
\end{align*}
Then we have
\[
\left\vert \kappa\partial_{k}\check{f}_{1,0}(k,\sigma)\right\vert
\leq\mathrm{const.}~\min\{(1+|\Lambda_{-}|\sigma),(s+|\Lambda_{-}|)|\Lambda_{-}|^{2}\sigma\}e^{|\Lambda_{-}|\sigma}~,
\]
which proves (\ref{eq:bnddkf10}).
To bound $\kappa\partial_{k}\check{f}_{2,0}(k,\sigma)$ we use that, since
$|k|\leq\operatorname{Re}(\kappa)$ for all $k$,
\begin{align}
\left\vert e^{-|k|\sigma}-e^{-\kappa\sigma}\right\vert & \leq\mathrm{const.}~e^{-|k|\sigma}\left\vert 1-e^{(|k|-\kappa)\sigma}\right\vert \nonumber\\
& \leq\mathrm{const.}~e^{-|k|\sigma}\left\vert |k|-\kappa\right\vert \sigma\nonumber\\
& \leq\mathrm{const.}~(|k|^{\frac{1}{2}}+|k|)\sigma e^{-|k|\sigma}~,
\label{eq:exp(abs(k)-kappa)}
\end{align}
such that
\begin{align*}
\left\vert \kappa\partial_{k}\check{f}_{2,0}(k,\sigma)\right\vert &
\leq\left\vert \frac{(|k|+\kappa)^{2}}{k}\left( e^{-|k|\sigma}-e^{-\kappa\sigma}\right) -2\frac{\kappa+|k|}{k}\left( |k|\kappa e^{-|k|\sigma}-\frac{k^{2}+\kappa^{2}}{2}e^{-\kappa\sigma}\right) \sigma\right\vert \\
& \leq\mathrm{const.}~(1+|k|)(|k|^{\frac{1}{2}}+|k|)e^{-|k|\sigma}\sigma+\mathrm{const.}~(|k|+|k|^{2})e^{-|k|\sigma}\sigma\\
& +\mathrm{const.}~(|k|^{\frac{1}{2}}+|k|^{2})e^{-|k|\sigma}\sigma\\
& \leq\mathrm{const.}~(|k|^{\frac{1}{2}}+|k|^{2})\sigma e^{-|k|\sigma}~,
\end{align*}
which gives (\ref{eq:bnddkf20}).
To bound $\kappa\partial_{k}\check{f}_{3,0}\left( k,\sigma\right) $ we have
the straightforward bound
\begin{align*}
\left\vert \kappa\partial_{k}\check{f}_{3,0}\left( k,\sigma\right)
\right\vert & \leq\left\vert \kappa\frac{k}{2\kappa^{3}}e^{-\kappa\sigma
}\right\vert +\left\vert \kappa\frac{k^{2}+\kappa^{2}}{2\kappa^{2}}\sigma
e^{-\kappa\sigma}\right\vert \\
& \leq\mathrm{const.}~(1+|\Lambda_{-}|)\sigma e^{\Lambda_{-}\sigma}~,
\end{align*}
which yields (\ref{eq:bnddkf30}).
To bound $\kappa\partial_{k}\check{f}_{1,1}\left( k,\sigma\right) $ we have
\begin{align*}
\left\vert \kappa\partial_{k}\check{f}_{1,1}(k,\sigma)\right\vert &
\leq\left\vert i\frac{\left( |k|+\kappa\right) ^{2}}{|k|}(e^{-|k|\sigma
}-e^{-\kappa\sigma})\right\vert +\left\vert \frac{k^{2}+\kappa^{2}}{2k}\left(
e^{\kappa\sigma}+e^{-\kappa\sigma}\right) \sigma\right\vert \\
& +\left\vert 2i\frac{k^{2}+|k|\kappa}{k^{2}}\left( \frac{k^{2}+\kappa^{2}}{2}e^{-\kappa\sigma}-|k|\kappa e^{-|k|\sigma}\right) \sigma\right\vert \\
& \leq\mathrm{const.}~(1+|k|)(|k|+|\Lambda_{-}|)\sigma+\mathrm{const.}~(1+|k|)\sigma e^{|\Lambda_{-}|\sigma}\\
& +\mathrm{const.}~|\Lambda_{-}|((1+|k|)+|\Lambda_{-}|)\sigma\leq
\mathrm{const.}~(1+|\Lambda_{-}|^{2})\sigma e^{|\Lambda_{-}|\sigma}~,
\end{align*}
and thus we have (\ref{eq:bnddkf11}).
To bound $\kappa\partial_{k}\check{f}_{2,1}\left( k,\sigma\right) $ we use
(\ref{eq:exp(abs(k)-kappa)}) to bound
\begin{align*}
\left\vert \kappa\partial_{k}\check{f}_{2,1}\left( k,\sigma\right)
\right\vert & \leq\left\vert i\frac{\left( |k|+\kappa\right) ^{2}}{|k|}(e^{-\kappa\sigma}-e^{-|k|\sigma})\right\vert +\left\vert i(|k|+\kappa
)\kappa\frac{k^{2}+\kappa^{2}}{k^{2}}e^{-\kappa\sigma}\sigma\right\vert \\
& +|2i\kappa\left( |k|+\kappa\right) e^{-|k|\sigma}\sigma|\\
& \leq\mathrm{const.}~(1+|k|)(|k|^{\frac{1}{2}}+|k|)\sigma e^{-|k|\sigma
}+\mathrm{const.}~(|k|+|k|^{2})(1+|k|^{-1})e^{-|k|\sigma}\sigma\\
& +\mathrm{const.}~(|k|+|k|^{2})e^{-|k|\sigma}\sigma\leq\mathrm{const.}~(1+|k|^{2})\sigma e^{-|k|\sigma}~,
\end{align*}
which leads to (\ref{eq:bnddkf21}).
Finally, to bound $\kappa\partial_{k}\check{f}_{3,1}\left( k,\sigma\right) $
we have the straightforward bound
\[
\left\vert \kappa\partial_{k}\check{f}_{3,1}\left( k,\sigma\right)
\right\vert \leq\left\vert \frac{k^{2}+\kappa^{2}}{2k}e^{-\kappa\sigma}\sigma\right\vert \leq\mathrm{const.}~(1+|\Lambda_{-}|)\sigma e^{\Lambda
_{-}\sigma}~,
\]
and therefore we have (\ref{eq:bnddkf31}). This completes the proof of
Proposition~\ref{prop:kappadkfnmbounds}.
\end{proof}
\bigskip
We may now bound $\hat{d}_{2}$. The bound on the function $\kappa\partial
_{k}\hat{\omega}_{2,1,0}$ uses (\ref{eq:bnddkf10}) and
Propositions~\ref{prop:sgL1} and \ref{prop:sgL2}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,1,0}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{1}^{t}\kappa\partial_{k}\check{f}_{1,0}\left(
k,\sigma\right) \hat{Q}_{0}\left( k,s\right) ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{1}^{t}\min\{(1+|\Lambda_{-}|\sigma),(s+|\Lambda_{-}|)|\Lambda_{-}|^{2}\sigma\}e^{|\Lambda_{-}|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{1}^{\frac{t+1}{2}}(s+|\Lambda_{-}|)|\Lambda_{-}|^{2}\sigma e^{|\Lambda_{-}|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{\frac{t+1}{2}}^{t}(1+|\Lambda
_{-}|\sigma)e^{|\Lambda_{-}|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu
}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|\Lambda_{-}|)\frac{1}{t^{\frac{3}{2}}}\tilde{\mu
}_{\alpha}+\mathrm{const.}\left( \frac{1}{t^{\frac{5}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,1,0}\in\mathcal{D}_{\alpha-1,\frac{5}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{2,2,0}$ uses
(\ref{eq:bnddkf20}), Proposition~\ref{prop:sgk3} and (\ref{eq:mutomutilde}),
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,2,0}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{t}^{\infty}\kappa\partial_{k}\check{f}_{2,0}(k,s-1)\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}e^{|k|\tau}\int_{t}^{\infty
}(|k|^{\frac{1}{2}}+|k|^{2})\sigma e^{-|k|\sigma}\left( \frac{1}{s^{\frac
{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha
}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}\left( \frac{1}{t^{2}}\bar{\mu}_{\alpha}+\frac{1}{t^{1}}\tilde{\mu}_{\alpha}\right) \leq\mathrm{const.}~(1+|k|)\frac{1}{t^{1}}\tilde{\mu}_{\alpha}~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,2,0}\in\mathcal{D}_{\alpha-1,\infty,1}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{2,3,0}$ uses
(\ref{eq:bnddkf30}) and Proposition~\ref{prop:sgL3}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,3,0}\right\vert & =\left\vert
\frac{1}{2}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty}\kappa\partial
_{k}\check{f}_{3,0}(k,s-1)\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}(1+|\Lambda
_{-}|\sigma)e^{\Lambda_{-}\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu
}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}e^{\Lambda
_{-}\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac
{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}|\Lambda
_{-}|e^{\Lambda_{-}\sigma}\left( \frac{1}{s^{\frac{5}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{s^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{5}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,3,0}\in\mathcal{D}_{\alpha-1,\frac{5}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{2,1,1}$ uses
(\ref{eq:bnddkf11}) and Propositions~\ref{prop:sgL1} and \ref{prop:sgL2},
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,1,1}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{1}^{t}\kappa\partial_{k}\check{f}_{1,1}(k,s-1)\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{1}^{t}(1+|\Lambda_{-}|^{2})\sigma e^{|\Lambda_{-}|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu
}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \tilde{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha}\right) \\
& +\mathrm{const.}~\frac{1}{t^{1}}\tilde{\mu}_{\alpha}+\mathrm{const.}~|\Lambda_{-}|\left( \frac{1}{t^{\frac{5}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,1,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{2,2,1}$ uses
(\ref{eq:bnddkf21}), Proposition~\ref{prop:sgk3} and (\ref{eq:mutomutilde}),
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,2,1}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{t}^{\infty}\kappa\partial_{k}\check{f}_{2,1}(k,s-1)\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}e^{|k|\tau}\int_{t}^{\infty
}(1+|k|^{2})\sigma e^{-|k|\sigma}\left( \frac{1}{s^{\frac{7}{2}}}\bar{\mu
}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}\left( \frac{1}{t^{\frac
{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha
}\right) \leq\mathrm{const.}~(1+|k|)\frac{1}{t^{\frac{1}{2}}}\tilde{\mu
}_{\alpha}~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,2,1}\in\mathcal{D}_{\alpha-1,\infty,\frac{1}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{2,3,1}$ uses
(\ref{eq:bnddkf31}) and Proposition~\ref{prop:sgL3}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{2,3,1}\right\vert & =\left\vert
\frac{1}{2}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty}\kappa\partial
_{k}\check{f}_{3,1}\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau})\int
_{t}^{\infty}(1+|\Lambda_{-}|)\sigma e^{\Lambda_{-}\sigma}\left( \frac
{1}{s^{\frac{7}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{\frac{5}{2}}}\tilde{\mu
}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{\frac{1}{2}}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{2,3,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},\frac{1}{2}}^{1}$.
Collecting the bounds we have that $\hat{d}_{2}\in\mathcal{D}_{\alpha
-1,\frac{3}{2},0}^{1}$~, which completes the second part of the proof of
Proposition~\ref{prop:d1&d2space}.
\subsection{\label{sec:boundsd3}Bounds on $\hat{d}_{3}$}
We prove the bounds on $\hat{d}_{3}$ needed to complete the proof of
Lemma~\ref{lem:Q}. For compatibility with the maps $\mathfrak{L}_{1}$ and
$\mathfrak{L}_{2}$ we will bound $\kappa\hat{d}_{3}$ instead of $\hat{d}_{3}$.
Throughout this proof we will use without further mention the bounds
\begin{align*}
\left\vert \partial_{k}\hat{Q}_{0}\left( k,s\right) \right\vert &
\leq\left\Vert \partial_{k}\hat{Q}_{0}\right\Vert \left( \frac{1}{s^{\frac
{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{1}}\tilde{\mu}_{\alpha}\right) ~,\\
\left\vert \partial_{k}\hat{Q}_{1}\left( k,s\right) \right\vert &
\leq\left\Vert \partial_{k}\hat{Q}_{1}\right\Vert \left( \frac{1}{s^{\frac
{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ~.
\end{align*}
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,1,0}$ uses
(\ref{eq:bndf10}) and Propositions~\ref{prop:sgL1} and \ref{prop:sgL2},
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,1,0}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{1}^{t}\check{f}_{1,0}\left( k,\sigma\right)
\kappa\partial_{k}\hat{Q}_{0}\left( k,s\right) ds\right\vert \\
& \leq\mathrm{const.}~|\Lambda_{-}|e^{\Lambda_{-}\tau}\int_{1}^{t}e^{|\Lambda_{-}|\sigma}\min\{|\Lambda_{-}|,|\Lambda_{-}|^{3}\sigma^{2}\}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~|\Lambda_{-}|e^{\Lambda_{-}\tau}\int_{1}^{\frac{t+1}{2}}e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|^{3}\sigma^{2}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~|\Lambda_{-}|e^{\Lambda_{-}\tau}\int_{\frac{t+1}{2}}^{t}e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~|\Lambda_{-}|\left( \frac{1}{t^{3}}\tilde{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{1}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,1,0}\in\mathcal{D}_{\alpha-1,\frac{3}{2},1}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,2,0}$ uses
(\ref{eq:bndf20}), (\ref{eq:mutomutilde}) and Proposition~\ref{prop:sgk3},
which, to be applicable, requires first the use of
(\ref{eq:ksacrificealphafort}) to trade a $|k|$ for an $s^{-1}$ multiplying
$\bar{\mu}_{\alpha}$ and $\tilde{\mu}_{\alpha}$. We then have
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,2,0}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{t}^{\infty}\check{f}_{2,0}(k,s-1)\kappa
\partial_{k}\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{t}^{\infty}(|k|+|k|^{\frac
{1}{2}})(|k|^{\frac{1}{2}}+|k|)e^{-|k|\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}e^{|k|\tau}\int_{t}^{\infty
}|k|e^{-|k|\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac
{1}{s^{\frac{5}{2}}}\bar{\mu}_{\alpha-1}\right) ds\\
& +\mathrm{const.}~e^{\Lambda_{-}\tau}e^{|k|\tau}\int_{t}^{\infty}\left(
1+|k|\right) e^{-|k|\sigma}\frac{1}{s^{3}}\tilde{\mu}_{\alpha-1}ds\\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\left( \frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{5}{2}}}\bar{\mu}_{\alpha-1}+\frac
{1}{t^{2}}\tilde{\mu}_{\alpha-1}\right) \\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha
}+\frac{1}{t^{2}}\tilde{\mu}_{\alpha-1}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,2,0}\in\mathcal{D}_{\alpha-1,\infty,\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,3,0}$ uses
(\ref{eq:bndf30}) and Proposition~\ref{prop:sgL3}, which, to be applicable,
requires first the use of (\ref{eq:ksacrificealphafort}) to trade a
$|\Lambda_{-}|$ for an $s^{-1/2}$ multiplying $\tilde{\mu}_{\alpha}$. We then
have
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,3,0}\right\vert & =\left\vert
\frac{1}{2}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty}\check{f}_{3,0}(k,s-1)\kappa\partial_{k}\hat{Q}_{0}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}\min\{1,|\Lambda_{-}|\}e^{\Lambda_{-}\sigma}|\Lambda_{-}|\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s}\tilde{\mu}_{\alpha}\right)
ds\\
& \leq\mathrm{const.}~e^{|\Lambda_{-}|\tau}\int_{t}^{\infty}|\Lambda
_{-}|e^{\Lambda_{-}\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha-\frac{1}{2}}+\frac{1}{s^{3}}\tilde{\mu
}_{\alpha-1}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{2}}\tilde{\mu}_{\alpha-\frac{1}{2}}+\frac{1}{t^{3}}\tilde{\mu
}_{\alpha-1}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,3,0}\in\mathcal{D}_{\alpha-1,\frac{3}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,1,1}$ uses
(\ref{eq:bndf11}) and Propositions~\ref{prop:sgL1} and \ref{prop:sgL2},
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,1,1}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{1}^{t}\check{f}_{1,1}(k,s-1)\kappa\partial
_{k}\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{1}^{t}(1+|\Lambda
_{-}|)e^{|\Lambda_{-}|\sigma}\min\{1,|\Lambda_{-}|\sigma\}|\Lambda_{-}|\left(
\frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|\Lambda_{-}|)e^{\Lambda_{-}\tau}\int_{1}^{\frac{t+1}{2}}|\Lambda_{-}|e^{|\Lambda_{-}|\sigma}|\Lambda_{-}|\sigma\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& +\mathrm{const.}~(1+|\Lambda_{-}|)e^{\Lambda_{-}\tau}\int_{\frac{t+1}{2}}^{t}|\Lambda_{-}|e^{|\Lambda_{-}|\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|\Lambda_{-}|)\left( \frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{t^{2}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,1,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},\frac{3}{2}}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,2,1}$ uses
(\ref{eq:bndf21}), Proposition~\ref{prop:sgk3} and (\ref{eq:mutomutilde}),
leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,2,1}\right\vert & =\left\vert
\frac{1}{2}e^{-\kappa\tau}\int_{t}^{\infty}\check{f}_{2,1}(k,s-1)\kappa
\partial_{k}\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~e^{\Lambda_{-}\tau}\int_{t}^{\infty}(1+|k|)|\Lambda
_{-}|e^{-|k|\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}e^{|k|\tau}\int_{t}^{\infty
}(|k|^{\frac{1}{2}}+|k|)e^{-|k|\sigma}\left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}~(1+|k|)e^{\Lambda_{-}\tau}\left( \frac{1}{t^{1}}\bar{\mu}_{\alpha}+\frac{1}{t^{\frac{3}{2}}}\tilde{\mu}_{\alpha}\right)
\leq\mathrm{const.}~(1+|k|)\frac{1}{t^{1}}\tilde{\mu}_{\alpha}~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,2,1}\in\mathcal{D}_{\alpha-1,\infty,1}^{1}$.
The bound on the function $\kappa\partial_{k}\hat{\omega}_{3,3,1}$ uses
(\ref{eq:bndf31}) and Proposition~\ref{prop:sgL3}, leading to
\begin{align*}
\left\vert \kappa\partial_{k}\hat{\omega}_{3,3,1}\right\vert & =\left\vert
\frac{1}{2}(e^{\kappa\tau}-e^{-\kappa\tau})\int_{t}^{\infty}\check{f}_{3,1}\kappa\partial_{k}\hat{Q}_{1}(k,s)ds\right\vert \\
& \leq\mathrm{const.}~(e^{|\Lambda_{-}|\tau}+e^{\Lambda_{-}\tau})\int_{t}^{\infty}e^{\Lambda_{-}\sigma}\left\vert \Lambda_{-}\right\vert \left( \frac{1}{s^{\frac{3}{2}}}\bar{\mu}_{\alpha}+\frac{1}{s^{2}}\tilde{\mu}_{\alpha}\right) ds\\
& \leq\mathrm{const.}\left( \frac{1}{t^{\frac{3}{2}}}\bar{\mu}_{\alpha
}+\frac{1}{t^{2}}\tilde{\mu}_{\alpha}\right) ~,
\end{align*}
which shows that $\kappa\partial_{k}\hat{\omega}_{3,3,1}\in\mathcal{D}_{\alpha-1,\frac{3}{2},2}^{1}$.
Collecting the bounds we have that $\hat{d}_{3}\in\mathcal{D}_{\alpha
-1,\frac{3}{2},1}^{1}\subset\mathcal{D}_{\alpha-1,\frac{3}{2},0}^{1}$, which
proves Lemma~\ref{lem:Q}.
|
1,941,325,220,478 | arxiv | {\section{Introduction}}
Kites have been flown in the sky for several centuries. Nowadays, they are no longer considered just a toy for children. The flight of a kite is a complex physical phenomenon, and scientists have shown renewed interest in its dynamics and have investigated new applications or improved existing ones. Until recently, most such studies concerned the use of kites as a tool to acquire meteorological data or as equipment for extreme sports, but a new frontier is now appearing: wind energy conversion.
Current wind technology, based on wind towers, has many limitations in terms of energy production, costs and environmental impact. In fact, wind turbines not only impact the surrounding environment through the land usage of their installation and the noise generated by their blades, but the power they can provide is also limited by the low altitude at which they operate, no more than 100-150 m above the ground (see, e.g., \cite{Milborrow}).
The possibility of collecting wind energy at high altitudes could bring a major improvement in the design of next generation wind power plants.
This task can be achieved by using non-powered flight vehicles such as kites, which can provide a means to transfer wind energy from higher altitudes, between 500 and 1000 m above the ground, to a power conversion system on the ground by means of tethers (see, e.g., \cite{Breukels}).
The design of such a high-altitude wind power generator requires a careful aerodynamic design of the kites, integrated with automated flight control.
The mathematical models of the power system configurations which have been proposed up to now -- e.g.\ the laddermill, the yo-yo and carousel configurations for wind energy extraction and the towing configuration for the propulsion of ships \cite{skysails, fagiano2012a} -- have focused on the design of non-linear predictive controllers \cite{Canale,canale2010,fagiano2012,fagiano2010,novara2011,Ilzhoer,Williams,Lyod}. These controllers aim
to maximize energy generation while preventing the airfoils from falling to the ground or tethers from tangling.
In these control models, a constant lift coefficient and a constant drag coefficient have always been considered for the kite wing. However, the ``engine'' of such a wind generator is a power kite and model-based control systems may not perform well without an accurate representation of the kite dynamics.
Therefore, in order to understand how kites can convert wind energy into electric energy the first step is to examine the aerodynamic performance of a kite.
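The stakes of an accurate aerodynamic characterization can be appreciated from Loyd's classical crosswind estimate \cite{Lyod}, quoted here only for orientation (standard notation, with $A$ the wing area and $V_{w}$ the wind speed; these symbols are not quantities computed in the present paper):
\[
P_{\max}\simeq\frac{2}{27}\,\rho\,A\,V_{w}^{3}\,C_{L}\left(\frac{C_{L}}{C_{D}}\right)^{2}~,
\]
so that the extractable power depends on the lift coefficient and, quadratically, on the aerodynamic efficiency $C_{L}/C_{D}$ -- precisely the quantities examined below.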
\begin{figure*}
\centering
{\includegraphics[width=1.30\columnwidth]{Fig1.ps}}%
\vskip -3mm
\caption{Curved wing flow schematic and reference system (the direction $x$ is the chord direction); the chords are all parallel.
}
\label{kite_scheme}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=1.5\columnwidth]{Fig2.ps}}%
\caption{Schematic of the curved wing and of the two airfoils analysed to validate the numerical code performances.
}
\label{airfoils_sections}
\end{figure*}
In this paper, we provide a preliminary set of data concerning the aerodynamics of an arc-shaped, rigid, non-twisted and non-tethered wing which models an actual kite wing.
The aspect ratio of the kite wing and the Reynolds number of the air flow -- which gives a measure of the ratio of inertial forces to viscous forces -- that we have simulated are typical of current traction kite applications (aspect ratio $AR = 3.2$, Reynolds number based on the chord length $Re = 3 \times 10^6$, see e.g. \cite{Canale}). These data can be used to improve the design of purpose-built kites for energy extraction and the related flight control strategies. They can also represent a benchmark for comparison with experimentally collected data and with any future unsteady flow simulations carried out by means of the large eddy simulation method. The present simulations rely on the Reynolds Averaged Navier-Stokes (RANS) model and were carried out using the STAR-CCM+ Computational Fluid Dynamics code developed by CD-Adapco \cite{CD-Adapco}. This code solves the Navier-Stokes equations for an incompressible fluid using a finite-volume discretization. In order to verify the correct setting and flow features of the STAR-CCM+ code, a comparison between the numerical and the laboratory characteristics of two archetype two-dimensional airfoils -- the NACA0012 and Clark Y profiles -- is also shown.
One of the aims of the paper is to obtain a comparison between the aerodynamics of a flat and a curved twin-set of rigid non-twisted wings with the same aspect ratio and the same Reynolds number, which is a topic that has not been discussed frequently in the literature. The comparison is carried out by considering the boundary layer to be turbulent throughout. This choice slightly penalizes the prediction of the aerodynamic drag and can be considered a sort of systematic error we introduce {\it a priori} into the analysis to avoid the insertion of a non-rational parametrization of the three-dimensional transition on the kite, caused by the lack of reliable information about transition over curved, three-dimensional, non-axisymmetric surfaces.
In the context of kite dynamics, the present study, even though carried out by numerically simulating the three-dimensional turbulent viscous flow past an arc-shaped curved wing, should be considered of a preliminary nature. We did not address stability or optimization aspects, which
could be considered in future works where the present physically comprehensive numerical simulations could be joined to stability and/or optimization techniques. It should be noted that, in this context, a few papers have instead been published on simplified aerodynamic models of the complete system - i.e. kites and tethers. For example, stability in a simplified simulation where the kite is a flat two-dimensional wing was considered by Alexander and Stevenson (2001) \cite{Alexander}, the dynamics of circular trajectories of a rigid flat kite was studied by Stevenson and Alexander (2006) \cite{Stevenson}, and the optimization of the spanwise twist distribution - in the limit of a high wing aspect ratio - was addressed by Jackson (2005) \cite{Jakson} using the inviscid lifting line theory.
The paper is organized as follows. The next section is dedicated to the numerical simulation methodology. The third section presents the comparison between the aerodynamic properties of two wing sections, the Clark Y and the NACA0012 profiles, obtained from our simulations and those obtained from laboratory measurements. A fourth section is dedicated to the aerodynamics of the kite wing compared to the equivalent flat wing. We also give information about the mean pressure and kinetic energy fields along many kite sections and in the wake. The concluding remarks are in the last section.
\begin{figure*}
\centering
{\includegraphics[width=\columnwidth]{Fig4a_draft.ps}}
{\includegraphics[width=\columnwidth]{Fig4b_draft.ps}}%
\caption{On the left: NACA0012 airfoil mesh ($Re = 3 \times 10^6$). On the right: the half flat wing volume mesh (Clark Y airfoil, $Re = 3 \times 10^6$, AR=6).}
\label{volumemesh}
\end{figure*}
\vspace{2mm}
\section{Numerical method}
The STAR-CCM+ industrial CFD code has been used to carry out the simulations. This code solves the Reynolds Averaged Navier-Stokes equations for an incompressible fluid using an unstructured, collocated finite-volume technique \cite{Ferziger}. The convection contribution to the velocity increment is predicted by an upwind scheme, while a centered spatial discretization of the convection is introduced as a deferred correction (implicit pressure-correction method, SIMPLE \cite{Caretto} and SIMPLEC \cite{Doormal} algorithms). The Crank--Nicolson scheme is used for diffusion. The global scheme is thus second-order in space for steady-state flows and, formally, first-order in time-dependent flows.
This integration scheme is very stable, which is a necessary condition for a commercial code.
The problem symmetry, see figures \ref{kite_scheme}-\ref{airfoils_sections}, allows us to consider the infinite half-space by the side of the plane of symmetry of the wings as the computational domain. The domain boundaries are located three chord lengths upstream and six chord lengths downstream from the leading edge. The upper and lower boundaries are placed at five chord lengths each from the leading edge. The lateral boundary is located at three chord lengths from the wing tip.
Velocity and pressure are imposed on the domain inlet and uniformity conditions are imposed on the lateral boundaries and on the outlet. This includes the symmetry condition on the symmetry plane. The wing surface is treated like a rigid, non porous, wall where a no-slip condition applies.
The grid mesh is composed of an inner layer surrounding the wing surface with a thickness that is suitable to capture the boundary layer. The mesh is particularly refined around the leading and trailing edges of the wing, while it is coarser on the remaining wing part of the surface. The outer mesh is composed of tetrahedral elements that become coarser towards the external boundaries of the computational domain, see figure \ref{volumemesh}.
The optimal cell density has been estimated by running several two-dimensional cases with a progressively increasing number of cells until a good agreement with laboratory data, obtained from the existing literature, has been reached. The mesh has been extended along the wing span, while maintaining a similar density, until a final cell count close to $2.5 \times 10^6$ has been obtained, see figure \ref{volumemesh}.
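Although only the final cell counts are reported here, a common way to quantify such a refinement study (a sketch of standard practice, not part of the STAR-CCM+ workflow used in this work) is to estimate the observed order of convergence of a scalar result, such as the lift coefficient, computed on three successively refined grids:

\begin{verbatim}
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of grid convergence (Richardson-type estimate).

    f_coarse, f_medium, f_fine: the same scalar result (e.g. C_L) on three
    grids, each refined by a constant ratio r in cell size.
    """
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Illustrative values only (not data from this paper):
print(observed_order(1.080, 1.100, 1.105))  # -> 2.0, i.e. second order
\end{verbatim}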
The simulations were carried out using a commonly employed eddy-viscosity turbulence model, the turbulent viscosity transport equation model by Spalart and Allmaras \cite{Spalart}. This model was built using heuristics and dimensional analysis arguments. The transport equation is local, which means that the equation at one point does not depend on the solution at the neighbouring points, and it includes a non-viscous reduction term that depends on the distance from the wall. This property makes the model compatible with grids of any structure. A laminar-turbulence transition was not imposed, the boundary layer was considered turbulent throughout. This choice was motivated, on the one hand, by the desire to avoid the inclusion of uncertain parameters linked to the as yet unknown transition dynamics on curved three-dimensional non axial-symmetrical surfaces, and, on the other, because of the awareness that the associated overestimation of the drag coefficients implies results on the safer side.
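For reference, the transport equation solved for the working variable $\tilde{\nu}$ reads, in its standard form (trip terms omitted; notation as in \cite{Spalart}, not as in the STAR-CCM+ implementation):
\[
\frac{\partial\tilde{\nu}}{\partial t}+u_{j}\frac{\partial\tilde{\nu}}{\partial x_{j}}=c_{b1}\tilde{S}\tilde{\nu}-c_{w1}f_{w}\left(\frac{\tilde{\nu}}{d}\right)^{2}+\frac{1}{\sigma}\left[\frac{\partial}{\partial x_{j}}\left((\nu+\tilde{\nu})\frac{\partial\tilde{\nu}}{\partial x_{j}}\right)+c_{b2}\frac{\partial\tilde{\nu}}{\partial x_{j}}\frac{\partial\tilde{\nu}}{\partial x_{j}}\right]~,
\]
where $d$ is the distance from the wall: the $-c_{w1}f_{w}(\tilde{\nu}/d)^{2}$ contribution is the non-viscous, wall-distance-dependent reduction term mentioned above.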
\begin{figure}
\centering
{\includegraphics[width=\columnwidth]{fig4a_test.eps}}\\
{\includegraphics[width=\columnwidth]{fig4b_test.eps}}
\caption{Comparison between the polar curves of the NACA0012 and Clark Y profiles obtained from the present numerical simulations (STAR-CCM+ code with the Spalart-Allmaras turbulence model) and from laboratory measurements in literature: Abbott and von Doenhoff \cite{Abbott}, Silverstein \cite{R502} and Jacobs and Abbott \cite{TR669}.}
\label{2Dpolars}
\end{figure}
\begin{figure}
\centering
{\includegraphics[width=\columnwidth]{fig5a_test.eps}}\\ %
{\includegraphics[width=\columnwidth]{fig5b_test.eps}}%
\caption{Aerodynamic characteristics of the curved wing contrasted with the equivalent flat wing. The Reynolds number, based on the chord length, is equal to $3 \times 10^6$. All simulations have been carried out with the STAR-CCM+ code using the Spalart-Allmaras turbulence model. The optimum loading for a tension kite from \cite{Jakson} is shown as a reference.}
\label{kite_polars}
\end{figure}
{\section{Validation of the numerical results.\\ The NACA0012 and Clark Y airfoil test cases.}}
The numerical results produced using the STAR-CCM+ code have been validated through a comparison with results produced in the laboratory for two important two-dimensional test cases, the incompressible flows past the NACA0012 and the Clark Y profiles, at various angles of attack.
In order to obtain a set of numerical results that are consistent with laboratory results in a range of different flow conditions it is necessary to achieve sufficient confidence in the use of the code parameter setting. This setting is of utmost importance as it specifies the physical model of the problem under study.
For this reason, we analysed the flow around two wing sections for which a large amount of laboratory literature is available. The NACA 0012 is a symmetrical airfoil with a maximum thickness of 12\%\ of the chord. This is probably the most extensively studied wing section. The Clark Y is an asymmetrical profile with a flat bottom. This profile is the section proposed for the kite wing. An extended laboratory database was also available for this section.
The flow past the two wing sections has a Reynolds number $Re$ of $3 \times 10^6$. The angle of incidence $\alpha$ is varied from $0^{\rm o}$ to $20^{\rm o}$. The $Re$ is based on the chord length, $c=1$ m, and on the freestream velocity, $U_{\infty}=43.86$ m/s. The air density is set to $\rho=1.225$ kg/m$^3$, the pressure to $p_{\infty}=101325$ Pa, the dynamic viscosity to $\mu=1.79\times 10^{-5}$ Pa$\,$s and the temperature to $T_{\infty}=288$ K. The boundary layer is considered turbulent throughout, a condition which is physically close to the flow configuration observed experimentally for sections with a non-zero surface roughness.
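These values are mutually consistent; as a quick arithmetic check (a back-of-the-envelope script, not part of the simulation set-up):

\begin{verbatim}
rho = 1.225      # air density [kg/m^3]
U_inf = 43.86    # free-stream velocity [m/s]
c = 1.0          # chord length [m]
mu = 1.79e-5     # dynamic viscosity [Pa s]

Re = rho * U_inf * c / mu   # chord-based Reynolds number
print(f"Re = {Re:.3e}")     # -> Re = 3.002e+06, matching the stated value
\end{verbatim}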
The domain extends three chords in front of the profile, six chords downstream from the leading edge and ten chords in the transversal direction. The mesh consists of 30105 cells.
The mesh is refined through prism layers in the proximity of the airfoil. Approaching the stall condition, the number of iterations has been increased, from about 1000 to about 1800, to ensure solution convergence. This criterion was also applied to the three-dimensional simulations, where the number of iterations necessary to converge turned out to be of the same order of magnitude.
The results concerning the polar curve for the NACA0012 airfoil were compared with the experimental measurements reported by Abbott \& von Doenhoff \cite{Abbott}. Figure~\ref{2Dpolars} shows the numerical prediction of the lift and drag coefficients of the airfoils compared with the laboratory data. These coefficients are defined as $C_l=L/(\rho U_{\infty}^2 c/2)$ and $C_d=D/(\rho U_{\infty}^2 c/2)$, where $L$ and $D$ are the lift and drag forces per unit length and $c$ is the chord length.
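A minimal helper illustrating this normalization (illustrative only; the force values themselves come from the flow solver):

\begin{verbatim}
def airfoil_coefficients(L, D, rho=1.225, U_inf=43.86, c=1.0):
    """Sectional coefficients C_l, C_d from lift/drag per unit span [N/m]."""
    q = 0.5 * rho * U_inf**2       # dynamic pressure [Pa]
    return L / (q * c), D / (q * c)

# Example: forces chosen to give C_l = 1.0, C_d = 0.01 at these conditions.
q = 0.5 * 1.225 * 43.86**2
print(airfoil_coefficients(L=1.0 * q, D=0.01 * q))  # -> (1.0, 0.01)
\end{verbatim}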
As expected, since the boundary layer was considered turbulent along the entire profile, the agreement is excellent with the data relevant to the airfoil with standard roughness. The agreement becomes more qualitative with regard to the data relevant to smooth profiles. As previously explained, in this last case, the numerical prediction of the drag coefficient is biased. It can in fact be observed that imposing a ubiquitously turbulent boundary layer induces an over-estimation of the drag of about 40\%. However, it is important to note that this bias does not spoil the parallelism between the laboratory and numerical polar curves.
For the Clark Y airfoil, the numerical prediction of the aerodynamic characteristics was compared with the wind tunnel measurements carried out in the Langley variable-density tunnel ($Re = 8.37 \times 10^6$, \cite{TR669}) and in the full-scale tunnel ($Re = 3 \times 10^6$, \cite{R502}). It should be noted that, for the Clark Y profile, data for rough surfaces are not present in the literature. Even though old, we decided to consider these experimental data because they were not produced in a low-turbulence tunnel. As a consequence, in principle, these data can offer a closer {\it a priori} estimate of the flow configuration over a profile where the boundary layer is turbulent throughout. Figure~\ref{2Dpolars} shows a comparison between the numerical and laboratory experiments. As above, it can be seen that the numerical polar is parallel to the laboratory one. Again in this case, the numerical polar over-predicts the drag coefficient by about 40--50\%.
In conclusion, we can observe a good trend of the polar curves, which are parallel to the experimental ones. The drag is over-estimated, as it should be, since we decided not to enlarge this preliminary study of the kite wing aerodynamics with other parameters to fix the transition line on the wings. The drag over-estimation should therefore be considered a consistent result.
\begin{figure*}
\centering
{\includegraphics[width=\columnwidth]{Cp_Flat_a6_provv.ps}}%
\quad%
{\includegraphics[width=\columnwidth]{Cp_Kite_a6_provv.ps}}%
\caption{Pressure coefficient distributions at different $z=const$ sections along the flat wing (part a). Pressure coefficient distributions at different $\theta=const$ sections along the curved wing (part b). The angle of incidence is $6^{\rm o}$; $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$.}
\label{Cp_6}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=\columnwidth]{Cp_Flat_a18_provv.ps}}%
\quad%
{\includegraphics[width=\columnwidth]{Cp_Kite_a18_provv.ps}}%
\caption{
Pressure coefficient distributions at different $z=const$ sections along the flat wing (part a). Pressure coefficient distributions at different $\theta=const$ sections along the curved wing (part b). The angle of incidence is $18^{\rm o}$; $Re = 3 \times 10^6$ based on the chord length, $AR = 3.2$}
\label{Cp_18}
\end{figure*}
\vspace{2mm}
\section{Aerodynamics of the curved kite wing and modification of the flow past the wing. Comparison with the equivalent flat wing}
We present two main sets of results. The first is associated with the polar curves $C_L=C_L(\alpha)$, $C_D=C_D(\alpha)$ and with the pressure distribution on the upper and lower surfaces of both the flat and curved wings.
These data specify the aerodynamic characteristics of the two kinds of wings. We adopted the Clark Y airfoil as the wing section for both the curved kite wing and the straight wing. The aerodynamic lift and drag coefficients are formed using the reference force $1/2 \rho S U_{\infty}^2$, where $S$ is half the net surface (or the surface formed by the set of wing chords). The Reynolds number, based on the chord length and the free stream air velocity, is fixed and equal to $3 \times 10^6$. The polar curves are shown in figure \ref{kite_polars}, while the pressure distributions on the flat wing (part a) and on the curved wing (part b) are shown in figures \ref{Cp_6} and \ref{Cp_18}.
As explained before (see the introduction and the previous sections), the drag coefficients are biased (overestimated by about 40--50\%) due to the turbulence ubiquity conditions that were adopted in the boundary layer, in the absence of a reliable criterion to approximate the three-dimensional transition on the curved surfaces of the kite wing.
The polars were obtained by varying the angle of attack in the $[-8^{\rm o}, 20^{\rm o}]$ range. Figure \ref{kite_polars} shows the polar curves for two flat wings with aspect ratios equal to $3.37$ and $6$, and for the kite wing with an aspect ratio of $3.2$. It can be observed that the characteristics are very close in the angle of attack range $[-8^{\rm o}, 4^{\rm o}]$, to which a range of lift coefficients from $-0.2$ to $0.6$ corresponds. As expected, the slope of the lift coefficient curve deteriorates (decreases) a little moving from the flat wing with $AR=6$ to the flat wing with $AR=3.37$, and from the latter to the curved wing with $AR=3.37$. Beyond an angle of attack of about $4^{\rm o}$, the lift coefficient curve for the curved wing starts to bend. Stall is reached at an angle of attack of $18^{\rm o}$, where $C_L = 1.1$ and $C_D = 0.18$. It should be noted that the flat wings have not stalled yet at $18^{\rm o}$ of incidence.
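The reduction of the lift-curve slope with decreasing aspect ratio is the classical finite-wing effect; as a rough guide (a standard lifting-line estimate, not a result of the present simulations),
\[
C_{L\alpha}\simeq\frac{a_{0}}{1+\dfrac{a_{0}}{\pi e\,AR}}~,
\]
where $a_{0}$ is the two-dimensional lift-curve slope and $e$ the Oswald efficiency factor, so that halving $AR$ visibly flattens the lift curve.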
An analogous behaviour is shown by the $C_D = C_D(\alpha)$ polar curve, which deteriorates as the aspect ratio is reduced and when switching from the flat to the curved configuration. This curve contains the information on the aerodynamic efficiency. For the flat wing with $AR=6$, the maximum efficiency is $40$. This value reduces to $26$ for both the flat and the curved wings ($AR \sim 3$).
The set of pressure distributions on the lower and upper wing surfaces also describes the lift distribution on the wing. The pressure is normalized in the form of pressure coefficients $C_p$ ($C_p = (p - p_{\infty})/(\frac{1}{2} \rho U^2_{\infty})$, where the suffix $\infty$ denotes the asymptotic upstream condition, $\rho$ is the density and $U_{\infty}$ the free stream speed). The two wing distributions are compared in figures \ref{Cp_6} and \ref{Cp_18} at $\alpha = 6^{\rm o}$ and
$\alpha = 18^{\circ}$, respectively. The distributions agree with the behaviour observed in the laboratory on a two-dimensional wing with a Clark Y section \cite{Wenzinger}. In particular, with the exclusion of the wing tip region, a slightly negative pressure value is observed at the trailing edge, which is a typical feature of the Clark Y profile.
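To make the normalizations used in this section explicit, the following minimal sketch (in OCaml, purely for illustration) computes the dynamic pressure, the pressure coefficient and a generic force coefficient; the function names and the SI-unit convention are our own assumptions, not taken from the solver.
\begin{verbatim}
(* Minimal sketch of the normalizations above; SI units assumed.
   Function names are illustrative, not from the CFD code. *)
let q_inf rho u_inf = 0.5 *. rho *. u_inf *. u_inf   (* dynamic pressure [Pa] *)
let cp p p_inf rho u_inf = (p -. p_inf) /. q_inf rho u_inf
let force_coeff f rho u_inf s_ref =                  (* C_L or C_D *)
  f /. (q_inf rho u_inf *. s_ref)
\end{verbatim}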
The position along the span is measured in terms of the angle $\theta$ for the curved wing and of the $z$ coordinate for the flat wing, in figures \ref{Cp_6} and \ref{Cp_18}, see figures \ref{kite_scheme} and \ref{airfoils_sections}. The correspondence is made in such a way that corresponding sections have the same value of the curvilinear (for the curved wing) or rectilinear (for the flat wing) coordinate running along the wing span. It can be noted that, at the lower angle of attack of $6^{\circ}$, see figure \ref{Cp_6}, the flat wing and the curved wing show similar pressure distributions at the corresponding sections. The distributions are almost equal on the lower surface. On the upper surface, but only in the region near the leading edge, a difference of $10$--$15\%$ is observed in the central part of the wing, a value which increases to $40$--$50\%$ in the wing tip region. The same trend can be observed at the angle of attack of $18^{\circ}$, see figure \ref{Cp_18}. However, the difference in the pressure values at the leading edge now rises to values close to $15\%$ in the central part of the wing and close to $100\%$ in the wing tip region, see also the flow visualizations in figures \ref{kite_streamlines_alfa6} and \ref{kite_streamlines_alfa18}.
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{flatkite-realkiteA6_P.ps}}%
\caption{Pressure levels in flow sections ($y, z$) across the wing and the wake at 3/4, 1, 2 and 3 chord lengths. Part a) flat wing. Part b) curved wing. The angle of incidence is $6^{\rm o}$; $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$, $p_\infty = 101325$ Pa.}
\label{PA6}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{flatkite-realkiteA18_P_v2.ps}}%
\caption{Pressure levels in flow sections ($y, z$) across the wing and the wake at 3/4, 1, 2 and 3 chord lengths. Part a) flat wing. Part b) curved wing. The angle of incidence is $18^{\rm o}$; $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$, $p_\infty = 101325$ Pa.}
\label{PA18}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{flatkite-realkiteA6_E.ps}}%
\caption{Averaged kinetic energy levels in flow sections ($y, z$) across the wing and the wake at 3/4, 1, 2 and 3 chord lengths downstream from the leading edge. Part a) flat wing. Part b) curved wing. The angle of incidence is $6^{\rm o}$; $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$, $E_\infty=247.45$ Pa.}
\label{EA6}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{flatkite-realkiteA18_E.ps}}%
\caption{Averaged kinetic energy levels in flow sections ($y, z$) across the wing and the wake at 3/4, 1, 2 and 3 chord lengths downstream from the leading edge. Part a) flat wing. Part b) curved wing. The angle of incidence is $18^{\rm o}$; $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$; $E_\infty=247.45$ Pa.}
\label{EA18}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{realkite_pressure_streamlines_a6_outlet.ps}}%
\caption{Flow streamlines for the curved wing at an incidence of $6^{\rm o}$; the streamline visualization is associated to the pressure levels obtained on the wing surface, the symmetry plane and outlet boundary. $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$. Note that the $x$ direction is the chord direction, see figure \ref{kite_scheme}, and it is parallel to the lateral domain boundaries.}
\label{kite_streamlines_alfa6}
\end{figure*}
\begin{figure*}
\centering
{\includegraphics[width=0.65\textwidth]{realkite_pressure_streamlines_a18_outlet.ps}}%
\caption{Flow streamlines for the curved wing at an incidence of $18^{\rm o}$; the streamline visualization is associated to the pressure levels obtained on the wing surface, the symmetry plane and outlet boundary. $Re=3 \times 10^6$ based on the chord length, $AR = 3.2$. Note that the $x$ direction is the chord direction, see figure \ref{kite_scheme}, and it is parallel to the lateral domain boundaries.}
\label{kite_streamlines_alfa18}
\end{figure*}
The second set of results concerns the structure of the flow field, in particular the pressure and kinetic energy fields, see figures \ref{PA6} - \ref{kite_streamlines_alfa18}.
The pressure field visualization in figures \ref{PA6} and \ref{PA18} shows that the pressure distributions are qualitatively similar for the two kinds of wings. The pressure drop above the wing and inside the wing tip vortices is more intense at the higher angle of attack. At three chord lengths behind the leading edge, for both angles of attack, the pressure becomes almost uniform, with the exclusion of the traces of the tip vortices. A similar overall behaviour can also be observed for the kinetic energy field, see figures \ref{EA6} - \ref{EA18}. Most of the averaged kinetic energy is concentrated above the wing and just outside the wing tip vortex cores. Above the curved wing, for both angles of attack, a comparatively small kinetic energy, with respect to the flat configuration, can be seen. This means that, in this section (3/4 of the chord), the flow on the curved wing has already separated, a fact that can also be deduced from the pressure distribution near the wing ends in figures \ref{Cp_6} and \ref{Cp_18}.
Furthermore, if we consider the vorticity in the wake and, in particular, that in the tip vortices, interesting observations can be made. For instance, in the 3/4 chord section, the vorticity produced by the flat wing is 1.3 times that produced by the curved wing at $\alpha=6^{\rm o}$, a value that decreases to 1.075 at $\alpha=18^{\rm o}$. However, if we compare the vorticity of the wing tips at $6^{\rm o}$ to that at $18^{\rm o}$, a ratio of 0.78 can be observed for the flat wing and of 0.64 for the curved wing. Moving downstream to the section at three chord lengths, the vorticity produced by the curved wing becomes slightly higher than that produced by the flat wing (the flat/curved ratio is 0.99 at $6^{\rm o}$ and 0.85 at $18^{\rm o}$). This is in agreement with the fact that the kinetic energy at three chords downstream is about 22\%\ of that on the wing for the flat configuration, and about 28\%\ for the curved configuration. These observations mean that the curvature is less efficient in increasing the vorticity than an increase of the angle of incidence, but that it is capable, due to non-linear convective effects, of inducing a slower spatial decay in the near wake.
Another interesting point is that, as the angle of incidence is changed, the convergence of the tip vortex axes remains almost constant ($2.1^{\rm o}$) for the flat wing, while it increases for the curved wing ($1.9^{\rm o}$ at $6^{\rm o}$ of angle of attack, $2.5^{\rm o}$ at $18^{\rm o}$ of angle of attack). This can also be seen in figures \ref{kite_streamlines_alfa6} and \ref{kite_streamlines_alfa18} by observing the visualization of the streamlines of the tip vortices. Thus, it can be concluded that the curvature induces more intense non-linear effects (convection and stretching) on the vorticity and that, as a consequence, the vortices keep their identity over longer distances.
We conclude this section by citing the work of Jackson \cite{Jakson}, where it is noted that a possible design point for a kite made of a flexible membrane, like the ones available on the market, could be $C_L=0.55$ and $C_D=0.1$.
This optimized result was obtained under several hypotheses: inviscid flow, lifting line theory (asymptotically accurate for large aspect ratios) and, last but not least, the need to maintain a constant tension in the kite canopy.
In our case, a rigid wing has been considered, showing that light but stiff structures, such as, for instance, inflatable wings, could preferably be employed to obtain a higher efficiency.
In fact, from the polar curve in figure \ref{2Dpolars}, it can be seen that the curved wing at the same lift coefficient $C_L=0.55$ has a definitely lower drag coefficient $C_D=0.04$.
However, the design of similar structures requires the analysis of complex fluid structure interactions due to the small bending stiffness of the wing and the huge deformations that occur under aerodynamic loads. This kind of analysis was not the aim of the present work.
\section{Conclusions}
In this work we have compared the aerodynamics of two rigid, non-twisted, non-tethered wings that are alike in all aspects (i.e. shape and profile section, aspect ratio and Reynolds number) but which differ in their curvature: an arc-shaped curved wing which models a kite wing and a reference straight wing. Given the lack of information on transition on curved wings, we carried out the comparison between the flat and curved configurations by modelling the boundary layer on the wings as turbulent from the leading edge.
The results were obtained through the computation of numerical solutions of the Reynolds-averaged Navier-Stokes equations (STAR-CCM+ code) for the mean flow. We observed a slight deterioration of the overall aerodynamic performance of the curved wing (non-tethered kite) with respect to the flat configuration. Towards the wing tips, the lift on the curved wing was comparatively lower than that of the flat wing, due to a more extended separation region above the airfoil.
A non-trivial behaviour was observed in the vorticity dynamics in the near wake, up to five chords downstream from the trailing edge.
The curved wing did not generate more intense wing tip vortices than the flat wing; however, their downstream decay in the near wake was slower. At the higher angle of incidence, $\alpha=18^{\rm o}$, the curved wing induced a higher convergence of the wing-tip vortices, whereas the flat wing maintained a constant convergence.
Such information could be useful for the design of a system configuration in which a set of kites fly under mutual interference (in a ladder or carousel configuration), as proposed for wind generator systems.
Moreover, the data outlined in this study have three other implications. Firstly, they can be a first step towards more advanced, unsteady simulations, namely large eddy simulations of the flow field near the wing and inside the wake. Furthermore, they could be used by designers of kites for wind power plants to improve the set-up of automated non-linear control systems. Lastly, they could represent a basic mean flow to be used as an equilibrium starting point for a perturbative stability analysis.
We continue on the research path initiated in \cite{YanBel2013} where
the concepts of extension and their combination were introduced for the first time.
In this seminal work proofs were formalized as rewriting strategies and
extensions were formalized as second-order rewriting strategies.
However the combination of extensions was done via composition, not allowing for conflicts between
extensions. The complete principle of the extension-combination
method was introduced in \cite{belkhir:SYNASC:15}. In that work, we
presented the design and implementation of a user language for
the specification of rewriting-strategy-based proofs and
extensions. We also stated computation rules for combinations of
extensions. Although we considered combinations for a small
class of usual rewriting strategies such as \texttt{OuterMost} and \texttt{InnerMost},
the question whether this class, or possibly a wider class, is
closed under combination was left open, as well as the question of
the correctness and soundness of the combination formulae.
This question was addressed in \cite{belkhir:hal-01277395}, where the authors introduced the larger class of \emph{context embedding strategies},
or \ces for short.
This framework involves more
elementary operations but generating a wider class of rewriting
strategies.
Although the idea of combination is kept the same,
the tools and the techniques are different.
The elementary extension operation on a
term is still an enrichment by context insertion. However, the
traversal strategies in a CE-strategy are built with a jump operator and an
iterator/fixed-point operator instead of more complex strategies such as \texttt{OuterMost}.
This class is indeed closed under combination and the correctness of the combination operation was proved.
Although the class of CE-strategies enjoys nice algebraic properties, it has a major practical drawback: it is built up with low-level strategy
constructors, making it hard to use in practice.
In particular, the definition of the traversal navigation strategies such as \texttt{OuterMost} yields a CE-strategy whose size depends on the signature.
Even worse, the size of the resulting combined CE-strategy can be exponential with respect to the size of the two input CE-strategies.
In this article we overcome these difficulties by finding another class of strategies, called \emph{high level context embedding strategies},
or HCE-strategies for short, which is a strict subclass of the class of CE-strategies.
It enjoys similar algebraic properties and seems reasonably easy to use in practice.
In particular, the class of HCE-strategies is closed by combination, and
the size of the resulting combined HCE-strategy is polynomial with respect of the size of the two input HCE-strategies.
The strategy language underlying both the CE-strategies and the HCE-strategies is inspired by
the modal $\mu$-calculus \cite{rudimemt:mu-calculus:book}.
Instead of formulating the strategy language as in \cite{RewriteStrat_CHK2003}, the $\mu $-calculus-like approach makes
the strategy constructors more rudimentary, and therefore makes tractable
the question of language closure under combination. Moreover, the
formulae for the combination of \ces, together with their
verification, are also much simplified.
This article presents a new approach, based on reusability, for the development of complex models described by abstract terms.
This method is based on two operations. The operation of
extension transforms a reference model into a more complex model by
adding or \emph{embedding} sub-trees, and the combination assembles
several extensions to produce one that has all the characteristics
of those used for its generation. At this stage, the process is
purely operative and does not include any aspect of model semantics.
The concept of the combination of two extensions is well illustrated with the term
$\partial _{x}v(x)$ that plays the role of the reference model, with an
extension that adds an index $j$ on the variable $x$ of derivation,
and with an extension that adds an index $i$ on the derivated
function $v$. Applying these two extensions to the reference term
yields the terms $\partial _{x}v_{i}(x)$ and $\partial
_{x_{j}}v(x)$. The combination of these two extensions applied to
the reference term might yield $\partial _{x_{j}}v_{i}(x)$.
The concept of extension, also called refinement, is developed in
different contexts, for example in
\cite{Gorrieri20011047-action-refinement} the refinement is done by
replacement of components with more complex components. Combination
principles are present in different areas of application, they
involve different techniques but follow the same key idea.
For instance, the works on the combination of logics \cite{Ghilardi03algebraicand,Combining:Logics:13}, algorithms,
verification methods \cite{TAP:2015}, and decision procedures \cite{Combining:decision:procedures:MannaZ02}
share a common principle of incremental design of complex systems
by integration of simple and heterogeneous subsystems.
The integration of the two concepts of extension and combination
seems to have not been addressed in the literature. To make it
simple to operate and effective, we have adopted the simplest
possible principles. Reformulating the above description in terms of
trees, an extension applied to a reference tree is an operation of
context insertion at different positions. We call it
a \emph{position based strategy for Context Embedding} or shortly a \emph{position-based {\textsf{CE}-strate}gy}. A combination of several
extensions therefore consists of all of their contexts and insertion
positions. Obviously if two contexts have to be inserted at the same
place they are first assembled one above the other before insertion
excepted if they are identical. In the latter case, the context is
inserted one time only so that the extensions are idempotent for the
operation of combination. With this definition, the combination of
two position-based {\textsf{CE}-strate}gies is another one so that
this set of extensions is closed by combination. Note that, unlike
these kinds of extensions, extensions comprising substitutions cannot
be combined. The principle of {\textsf{CE}-strate}gies has been
developed for a software tool that performs the automatic derivation of
multiscale models based on partial differential equations and that
uses asymptotic methods. The first target applications are in micro and
nanotechnology \cite{YanBel2013,belkhir2014symbolic,belkhir:SYNASC:15}.
The drawback of the principle of extensions at positions is its lack
of robustness with respect to changes in the reference tree. Indeed,
any change to the tree requires a new determination of the insertion
positions. To add flexibility and robustness, the strategy of
insertion at given positions is complemented by strategies of navigation
in trees using pattern matching. This leads to the broader concept
of extensions called \emph{strategies for Context Embedding} or
\emph{{\textsf{CE}-strate}gies} for shortness. This class of
extensions can be expressed with a language of high-level strategies \cite{Cirstea:Rew:Staretgies:03,Terese03}.
To perform the combination of two {\textsf{CE}-strate}gies in view of its application to a particular reference
tree, we start by detecting the positions of the context insertions of the
{\textsf{CE}-strate}gies when they are applied to the tree. This
makes it possible to build the equivalent position-based {\textsf{CE}-strate}gies
and then to achieve the combination without further
difficulty.
It is natural to ask whether the step of replacement of strategies
by positions can be avoided, i.e. if it is possible to determine
formulas of combination for {\textsf{CE}-strate}gies that are
expressed as high-level strategies. Of course, the combination
formulas should be theoretically validated by comparison to the
principle of combination based on positions.
Thus, combination formulas may be set as definitions, but their \emph{correctness} has to be proved. To this end, a preliminary step is to
establish calculation formulas for the positions associated with any {\textsf{CE}-strate}gy applied to any reference tree.
In our work, we found that the combination of extensions based on
high-level strategies such as \texttt{BottomUp} or \texttt{TopDown}
cannot be expressed with high-level strategies. We thus understood
that more rudimentary strategies are needed, especially
operators of jumping and iteration with fixed points issued from the
$\mu$-calculus \cite{rudimemt:mu-calculus:book}. From this standpoint, we asked the
question of finding a class of {\textsf{CE}-strate}gies which is
closed under the operation of combination. Moreover, we consider it
highly desirable that a number of nice algebraic properties, such as the
associativity of the combination of position-based {\textsf{CE}-strate}gies
or their idempotence, remain true for all \ces.
All these theoretical questions have been addressed with success,
and the results are presented in this article. An application is
implemented in the context of our work on the generation of
multiscale models but with an intermediate-level and yet a closed
fragment of {\textsf{CE}-strate}gies. A user language allows the
expression of an input reference partial differential equation (PDE)
and of a reference proof that transforms this PDE into another one.
The reference proof corresponds to what is called the reference
model in the paper. The user language allows also the statement of
{\textsf{CE}-strate}gies and combinations. An OCaml program
generates the reference tree and allows to apply the extensions. The
combinations of extensions are then computed and applied to the
reference model.
Transforming {\textsf{CE}-strate}gies into position-based {\textsf{CE}-strate}gies is used to test the validity of the program. Nevertheless, the
implementation aspects are not presented here for the obvious reason
of lack of space.
\newpage
\tableofcontents
\newpage
\section{Preliminaries: terms, substitution, notations, rewriting}
\label{Preliminaries} We introduce preliminary definitions and
notations.
\paragraph*{Terms, $\mybox$-terms.}
Let ${\mathcal{F}}=\cup _{n\geq 0}{\mathcal{F}}_{n}$ be a set of symbols
called \emph{function symbols}. The \emph{arity} of a symbol $f$ in ${\mathcal{F}}_{n}$ is $n$ and is denoted $\mathit{ar}(f)$.
Elements of arity zero are called \emph{constants} and often denoted by the letters $a,b,c,$ etc. The set ${\mathcal{F}}_{0}$ of constants is always
assumed to be not empty. Given a denumerable set ${\mathcal{X}}$ of \emph{variable} symbols, the set of \emph{terms} $\mathcal{T}\left( \mathcal{F},\mathcal{X}\right)$, is the smallest set containing ${\mathcal{X}}$ and such that $f(t_{1},\ldots ,t_{n})$ is in $\mathcal{T}\left( \mathcal{F},\mathcal{X}\right) $ whenever
$ar(f)=n$ and $t_{i}\in \mathcal{T}\left(\mathcal{F},\mathcal{X}\right) $ for $i\in \lbrack 1..n]$.
Let the constant $\square \not\in {\mathcal{F}}$, the set $\mathcal{T}_{\square }(\mathcal{F},\mathcal{X})$ of
"$\mybox$-terms", denoted simply by $\mycal{T}_{\square }$, is made with terms with symbols in $\mathcal{F}\cup \mathcal{X}\cup \{\square \}$ which
includes exactly one occurence of $\square$.
Evidently, $\mathcal{T}_{\square }(\mathcal{F},\mathcal{X})$ and $\mathcal{T}(\mathcal{F},\mathcal{X})$ are two disjoint sets.
For a term $t$ and a context $\tau$, we shall write $\tau[t]$ for the term that results from the replacement
of $\square$ by $t$ in $\tau$.
We shall write simply $\mycal{T}$ (resp. $\mycal{T}_{\square}$) instead of $\mathcal{T}\left( \mathcal{F},\mathcal{X}\right)$
(resp. $\mathcal{T}_{\square }(\mathcal{F},\mathcal{X})$). We denote by $\mathcal{V}ar\left({t}\right) $ the set of variables occurring in $t$.
\paragraph*{Positions, prefix-order}
Let $t$ be a term in
$\mathcal{T}\left(\mathcal{F},\mathcal{X}\right) $.
The position $\epsilon $ is called the root position of term $t,$
and the function or variable symbol at this position is called root
symbol of $t$. A position in a tree is a sequence of integers of
$\PosSet=\{{\epsilon }\}\cup \mathbb{N}\cup (\mathbb{N}\times
\mathbb{N}) \cup \cdots$. In particular we shall write
$\mathbb{N}_{\E}$ for $\set{\E} \cup \mathbb{N}$. Given two
positions $p=p_{1}p_{2}\ldots p_{n}$ and $q=q_{1}q_{2}\ldots q_{m}$,
the \emph{concatenation} of $p$ and $q$, denoted by $p\cdot q$ or
simply $pq$, is the position $p_{1}p_{2}\ldots p_{n}q_{1}q_{2}\ldots
q_{m}$. The set of positions of the term $t$, denoted by
$\mathcal{P}os\left( t\right)$, is a set of positions of positive
integers such that, if $t\in \mathcal{X}$ is a variable or $t\in
\mycal{F}_{0}$ is a constant, then $\mathcal{P}os\left( t\right)
=\left\{ \epsilon \right\} $. If $t=f\left( t_{1},...,t_{n}\right) $
then $\mathcal{P}os\left(t\right)=\left\{ \epsilon \right\} \cup
\bigcup_{i=1,n}\left\{ip\mid p\in \mathcal{P}os\left( t_{i}\right)
\right\}$.
The prefix order defined as $p\leq q$ iff there exists $p^{\prime }$ such that $pp^{\prime }=q$, is a partial order on positions.
If $p^{\prime }\neq \epsilon$ then we obtain the strict order $p<q$.
We write $\left( p\parallel q\right) $ iff $p$ and $q$ are incomparable with
respect to $\leq$. The binary relations $\sqsubset$ and
$\sqsubseteq$ defined by $p \sqsubset q \quad \text{ iff } \quad
\big(p < q \tor p\parallel q \big)$ and $p \sqsubseteq q \quad
\text{ iff } \quad \big(p\le q \tor p\parallel q \big) $, are total
relations on positions.
For any $p\in \mathcal{P}os(t) $ we denote by $t_{|p}$ the subterm
of $t$ at position $p$, that is, $t_{|{\epsilon }} =t$, and $f(
t_{1},...,t_{n})_{|iq} =(t_{i})_{|q}$. For a term $t$, we shall
denote by $\delta(t)$ the depth of $t$, defined by $\delta(t_0)=0$,
if $t_0 \in \mycal{X} \cup \mycal{F}_0$ is a variable or a constant,
and $\delta(f(t_1,\ldots,t_n)) = 1+ max(\delta(t_i))$, for
$i=1,\ldots,n$.
For any position $p\in \mathcal{P}os\left( t\right)$ we denote by $t\left[ s
\right]_{p}$ the term obtained by replacing the subterm of $t$ at
position $p $ by $s$: $t[s]_{\epsilon } =s$ and $f(t_{1},...,t_{n})
[s]_{iq} =f(t_{1},...,t_{i}[s]_{q},...,t_{n})$.
A substitution is a mapping $\sigma :\mathcal{X}\rightarrow \mathcal{T}(\mathcal{F},\mathcal{X})$ such that $\sigma (x)\neq x$ for only
finitely many $x$s. The finite set of variables that $\sigma $ does not map to
themselves is called the domain of $\sigma $: $\Dom(\sigma )\overset{def}{=}\left\{ x\in \mathcal{X}\gvert\sigma (x)\neq x\right\} $. If
$\Dom(\sigma )=\left\{ x_{1},...,x_{n}\right\} $ then we write
$\sigma $ as: $\sigma =\left\{ x_{1}\mapsto \sigma \left(
x_{1}\right) ,...,x_{n}\mapsto \sigma \left( x_{n}\right) \right\} $.
A substitution $\sigma :\mathcal{X}\rightarrow {\mathcal{T}(\mathcal{F},\mathcal{X})}$ uniquely extends
to an endomorphism $\widehat{\sigma }:\mathcal{T}(\mathcal{F},\mathcal{X})\rightarrow \mathcal{T}(\mathcal{F},\mathcal{X})$ defined by: $\widehat{\sigma }(x)=\sigma (x)$ for all $x\in \Dom(\sigma )$, $\widehat{\sigma }(x)=x$ for all $x\not\in {\mathsf{Dom}}(\sigma )$, and $\widehat{\sigma }(f(t_{1},\ldots ,t_{n}))=f(\widehat{\sigma }(t_{1}),\ldots ,\widehat{\sigma }(t_{n}))$ for $f\in \mathcal{F}$. In what follows we do not
distinguish between a substitution and its extension.
For two terms $t,t^{\prime }\in \mycal{T}$, we say that $t$ matches
$t^{\prime }$, written $\match{t}{t'}$, iff there exists a substitution
$\sigma$ such that $\sigma(t)=t^{\prime }$. It turns out that if
such a substitution exists, then it is unique.
A term $t'$ is subsumed by a term $t$ iff there exists a substitution $\sigma$ such that
$\sigma(t)=t'$. A substitution $\sigma'$ is subsumed by a substitution $\sigma$ iff
$\sigma'(t)$ is subsumed by $\sigma(t)$ for each term $t$.
The most general unifier of the two terms $t$ and $t^{\prime }$ is a
substitution $\gamma $ such that $\gamma (t)=\gamma (t^{\prime })$
and, for any other substitution $\gamma ^{\prime }$ satisfying
$\gamma ^{\prime }(t)=\gamma ^{\prime }(t^{\prime })$, we have that
$\gamma ^{\prime }$ is subsumed by $\gamma $. Besides, we shall
write $t\wedge t^{\prime }$ to denote the term $\gamma (t)$.
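A substitution and the matching relation admit a direct rendering on the representation above; \texttt{Subst}, \texttt{apply} and \texttt{matching} are illustrative names of this sketch, and \texttt{matching} returns the (unique) witnessing substitution when it exists.
\begin{verbatim}
(* Substitutions as finite maps, with the homomorphic extension. *)
module Subst = Map.Make (String)

let rec apply (sigma : term Subst.t) (t : term) : term =
  match t with
  | Var x -> (match Subst.find_opt x sigma with Some u -> u | None -> t)
  | Fun (f, ts) -> Fun (f, List.map (apply sigma) ts)

(* Matching: find sigma with sigma(pat) = t, if it exists. *)
let rec matching (pat : term) (t : term) (sigma : term Subst.t)
  : term Subst.t option =
  match pat, t with
  | Var x, _ ->
      (match Subst.find_opt x sigma with
       | None -> Some (Subst.add x t sigma)
       | Some u -> if u = t then Some sigma else None)
  | Fun (f, ps), Fun (g, ts) when f = g && List.length ps = List.length ts ->
      List.fold_left2
        (fun acc p u -> Option.bind acc (matching p u))
        (Some sigma) ps ts
  | _, _ -> None
\end{verbatim}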
The composition of functions will be denoted by ``$\circ$''.
If $l_1$ and $l_2$ are lists, then we denote by $l_1 \sqcup l_2$
their concatenation. Sometimes we shall write $\sqcup_{i=1,n} l_i$ to
denote the list $[l_1,\ldots,l_n]$. For any $n\in \mathbb{N}$ we
simply denote by $[n]$ the interval $[1,\ldots,n]$.
\paragraph{Lexicographic ordering}
A lexicographic ordering, denoted by "$<$", on the Cartesian product $\mathbb{N}\times \mathbb{N}$ is defined for any $(a_1,b_1)$ and $(a_2,b_2)$ in $\mathbb{N}\times \mathbb{N}$
such that $(a_1,b_1)<(a_2,b_2)$ iff either \emph{i.)} $a_1<a_2$ or \emph{ii.)} $a_1=a_2$ and $b_1<b_2$.
The maximum of $\mathbf{a_1},\ldots,\mathbf{a_n} \in \mathbb{N}\times \mathbb{N}$ is defined in the usual way and denoted by $\mmax\{\mathbf{a_1},\ldots,\mathbf{a_n}\}$.
The addition $(a_1,a_2)+(b_1,b_2)$ is $(a_1+b_1,a_2+b_2)$.
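For completeness, the lexicographic order and addition on $\mathbb{N}\times \mathbb{N}$ admit a direct rendering; the names below are ours, and \texttt{lex\_max} assumes a list of pairs of naturals, with $(0,0)$ as bottom element.
\begin{verbatim}
(* Lexicographic order, maximum and addition on N x N. *)
let lex_lt (a1, b1) (a2, b2) = a1 < a2 || (a1 = a2 && b1 < b2)
let lex_max l = List.fold_left (fun m x -> if lex_lt m x then x else m) (0, 0) l
let lex_add (a1, a2) (b1, b2) = (a1 + b1, a2 + b2)
\end{verbatim}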
\section{Position-based context-embedding strategies (\ces) and their combination}
\label{Implement:by:position:sec}
We need to consider the combination of contexts when they are inserted at the same position.
\begin{example}[Combination of $\mybox$-terms]
\label{ExamTermContext}
We give an example of the combination of $\mybox$-terms as follows:
\begin{align*}
{\tau} \cdot {\tau}' =\tau \lbrack \tau ^{\prime }]_{\mathcal{P}os\left( \tau,\square \right)},
\end{align*}
where ${\mathcal{P}os\left( t,\square \right)}$ is the position of $\square$ in $t$.
\end{example}
\begin{example} \label{ExamCombinationContexts}
The combination of the two contexts $\tau_1=\mathtt{List}(\square,\underline{i})$ and $\tau_2=\mathtt{List}(\square,\underline{j})$ is given by
\begin{equation*}
\tau_1 \cdot \tau_2 = \tau_1[\tau_2]_{1} = \mathtt{List}(\mathtt{List}(\square,\underline{j}),\underline{i})
\end{equation*}
where $\underline{i}$ and $\underline{j}$ are shortcut terms which represent $\mathtt{Index}(i,[1,2,3])$ and $\mathtt{Index}(j,X)$ respectively. This concept
has already been introduced in \cite{yang2014contribution}.
\end{example}
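Representing $\mybox$-terms by reserving a distinguished constant for the hole, the combination of contexts is a plain plugging operation. The encoding of the hole below is an assumption of this sketch, not the paper's; since a $\mybox$-term contains exactly one hole, plugging into every subtree is safe.
\begin{verbatim}
(* Box-terms: reuse term, reserving the constant "<box>" for the hole. *)
let box : term = Fun ("<box>", [])

(* tau[t]: plug t into the (unique) hole of tau. *)
let rec plug (tau : term) (t : term) : term =
  if tau = box then t
  else match tau with
    | Fun (f, ts) -> Fun (f, List.map (fun u -> plug u t) ts)
    | Var _ -> tau

(* Combination of box-terms: tau . tau' = tau[tau']. *)
let combine_ctx (tau : term) (tau' : term) : term = plug tau tau'
\end{verbatim}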
To define the position-based \ces, we introduce two position-based
strategies. For a position $p$ and a context $\mbf{\tau}$, the jump strategy
$@p.\mbf{\tau}$ applied to a term $t$ inserts $\mbf{\tau}$ at the
position $p$ of the input term $t$. The failing strategy
$\emptylist$ fails when applied to any term. Their precise semantics
are given in Definition \ref{semantics-posi-based:def} below for
semantics of position-based \ces.
\begin{definition}[Position-based \ces]
\label{posi-based:def}
Let $p_1,\ldots,p_n$ be positions in $\PPos$ and $\tau_1,\ldots,\tau_n$ be $\mybox$-terms in $\mycal{T}_{_{\square }}$ with $n\ge 1$.
A position-based \ce is either the failing strategy $\emptylist$ or a list of jump strategies of the form
\begin{align*}
\bigand_{i=1,n} @p_{i}.\tau_i
\end{align*}
The set of position-based \ces is denoted by $\eceSet$.
\end{definition}
We impose that the position-based \ces respect some constraints on
positions of insertions to avoid conflicts: the order of context
insertions goes up from the leaves to the root.
\begin{definition}[Well-founded position-based \ce]
\label{Well-founded:simple:ext:def}
Let $p_1,\ldots,p_n$ be positions in $\PPos$ and $\tau_1,\ldots,\tau_n$ be $\mybox$-terms in $\mycal{T}_{_{\square }}$ with $n\ge 1$.
A position-based \ce $E$
\begin{align*}
E = \bigand_{i=1,n} @p_{i}.\tau_i
\end{align*}
is well-founded iff
\begin{enumerate}[i.)]
\item every position occurs at most once in $E$, i.e. $p_i \neq p_j$ for all $i\neq j$, and
\item insertions at lower positions occur earlier in $E$, i.e. $i < j$ if $p_i \sqsubset p_j$, for all $i,j \in [n]$.
\end{enumerate}
Moreover, the empty position-based \ce $\emptylist$
is well-founded.
\end{definition}
In all that follows we work only with the set of well-founded
position-based \ces, denoted by $\eceSet$. For two position-based
\ces $E$ and $E'$, we shall abuse notation and write $E=E'$ to
mean that they are equal up to a permutation of their parallel
positions. We shall simply write $@p.\tau$ instead of $[@p.\tau]$.
For a position $p$, we let
$@p.[@p_1.{\tau}_1,\ldots,@p_n.{\tau}_n]=[@pp_1.{\tau}_1,\ldots,@pp_n.{\tau}_n]$.
We next define the semantics
of a position-based \ce as a function in $\funset{\mycal{T} \cup
\set{\fail}}$, with the idea that if the application of a
position-based \ce to a term fails, the result is $\fail$. Besides,
we adopt a stronger version of failure, that is,
$[@p_1.{\tau}_1,\ldots,@p_n.{\tau}_n]$ fails when each of
$@p_i.{\tau}_i$ fails. To formalize this notion of failure we need
to introduce an intermediary function $\eta: (\funset{\mycal{T}
\cup \set{\fail}}) \rightarrow \mycal{T}\cup \set{\fail}
\rightarrow \mycal{T}\cup \set{\fail}$, that stands for the
\emph{fail as identity}. It is defined for any function $f$ in
$\funset{\mycal{T}\cup \set{\fail}}$ and any term $t \in
\mycal{T}\cup \set{\fail}$ by
\begin{align*}
(\eta(f))(t) = \begin{cases}
f(t) & \tif f(t) \neq \fail, \\
t & \totherwise.
\end{cases}
\end{align*}
The semantics of position-based \ces follows.
\begin{definition}[Semantics of position-based \ces]
\label{semantics-posi-based:def} The semantics of a position-based
\ce $E$ is a function $\sembrackk{E}$ in $\funset{\mycal{T}\cup
\set{\fail}}$ inductively defined by:
\begin{align*}
\sembrackk{\emptylist}(t) & \uberEq{def} \fail, \\
\sembrackk{E}(\fail) & \uberEq{def} \fail, \\
\sembrackk{@p. {\tau}}(t) & \uberEq{def}
\begin{cases}
t{[{\tau}[t_{|p}]]}_{p} & \tif p \in \PPos(t) \\
\fail & \totherwise,
\end{cases} \\
\sembrackk{[@p_1.{\tau}_1,\ldots,@p_n.{\tau}_n]}(t) &
\uberEq{def}
\begin{cases}
\Big(\big(\eta(\sembrackk{@p_n.\mbf{\tau}_n})\big) \circ \cdots \circ \big(\eta(\sembrackk{@p_1. \mbf{\tau}_1})\big)\Big) (t) & \textrm{if } \exists p_i \in \set{p_1,\ldots,p_n} \\
& \textrm{s.t. } p_i \in \PPos(t) \\
\fail & \textrm{ otherwise}.
\end{cases}
\end{align*}
\end{definition}
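The semantics above can be mirrored directly, modelling failure by \texttt{option} (with \texttt{None} playing the role of $\fail$) and the \emph{fail as identity} function $\eta$ literally; this sketch reuses \texttt{subterm}, \texttt{replace} and \texttt{plug} from the earlier snippets, and its names are again our own.
\begin{verbatim}
(* [] encodes the failing strategy; None plays the role of F. *)
type pos_ce = (position * term) list

(* "Fail as identity": on failure, return the input term unchanged. *)
let eta (f : term -> term option) (t : term) : term =
  match f t with Some t' -> t' | None -> t

(* @p.tau: insert tau above the subterm of t at p, if p is in Pos(t). *)
let apply_jump ((p, tau) : position * term) (t : term) : term option =
  try Some (replace t p (plug tau (subterm t p))) with _ -> None

let eval_pos_ce (e : pos_ce) (t : term) : term option =
  match e with
  | [] -> None
  | js ->
      if List.for_all (fun j -> apply_jump j t = None) js then None
      else Some (List.fold_left (fun acc j -> eta (apply_jump j) acc) t js)
\end{verbatim}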
\begin{myexample}\label{Example:SemanticOfCEStrategy}
We illustrate the idea and the interest of position-based \ces
through the two contexts $\tau_1= \mathtt{List}\left( \square,\underline{i} \right)$
and $\tau_2= \mathtt{List}\left( \square,\underline{j} \right)$
of Example \ref{ExamCombinationContexts}, written with the short-cut
notation introduced there. Applying the
strategy $@\epsilon.\tau_1$ to the term $t=\mathtt{Var}\left(
x,\mathtt{Reg}\left( \Omega ,1\right) \right)$ gives the
transformation of the one-dimensional space coordinate variable $x$ into
the indexed multi-dimensional space coordinate variable $x_{i}$. The
procedure is given by the following equation
\begin{align*}
\sembrackk{@\epsilon.\tau_1}(t) &= t[\tau_1[t_{|\epsilon}]]_\epsilon = t[\tau_1[t]]_\epsilon = \tau_1[t]_{\mathcal{P}os(\tau_1,\square)}\\
& =\mathtt{List}\left(\mathtt{Var}\left( x,\mathtt{Reg}\left( \Omega ,1\right) \right), \underline{i}\right)
\end{align*}
Let $\tau = \tau_1 \cdot \tau_2$. The application of $@\epsilon.{\tau}$ to the term $t$ is given as
\begin{align*}
\sembrackk{@\epsilon.\mathbf{\tau}}(t) &= \tau[t]_{\mathcal{P}os(\tau_1[\tau_2],\square)} \\
& = \mathtt{List}\left(\mathtt{List}\left(\mathtt{Var}\left( x,\mathtt{Reg}\left( \Omega ,1\right) \right),\underline{j} \right),\underline{i} \right)
\end{align*}
\end{myexample}
\begin{myexample}\label{Example:SemanticOfCEStrategy1}
Another simple illustration is for the derivative of a function
represented by the term
$t'=\underline{\partial}_{\underline{x}}\underline{u}$ where
$\underline{u}$ is a shortcut for the derived function,
$\underline{x}$ for the mathematical variable and
$\underline{\partial}_{\underline{x}}$ for the derivation operator
about $\underline{x}$.
Let $p$ and $q$ be the positions in $\mathcal{P}os(t')$ of
$\underline{u}$ and $\underline{x}$ in $t'$; the application of
$\sembrackk{[@p.\tau_1, @q.\tau_2]}(t')$ yields the term
$\underline{\partial}_{\underline{x}_{\underline{j}}}
\underline{u}_{\underline{i}}$. Since these positions are parallel,
i.e. $p \parallel q$, this list of \ces is well-founded and the
semantics of its application to $t'$ is given as
\begin{equation*}
\sembrackk{[@p.\tau_1, @q.\tau_2]}(t') = (\sembrackk{@p.\tau_1} \circ \sembrackk{@q.\tau_2})(t') = \sembrackk{@p.\tau_1}(\sembrackk{@q.\tau_2}(t')).
\end{equation*}
The complete tree structures of $\underline{x}_{\underline{i}}$, $\underline{x}_{\underline{i}\underline{j}}$, $t'$ and $\underline{\partial}_{\underline{x}_{\underline{j}}}\underline{u}_{\underline{i}}$ are depicted in Figure \ref{fig:JumpStrategy}.
\end{myexample}
\begin{figure}[]
\centering
\begin{tikzpicture}[level distance=1cm,
level 1/.style={sibling distance=1.2cm},
level 2/.style={sibling distance=1.2cm},
scale=0.95]
\node {$\mathtt{List}$}
child {node {$\underline{x}$}}
child {node{$\underline{i}$}
};
\begin{scope}[xshift=2.5cm,level 1/.style={sibling distance=1.2cm}]
\node {$\mathtt{List}$}
child {node {$\mathtt{List}$}
child {node {$\underline{x}$}}
child {node {$\underline{j}$}} }
child {node {$\underline{i}$}};
\end{scope}
\begin{scope}[xshift=4.5cm,level 1/.style={sibling distance=1.2cm}]
\node {$\underline{\partial}$}
child {node {$\underline{u}$} edge from parent node[left,draw=none] {\tiny $p$}}
child {node {$\underline{x}$} edge from parent node[right,draw=none] {\tiny $q$}};
\end{scope}
\begin{scope}[xshift=7cm,level 1/.style={sibling distance=1.2cm}, level 2/.style={sibling distance=0.8cm}]
\node {$\underline{\partial}$}
child {node {$\mathtt{List}$}
child{ node {$\underline{u}$}} child{ node {$\underline{i}$}}
edge from parent node[left,draw=none] {\tiny $p$}}
child {node {$\mathtt{List}$}
child {node {$\underline{x}$}} child{ node {$\underline{j}$}}
edge from parent node[right,draw=none] {\tiny $q$}};
\end{scope}
\end{tikzpicture}
\caption{The tree structures of the terms $\sembrackk{@\epsilon.\tau_1}(t)$, $\sembrackk{@\epsilon.\mathbf{\tau}}(t)$, $t'$ and $\sembrackk{[@p.\tau_1, @q.\tau_2]}(t')$
discussed in Examples \ref{Example:SemanticOfCEStrategy} and \ref{Example:SemanticOfCEStrategy1}.}
\label{fig:JumpStrategy}
\end{figure}
The unification of two position-based \ces amounts to sorting and merging their positions, and to combining their contexts when they are inserted at the same position.
\begin{definition}[Unification of position-based \ces]
\label{unif:posi:non:empty:def} The unification of two position-based \ces is the binary operation $\combb: \eceSet\times
\eceSet\longrightarrow \eceSet$
defined as
\begin{enumerate}
\item \begin{enumerate}[(a)]
\item \label{final:E:1} $\emptylist \combb E = \emptylist.$
\item \label{final:E:2} $E \combb \emptylist = \emptylist.$
\end{enumerate}
\item If $E=\bigand_{i \in I} @i.\tau_i \uand @\epsilon.\tau$ and $E'=\bigand_{j \in J} @j.\tau'_j \uand @\epsilon.\tau'$ then \\
\begin{align*}
E \combb E' \reduce \bigand_{i \in I \cap J} @i.(\tau_i\cdot \tau'_i) \uand R \uand R'\uand @\epsilon.(\tau\cdot \tau'),
\end{align*}
where \begin{align*} R &= \bigand_{i \in I\setminus J} @i.\tau_i &\tand &&
R'&= \bigand_{j \in J\setminus I} @j.\tau'_j, &&
\end{align*}
\end{enumerate}
\end{definition}
\begin{myexample}
Consider the position-based \ces $E=[@p_1.\tau_1,@p_2.\tau_2,@p_3.\tau_3]$ and $E'=[@p_1.\tau'_1,@q_1.\tau'_2,@q_2.\tau'_3]$; the sets of positions of $E$ and $E'$ are
$P=\{p_1,p_2,p_3\}$ and $P'=\{p_1,q_1,q_2\}$, so that $P \cup P'=\{p_1,p_2,p_3,q_1,q_2\}$ and $P \cap P' = \{p_1\}$. The unification of $E$ and $E'$ is given as
\begin{equation*}
E''=[@p_1.(\tau_1 \cdot \tau'_1),@p_2.\tau_2,@p_3.\tau_3,@q_1.\tau'_2,@q_2.\tau'_3].
\end{equation*}
\end{myexample}
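A sketch of the unification of Definition \ref{unif:posi:non:empty:def}: contexts at shared positions are merged with $\cdot$, the remaining jumps are kept, and the result is re-sorted according to $\sqsubset$ so that the well-foundedness constraint of Definition \ref{Well-founded:simple:ext:def} is restored. The comparator treats parallel positions as interchangeable, which is all the definition requires.
\begin{verbatim}
(* Strict prefix test: p < q. *)
let rec is_strict_prefix (p : position) (q : position) : bool =
  match p, q with
  | [], [] -> false
  | [], _ :: _ -> true
  | x :: p', y :: q' -> x = y && is_strict_prefix p' q'
  | _ :: _, [] -> false

(* p sqsubset q: p < q, or p and q are parallel. *)
let sqsubset p q = p <> q && not (is_strict_prefix q p)

let unify_pos_ce (e : pos_ce) (e' : pos_ce) : pos_ce =
  match e, e' with
  | [], _ | _, [] -> []          (* the failing strategy is absorbing *)
  | _ ->
      let merged =
        List.map
          (fun (p, tau) ->
             match List.assoc_opt p e' with
             | Some tau' -> (p, combine_ctx tau tau')  (* shared position *)
             | None -> (p, tau))
          e
        @ List.filter (fun (q, _) -> not (List.mem_assoc q e)) e'
      in
      List.sort (fun (p, _) (q, _) -> if sqsubset p q then -1 else 1) merged
\end{verbatim}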
The combination of two position-based \ces coincides with their
unification, except that the failing strategy acts as a neutral
element instead of an absorbing one.
\begin{definition}[Combination of two position-based \ces]
\label{comb:posi:def}
The combination of two position-based \ces is a binary operation
$\comb: \eceSet \,\times\, \eceSet \longrightarrow \eceSet$
defined for any $E$ and $E'$ in $\eceSet$ by
\begin{align*}
E \comb E'=
\begin{cases}
E \combb E' & \tif E\neq \emptylist \tand E' \neq \emptylist \\
E & \tif E\neq \emptylist \tand E'=\emptylist \\
E' & \tif E = \emptylist \tand E' \neq \emptylist \\
\emptylist & \tif E = \emptylist \tand E' = \emptylist
\end{cases}
\end{align*}
\end{definition}
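On the sketch above, this difference in the treatment of the failing strategy gives a one-line wrapper:
\begin{verbatim}
(* Combination: as unification, but [] (failure) is neutral. *)
let combine_pos_ce (e : pos_ce) (e' : pos_ce) : pos_ce =
  match e, e' with
  | [], x | x, [] -> x
  | _ -> unify_pos_ce e e'
\end{verbatim}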
\begin{proposition}
\label{main:prop:elemntary:og:prop:1}
The set $\eceSet$ of position-based \ces together with the unification operation enjoy the following properties.
\begin{enumerate}
\item The neutral element of the unification is $@\E.\square$,
\item the absorbing element of the unification is $\emptylist$,
\item The unification is associative, i.e. $(E \combb E') \combb E'' = E \combb (E' \combb E'')$.
\item The unification of position-based \ces is (non-)commutative if and only if the operation of merging of the contexts $\cdot: \mycal{T}_{\square} \times \mycal{T}_{\square} \rightarrow \mycal{T}_{\square}$ is (non-)commutative
\item the unification is idempotent if and only if the operation of merging of the contexts is idempotent,
that is, $E \combb E = E$ for any $E\in \eceSet$ iff $\tau \cdot \tau = \tau$ for any $\mybox$-term $\tau$ in $\mycal{T}_{\square}$.
\end{enumerate}
\end{proposition}
\begin{proposition}
\label{main:prop:elemntary:og:prop:2}
The set $\eceSet$ of position-based \ces together with the unification and combination operations enjoy the following properties.
\begin{enumerate}
\item The neutral element of the combination is $\emptylist$.
\item The combination is associative, i.e. $(E\comb E')\comb E'' = E\comb (E'\comb E'')$.
\item The combination of position-based \ces is (non-)commutative if and only if the operation of merging of the contexts $\cdot: \mycal{T}_{\square} \times \mycal{T}_{\square} \rightarrow \mycal{T}_{\square}$ is (non-)commutative.
\item The combination is idempotent iff the operation of merging of the contexts is idempotent,
that is, $E \comb E = E$ for any $E\in \eceSet$ iff $\tau \cdot \tau = \tau$ for any $\mybox$-term $\tau$ in $\mycal{T}_{\square}$.
\end{enumerate}
\end{proposition}
\section{The class $\ceSet$ of context-embedding strategies (\ces)}
\label{Implement:by:strategies:sec}
The challenging elements are fourfold:
\emph{1.)} finding the right class of extensions that is closed by combination: a less expressive class would be neither closed under combination nor useful in practice, while very expressive extensions are impossible to combine;
\emph{2.)} finding the right basic constructors of the extensions: overly rudimentary constructors would make the extensions huge and impractical, while more general constructors are very hard to combine;
\emph{3.)} combining the ``\emph{while}'' loops is the most difficult part and requires special care;
\emph{4.)} proving the correctness of the combination must take into account the semantics of the extensions.
We introduced the position-based \ces
to clarify the ideas behind contexts, their insertion as well as their combination.
However, position-based \ces are not satisfactory for practical
applications, since the positions are generally not accessible and cannot be
used on a regular basis in applications.
So, we enrich this framework by introducing navigation strategies to form a class of \emph{\ces} that is
closed under combination.
\paragraph{Syntax and semantics of \ces}
A \ce is composed of two parts: a navigation of the input term
without changing it, and an insertion of contexts at certain
positions. We shall introduce the left-choice strategy constructor
($\oplus$), a conditional constructor
``$\mathtt{if}\textrm{-}\mathtt{then}$'', a restricted form of the
composition, and the fixed-point constructor (``$\mu$'') allowing
the recursion in the definition of strategies. The resulting class
is called the class of \ces. In what follows we assume that there
is a denumerable set of \emph{fixed-point variables} denoted by
$\fixset$. Fixed-point variables in $\fixset$
will be denoted by $X,Y,Z,
\ldots$
\begin{definition}[\ces]
\label{Def:HCE-strategies}
The class of \ces is defined by the following grammar:
\begin{align*}
S,S' \; ::= \; &\emptylist \gvert X \gvert @\varepsilon.\tau \gvert u;{S} \gvert S \oplus S' \gvert \mu X.S \gvert @i_1.S \uand @i_2.S' \gvert \most({S}) \gvert \tifthen{S}{S'} \\
\end{align*}
where $X$ is a fixed-point variable in $\fixset$, $\mbf{\tau}$
is a context in $\mycal{T}_{\square}$, $u$ is a term in
$\mycal{T}$, and $i_1,i_2$ are positions in
$\mathbb{N}_{\epsilon}$. The set of \ces will be
denoted by $\ceSet$.
In particular, the subset of fixed-point free \ces will be denoted by $\ceSetFree$.
\end{definition}
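One possible OCaml rendering of this grammar is the algebraic data type below. The constructor names are ours; for convenience the binary $@i_1.S \uand @i_2.S'$ is generalized to a list of jumps, and jump positions are restricted to child indices (a jump at $\varepsilon$ being just the strategy itself).
\begin{verbatim}
type ce =
  | Fail                        (* the failing strategy *)
  | FixVar of string            (* fixed-point variable X *)
  | Insert of term              (* @eps.tau: insert the box-term tau *)
  | Match of term * ce          (* u; S *)
  | Choice of ce * ce           (* S (+) S' *)
  | Mu of string * ce           (* mu X. S *)
  | Jumps of (int * ce) list    (* @i1.S1 /\ ... /\ @in.Sn *)
  | Most of ce                  (* Most(S) *)
  | IfThen of ce * ce           (* If S1 Then S2 *)
\end{verbatim}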
\paragraph{Notations.}
We shall simply write $@i.S$ instead of $[@i.S]$.
And we shall write "$\tifthen{S_1 \&S_2}{S}$" instead of \\ $\tifthen{S_1}{\big(\tifthen{S_2}{S}\big)}$.
We notice that extending the class of \ces by allowing the position
$i$ of the jump operator $@i.S$ to range over $\PosSet$ instead of
$\mathbb{N}_{\epsilon}$ does not increase the expressiveness of the
strategy language. This can be achieved by turning each \ce
$@p.S$, where $p$ is a position in $\PosSet$, into
$@q_1.\cdots.@q_n.S$, where $p=q_1.\cdots.q_n$ and each $q_j$ is a
position in $\mathbb{N}_{\epsilon}$.
The design of the class of \ces is inspired by the $\mu $-calculus formalism \cite{rudimemt:mu-calculus:book}
since we need very rudimentary strategy constructors.
In particular the jumping into the immediate positions of the term tree
is morally similar to the diamond and box modalities ($\langle \cdot \rangle$ and $[ \cdot ]$) of the propositional modal $\mu$-calculus.
And the fixed-point constructor is much finer than the iterate operator of e.g. \cite{RewriteStrat_CHK2003}.
Besides, we incorporate the left-choice strategy constructor
and a restricted form of the composition.
We shall sometimes write $\mu X.\mycal{S}(X)$ instead of $\mu X. \mycal{S}$
to emphasize that the fixed-point variable $X$ is free in $\mycal{S}$.
\begin{myexample}\label{Example:MuStrategy}
Consider the two \ces defined by $\mycal{S}(X)=(u;@\epsilon.\mbf{\tau}) \oplus (@1.X)$ and $\mu X. \mycal{S}(X)$, where $u$ is a term and $\tau$ is a context.
When applied to a term $t$, the \ce $\mu X. \mycal{S}(X)$ checks first whether $u$ matches with $t$.
If it is the case, then the context $\tau$ is inserted at the root of $t$, yielding the term $\tau[t]$. Otherwise,
the \ce jumps to the position $1$ of $t$ and restarts again. If it reaches
the left-most leaf of $t$ and $u$ does not match with this leaf, then the \ce $\mu X. \mycal{S}(X)$ fails.
\end{myexample}
For any \ces $\mycal{S}(X)$
and $\mycal{S}'$ in $\ceSet$, and $i \ge 0$, we define
\begin{align*}
\mu^{0}X.\mycal{S}(X) \uberEq{def} \emptylist &&\tand && \mu^{i+1}X.\mycal{S}(X) \uberEq{def} \mycal{S}(\mu^{i}X.\mycal{S}(X))
\end{align*}
so that $\mu^{i}X.\mycal{S}(X)$ stands for the $i$-fold unfolding of the fixed-point \ce $\mu X.\mycal{S}(X)$, starting from the failing strategy. A \ce is closed if
all its fixed-point variables are bound.
\begin{definition}[Semantics of \ces]
\label{SemanticsOfCEStrategies}
The semantics of a closed \ce $\mycal{S}$ is the function
$\sembrackk{\mycal{S}} : \funset{\mathcal{T} \cup \mathbb{F}}$,
which is defined inductively as follows.
\begin{align*}
& \sembrackk{\emptylist}(t) \uberEq{def} \fail. \\
& \sembrackk{{S}}(\fail) \uberEq{def} \fail. \\
& \sembrackk{u;S}(t) \uberEq{def}
\begin{cases}
\sembrackk{S}(t) & \textrm{if } \match{u}{t}, \\
\fail & \totherwise.
\end{cases} \\
& \sembrackk{@\varepsilon.\mbf{\tau}}(t) \uberEq{def} \mbf{\tau}[t], \\
& \sembrackk{{S}_{1}\oplus {S}_{2}}(t) \uberEq{def}
\begin{cases}
\sembrackk{{S}_{1}}(t) & \text{if } \sembrackk{{S}_{1}}(t)\neq \fail, \\
\sembrackk{{S}_{2}}(t) & \text{otherwise.}
\end{cases} \\
& \sembrackk{\mu X. {S}(X)}(t) \uberEq{def} \sembrackk{\mu^{\delta(t)} X. S(X)}(t). \\
&\sembrackk{\tifthen{S_1}{{S}}}(t) \uberEq{def}
\begin{cases}
\sembrackk{{S}}(t) & \textrm{if } \sembrackk{S_1}(t) \neq \fail ,\\
\fail & \textrm{otherwise}.
\end{cases} \\
&\sembrackk{@p.{S}}(t) \uberEq{def}
\begin{cases}
t[\sembrackk{{S}}(t_{|p})]_{p} & \textrm{if } \sembrackk{{S}}(t_{|p}) \neq \fail \tand p \in \PPos(t), \\
\fail & \textrm{otherwise}.
\end{cases} \\
&\sembrackk{\bigand_{i=1,n} @p_i.S_i}(t) \uberEq{def} \\
& \hspace{1cm} \begin{cases}
\big(\eta(\sembrackk{@p_n.{S}_n}) \circ \cdots \circ \eta(\sembrackk{@p_1.{S}_1})\big)(t) \, &\textrm{ if } \exists i\in[n] \tst \sembrackk{@p_i.{S}_i} (t)\neq \fail, \\
\fail & \totherwise.
\end{cases}\\
&\sembrackk{\most({S})}(t) \uberEq{def} \sembrackk{\bigand_{i=1,\mathit{ar}(t)} @i.S}(t)
\end{align*}
where $\mathit{ar}(t)$ denotes the arity of the root symbol of $t$.
\end{definition}
The general definition of the fixed-point constructor requires
heavy machinery involving the Knaster-Tarski fixed-point theorem
\cite{Tarski55}.
However, due to the particular nature of \ces, we gave an ad hoc definition of the fixed-point \ce by
$\sembrackk{\mu X. \mycal{S}(X)}(t) \uberEq{def} \sembrackk{\mu^{\delta(t)}X.\mycal{S}(X)}(t)$.
The justification for iterating $\mycal{S}$ at most $\delta(t)$ times, the depth of $t$, is that the navigation part of a \ce does not change the input term $t$.
Therefore, either the \ce $\mycal{S}$ progresses on the term $t$ and will reach the leaves of $t$ after at most $\delta(t)$ iterations,
or $\mycal{S}$ does not progress and in this case it fails after any number of iterations.
Examples of \ces that do not progress are $\mycal{S}=\mu X. X$ and $\mycal{S}=\mu X. (u;X)$ for a term $u$.
In technical terms, we show in Corollary \ref{general-fixed-point-corollary} that $\mu^{\delta(t)}X.\mycal{S}(X)$ is a fixed-point of $\mycal{S}(X)$ in the sense that,
for every term $t$, we have $\sembrackk{\mycal{S}\big(\mu^{\delta(t)}X.\mycal{S}(X)\big)}(t) =\sembrackk{\mu^{\delta(t)}X.\mycal{S}(X)}(t)$.
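The ad hoc fixed-point semantics admits a direct evaluator: $\mu X.S$ is unfolded $\delta(t)$ times (with $\mu^{0}$ the failing strategy, as in the definition) before being applied. The sketch below reuses the earlier snippets (\texttt{matching}, \texttt{plug}, \texttt{eta}); it is an illustration of the semantics under our own encoding, not the paper's implementation.
\begin{verbatim}
let rec depth = function
  | Var _ | Fun (_, []) -> 0
  | Fun (_, ts) -> 1 + List.fold_left (fun m u -> max m (depth u)) 0 ts

(* Capture-free here, since each fixed-point variable occurs once. *)
let rec subst_fix x body = function
  | FixVar y -> if y = x then body else FixVar y
  | Fail -> Fail
  | Insert tau -> Insert tau
  | Match (u, s) -> Match (u, subst_fix x body s)
  | Choice (s1, s2) -> Choice (subst_fix x body s1, subst_fix x body s2)
  | Mu (y, s) -> if y = x then Mu (y, s) else Mu (y, subst_fix x body s)
  | Jumps js -> Jumps (List.map (fun (i, s) -> (i, subst_fix x body s)) js)
  | Most s -> Most (subst_fix x body s)
  | IfThen (s1, s2) -> IfThen (subst_fix x body s1, subst_fix x body s2)

let rec eval (s : ce) (t : term) : term option =
  match s with
  | Fail | FixVar _ -> None
  | Insert tau -> Some (plug tau t)                  (* @eps.tau *)
  | Match (u, s') ->
      (match matching u t Subst.empty with
       | Some _ -> eval s' t | None -> None)
  | Choice (s1, s2) ->
      (match eval s1 t with Some r -> Some r | None -> eval s2 t)
  | IfThen (s1, s2) ->
      (match eval s1 t with Some _ -> eval s2 t | None -> None)
  | Mu (x, body) ->
      (* mu^0 = Fail, mu^{i+1} = S(mu^i); unfold depth(t) times. *)
      let rec unfold n =
        if n = 0 then Fail else subst_fix x (unfold (n - 1)) body in
      eval (unfold (depth t)) t
  | Jumps js ->
      let jump (i, s') u =
        match u with
        | Fun (f, ts) when 1 <= i && i <= List.length ts ->
            (match eval s' (List.nth ts (i - 1)) with
             | Some r ->
                 Some (Fun (f, List.mapi
                              (fun j v -> if j = i - 1 then r else v) ts))
             | None -> None)
        | _ -> None
      in
      if List.for_all (fun j -> jump j t = None) js then None
      else Some (List.fold_left (fun acc j -> eta (jump j) acc) t js)
  | Most s' ->
      (match t with
       | Fun (_, (_ :: _ as ts)) ->
           eval (Jumps (List.mapi (fun i _ -> (i + 1, s')) ts)) t
       | _ -> None)
\end{verbatim}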
\begin{definition}
\label{equivalence:ces:def}
Let $S,S'$ be \ces and $n\ge 0$ an integer. We shall write
\begin{enumerate}[i)]
\item ${S} \equiv {S}'$ iff $\sembrackk{{S}} = \sembrackk{{S}'}$. In this case, $S$ and $S'$ are called equivalent.
\item ${S} \equiv_{n} {S}'$ iff $\sembrackk{{S}}(t) = \sembrackk{{S}'}(t)$ for any term $t$ with depth $\delta(t) \le n$. In this case, $S$ and $S'$ are called $n$-equivalent.
\end{enumerate}
\end{definition}
Notice that $"\equiv"$ is an equivalence relation
and that $S$ and $S'$ are equivalent iff they are $n$-equivalent for any $n \ge 0$.
\begin{myexample}
\label{ex:TD:strategy}
We show how to encode some standard traversal strategies in our formalism using the fixed-point
constructor. In what follows we assume that $\mycal{S}$ is a \ce.
We recall that, when applied to a term $t$, the \ce $\TDDD(\mycal{S})$ tries to apply $\mycal{S}$ to the sub-terms of $t$ starting from the root of $t$,
and stops as soon as it is successfully applied. Hence,
\begin{align*}
\TDDD(S) & := \mu X. \big(S \oplus \most(X) \big)
\end{align*}
\end{myexample}
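With the data type introduced above, this encoding of \texttt{TopDown} is a one-liner (the variable name "X" is arbitrary):
\begin{verbatim}
(* TD(S) := mu X. (S (+) Most(X)) *)
let td (s : ce) : ce = Mu ("X", Choice (s, Most (FixVar "X")))
\end{verbatim}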
We generalize next the condition of well-foundedness from position-based \ces
to \ces.
\begin{definition}[Well-founded \ces.]
\label{Well-founded:strategy:ext:def}
A \ce $\mycal{S}$ is well-founded iff
every position-based \ce
that is a sub-strategy of $\mycal{S}$ is well-founded in the sense of Definition \ref{Well-founded:simple:ext:def}.
\end{definition}
\section{Unification and combination of \ces}
\label{unification:combination:section}
We define the combination of \ces (Definition \ref{combination:def})
by means of their unification (Definition \ref{unif:ces}) together
with an example. The first main result of this section is Theorem
\ref{main:theorem:1} that guarantees the correctness of the
combination of \ces. The correctness is given in terms of the
position-based \ces, it imposes that the mapping (via the
transformation $\Psi$ of Definition \ref{psi:def}) of the
combination of two \ces is equivalent to the combination of their
respective mappings. Besides, Theorem \ref{main:theorem:2} is a
consequence of Theorem \ref{main:theorem:1}; it is more difficult
and proves the same result but for the unification of \ces instead
of the combination.
properties of the unification and combination of \ces stated in
Proposition \ref{main:prop:2}. In particular, the combination and
unification are associative, which is an important property in the
applications, and are a congruence.
\subsection{Augmented Sub-\ces, memory and pre-\ces}
\begin{definition}[Augmented sub-\ces of a \ce]
Given a \ce $S$, we define the set of augmented sub-\ces of $S$, denoted by $\Phi(S)$, inductively as follows.
\begin{align*}
\Phi(\emptylist) &= \set{\emptylist} \\
\Phi(X) &= \set{X} \\
\Phi(@\varepsilon.\mbf{\tau}) &= \set{@\varepsilon.\mbf{\tau}} \\
\Phi(u;S) &= \set{u;S} \cup \Phi(S) \\
\Phi(@p.S) &= \set{@p.S} \cup \Phi(S) \\
\Phi(S_1 \oplus S_2) &= \set{S_1 \oplus S_2} \cup \Phi(S_1) \cup \Phi(S_2) \\
\Phi(\bigand_{i=1,n} S_i) & = \set{\bigand_{i=1,n} S_i} \cup \bigcup_{i=1,n} \Phi(S_i) \\
\Phi\big(\tifthen{S_1}{S}\big) & = \set{\tifthen{S_1}{S}} \cup \Phi(S_1) \cup \Phi(S) \\
\Phi(\mu X.S(X)) & = \set{\mu X.S(X)} \cup \Phi\big(S \big(\mu X.S(X)\big)\big) \cup \Phi(S(X))
\end{align*}
Similarly, the set of all fixed-point sub-\ces of $S$, denoted by $\Phi_{\mu}(S)$, is defined by:
\begin{align*}
\Phi_{\mu}(\emptylist) &= \emptyset \\
\Phi_{\mu}(X) &= \emptyset \\
\Phi_{\mu}(@\varepsilon.\mbf{\tau}) &= \emptyset \\
\Phi_{\mu}(u;S) &= \Phi_{\mu}(S) \\
\Phi_{\mu}(@p.S) &= \Phi_{\mu}(S) \\
\Phi_{\mu}(S_1 \oplus S_2) &= \Phi_{\mu}(S_1) \cup \Phi_{\mu}(S_2) \\
\Phi_{\mu}(\bigand_{i=1,n} S_i) & = \bigcup_{i=1,n} \Phi_{\mu}(S_i) \\
\Phi_{\mu}\big(\tifthen{S_1}{S}\big) & = \Phi_{\mu}(S_1) \cup \Phi_{\mu}(S)\\
\Phi_{\mu}(\mu X.S(X)) & = \set{\mu X.S(X)} \cup \Phi_{\mu}(S(X))
\end{align*}
\end{definition}
Clearly, $\Phi_{\mu}(S) \subset \Phi(S)$.
The unification reduction system requires storing a piece of information, called \emph{memory}, related to the input fixed-point \ces.
Roughly speaking, a memory is a set of triples where the first and the second elements of each triple are a fixed-point sub-\ce or an augmented sub-\ce, and
the third element is a fixed-point variable. The formal definition follows.
\begin{definition}[Memory]
Given \ces $S$ and $R$ we define the set of all \emph{memories} related to $S$ and $R$, denoted by $\mathfrak{M}(S,R)$ as follows.
\begin{align*}
\mathfrak{M}(S,R) &= \big(\Phi_{\mu}(S) \times (\Phi(R)\setminus \fixset) \times \fixset \big) \;\cup\; \big((\Phi(S)\setminus \fixset) \times \Phi_{\mu}(R) \times \fixset\big)
\end{align*}
More generally, the set of all memories, denoted by $\mathfrak{M}$, is defined by
\begin{align*}
\mathfrak{M} &= \bigcup_{S,R \in \mycal{C}} \mathfrak{M}(S,R)
\end{align*}
An element of $\mathfrak{M}(S,R)$ or of $\mathfrak{M}$ is called a memory.
\end{definition}
\begin{definition}[Pre-\ces]
\label{Def:pre-HCE-strategies}
The class of pre-\ces is defined by the following grammar:
\begin{align*}
P,P' \; ::= \; & S \gvert \tuple{S,S',\Eu{M}} \gvert u;{P} \gvert P \oplus P' \gvert \mu X.P \gvert @i.P \uand @i'.P' \gvert \most(P) \gvert \tifthen{S}{P} \\
\end{align*}
where
$S,S'$ are \ces in $\mycal{C}$,
$\Eu{M}$ is a memory in $\mathfrak{M}$,
$X$ is a fixed-point variable in $\fixset$,
$u$ is a term in $\mycal{T}$,
and $i,i'$ are positions in $\mathbb{N}_{\epsilon}$.
The set of pre-\ces will be denoted by $\preceSet$.
\end{definition}
\begin{definition}
\label{Monotony}An HCE-strategy is monotone if, in any of its fixed
points, each fixed-point variable is embedded in a jump strategy
$@i.$ where $i\in \mathbb{N}$ (i.e. $i\neq \varepsilon $)
or in the strategy \texttt{Inside}$(.)$.
\end{definition}
\subsection{The procedure of unification of \ces }
\paragraph{Assumptions.}
In what follows, each \ce is assumed to be monotone and closed, and each fixed-point variable is assumed to appear only once.
\begin{definition}
\label{reduction:unif:def}
We define the reduction system $\Unif$ operating on pre-\ces and composed of the following reduction rules with a decreasing order of priority.
\begin{enumerate}
\item \begin{enumerate}[(a)]
\item \label{final:1} $\tuple{\emptylist, S, \Eu{M}} \reduce \emptylist.$
\item \label{final:2} $\tuple{S,\emptylist,\Eu{M}} \reduce \emptylist.$
\end{enumerate}
\item \label{final:3} $\tuple{@\varepsilon.\mbf{\tau}, @\varepsilon.\mbf{\tau}',\Eu{M}} \reduce @\varepsilon.(\mbf{\tau}\cdot \mbf{\tau}').$
\item \begin{enumerate}[(a)]
\item \label{pattern:ext:1} $\tuple{(u;S), S',\Eu{M}} \reduce u; \tuple{S,S',\Eu{M}}.$
\item \label{pattern:ext:2} $\tuple{S',(u;S),\Eu{M}} \reduce u; \tuple{S', S,\Eu{M}}.$
\end{enumerate}
\item \begin{enumerate}
\item \label{list:ext'} $\tuple{@i.S,@i.S',\Eu{M}} \reduce @i.\tuple{S,S',\Eu{M}}$.
\item \label{list:ext} If $S=\bigand_{i \in I} @i.S_i \uand @\epsilon.\tau$ and $S'=\bigand_{j \in J} @j.S'_j \uand @\epsilon.\tau'$ then \\
\begin{align*}
\tuple{S,S', \Eu{M}} \reduce \tifthen{S \& S'} {\bigand_{i \in I \cap J} @i.\big(\tuple{S_i,S'_i,\Eu{M}} \oplus S_i \oplus S'_i\big)\uand R \uand R'\uand @\epsilon.(\tau \cdot \tau')},
\end{align*}
where \begin{align*} R &= \bigand_{i \in I\setminus J} @i.S_i &\tand &&
R'&= \bigand_{j \in J\setminus I} @j.S'_j.
\end{align*}
\end{enumerate}
\item \begin{enumerate}[(a)]
\item \label{choice:ext:1} $\tuple{(S_1 \oplus S_2), S,\Eu{M}} \reduce \tuple{S_1,S,\Eu{M}} \oplus \tuple{S_2, S,\Eu{M}}.$
\item \label{choice:ext:2} $\tuple{S,(S_1 \oplus S_2) ,\Eu{M}} \reduce \tuple{S,S_1,\Eu{M}} \oplus \tuple{S,S_2,\Eu{M}}.$
\end{enumerate}
\item \begin{enumerate}[(a)]
\item \label{if:ext:1} $\tuple{(\tifthen{S_1}{S_2}),{S},\Eu{M}} \reduce \tifthen{S_1}{\tuple{S_2,{S},\Eu{M}}}$.
\item \label{if:ext:2} $\tuple{S, (\tifthen{S_1}{S_2}) ,\Eu{M}} \reduce \tifthen{S_1}{\tuple{S,S_2,\Eu{M}}} $.
\end{enumerate}
\item \begin{enumerate}[(a)]
\item \label{most:ext:1} $\tuple{\most(S) , \most(S'),\Eu{M}} \reduce
\mathtt{\mathbf{If}}{\big(\most(S) \& \most(S')\big)} \mathtt{\mathbf{ Then }} \, \most\big(\tuple{S, S',\Eu{M}} \oplus S \oplus S'\big)$.
\item \label{most:ext:2} $\tuple{\most(S), \bigand_{i \in I} @i.S_i ,\Eu{M}} \reduce \tuple{\bigand_{i \in [1,arity(u)]} @i.S, \bigand_{i \in I} @i.S_i,\Eu{M}} \twhere u=Patt(i) $
\item \label{most:ext:3} $\tuple{\bigand_{i \in I} @i.S_i, \most(S),\Eu{M}} \reduce \tuple{\bigand_{i \in I} @i.S_i,\bigand_{i \in [1,arity(u)]} @i.S,\Eu{M}} \twhere u=Patt(i) $
\end{enumerate}
\item \begin{enumerate}[(a)]
\item \label{fixed:ext:1} \begin{align*}
\tuple{\underbrace{\mu X. S(X)}_{\xi}, S',\Eu{M}} \reduce
\begin{cases} \mu Z. \tuple{S(\xi), S',\Eu{M}'}, & \tif (\xi,S',\cdot) \notin \Eu{M}, \\
&\;\;\;\; \twhere \begin{cases} Z &= \fresh{\xi,S'}, \\
\Eu{M}' &= \Eu{M} \cup \set{(\xi,S',Z)}.\end{cases}\\
&\\
Z & \tif (\xi,S',Z) \in \Eu{M}.
\end{cases}
\end{align*}
\item \label{fixed:ext:2}
\begin{align*}
\tuple{S',\underbrace{\mu X. S(X)}_{\xi},\Eu{M}} \reduce
\begin{cases} \mu Z.\tuple{S',S(\xi) ,\Eu{M}'}, & \tif (S',\xi,\cdot) \notin \Eu{M}, \\
& \;\;\;\; \twhere \begin{cases} Z &= \fresh{S',\xi}, \\
\Eu{M}' &= \Eu{M} \cup \set{(S',\xi,Z)}. \end{cases} \\
&\\
Z & \tif (S',\xi,Z) \in \Eu{M}.
\end{cases}
\end{align*}
\end{enumerate}
\end{enumerate}
\end{definition}
\paragraph{Explanation of the rules.}
We comment on the key points in Definition \ref{reduction:unif:def}.
The unification of $(u;S)$ with $(u';S')$ is naturally $(u\land u'); (S \combb S')$, since we want to merge them.
The idea behind the unification of $\mu X.S(X)$ with $R$ is to unfold $\mu X.S(X)$
into $S(\mu X.S(X))$ and then to unify $S(\mu X.S(X))$ with $R$; this process terminates thanks to the use of the memory.
We shall show in Subsection \ref{termination:confluence:reduction:unif:sec} that the unification system $\Unif$ is terminating and confluent.
This allows us to define the unification operation in terms of the normal form with respect to $\Unif$.
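To make the role of the memory concrete, the following Haskell sketch implements the spirit of the rules, in particular the fixed-point rules (\ref{fixed:ext:1}) and (\ref{fixed:ext:2}), over a deliberately cut-down grammar (failure, insertions, patterns, choice and fixed points; positions, conjunctions and $\most(\cdot)$ are omitted). The data type, the modelling of context merging as string concatenation, and the fresh-name scheme are illustrative assumptions, not part of the formal development.
\begin{verbatim}
import qualified Data.Map as M

-- Cut-down grammar (assumption: no positions, conjunctions or Most).
data CE = Fail | Ins String | Pat String CE
        | Choice CE CE | Mu String CE | Var String
        deriving (Eq, Ord, Show)

type Memory = M.Map (CE, CE) String   -- entries (S, R) |-> Z

-- Substitution; capture-free here since each variable occurs once.
subst :: String -> CE -> CE -> CE
subst x r (Var y) | x == y  = r
subst x r (Pat u s)         = Pat u (subst x r s)
subst x r (Choice s s')     = Choice (subst x r s) (subst x r s')
subst x r (Mu y s) | x /= y = Mu y (subst x r s)
subst _ _ s                 = s

-- unify m k s r: clauses follow the priority order of the rules;
-- k is a counter generating fresh fixed-point variables Z0, Z1, ...
unify :: Memory -> Int -> CE -> CE -> (CE, Int)
unify _ k Fail _ = (Fail, k)                        -- failure rules
unify _ k _ Fail = (Fail, k)
unify _ k (Ins t) (Ins t') = (Ins (t ++ t'), k)     -- insertion merging
unify m k (Pat u s) r =                             -- pattern extraction
  let (p, k') = unify m k s r in (Pat u p, k')
unify m k s (Pat u r) =
  let (p, k') = unify m k s r in (Pat u p, k')
unify m k (Choice s1 s2) r =                        -- choice distribution
  let (p1, k1) = unify m k s1 r
      (p2, k2) = unify m k1 s2 r
  in (Choice p1 p2, k2)
unify m k s (Choice r1 r2) =
  let (p1, k1) = unify m k s r1
      (p2, k2) = unify m k1 s r2
  in (Choice p1 p2, k2)
unify m k xi@(Mu x s) r =                           -- fixed point, with memory
  case M.lookup (xi, r) m of
    Just z  -> (Var z, k)        -- pair already met: close the loop
    Nothing ->
      let z  = 'Z' : show k
          m' = M.insert (xi, r) z m
          (p, k') = unify m' (k + 1) (subst x xi s) r
      in (Mu z p, k')
unify m k s xi@(Mu x r) =                           -- symmetric fixed point
  case M.lookup (s, xi) m of
    Just z  -> (Var z, k)
    Nothing ->
      let z  = 'Z' : show k
          m' = M.insert (s, xi) z m
          (p, k') = unify m' (k + 1) s (subst x xi r)
      in (Mu z p, k')
unify _ k _ _ = (Fail, k)  -- free variables: absent in closed strategies
\end{verbatim}
For instance, \texttt{unify M.empty 0 (Mu "X" (Choice (Pat "u" (Ins "a")) (Var "X"))) (Ins "b")} terminates because the second encounter of the same pair hits the memory, and it returns \texttt{Mu "Z0" (Choice (Pat "u" (Ins "ab")) (Var "Z0"))}.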
\begin{definition}[Unification of \ces]
\label{unification:def}
The unification of \ces is the binary operation \\$\combb: \ceSetCan \,\times\, \ceSetCan \longrightarrow \ceSetCan$,
defined for any $S$ and $S'$ in $\mycal{C}$ by
\begin{align*}
S \combb S' \uberEq{def} \NF \tuple{S,S',\emptyset}.
\end{align*}
\end{definition}
\begin{definition}[Combination of \ces]
\label{combination:def}
The combination of \ces is the binary operation \\$\comb: \ceSetCan \,\times\, \ceSetCan \longrightarrow \ceSetCan$,
defined for any ${{S}}$ and ${{S}}'$ in $\mycal{C}$ by
\begin{align*}
{{S}} \comb {{S}}' \uberEq{def} {{S}} \nfcombb {{S}}' \oplus {{S}} \oplus {{S}}'.
\end{align*}
\end{definition}
\begin{myexample}
For given patterns $u,u' \in \mycal{T}$ and $\mybox$-terms $\tau,\tau'$ let
\begin{align*}
{S}(X) &=(u;@\epsilon.\tau) \oplus @1.X && \tand & S'(X') &=(u';@\epsilon.\tau') \oplus @1.X' \\
\xi &=\mu X.S(X) && \tand & \xi' &=\mu X'.S'(X').
\end{align*}
be \ces.
We compute the unification $\mu X. {S}(X) \combb \mu X'.{S}'(X')$ which is the normal form of the tuple \\
$\tuple{\mu X. {S}(X),\mu X'. {S}'(X'),\emptyset}$ by applying the reduction rules of $\Unif$
given in Definition \ref{reduction:unif:def}.
\end{myexample}
\begin{align}
(*) &=\tuple{\mu X. {S}(X),\mu X'. {S}'(X'),\emptyset} \notag \\
& \reduces \mu Z. \tuple{S(\xi),\xi',\set{(\xi,\xi',Z)}} \tag{Rule \ref{fixed:ext:1}} \\
& \reduces \mu Z.\mu Z'. \tuple{S(\xi),S'(\xi'),\underbrace{\set{(\xi,\xi',Z),(S(\xi),\xi',Z')}}_{\Eu{M}}} \tag{Rule \ref{fixed:ext:2}} \\
& = \mu Z.\mu Z'. \tuple{(u; @\epsilon.\tau) \oplus @1.\xi,S'(\xi'),\Eu{M}} \tag{Def. of $S(X)$} \\
& \reduces \mu Z.\mu Z'.\big( \underbrace{\tuple{u;@\epsilon.\tau,S'(\xi'),\Eu{M}}}_{(\textrm{I})} \oplus \underbrace{\tuple{@1.\xi,S'(\xi'),\Eu{M}}}_{(\textrm{II})} \big) \tag{Rule \ref{choice:ext:1}} \\
(\textrm{I}) & \reduces u;\tuple{@\epsilon.\tau,S'(\xi'),\Eu{M}} \tag{Rule \ref{pattern:ext:1}}\\
& = u;\tuple{@\epsilon.\tau,(u';@\epsilon.\tau') \oplus @1.\xi',\Eu{M}} \tag{Def. of $S'(X')$}\\
& \reduces u;\big(\tuple{@\epsilon.\tau, u';@\epsilon.\tau' ,\Eu{M}} \oplus \tuple{@\epsilon.\tau, @1.\xi',\Eu{M}} \big) \tag{Rule \ref{choice:ext:2}}\\
& \reduces u;\big((u';\tuple{@\epsilon.\tau, @\epsilon.\tau' ,\Eu{M}}) \oplus \tuple{@\epsilon.\tau, @1.\xi',\Eu{M}} \big) \tag{Rule \ref{pattern:ext:2}}\\
& \reduces u;\big((u'; @\epsilon.(\tau\cdot\tau')) \oplus \tuple{@\epsilon.\tau, @1.\xi',\Eu{M}} \big) \tag{Rule \ref{final:3}}\\
& \reduces u;\big((u'; @\epsilon.(\tau\cdot\tau')) \oplus (\tifthen{@1.\xi'}{@1.\xi' \uand @\epsilon.\tau}) \big) \tag{Rule \ref{list:ext}}\\
& \notag \\
(\textrm{II}) & = \tuple{@1.\xi,\,(u';@\epsilon.\tau') \oplus @1.\xi',\,\Eu{M}} \tag{Def. of $S'(X')$}\\
& \reduces \tuple{@1.\xi,u';@\epsilon.\tau',\Eu{M}} \oplus \tuple{@1.\xi,@1.\xi',\Eu{M}} \tag{Rule \ref{choice:ext:2}} \\
& \reduces \big( u';\tuple{@1.\xi,@\epsilon.\tau',\Eu{M}}\big) \oplus \tuple{@1.\xi,@1.\xi',\Eu{M}} \tag{Rule \ref{pattern:ext:2}} \\
& \reduces \big( u'; (\tifthen{@1.\xi}{@1.\xi \uand @\epsilon.\tau'})\big) \oplus \tuple{@1.\xi,@1.\xi',\Eu{M}} \tag{Rule \ref{list:ext}} \\
& = \big( u'; \tifthen{@1.\xi}{@1.\xi \uand @\epsilon.\tau'}\big) \oplus @1.\tuple{\xi,\xi',\Eu{M}} \tag{Rule \ref{list:ext'}} \\
& = \big( u'; \tifthen{@1.\xi}{@1.\xi \uand @\epsilon.\tau'}\big) \oplus @1.Z \tag{Rule \ref{fixed:ext:1} since $(\xi,\xi',Z) \in \Eu{M}$ }
\end{align}
Summing up, the unification $(**)$ of $\mu X. {S}(X)$ and $\mu X'. {S}'(X')$ is:
\begin{align*}
(**) &= \mu X. {S}(X) \,\combb\, \mu X'. {S'}(X')\\
&= \mu Z. \mu Z'. \bigg( u;\big((u'; @\epsilon.(\tau\cdot\tau')) \oplus (\tifthen{@1.\xi'}{@1.\xi' \uand @\epsilon.\tau}) \big) \\
& \hspace{5cm} \oplus \big( u'; \tifthen{@1.\xi}{@1.\xi \uand @\epsilon.\tau'}\big) \\
& \hspace{5cm} \oplus @1.Z \bigg)
\end{align*}
Notice that the fixed-point variable $Z'$ does not appear in the resulting \ce and therefore "$\mu Z'$" can be removed.
The application of $(**)$ to a term $t$ features four cases.
\begin{enumerate}[i.)]
\item Either $t$ matches with both $u$ and $u'$, and in this case the context $\mbf{\tau}\cdot \mbf{\tau}'$ is inserted at the root of $t$.
\item Or only $u$ matches with $t$, and in this case $\tau$ is inserted at the root of $t$, provided the \ce $\mu X'. {S}'(X')$ is applied successfully at the position $1$ of $t$.
\item Or only $u'$ matches with $t$, and in this case $\tau'$ is inserted at the root of $t$, provided the \ce $\mu X. {S}(X)$ is applied successfully at the position $1$ of $t$.
\item Or both $\mu X. {S}(X)$ and $\mu X'. {S'}(X')$ are applied at the position $1$ of $t$.
\end{enumerate}
\section{Statement of the main results}
\subsection{Algebraic properties of the unification and combination}
Since the semantic equivalence "$\equiv$" (Definition \ref{equivalence:ces:def}) is an equivalence relation, we shall use the standard notation $[S]$ for the equivalence class of the \ce $S$, i.e. $[S]=\set{S' \in \ceSet \gvert S' \equiv S}$,
and the notation $\ceSetEquiv$ for the quotient set of $\ceSet$ by "$\equiv$", i.e. $\ceSetEquiv=\set{[S] \gvert S \in \ceSet}$.
Moreover, the unification and combination of the equivalence classes of \ces in $\ceSetEquiv$ can be
defined in a natural way as:
\begin{align}
[S_1] \combb [S_2] := [S_1 \combb S_2] && [S_1] \comb [S_2] := [S_1 \comb S_2]
\end{align}
\begin{theorem}
\label{main:alg:theorem:1}
The quotient set $\ceSetEquiv$ of \ces together with the unification operation enjoy the following properties.
\begin{enumerate}
\item The neutral element of the unification upon $\ceSetEquiv$ is $[@\E.\square]$.
\item The absorbing element of the unification is $[\emptylist]$.
\item The unification of \ces is associative i.e. $([S_1] \combb [S_2]) \combb [S_3] = [S_1] \combb ([S_2] \combb [S_3])$,
for any $S_1,S_2,S_3 \in \ceSet$.
\item The unification of \ces is (non-)commutative if and only if the operation of merging of contexts $\cdot: \mycal{T}_{\square} \times \mycal{T}_{\square} \rightarrow \mycal{T}_{\square}$ is (non-)commutative.
\item The unification of \ces is idempotent if and only if the operation of merging of contexts is idempotent,
that is, $[S] \combb [S]= [S]$ for any $S \in \ceSet$ iff $\tau \cdot \tau = \tau$ for any $\mybox$-term $\tau$ in $\mycal{T}_{\square}$.
\end{enumerate}
\end{theorem}
\begin{theorem}
\label{main:alg:theorem:2}
The quotient set $\ceSetEquiv$ of \ces together with the combination operation enjoy the following properties.
\begin{enumerate}
\item The neutral element of the combination upon $\ceSetEquiv$ is $[\emptylist]$.
\item The combination of \ces is associative, i.e. $([S_1] \comb [S_2]) \comb [S_3] = [S_1] \comb ([S_2] \comb [S_3])$,
for any $S_1,S_2,S_3 \in \ceSet$.
\item The combination of \ces is (non-)commutative if and only if the operation of merging of contexts $\cdot: \mycal{T}_{\square} \times \mycal{T}_{\square} \rightarrow \mycal{T}_{\square}$ is (non-)commutative.
\item The combination of \ces is idempotent if and only if the operation of merging of contexts is idempotent,
that is, $[S] \comb [S]= [S]$ for any $S \in \ceSet$ iff $\tau \cdot \tau = \tau$ for any $\mybox$-term $\tau$ in $\mycal{T}_{\square}$.
\end{enumerate}
\end{theorem}
\begin{theorem}[Congruence and non-degeneracy of the unification]
The following hold.
\begin{enumerate}
\item The unification of \ces is a congruence, that is, for any \ces ${S}_1,{S}_2, {S}$ in $\ceSet$, we have that:
\begin{align*}
\textrm{If } {S}_1 \equiv {S}_2 &&\tthen&& {S}_1 \nfcombb {S} \equiv {S}_2 \nfcombb {S} \;\tand\; {S} \nfcombb {S}_1 \equiv {S} \nfcombb {S}_2.
\end{align*}
\item The unification is non-degenerate, that is, for any \ces $[S]$ and $[S']$ in $\ceSetEquiv$, we have that
\begin{align*}
[S] \nfcombb [S'] = [\emptylist] &&\tiff && [S] = [\emptylist] \;\tor\; [S'] = [\emptylist].
\end{align*}
\end{enumerate}
\end{theorem}
\begin{theorem}[Congruence and non-degeneracy of the combination]
The following hold.
\begin{enumerate}
\item The combination of \ces is a congruence, that is, for any \ces ${S}_1,{S}_2, {S}$ in $\ceSet$, we have that:
\begin{align*}
\textrm{If } {S}_1 \equiv {S}_2 &&\tthen && {S}_1 \comb {S} \equiv {S}_2 \comb {S} \;\tand\; {S} \comb {S}_1 \equiv {S} \comb {S}_2.
\end{align*}
\item The combination is non-degenerate, that is, for any \ces $[S]$ and $[S']$ in $\ceSetEquiv$, we have that
\begin{align*}
[S] \comb [S'] = [\emptylist] &&\tiff && [S] = [\emptylist] \;\tand\; [S'] = [\emptylist].
\end{align*}
\end{enumerate}
\end{theorem}
\subsection{Correctness of the unification procedure}
Out of a \ce and a term it is possible to construct a position-based \ce.
The main purpose of this mapping is to formulate
a correctness-completeness criterion for the unification and combination
of \ces in terms of position-based \ces.
Roughly speaking, this criterion imposes that
the mapping of the combination of two \ces is equivalent to the
combination of their respective mappings.
\begin{definition}
\label{homomorphism:def}
A $(\ceSet,\eceSet)$-homomorphism is a function $ \Psi : \ceSet \times \mycal{T} \longrightarrow \eceSet $
that associates to each closed \ce $S$ in $\ceSet$ and each term $t$ in $\mycal{T}$ a position-based \ce $\Psi_t(S)$ in $\eceSet$ such that the semantic equivalence is preserved, that is,
\begin{align*}
\sembrackk{\Psi_t(S)}(t) = \sembrackk{S}(t).
\end{align*}
\end{definition}
We shall construct a $(\ceSet,\eceSet)$-homomorphism and prove its uniqueness in Section \ref{Psi:construction:subsec}.
The $(\ceSet,\eceSet)$-homomorphism $\Psi$ is useful to formulate the correctness of the unification and combination
of \ces in terms of the unification and combination of position-based \ces, as given in the following diagrams:
\[\begin{tikzcd}
\mycal{C} \times \mycal{C} \arrow{r}{\combb} \arrow[swap]{d}{\Psi_t \times \Psi_t} & \mycal{C} \arrow{d}{\Psi_t} \\
\mycal{E} \times \mycal{E} \arrow{r}{\combb} & \mycal{E}
\end{tikzcd}
\;\;\;\;\;\;\;
\begin{tikzcd}
\mycal{C} \times \mycal{C} \arrow{r}{\comb} \arrow[swap]{d}{\Psi_t \times \Psi_t} & \mycal{C} \arrow{d}{\Psi_t} \\
\mycal{E} \times \mycal{E} \arrow{r}{\comb} & \mycal{E}
\end{tikzcd}
\]
or equivalently stated in the following theorems:
\begin{theorem}[Correctness of the unification]
\label{main:theorem:1}
For every term $t \in \mycal{T}$ and for every \ces $S$ and $R$ in $\ceSet$,
we have that
\begin{align*}
\Psi_t(S \combb R) & = \Psi_t(S)\combb \Psi_t(R).
\end{align*}
\end{theorem}
\begin{theorem}[Correctness of the combination]
\label{main:theorem:2}
For every term $t \in \mycal{T}$ and for every \ces $S$ and $R$ in $\ceSetCan$,
we have that
\begin{align*}
\Psi_t(S \comb R) = \Psi_t(S) \comb \Psi_t(R).
\end{align*}
\end{theorem}
\section{Structure of the proof of the main results}
We proceed in two steps:
\begin{description}
\item[Step 1] We first show that the unification and combination of \ces are correct in the particular setting where the \ces are \emph{fixed-point free}.
More precisely, we shall show that the $(\ceSet,\eceSet)$-homomorphism commutes with the unification and combination (in the sense of Theorems \ref{main:theorem:1} and \ref{main:theorem:2}) within this particular setting.
The proof is relatively easy and will be exposed in Section \ref{proof:correction:fixed-point-free:sec}.
\item[Step 2] We reduce the general setting to the fixed-point free setting by replacing the fixed-point constructors by iterations whose number depends on the input term.
That is, we replace each fixed-point constructor $\mu X.S(X)$ with an iteration $S(S(\ldots S(\emptylist)\ldots))$.
The resulting \ce is called the \emph{unfolding} of the original \ce.
Clearly, the unfolding of a \ce is a fixed-point free one.
The idea is to show that the unification of two \ces is equivalent to the unification of their unfolding.
To accomplish this, we compare the structure of the resulting \ces by showing that \emph{the unfolding of the unification of two \ces is "almost" the unification of their unfolding}.
We illustrate this idea with the special setting where each \ce has exactly one fixed-point constructor.
We relate the series of reductions out of $\tuple{\ufold{S}{\mathbf{n}},\ufold{R}{\mathbf{n}},\emptyset}$ to those of $\tuple{S(\mu X.S'(X)),R(\mu Y.R'(Y)),\emptyset}$.
\end{description}
\begin{figure}[H]
\input{tree-ext-unif-simple.tex}
\caption{The structure of the \ce $\mu^n X.S(X) \combb {R}$ (left) and that of the \ce $\rename{\mu X.S(X)} \nfcombb {R}$ (right)
where $\xi^{n}$ stands for $\mu^n X.S(X)$}
\label{S:T:structure}
\end{figure}
\section{Properties of \ces and their semantics}
\subsection{Measures of \ces: the star height and the depth of \ces}
The structure of a \ce is no longer a tree but a tree with back-edges,
which may contain cycles.
We therefore slightly modify the standard measure of
the depth of trees in order to capture both the number of nested
cycles, caused by the nested application of the constructor $\mu$, and the
distance from the root of the tree to the leaves once the back-edges are removed.
The resulting measure will still be called \emph{depth},
and many proofs will be carried out by induction with respect to it.
We first adapt the definition of the star height \cite{eggan1963,Courcelle84}, which measures the nesting depth of the Kleene star $\star$ in regular expressions, to
\ces, in order to capture the number of nested fixed-point variables.
\begin{definition}[Star height of a \ce]
\label{def:star:height:strategy}
The star height of a \ce is the function $\h: \ceSet \longrightarrow \mathbb{N}$ defined inductively as follows.
\begin{align*}
\h(S) = \begin{cases}
0 & \tif S \textrm{ is fixed-point free} \\
\mmax\big\{\h(S'(X_1,\ldots,X_n)),\h(R_1),\ldots, \h(R_n)\big\} &\tif S = S'(R_1,\ldots,R_n), n \ge 1 \\
1+ \h(S') &\tif S=\mu X.S'
\end{cases}
\end{align*}
\end{definition}
For instance, if $S(X)$ and $R(Y)$ are fixed-point free \ces where $\mycal{V}ar(S(X)) \cap \mycal{V}ar(R(Y))=\emptyset$, then $\h\big(\mu X.S(X) \oplus \mu Y.R(Y)\big)=1$ since
the two fixed-point variables in $\mu X.S(X) \oplus \mu Y.R(Y)$ are not nested.
However, $\h\big(\mu X.\mu Y. (S(X) \oplus R(Y))\big)=2$ since the two fixed-point variables in $\mu X.\mu Y. (S(X) \oplus R(Y))$ are nested.
We define next the tree depth of a \ce that corresponds to the usual notion of depth of such a \ce after removing
all the back-edges.
\begin{definition}[Tree depth of a \ce]
\label{def:Delta:strategy}
The tree depth of a \ce is the function $\delta: \ceSet \longrightarrow \mathbb{N}$ defined inductively as follows.
\begin{align*}
\mathbf{\delta}(\emptylist) &= 0 \\
\mathbf{\delta}(X) &= 0 \\
\mathbf{\delta}(@\varepsilon.\mbf{\tau}) &= 1 \\
\mathbf{\delta}(u;S) &= 1 + \mathbf{\delta}(S) \\
\mathbf{\delta}(@p.S) &= 1 + \mathbf{\delta}(S) \\
\mathbf{\delta}(S_1 \oplus S_2) &= 1 + \mmax\{\mathbf{\delta}(S_1), \mathbf{\delta}(S_2)\} \\
\mathbf{\delta}(\bigand_{i=1,n} S_i) & = 1 + \mmax\{\mathbf{\delta}(S_1), \ldots, \mathbf{\delta}(S_n)\} \\
\mathbf{\delta}\big(\tifthen{S_1}{S}\big) & = 1 + \mmax\{\mathbf{\delta}(S_1),\mathbf{\delta}(S)\} \\
\mathbf{\delta}(\mu X.S(X)) & = \mathbf{\delta}(S(X))
\end{align*}
\end{definition}
We combine the star height and the tree depth to obtain the desired measure that
takes into account both the number of the nested cycles and the size of a \ce.
\begin{definition}[Depth of a \ce]
\label{def:depth:strategy}
The depth of a \ce is the function $\Delta: \ceSet \longrightarrow \mathbb{N} \times \mathbb{N}$ defined by
\begin{align*}
\Delta(S)=(\h(S),\delta(S)).
\end{align*}
\end{definition}
Notice that if a \ce $S$ is fixed-point free, i.e. it does not contain the
constructor $\mu$, then its depth $\Delta(S)=(0,n)$, for some $n \in \mathbb{N}$.
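For concreteness, both measures and their pairing translate directly into code; pairs in Haskell compare lexicographically, which is exactly the order used in the termination arguments. The sketch below, over an illustrative data type for \ces, is a transcription under assumptions; in particular, $\most(\cdot)$ is not listed in Definition \ref{def:Delta:strategy} and is treated here like a position.
\begin{verbatim}
data CE = Fail | Var String | Ins String | Pat String CE
        | At Int CE | Choice CE CE | And [CE] | Most CE
        | IfThen CE CE | Mu String CE

-- Star height: nesting depth of the fixed-point constructor mu.
h :: CE -> Int
h (Mu _ s)      = 1 + h s
h (Pat _ s)     = h s
h (At _ s)      = h s
h (Most s)      = h s
h (IfThen c s)  = max (h c) (h s)
h (Choice s s') = max (h s) (h s')
h (And ss)      = maximum (0 : map h ss)
h _             = 0          -- Fail, Var, Ins: fixed-point free

-- Tree depth: depth once the back-edges are removed (mu is transparent).
delta :: CE -> Int
delta Fail          = 0
delta (Var _)       = 0
delta (Ins _)       = 1
delta (Pat _ s)     = 1 + delta s
delta (At _ s)      = 1 + delta s
delta (Choice s s') = 1 + max (delta s) (delta s')
delta (And ss)      = 1 + maximum (0 : map delta ss)
delta (IfThen c s)  = 1 + max (delta c) (delta s)
delta (Most s)      = 1 + delta s   -- assumption: treated as a position
delta (Mu _ s)      = delta s

-- Delta(S) = (h(S), delta(S)), compared lexicographically.
bigDelta :: CE -> (Int, Int)
bigDelta s = (h s, delta s)
\end{verbatim}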
The following fact shows that the depth of a fixed-point \ce is strictly greater than the depth of its unfolding.
\begin{fact}
\label{Delta:monotonic:fact}
Let $\mu X.S(X)$ be a \ce where $X$ is free in $S(X)$.
Then for any integer $n\ge 0$ we have
\begin{align*}
\Delta(\mu^{n} X.S(X)) < \Delta(\mu X.S(X)).
\end{align*}
\end{fact}
\begin{proof}
The case when $n=0$ is trivial since $\Delta(\mu^{0} X.S(X))=\Delta(\emptylist)=(0,0)$.
We show next that $\h(\mu X.S(X))=1 + \h(\mu^{n} X.S(X))$ for any $ n \ge 1$.
It follows from the definition of the star height that $\h(\mu^{n} X.S(X))=\h\big(S(S(\ldots(S(\emptylist))))\big)=\mmax\{\h(S(X)),\h(S(\emptylist))\}=\h(S(\emptylist)) = \h(S(X))$.
On the other hand, by the definition of the star height, $\h(\mu X.S(X))=1 + \h(S(X))$. It then follows from the lexicographic order that $\Delta(\mu^{n} X.S(X))< \Delta(\mu X.S(X))$.
\end{proof}
We next define the replacement of each fixed-point constructor of a \ce by an iteration.
The resulting \ce is obviously fixed-point free.
\begin{definition}[Unfolding of a \ce]
\label{ufold:def}
Let $S$ be a \ce with (bound) fixed-point variables $X_1,\ldots,X_r$ and let $\mathbf{n}:\{X_1,\ldots,X_r\} \to \mathbb{N}$ be a mapping.
The unfolding of $S$ with respect to $\mathbf{n}$, denoted by $\ufold{S}{\mathbf{n}}$, is inductively defined as follows:
\begin{align*}
\ufold{\emptylist}{\mathbf{n}} &= \emptylist \\
\ufold{X}{\mathbf{n}} &= X \\
\ufold{@\varepsilon.\mbf\tau}{\mathbf{n}} &=@\varepsilon.\mbf\tau \\
\ufold{u;S}{\mathbf{n}} &= u;\ufold{S}{\mathbf{n}} \\
\ufold{@p.S}{\mathbf{n}} &= @p.\ufold{S}{\mathbf{n}} \\
\ufold{S_1 \oplus S_2}{\mathbf{n}} &= \ufold{S_1}{\mathbf{n}} \oplus \ufold{S_2}{\mathbf{n}} \\
\ufold{\bigand_{i=1,m} S_i}{\mathbf{n}} & = \bigand_{i=1,m} \ufold{S_i}{\mathbf{n}} \\
\ufold{\tifthen{S_1}{S_2}}{\mathbf{n}} & = \tifthen{\ufold{S_1}{\mathbf{n}}}{\ufold{S_2}{\mathbf{n}}} \\
\ufold{\mu X.S(X)}{\mathbf{n}} & = \mu^{\mathbf{n}(X)}X.\ufold{S(X)}{\mathbf{n}}
\end{align*}
\end{definition}
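Read as a program, Definition \ref{ufold:def} is homomorphic in every case except the fixed point, which is iterated $\mathbf{n}(X)$ times with the convention $\mu^0 X.S=\emptylist$. A Haskell sketch of this reading follows (illustrative data type; conjunction omitted for brevity); it is an aid to intuition, not part of the formal development.
\begin{verbatim}
import qualified Data.Map as M

data CE = Fail | Var String | Ins String | Pat String CE
        | At Int CE | Choice CE CE | Most CE | IfThen CE CE
        | Mu String CE

-- Capture-free substitution of r for the variable x.
subst :: String -> CE -> CE -> CE
subst x r (Var y) | x == y  = r
subst x r (Pat u s)         = Pat u (subst x r s)
subst x r (At p s)          = At p (subst x r s)
subst x r (Choice s s')     = Choice (subst x r s) (subst x r s')
subst x r (Most s)          = Most (subst x r s)
subst x r (IfThen c s)      = IfThen (subst x r c) (subst x r s)
subst x r (Mu y s) | x /= y = Mu y (subst x r s)
subst _ _ s                 = s

-- mu^k X.S: the k-fold iteration, bottoming out at the failure.
iterMu :: Int -> String -> CE -> CE
iterMu 0 _ _ = Fail
iterMu k x s = subst x (iterMu (k - 1) x s) s

-- The unfolding of S with respect to the mapping n.
unfoldCE :: M.Map String Int -> CE -> CE
unfoldCE n (Mu x s)      = iterMu (M.findWithDefault 0 x n) x (unfoldCE n s)
unfoldCE n (Pat u s)     = Pat u (unfoldCE n s)
unfoldCE n (At p s)      = At p (unfoldCE n s)
unfoldCE n (Choice s s') = Choice (unfoldCE n s) (unfoldCE n s')
unfoldCE n (Most s)      = Most (unfoldCE n s)
unfoldCE n (IfThen c s)  = IfThen (unfoldCE n c) (unfoldCE n s)
unfoldCE _ s             = s      -- Fail, Var, Ins
\end{verbatim}
The result of \texttt{unfoldCE} contains no \texttt{Mu} constructor, matching the remark above that the unfolding of a \ce is fixed-point free.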
\begin{lemma}
Let $S$ be a \ce with (bound) fixed-point variables $X_1,\ldots,X_r$ and let $\mathbf{n}:\{X_1,\ldots,X_r\} \to \mathbb{N}$ be a mapping
with $n=\min\set{\mathbf{n}(X_1),\ldots,\mathbf{n}(X_r)}$.
Then there is an $n$-path simulation of $S$ by $\ufold{S}{\mathbf{n}}$ and of $\ufold{S}{\mathbf{n}}$ by $S$.
\end{lemma}
\begin{lemma}
\label{unfold:equiv:lemma}
Let $S$ be a \ce with (bound) fixed-point variables $X_1,\ldots,X_r$ and let $\mathbf{n}:\{X_1,\ldots,X_r\} \to \mathbb{N}$ and $\mathbf{k}:\{X_1,\ldots,X_r\} \to \mathbb{N}$ be mappings
with $n=\min\set{\mathbf{n}(X_1),\ldots,\mathbf{n}(X_r)}$ and \\ $m=\min\set{\mathbf{n}(X_1),\ldots,\mathbf{n}(X_r),\mathbf{k}(X_1),\ldots,\mathbf{k}(X_r)}$. Then
\begin{itemize}
\item $\ufold{S}{\mathbf{n}} \equiv_n S$, and
\item if $S \equiv_{n} R$ and $R \equiv_{k} T$ then $S \equiv_{m} T$.
\end{itemize}
\end{lemma}
\begin{definition}
\label{Pi:def}
Let $S(X)$ be a \ce in which the fixed-point variable $X$ is free and appears exactly once.
The number of positions between $X$ and the root of $S(X)$, denoted by $\Pi_X(S(X))$, is the number of positions ($@i.$) or $\most(\cdot)$ constructors crossed on the path from the root of $S(X)$ to $X$; it is inductively defined as follows:
\begin{align*}
\Pi_X(X) &= 0 \\
\Pi_X(u;S'(X)) &= \Pi_X(S'(X)) \\
\Pi_X(S_1(X) \oplus S_2) &= \Pi_X(S_1(X)) \\
\Pi_X\big(\tifthen{S''}{S'(X)}\big) & = \Pi_X(S'(X)) \\
\Pi_X(\mu Y.S'(X,Y)) & = \Pi_X(S'(X,Y))\\
\Pi_X\big( (\bigand_{i=1,m} @i.S_i) \wedge @j.S'(X)\big) & = 1+ \Pi_X(S'(X)) \\
\Pi_X(\most(S'(X))) & = 1+ \Pi_X(S'(X))
\end{align*}
\end{definition}
Notice that if $S$ is monotonic, then for every sub-\ce $\mu X.S'(X)$ of $S$, we have that $\Pi_X(S'(X))\ge 1$.
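The measure $\Pi_X$ can likewise be computed mechanically by walking the unique path to $X$. The sketch below returns \texttt{Nothing} when $X$ does not occur; searching both branches of choices and conditionals is a harmless generalization of the definition, which places $X$ in a specific branch, and the $+1$ contributed by the conjunction case is carried here by the position constructor itself.
\begin{verbatim}
data CE = Fail | Var String | Ins String | Pat String CE
        | At Int CE | Choice CE CE | Most CE | IfThen CE CE
        | Mu String CE

-- piX x s: number of positions or Most(.) crossed from the root of s
-- to the (unique) occurrence of x; Nothing if x does not occur in s.
piX :: String -> CE -> Maybe Int
piX x (Var y) | x == y = Just 0
piX x (Pat _ s)        = piX x s
piX x (Choice s s')    = piX x s `orElse` piX x s'
piX x (IfThen c s)     = piX x c `orElse` piX x s
piX x (Mu _ s)         = piX x s
piX x (At _ s)         = fmap (1 +) (piX x s)   -- a position is crossed
piX x (Most s)         = fmap (1 +) (piX x s)   -- a Most(.) is crossed
piX _ _                = Nothing

orElse :: Maybe a -> Maybe a -> Maybe a
orElse (Just n) _ = Just n
orElse Nothing  m = m
\end{verbatim}
On a monotone \ce, every \texttt{Mu}-bound variable is reached only through at least one \texttt{At} or \texttt{Most}, so \texttt{piX} returns a value $\ge 1$, in accordance with the remark above.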
\subsection{From \ces to position-based \ces: the construction of the $(\ceSet,\eceSet)$-homomorphism}
\label{Psi:construction:subsec}
The definition of this mapping follows.
\begin{definition}
\label{psi:def}
We define the function
\begin{align*}
\Psi : \ceSet \times \mycal{T} \longrightarrow \eceSet
\end{align*}
that associates to each closed \ce $S$ in $\ceSet$ and each term
$t$ in $\mycal{T}$ a position-based \ce $\Psi_t(S)$ in $\eceSet$ by
\begin{enumerate}
\item \label{psi:def:item:empty} $\Psi_t(\emptylist) = \emptylist$
\item \label{psi:def:item:insert} $\Psi_t(@\epsilon.\mbf{\tau})= @\epsilon.\mbf{\tau}$
\item \label{psi:def:item:choice} $ \Psi_t(S \oplus S') = \begin{cases}
\Psi_t(S) & \textrm{if } \Psi_t(S) \neq \emptylist, \\
\Psi_t(S') & \textrm{otherwise}.
\end{cases}$
\item \label{psi:def:item:mu} $\Psi_t(\mu X.S(X)) = \Psi_t\big(\mu^{\delta(t)} X.S(X)\big)$
\item \label{psi:def:item:pattern} $ \Psi_t(u;S) = \begin{cases}
\Psi_t(S) & \textrm{if } \match{u}{t}, \\
\emptylist & \textrm{otherwise}.
\end{cases}$
\item \label{psi:def:item:ifthen} $ \Psi_t\big(\tifthen{S'}{S}\big) =
\begin{cases}
\Psi_t(S) & \textrm{if } \Psi_t(S') \neq \emptylist ,\\
\emptylist & \textrm{otherwise}.
\end{cases}$
\item \label{psi:def:item:and} $\Psi_t\big(\bigand_{i=1,n}@p_i.S_i\big) = \theta\big(\bigand_{i=1,n} @p_i.\Psi_{t_{|i}}(S_i) \big)$
\item \label{psi:def:item:most} $\Psi_t(\most(S)) = \Psi_t\big(\bigand_{i=1,ar(t)} @i.S \big)$
\end{enumerate}
\end{definition}
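Definition \ref{psi:def} is, in effect, a recursive evaluator, and transcribing it as one is a useful sanity check. In the Haskell sketch below, matching is modelled as root-symbol equality and $\theta$ as the removal of failing conjuncts (returning the failure when none survive); both are simplifying assumptions standing in for the actual definitions, and the positions occurring in conjunctions are assumed to be valid for $t$.
\begin{verbatim}
data Term = Node String [Term]

data CE = Fail | Var String | Ins String | Pat String CE
        | And [(Int, CE)] | Most CE | Choice CE CE
        | IfThen CE CE | Mu String CE

depthT :: Term -> Int
depthT (Node _ ts) = 1 + maximum (0 : map depthT ts)

matches :: String -> Term -> Bool     -- assumption: root-symbol matching
matches u (Node f _) = u == f

theta :: [(Int, CE)] -> CE            -- assumption: drop failing conjuncts
theta ps = case filter (notFail . snd) ps of
             [] -> Fail
             qs -> And qs
  where notFail Fail = False
        notFail _    = True

subst :: String -> CE -> CE -> CE
subst x r (Var y) | x == y  = r
subst x r (Pat u s)         = Pat u (subst x r s)
subst x r (And ps)          = And [(i, subst x r s) | (i, s) <- ps]
subst x r (Most s)          = Most (subst x r s)
subst x r (Choice s s')     = Choice (subst x r s) (subst x r s')
subst x r (IfThen c s)      = IfThen (subst x r c) (subst x r s)
subst x r (Mu y s) | x /= y = Mu y (subst x r s)
subst _ _ s                 = s

iterMu :: Int -> String -> CE -> CE   -- the k-fold iteration mu^k X.S
iterMu 0 _ _ = Fail
iterMu k x s = subst x (iterMu (k - 1) x s) s

psi :: Term -> CE -> CE
psi _ Fail          = Fail                                 -- item 1
psi _ (Ins tau)     = Ins tau                              -- item 2
psi t (Choice s s') = case psi t s of Fail -> psi t s'     -- item 3
                                      e    -> e
psi t (Mu x s)      = psi t (iterMu (depthT t) x s)        -- item 4
psi t (Pat u s)     | matches u t = psi t s                -- item 5
                    | otherwise   = Fail
psi t (IfThen c s)  = case psi t c of Fail -> Fail         -- item 6
                                      _    -> psi t s
psi (Node _ ts) (And ps) =                                 -- item 7
  theta [ (i, psi (ts !! (i - 1)) s) | (i, s) <- ps ]
psi t@(Node _ ts) (Most s) =                               -- item 8
  psi t (And [ (i, s) | i <- [1 .. length ts] ])
psi _ (Var _)       = Fail   -- free variables: absent in closed strategies
\end{verbatim}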
\begin{lemma}
\label{psi:sem:lemma}
The mapping $\Psi$ constructed in the Definition \ref{psi:def} is a $(\ceSet,\eceSet)$-homomorphism, that is,
for any \ce $S$ in $\ceSet$ and any term $t$ in $\mycal{T}$, we have that
\begin{align*}
\sembrackk{\Psi_t(S)}(t) = \sembrackk{S}(t).
\end{align*}
\end{lemma}
The proof of this lemma presents no difficulties,
since the definition of $\Psi$ closely follows the definition of the semantics of \ces.
The following properties are direct consequences of this definition.
\begin{lemma}
\label{nice:prop:Psi:lemma}
The mapping $\Psi$ satisfies the following properties for any closed \ces $S,S',R,R'$, any position-based \ce $E$, and any terms $t,u$:
\begin{enumerate}
\item \begin{enumerate}
\item \label{Properties-of-Psi:Lemma:item:0} $\Psi_t(E) = E$.
\item \label{Properties-of-Psi:Lemma:item:0'} $\Psi_t(\Psi_t(S)) = \Psi_t(S)$.
\end{enumerate}
\item \label{Properties-of-Psi:Lemma:item:1} $\Psi_t(u;S) = \Psi_t(u;\Psi_t(S))$.
\item \label{Properties-of-Psi:Lemma:item:2} $\Psi_t(S \oplus S') = \Psi_t(\Psi_t(S)\oplus \Psi_t(S'))$.
\item \begin{enumerate}
\item \label{Properties-of-Psi:Lemma:item:3''} $\Psi_t(\tifthen{S'}{S}) = \Psi_t(\tifthen{\Psi_t(S')}{S})$.
\item \label{Properties-of-Psi:Lemma:item:3} $\Psi_t(\tifthen{S'}{S}) = \Psi_t(\tifthen{S'}{\Psi_t(S)})$.
\item \label{Properties-of-Psi:Lemma:item:3'''} $\Psi_t(\tifthen{S'}{S}) = \Psi_t(\tifthen{R'}{S})$ if $\Psi_t(S')=\Psi_t(R')$.
\item \label{Properties-of-Psi:Lemma:item:3'} $\Psi_t(\tifthen{S'}{S}) = \Psi_t(\tifthen{\theta(S')}{\Psi_t(S)})$.
\end{enumerate}
\item \begin{enumerate}
\item \label{Properties-of-Psi:Lemma:item:4} $\Psi_t(S \wedge R) = \Psi_t\big( S \wedge R' \big)$ if $ \Psi_t(R)=\Psi_t(R')$.
\item \label{Properties-of-Psi:Lemma:item:4'} $\Psi_t(S \wedge R) = \Psi_t(S)$ if $ \Psi_t(R)=\emptylist$.
\end{enumerate}
\end{enumerate}
\end{lemma}
It turns out that the function $\Psi$ (Definition \ref{psi:def})
characterizes the semantic equivalence of \ces in the following sense.
\begin{lemma}
\label{nice:prop:Psi:lemma:}
The function $\Psi$ enjoys the following properties.
\begin{enumerate}[i.)]
\item \label{item:1:nice:prop:Psi:lemma} For any position-based \ces $E, E'$ in $\eceSet$, we have that
$E = E'$ iff $\Psi_t(E)=\Psi_t(E')$ for any term $t$.
\item \label{item:2:nice:prop:Psi:lemma} For any \ces ${S},{S}'$ in $\ceSet$, we have that
${S} \equiv {S}'$ iff $\Psi_t({S})=\Psi_t({S}')$ for any term $t$.
\item \label{item:3:nice:prop:Psi:lemma} For any \ces ${S},{S}'$ in $\ceSet$, we have that
${S} \equiv_n {S}'$ iff $\Psi_t({S})=\Psi_t({S}')$ for any term $t$ of depth $\delta(t)=n$.
\end{enumerate}
\end{lemma}
\begin{proof}
We only prove Item \emph{ii.)}; the other items follow immediately
from the definition of $\Psi$. On the one hand, from the definition
of $\equiv$ we have that
\begin{align*}
S \equiv S' &&\tiff && \sembrackk{S}(t) = \sembrackk{S'}(t), \;\;
\forall t \in \mycal{T}.
\end{align*}
However, it follows from Lemma \ref{psi:sem:lemma} that
\begin{align*}
\sembrackk{S}(t) = \sembrackk{\Psi_t(S)}(t) &&\tand &&
\sembrackk{S'}(t) = \sembrackk{\Psi_t(S')}(t).
\end{align*}
Therefore,
\begin{align*}
\sembrackk{\Psi_t(S)}(t) = \sembrackk{\Psi_t(S')}(t), & \forall t
\in \mycal{T}.
\end{align*}
Since both $\Psi_t(S)$ and $\Psi_t(S')$ are position-based \ces, it
follows from
Item \ref{item:1:nice:prop:Psi:lemma} of this Lemma that $\Psi_t(S) =\Psi_t(S')$.
\end{proof}
\subsection{Properties of the semantics of \ces}
\begin{lemma}
\label{depth:position:composition:lemma}
Let $S(X)$, $R$ and $R'$ be \ces where the fixed-point variable $X$ appears once in $S(X)$, and let $m\ge 0$.
\begin{enumerate}
\item \label{depth:position:composition:lemma:item:1} If $R \equiv_{m} R'$ and $m'=\Pi_{X}(S(X))$ then $S(R)\equiv_{m'+m} S(R')$.
\item \label{depth:position:composition:lemma:item:2} If $R \equiv_{m} R'$ and $m'<m$ then $S(R)\equiv_{m'} S(R')$.
\end{enumerate}
\end{lemma}
\begin{proof}
\end{proof}
\begin{corollary}
\label{fixed:point:semantics:corollary}
Let $T(X)$ and $R$ be \ces.
Then, for any term $t$ with depth $n={\delta(t)}$ and any integer $m\ge 0$,
we have that
\begin{align}
\label{fixed:point:semantics:eq}
\sembrackk{\mu^{n+m}X.T(X)}(t) = \sembrackk{\mu^{n} X.T(X)}(t) = \sembrackk{T^{n}(R)}(t) .
\end{align}
\end{corollary}
\begin{corollary}
\label{general-fixed-point-corollary}
Let $T(X)$ and $R$ be \ces.
For any term $t$ in $\mycal{T}$ with depth $n=\delta(t)$,
\begin{align*}
\tif && T(R) \equiv_{n} R && \tthen && \mu X.T(X) \equiv_{n} R.
\end{align*}
\end{corollary}
\begin{proof}
If $T(R) \equiv_{n} R$ then $T^{n}(R) \equiv_{n}R$; the claim then
follows from the fact that $T(\mu^{n}Z.T(Z)) \equiv_n \mu^{n}Z.T(Z)$, proved in Corollary \ref{fixed:point:semantics:corollary}.
\end{proof}
\newpage
\section{Proof of the correctness of the unification of \ces: the fixed-point free setting}
\label{proof:correction:fixed-point-free:sec}
In this section we prove the correctness of the unification procedure in the case where the two input \ces are fixed-point free (Lemma \ref{main:lemma:unif:fixed-point-free}).
This is an important step, since in Section \ref{correction:unif:general:setting:sec} we shall reduce the general setting to the fixed-point free one.
We notice that, in the fixed-point free setting, the memory involved in the unification system $\Unif$ remains empty and does not play any role, since
the only rules that modify the memory are the fixed-point ones, and such rules are obviously not applied when the input \ces are fixed-point free.
Besides, in this setting, the proof of the termination and the confluence of $\Unif$ is trivial:
$\Unif$ terminates since each rule transforms a left-hand side \ce into its immediate sub-\ces.
We first show in Lemma \ref{normalization:of:unif:Lemma} the correctness of the unification of a particular form of \ces,
namely the \ces given by a conjunction of failures and insertions.
But before that, we need a simple set theoretic fact.
\begin{fact}
\label{set:theoretic:fact}
Let $I',J',J''$ be sets with $J' \cap J''=\emptyset$.
Then, $(I' \cap J'')\cup (I' \setminus (J' \cup J''))=I' \setminus J'$.
\end{fact}
\begin{proof}
\begin{align*}
(I' \cap J'')\cup (I' \setminus (J' \cup J''))
& = \set{x\gvert x \in I' \tand x\in J''} \cup \set{x \gvert x\in I' \tand x \notin J' \tand x \notin J''} \\
& = \set{x\gvert x \in I' \tand \big(x\in J'' \tor (x \notin J' \tand x \notin J'')\big)} \\
& = \set{x\gvert x \in I' \tand x \notin J' }\\
& = I' \setminus J'
\end{align*}
where the third equality holds because, by the disjointness of $J'$ and $J''$, $x\in J''$ implies $x \notin J'$; conversely, if $x \notin J'$ then either $x\in J''$ or ($x \notin J'$ and $x \notin J''$).
\end{proof}
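Since the disjointness hypothesis on $J'$ and $J''$ is easy to overlook, a throwaway Haskell check enumerating all small instances may be reassuring; it is a test sketch only, not part of the development.
\begin{verbatim}
import Data.List (intersect, subsequences, union, (\\))

-- (I' /\ J'') \/ (I' - (J' \/ J'')) == I' - J', whenever J' /\ J'' = {}.
holds :: [Int] -> [Int] -> [Int] -> Bool
holds i' j' j'' = lhs `sameSet` rhs
  where lhs = (i' `intersect` j'') `union` (i' \\ (j' `union` j''))
        rhs = i' \\ j'
        sameSet xs ys = all (`elem` ys) xs && all (`elem` xs) ys

main :: IO ()
main = print $ and
  [ holds i' j' j''
  | let u = [1 .. 4]
  , i'  <- subsequences u
  , j'  <- subsequences u
  , j'' <- subsequences u
  , null (j' `intersect` j'') ]
\end{verbatim}
Running \texttt{main} prints \texttt{True}; dropping the disjointness filter makes it print \texttt{False}, e.g.\ on $I'=J'=J''=\{1\}$.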
\begin{lemma}
\label{normalization:of:unif:Lemma}
Let $S=\bigand_{i \in I} @i.S_i$ and $R=\bigand_{j \in J} @j.R_j$ be two \ces where each $S_i$ and each $R_j$ is either the failure $\emptylist$ or an insertion $@\epsilon.\tau$, for some $\mybox$-term $\tau$ in $\mycal{T}_{\square}$.
Then,
\begin{align}
\label{normalization:of:unif:Lemma:eq}
\Psi_t\big(S\combb R\big)=\Psi_t\big(\theta(S)\combb \theta(R)\big)
\end{align}
\end{lemma}
\begin{proof}
Assume that
\begin{align*}
S=\bigand_{i \in I'} @i.S_i \wedge \bigand_{i \in I''} @i.\emptylist &&\tand&& R=\bigand_{j \in J'} @j.R_j \wedge \bigand_{j \in J''} @j.\emptylist \\
\end{align*}
where $S_i \in \mycal{T}_{\square} $ for any $i\in I'$, and $R_j \in \mycal{T}_{\square} $ for any $j\in J'$, and $I' \cap I''=\emptyset$ and $J' \cap J''=\emptyset$. Therefore,
\begin{align*}
\theta(S)=\bigand_{i \in I'} @i.S_i &&\tand&& \theta(R)=\bigand_{j \in J'} @j.R_j.
\end{align*}
Consider the \ces $\Lambda$ and $\tilde{\Lambda}$:
\begin{align}
\Lambda &= \Big(\bigand_{i \in I'} @i.S_i \wedge \bigand_{i \in I''} @i.\emptylist \Big) \combb \Big(\bigand_{j \in J'} @j.R_j \wedge \bigand_{j \in J''} @j.\emptylist \Big) \notag \\
\tilde{\Lambda} &= \bigand_{i \in I' \cap J'} @i.(S_i\combb R_i \oplus S_i \oplus R_i) \wedge \bigand_{i \in I' \setminus J'} @i.S_i \wedge \bigand_{i \in J' \setminus I'} @i.R_i \notag
\end{align}
By computing the \ces $S \combb R$ and $\theta(S) \combb \theta(R)$ involved in the left-hand side and the right-hand side of Eq.(\ref{normalization:of:unif:Lemma:eq}) respectively, we get:
\begin{align}
\Psi_t\big(S \combb R\big) &= \Psi_t\big(\tifthen{S\& R}{\Lambda}\big) \tag{Item \ref{list:ext} of Def. \ref{reduction:unif:def} of $\combb$ } \\
&= \Psi_t(\tifthen{S\& R}{\Psi_t(\Lambda)}) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}}\\
& \tand \notag \\
\Psi_t(\theta(S) \combb \theta(R)) &= \Psi_t(\tifthen{\theta(S)\& \theta(R)}{\tilde{\Lambda}}) \tag{Item \ref{list:ext} of Def. \ref{reduction:unif:def} of $\combb$ } \\
& =\Psi_t(\tifthen{S\& R}{\tilde{\Lambda}}) \tag{Item \ref{Properties-of-Psi:Lemma:item:3'} of Lemma \ref{nice:prop:Psi:lemma}}\\
& =\Psi_t(\tifthen{S\& R}{\Psi_t(\tilde{\Lambda})}) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}}
\end{align}
Hence to prove Eq.(\ref{normalization:of:unif:Lemma:eq}) we need to show that $\Psi_t(\tilde{\Lambda})=\Psi_t(\Lambda)$. By expanding $\Lambda$ we get
\begin{align}
\Lambda &= \bigand_{i \in I' \cap J'} @i.(S_i\combb R_i \oplus S_i \oplus R_i) \wedge
\bigand_{i \in I' \cap J''} @i.(S_i\combb \emptylist \oplus S_i \oplus \emptylist) \wedge
\bigand_{i \in I''\cap J'} @i.(\emptylist \combb R_i \oplus \emptylist \oplus R_i ) \wedge \notag \\
&\;\; \bigand_{i \in I'' \cap J''} @i.\emptylist \wedge
\bigand_{i \in I' \setminus (J' \cup J'') } @i.S_i \wedge
\bigand_{i \in I'' \setminus (J' \cup J'') } @i. \emptylist \wedge
\bigand_{i \in J' \setminus (I' \cup I'') } @i.R_i \wedge
\bigand_{i \in J'' \setminus (I' \cup I'') } @i. \emptylist \tag{Item \ref{list:ext} Def. \ref{reduction:unif:def} of $\combb$}
\end{align}
Therefore, $\Psi_t(\Lambda)$ can be written as
\begin{align}
\Psi_t(\Lambda)
&= \Psi_t\Big(\bigand_{i \in I' \cap J'} @i.(S_i\combb R_i \oplus S_i \oplus R_i) \wedge
\bigand_{i \in I' \cap J''} @i.S_i \wedge
\bigand_{i \in I''\cap J'} @i.R_i \wedge \notag
\bigand_{i \in I'' \cap J''} @i.\emptylist \wedge\\
&\;\; \bigand_{i \in I' \setminus (J' \cup J'') } @i.S_i \wedge
\bigand_{i \in I'' \setminus (J' \cup J'') } @i. \emptylist \wedge
\bigand_{i \in J' \setminus (I' \cup I'') } @i.R_i \wedge
\bigand_{i \in J'' \setminus (I' \cup I'') } @i. \emptylist \Big) \tag{since $\Psi_t(S_i\combb \emptylist \oplus S_i \oplus \emptylist)=\Psi_t(S_i)$ and $\Psi_t(\emptylist \combb R_i \oplus \emptylist \oplus R_i)=\Psi_t(R_i)$,
by Item \ref{Properties-of-Psi:Lemma:item:4} of Lemma \ref{nice:prop:Psi:lemma}}
by Item \ref{Properties-of-Psi:Lemma:item:4} of Lemma \ref{nice:prop:Psi:lemma}}\\
& \notag \\
&= \Psi_t\Big(\bigand_{i \in I' \cap J'} @i.(S_i\combb R_i \oplus S_i \oplus R_i) \wedge
\bigand_{i \in I' \cap J''} @i.S_i \wedge
\bigand_{i \in I' \setminus (J' \cup J'') } @i.S_i \wedge
\bigand_{i \in J' \setminus (I' \cup I'') } @i.R_i \wedge
\bigand_{i \in I''\cap J'} @i.R_i \Big) \tag{since $\Psi_t(@i.\emptylist)=\emptylist$, by Item \ref{Properties-of-Psi:Lemma:item:4'} of Lemma \ref{nice:prop:Psi:lemma}}\\
& \notag \\
&= \Psi_t\Big( \bigand_{i \in I' \cap J'} @i.(S_i\combb R_i \oplus S_i \oplus R_i) \wedge
\bigand_{i \in I' \setminus J'} @i.S_i \wedge
\bigand_{i \in J' \setminus I'} @i.R_i \Big) \tag{since $(I' \cap J'')\uplus (I' \setminus (J' \cup J''))=I' \setminus J'$ and $(J'\cap I'')\uplus (J'\setminus (I'\cup I''))=J' \setminus I'$, by Fact \ref{set:theoretic:fact}} \\
& = \Psi_t(\tilde{\Lambda}) \tag{Def. of $\tilde{\Lambda}$}
\end{align}
\end{proof}
We show in the following lemma that the $(\ceSet,\eceSet)$-homomorphism $\Psi$ interacts smoothly with the unification of position-based \ces.
\begin{lemma}
\label{psi:unif-congruence:Lemma}
The $(\ceSet,\eceSet)$-homomorphism $\Psi$ satisfies the following properties for any closed \ces $S,S'$ and any position-based \ce $E$ and any terms $t,u$:
\begin{enumerate}
\item \begin{enumerate} \item \label{psi:unif-congruence:Lemma:item:1} $ \Psi_t\big(u ; \big(\Psi_t(S) \combb E \big) \big) = \Psi_t(u ; S) \combb E$.
\item \label{psi:unif-congruence:Lemma:item:1'} $\Psi_t\big(u ; \big(E \combb \Psi_t(S)\big) \big) = \Psi_t \big(E \combb \Psi_t(u ; S)\big) $.
\end{enumerate}
\item \begin{enumerate} \item \label{psi:unif-congruence:Lemma:item:2} $\Psi_t\big( (\Psi_t(S) \oplus \Psi_t(S')) \combb E \big) = \Psi_t (S \oplus S') \combb E$.
\item \label{psi:unif-congruence:Lemma:item:2'} $E \combb \Psi_t\big(\Psi_t(S)\oplus \Psi_t(S')\big) = E \combb \Psi_t(S \oplus S')$.
\end{enumerate}
\item \begin{enumerate} \item \label{psi:unif-congruence:Lemma:item:3} $\Psi_t\big(\tifthen{S'}{(\Psi_t(S) \combb E)}\big)= \Psi_t\big(\tifthen{S'}{\Psi_t(S)}\big) \combb E$.
\item \label{psi:unif-congruence:Lemma:item:3'} $\Psi_t\big(\tifthen{S'}{(E \combb \Psi_t(S))}\big) = E \combb \Psi_t(\tifthen{S'}{\Psi_t(S)})$.
\end{enumerate}
\end{enumerate}
\end{lemma}
\begin{proof}
We only prove the cases \ref{psi:unif-congruence:Lemma:item:1} and \ref{psi:unif-congruence:Lemma:item:2} and \ref{psi:unif-congruence:Lemma:item:3} since
the proof of the cases \ref{psi:unif-congruence:Lemma:item:1'} and \ref{psi:unif-congruence:Lemma:item:2'} and \ref{psi:unif-congruence:Lemma:item:3'} is similar.
\begin{enumerate}
\item \begin{enumerate}
\item We distinguish two cases depending on whether $u$ matches with $t$. If $u$ matches with $t$ then the left-hand side of the equation is
\begin{align}
\Psi_t\big(u ; \big(\Psi_t(S) \combb E \big)\big) &= \Psi_t(\Psi_t(S) \combb E) \tag{Def. of $\Psi$} \\
&= \Psi_t(S) \combb E \tag{since $\Psi_t(S) \combb E$ is a position-based \ce, Item \ref{Properties-of-Psi:Lemma:item:0} of Lemma \ref{nice:prop:Psi:lemma}},
\end{align}
and the right-hand side of the equation is $\Psi_t (u ; S) \combb E = \Psi_t(S) \combb E$ by the definition of $\Psi$, which is equal to the left-hand side.
If $u$ does not match with $t$ then the left-hand side of the equation is $\emptylist$ by the definition of $\Psi$, and the right-hand side is
$\Psi_t (u ; S) \combb E= \emptylist \combb E = \emptylist$.
\end{enumerate}
\item \begin{enumerate}
\item We distinguish two cases depending on whether $\Psi_t(S)=\emptylist$. If $\Psi_t(S)=\emptylist$ then the left-hand side of the equation is
\begin{align}
\Psi_t\big( (\Psi_t(S) \oplus \Psi_t(S')) \combb E \big) & = \Psi_t\big( (\emptylist \oplus \Psi_t(S')) \combb E \big) \notag \\
& = \Psi_t\big( \Psi_t(S') \combb E \big) \notag \\
&=\Psi_t(S') \combb E \tag{since $\Psi_t(S') \combb E$ is position-based, Item \ref{Properties-of-Psi:Lemma:item:0} of Lemma \ref{nice:prop:Psi:lemma}}
\end{align}
and the right-hand side of the equation is $\Psi_t (S \oplus S') \combb E = \Psi_t (S') \combb E$ by the definition of $\Psi$, which is equal to the left-hand side.
If $\Psi_t(S) \neq \emptylist$, then the left-hand side of the equation is $\Psi_t\big( (\Psi_t(S) \oplus \Psi_t(S')) \combb E \big) = \Psi_t\big(\Psi_t(S) \combb E \big)$
by the definition of $\Psi$, which is equal to $\Psi_t(S) \combb E $, since $\Psi_t(S) \combb E$ is position-based.
For the right-hand side, we have $\Psi_t(S \oplus S')=\Psi_t(S)$ by the definition of $\Psi$, thus we get the desired result.
\end{enumerate}
\item We distinguish two cases depending on whether $\Psi_t(S')=\emptylist$. If $\Psi_t(S')=\emptylist$ then the left-hand side of the equation is
$\Psi_t\big(\tifthen{S'}{(\Psi_t(S) \combb E)}\big)=\emptylist$ by the definition of $\Psi$,
and the right-hand side is $\Psi_t\big(\tifthen{S'}{\Psi_t(S)}\big) \combb E = \emptylist \combb E = \emptylist$.
If $\Psi_t(S') \neq \emptylist$ then the left-hand side of the equation is \\ $\Psi_t\big(\tifthen{S'}{(\Psi_t(S) \combb E)}\big)=\Psi_t(\Psi_t(S) \combb E)$, which is equal to $\Psi_t(S) \combb E$
since $\Psi_t(S) \combb E$ is a position-based \ce, by Item \ref{Properties-of-Psi:Lemma:item:0} of Lemma \ref{nice:prop:Psi:lemma}.
And the right-hand side is $\Psi_t\big(\tifthen{S'}{\Psi_t(S)}\big) \combb E=\Psi_t(\Psi_t(S)) \combb E$, which is equal to $\Psi_t(S) \combb E$ by the same item.
\end{enumerate}
\end{proof}
We are ready to show the main result of this section, namely that the unification of fixed-point free \ces is correct.
\begin{lemma}
\label{main:lemma:unif:fixed-point-free}
For every term $t \in \mycal{T}$ and for every fixed-point free \ces $S$ and $R$ in $\ceSet$,
we have that
\begin{align}
\label{main:lemma:unif:fixed-point-free:eq}
\Psi_t(S \combb R) & = \Psi_t(S) \combb \Psi_t(R)
\end{align}
\end{lemma}
\begin{proof}
The proof is by induction on the pair $(\delta(S),\delta(R))$ where $\delta(S)$ (resp. $\delta(R)$) is the depth of $S$ (resp. $R$).
\begin{description}
\item \textbf{Base case}. If $(\delta(S),\delta(R))=(0,0)$ then $S=\emptylist$ or $S=@\varepsilon.\tau$, and $R=\emptylist$ or $R=@\varepsilon.\tau'$.
In this case the proof is trivial since $\Psi_t(S) =S$ and $\Psi_t(R) =R$.
\item \textbf{Induction step}. We assume that Eq.(\ref{main:lemma:unif:fixed-point-free:eq}) holds for every pair of fixed-point free \ces whose pair of depths is lexicographically smaller than $(\delta(S),\delta(R))$, and we distinguish the following cases.
\begin{enumerate}
\item If $S=u;S'$ and $R$ is arbitrary then
\begin{align}
\Psi_t(S \combb R) & = \Psi_t((u;S') \combb R) \notag \\
& = \Psi_t\big(u;(S' \combb R)\big) \tag{Item \ref{pattern:ext:1} of Def. \ref{reduction:unif:def} of $\combb$} \\
& = \Psi_t\big(u;\Psi_t(S' \combb R)\big) \tag{Item \ref{Properties-of-Psi:Lemma:item:1} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t\big(u; \big(\Psi_t(S') \combb \Psi_t(R)\big)\big) \tag{Ind. hypothesis since $\delta(S)=\delta(S')+1$} \\
&= \Psi_{t}(u; S') \combb \Psi_t(R) \tag{Item \ref{psi:unif-congruence:Lemma:item:1} of Lemma \ref{psi:unif-congruence:Lemma}} \\
& = \Psi_t(S) \combb \Psi_t(R) \tag{Def. of $S$}
\end{align}
\item If $S=S' \oplus S''$ and $R$ is arbitrary then
\begin{align}
\Psi_t(S \combb R) &= \Psi_t((S' \oplus S'') \combb R) \notag \\
&= \Psi_t\big((S' \combb R) \oplus (S'' \combb R)\big) \tag{Item \ref{choice:ext:1} of Def. \ref{reduction:unif:def} of $\combb$} \\
&= \Psi_t\big(\Psi_t(S' \combb R) \oplus \Psi_t(S'' \combb R)\big) \tag{Item \ref{Properties-of-Psi:Lemma:item:2} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t\big(\big(\Psi_t(S') \combb \Psi_t(R)\big) \oplus \big(\Psi_t(S'') \combb \Psi_t(R)\big)\big) \tag{Ind. hypothesis since $\delta(S'),\delta(S'')<\delta(S)$} \\
&= \Psi_t\big(\big(\Psi_t(S') \oplus \Psi_t(S'') \big) \combb \Psi_t(R)\big) \tag{Def. of $\combb$} \\
&= \Psi_t(S' \oplus S'') \combb \Psi_t(R) \tag{Item \ref{psi:unif-congruence:Lemma:item:2} of Lemma \ref{psi:unif-congruence:Lemma}} \\
&= \Psi_t(S) \combb \Psi_t(R) \tag{Def. of $S$}
\end{align}
\item If $S=\tifthen{S'}{S''}$ and $R$ is arbitrary then
\begin{align}
\Psi_t(S \combb R) &= \Psi_t((\tifthen{S'}{S''}) \combb R) \notag \\
&= \Psi_t(\tifthen{S'}{(S''\combb R)}) \tag{Item \ref{if:ext:1} Def. \ref{reduction:unif:def} of $\combb$} \\
&= \Psi_t\big(\tifthen{S'}{\Psi_t((S''\combb R))}\big) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t\big(\tifthen{S'}{\big(\Psi_t(S'')\combb \Psi_t(R)\big)}\big) \tag{Ind. hypothesis} \\
&= \Psi_t\big(\tifthen{S'}{\Psi_t(S'')}\big) \combb \Psi_t(R) \tag{Item \ref{psi:unif-congruence:Lemma:item:3} of Lemma \ref{psi:unif-congruence:Lemma}} \\
&= \Psi_t\big(\tifthen{S'}{S''}\big) \combb \Psi_t(R) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}} \\
& = \Psi_t(S) \combb \Psi_t(R) \tag{Def. of $S$}
\end{align}
\item \label{proof:lemma:unif:free-fp:item:list:list} If $S=\bigand_{i \in I} @i.S_i$ and $R=\bigand_{j \in J} @j.R_j$ then let
\begin{align*}
M_1 &=\bigand_{i \in I\setminus J} @i.S_i &\tand&& M_2 &=\bigand_{j \in J\setminus I} @j.R_j, \\
M^{\star}_1 &=\bigand_{i\in I \setminus J}@i.\Psi_{t_{|i}}(S_i) &\tand&& M^{\star}_2 &=\bigand_{j\in J \setminus I}@j.\Psi_{t_{|j}}(R_j)
\end{align*}
and the left-hand side of Eq.(\ref{main:lemma:unif:fixed-point-free:eq}) can be written as
\begin{align}
\textrm{LH}.\ref{main:lemma:unif:fixed-point-free:eq}
&=\Psi_t( S \combb R) \notag \\
&= \Psi_t\big(\tifthen{S\&R}{\bigand_{i \in I \cap J} @i.(S_i \combb R_i \oplus S_i \oplus R_i) \uand M_1 \uand M_2}\big) \tag{Item \ref{list:ext} of Def. \ref{reduction:unif:def} of $\combb$}\\
&= \Psi_t\Big(\tifthen{S\&R}{\Psi_t\big(\bigand_{i \in I \cap J} @i.(S_i \combb R_i \oplus S_i \oplus R_i) \uand M_1 \uand M_2\big)}\Big) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}}\\
&= \Psi_t\bigg(\tifthen{S\&R}{\theta\Big(\bigand_{i \in I \cap J} @i.\Psi_{t_{|i}}(S_i \combb R_i\oplus S_i \oplus R_i) \uand M^{\star}_1 \uand M^{\star}_2 \Big)}\bigg) \tag{Item \ref{psi:def:item:and} of Def. \ref{psi:def} of $\Psi(\bigand(\cdot))$}\\
&= \Psi_t\bigg(\tifthen{S\&R}{\bigand_{i \in I \cap J} @i.\Psi_{t_{|i}}(S_i \combb R_i\oplus S_i \oplus R_i) \uand M^{\star}_1 \uand M^{\star}_2 }\bigg) \tag{Item \ref{Properties-of-Psi:Lemma:item:3'} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t\bigg(\tifthen{S\&R}{\bigand_{i \in I \cap J} @i.\big(\Psi_{t_{|i}}(S_i) \combb \Psi_{t_{|i}}(R_i) \oplus \Psi_{t_{|i}}(S_i) \oplus \Psi_{t_{|i}}(R_i) \big) \uand M^{\star}_1 \uand M^{\star}_2 }\bigg) \tag{Ind. hyp.}\\
&=\Psi_t\bigg(\bigand_{i\in I} @i.\Big(\Psi_{t_{|i}}(S_i)\Big) \combb \bigand_{j\in J} @j.\Big(\Psi_{t_{|j}}(R_j) \Big) \bigg) \tag{Item \ref{list:ext} of Def. \ref{reduction:unif:def} of $\combb$}\\
&=\Psi_t\bigg(\theta\Big(\bigand_{i\in I} @i.\Psi_{t_{|i}}(S_i)\Big) \combb \theta\Big(\bigand_{j\in J} @j.\Psi_{t_{|j}}(R_j)\Big) \bigg) \tag{Lemma \ref{normalization:of:unif:Lemma}}\\
&=\Psi_t\bigg(\Psi_{t}\Big(\bigand_{i\in I} @i.S_i \Big) \combb \Psi_{t}\Big(\bigand_{j\in J} @j.R_j \Big)\bigg) \tag{Item \ref{psi:def:item:and} of Def. \ref{psi:def} of $\Psi(\bigand\cdot)$}\\
&=\Psi_{t}\Big(\bigand_{i\in I} @i.S_i \Big) \combb \Psi_{t}\Big(\bigand_{j\in J} @j.R_j \Big) \tag{Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t(S) \combb \Psi_t(R) \tag{Def. of $S$ and $R$}
\end{align}
\item If $S=\most(S')$ and $R=\most(R')$ then assume that $t$ is neither a constant nor a variable, i.e. $\delta(t)\ge 2$; the case when $\delta(t)=1$ is trivial since both sides of the equation are equal to $\emptylist$.
In this case we rewrite $\most(\cdot)$ as $\bigand_i(\cdot)$ and we apply Item \ref{proof:lemma:unif:free-fp:item:list:list} of this proof.
Let
\begin{align*}
S^{\star}=\bigand_{i=1,ar(t)} @i.S' &\tand && R^{\star}=\bigand_{i=1,ar(t)} @i.R',
\end{align*}
and notice that $\Psi_t(S^{\star})=\Psi_t(S)$ and $\Psi_t(R^{\star})=\Psi_t(R)$.
Hence
\begin{align}
\Psi_t(S \combb R) &= \Psi_t \big(\most(S') \combb \most(R')\big) \notag \\
&= \Psi_t\big( \tifthen{(S\&R)}{\big(\most\big((S'\combb R') \oplus S' \oplus R'\big)\big)}\big) \tag{Def. \ref{reduction:unif:def} of $\combb$} \\
&= \Psi_t \big(\tifthen{(S\&R)}{\Psi_t\big(\most\big((S'\combb R') \oplus S' \oplus R'\big)\big)} \big) \tag{Item \ref{Properties-of-Psi:Lemma:item:3} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t \big(\tifthen{(S^{\star}\&R^{\star})}{\Psi_t\big(\most\big((S'\combb R') \oplus S' \oplus R'\big)\big)} \big) \tag{Item \ref{Properties-of-Psi:Lemma:item:3'''} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t \bigg(\tifthen{(S^{\star}\&R^{\star})}{\Psi_t \Big(\bigand_{i=1,ar(t)} @i.\big((S'\combb R') \oplus S' \oplus R' \big)\Big)} \bigg) \tag{Item \ref{psi:def:item:most} of Def. \ref{psi:def} of $\Psi(\most(\cdot))$} \\
&= \Psi_t \bigg(\tifthen{(S^{\star}\&R^{\star})}{\bigand_{i=1,ar(t)} @i.\big((S'\combb R') \oplus S' \oplus R' \big)} \bigg) \tag{Item \ref{Properties-of-Psi:Lemma:item:3'} of Lemma \ref{nice:prop:Psi:lemma}} \\
&= \Psi_t \Big(\bigand_{i=1,ar(t)} @i.S' \combb \bigand_{i=1,ar(t)} @i.R'\Big) \tag{Item \ref{list:ext} of Def. \ref{reduction:unif:def} of $\combb$ in which $I=J=\{1,\ldots,ar(t)\}$} \\
&= \Psi_t \Big(\bigand_{i=1,ar(t)} @i.S'\Big) \combb \Psi_t \Big(\bigand_{i=1,ar(t)} @i.R'\Big) \tag{Item \ref{proof:lemma:unif:free-fp:item:list:list} of this proof} \\
&= \Psi_t \big(\most(S')\big) \combb \Psi_t\big(\most(R')\big) \tag{Item \ref{psi:def:item:most} of Def. \ref{psi:def} of $\Psi(\most(\cdot))$}\\
&= \Psi_t(S) \combb \Psi_t(R) \tag{Def. of $S$ and $R$}
\end{align}
\end{enumerate}
\end{description}
\end{proof}
\section{Proof of the correctness of the unification of \ces: the general setting}
\label{correction:unif:general:setting:sec}
\subsection{Termination and confluence of the unification reduction system}
\label{termination:confluence:reduction:unif:sec}
To show the termination of the reduction system $\Unif$
we need to define a measure on the tuples that strictly decreases with each derivation rule.
Notice that all the reduction rules strictly decrease the size of one or both of the left-hand side \ces, except
the fixed-point rules (\ref{fixed:ext:1}) and (\ref{fixed:ext:2}), which can replace $\mu X.S(X)$ with $S(\mu X.S(X))$, which is larger than $\mu X.S(X)$.
On the other hand, these fixed-point rules increase the size of the memory, because the right-hand side memory is augmented with $(\mu X.S(X),R,\cdot)$.
Since the size of any memory related to two fixed \ces is bounded, to ensure the termination of $\Unif$ we define a measure that
couples the difference between such a bound and the size of the memory with the size of the \ces.
For this reason we define:
\begin{definition}
Let $S$ and $R$ be \ces, and let $\Eu{M}$ be a memory in $\mathfrak{M}(S,R)$.
\begin{align*}
\Lambda(S,R,\Eu{M}) := |\Phi_{\mu}(S)|\cdot|\Phi(R)| + |\Phi(S)|\cdot|\Phi_{\mu}(R)| - |\Eu{M}|
\end{align*}
and define the measure $(\Lambda(S,R,{\Eu{M}}),\Delta(S),\Delta(R))$.
\end{definition}
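For illustration, the measure is straightforward to compute once $|\Phi(\cdot)|$, the number of sub-\ces, and $|\Phi_{\mu}(\cdot)|$, the number of fixed-point sub-\ces, are available. The Haskell sketch below, over an illustrative data type, is an assumption-laden transcription rather than the formal definition of $\Phi$ and $\Phi_{\mu}$.
\begin{verbatim}
data CE = Fail | Var String | Ins String | Pat String CE
        | At Int CE | Choice CE CE | Most CE | IfThen CE CE
        | Mu String CE

children :: CE -> [CE]
children (Pat _ s)     = [s]
children (At _ s)      = [s]
children (Most s)      = [s]
children (Mu _ s)      = [s]
children (Choice s s') = [s, s']
children (IfThen c s)  = [c, s]
children _             = []        -- Fail, Var, Ins

phi, phiMu :: CE -> Int
phi s   = 1 + sum (map phi (children s))         -- |Phi(S)|
phiMu s = here + sum (map phiMu (children s))    -- |Phi_mu(S)|
  where here = case s of
                 Mu _ _ -> 1
                 _      -> 0

-- Lambda(S, R, M), with memSize = |M| the number of memory entries.
lambdaMeasure :: CE -> CE -> Int -> Int
lambdaMeasure s r memSize = phiMu s * phi r + phi s * phiMu r - memSize
\end{verbatim}
Triples in Haskell also compare lexicographically, so the full measure $(\Lambda(S,R,\Eu{M}),\Delta(S),\Delta(R))$ could be compared with the built-in ordering on \texttt{(Int,(Int,Int),(Int,Int))}.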
\begin{proposition}
The unification reduction system $\Unif$ enjoys the following properties.
\begin{enumerate}
\item The reduction system $\Unif$ is terminating and confluent.
\item The normal form of a pre-\ce with respect to $\Unif$ is a \ce in $\mycal{C}$ (i.e. the normal form does not contain tuples).
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item The termination is guaranteed by the fact that each reduction rule strictly decreases the measure \\ $(\Lambda(S,R,{\Eu{M}}),\Delta(S),\Delta(R))$ with respect to the lexicographic order.
The confluence is guaranteed by the priority ordering of the rules, which makes the choice of the applicable rule deterministic.
\item Each rule either advances inside the \ces of the tuple in its left-hand side, or reduces that left-hand side into a \ce; hence no tuple remains in a normal form.
\end{enumerate}
\end{proof}
From now on we shall simply write $\NF$ instead of $\NF_\Unif$ for the normal form. We show next in Lemma \ref{position:cross:in:generated:ces:lemma} a useful property of the unification
of monotonic \ces: if the same fixed-point \ce appears twice in a derivation, then this derivation produces a $@i.(\cdot)$ or a $\most(\cdot)$.
More generally, if the same fixed-point \ce appears $k$ times in a derivation, then this derivation produces at least $k$ times a $@i.(\cdot)$ or a $\most(\cdot)$.
\begin{lemma}
\label{position:cross:in:generated:ces:lemma}
Let $\mu X.S(X)$ and $R$ be \ces. If, for $i=0,\ldots, k-1$, there are \ces $R_i$, memories $\Eu{M}_i \in \mathfrak{M}$, pre-\ces $P_i(Z_i)$, and
derivations
\begin{align*}
\tuple{\mu X.S(X),R_i,\Eu{M}_i} \xreduces{\star} P_{i+1}[\tuple{\mu X.S(X),R_{i+1},\Eu{M}_{i+1}}],
\end{align*}
then the path from the root of $P_{k}(Z_k)$ to $Z_k$ crosses at least $k$ times either a position or a $\most(\cdot)$, that is,
\begin{align*}
\Pi_{Z_k}(P_k(Z_k)) \ge k.
\end{align*}
\end{lemma}
\begin{proof}
Recall that $\mu X.S(X)$ is monotonic by assumption, that is, the path from the root of $\mu X.S(X)$ to $X$ passes through a position or $\most$.
This implies that, for any $i=0,\ldots,k-1$, there exist \ces $S'$ and $R'$, and a memory $\Eu{M}'$, and a tuple $P'(Z')$, and a derivation
\begin{align*}
\tuple{\mu X.S(X),R_i,\Eu{M}_i} \xreduces{\star} P'[\tuple{S',R',\Eu{M}'}] \xreduces{\star} P_{i+1}[\tuple{\mu X.S(X),R_{i+1},\Eu{M}_{i+1}}]
\end{align*}
where $S'$ is either of the form $S'=\bigand_i @i.S'_i$ or $S'=\most(S'')$. This implies that one of the rules (\ref{list:ext'}), (\ref{list:ext}), (\ref{most:ext:1}), (\ref{most:ext:2}), (\ref{most:ext:3}) is applied in the
derivation from $P'[\tuple{S',R',\Eu{M}'}]$ to $P_{i+1}[\tuple{\mu X.S(X),R_{i+1},\Eu{M}_{i+1}}]$,
each of which produces a $@i.(\cdot)$ or a $\most(\cdot)$.
\end{proof}
An immediate consequence of the previous Lemma \ref{position:cross:in:generated:ces:lemma} is the following Corollary.
\begin{corollary}
\label{monotonic:unif:corolarry}
The unification of two monotonic \ces is a monotonic \ce.
\end{corollary}
\subsection{Sequence of fixed-points, Covering of fixed-point free \ces and $(\ceSet,\ceSetFree)$-morphism }
\begin{definition}[Fixed-point tree and fixed-point sequence of a \ce]
Let $S$ be a \ce in which each fixed-point variable appears once.
A \emph{sequence} $M_1,\ldots,M_m$ is a sequence of \ces where each $M_i$ is a fixed-point \ce in $\Phi_{\mu}(S)$ and each $M_{i+1}$ is a sub-\ce of $M_i$.
Besides, this sequence is \emph{maximal} if there is no fixed-point \ce $M$ such that $M_{i+1}$ is a sub-\ce of $M$ and $M$ is a sub-\ce of $M_i$, for any $i=1,\ldots,{m-1}$.
\end{definition}
Notice that the pair $(\Phi_{\mu}(S),\sqsubset)$ is a partial order; in fact, it is a tree.
\begin{definition}[Covering of fixed-point free \ces]
Let $S$ and $S'$ be fixed-point free \ces. We say that $S$ is covered by $S'$ if there are \ces $R(X_1,\ldots,X_k), R_1,\ldots,R_k$, where $R_i \neq \emptylist$ for $i=1,\ldots,k$, such that $S$ and $S'$ can be written as
\begin{align*}
S=R(\emptylist,\ldots,\emptylist) && S'=R(R_1,\ldots,R_k).
\end{align*}
\end{definition}
For example, $S(\emptylist)$ is covered by $S(S(\emptylist))$ by letting $R(X_1)=S(X_1)$ and $R_1=S(\emptylist)$.
\begin{lemma}
\label{sem:equiv:and:rank}
Let $S,S',R(X_1,\ldots,X_k)$, $R_1,\ldots,R_k$ be \ces.
Assume that $S$ is covered by $S'$ with $S=R(\emptylist,\ldots,\emptylist)$ and $S'=R(R_1,\ldots,R_k)$, and let
$m = \min\{ \Pi_{X_i}(R(X_1,\ldots,X_k)) \gvert i=1,\ldots,k \}$,
then
\begin{align*}
S \equiv_{m} S'.
\end{align*}
\end{lemma}
\begin{definition}[$(\ceSet,\ceSetFree)$-morphisms]
\label{C-C0-morphism:def}
There is a $(\ceSet,\ceSetFree)$-morphism from $S$ to $R$ if and only if one of the following cases holds:
\begin{enumerate}
\item either both $S$ and $R$ are fixed-point free and in this case they are equal, or
\item $S$ is not a fixed-point \ce and in this case there are fixed-point free \ces $S'(X_1,\ldots,X_m)$ and $R'(X_1,\ldots,X_m)$, and \ces $S_1,\ldots,S_m$ and $R_1,\ldots,R_m$, with $m\ge 1$, such that
$S=S'(S_1,\ldots,S_m)$ and $R=R'(R_1,\ldots,R_m)$ and there is a $(\ceSet,\ceSetFree)$-morphism from every $S_i$ to $R_i$, for $i=1,\ldots,m$. Or,
\item $S$ is a fixed-point \ce, say $\mu X.S'(X)$, and in this case either
\begin{enumerate}
\item $R=\emptylist$, or
\item there is a $(\ceSet,\ceSetFree)$-morphism from $S'(\mu X.S'(X))$ to $R$.
\end{enumerate}
\end{enumerate}
\end{definition}
In other words, the Definition \ref{C-C0-morphism:def} can be stated in terms of the inference rules of Table \ref{Inference Rules}.
\begin{table}[H]
\centering
\fbox{
\parbox{8cm}{
\begin{align*}
&\infer[S,R \in \ceSetFree]{S \morphy R}{S=R}
&&&
&\infer[S(X_1,\ldots,X_m) \in \ceSetFree]{S(S_1,\ldots,S_m) \morphy S(R_1,\ldots,R_m)}{S_i \morphy R_i} \\
&&&&& \\
&\infer{\mu X.S(X) \morphy \emptylist}{}
&&&
&\infer{\mu X.S(X) \morphy R}{S(\mu X.S(X)) \morphy R}
\end{align*}
}}
\caption{Inference rules for $(\ceSet,\ceSetFree)$-morphisms.}
\label{Inference Rules}
\end{table}
The following claims are not hard to prove.
\begin{remark}\label{ufold:mophism:rq}
For any \ce $S$ with bound fixed-point variables $X_1,\ldots, X_s$, $s\ge 0$, any mapping $\mathbf{s}: \set{X_1,\ldots, X_s}\to \mathbb{N}$, and any \ce $M(Z)$, the following hold:
\begin{enumerate}
\item \label{ufold:mophism:rq:item:1} There is a $(\ceSet,\ceSetFree)$-morphism from $S$ to $\ufold{S}{\mathbf{s}}$.
\item \label{ufold:mophism:rq:item:2} If there is a $(\ceSet,\ceSetFree)$-morphism from $S$ to $S'$, then there is a $(\ceSet,\ceSetFree)$-morphism from $M(S)$ to $M(S')$.
\item \label{ufold:mophism:rq:item:3} If there is a $(\ceSet,\ceSetFree)$-morphism from $S$ to $S'$ and if $\tilde{S}$ results from $S$ by the algorithm \ref{} that transforms a \ce into a \ce in which each fixed-point variable occurs once, then there is a $(\ceSet,\ceSetFree)$-morphism from $\tilde{S}$ to $S'$ as well.
\end{enumerate}
\end{remark}
\begin{definition}
For any $(\ceSet,\ceSetFree)$-morphism $\phi :\Phi(S) \rightarrow C$, define the mappings $\phi_{\mu}$ and $\phi_{\nu}$
\begin{align*}
\phi_{\mu}:\Phi_{\mu}(S) \rightarrow C &&\tand&& \phi_{\nu}: \boundv{S} \rightarrow C.
\end{align*}
as the restrictions of $\phi$ to $\Phi_{\mu}(S)$ and $\boundv{S}$, respectively.
\end{definition}
\subsection{The unification commutes with the $(\ceSet,\ceSetFree)$-morphisms}
We show in the following key lemma that there is a $(\ceSet,\ceSetFree)$-morphism between the \ce that results from the unification of two \ces
and the fixed-point free \ce that results from the unification of their related unfolding.
\begin{lemma}
\label{main:lemma:mophism}
Let $S$ and $R$ be \ces, and let $\Eu{M} \in \mathfrak{M}(S,R)$ be a memory with respect to $S$ and $R$.
The following diagram commutes
\[\begin{tikzcd}
\mycal{C} \times \mycal{C} \arrow{r}{\combb} \arrow[swap]{d}{\phi \times \phi} & \mycal{C} \arrow{d}{\phi} \\
\ceSetFree \times \ceSetFree \arrow{r}{\combb} & \ceSetFree
\end{tikzcd}
\]
That is, if there is a $(\ceSet,\ceSetFree)$-morphism from $S$ to $\tilde{S}$ and from $R$ to $\tilde{R}$ then
there is a $(\ceSet,\ceSetFree)$-morphism from $\NF(\tuple{S,R,\Eu{M}})$ to $\NF(\tuple{\tilde{S},\tilde{R},\emptyset})$.
\end{lemma}
\begin{proof}
Since $S \morphy \tilde{S}$ and $R \morphy \tilde{R}$, we distinguish three cases depending on $S$ and $R$:
\begin{enumerate}
\item If $S$ and $R$ are fixed-point free, then this case is trivial.
\item \label{case:2:lemma:unif:morphism} If $S$ and $R$ are of the form $S=S'(S_1,\ldots,S_m)$ and $R=R'(R_1,\ldots,R_m)$, then we make a simple induction on $S$ and/or $R$ by examining
the structure of $S$ and/or $R$.
\item If $S$ is a fixed-point \ce, $S=\mu X.S'(X)$, then $S$ is replaced by $S'(S)$ and we reduce this case to the case \ref{case:2:lemma:unif:morphism} above.
\end{enumerate}
We consider the measure $(\Lambda(S,R,F),\Delta(S),\Delta(R))$, together with the usual lexicographic order, according to which the morphism $\phi$ will be inductively constructed.
\begin{itemize}
\item \underline{Base case $(\Lambda(S,R,F),\Delta(S),\Delta(R))=(0,(0,0),(0,0))$}. In this case each of $S$ and $R$ is either the fail or an insertion at a position.
This case is trivial since the two \ces resulting from the unification are equal.
\item \underline{Induction Step}. Assume that there is a $(\ceSet,\ceSetFree)$-morphism
$\NF(\tuple{S',R',\Eu{M}'}) \morphy \NF(\tuple{\ufold{S'}{\mathbf{s}'},\ufold{R'}{\mathbf{r}'},\emptyset})$
for \ces $S'$ and $R'$, a memory $\Eu{M}'$, and mappings $\mathbf{r}'$ and $\mathbf{s}'$ with $(\Lambda(S',R',F'),\Delta(S'),\Delta(R'))>0$,
and we will construct a morphism $\phi$ for any \ces $S$ and $R$, and any memory $\Eu{M}$, and any mappings $\mathbf{r}$ and $\mathbf{s}$ with
$(\Lambda(S',R',F'),\Delta(S'),\Delta(R')) < (\Lambda(S,R,F),\Delta(S),\Delta(R))$.
We make a structural induction on $S$, and on $R$ if necessary, by distinguishing six cases.
\begin{enumerate}[(1)]
\item \label{u:case:lemma:structure} \underline{If $S=u;S'$} then $\ufold{S}{\mathbf{s}}= u; \ufold{S'}{\mathbf{s}}$ and therefore
\begin{align*}
\tuple{S,R,\Eu{M}} &\reduces u;\tuple{S',R,\Eu{M}}, \\
\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset} &\reduces u; \tuple{\ufold{S'}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset}.
\end{align*}
Since $S'$ is a sub-\ce of $S$, or more precisely $1+\delta(S')=\delta(S)$ and $(0,1)+\Delta(S')=\Delta(S)$,
then it follows from the induction hypothesis that there exists a $(\ceSet,\ceSetFree)$-morphism \\
$ \NF(\tuple{S',R,\Eu{M}}) \morphy \NF(\tuple{\ufold{S'}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset})$.
Therefore we get the desired result.
\item \underline{If $S=S' \oplus S''$} then this case is similar to the case (\ref{u:case:lemma:structure}) above in which $S=u;S'$.
\item \label{most:case:lemma:structure} \underline{If $S=\most(S')$} then we make an induction on $R$, but we only discuss the case where
$R=\most(R')$ since the remaining cases are handled either by symmetry or in case \ref{last:case} below. In this case
\begin{align}
\label{most:morphism:eq:1}
\tuple{S,R,\Eu{M}} &= \tuple{\most(S'),\most(R'),\Eu{M}} \notag \\
& \reduces \mathbf{If} \,\big(\most(S')\, \& \,\most{(R')}\big) \, \mathbf{Then}\;
\most\big(\tuple{S',R',\Eu{M}} \oplus S' \oplus R' \big) \tag{Rule \ref{most:ext:1}} \\
& \tand \notag \\
\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset}
&= \tuple{\most(\ufold{S'}{\mathbf{s}}),\most(\ufold{R'}{\mathbf{r}}),\emptyset} \tag{Def. \ref{ufold:def} of the unfolding}\\
& \reduces \mathbf{If} \,\big(\most(\ufold{S'}{\mathbf{s}})\, \& \,\most{(\ufold{R'}{\mathbf{r}})}\big) \notag \\
&\;\;\;\;\; \mathbf{Then} \, \most\big(\tuple{\ufold{S'}{\mathbf{s}},\ufold{R'}{\mathbf{r}},\emptyset} \oplus \ufold{S'}{\mathbf{s}} \oplus \ufold{R'}{\mathbf{r}} \big)
\tag{Rule \ref{most:ext:1}}
\end{align}
Since the resulting \ces have the same "$\tifthen{\cdot}{\cdot}$" structure,
it follows from Item \ref{ufold:mophism:rq:item:2} of Remark \ref{ufold:mophism:rq}
that we need to show that there is a morphism from each part of the former to the corresponding part of the latter.
On the one hand, since $S'$ (resp. $R'$) is a sub-\ce of $S$ (resp. $R$), or more precisely $1+\delta(S')=\delta(S)$ and $(0,1)+ \Delta(S')=\Delta(S)$ (resp. $1+\delta(R')=\delta(R)$ and $(0,1)+ \Delta(R')=\Delta(R)$),
then it follows from the induction hypothesis that there is a $(\ceSet,\ceSetFree)$-morphism
$\NF(\tuple{S',R',\Eu{M}}) \morphy \NF(\tuple{\ufold{S'}{\mathbf{s}},\ufold{R'}{\mathbf{r}},\emptyset})$.
And on the other hand, it follows from Item \ref{ufold:mophism:rq:item:1} of the Remark \ref{ufold:mophism:rq} that there is a $(\ceSet,\ceSetFree)$-morphism from $S'$ (resp. $R'$) to $\ufold{S'}{\mathbf{s}}$ (resp. $\ufold{R'}{\mathbf{r}}$).
\item \underline{If $S=\tifthen{S'}{S''}$ or $S= @p.S' \uand \bigand_{i=1,k}@p_i.S_i$} then these cases are similar to the previous case \ref{most:case:lemma:structure} in which $S=\most(S')$ because
they both feature an induction on the input \ce and the existence of a $(\ceSet,\ceSetFree)$-morphism from a \ce to its unfolding.
\item \underline{If $S=\mu X.S'(X)$} then
\begin{align*}
\tuple{S,R,\Eu{M}} =\tuple{\mu X.S'(X),R,\Eu{M}} \reduces
\begin{cases}
\mu Z. \tuple{S'(S),R,\Eu{M}'} & \tif (S,R,\cdot) \notin \Eu{M}\\
Z & \tif (S,R,Z) \in \Eu{M}
\end{cases}
\end{align*}
where $Z = \fresh{S,R}$ and $\Eu{M}' = \Eu{M} \cup \set{(S,R,Z)}$, and
\begin{align*}
\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset} & = \tuple{\mu^{\mathbf{s}(X)} X. \ufold{S'(X)}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset} \\
& =
\begin{cases}
\tuple{\emptylist,\ufold{R}{\mathbf{r}},\emptyset} & \tif \mathbf{s}(X)=0\\
\tuple{\mu^{\mathbf{s}(X)} X. \ufold{S'(X)}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset} & \tif \mathbf{s}(X)>0
\end{cases} \\
\end{align*}
where $\mathbf{s}'(X)=\mathbf{s}(X)-1$ and $\mathbf{s}'(Y)=\mathbf{s}(Y)$ for $Y\neq X$.
If $\mathbf{s}(X)=0$ then this case is trivial.
If $\mathbf{s}(X)>0$ and $(S,R,Z) \in \Eu{M}$ then we let $\phi(Z)= \NF\big(\tuple{S'\big(\ufold{X}{\mathbf{s'}}\big),\ufold{R}{\mathbf{r}},\emptyset}\big)$.
If $\mathbf{s}(X)>0$ and $(S,R,\cdot) \notin \Eu{M}$ then there exists a fixed-point free \ce $\tilde{S}(X_1,\ldots,X_m)$ in $\ceSetFree$, with $m \ge 1$, such that
$S'(X)$ can be written as $S'(X)=\tilde{S}(S_1,\ldots,S_{m-1},X)$.
\\ Hence $\ufold{S}{\mathbf{s}}=\tilde{S}(\ufold{S_1}{\mathbf{s}},\ldots,\ufold{S_{m-1}}{\mathbf{s}},X)$. We then go back to the cases (1--4) above.
\item \label{last:case} The remaining cases, in which the unification depends on $R$, in particular the cases when
$R=\emptylist$ (Rule \ref{final:2}),
$R=u;R'$ (Rule \ref{pattern:ext:2}),
$R=R' \oplus R''$ (Rule \ref{choice:ext:2}),
$R=\tifthen{R'}{R''}$ (Rule \ref{if:ext:2}),
$R=\most(R')$ (Rule \ref{most:ext:2}), and
$R=\mu Y.R'(Y)$ (Rule \ref{fixed:ext:2}),
are similar to the cases discussed above in which the induction depends on $S$, since they can be treated by symmetry.
\end{enumerate}
\end{itemize}
This ends the proof of Lemma \ref{main:lemma:mophism}.
\end{proof}
As a consequence, the unification of two \ces is related, by a $(\ceSet,\ceSetFree)$-morphism, to the unification of their unfoldings.
\begin{corollary}
\label{main:lemma:unfolding}
Let $S$ and $R$ be \ces. Let $s$ (resp. $r$) be the number of fixed-point variables of $S$ (resp. $R$).
Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be mappings.
And let $\Eu{M} \in \mathfrak{M}(S,R)$ be a memory with respect to $S$ and $R$. The following diagram commutes:
\[\begin{tikzcd}
\mycal{C} \times \mycal{C} \arrow{r}{\combb} \arrow[swap]{d}{\ufold{\cdot}{\mathbf{s}} \times \ufold{\cdot}{\mathbf{r}}} & \mycal{C} \arrow{d}{\phi} \\
\ceSetFree \times \ceSetFree \arrow{r}{\combb} & \ceSetFree
\end{tikzcd}
\]
That is, there is a $(\ceSet,\ceSetFree)$-morphism
\begin{align*}
\NF(\tuple{S,R,\Eu{M}}) \morphy \NF(\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset})
\end{align*}
\end{corollary}
\begin{proof}
Immediate, since there is a $(\ceSet,\ceSetFree)$-morphism from any \ce to its unfolding by Item \ref{ufold:mophism:rq:item:1} of Remark \ref{ufold:mophism:rq}.
\end{proof}
\subsection{Properties of the $(\ceSet,\ceSetFree)$-morphisms}
\begin{lemma}
\label{properties:morphism:lemma}
Let $S$ and $R$ be \ces with bound fixed-point variables $\boundv{S}=\set{ X_1,\ldots,X_s}$ and $\boundv{R}=\set{Y_1,\ldots,Y_r}$.
Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be iteration mappings.
The $(\ceSet,\ceSetFree)$-morphism $\NF(\tuple{S,R,\emptyset}) \morphy \NF(\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset})$ constructed in the proof of Lemma \ref{main:lemma:mophism} has the following properties:
\begin{enumerate}
\item \label{properties:morphism:lemma:item:1}
For any fixed-point \ce $\mu Z.T(Z)$ in $\NF(\tuple{S,R,\emptyset})$, there exist \ces $\mu X.S'(X)$ and $R'$ and mappings $\mathbf{s}'$ and $\mathbf{r}'$ and a memory $\Eu{M}'$ such that
\begin{enumerate}
\item \label{properties:morphism:lemma:item:1:1} $\mu Z.T(Z)=\NF\big(\tuple{\mu X.S'(X),R',\Eu{M}'}\big)$ and
$\phi_{\mu}(\mu Z.T(Z))=\NF\big(\tuple{\ufold{\mu X.S'(X)}{\mathbf{s}'},\ufold{R'}{\mathbf{r}'},\emptyset}\big)$. Or,
\item \label{properties:morphism:lemma:item:1:2} $\mu Z.T(Z)=\NF\big(\tuple{R',\mu X.S'(X),\Eu{M}'}\big)$ and
$\phi_{\mu}(\mu Z.T(Z))=\NF\big(\tuple{\ufold{R'}{\mathbf{r}'},\ufold{\mu X.S'(X)}{\mathbf{s}'},\emptyset}\big)$.
\end{enumerate}
\item \label{properties:morphism:lemma:item:2} For any fixed-point sequence $\NF(\tuple{S,R,\emptyset}) \subcer \mu Z_1.T_1(Z_1) \subcer \cdots \subcer \mu Z_m.T_m(Z_m) \subcer Z_{m+1}$ with
\begin{align*}
\phi_{\mu}(\mu Z_i.T_i(Z_i))=\NF(\tuple{\ufold{\mu X_i.S_i(X_i)}{\mathbf{s}_i},R_i,\emptyset}) && \tand && \phi(Z_{m+1})=\NF(\tuple{\ufold{\mu X_{m+1}.S_{m+1}(X_{m+1})}{\mathbf{s}_{m+1}},R_{m+1},\emptyset}),
\end{align*}
for some mappings $\mathbf{s}_i$ and \ces $S_i \in \Phi(S),R_i \in \Phi(R)$, for $i=1,\ldots,m+1$, we have
\begin{align*}
\mathbf{s}_i(X) =
\begin{cases}
\mathbf{s}_{i-1}(X), & \tif X\neq X_j, \forall j=1,\ldots,i \\
\mathbf{s}_{i-1}(X_j)-1, & \tif X = X_j, \textrm{for some } j=1,\ldots,i
\end{cases}
\end{align*}
\item Same as Item \ref{properties:morphism:lemma:item:2} but with $\phi(\mu Z_i.T_i(Z_i))=\NF(\tuple{S_i,\ufold{\mu Y_i.R_i(Y_i)}{\mathbf{r}_i},\emptyset})$ and by replacing $X$ with $Y$.
\end{enumerate}
\end{lemma}
\begin{proof}
Items \ref{properties:morphism:lemma:item:1:1} (resp. \ref{properties:morphism:lemma:item:1:2}) of this Lemma follow immediately from case 1 (resp. 2) of the proof of Lemma \ref{main:lemma:mophism}, since
any fixed-point variable and any fixed-point \ce in the resulting \ce is sent by the morphism to a unification of two \ces where one of them is an iteration.
For Item \ref{properties:morphism:lemma:item:2}, the proof is by induction on $m$. If $m=1$ then it follows from Item \ref{properties:morphism:lemma:item:1:1} that there exists a maximal
sequence $\mu Z_1.T_1(Z_1) \subcer Z_{2}$ in $\NF(\tuple{S,R,\emptyset})$ with $\phi(\mu Z_1.T_1(Z_1))=\NF(\tuple{\mu^{\mathbf{s}_1}X_1.\ufold{S_1}{\mathbf{s}_1},R_1,\emptyset})$ and \\
$\phi(Z_{2})=\NF(\tuple{\mu^{\mathbf{s}_2}X_2.\ufold{S_2}{\mathbf{s}_2},R_2,\emptyset})$ for some mappings $\mathbf{s}_1$ and $\mathbf{s}_2$, and \ces $S_1 \in \Phi(S),R_1 \in \Phi(R),S_2 \in \Phi(S)$, and $R_2\in \Phi(R)$.
We distinguish two cases depending on whether $X_1=X_2$. If $X_1 \neq X_2$ then $S_1 \neq S_2$ and in this case $\mathbf{s}_2(X_1)=\mathbf{s}_1(X_1)-1$ since
$\ufold{\mu X_1.S_1(X_1)}{\mathbf{s_1}} = \mu^{\mathbf{s_1}(X_1)} \tilde{S_1}(\ufold{\mu X_1.S_1(X_1)}{\mathbf{s'_1}})$ with $\mathbf{s'_1}(X_1)=\mathbf{s_1}(X_1)-1$ and $\mathbf{s'_1}(X)=\mathbf{s_1}(X)$ if $X\neq X_1$,
and $\mathbf{s}_2(X_2)=\mathbf{s}_1(X_2)$. If $X_1 = X_2$ then $S_1 = S_2$ and in this case $\mathbf{s}_2(X_1)=\mathbf{s}_1(X_1)-1$ for the same reason above.
\end{proof}
\subsection{The unification commutes with the covering maps}
In this subsection we let $S$ and $R$ be \ces with bound fixed-point variables $\boundv{S}=\set{ X_1,\ldots,X_s}$ and $\boundv{R}=\set{Y_1,\ldots,Y_r}$.
Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{X_1,\ldots,X_r\} \to \mathbb{N}$ be iteration mappings.
\begin{lemma}
\label{increasing:unfolding:lemma}
Let $\mu X_k.S(X_k)$ be a \ce with fixed-point variables $X_1,\ldots, X_s$ and $1 \le k \le s$.
Assume that $\mathbf{s}(X_k) \ge 1$.
There exists a \ce $\tilde{S}$ such that
\begin{align*}
\ufold{\mu X_k.S(X_k)}{\mathbf{s}} = \ufold{\tilde{S}}{\varpi(k,\mathbf{s})}
\end{align*}
\end{lemma}
\begin{proof}
This follows easily by examining the structure of $S(X_k)$ case by case.
\end{proof}
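For instance (under the assumption that $\varpi(k,\mathbf{s})$ denotes the iteration mapping obtained from $\mathbf{s}$ by decreasing its value at $X_k$ by one), take $S(X_k)=u;X_k$. One unfolding step gives
\begin{align*}
\ufold{\mu X_k.(u;X_k)}{\mathbf{s}} = u;\ufold{\mu X_k.(u;X_k)}{\varpi(k,\mathbf{s})} = \ufold{\tilde{S}}{\varpi(k,\mathbf{s})},
\qquad \textrm{with} \quad \tilde{S} = u;\mu X_k.(u;X_k).
\end{align*}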
\begin{lemma}
\label{simul:reduce:lemma}
For any fixed-point free \ce $M(X_1,\ldots,X_k)$, where $X_1,\ldots,X_k$ are fixed-point variables, and any tuples $T_1,\ldots,T_k$,
if $T_i \reduces C_i$, for $i=1,\ldots,k$ then there exist a fixed-point free \ce $M'(X_1,\ldots,X_{k'})$, where $X_1,\ldots,X_{k'}$ are fixed-point variables, and
tuples $T'_1,\ldots,T'_{k'}$ such that
\begin{align*}
M(C_1,\ldots,C_k) = M'(T'_1,\ldots,T'_{k'})
\end{align*}
\end{lemma}
\begin{proof}
The proof is by structural induction on $M$.
\begin{itemize}
\item If $M=X$ then $M(T_1)=T_1$; in this case assume that $T_1 \reduces C_1$ and we examine all the possible structures of $C_1$:
\begin{enumerate}
\item If $C_1$ is terminal then we let $M':=C_1$.
\item If $C_1=u;T'_1$ then we let $M'(X):=u;X$.
\item If $C_1=\tifthen{S}{T'_1}$ then we let $M'(X):= \tifthen{S}{X}$.
\item If $C_1=T'_1 \oplus T'_2$ then we let $M'(X_1,X_2):= X_1 \oplus X_2$.
\item If $C_1=\most(T'_1)$ then we let $M'(X_1):= \most(X_1)$.
\end{enumerate}
\item If $M=u;M''$, $M=\tifthen{S}{M''}$, $M=M''_1 \oplus M''_2$, or $M=\most(M'')$, then the claim follows by applying the induction hypothesis to the immediate sub-\ce(s) of $M$ and recomposing the results with the outermost constructor of $M$.
\end{itemize}
\end{proof}
In the following Lemma \ref{comparing:unif:unfolding:lemma} and Corollary \ref{comparing:unif:unfolding:corollary} we use the following definitions:
let $S$ (resp. $R$) be a \ce with fixed-point variables $X_1,\ldots, X_s$ (resp. $Y_1,\ldots, Y_r$).
Let $\mathbf{s}_0^0: \{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}_0^0:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be mappings.
Let $v: \set{1,\ldots,s} \rightarrow \mathbb{N}$ and $w: \set{1,\ldots,r} \rightarrow \mathbb{N}$ be positive functions together
with their induced iteration mappings $\hat{v}$ and $\hat{w}$ defined by $(\hat{v}(\mathbf{s}))(X_i)=\mathbf{s}(X_i)-v(i)$ and $(\hat{w}(\mathbf{r}))(Y_j)=\mathbf{r}(Y_j)-w(j)$
for any mappings $\mathbf{s}: \{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}: \{Y_1,\ldots,Y_r\} \to \mathbb{N}$ and $i=1,\ldots,s$ and $j=1,\ldots,r$.
\begin{lemma}
\label{comparing:unif:unfolding:lemma}
There exist $l \ge 0$ and fixed-point free \ces $M_1,\ldots,M_l, M(Z_1,\ldots,Z_l)$, where each $Z_i$ is a fixed-point variable, such that
\begin{align*}
M(\emptylist,\ldots,\emptylist) &= \ufold{S}{\hat{v}(\mathbf{s}_0^0)}\combb \ufold{R}{\hat{w}(\mathbf{r}_0^0)} \\
M(M_1,\ldots,M_l) &= \ufold{S}{\mathbf{s}_0^0}\combb \ufold{R}{\mathbf{r}_0^0}
\end{align*}
\end{lemma}
\begin{proof}
We relate the derivations starting from $\tuple{\ufold{S}{\hat{v}(\mathbf{s}_0^0)},\ufold{R}{\hat{w}(\mathbf{r}_0^0)}}$
to the ones starting from \\ $\tuple{\ufold{S}{\mathbf{s}_0^0},\ufold{R}{\mathbf{r}_0^0}}$.
We shall prove that for any $m\ge 0$, there exist a function $g:\mathbb{N} \rightarrow \mathbb{N}$ and a fixed-point free \ce $M_m(Z_1,\ldots,Z_{g(m)})$
with free fixed-point variables $Z_1,\ldots,Z_{g(m)}$, such that for any derivation
\begin{align*}
\tuple{\ufold{S}{\hat{v}(\mathbf{s}_0^0)},\ufold{R}{\hat{w}(\mathbf{r}_0^0)}} &\usimreduces{m} M_m[C_1^m,\ldots,C_{g(m)}^m] \\
\textrm{ there is a derivation } &\\
\tuple{\ufold{S}{\mathbf{s}_0^0},\ufold{R}{\mathbf{r}_0^0}} &\usimreduces{m} M_m[D_1^m,\ldots,D_{g(m)}^m] \\
\textrm{ such that } &\\
C_i^m &=\tuple{\ufold{S_i^m}{\hat{v}(\mathbf{s}_{i}^m)},\ufold{R_i^m}{\hat{w}(\mathbf{r}_i^m)}} \\
D_i^m &=\tuple{\ufold{S_i^m}{\mathbf{s}_i^m},\ufold{R_i^m}{\mathbf{r}_i^m}}
\end{align*}
We make an induction on $m$.
\underline{Base case}: If $m=0$ then the claim holds trivially by letting $M_0$ be a fixed-point variable.
\underline{Induction step}: Assume that the claim holds for $m$ and we shall prove it for $m+1$.
Let
\begin{align*}
M_{m}(C_1^m,\ldots,C_{g(m)}^m) \simreduces M_{m+1}(C_1^{m+1},\ldots,C_{g(m+1)}^{m+1})
\end{align*}
We make an induction on $\big(\Im(\hat{v}(\mathbf{s}_i^m)),\Im(\hat{w}(\mathbf{r}_i^m)),\delta(S_i^m),\delta(R_i^m)\big)$.
We distinguish many cases depending on $C_i^m$, which amounts to discuss the structure of $S_{i}^m$ and/or $R_{i}^m$:
\begin{enumerate}
\item The case when $S_{i}^m=\emptylist$ is trivial.
\item If $S_{i}^m= u;S_{i}^{m+1}$ then $\ufold{S_i^m}{\hat{v}(\mathbf{s}_{i}^m)}=u;\ufold{S_i^{m+1}}{\hat{v}(\mathbf{s}_{i}^m)}$ and therefore
\begin{align*}
C_i^m &= \tuple{ u ;\ufold{S_{i}^{m+1}}{\hat{v}(\mathbf{s}_{i}^m)},\ufold{R_i^m}{\hat{w}(\mathbf{r}_{i}^m)}} \\
& \reduces u ; \underbrace{\tuple{\ufold{S_{i}^{m+1}}{\hat{v}(\mathbf{s}_{i}^m)},\ufold{R_i^m}{\hat{w}(\mathbf{r}_{i}^m)}}}_{C_i^{m+1}} \\
\textrm{and similarly} &\\
D_i^m &= \tuple{ u ;\ufold{S_{i}^{m+1}}{\mathbf{s}_{i}^m},\ufold{R_i^m}{\mathbf{r}_{i}^m}} \\
& \reduces u ; \underbrace{\tuple{\ufold{S_{i}^{m+1}}{\mathbf{s}_{i}^m},\ufold{R_i^m}{\mathbf{r}_{i}^m}}}_{D_i^{m+1}}
\end{align*}
and by Lemma \ref{simul:reduce:lemma}, there exists $M_{m+1}(Z_1,\ldots,Z_{g(m+1)})$ such that \\
$M_{m}(C_1^{m},\ldots,C_{g(m)}^{m})= M_{m+1}(C_1^{m+1},\ldots,C_{g(m+1)}^{m+1})$ and $M_{m}(D_1^{m},\ldots,D_{g(m)}^{m})= M_{m+1}(D_1^{m+1},\ldots,D_{g(m+1)}^{m+1})$.
\item The cases when $S_{i}^m$ is a "$\most(\cdot)$" or "$\tifthen{\cdot}{\cdot}$" or "$\bigwedge\cdot$" or "$\cdot \oplus \cdot$" are similar to the previous case.
\item If $S_i^m = \mu X_k.S_i^{m+1}(X_k)$, where $k \in \set{1,\ldots, s}$, then we distinguish two cases depending on $\mathbf{s}_i^m(X_k)$.
\begin{enumerate}
\item If $\mathbf{s}_i^m(X_k)=1$ and therefore $\hat{v}(\mathbf{s}_i^m)(X_k)=0$, then $\ufold{S_{i}^m}{\hat{v}(\mathbf{s}_i^m)}=\emptylist$ by Definition \ref{ufold:def} of the unfolding.
It follows that $C_i^m= \tuple{\ufold{S_{i}^{m}}{\hat{v}(\mathbf{s}_{i}^m)},\ufold{R_i^m}{\hat{w}(\mathbf{r}_{i}^m)}} \reduces \emptylist$ and hence the claim holds regardless
of $D_i^m=\tuple{\ufold{S_{i}^{m}}{\mathbf{s}_{i}^m},\ufold{R_i^m}{\mathbf{r}_{i}^m}}$.
\item If $\mathbf{s}_i^m(X_k)>1$, and therefore $\hat{v}(\mathbf{s}_i^m)(X_k)>0$, then
it follows from Lemma \ref{increasing:unfolding:lemma}, that there exists a
\ce $\widetilde{S}_i^m$ such that
\begin{align*}
\ufold{\mu X_k.S_i^{m+1}(X_k)}{\hat{v}(\mathbf{s}_i^m)} &= \ufold{\widetilde{S}_i^m}{\hat{v}(\hat{v}'(\mathbf{s}_i^m))} \\
\tand & \\
\ufold{\mu X_k.S_i^{m+1}(X_k)}{\mathbf{s}_i^m} &= \ufold{\widetilde{S}_i^m}{\hat{v}'(\mathbf{s}_i^m)}
\end{align*}
with $\hat{v}':\set{X_1,\ldots,X_s}\rightarrow \mathbb{N}$ being the iteration mapping induced by the function $v':\set{1,\ldots,s} \rightarrow \mathbb{N}$ defined by $v'(k)=1$ and $v'(j)=0$ for $j\neq k$.
Then we proceed by induction since
\begin{align*}
\big(\Im(\hat{v}(\hat{v}'(\mathbf{s}_i^m))),\Im(\hat{w}(\mathbf{r}_i^m)),\delta(\widetilde{S}_i^m),\delta(R_i^m)\big) < \big(\Im(\hat{v}(\mathbf{s}_i^m)),\Im(\hat{w}(\mathbf{r}_i^m)),\delta(S_i^m),\delta(R_i^m)\big).
\end{align*}
\end{enumerate}
\end{enumerate}
\end{proof}
\begin{corollary}
\label{comparing:unif:unfolding:corollary}
Let
\begin{align*}
n_1&= \max\set{\mathbf{s}(X_i)-\hat{v}(\mathbf{s})(X_i) \gvert \hat{v}(\mathbf{s})(X_i)\neq 0, \textrm{ for } i=1,\ldots,s} -1, \tand\\
n_2&= \max\set{\mathbf{r}(Y_i)-\hat{w}(\mathbf{r})(Y_i) \gvert \hat{w}(\mathbf{r})(Y_i)\neq 0, \textrm{ for } i=1,\ldots,r} -1, \tand \\
n&=\max(n_1,n_2)
\end{align*}
Then
\begin{align*}
\ufold{S}{\mathbf{s}}\combb \ufold{R}{\mathbf{r}} \equiv_{n} \ufold{S}{\hat{v}(\mathbf{s})}\combb \ufold{R}{\hat{w}(\mathbf{r})}.
\end{align*}
\end{corollary}
\begin{proof}
By Lemma \ref{comparing:unif:unfolding:lemma}, for any derivation
\begin{align*}
\tuple{\ufold{S}{\hat{v}(\mathbf{s}_0^0)},\ufold{R}{\hat{w}(\mathbf{r}_0^0)}} &\usimreduces{m} M_m[C_1^m,\ldots,C_{g(m)}^m] \\
\textrm{ there is a derivation } &\\
\tuple{\ufold{S}{\mathbf{s}_0^0},\ufold{R}{\mathbf{r}_0^0}} &\usimreduces{m} M_m[D_1^m,\ldots,D_{g(m)}^m] \\
\textrm{ such that } &\\
C_i^m &=\tuple{\ufold{S_i^m}{\hat{v}(\mathbf{s}_{i}^m)},\ufold{R_i^m}{\hat{w}(\mathbf{r}_i^m)}} \\
D_i^m &=\tuple{\ufold{S_i^m}{\mathbf{s}_i^m},\ufold{R_i^m}{\mathbf{r}_i^m}}
\end{align*}
and if $C_i^m=\emptylist$ and $D_i^m \neq \emptylist$ then the $m$-step derivations pass through at least $m$ positions.
\end{proof}
\subsection{The unification of iteration of \ces}
\begin{definition}[Distance between iteration mappings]
Let $\mathbf{s},\mathbf{s}': \{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r},\mathbf{r}':\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be iteration mappings such that
$\mathbf{s}>\mathbf{s}'$ and $\mathbf{r}>\mathbf{r}'$. Define
\begin{align*}
d(\mathbf{s},\mathbf{s}') &= \min\set{\mathbf{s}'(X_i) \gvert \mathbf{s}(X_i) \neq \mathbf{s}'(X_i), i=1,\ldots,s} \\
d^{\star}((\mathbf{s},\mathbf{r}),(\mathbf{s}',\mathbf{r}')) &= \max(d(\mathbf{s},\mathbf{s}'),d(\mathbf{r},\mathbf{r}'))
\end{align*}
\end{definition}
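For instance, take $s=2$ and write $\mathbf{s}=(\mathbf{s}(X_1),\mathbf{s}(X_2))$. If $\mathbf{s}=(3,2)$ and $\mathbf{s}'=(1,2)$, then the only variable at which the two mappings differ is $X_1$, so $d(\mathbf{s},\mathbf{s}')=\mathbf{s}'(X_1)=1$; if moreover $d(\mathbf{r},\mathbf{r}')=3$ for a pair of iteration mappings $\mathbf{r}>\mathbf{r}'$, then $d^{\star}((\mathbf{s},\mathbf{r}),(\mathbf{s}',\mathbf{r}'))=\max(1,3)=3$.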
\begin{definition}[Distances within a maximal sequence]
Let $S$ (resp. $R$) be a \ce with fixed-point variables $X_1,\ldots, X_s$ (resp. $Y_1,\ldots, Y_r$).
Let $n \ge 1$ and let $\mathbf{s}: \{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be iteration mappings with $\mathbf{s}(X_i)=\mathbf{r}(Y_j)=n$,
and let $\widehat{T}=\ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}$.
Given a maximal sequence $\Eu{T} = T_1, \ldots, T_m$, with $m\ge 2$, in the unification $S \combb R$,
define, for any $i,j$ with $1 \le i < j \le m$,
\begin{align*}
\Omega_{\Eu{T},\widehat{T}}(T_i,T_j) &= n- \max\big(\#_{{\Eu{T},\widehat{T}}}^1(T_i,T_j), \#_{{\Eu{T},\widehat{T}}}^2(T_i,T_j) \big)\\
\omega_{\Eu{T},\widehat{T}}(T_i,T_j) &= \begin{cases}
n - \Omega_{\Eu{T},\widehat{T}}(T_1,T_j) & \tif i=1 \\
\omega_{\Eu{T},\widehat{T}}(T_1,T_j) - \Omega_{\Eu{T},\widehat{T}}(T_i,T_j) & \totherwise
\end{cases}
\end{align*}
\end{definition}
If there is no ambiguity we simplify the notations by omitting $\Eu{T}$ and $\widehat{T}$ and simply writing $\Omega(T_i,T_j)$ instead of $\Omega_{\Eu{T},\widehat{T}}(T_i,T_j)$,
and writing $\omega(T_i,T_j)$ instead of $\omega_{\Eu{T},\widehat{T}}(T_i,T_j)$.
\begin{remark}
\label{triangle:inequality:remark}
Notice that
\begin{align}
\Omega(T_i,T_{j+k}) = \Omega(T_i,T_j) + \Omega(T_j,T_{j+k})
\end{align}
and that
\begin{align}
d^{\star}(\mycal{I}(T_i),\mycal{I}(T_j))-1 \le \omega(T_i,T_j) \le d^{\star}(\mycal{I}(T_i),\mycal{I}(T_j)).
\end{align}
\end{remark}
From Lemma \ref{comparing:unif:unfolding:lemma} we get the following corollary.
\begin{corollary}
\label{properties:morphism:corollary}
Let $S$ and $R$ be \ces with $s$ and $r$ bound fixed-point variables, respectively. Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be mappings.
For any maximal sequence in $S \combb R$:
\begin{align*}
(S \combb R) \subcer \mu Z_1.T_1(Z_1) \subcer \cdots \subcer \mu Z_m.T_m(Z_m) \subcer Z_i,
\end{align*}
with $m \ge 1$ and $i \in \set{1,\ldots,m}$,
if there is a $(\ceSet,\ceSetFree)$-morphism $\NF(\tuple{S,R,\emptyset}) \morphy \NF(\tuple{\ufold{S}{\mathbf{s}},\ufold{R}{\mathbf{r}},\emptyset})$,
then
\begin{align}
\label{properties:morphism:corollary:eq}
\phi_{\nu}(Z_i) \equiv_{\omega(\mu Z_i.T_i(Z_i),Z_i)} \phi_{\mu}(Z_i)
\end{align}
\end{corollary}
\begin{proof}
From Lemma \ref{properties:morphism:lemma} and Corollary \ref{comparing:unif:unfolding:corollary}, the claim follows.
\end{proof}
The following lemma is obtained by generalizing Item \ref{properties:morphism:lemma:item:2} of Lemma \ref{properties:morphism:lemma}.
\begin{lemma}
\label{distance:fixed-point-in-tree}
Let $S$ and $R$ be \ces with $s$ and $r$ bound fixed-point variables, respectively. Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be mappings.
For any maximal sequence in $S \combb R$:
\begin{align*}
(S \combb R) \subcer \mu Z_1.T_1(Z_1) \subcer \cdots \subcer \mu Z_m.T_m(Z_m) \subcer Z_i,
\end{align*}
with $m \ge 1$ and $i \in \set{1,\ldots,m}$.
The following hold.
\begin{enumerate}
\item \label{distance:fixed-point-in-tree:item:0} Assume $m \ge 2$ and let $i,j$ be such that $1 \le i < j \le m$. Let $T^{\star}_i(Z_i,Z^{\star}_j)$ be the \ce that satisfies
\begin{align*}
T^{\star}_i(Z_i,\mu Z_j.T_j(Z_j))= T_i(Z_i).
\end{align*}
Then
\begin{align}
\label{distance:fixed-point-in-tree:eq:0}
\Omega(\mu Z_i.T_i(Z_i),\mu Z_j.T_j(Z_j)) & \le \Pi_{Z^{\star}_{j}} (T^{\star}_i(Z_i,Z^{\star}_j))
\end{align}
\item \label{distance:fixed-point-in-tree:item:1} If $i=m$ then
\begin{align}
\omega(\mu Z_1. T_1(Z_1),Z_m) & \le \omega(\mu Z_m. T_m(Z_m), Z_m) \label{distance:fixed-point-in-tree:eq:1}
\end{align}
\item \label{distance:fixed-point-in-tree:item:2} For any $i \in \set{1,\ldots,m}$, there exists a \ce $T^{\star}_{m}[X]$ such that $T_m(Z_m)=T^{\star}_m[Z_i]$ and
\begin{align}
\omega\big(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)\big) & \le \omega\big(\mu Z_i. T_i(Z_i), Z_i\big) + \Pi_{X}(T^{\star}_m[X]) \label{distance:fixed-point-in-tree:eq:2}
\end{align}
\item \label{distance:fixed-point-in-tree:item:3} If $m\ge 2$ then for any $j \in \set{2,\ldots,m}$ we have
\begin{align}
\omega\big(\mu Z_1. T_1(Z_1), \mu Z_j. T_j(Z_j)\big) & \le \omega\big(\mu Z_j. T_j(Z_j,Z), Z\big) + \Pi_{Z}(T_j[Z_j,Z]) \label{distance:fixed-point-in-tree:eq:3}
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item The claim follows from the monotonicity property; that is, it is a generalization of Lemma \ref{position:cross:in:generated:ces:lemma}.
\item Eq. (\ref{distance:fixed-point-in-tree:eq:1}) follows from the definition of $\omega$.
\item To show Eq. (\ref{distance:fixed-point-in-tree:eq:2}) we first rely on the definitions of $\omega$ and $\Omega$ to claim that
\begin{align*}
\omega\big(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)\big) & = n - \Omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)) \\
\omega\big(\mu Z_i. T_i(Z_i), Z_i\big) &= \omega\big(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i)\big) - \Omega\big(\mu Z_i. T_i(Z_i), Z_i\big) \\
&= n - \Omega\big(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i)\big) - \Omega\big(\mu Z_i. T_i(Z_i), Z_i\big)
\end{align*}
Therefore,
\begin{align*}
(*) &= \omega\big(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)\big)- \omega\big(\mu Z_i. T_i(Z_i), Z_i\big) \\
&= n - \Omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)) - \big(n - \Omega\big(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i)\big) - \Omega\big(\mu Z_i. T_i(Z_i), Z_i\big)\big) \\
&= - \Omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)) + \Omega\big(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i)\big) + \Omega\big(\mu Z_i. T_i(Z_i), Z_i\big) \\
&= - \Omega(\mu Z_i. T_i(Z_i), \mu Z_m. T_m(Z_m)) + \Omega\big(\mu Z_i. T_i(Z_i), Z_i\big) \\
&= \Omega\big(\mu Z_m. T_m(Z_m), Z_i\big) \\
& \le \Pi_{X}(T^{\star}_m[X])
\end{align*}
\item Eq. (\ref{distance:fixed-point-in-tree:eq:3}) follows from Eq. (\ref{distance:fixed-point-in-tree:eq:0}) since $\Omega\big(\mu Z_j. T_j(Z_j,Z), Z\big) \le \Pi_{Z}(T_j[Z_j,Z])$.
\end{enumerate}
\end{proof}
\begin{lemma}
\label{distance:fixed-point-in-tree:remove:fixed-point:lemma}
Let $S$ and $R$ be \ces with $s$ and $r$ bound fixed-point variables, respectively. Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be mappings.
Let
\begin{align*}
(S \combb R) \subcer \mu Z_1.T_1(Z_1) \subcer \cdots \subcer \mu Z_m.T_m(Z_m),
\end{align*}
be a maximal sequence in $S \combb R$ with $m \ge 1$.
Then, for any $i=1,\ldots,m$ we have that
\begin{align}
\label{distance:fixed-point-in-tree:remove:fixed-point:lemma:eq}
\widehat{\phi}_{\nu}(\mu Z_i.T_i(Z_i)) & \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i))} \widehat{\phi}_{\nu}\big(T_i(Z_i)\big)
\end{align}
\end{lemma}
\begin{proof}
The proof is by induction on $m-i$.
\begin{description}
\item \underline{Base case:} we have $i=m$ and $T_m(Z_m)$ is fixed-point free, hence there exists a fixed-point free \ce $T^{\star}_m(Z^1,\ldots,Z^l,Z_m)$, with
$l \ge 0$ and $\set{Z^1,\ldots,Z^l} \subseteq \set{Z_1,\ldots,Z_{m-1}}$ such that $T_m(Z_m)=T^{\star}_m(Z^1,\ldots,Z^l,Z_m)$.
Therefore the left and right hand side of Eq. (\ref{distance:fixed-point-in-tree:remove:fixed-point:lemma:eq}) can be written respectively as
\begin{align}
\widehat{\phi}_{\nu}(\mu Z_m.T_m(Z_m))&= \widehat{\phi}_{\nu}\big(\mu Z_m. T^{\star}_m(Z^1,\ldots,Z^l,Z_m)\big) = \mu Z_m. T^{\star}_m( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),Z_m) \\
\widehat{\phi}_{\nu}(T_m(Z_m)) &= \widehat{\phi}_{\nu}\big( T^{\star}_m(Z^1,\ldots,Z^l,Z_m)\big) = T^{\star}_m( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),\widehat{\phi}_{\nu}(Z_m))
\end{align}
Therefore, to show Eq.(\ref{distance:fixed-point-in-tree:remove:fixed-point:lemma:eq}) for $i=m$ we need to show that
\begin{align*}
\mu Z_m. T^{\star}_m( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),Z_m) \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m))} T^{\star}_m( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),\widehat{\phi}_{\nu}(Z_m))
\end{align*}
By Corollary \ref{general-fixed-point-corollary} it suffices to show
\begin{align*}
T^{\star}_m\big( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),\widehat{\phi}_{\nu}(Z_m)\big) \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m))} \widehat{\phi}_{\nu}(Z_m).
\end{align*}
But since $T^{\star}_m\big( \widehat{\phi}_{\nu}(Z^1),\ldots, \widehat{\phi}_{\nu}(Z^l),\widehat{\phi}_{\nu}(Z_m)\big) = \widehat{\phi}_{\mu}(Z_m)$ by the definition of $\phi_{\mu}$ and $\phi_{\nu}$,
we need to show that
\begin{align*}
\widehat{\phi}_{\mu}(Z_m) \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m))} \widehat{\phi}_{\nu}(Z_m).
\end{align*}
But from Corollary \ref{properties:morphism:corollary} we know that $\widehat{\phi}_{\mu}(Z_m) \equiv_{\omega(\mu Z_m. T_m(Z_m),Z_m)} \widehat{\phi}_{\nu}(Z_m)$.
By Item \ref{depth:position:composition:lemma:item:1} of Lemma \ref{depth:position:composition:lemma},
it suffices to show that $\omega(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m)) \le \omega(\mu Z_m. T_m(Z_m),Z_m) +\Pi_{Z_m}(T^{\star}_m(Z^1,\ldots,Z^l,Z_m))$.
But this follows from Eq. (\ref{distance:fixed-point-in-tree:eq:2}) of Lemma \ref{distance:fixed-point-in-tree} in which we take $i=m$.
\item \underline{Induction step:} Assume that Eq. (\ref{distance:fixed-point-in-tree:remove:fixed-point:lemma:eq}) holds for $i$; we prove it for $i-1$.
There exists a fixed-point free \ce $T^{\star}_{i-1}(Z^1,\ldots,Z^l)$, with
$l \ge 1$ and fixed-point variables $\set{Z^1,\ldots,Z^l}$, such that $T_{i-1}(Z_{i-1})=T^{\star}_{i-1}(M^1,\ldots,M^l)$ where each $M^j$ is either a fixed-point sub-\ce $\mathbf{T}^j$ of $\mu Z_{i-1}. T_{i-1}(Z_{i-1})$ or
a fixed-point variable $Z^j$ in $\set{Z_1,\ldots,Z_{i-1}}$, i.e.
\begin{align*}
T_{i-1}(Z_{i-1})=T^{\star}_{i-1}(\mathbf{T}^1,\ldots,\mathbf{T}^k,Z^1,\ldots,Z^{v})
\end{align*}
Therefore the left and right hand side of Eq. (\ref{distance:fixed-point-in-tree:remove:fixed-point:lemma:eq}) can be written respectively as follows:
\begin{enumerate}
\item \underline{If $Z_i \in \set{Z^1,\ldots,Z^v}$}, then assume for simplicity that $Z_i=Z^v$. In this case
\begin{align*}
\widehat{\phi}_{\nu}(\mu Z_{i-1}.T_{i-1}(Z_{i-1}))&= \widehat{\phi}_{\nu}\big(\mu Z_{i-1}. T^{\star}_{i-1}(\mathbf{T}^1,\ldots,\mathbf{T}^k,Z^1,\ldots,Z^{v-1},Z_{i-1})\big) \\
&= \mu Z_{i-1}. T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v-1}),Z_{i-1}) \\
\tand & \\
\widehat{\phi}_{\nu}(T_{i-1}(Z_{i-1})) &= \widehat{\phi}_{\nu}\big(T^{\star}_{i-1}(\mathbf{T}^1,\ldots,\mathbf{T}^k,Z^1,\ldots,Z^{v-1},Z_{i-1})\big) \\
&= T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v-1}), \widehat{\phi}_{\nu}(Z_{i-1}))
\end{align*}
By Corollary \ref{general-fixed-point-corollary} it suffices to show
\begin{align*}
T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v-1}), \widehat{\phi}_{\nu}(Z_{i-1}))
\equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1}))}
\widehat{\phi}_{\nu}(Z_{i-1}).
\end{align*}
But since $ T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v-1}), \widehat{\phi}_{\nu}(Z_{i-1})) = \widehat{\phi}_{\mu}(Z_{i-1})$ by the definition of $\phi_{\mu}$ and $\phi_{\nu}$,
we need to show that
\begin{align*}
\widehat{\phi}_{\mu}(Z_{i-1}) \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1}))} \widehat{\phi}_{\nu}(Z_{i-1}).
\end{align*}
But from Corollary \ref{properties:morphism:corollary} we know that $\widehat{\phi}_{\mu}(Z_{i-1}) \equiv_{\omega(\mu Z_{i-1}. T_{i-1}(Z_{i-1}),Z_{i-1})} \widehat{\phi}_{\nu}(Z_{i-1})$.
By Item \ref{depth:position:composition:lemma:item:1} of Lemma \ref{depth:position:composition:lemma},
it suffices to show that
\begin{align*}
\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1})) \le \omega(\mu Z_{i-1}. T_{i-1}(Z_{i-1}),Z_{i-1}) +\Pi_{Z_{i-1}}(T^{\star}_{i-1}(Z^1,\ldots,Z^l,Z_{i-1})).
\end{align*}
But this follows from Eq. (\ref{distance:fixed-point-in-tree:eq:2}) of Lemma \ref{distance:fixed-point-in-tree} by taking $i-1$ in place of $i$.
\item \underline{If $Z_i \notin \set{Z^1,\ldots,Z^v}$} then in this case $Z_i$ is a free variable of one of $\mathbf{T}^1,\ldots,\mathbf{T}^k$. Assume for simplicity that $Z_i$ is a free variable of $\mathbf{T}^k$;
in this case let $\phi'_{\nu}$ be the restriction of $\phi_{\nu}$ to $\set{Z_1,\ldots,Z_{i-1}}\setminus \set{Z_{i-1}}$, that is, $\phi'_{\nu}(Z)=\phi_{\nu}(Z)$ for $Z \neq Z_{i-1}$.
\begin{align*}
\widehat{\phi}_{\nu}(\mu Z_{i-1}.T_{i-1}(Z_{i-1}))&= \widehat{\phi}_{\nu}\big(\mu Z_{i-1}. T^{\star}_{i-1}(\mathbf{T}^1,\ldots,\mathbf{T}^k[Z_{i-1}],Z^1,\ldots,Z^v)\big) \\
&= \mu Z_{i-1}. T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}'_{\nu}(\mathbf{T}^k[Z_{i-1}]),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^v)) \\
\tand & \\
\widehat{\phi}_{\nu}(T_{i-1}(Z_{i-1})) &= \widehat{\phi}_{\nu}\big(T^{\star}_{i-1}(\mathbf{T}^1,\ldots,\mathbf{T}^k[Z_{i-1}],Z^1,\ldots,Z^v)\big) \\
&= T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}'_{\nu}(\mathbf{T}^k[\widehat{\phi}_{\nu}(Z_{i-1})]),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^v))
\end{align*}
By Corollary \ref{general-fixed-point-corollary} it suffices to show
\begin{align*}
T^{\star}_{i-1}\big(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k[Z_{i-1}]),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v})\big)
\equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1}))}
\widehat{\phi}_{\nu}(Z_{i-1}).
\end{align*}
But since $ T^{\star}_{i-1}(\widehat{\phi}_{\nu}(\mathbf{T}^1),\ldots,\widehat{\phi}_{\nu}(\mathbf{T}^k[Z_{i-1}]),\widehat{\phi}_{\nu}(Z^1),\ldots,\widehat{\phi}_{\nu}(Z^{v})) = \widehat{\phi}_{\mu}(Z_{i-1})$ by the definition of $\phi_{\mu}$ and $\phi_{\nu}$,
we need to show that
\begin{align*}
\widehat{\phi}_{\mu}(Z_{i-1}) \equiv_{\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1}))} \widehat{\phi}_{\nu}(Z_{i-1}).
\end{align*}
But from Corollary \ref{properties:morphism:corollary} we know that $\widehat{\phi}_{\mu}(Z_{i-1}) \equiv_{\omega(\mu Z_{i-1}. T_{i-1}(Z_{i-1}),Z_{i-1})} \widehat{\phi}_{\nu}(Z_{i-1})$.
By Item \ref{depth:position:composition:lemma:item:1} of Lemma \ref{depth:position:composition:lemma}, it suffices to show that
\begin{align*}
\omega(\mu Z_1. T_1(Z_1), \mu Z_{i-1}. T_{i-1}(Z_{i-1})) \le \omega(\mu Z_{i-1}. T_{i-1}(Z_{i-1}),Z_{i-1}) +\Pi_{Z_{i-1}}(T^{\star}_{i-1}(Z^1,\ldots,Z^l,Z_{i-1})).
\end{align*}
But this follows from Eq. (\ref{distance:fixed-point-in-tree:eq:3}) of Lemma \ref{distance:fixed-point-in-tree}.
\end{enumerate}
\end{description}
\end{proof}
\subsection{Relating the structure of the unification of two \ces with that of their unfoldings}
We come to the main result of this subsection by relating the structure of the unification of two \ces with that of their unfoldings.
\begin{lemma}
\label{main:lemma:unif}
Let $S$ (resp. $R$) be a \ce with bound fixed-point variables $X_1,\ldots, X_s$ (resp. $Y_1,\ldots, Y_r$) and let $n \ge 0$.
Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be constant mappings with $\mathbf{s}(X_i)=\mathbf{r}(Y_j)=n$, for $i=1,\ldots,s$ and $j=1,\ldots,r$.
Let $\Eu{T}=(\Phi_{\mu}(S \combb R),\subcel,\mu Z_1.T_1(Z_1))$ be a maximal fixed-point tree of $S \combb R$.
For any sub-tree $\Eu{T}_m$ of $\Eu{T}$ rooted at $\alpha_m$ that comes with the maximal sequence
\begin{align*}
(S \combb R) \subcer \alpha_1 \subcer \cdots \subcer \alpha_m
\end{align*}
if $S \combb R \morphy \ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}} $ then either
\begin{enumerate}
\item $\alpha_m=Z_m$ and in this case
\begin{align}
\label{main:lemma:unif:eq:1}
\widehat{\phi}_{\nu}(Z_m) & \equiv_{\omega{(\mu Z_1. T_1(Z_1), Z_m)}} \phi_{\mu} (Z_m)
\end{align}
or,
\item $\alpha_m= \mu Z_m.T_m(Z_m)$ and in this case
\begin{align}
\label{main:lemma:unif:eq:2}
\widehat{\phi}_{\nu}(T_m(Z_m)) & \equiv_{\omega{(\mu Z_1. T_1(Z_1), \mu Z_m. T_m(Z_m))}} \phi_{\mu}\big(\mu Z_m.T_m(Z_m)\big)
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is by induction on $\prof{\Eu{T}_m}$, the depth of $\Eu{T}_m$.
\begin{description}
\item \textbf{Base case}. If $\prof{\Eu{T}_m}=1$, then in this case $\alpha_m$ is a fixed-point variable, say $Z_m$, and thus we need to show Eq. (\ref{main:lemma:unif:eq:1}).
From Eq. (\ref{properties:morphism:corollary:eq}) of Corollary \ref{properties:morphism:corollary} we have that
\begin{align}
\phi_{\nu}(Z_m)\equiv_{\omega{(\mu Z_m. T_m(Z_m), Z_m)}} \phi_{\mu}(Z_m)
\end{align}
However, from Item \ref{distance:fixed-point-in-tree:item:1} of Lemma \ref{distance:fixed-point-in-tree} we have that $\omega(\mu Z_1. T_1(Z_1),Z_m) \le \omega(\mu Z_m. T_m(Z_m), Z_m)$.
Hence Eq. (\ref{main:lemma:unif:eq:1}) holds by Item \ref{depth:position:composition:lemma:item:2} of Lemma \ref{depth:position:composition:lemma}.
\item \textbf{Induction step}. Assume that Eq. (\ref{main:lemma:unif:eq:2}) holds for any fixed-point tree $\Eu{T}_m$ of depth $\prof{\Eu{T}_m}$, and we shall prove it for any fixed-point tree of depth $\prof{\Eu{T}_m}+1$.
Notice that in this case $m \ge 2$. Consider a sub-tree $\Eu{T}_{m-1}$ of $\Eu{T}$ of depth $\prof{\Eu{T}_{m-1}}=\prof{\Eu{T}_{m}}+1$, rooted at $\mu Z_{m-1}.T_{m-1}(Z_{m-1})$, that comes with the maximal sequence
\begin{align*}
(S \combb R) \subcer \mu Z_1.T_1(Z_1) \subcer \cdots \subcer \mu Z_{m-1}.T_{m-1}(Z_{m-1}).
\end{align*}
As remarked earlier, the \ce $T_{m-1}(Z_{m-1})$ can be written in terms of its immediate fixed-point sub-\ces and fixed-point variables, in the sense that there exist $k \ge 1$ and
\begin{enumerate}[i.)]
\item a fixed-point free \ce $T_{m-1}^{\star}[X^1 ,\ldots,X^k]$ where each $X^j$ is a fixed-point variable, and
\item \ces $\mathbf{T}_1,\ldots,\mathbf{T}_k$ where each $\mathbf{T}_j$ is either a fixed-point \ce or a fixed-point variable in $\set{Z_1,\ldots,Z_{m-1}}$,
\end{enumerate}
such that $T_{m-1}(Z_{m-1})$ can be written as
\begin{align*}
T_{m-1}(Z_{m-1}) = T_{m-1}^{\star}[\mathbf{T}_1 ,\ldots,\mathbf{T}_{k}].
\end{align*}
On the one hand, by the definition of $\widehat{\phi}_{\nu}$, we have that
$\widehat{\phi}_{\nu}\big(T_{m-1}(Z_{m-1})\big) = \widehat{\phi}_{\nu}\big(T_{m-1}^{\star}[\mathbf{T}_1 ,\ldots,\mathbf{T}_{k}]\big)=T_{m-1}^{\star}[\widehat{\phi}_{\nu}(\mathbf{T}_1) ,\ldots,\widehat{\phi}_{\nu}(\mathbf{T}_{k})]$.
On the other hand, by the defining property of $\phi_{\mu}$, we have that $\phi_{\mu}(\mu Z_{m-1}. T_{m-1}(Z_{m-1}))=T_{m-1}^{\star}[\phi_{\mu}(\mathbf{T}_1) ,\ldots,\phi_{\mu}(\mathbf{T}_{k})]$.
Therefore we need to show that
\begin{align}
\label{equivalnece:main:lemma:induction:eq}
T^{\star}_{m-1}[\widehat{\phi}_{\nu}(\mathbf{T}_1) ,\ldots,\widehat{\phi}_{\nu}(\mathbf{T}_{k})] & \equiv_{\omega{(\mu Z_1. T_1(Z_1), \mu Z_{m-1}. T_{m-1}(Z_{m-1}))}} T^{\star}_{m-1}[\phi_{\mu}(\mathbf{T}_1) ,\ldots,\phi_{\mu}(\mathbf{T}_{k})].
\end{align}
We distinguish two cases depending on whether $\mathbf{T}_{i}$ is a fixed-point variable in $\set{Z_1,\ldots,Z_{m-1}}$, or a fixed-point \ce that is, by definition, a sub-\ce of $S \combb R$.
\begin{itemize}
\item \underline{If $\mathbf{T}_{i} \in \set{Z_1,\ldots,Z_{m-1}} $} then assume for simplicity that $\mathbf{T}_{i} =Z_i=X^i$.
It follows from Eq.(\ref{properties:morphism:corollary:eq}) of Corollary \ref{properties:morphism:corollary} that
\begin{align}
\widehat{\phi}_{\nu}(Z_{i})\equiv_{\omega{(\mu Z_i. T_i(Z_i), Z_i)}} \phi_{\mu}(Z_{i}).
\end{align}
Hence, it follows from Item \ref{depth:position:composition:lemma:item:1} of Lemma \ref{depth:position:composition:lemma} that to show Eq.(\ref{equivalnece:main:lemma:induction:eq}) it suffices to show that
\begin{align}
\omega\big(\mu Z_1. T_1(Z_1), \mu Z_{m-1}. T_{m-1}(Z_{m-1})\big) \le \omega\big(\mu Z_i. T_i(Z_i), Z_i\big) + \Pi_{X^i}(T^{\star}_{m-1}[X^1,\ldots,X^k])
\end{align}
But this follows from Item \ref{distance:fixed-point-in-tree:item:2} of Lemma \ref{distance:fixed-point-in-tree}.
\item \underline{If $\mathbf{T}_i \in \Phi_{\mu}(S\combb R)$} then assume that $\mathbf{T}_{i}$ is of the form $\mathbf{T}_{i}=\mu Z^i. \mathbf{T}_{i}^{\star}(Z^i)$.
In order to apply the induction hypothesis we need to show that
\begin{align}
\widehat{\phi}_{\nu}(\mu Z^i. \mathbf{T}_{i}^{\star}(Z^i)) & \equiv_{\omega{(\mu Z_1. T_1(Z_1), \mu Z_i. T_i(Z_i))}} \widehat{\phi}_{\nu}(\mathbf{T}_{i}^{\star}(Z^i)) \label{a:1}\\
\tand & \notag \\
\omega\big(\mu Z_1. T_1(Z_1), \mu Z_{m-1}. T_{m-1}(Z_{m-1})\big) & \le \omega\big(\mu Z_1. T_1(Z_1),\mu Z_i. T_i(Z_i)\big) + \Pi_{X^i}(T^{\star}_{m-1}[X^1,\ldots,X^k]) \label{a:2}
\end{align}
Eq. (\ref{a:1}) follows from Lemma \ref{distance:fixed-point-in-tree:remove:fixed-point:lemma}, while
Eq. (\ref{a:2}) follows from Lemma \ref{distance:fixed-point-in-tree}.
\end{itemize}
\end{description}
\end{proof}
\begin{corollary}
\label{main:corollary:unif}
Let $S$ (resp. $R$) be a \ce with bound fixed-point variables $X_1,\ldots, X_s$ (resp. $Y_1,\ldots, Y_r$) and let $n \ge 0$.
Let $\mathbf{s}:\{X_1,\ldots,X_s\} \to \mathbb{N}$ and $\mathbf{r}:\{Y_1,\ldots,Y_r\} \to \mathbb{N}$ be constant mappings with $\mathbf{s}(X_i)=\mathbf{r}(Y_j)=n$ for $i=1,\ldots,s$ and $j=1,\ldots,r$.
Then,
\begin{align*}
S \combb R \equiv_n \ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}
\end{align*}
\end{corollary}
\begin{proof}
Immediate from Lemma \ref{main:lemma:unif} by taking $m=1$ and getting $\omega\big(\mu Z_1.T_1(Z_1),\mu Z_1.T_1(Z_1)\big)=n$.
\end{proof}
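In particular, taking for $n$ the depth of a term $t$, the corollary states that
\begin{align*}
S \combb R \equiv_n \ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}, \qquad \textrm{where } \mathbf{s}(X_i)=\mathbf{r}(Y_j)=n,
\end{align*}
so that, up to the depth inspected by $\Psi_t$, fixed points can be traded for finite unfoldings; this is exactly the form in which the corollary is applied in the proof of Theorem \ref{main:theorem:1} below.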
\subsection{Proof of the correctness of the unification and combination}
Now we are ready to prove the first main theorem of this paper regarding the correctness of the unification of \ces.
\begin{theorem}[Correctness of the unification]
\label{main:theorem:1}
For every term $t \in \mycal{T}$ and all \ces $S$ and $R$ in $\ceSet$,
we have that
\begin{align*}
\Psi_t(S \combb R) & = \Psi_t(S) \combb \Psi_t(R).
\end{align*}
\end{theorem}
\begin{proof}
Let $n$ be the depth of $t$. Assume that $X_1,\ldots, X_s$ (resp. $Y_1,\ldots, Y_r$) are the (bound) fixed-point variables of $S$ (resp. $R$)
and let $\mathbf{s}$ and $\mathbf{r}$ be constant mappings with $\mathbf{s}(X_i)=\mathbf{r}(Y_j)=n$, for $i=1,\ldots,s$ and $j=1,\ldots,r$.
From Corollary \ref{main:corollary:unif} we have that $S \combb R \equiv_n \ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}$.
Therefore, it follows from Item (\ref{item:3:nice:prop:Psi:lemma}) of Lemma \ref{nice:prop:Psi:lemma} that $\Psi_t(S \combb R) = \Psi_t\big(\ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}\big)$.
But since $\ufold{S}{\mathbf{s}}$ and $\ufold{R}{\mathbf{r}}$ are fixed-point free,
we can apply Lemma \ref{main:lemma:unif:fixed-point-free} and get $\Psi_t\big(\ufold{S}{\mathbf{s}} \combb \ufold{R}{\mathbf{r}}\big) =\Psi_t(\ufold{S}{\mathbf{s}}) \combb \Psi_t(\ufold{R}{\mathbf{r}})$.
From Lemma \ref{unfold:equiv:lemma} we have that $\ufold{S}{\mathbf{s}} \equiv_n S$ and $\ufold{R}{\mathbf{r}} \equiv_n R$.
Hence by Item (\ref{item:3:nice:prop:Psi:lemma}) of Lemma \ref{nice:prop:Psi:lemma} we get $\Psi_t(\ufold{S}{\mathbf{s}})= \Psi_t(S)$ and $\Psi_t(R)= \Psi_t(\ufold{R}{\mathbf{r}})$.
Therefore $\Psi_t(S \combb R) = \Psi_t(S) \combb \Psi_t(R)$.
\end{proof}
We can now state and prove the second main theorem of this paper on the correctness of the combination of \ces.
\begin{theorem}[Correctness of the combination]
\label{main:theorem:2}
For every term $t \in \mycal{T}$ and all \ces $S$ and $R$ in $\ceSetCan$,
we have that
\begin{align*}
\Psi_t(S \comb R) = \Psi_t(S) \comb \Psi_t(R).
\end{align*}
\end{theorem}
\begin{proof}
\begin{align*}
\Psi_t(S \comb R)
&= \Psi_t\big((S \combb R) \oplus S \oplus R\big) \tag{Def. \ref{combination:def} of $\comb$} \\
&\equiv \Psi_t(S \combb R) \oplus \Psi_t(S) \oplus \Psi_t(R) \tag{Item (\ref{Properties-of-Psi:Lemma:item:2}) of Lemma \ref{Properties-of-Psi:Lemma} } \\
&= \left(\Psi_t(S) \combb \Psi_t(R)\right) \oplus \Psi_t(S) \oplus \Psi_t(R) \tag{Theorem \ref{main:theorem:1}} \\
&= \Psi_t(S) \comb \Psi_t(R) \tag{Def. \ref{combination:def} of $\comb$}
\end{align*}
Thus we get $\Psi_t(S \comb R) \equiv \Psi_t(S) \comb \Psi_t(R)$. But since both $\Psi_t(S \comb R)$ and $\Psi_t(S) \comb \Psi_t(R)$ are position based \ces in $\eceSet$,
it follows from Item (\ref{item:1:nice:prop:Psi:lemma}) of Lemma \ref{nice:prop:Psi:lemma} that any equivalent position-based \ces are equal, that is, $\Psi_t(S \comb R) = \Psi_t(S) \comb \Psi_t(R)$.
\end{proof}
\section{The algebraic properties of the unification and combination}
One can transfer all the properties of the combination and
unification of position-based \ces (stated in Propositions
\ref{main:prop:elemntary:og:prop:1} and \ref{main:prop:elemntary:og:prop:2}) to \ces.
Since "$\equiv$" is an equivalence relation we shall use the standard notation $\ceSet\slash\equiv$ for the equivalence set,
and $[S]$ for the equivalence class of the \ce $S$. Moreover, the unification and combination of the equivalence classes of \ces in $\ceSet\slash\equiv$ can be
defined in a natural way as:
\begin{align}
[S_1] \combb [S_2] := [S_1 \combb S_2] && [S_1] \comb [S_2] := [S_1 \comb S_2]
\end{align}
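These quotient operations are well defined, that is, independent of the chosen representatives, precisely because of the congruence property stated in the last item of Proposition \ref{main:prop:2} below: if $S_1 \equiv S'_1$ and $S_2 \equiv S'_2$, then
\begin{align*}
S_1 \combb S_2 \equiv S'_1 \combb S_2 \equiv S'_1 \combb S'_2, \qquad \textrm{hence} \qquad [S_1 \combb S_2] = [S'_1 \combb S'_2],
\end{align*}
and similarly for the combination $\comb$.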
\begin{proposition}
\label{main:prop:2}
The following hold.
\begin{enumerate}
\item The quotient set $\ceSet\slash\equiv$ of \ces together with the unification and combination operations enjoys the three following properties.
\begin{enumerate}
\item The neutral element of the unification upon $\ceSet\slash\equiv$ is $[@\E.\square]$, and the absorbing element is $[\emptylist]$.
\item The neutral element of the combination upon $\ceSet\slash\equiv$ is $[\emptylist]$.
\item The unification and combination of \ces are associative i.e. $([S_1] \combb [S_2]) \combb [S_3] = [S_1] \combb ([S_2] \combb [S_3])$,
for any $[S_1],[S_2],[S_3] \in \ceSet\slash \equiv$.
\end{enumerate}
\item The unification and combination of \ces are non-commutative.
\item The unification and combination are non-degenerate, that is, for any equivalence classes $[S]$ and $[S']$ in $\ceSet\slash\equiv$, we have that
\begin{align*}
[S] \nfcombb [S'] = [\emptylist] &&\tiff && [S] = [\emptylist] \;\tor\; [S'] = [\emptylist]. \\
[S] \comb [S'] = [\emptylist] &&\tiff && [S] = [\emptylist] \;\textrm{and }\; [S'] = [\emptylist].
\end{align*}
\item The unification and combination of \ces are congruences, that is,
for any \ces $\mycal{S}_1,\mycal{S}_2, \mycal{S}$ in $\ceSet$, we have that:
\begin{align*} \textrm{If } \mycal{S}_1 \equiv \mycal{S}_2 &&\tthen&& \mycal{S}_1 \nfcombb \mycal{S} \equiv \mycal{S}_2 \nfcombb \mycal{S} \;\tand\; \mycal{S} \nfcombb \mycal{S}_1 \equiv \mycal{S} \nfcombb \mycal{S}_2.\\
\textrm{If } \mycal{S}_1 \equiv \mycal{S}_2 &&\tthen &&
\mycal{S}_1 \comb \mycal{S} \equiv \mycal{S}_2 \comb \mycal{S} \;
\tand\; \mycal{S} \comb \mycal{S}_1 \equiv \mycal{S} \comb
\mycal{S}_2.
\end{align*}
\end{enumerate}
\end{proposition}
We notice that the neutral and absorbing element, and the
associativity property of the unification and combination must be
understood at the semantic level and not at the syntactic level
since there are \ces which are syntactically different but
semantically equivalent. For instance, the \ces $@\E.\square$ and
$(x,@\E.\square)$ and
$(x,@\E.\square) \oplus (y,@\E.\square)$, where $x,y$ are variables, are all equivalent.
Therefore, saying that $@\E.\square$ is the neutral element for the unification of \ces must be understood as follows.
For any \ces $e, \mycal{S} \in \ceSet$ such that $e \equiv @\E.\square$, we have that $e \combb \mycal{S} \equiv \mycal{S} \combb e \equiv \mycal{S}$.
And the associativity of the combination must be understood as follows. For any \ces $\mycal{S}_1,\mycal{S}_2,\mycal{S}_3 \in \ceSet$, we have that
$(\mycal{S}_1 \comb \mycal{S}_2) \comb \mycal{S}_3 \equiv \mycal{S}_1 \comb (\mycal{S}_2 \comb \mycal{S}_3)$.
\begin{proof}
We only prove the last Item. To prove the associativity of both the
unification and combination of \ces we rely on the associativity
of the unification and combination of elementary \ces
(Proposition \ref{main:prop:elemntary:og:prop}) together with the
property of the function $\Psi_t$ (Theorems \ref{main:theorem:1} and
\ref{main:theorem:2}).
Let $S_1,S_2$ and $S_3$ be \ces in $\ceSet$.
It follows from Item \emph{iii.)} of Lemma \ref{nice:prop:Psi:lemma} that in order to prove that
\begin{align*}
S_1 \comb (S_2 \comb S_3) \equiv (S_1 \comb S_2) \comb S_3,
\end{align*}
it suffices to prove that, for any term $t \in \mycal{T}$, we have that
\begin{align*}
\Psi_t\big(S_1 \comb (S_2 \comb S_3)\big) = \Psi_t\big((S_1 \comb S_2) \comb S_3\big).
\end{align*}
But this follows from an easy computation:
\begin{align*}
\Psi_t\big(S_1 \comb (S_2 \comb S_3)\big)
&= \Psi_t(S_1) \comb \Psi_t(S_2 \comb S_3) \tag{Theorem \ref{main:theorem:2}} \\
&= \Psi_t(S_1) \comb (\Psi_t(S_2) \comb \Psi_t(S_3)) \tag{Theorem \ref{main:theorem:2}} \\
&= (\Psi_t(S_1) \comb \Psi_t(S_2)) \comb \Psi_t(S_3) \tag{Proposition \ref{main:prop:elemntary:og:prop}}\\
&= \Psi_t(S_1 \comb S_2) \comb \Psi_t(S_3) \tag{Theorem \ref{main:theorem:2}} \\
&= \Psi_t\big((S_1 \comb S_2) \comb S_3\big) \tag{Theorem \ref{main:theorem:2}}
\end{align*}
On the one hand, it follows from Theorem \ref{main:theorem:1} that
\begin{align*}
\Psi_t(S_1 \nfcombb S) = \Psi_t(S_1) \combb \Psi_t(S).
\end{align*}
On the other hand, since $S_1 \equiv S_2$, it follows from Item
\emph{iii.)} of Lemma \ref{nice:prop:Psi:lemma} that
\begin{align*}
\Psi_t(S_1) = \Psi_t(S_2).
\end{align*}
Hence we get
\begin{align*}
\Psi_t(S_1 \nfcombb S) &= \Psi_t(S_2) \combb \Psi_t(S) \\
&= \Psi_t(S_2 \nfcombb S) \tag{Theorem \ref{main:theorem:1}}
\end{align*}
Again, from Item \emph{iii.)} of Lemma \ref{nice:prop:Psi:lemma}, we
get
\begin{align*}
S_1 \nfcombb S \equiv S_2 \nfcombb S.
\end{align*}
The proof of the remaining claims is similar.
\end{proof}
\section{Conclusion and future work}
We addressed the problem of extension and combination of
proofs encountered in the field of computer aided asymptotic model derivation.
We identified a class of rewriting strategies for which the operations of unification and
combination were defined and proved correct.
The design of this class is inspired by the $\mu $-calculus formalism \cite{rudimemt:mu-calculus:book}.
On the other hand, we make use of the fixed-point operator, which is finer and more powerful
than the \texttt{repeat} constructor used e.g. in \cite{Cirstea:Rew:Staretgies:03}.
The \ces are indeed modular in the sense that they first navigate in the tree without modifying it, and only then
insert contexts. This makes our formalism flexible since it allows one to modify and enrich the navigation part and/or the insertion
part without disturbing the set-up.
Although the \ces can be viewed as a finite algebraic representation of infinite trees \cite{COURCELLE-infinite-trees:83,CY91infinite_terms},
our technique of the unification and combination involving $\mu$-terms and their unfolding is new.
Therefore, we envision consequences of these
results on the study of the syntactic (or modulo a theory)
unification and the pattern-matching of infinite trees once the
infinite trees are expressed as $\mu $-terms in the same
way we expressed the \ces. Thus, a rewriting
language that transforms algebraic infinite trees can be elaborated.
The class of HCE-strategies is indeed a strict subclass of the
class of \emph{context embedding strategies}, CES-strategies for short, introduced in \cite{belkhir:hal-01277395}.
The strategy constructors of the class of CES-strategies feature
the insertion of contexts, the jump operator "@", the left-choice
"$\oplus$", the fixed-point operator "$\mu$" and a mechanism to
specify and handle the failure. While the constructors of the class
of HCE-strategies feature the insertion of contexts, the jump
operator "@", the left-choice "$\oplus$", the fixed-point operator
"$\mu$" and the \some strategy. This makes the class of
HCE-strategies less expressive than the class of CES-strategies but,
on the other hand, the encoding of the (HCE-strategy)
\texttt{Inside} in the class of CES-strategies yields a strategy
whose size depends on the signature. This makes the class of
HCE-strategies more practical although its constructors are less
rudimentary than the constructors of the class of CES-strategies.
Goods in an economy are produced by a network of industries, where each industry produces goods by combining the output of others. The structure of this network may provide clues to how economies function and eventually shed light on how economies change over time. While direct data on physical production flows between industries are unavailable, data on money flows are. This study presents some initial findings about the structure of this money flow network, with a particular emphasis on patterns that are shared across economies and can serve as targets for statistical physics models.
Money flows fall into a number of large categories of transactions, such as output, consumption, income, and investment. Also included are the somewhat smaller (though still large) flows between industries. National accounting provides a system for cataloguing these money flows. Although national accounting does not use network terminology to describe these flows, they are naturally expressed in these terms, with links representing flows and nodes representing industries or sectors. Here, we focus on a subset of money flows, those within the business sector, which comprises the industries of an economy (Fig. \ref{fig:economy_diagram}). The resulting web of industrial trading is therefore not a closed network but an open one, with flows entering and exiting from outside.
Our data comes from input/output (I/O) tables, which are part of the national accounting data compiled by national statistical agencies. The I/O tables are quite similar to adjacency matrices, with several additional rows and columns added to account for boundary flows, changes to stocks, and special categories of goods, as well as separate tables to account for import flows.
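Concretely, and as a minimal formalization that sets aside the additional rows and columns for boundary flows, if $Z_{ij}$ denotes the money flow recorded in the I/O table from industry $i$ to industry $j$ (the symbol $Z$ is introduced here only for illustration), then the industrial network is the weighted, directed graph on the $N$ industries whose adjacency matrix is
\begin{align*}
W_{ij} = Z_{ij}, \qquad i,j \in \{1,\ldots,N\},
\end{align*}
with a link from $i$ to $j$ present whenever $Z_{ij}>0$.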
A few studies have already applied network approaches to the I/O tables. These studies are roughly divided between empirical studies of structure \cite{Slater1977,Slater1978,Aroche-Reyes2003,Carvalho2007} and theoretical models of dynamics \cite{Leontief1986,Carvalho2007,Bloechl2011}. The structure studies suggest the existence of clustering among industries. Carvalho \cite{Carvalho2007} further finds an asymmetry between in-flows and out-flows of industries that implies an asymmetry between industries as providers of goods and users of goods. While different industries tend to require similar numbers of input goods, they may provide inputs to either many or few other industries -- showing that some industries are general purpose providers while others are specialists. The models of dynamics have focused on the role that network structure may play in economy-wide fluctuations, by modeling how shocks propagate through the web of industries. However, despite previous work, many basic properties of these networks remain uninvestigated.
This paper is a step towards eventually building network models of economies. We begin by explaining some basic principles of national accounting and the measurement basis for money flows, since these concepts may not be common knowledge among physicists. We then analyze industrial networks in terms of topology, flow size distribution, industry size distribution, and community structure. Our findings suggest that industrial networks have rich structure that is susceptible to analysis using complex systems approaches.
The paper is organized as follows. In Section \ref{sec:national_accounting} we explain the principles of national accounting. In Section \ref{sec:description_of_data} we describe the data set. In Section \ref{sec:network_characteristics} we discuss the topology, flow size distribution, industry size distribution, and community structure of the industrial networks in our data set. In Section \ref{sec:discussion} we discuss our results.
\section{National accounting}\label{sec:national_accounting}
\begin{figure*}[t!]
\textsf{\textbf{a}}\\ \includegraphics[width=.7\textwidth]{economy_diagram.pdf}
\textsf{\textbf{b}} \includegraphics[width=.3\textwidth]{industry_diagram.pdf}
\caption[Sectoral level money flows]{(a) Flows of money between sectors in an economy. The dashed box indicates the scope of the I/O tables. The 3 ``gates'' X, O, and V show approximately where the 3 methods for measuring GDP capture flows to compute GDP. (See Eq. \eqref{GDP_equation}.) Gate X corresponds to the ``expenditure approach'', gate O to the ``output approach'', and gate V to the ``value-added (income) approach''. Although the finance sector is shown apart from the business sector, the I/O tables do include inferred service fee flows between finance and industries of the business sector. Credit flows and deposits, however, are outside the scope of the I/O tables. For clarity, several features are not shown: credit flows from non-finance sectors, business taxes, government subsidies, transfer payments, government imports, government self-flows, investment to foreign countries. Not all interest flows are shown, but can be inferred from credit flows. One of the five SNA (System of National Accounts) sectors, non-profits, is also not shown for clarity. (b) Flows through a particular industry of the business sector.}
\label{fig:economy_diagram}
\end{figure*}
Measurement of money flows involves substantially more complications than measurement of other kinds of network flows, such as energy, information, or air passenger traffic, due to the many categories of transactions that are separately accounted for and the conventions of national accounting. In this section we briefly describe national accounting methods for quantifying these flows.
First it is useful to describe the general structure of economies; this broader context helps make sense of the logic behind industry network data. Economies are composed of five ``institutional sectors'' or simply ``sectors'' for short: households, non-financial business, financial business, government, and non-profits. (Fig. \ref{fig:economy_diagram}a) The largest money flows are ``household consumption'' -- purchases of business sector goods and services by the household sector -- and ``value added''. Value added partly corresponds to purchases of household sector labor by the business sector, though it contains other components as well, as we discuss further on. Household consumption and value added collectively are referred to as the ``circular flow'' by economists and constitute the backbone of sectoral money flow structure. Note that the circular flow in this sense refers only to the monetary aspect of the economy. Biophysical flows, which also have a circular component, are maintained by boundary flows from free energy to wastes that have no monetary analog.
Next in size is ``intermediate consumption'' by the business sector. Intermediate consumption represents purchases made by industries for goods produced by other industries. Whereas household consumption goods are intrinsically desirable -- a bottle of cola, say -- goods purchased for intermediate consumption are not, but are inputs required to produce intrinsically desirable goods -- e.g., carbonated water, syrup, and glass. Intermediate consumption is recorded in input/output tables, which are a part of the accounting system used by national statistical agencies to record economic activity.
Capital purchases are an important exclusion from transactions classified as intermediate consumption. Capital purchases are purchases for goods that aid the production of other goods \emph{and} can be used repeatedly over time -- a bottling machine, say. Goods are classified as capital goods when they can be used repeatedly for more than one accounting period, usually one year. Most input/output tables only record the industry selling a capital good and not the industry buying it. Thus, instead of a full adjacency matrix of capital purchases, input/output data usually only records a vector of capital revenues received by each industry.
The transactions underlying money flows in the network are compiled on an ``accrual basis''. Under this accounting system, revenues are recognized when they are earned by the transfer of goods or the performance of a service. Expenses are recognized when the associated revenues are earned. To see how this affects the recording of money flows, consider a car maker purchasing steel, producing a car, and selling it over some period of time. Under an accrual basis, the accounts of the car maker -- and those of the automotive industry in the input/output tables -- will record sales revenue being received when the car is transferred to the consumer, even if the full purchase price is not paid immediately. At the same time, the steel expense will be matched to the car that it helped produce, even though such expenses were actually incurred earlier. The alternative method of recording transactions is ``cash-flow basis'' accounting, in which transactions are recognized when money is paid or received. Accrual basis accounting can be thought of as a pseudo-goods tracking approach, because it follows the movement of goods rather than the movement of money.\footnote{An important special case of the distinction between accrual basis and cash-flow basis transactions is depreciation flows. Depreciation in accounting is the assignment of portions of a fixed expense to multiple time periods. Depreciation transactions are recorded as though the depreciable asset is consumed over time. The consumption of a depreciable asset is thus recognized as a transaction many times throughout the depreciable lifetime of the asset, even though no literal cash flow occurs.}
Money flows within the full sector-level network are not conserved for at least two reasons. First, money may disappear from accidental loss or destruction. Second and more importantly, money is regularly created and destroyed by the financial sector. National accounting does enforce a virtual conservation law, though, through the use of balancing items, which are accounting entries that are calculated as the difference between other accounting entries. In the I/O tables, the balancing item is value-added, which is calculated as the difference between total sales by the business sector and intermediate consumption sales. Value-added ``measures the value created by production'' \cite{SNA2008} and encompasses all forms of personal income -- employee compensation, interest, dividends, and rent, as well as certain kinds of taxes and depreciation.
Finally, though it is not essential to our purpose of analyzing industry networks, it is useful to understand how GDP is calculated and how it relates to industry networks. Exploiting the conservation enforced by the definition of value-added, one can equate money flows in and out of the business sector:
\begin{align}\label{business_throughflow}
{\small
\begin{array}{r}
\text{value added}\\
+ \text{intermediate consumption}\\
+ \text{imports}\\
+ \text{business taxes}
\end{array}
=
\begin{array}{l}
\text{intermediate consumption}\\
+ \text{household consumption}\\
+ \text{government consumption}\\
+ \text{capital formation}\\
+ \text{exports}\\
+ \text{subsidies}
\end{array}}
\end{align}
Or, by rearranging terms,
\begin{align}\label{GDP_equation}
{\small
\begin{array}{r}
\text{value added}\\
+ \text{business taxes}\\
- \text{subsidies}
\end{array}
=
\begin{array}{l}
\text{household consumption}\\
+ \text{government consumption}\\
+ \text{capital formation}\\
+ \text{exports}\\
- \text{imports}
\end{array}
\equiv \text{GDP}}.
\end{align}
The left hand side represents the ``income approach'' to calculating GDP, in which forms of income are summed. The right hand side represents the ``expenditure approach'' to calculating GDP. By using the identity ``$\text{value added} = \text{gross output} - \text{intermediate consumption}$'', a third approach can be derived -- the ``output approach'' -- where value added is calculated as the difference between all business sales and intermediate goods sales. All three approaches are used by statistical agencies to validate GDP calculations. They also provide equivalent intuitive interpretations of GDP as a measure of total income, a measure of total expenditures, and as the net output of the business sector.
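As a minimal numerical illustration of these identities (all figures below are hypothetical, chosen only so the accounts balance), GDP can be computed by all three approaches and checked for agreement:
\begin{verbatim}
# Hypothetical accounts (arbitrary units); all three GDP
# measures must agree by construction.
value_added = 100.0
business_taxes = 10.0
subsidies = 2.0
intermediate_consumption = 60.0
household_consumption = 70.0
government_consumption = 15.0
capital_formation = 20.0
exports, imports = 12.0, 9.0

# Income approach
gdp_income = value_added + business_taxes - subsidies

# Expenditure approach
gdp_expenditure = (household_consumption + government_consumption
                   + capital_formation + exports - imports)

# Output approach: value added = gross output - intermediate consumption
gross_output = value_added + intermediate_consumption
gdp_output = (gross_output - intermediate_consumption
              + business_taxes - subsidies)

assert gdp_income == gdp_expenditure == gdp_output == 108.0
\end{verbatim}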
\begin{table*}[t!]
\center
\caption{Country data statistics.}
\rowcolors{2}{}{lightblue}
\small
\begin{tabular}{lcccc}
\hline\hline
\rowcolor{lightgray}
\textbf{Country}
&\textbf{Year}
&\textbf{Num. industries in data}
&\textbf{Fraction self-flows}
&\textbf{Completeness}
\rule[-2mm]{0pt}{4ex}\\
\hline
Australia &1994-95 &38 &0.215 &0.999\\
Brazil &1996 &30 &0.240 &0.998\\
Canada &1997 &34 &0.232 &0.969\\
China &1997 &38 &0.238 &0.943\\
Czech Republic &1995 &40 &0.292 &0.965\\
Denmark &1997 &39 &0.179 &0.957\\
Finland &1995 &35 &0.274 &0.977\\
France &1995 &39 &0.285 &0.776\\
Germany &1995 &36 &0.228 &0.995\\
Greece &1994 &36 &0.168 &0.929\\
Hungary &1998 &36 &0.237 &1.000\\
Italy &1992 &37 &0.247 &0.854\\
Japan &1995 &40 &0.219 &0.818\\
Korea &1995 &39 &0.253 &0.888\\
Netherlands &1995 &38 &0.260 &0.907\\
Norway &1997 &40 &0.204 &0.999\\
Poland &1995 &35 &0.270 &0.998\\
Spain &1995 &39 &0.225 &0.961\\
United Kingdom &1998 &40 &0.286 &0.949\\
United States &1997 &39 &0.238 &0.994\\
\hline\hline
\end{tabular}
\label{tab:country_data_statistics}
\end{table*}
\section{Description of data}\label{sec:description_of_data}
Our data comes from 1997 I/O tables produced by the Organization for Economic Cooperation and Development (OECD) \cite{OECDweb}. The tables describe intermediate consumption flows in 20 countries (not all OECD members) between 41 industries. The I/O data were initially collected by national statistical agencies, who followed country-specific practices for partitioning the business sector into industries and measuring flows. The OECD converted country data sets into a uniform 41-industry system to make international comparisons possible.
The countries are listed in Table \ref{tab:country_data_statistics} and the industries in Table \ref{tab:industry_statistics}. One industry, ``Private households with employed persons'', was excluded from analysis because it was poorly represented in the data, with only 3 out of 20 countries (Australia, Japan, and Korea) providing any data for it. This industry represents the production activity of cooks, butlers, chauffeurs, gardeners, nannies, etc. and does not make a significant contribution to flows in any of the 3 countries where data is available.
Because the I/O tables of individual countries differed in both the number of industries and the boundaries between them, the translation step between the national system and SNA involved undesired splits and mergers that affect the size of flows and industries. When an industry defined by the source country overlapped two or more of the industries defined by the OECD, the OECD was forced to choose which OECD industry to assign the source industry to. As a result, some industries in the OECD data represent more than their intended scope of production activities, while others represent less. In many instances, such mergers caused entire industries to be completely subsumed under other industries. Table \ref{tab:country_data_statistics} lists the number of industries represented after all mergers are taken into account.
\section{Network characteristics}\label{sec:network_characteristics}
\subsection{Notation}
Let $\mat{A}$ be the adjacency matrix for the money flows between industries. An element $A_{ij}$ denotes the flow from industry $j$ to industry $i$:
\begin{align}
A_{ij} = \text{flow from $j$ to $i$.}
\end{align}
Self links, representing payments of an industry to itself, are permitted.
In addition to flows between nodes, an industrial network has in-flows entering the network from outside, and out-flows exiting the network (Fig. \ref{fig:toy_industrial_network}.) As explained in Section \ref{sec:national_accounting}, the in-flows correspond to final consumption, capital purchases, and export revenues. The out-flows correspond to value added and import expenditures. Let the sum over in-flows to each industry be denoted by the \emph{in-vector} $\vec{U}$, and let the sum over out-flows to each industry be denoted by the \emph{out-vector} $\vec{V}$:
\begin{align}
U_i &= \text{flow from outside to $i$}\\
V_i &= \text{flow from $i$ to outside}
\end{align}
Flow is conserved at all nodes because of the definition of value added, as described in Section \ref{sec:national_accounting}. At each node $i$, flow in equals flow out. Borrowing from ecology, we will refer to the flow into/out of node $i$ as its \emph{throughflow}, $T_i$ \cite{Fath2001}:
\begin{align}\label{throughflow}
T_i \equiv U_i + \sum_j A_{ij} &= \sum_j A_{ji} + V_i.
\end{align}
Summing over all nodes, the throughflow of the whole business sector is
\begin{align}
\Omega \equiv \sum_i T_i &= \sum_i U_i + \sum_{ij} A_{ij} = \sum_{ij} A_{ij} + \sum_i V_i.
\end{align}
This equation is the same as Eq. \eqref{business_throughflow}, with $\sum_{ij} A_{ij}$ corresponding to intermediate consumption and the other terms corresponding to $\sum_i V_i$ or $\sum_i U_i$. In economic terms, the total throughflow $\Omega$ represents gross output (the total of all sales by the business sector) plus imports and business taxes.
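As a concrete illustration of this bookkeeping, the following Python sketch builds the throughflows, recovers $\vec{V}$ from conservation, and forms the normalized quantities used below; the $3\times3$ flow matrix is hypothetical:
\begin{verbatim}
import numpy as np

# A[i, j] = flow from industry j to industry i; U = boundary in-flows.
A = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.5, 0.0],
              [1.0, 1.0, 0.0]])
U = np.array([4.0, 1.5, 3.0])

T = U + A.sum(axis=1)    # throughflow T_i = U_i + sum_j A_ij
V = T - A.sum(axis=0)    # out-flows implied by conservation at each node
Omega = T.sum()          # total business-sector throughflow

assert np.isclose(Omega, U.sum() + A.sum())  # Omega = sum U + sum A
assert np.isclose(Omega, A.sum() + V.sum())  # Omega = sum A + sum V

a = A / Omega            # normalized flow weights a_ij
t = T / Omega            # normalized throughflows t_i
\end{verbatim}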
\begin{figure}[t!]
\center
\includegraphics[width=.3\textwidth]{simplified_network_notation.pdf}
\caption{Simplified networks structure and notation.}
\label{fig:toy_industrial_network}
\end{figure}
\subsection{Topology}
At the level of aggregation used in our data, industrial networks are nearly complete graphs, typically with more than 90\% of all possible flows having non-zero weight. (Table \ref{tab:country_data_statistics})
The high degree of completeness is only a feature of highly aggregated I/O tables. Carvalho, studying use tables\footnote{Use tables are a related data set that shows the expenditure of each industry on individual commodities. Use tables are similar to I/O tables and are used in their construction.} with approximately 500 industries, notes that the network is only 18\% complete at that level of aggregation \cite{Carvalho2007}.
\subsection{Flow weight distribution}
The magnitudes of money flows in different countries differ because they are expressed in different currencies and their economies vary in size. To make flow weights comparable across countries, we normalize them by the total throughflow of the country:
\begin{align}
a_{ij}^{c} \equiv \frac{A_{ij}^c}{\Omega^{c}},
\end{align}
where $\Omega^{c}$ is the throughflow of country $c$ and $a_{ij}^{c}$ is the normalized flow weight of country $c$.
The distributions of the normalized flow weights for all 20 countries are shown in Fig. \ref{fig:weight_distribution}. These distributions cover a wide range, with largest and smallest weights separated by 5 to 8 orders of magnitude, depending on the country. The flow weight distribution is heavy-tailed and shows significant curvature on log-log axes. It behaves very similarly for different countries throughout much of its range. At lower weights, the various country distributions diverge from each other to some extent.
The weight distributions are similar to both the Weibull,
\begin{align}
f(a) = \frac{k}{\lambda} \left( \frac{a}{\lambda} \right)^{k-1} \exp \left[ -\left(\frac{a}{\lambda}\right)^k \right]
\end{align}
and lognormal distributions,
\begin{align}
f(a) = \frac{1}{\sqrt{2\pi s^2}} \frac{1}{a} \exp\left[- \frac{(\ln a - m)^2}{2s^2} \right].
\end{align}
These two distributions are frequently difficult to distinguish in empirical data \cite{Kundu2004}. A standard method for choosing the better fit between them is to compare the log-likelihoods from maximum likelihood fits of each distribution, accepting the distribution with the higher log-likelihood \cite{Kundu2004,Kundu2006,Kim2008}. Results are shown in Table \ref{tab:Weibull_v_lognormal}. Out of 20 countries, 11 are better described by a Weibull distribution and 9 by a lognormal. We also perform a pooled fit under the assumption that the data from all countries follow approximately the same distribution. The pooled fit favors the Weibull and is shown as the dashed line in Fig. \ref{fig:weight_distribution}. In addition, two other factors favor the Weibull.
First, most countries do not show clear evidence of non-monotonic behavior, which would occur under a lognormal. Finland and Hungary are exceptions, showing a small amount of non-monotonicity. Second, the Weibull tends to overestimate the occurrence of the smallest flows, while the lognormal tends to underestimate it. It is more likely that the smallest flows are underrepresented in the data due to incomplete sampling than overrepresented.
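A sketch of the model-selection step is given below; \texttt{weights} is a synthetic stand-in for the nonzero normalized flows of one country, and the location parameters are fixed at zero since flows are positive:
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weights = rng.weibull(0.3, size=1400) * 1e-4  # synthetic stand-in data

k, _, lam = stats.weibull_min.fit(weights, floc=0)
s, _, scale = stats.lognorm.fit(weights, floc=0)

ll_weibull = stats.weibull_min.logpdf(weights, k, 0, lam).sum()
ll_lognorm = stats.lognorm.logpdf(weights, s, 0, scale).sum()

# Accept the distribution with the higher log-likelihood.
print("Weibull" if ll_weibull > ll_lognorm else "lognormal")
\end{verbatim}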
Because the network is simultaneously directed and nearly complete (at this level of aggregation), almost every flow $a_{ij}$ in the network has a reciprocating flow $a_{ji}$ of non-zero weight. The inset of Fig. \ref{fig:weight_distribution} plots weights against reciprocating weights for one country (Spain), with similar results for other countries. The correlation between off-diagonal elements is low (with typical correlation coefficients in the range $\rho = 0.1$ to $0.4$). In many cases, a flow is several orders of magnitude larger or smaller than the reciprocating flow, indicating a high degree of asymmetry in the network. This is not surprising, since for most pairs of transacting industries, one industry is primarily the supplier and the other primarily the user.
\begin{figure}[t]
\includegraphics[width=.5\textwidth]{weight_distribution.pdf}
\caption[Inter-industry flow weight distribution]{(Color online) Weight distributions of 20 countries studied. The dashed line is the best fit Weibull distribution to the pooled data from all 20 countries. Inset: $a_{ij}$ v. $a_{ji}$ for Spain.}
\label{fig:weight_distribution}
\end{figure}
The external flows, $U_i$ and $V_i$, between an industry $i$ and other sectors of the economy are generally much larger than flows between $i$ and other industries, and are comparable in size to the whole throughflow $T_i$. In Fig. \ref{fig:uv_distributions}, we plot the densities of $U_i/T_i$ and $V_i/T_i$. The first quantity is the fraction of money in-flows received from final consumption sales, sales of capital goods, and exports. (That is, all non-intermediate categories of receipts.) The second quantity is the fraction of money out-flows paid to value-added and imports (all non-intermediate categories of expenditures.) The density of $U_i/T_i$ is spread out across the whole interval $[0,1]$. This mainly reflects the large variation among industries in how directly they service final consumption, which is the most important component of $U_i$. In contrast, the density of $V_i/T_i$ is peaked, roughly around 0.6. This means that industries are more similar with respect to how much they spend on payments to the household sector than in how much they receive from it. This suggests that while industries differ significantly in where they lie on production chains, they have somewhat similar labor needs in monetary terms.
\begin{figure}[t!]
\center
\includegraphics[width=.48\textwidth]{uv_distribution.pdf}
\caption[Density of $U_i/T_i$ and $V_i/T_i$]{(Color online) Density of $U_i/T_i$ and $V_i/T_i$. Country lines (solid) were estimated using kernel density smoothing. The dashed line represents the pooled data.}
\label{fig:uv_distributions}
\end{figure}
\subsection{Node throughflow distribution} \label{sec:throughflow_distribution}
Node strength generalizes the concept of node degree to weighted networks. Since the network is directed, each node $i$ has both an in-strength and an out-strength, defined as the sum of either the in-flows or out-flows incident on $i$. These sums are equal in this network due to flow conservation, so there is only one quantity to keep track of, which we refer to as the \emph{throughflow} $T_i$ of node $i$ (Eq. \eqref{throughflow}). As was done for link weights, we normalize node throughflows to render them comparable between countries:
\begin{align}\label{eq:normalized_thruflow}
t_i^c &\equiv \frac{T_i^c}{\Omega^c}.
\end{align}
The quantity $t_i$ measures the size of industry $i$ as the fraction of money flowing through industry $i$.
\begin{figure}[t]
\includegraphics[width=.5\textwidth]{throughflow_distribution.pdf}
\caption[Industry throughflow distribution]{(Color online) The throughflow distributions of all 20 countries studied.}
\label{fig:throughflow_distribution}
\end{figure}
The throughflow distributions of all 20 countries are shown in Fig. \ref{fig:throughflow_distribution}. The distribution is similar from country to country and is approximately exponential.
Table \ref{tab:industry_statistics} shows the sizes of the 40 industries recognized in the OECD data. Under the OECD's partitioning of industries, the five largest industries are
\begin{itemize}
\setlength{\itemsep}{\SMALLitemsep}%
\setlength{\parskip}{\SMALLparskip}
\item wholesale and retail trade
\item construction
\item real estate activities
\item food, beverages, and tobacco
\item public administration and defense.
\end{itemize}
The industries most likely to export are
\begin{itemize}
\setlength{\itemsep}{\SMALLitemsep}%
\setlength{\parskip}{\SMALLparskip}
\item office, accounting, and computing machinery
\item aircraft and spacecraft
\item radio, television, and communication equipment
\item building and repairing of ships and boats
\item motor vehicles, trailers, and semi-trailers.
\end{itemize}
Unsurprisingly, the least likely to export are
\begin{itemize}
\setlength{\itemsep}{\SMALLitemsep}%
\setlength{\parskip}{\SMALLparskip}
\item real estate
\item health and social work
\item public administration and defense
\item education
\item construction,
\end{itemize}
all industries whose products are not easily traded across national borders. The industries receiving the most revenue from final demand are quite similar:
\begin{itemize}
\setlength{\itemsep}{\SMALLitemsep}%
\setlength{\parskip}{\SMALLparskip}
\item public administration and defense
\item education
\item health and social work
\item construction
\item real estate.
\end{itemize}
The industries least likely to receive revenue from final demand are
\begin{itemize}
\setlength{\itemsep}{\SMALLitemsep}%
\setlength{\parskip}{\SMALLparskip}
\item iron \& steel
\item non-ferrous metals
\item mining and quarrying
\item other non-metallic mineral products
\item rubber and plastic products.\footnote{See Chenery \& Watanabe \cite{Chenery1958} for a classification of industries based on the fraction of revenues from intermediate sales and the fraction of expenditures on intermediate goods. They use the first fraction to measure how ``final'' versus ``intermediate'' an industry is. They use the second to determine whether an industry is ``primary'' or ``manufacturing''. Using these two dimensions, they classify industries into four rough categories.}
\end{itemize}
\subsection{Community structure}\label{sec:community_structure}
In addition to knowing the statistics of flows and industry sizes, we would like to know whether industries cluster in any particular way. Such clusters are usually referred to as ``communities''. Many methods exist for finding communities in networks \cite{Porter2009,Fortunato2010}; here, we apply the method of modularity optimization \cite{Newman2004,Newman2010}. Modularity maximization involves searching over the possible partitions of the network into communities for those that yield high values of the \emph{modularity} $Q$. Since our network is directed, we use the directed generalization of modularity \cite{Leicht2008},
\begin{align}\label{modularity_function}
Q(c_1,\ldots,c_n) = \frac{1}{m} \sum_{ij} \left( a_{ij} - \frac{\hat{s}_i \check{s}_j}{m} \right) \delta(c_i, c_j),
\end{align}
Here, $c_i$ is the community that node $i$ belongs to, $m = \sum_{ij} a_{ij}$ is the total weight of all edges, and $\hat{s}_i = \sum_j a_{ji}$ and $\check{s}_i = \sum_j a_{ij}$. The Kronecker delta $\delta(k,l) = 1$ if $k=l$ and 0 otherwise. The modularity gives the total weight of edges within communities minus the expected weight under a null model. The modularity function scores a given partition of the nodes into groups; the task then is to search over the many possible partitions of the network and find the one with the highest score. In practice, the number of partitions is usually extremely large, so that only a small fraction can be examined directly. This has led to many proposals for algorithms that attempt to search the space of partitions efficiently for high values of $Q$ rather than find the global maximum \cite{Fortunato2010,Good2010}.
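For concreteness, Eq. \eqref{modularity_function} transcribes directly into a few lines of Python (with \texttt{a[i, j]} the flow from node $j$ to node $i$, and \texttt{labels[i]} the community assignment $c_i$):
\begin{verbatim}
import numpy as np

def directed_modularity(a, labels):
    labels = np.asarray(labels)
    m = a.sum()
    s_hat = a.sum(axis=0)      # \hat{s}_i   = sum_j a_ji
    s_check = a.sum(axis=1)    # \check{s}_i = sum_j a_ij
    null = np.outer(s_hat, s_check) / m
    same = labels[:, None] == labels[None, :]  # delta(c_i, c_j)
    return ((a - null) * same).sum() / m
\end{verbatim}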
Recent work has shown that the modularity function $Q$ admits a large number of high-scoring partitions that are not necessarily similar \cite{Good2010}. As a result, different searches may arrive at different high-scoring partitions. Deterministic algorithms in particular are problematic because they fail to show the many alternative partitions. To address this problem, we use a stochastic search algorithm based on simulated annealing that returns a different high-scoring partition in each run. We repeat the algorithm many times, collect an alternative partition from each run, and compare them to test their robustness from run to run.
Specifically, we use the following simple procedure. For each country, we run the simulated annealing algorithm 100 times and extract 100 high-modularity partitions. From these partitions we produce a \emph{coclassification matrix} (CCM) \cite{Sales-Pardo2007} with elements $p_{ij} \in [0,1]$ equal to the frequency with which node $i$ is grouped with node $j$. If certain nodes or groups of nodes are frequently grouped together, they will appear as blocks of high frequencies in the coclassification matrix; if the groups are highly variable, then no particular part of the matrix will accumulate a high value.
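A sketch of the coclassification step, assuming each search run returns a length-$n$ vector of community labels:
\begin{verbatim}
import numpy as np

def coclassification(partitions):
    """partitions: array of shape (runs, n) of community labels."""
    parts = np.asarray(partitions)
    same = parts[:, :, None] == parts[:, None, :]
    return same.mean(axis=0)   # p_ij in [0, 1]

runs = [np.random.randint(0, 5, size=40) for _ in range(100)]
p = coclassification(runs)     # 40 x 40 matrix
\end{verbatim}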
For the purpose of community finding, we set self-flows $a_{ii}$ of industries to zero, since these flows may reduce the resolution of the method. This happens because including self-flows increases $m$ in Eq. \eqref{modularity_function}, decreasing the null model ``penalty term'' $\hat{s}_i\check{s}_j/m$. This makes mergers between communities that we would like to distinguish more favorable, since it is then easier for a link between two industries to exceed the null model penalty term. A potential drawback of excluding self-flows is that if there are industries that should be classified as singleton communities, our method will not find them, because the associated term $a_{ii} - \hat{s}_i\check{s}_i/m$ in Eq. \eqref{modularity_function} can only contribute negatively to $Q$. However, in return we gain the benefit of more effectively resolving communities between two or more industries. This tradeoff is acceptable, since the communities we are interested in are \emph{inter}-industry ones. In fact, we find similar results whether self-flows are excluded or not, though we only show the results based on excluding self-flows.
Figures \ref{fig:coclassification_matrix}a-c show the coclassification matrices for Australia, China, and the United States. These figures show the level of variation possible within countries from one simulated annealing run to another. Although both the communities and their stability varied somewhat from country to country, different countries nevertheless tended toward similar groupings corresponding to food industries (rows/columns 1-3), chemical industries (4-6), manufacturing industries (7-22), service industries (23-38), and energy industries (39-41). Unsurprisingly, industries had a higher tendency to transact with other industries of similar type.
To study this common tendency more closely, we constructed the average CCM of all 20 countries. The result is another CCM (Fig. \ref{fig:coclassification_matrix}d), whose $i$-$j$th element now indicates the frequency with which industries $i$ and $j$ were grouped together out of 2000 search runs (100 per country). Overall, the five-way grouping above performs well as a coarse-grained description of the community structure.
Going beyond this quick description, we can also study the matrix in Fig. \ref{fig:coclassification_matrix}d for clues of hierarchical community structure \cite{Good2010,Sales-Pardo2007}. Such structure arises in the CCM because industries with ambiguous community membership may switch back and forth across a community boundary between different runs of the search algorithm.
\begin{figure*}[t!]
\begin{tabular}{cc}
\begin{minipage}{.25\textwidth}
\textsf{\textbf{a}} \textsf{Australia}\\
\includegraphics[width=1\textwidth]{CCM_Australia.pdf}\\
\textsf{\textbf{b}} \textsf{China}\\
\includegraphics[width=1\textwidth]{CCM_China.pdf}\\
\textsf{\textbf{c}} \textsf{United States}\\
\includegraphics[width=1\textwidth]{CCM_USA.pdf}
\end{minipage}
&
\begin{minipage}{.75\textwidth}
\textsf{\textbf{d}} \hspace{105pt} \textsf{All 20 countries}\\
\includegraphics[width=1\textwidth]{CCM_average.pdf}
\end{minipage}
\end{tabular}
\caption[Coclassification matrices for industry communities]{(Color online) Coclassification matrices (CCMs) giving the probability of two industries being grouped in the same community. Rows and columns correspond to the 40 economic industries in Table \ref{tab:industry_statistics}. \textbf{a}, \textbf{b}, and \textbf{c} CCMs for Australia, China, and United States. \textbf{d} Average CCM of all 20 countries in Table \ref{tab:country_data_statistics}, and dendrogram showing results of hierarchical clustering. The vertical axis of the dendrogram measures clustering probabilities $p_{AB} = 1 - d_{AB}$.}
\label{fig:coclassification_matrix}
\end{figure*}
For example, the ``transport and storage'' industry may be grouped with service industries in one run, and with energy industries in another. The two runs may be different runs for the same country or for two different countries, as in the case of Fig. \ref{fig:coclassification_matrix}d. An industry that switches back and forth between one group and another will appear ``smeared'' across both groups. This indeed occurs for ``transport and storage'' ($i=34$). Other industries that show this straddling behavior are ``hotels and restaurants'' ($i=3$, straddles service-food border), ``manufacturing NEC, recycling'' ($i=7$, chemical-manufacturing), ``office, accounting, and computing machinery'' ($i=21$, manufacturing-service), ``aircraft and spacecraft'' ($i=22$, manufacturing-service), and ``research and development'' ($i=37$, manufacturing-service).
We also observe weak cogrouping at a larger scale, beyond that of single straddler industries. To study these grouping patterns, we use hierarchical clustering methods. We define the distance between industries to be
\begin{align}\label{distance_function}
d_{ij} = 1 - p_{ij}
\end{align}
where $p_{ij} \in [0,1]$ is the probability with which $i$ cogroups with $j$. To create a hierarchical tree, we use agglomerative clustering with the average linkage criterion. We find similar results using other distances and linkage criteria. We construct a tree by joining industries one-by-one, starting with the closest pair of industries and ending with the most separated. Distances between clusters of industries are defined as
\begin{align}
d_{AB}&= \frac{1}{|A| |B|}\sum_{i\in A, j\in B} d_{ij}\\
&= 1 - \frac{1}{|A| |B|}\sum_{i\in A, j\in B} p_{ij}\\
&= 1 - p_{AB},
\end{align}
where $p_{AB} \equiv \frac{1}{|A| |B|}\sum_{i\in A, j\in B} p_{ij}$ is the probability that a randomly picked pair from clusters $A$ and $B$ are cogrouped. This choice of cluster distance is known as the ``average linkage criterion'', and in the present context enables a simple interpretation of industry and cluster distances in terms of probabilities. In Appendix \ref{sec:overlap_distance} we discuss properties of the distance function Eq. \eqref{distance_function}.
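Given the averaged coclassification matrix \texttt{p} (assumed symmetric with unit diagonal), the clustering step is a few lines with standard tools:
\begin{verbatim}
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

d = 1.0 - p                   # distance d_ij = 1 - p_ij
np.fill_diagonal(d, 0.0)      # zero self-distance for squareform
tree = linkage(squareform(d, checks=False), method="average")
dendrogram(tree)              # merge heights are d_AB = 1 - p_AB
\end{verbatim}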
The results of hierarchical clustering are shown in the dendrogram at the bottom of Fig. \ref{fig:coclassification_matrix}d. The dendrogram supports the five-way division into food, chemical, manufacturing, service, and energy industries. Further interpretation has to proceed cautiously, but we observe the following:
\begin{itemize}
\item The chemical and manufacturing industries appear to form a hierarchy in which the two communities are members of a larger ``chemo-manufacturing'' community.
\item Two large sub-communities appear within manufacturing. The industries in the upper left of the manufacturing block of Fig. \ref{fig:coclassification_matrix}d (7-10) are ``manufacturing NEC, recycling'', ``wood and products of wood and cork'', ``construction'', and ``other non-metallic mineral products'', and those in the bottom right (11-22) are various metal and machinery industries. The manufacturing group thus appears to divide into those industries that are structure-producing and those that are machinery-producing.
\item The machinery-producing industries further appear to contain two subsets. The first, industries 11-15, contains basic metal and machinery products. The second, industries 19-21, contains ``radio, television, and communication equipment'', ``medical, precision, and optical instruments'', and ``office, accounting, and computing machinery''. These industries appear to follow a ``precision equipment'' pattern. The four remaining machinery-producing industries that are not in either of these subsets ($i$=16-18,22) do not form their own cluster, but are all transportation equipment industries (ships and boats, motor vehicles, rail vehicles, aircraft and spacecraft).
\item The service community contains two well-connected subsets. One subset, ``health and social work'' and ``pharmaceuticals'' ($i$=35 \& 36), is health-oriented. The other subset is less clear cut; its seven members are ``finance, insurance'', ``post and telecommunications'', ``other business activities'', ``computer and related activities'', ``other community, social, and personal services'', ``education'', and ``pulp, paper, paper products, printing, and publishing''. Roughly, these sectors follow an ``information'' theme.
\end{itemize}
Although these groupings represent increased tendencies for intra-group transactions, the hierarchical structure given by the dendrogram in Fig. \ref{fig:coclassification_matrix} oversimplifies the community structure of the network somewhat. Hierarchical clustering forces hierarchical structure even where none exists \cite{Tibshirani2009}, and the actual clustering behavior may be more nuanced. The CCM displays substantial overlap between communities that is not apparent from the dendrogram in Fig. \ref{fig:coclassification_matrix}d. For example, the food and chemical industries show some tendency to cogroup; in certain countries (e.g. Australia) this cogrouping is strong. This behavior suggests an alternative hierarchy in which the two communities are members of a larger ``agrochemical'' community, or equivalently, overlap with the chemo-manufacturing community. As a second example, the service community as a whole shows overlap with part of the manufacturing community. The particular manufacturing industries overlapped tend to be ones further along the supply chain -- construction, radio, computer, medical, aircraft -- rather than basic materials industries -- metals, fabricated metal products, other non-metal materials. These particular manufacturing industries and the service industries may constitute some larger definition of the service community that includes its immediate suppliers.
It is also important to note that the communities at this level of aggregation are not mostly isolated clusters, but are more like perturbations on top of an otherwise strongly connected network. It is possible this behavior would change at lower levels of aggregation, with more narrow industry definitions serving to isolate industries from irrelevant parts of the economy.
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{network_picture.pdf}
\caption[Industry money flow network for the United States]{(Color online) The industry money flow network of the United States in 1997. Nodes are colored according to the communities identified in Fig. \ref{fig:coclassification_matrix}d. The size of a node corresponds to its throughflow (Eq. \eqref{throughflow}). External flows $\vec{U}$ and $\vec{V}$ are omitted for clarity. To reduce picture file size, only flows larger than $\frac{1}{1000}$th of the largest flow are displayed. Remaining flows represent about 57\% of the $40^2=1600$ possible links. The true size of many of these flows can best be seen online by zooming in. No intermediate consumption data was available for the ``Public administration and defense'' industry for the U.S., so it appears as an isolated node.}
\label{fig:US_IO_network}
\end{figure*}
\section{Discussion} \label{sec:discussion}
Comparisons of national economies typically focus on their differences; it is less often appreciated that economies may have substantial amounts of shared structure. Chenery and Watanabe write, ``The structure of production, as defined by the input-output model, is the result of the interaction of a variety of forces, some leading to uniformity among countries and others to diversity. To the extent that production in various countries is intended to satisfy biologically determined human needs, is based on the same body of technological knowledge, and is constrained by the physical world, we should expect similarity in structure. To the extent that there are, among countries, variations in the relative scarcity of capital, labor and raw materials, differences in levels of income and composition of final demand, and variation in the scale of production, we may expect diversity.'' \cite{Chenery1958} While differences are apparent from statistics like GDP per capita or the export trade network, similarities are not yet well characterized. Such similarities can serve as constraints for theoretical and computational models of economies.
Both for the construction of such theories and further empirical work, the level of aggregation is important. Unlike other networks where the meaning of a node is clear (as a person, city, router, web page, species, etc.), the meaning of nodes as industries is necessarily ambiguous and subject to arbitrary decisions on the part of the statistical agencies collecting economic data. These ambiguities are not drawbacks of the data per se, but rather reflect fundamental ambiguities in the distinctions between products, though they sometimes also reflect the limited resources of the statistical agencies. Because of this ambiguity, it is important for future theoretical and empirical work to account for the way results should change at different levels of aggregation.
A useful way to gauge the aggregation level of an industry network is to look at the amount of ``self-flow'' in the network. Self-flow represents transactions between firms that are classified within the same industry. Although these firms may produce different products, they are not different enough for them to have fallen into different industry bins. In this case, the industry partitioning scheme is too coarse-grained to differentiate them. The fraction of all intermediate flows that are self-flows, $\frac{\sum_i a_{ii}}{\sum_{jk} a_{jk}}$, can serve as a measure of the aggregation level of an industry network data set. For our data, this number varies between 0.15 and 0.30; that is, some 15 to 30\% of inter-industry money flows are really transactions of an industry with itself, reflecting the high level of aggregation of our data. Individual industries with large self-flows represent good candidates for subdivision in future I/O tables. (Table \ref{tab:industry_statistics}.)
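In code, this aggregation gauge is a one-liner (for a raw or normalized country flow matrix \texttt{A}):
\begin{verbatim}
import numpy as np

def self_flow_fraction(A):
    # fraction of intermediate flows that are self-flows
    return np.trace(A) / A.sum()
\end{verbatim}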
\section{Conclusions}
Network methods are useful for studying the relationships between industries. Here, we have applied them to flows of money between industries. These networks are weighted, directed, dense, and contain self-links. We have characterized the flow weight and industry size distributions, identifying functional forms to serve as targets for theoretical models. We have examined the community structure of industries, finding groups corresponding to food, chemical, manufacturing, service, and energy industries, as well as nested sub-groups corresponding to finer categories of industries. Applying network methods to industrial money flows involves challenges not encountered in other network data sets, so to aid other researchers we have provided a brief introduction to the concepts and definitions of national accounting, as well as the measurement basis and interpretation of money flows.
\section{Acknowledgements}
JM gratefully acknowledges financial support from NSF Grant SBE0738187. We thank the International Institute for Applied Systems Analysis (IIASA) and the Young Scientist Summer Program (YSSP) where this research began, with financial support from The National Academies. We thank the Santa Fe Institute for support during the continuation of this research. We thank Aaron Clauset, Ben Good, and Doyne Farmer for several helpful conversations and suggestions.
\section{Introduction}
In the on-going development of quantum optical technologies, devices will need to be easier to use, more compact, robust, and scalable, making them available to a broader community. These technologies include applications in quantum communication~\citep{Gisin2007, Simon2007,Ursin2007,Liao2017}, optical quantum metrology~\citep{Giovannetti2004,Banaszek2009,Matthews2016,muller2017}, and optical quantum computation and simulation~\citep{Knill2001, Kok2007, OBrien2007, Gazzano2016}. For example, a true single photon source on chip as a turnkey device would open quantum technologies to an unprecedented user group.
Quantum dot (QD) excitonic states are excellent quantum emitters, showing bright emission of
single photons~\citep{Michler2000, Solomon2001, Gazzano2013, Somaschi2016, Senellart2017} and excellent suppression of multi-photon states~\citep{Jayakumar2013,Somaschi2016,Schweickert2018}. These properties are achieved due to the level structure and radiative efficiency of the optically allowed lowest level exciton states.
While single QD exciton emission is inherently bright with low multi-photon contribution, the emitted light can be further enhanced and directed into a Gaussian mode by coupling the QD to an optical cavity~\citep{Haroche1989, Gerard1998, Solomon2001}.
In the weak coupling regime between emitter and cavity, this is known as the Purcell effect~\citep{Purcell1946}.
For a cavity with quality factor $Q$ and mode volume $V$, the Purcell effect is characterized by $F_p=\frac{3}{4\pi^2}(\frac{\lambda}{n})^3\frac{Q}{V}$ for a dipole emitter in resonance with the cavity, placed at the maximum of the electric field, and with properly aligned polarization. $\lambda$ is the wavelength of the fundamental mode resonance and $n$ is the material's index of refraction. With the emitter and cavity in resonance, this shortens the radiative lifetime.
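As an order-of-magnitude sketch of this expression, the values below are illustrative only; in particular the mode volume of $100\,(\lambda/n)^3$ is a hypothetical choice that reproduces the scale of the Purcell factors reported later:
\begin{verbatim}
import numpy as np

def purcell_factor(Q, V, wavelength, n):
    """F_p = (3/4 pi^2)(lambda/n)^3 (Q/V), resonant aligned dipole."""
    return 3.0 / (4.0 * np.pi**2) * (wavelength / n)**3 * Q / V

lam = 930e-9                     # m, near the QD emission wavelength
n_gaas = 3.5                     # approximate GaAs refractive index
V = 100 * (lam / n_gaas)**3      # hypothetical mode volume
print(purcell_factor(Q=3000, V=V, wavelength=lam, n=n_gaas))  # ~2.3
\end{verbatim}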
While various optical cavities can be used~\citep{Senellart2017}, a particularly useful cavity is the pillar microcavity~\citep{Pelton2002} because the single-photon emission is in a well defined Gaussian mode.
Since weak cavity coupling reduces the radiative lifetime, decoherence contributions to the emission linewidth are reduced, leading to bandwidths that can approach the spontaneous-emission lifetime-limit, and near unity photon indistinguishability~\citep{Iles2017,Senellart2017}.
Because of the single mode nature of the micropillar cavities, the resonant excitation pump and resonance fluorescence signal are in the same spatial mode. Separating the pump laser from the signal is often achieved through pump-probe cross-polarization, leading to a signal reduction of at least 50\%~\citep{Englund2010,Somaschi2016,Unsleber2015,Ding2016,Muller2007}. This reduction in efficiency precludes quantum applications that require high efficiency (as opposed to brightness)~\citep{Knill2001}.
It has previously been shown that orthogonal pumping of QDs embedded into planar cavities suppresses scattered laser light~\citep{Muller2007, Muller2008, Jayakumar2013, Thomay2017, Gazzano2018}. Nevertheless, this was limited to planar structures with moderate~\citep{Huber2013} or no Purcell enhancement of the emitter lifetime. Micro-pillar cavities, on the other hand, have much better Purcell enhancement~\citep{Gerard1998, Solomon2001} due to their high Q with a relatively small mode volume, but resonant excitation is limited to cross polarized excitation~\citep{Somaschi2016,Unsleber2015,Ding2016}. Current approaches for orthogonal pumping of micro-pillar cavities are free-space and require cross-polarization, and cannot couple to multiple cavities~\citep{Ates2009}.
Alternative approaches to in-plane excitation that also remove the 50~\% photon loss associated with cross-polarization have recently been demonstrated by the Lu group. They include using the polarization splitting induced by elliptical cavities~\citep{gayral1998}, providing 24~\% efficiency when accounting for the detector efficiency~\citep{lu2019a}, and an approach using a coherent two-color pump source~\citep{lu2019b}.
In this paper, we demonstrate a ridge waveguide-coupled optical cavity architecture where the resonant laser pump and the collected resonance fluorescence are spatially orthogonal. This combines orthogonal waveguide pumping with micro-pillar cavities, allowing for the filter-free off-chip coupling of single photons without the 50\% penalty in source brightness and efficiency present in most current device designs.
The device design combines the advantages of waveguides~\citep{Monniello2014, Stepanov2015, Javadi2015} with the advantages of cavity QED~\citep{Senellart2017,Gazzano2016}.
The waveguide enables us to excite several micro-pillar cavities simultaneously, while it significantly reduces laser scattering.
We verify our experimental results through simulation, and discuss the limitations of the current design.
We show that the presented device structure allows for confined cavity modes with a Purcell factor of about 2.5, in-plane guided waveguide modes for excitation, and suppression of unwanted pump laser scattering, leading to a filter-free auto-correlation value of $\ensuremath{g^{(2)}(0)}_\mathrm{fit}=0^{+0.043}_{-0}$, where by filter-free we mean no spectral, temporal, or polarization filtering.
The device fabrication begins with a distributed Bragg reflector (DBR) planar microcavity with QDs at the center of a 4-$\lambda$ cavity. (See Supplemental Material and Fig. 2a.)
Our device design minimizes scattering between the waveguide modes, but also maintains confinement in the out-of-plane micro-pillar cavity mode.
Simulations indicate that the best results are obtained for pillar diameters between 2 and 3~$\mu$m and waveguide widths between 0.55 and 1.25~$\mu$m, where smaller waveguides increase the cavity confinement and decrease the polarization mode splitting, but increase the scattering at the waveguide-cavity interface. An FDTD simulation of the confined cavity mode and the in-plane waveguide mode can be seen in Fig.~\ref{fig:comsol}.
To suppress residual scattering we planarize the sample with a polymer and cover it with gold, opening circular apertures over the micropillars, allowing outcoupling of the QD emission~\citep{Hopfmann2016} (see Supplemental Material).
The device before planarizing and gold coating is shown in Fig.~\ref{fig:sem}, where Fig.~\ref{fig:sem}(a) shows the cleaved edge of the device, which is used for coupling of a free space beam. The width of the waveguide then adiabatically tapers down to its design width. The current chip design combines 8 different waveguide widths and pillar diameters. Fig.~\ref{fig:sem}(b) shows a waveguide connecting 5 micro-pillar cavities; in the full device, each waveguide connects 25 micro-pillar cavities of the same size, with the size differing between waveguides. The cavity diameters increase from 2.1\un{\mu m} to 2.8\un{\mu m} and the waveguide width changes from 0.55\un{\mu m} to 1.25\un{\mu m}, both in 0.1\un{\mu m} steps. Fig.~\ref{fig:sem}(c) shows a single micro-pillar cavity.
\begin{figure}[ht]%
\begin{centering}
\includegraphics[width=1\columnwidth]{comsol.eps}%
\caption{Simulation of the confined modes in the device for a 2.5\un{\mu m} diameter cavity and a 0.95\un{\mu m} waveguide. \textbf{a} Intensity ($E_{x(y)}^2$) of the confined modes in the micro-pillar cavity. $E_{x(y)}^2$ is the electric field in the direction along (perpendicular to) the waveguide. These two directions define the two different polarization modes. The polarization modes have a different strength, but have the same spatial extent and their energies overlap within $0.05\un{nm}$. \textbf{b} Electric field propagating in the waveguide. No scattering is visible when plotting the intensity, thus we show the electric field distribution for better clarity.}
\label{fig:comsol}%
\end{centering}
\end{figure}
\begin{figure}[ht]%
\begin{centering}
\includegraphics[]{sem_picture.eps}%
\caption{Scanning electron microscopy (SEM) images of the sample. \textbf{a} Cleaved edge of the sample which is used to couple the laser into the waveguide. The waveguide at the sample edge is 5.5\un{\mu m} wide and adiabatically tapers down to the design width. \textbf{b} The waveguide connects micro-pillar cavities, which are used for out-of-plane enhancement of the emission of quantum dots. \textbf{c} Zoom on a single micro-pillar cavity. The shown cavity is 2.8\un{\mu m} in diameter and the waveguide is 1.25\un{\mu m} wide.}%
\label{fig:sem}%
\end{centering}
\end{figure}
The $Q$s of the cavities were measured in photoluminescence, using the QDs as a gain medium. The cavity $Q$s are low enough as to not be significantly affected by any QD absorption. Here, the QDs were pumped above-band with a cw Ti:sapphire laser at 780\un{nm} using a high excitation power density of $P_{\mathrm{pump}}\approx3\times10^3\un{W\,cm^{-2}}$. The mean of the measured $Q$ factors is plotted in Fig.~\ref{fig:q}(a). The large error bars come from the distribution of measured
$Q$ factors. We assume that this is due to the moderate QD density, where the QD spontaneous emission does not uniformly fill the cavity mode.
To fit the size dependence of the $Q$ factors we used $\frac{1}{Q}=\frac{1}{Q_{planar}}+\frac{1}{Q_{scatt}}$~\citep{Rivera1999}, where $Q_{planar}=8350(50)$ is the calculated fundamental mode $Q$ factor of the planar microcavity prior to etching of the micropillars, and $\frac{1}{Q_{scatt}}=\frac{\kappa J^2_0(k_tR)}{R}$ is an explicit function for the scattering loss of a micro-pillar of radius $R$, with the Bessel function of the first kind $J_0(k_tR)$, where $k_t=\sqrt{n^2k^2-\beta^2}$, with the core refractive index $n$, the mode propagation constant $\beta$, and the sidewall loss parameter $\kappa$.
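A sketch of this one-parameter fit is given below; $Q_{planar}$ is taken from the text, while the transverse wavevector and the $(R, Q)$ arrays are placeholders for the measured values, not actual data:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import j0

Q_planar = 8350.0
k_t = 1.0e6                  # 1/m, placeholder for the actual mode value

def Q_model(R, kappa):
    inv_q = 1.0 / Q_planar + kappa * j0(k_t * R)**2 / R
    return 1.0 / inv_q

R = np.array([2.1, 2.2, 2.4, 2.6, 2.8]) * 1e-6 / 2     # radii (m)
Q = np.array([3200.0, 3500.0, 4000.0, 4400.0, 4800.0]) # illustrative
(kappa,), cov = curve_fit(Q_model, R, Q, p0=[1e-10])
\end{verbatim}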
\begin{figure}[ht]%
\begin{centering}\emph{}
\includegraphics[width=1\columnwidth]{fig3_v3.pdf}
\caption{(a) Cavity quality ($Q$) factors and (b) Purcell factors measured before planarization of the sample. Blue dots: mean of measured $Q$ factors; blue triangles: best single measured values for a given cavity diameter. Purple lines: fit of $Q$ factors with sidewall scattering as a free parameter in (a), and the expected Purcell factor from this fit and a calculated mode volume in (b). (c) Variation of the normal cavity mode wavelength and (d) of the normal-mode cavity splitting with cavity diameter, where blue dots are again mean values. The orange lines are numerical simulations. Error bars represent one standard deviation.}%
\label{fig:q}%
\end{centering}
\end{figure}
The only free parameter in the fit is $\kappa$, and the fit gives $\kappa=3.8(2)\times 10^{-10}\un{m}$, comparable to results by others~\citep{Reitzenstein2007,Schneider2016}. The expected Purcell factors are in the range of $2-3$ for the measured $Q$ factors, using the mode volume obtained from the electric field distribution of the FDTD simulations, see Fig.~\ref{fig:q}(b). Since the $Q$ values are determined from Fig.~\ref{fig:q}(a), the discrepancy between the data and simulation is likely due to uncertainty in the FDTD simulations of the electric field distribution originating from the finite mesh size. The shift of the normal-mode cavity wavelength to shorter wavelengths at small cavity diameters is shown in Fig.~\ref{fig:q}(c), reflecting the increased electric field confinement with smaller cavity diameters.
The cavity normal-mode splitting before planarization shown in Fig.~\ref{fig:q}(d) is roughly a factor of three larger than the simulations, indicating either uniform process variations because of the consistency of the offset, or is again, related to the FDTD simulations of the electric field distribution.
Although the QDs have random emission energy and position, we measure a single-photon lifetime enhancement above 2 for about 10 out of 100 devices at 5 K without tuning.
An example lifetime measurement is shown in Fig.~\ref{fig:lifetimes}, where we compare the lifetime of an exciton on resonance with the cavity energy to that of an exciton out of resonance. The Purcell factor is calculated as the ratio of the decay times of an emitter in a cavity and an emitter in bulk (here approximated by the emitter decay time in the waveguide at the same cavity-resonance wavelength). Based on the measured lifetimes, the Purcell factor is $F_P=2.44(6)$.
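The extraction of $F_P$ reduces to a lifetime ratio with propagated uncertainties; the lifetimes below are illustrative values consistent with the quoted result:
\begin{verbatim}
tau_off, dtau_off = 1.00e-9, 0.02e-9  # s, exciton detuned from cavity
tau_on, dtau_on = 0.41e-9, 0.01e-9    # s, exciton on resonance

F_P = tau_off / tau_on
dF_P = F_P * ((dtau_off / tau_off)**2 + (dtau_on / tau_on)**2)**0.5
print(f"F_P = {F_P:.2f} +/- {dF_P:.2f}")  # ~2.44 +/- 0.08
\end{verbatim}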
\begin{figure}[ht]%
\begin{centering}
\includegraphics[]{lifetime}%
\caption{Red: exciton lifetime out of resonance with the cavity mode; blue: exciton lifetime on resonance with the cavity mode. The quantum dot was excited above-band using a 2\un{ps} Ti:sapphire laser at 820\un{nm} with 76\un{MHz} repetition rate. The emission is collected synchronized to the excitation laser to extract the lifetime. The excitation power for the red curve was slightly higher than for the blue curve, to measure with comparable count rates. This led to a different rise time of the two curves, probably due to excitation of biexciton-exciton cascades in the off-resonant case. Nevertheless, this does not affect the measured exciton lifetimes. Uncertainties in the lifetime fit are one standard deviation.}%
\label{fig:lifetimes}%
\end{centering}
\end{figure}
To estimate the suppression of the resonant pump laser without filtering, we measure the second-order correlation statistics by exciting a QD state resonantly through the waveguide mode using a tunable cw semiconductor laser. One expects a flat second-order auto-correlation function with a \ensuremath{g^{(2)}(0)}~close to 1 for a Poissonian source, such as an attenuated laser signal, and a dip in the auto-correlation function with $g^{(2)}(0)=0$ for a perfect single photon source.
The measured auto-correlation is shown in Fig.~\ref{fig:g2}.
With no filtering, in resonance fluorescence at a Rabi frequency of $\Omega \approx 1$~GHz we find $\ensuremath{g^{(2)}(0)}=0.00^{+0.04}_{-0}$, where the error is calculated from the fit uncertainty. This uncertainty of $\ensuremath{g^{(2)}(0)}$ is comparable to or better than previously published values, where cross-polarization and filters were used~\citep{Englund2010, Unsleber2015,Ding2016,Muller2007,Somaschi2016}. The fit function is a convolution of the known detector response and an exponential function (the single-photon avalanche detectors have a measured detector response of 289(5) ps). To estimate the Rabi frequency, we performed a series of $\ensuremath{g^{(2)}(0)}$ measurements and fit the correlation function following Muller \textit{et al.}~\citep{Muller2007}.
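The fit model can be sketched as the convolution of an idealized antibunching dip with a Gaussian detector response of $\sigma \approx 0.29$ ns. The single-exponential recovery used below is a simplification of the weak-excitation correlation function of Ref.~\citep{Muller2007}, intended only to illustrate how the finite detector resolution lifts the apparent $g^{(2)}(0)$.
\begin{verbatim}
import numpy as np

def g2_ideal(tau, tau0):
    # simplified antibunching dip with g2(0) = 0 (times in ns)
    return 1.0 - np.exp(-np.abs(tau) / tau0)

def g2_measured(tau, tau0, sigma_det):
    # blur the ideal g2 with a normalized Gaussian detector response
    kern = np.exp(-0.5 * (tau / sigma_det) ** 2)
    kern /= kern.sum()
    return 1.0 + np.convolve(g2_ideal(tau, tau0) - 1.0, kern, mode="same")

tau = np.linspace(-10.0, 10.0, 2001)          # symmetric grid, ns
blurred = g2_measured(tau, tau0=0.5, sigma_det=0.29)
print(blurred[tau.size // 2])                 # apparent g2(0) > 0
\end{verbatim}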
\begin{figure}[ht]%
\begin{centering}
\includegraphics[]{g2_resonant}%
\caption{Second-order auto-correlation of photons from a single quantum dot state in the weak-excitation resonance fluorescence regime. The fit function is a convolution of the known detector resolution and the expected signal. The blue solid curve is the fit function and the red dashed line is the resulting auto-correlation function for an infinitely fast detector, which gives $\ensuremath{g^{(2)}(0)}=0.00^{+0.04}_{-0}$. The Rabi frequency is 1 GHz. Uncertainties are one standard deviation. Inset: Resonance fluorescence when the laser is on resonance (orange) and residual laser scattering (blue) when the laser is detuned by 0.2 nm from the quantum dot resonance with an equivalent Rabi frequency of 6 GHz. The residual scattering signal is displayed a factor of 50 higher than measured, to make the signal visible.}%
\label{fig:g2}%
\end{centering}
\end{figure}
Beyond 1~GHz we cannot characterize $g^{(2)}(0)$, as we enter the strong light-matter interaction regime. To estimate the laser scattering at high Rabi frequencies, we detune the laser from the QD resonance, see inset in Fig.~\ref{fig:g2}. If we assume this is roughly the resonant value, the estimated laser contribution to the single-photon resonance fluorescence signal is $<1~\%$ at a Rabi frequency of 6 GHz. For a Rabi frequency of 6 GHz we measured 4~Mcts/s on the SPAD detectors with the QD in resonance with the cavity. With a detector quantum efficiency of approximately 0.22 at 930~nm and considering a $10~\%$ counting error due to the detector dead time of 50~ns, this corresponds to approximately 20~Mcts/s arriving at the detector. We note that the large anti-bunching of the device is only present with the metal planarization. Without it, the auto-correlation was at best close to 0.5 and in many cases showed only a very small deviation from 1, as the laser scattering competes with the single-photon resonance fluorescence from the single QD state.
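The count-rate correction quoted above is simple arithmetic: the registered rate is scaled up by the $\sim$10\,\% dead-time loss and divided by the detector quantum efficiency. A minimal sketch:
\begin{verbatim}
measured_mcps = 4.0     # Mcts/s registered by the SPAD
dead_time_loss = 0.10   # ~10% undercounting at 50 ns dead time
quantum_eff = 0.22      # detector efficiency near 930 nm

rate_at_detector = measured_mcps * (1.0 + dead_time_loss) / quantum_eff
print(rate_at_detector)  # -> 20 Mcts/s arriving at the detector
\end{verbatim}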
Four parameters are important in the characterization of single photon sources: The source brightness {\it i.e.,} how many useful photons are collected; the source efficiency, {\it i.e.,} the percentage of arbitrary time bins occupied by single photons; suppression of multi-photons, as measured by the second-order correlations ($g^{(2)}(t)$); and the indistinguishability of the quantum light.
In many emerging quantum optics experiments and applications the brightness of the source is critically important to a successful outcome. For example, boson sampling was simulated using quantum dot single photons \citep{Wang2017}, where the source produced 26 million photons per second without normalizing out detector inefficiencies. With our approach this could be boosted by a factor of two, although in some cases, as in Ref.~\citep{Wang2017} where the single photon source is multiplexed, other components (for instance, Pockels cells) limit the useful brightness. For some quantum communication protocols such as BB84, single-photon brightness may provide an appealing advantage over attenuated lasers. For higher-order photon correlations, a factor-of-two brightness gain reduces the measurement time by that factor raised to the correlation order (e.g. a factor of 4 for $g^{(2)}(t)$), which allows the expansion of the number of interacting nodes and photons.
In other applications, such as linear optical quantum computing~\citep{Knill2001} and quantum metrology~\citep{Giovannetti2004,Banaszek2009,Matthews2016,muller2017}, a source efficiency above a certain threshold is critically important, while source brightness is an added benefit. Applications require various degrees of multi-photon suppression, and whereas some require extremely high indistinguishability, others require none.
Our device design has a variety of flexible attributes. The device has partially overlapping cavity modes of orthogonal polarization; thus, the emission can be unpolarized for applications such as BB84, or polarized for applications such as boson sampling. Furthermore, the cavity-mode splitting can be adjusted through processing. However, the design is not without issues. These include the alignment of the in-plane QD dipole with the waveguide mode for optimum pump light efficiency, and the alignment of the QD with the pillar cavity, which here is not optimized. Both of these issues relate to the classical efficiency of the device, for instance the number of working devices and the pump efficiency, and can be overcome with further engineering.
While the waveguide coupling to the cavity provides efficient in-plane QD resonant excitation, a small component of the QD resonance fluorescence couples back into the waveguide and not into the cavity mode. From simulations, we estimate this to be about 10-15~\% of the total QD emission. Finally, while a count rate of 20 Mcts/s constitutes a bright source, the system is pumped cw and the radiative decay rate is 2.5 GHz. The large difference between these two rates is due to spectral-diffusion-induced blinking of the emission. Adding a small amount of nonresonant light can markedly reduce this effect~\citep{metcalfe2010,chen2016}; however, this was not implemented here, to avoid the need for spectral filters.
The presented device is a first step towards an all integrated single photon source. A future device could divert a small fraction of the light on chip for real-time metrology analysis (the 10-15 \% discussed above), while sending most of the light off chip to be used in an application.
Such an approach would require low-loss waveguides~\citep{Davanco2017}, on-chip detectors~\citep{faraz2015}, and schemes to measure on-chip indistinguishability and multi-photon suppression~\citep{Thomay2017}. While each presents its own challenges, they are individually useful in various emerging quantum photonics applications.
\bibliographystyle{apsrev4-1}
\section{Introduction}
There is large variety of single-channel models, proposed decades ago, which
describe spectra of hadrons with reasonable accuracy \cite{1}. The most popular
and widely used is the relativistic quark model of N.Isgur and coworkers for
mesons \cite{2} and baryons \cite{3}, where effective constants are used for
quark masses (constituent masses), as well as an overall negative constant
$(C<0)$, and several additional parameters for spin-dependent interactions. For
heavy quarkonia the Cornell model \cite{4}, based on nonrelativistic
Schr\"{o}edinger equation and linear plus Coulomb potential, was extensively
exploited.
Most of the models proposed are rather successful in predictions of low-lying
hadron masses and the idea, that relativistic quark Hamiltonian with confining
and the gluon-exchange potential, can be derived from QCD, seems to be
realistic. It was indeed done in Ref.~\cite{5}, using the Wilson loop and
field correlator technic, where for quarks at the ends of the rotating QCD
string the relativistic string Hamiltonian (RSH) was derived. The RSH contains
several improvements over old models:
i) At small $L$ (low rotation) it reduces to standard relativistic quark
Hamiltonian \cite{1,2,5,5'}, but with current quark masses used instead of
phenomenological constituent masses. The calculated hadron masses are
expressed through the former and the string tension $\sigma$ \cite{5,5',6}.
ii) The overall negative constant is absent while for a given
quark the universal negative self-energy correction appears,
calculated via $\sigma$ \cite{7}; its presence is crucially
important to reproduce linear behavior of the Regge trajectories.
iii) At high $L$ due to string rotation term, which naturally
appears in RSH \cite{5} and is absent in quark Hamiltonian models
\cite{1,2,3,4}, the Regge trajectories with correct slope and
intercept are calculated \cite{8,9}.
As a result, one obtains the formalism, derived from QCD with minimal number
of the first-principle parameters (current quark masses, $\alpha_s$, and string
tension $\sigma$; connection of two latter was found in \cite{10}).
Theoretical calculation of hadron masses with the use of RSH in single-channel
approximation was successful for all states below open decay thresholds (see
\cite{11} and \cite{12} for charmonium and bottomonium, \cite{13} for
heavy-light mesons, \cite{8,9,14} for light mesons, and \cite{15} for higher
pionic states). However, for states above threshold RSH gives somewhat higher
masses and one can expect that taking coupling to decay channels into account
one obtains mass shifts of these levels down, closer to experimental values.
To this end the channel-coupling (CC) models were formulated in
\cite{4}, \cite{16,17}. They are based on the presumed form of
the decay Hamiltonian, which is usually taken to be the $~^3P_0$
model \cite{18,19}. More forms have been investigated in
\cite{20}, with a conclusion that the so-called $sKs$ model yields
results close to that within the $~^3P_0$ one. Influence of the
CC effects on the spectrum are significant and can be divided in
two parts.
First, the effect of close-by channels, when the energy of the
level in question is not far from the two-body threshold (e.g.
$\psi(3770)$ in connection with $DD, DD^*$ thresholds). As was
found in \cite{16, 21}, the overall shift from the sum of the
nearest thresholds (e.g. for charmonium) is of the order of
(100-200) MeV.
Another part of the mass shift is associated with the contribution
from higher intermediate thresholds (e.g. of a pair of higher $D$
and $\bar D$ mesons) and in this case convergence of such terms
appears to be questionable. This topic was investigated in
\cite{22} and in the first paper of \cite{22} the authors have
introduced additional form factor for quarks to make the $~^3P_0$
vertex nonlocal and ensure convergence of the sum of contributions
over thresholds.
Therefore the structure of the string-breaking vertex becomes a fundamental
issue and one should try to find its properties from the basic QCD Lagrangian,
which takes into account both confinement and chiral symmetry breaking. Such
the strong decay Hamiltonian was derived from the first principles in \cite{24}
(which also supported by the $sKs$ model in its relativistic version), the
interaction kernel being simply the confining potential between the newly born
quark (antiquark $\bar q$) and original (possibly heavy) antiquark $\bar Q$
(quark $Q$). This constitutes the strong decay term in action of the form (in
the local limit cf \cite{20})
\be S_{eff} = \int d^4 x \bar \psi_{\bar q} (x) \mathcal{M} (x)
\psi_q (x),\label{1}\ee
\be \mathcal{M} (\vex, \vex_Q, \vex_{\bar Q}) =\sigma ( | \vex_q
- \vex_{\bar Q} | + | \vex_{\bar q} - \vex _Q |). \label{2}\ee
From (\ref{1}), (\ref{2}) one obtains the decay matrix element
between the original state $(Q\bar Q)_{n}$ and decay products --
two mesons $(Q \bar q)_{n_2}$ and $(\bar Q q)_{n_3}$ with relative
momentum $\vep$,
\be J_{nn_2n_3} (\vep) = \frac{1}{\sqrt{N_c}} \int \bar y_{123}
\Psi^{(n)}_{Q\bar Q} (\veu-\vev) e^{i\vep\ver} \mathcal{M}
(\vex, \veu, \vev) \Psi^{(n_2)}_{Q\bar q} (\veu-\vex)
\Psi^{(n_3)}_{\bar Q q} (\vex-\vev) d^3\vex d^3
(\veu-\vev).\label{3}\ee
Here the factor $\bar y_{123}$ accommodates spin-angular variables
and the functions in (\ref{3}) refer to radial dependencies only,
$\ver = c (\veu-\vev)$, $c\approx 1$ for heavy $Q\bar Q$ masses.
In the way it was derived in \cite{24}, the $\mathcal{M} (\vex,
\veu, \vev)$ refers to the string between positions $\veu$ of
quark $Q$ and $\vev$ of antiquark $\bar Q$, which breaks at the
point $\vex$ somewhere between $\veu$ and $\vev$. It is clear,
that the point $\vex$ should lie in the body of the string, i.e.
within the string width $d$ from the axis of the string, see
Fig.1. This implies the necessity of an extra factor in (\ref{1}),
$\Theta_{\rm string}(\vex,\vel)$, which is proportional to the
energy density of the string with a fixed axis $\vel$ (the vector
$\veu-\vev$ in our case).
\begin{figure}[h]
\center{
\includegraphics{triangle.eps}
\caption{ String breaking (pair creation) point $x$ and heavy-light radii $R_2$
and $R_3$, shown together with the string of radius $d$ between charges $Q$ and
$\bar Q$.}}
\end{figure}
Now the string density was studied both analytically \cite{25} and
on the lattice \cite{26,27}. In the field correlator method
\cite{28,25} the string width $d$ is proportional to small vacuum
correlation length, $\lambda\approx 0.1 $ fm \cite{10,29}, and
therefore it is also small, $d\la 0.3 $ fm, for not highly
excited hadrons.
The string field density was computed in \cite{25,26,27} and one
can visualize there the field distribution in the string,
exponentially decreasing far from the string axis. In lattice
calculations similar estimates hold, but they depend on the way of
probing the string fields: in case of a connected probe one has
$d_{\rm con} \approx 0.3$ fm \cite{26} and in the case of a
disconnected probe $d$ is smaller, $d_{\rm disc}\approx 0.15$ fm
\cite{27}. A simple look into the configuration of large closed
Wilson loop for the string and a smaller one for $q\bar q$ closed
trajectory, shows that $d_{\rm disc}$ is closer to the string
breaking situation. In what follows we shall take $d$ to be
somewhere between the two (lattice) values. In next Section we
shall study the effect of the decaying string width, called the
factor $\Theta_{\rm string} (\vex,\vel)$, on the decay matrix
element and resulting mass shifts of energy levels.
\section{The width-of-the string correction in the string-breaking action}
In \cite{24} it was shown that the effective action of the $q\bar
q$ pair emission in the field of static charges $Q\bar Q$, placed
at fixed points, can be written as
\be S_{eff} = \int d^4 x d^4 y \bar \psi (x) \tilde \mathcal{M}
(x,y) \psi(y),\label{4}\ee where the mass operator $ \tilde
\mathcal{M} (x,y)$ is to be found from the nonlinear (integral)
equation
\be
\tilde \mathcal{M} (x,y) = \left( \frac{1}{\hat
\partial + m_q + \tilde\mathcal{M}}\right)_{ (x,y)} J(x,y)\label{5}\ee
and the kernel $J(x,y) = J_Q(x,y) + J_{\bar Q} (x,y)$ accounts for
the fields in the string. Taking into account only the color-electric
fields of the scalar confining correlator $D(x)$, one can present
$J_Q(x,y)$ as
\be J_Q (x,y) = \frac{g^2}{N_c} \lan A_4 (x) A_4 (y)\ran = \int_Q^x du_i
\int^y_Q dv_i D(u-v).\label{6}\ee
Here $Q$ is the position of the closest static charge $Q$ (or $\bar Q$) in 4d
space, and an analogous term appears for the anticharge $\bar Q$ (or $Q$).
Note, that in \cite{24} it was tacitly implied that in (\ref{6}) the averaging
is over the vacuum configurations, and the points $\vex, \vey$ can be anywhere
in the space, surrounding static charge. It is the property of the kernel
$J(x,y)$ that it is asymptotically large for collinear $\vex||\vey$ , but the
direction of this vector can be arbitrary. That was enough for the proof of
Chiral Symmetry Breaking (CSB) due to confinement, but in our case one needs a
further specification.
Namely, at the moment of creation the created pair $q\bar q$ must lie on the
minimal surface of the Wilson loop of static charges $Q\bar Q$, i.e. on (or
inside) the string connecting static charges. This means that we must replace
in (\ref{6}) $\lan A_4A_4\ran$ by $ \lan A_4(x) A_4(y)\ran_{\rm string}$, where
the latter acquires the string profile factor $\Theta_{\rm string} (x,y)$,
proportional to the string density of color-electric fields, \be \Theta_{\rm
string} (x,y) = \xi (\vex, \vel) \xi (\vey, \vel).\label{7}\ee
Here $\vel = \vex_Q (t) - \vex_{\bar Q} (t)$ is the string axis vector. For
long string, $|\vel|\gg \lambda$, one expects that $\xi$ depends only on the
distance $\vex_\bot$ from the string axis, e.g. \be \vex^2_\bot = \frac{|(\vex
- \vex_Q) \times \vel|^2}{\vel^2}.\label{8}\ee To simplify matter and for rough
estimates one can take $\xi$ as a Gaussian function of distance to the center
of the string, so we take \be \xi (\vex, \vel) \approx \exp \left( -\rho^2
\left(\vex- \frac{\vex_Q+\vex_{\bar Q}}{2}\right)^2\right),\label{9}\ee where
$\rho\sim 1/d \sim O(1~\mbox{GeV})$.
Insertion of $\Theta_{\rm string} (x,y)$ in its local form, $\Theta_{\rm
string} (\vex,\vex)$, into (\ref{3}) is easily integrated and yields for
intermediate mesons with almost equal radius, $R_2\approx R_3$ (corresponding
SHO parameters $\beta_2 =\beta_3, \beta_i\sim 1/R_i$) \be J_{nn_2n_3} (\vep)
\to J_{nn_2n_3}^{\rm (string)}(\vep) = \eta (\beta_2, \rho)
J_{nn_2n_3}(\vep).\label{10}\ee Here \be \eta(\beta, \rho) =\left(
\frac{\beta^2_2}{2\rho^2 + \beta^2_2}\right)^{3/2}\cong \left(\frac{d^2}{d^2+
R^2_2}\right)^{3/2}.\label{11}\ee
One should note that the expression (\ref{11}) is valid as an
asymptotic estimate for large distances $R_2$, $R_2\gg d$. Besides
the approximation (\ref{9}) does not take into account an
additional suppression in the case of short strings, $|\vel|<
R_2+R_3$.
Thus $\eta(\beta, \rho)$ plays the role of the suppression factor for high
excited intermediate mesons. Indeed, radii of high excited mesons are growing
with radial and orbital quantum numbers $(n,L), R_2(n,L) \sim \sqrt{L},
\sqrt{n}$, and $\eta^2(\beta, \rho) \sim \frac{1}{L^3}, \frac{1}{n^3}$.
\section{Study of the decay vertex}
The string decay vertex, derived in Ref.~\cite{24}, has the form
(\ref{2}) in the local approximation. In the standard $^3P_0$
approach \cite{19} it is assumed that one can effectively replace
the kernel $\mathcal{M} (\vex, \vex_Q,\vex_{\bar Q})$ by a
constant. The same type of approximation was used in \cite{28*,
29*, 30*} and also in \cite{24}, where results were compared with
the analysis of decays of $\psi(3730)\to D\bar D$ and $\Upsilon
(4S) \to B\bar B$ in \cite{31*}. Below we shall study the
reliability of this approximation and show that replacing the kernel
$\mathcal{M}$ by a constant is not always valid, especially for highly excited states.
To illustrate this statement we consider the decay matrix element
(\ref{3}) with the kernel $\mathcal{M}$, written in momentum space, where the wave functions of the heavy-light mesons are replaced by Gaussians $\Psi^{(n_2,n_3)}(q) = \left(\frac{2\sqrt{\pi}}{\beta_2}\right)^{3/2}e^{-q^2/2\beta_2^2}$ ($\beta_2=0.48$ for $D$ mesons and $\beta_2=0.49$ for $B$ mesons):
\begin{equation} \label{J(p,sigma)}
J_{nn_2n_3}(\textbf{p}) = \frac{\sigma}{\sqrt{N_c}} \frac{32\sqrt{2}\pi}{\beta_2^4} \int \bar{y}_{123}\Psi^{(n)}_{Q\bar{Q}}(c\textbf{p}+\textbf{q})e^{-q^2/\beta_2^2} \Phi\left(-\frac{1}{2}; \frac{3}{2}; \frac{q^2}{2\beta_2^2}\right) \frac{d^3q}{(2\pi)^3}.
\end{equation}
Here $\Phi(a;b;z)$ is the confluent hypergeometric function,
\begin{equation}\label{hypergeom}
\Phi(a;b;z) = 1+ \frac{a}{b}z+\frac{a(a+1)}{b(b+1)}\frac{z^2}{2!} + \ldots,
\end{equation}
the wave function $\Psi^{(n)}_{Q\bar{Q}}(c\textbf{p}+\textbf{q})$ can be expressed as a series of oscillator wave functions (see Appendix 2 of \cite{30*} for details).
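For a numerical evaluation of (\ref{J(p,sigma)}), the confluent hypergeometric factor is available in standard libraries; the minimal sketch below checks the truncated series (\ref{hypergeom}) against \texttt{scipy}:
\begin{verbatim}
from scipy.special import hyp1f1

def phi_series(a, b, z, n_terms=40):
    # truncated Kummer series Phi(a;b;z) = sum_k (a)_k z^k / ((b)_k k!)
    term, total = 1.0, 1.0
    for k in range(n_terms):
        term *= (a + k) / (b + k) * z / (k + 1)
        total += term
    return total

z = 0.7
print(phi_series(-0.5, 1.5, z), hyp1f1(-0.5, 1.5, z))
\end{verbatim}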
Another expression, which should be compared with (\ref{J(p,sigma)}), can be obtained by replacing the kernel $\mathcal{M}$ with a constant $M_\omega$:
\begin{equation}\label{J(p,Momega)}
J^{(M_\omega)}_{nn_2n_3}(\textbf{p}) = \frac{M_\omega}{\sqrt{N_c}} \left(\frac{2\sqrt{\pi}}{\beta_2}\right)^3 \int \bar{y}_{123}\Psi^{(n)}_{Q\bar{Q}}(c\textbf{p}+\textbf{q})e^{-q^2/\beta_2^2} \frac{d^3q}{(2\pi)^3}.
\end{equation}
We consider the $nS$ ($n$ ranging from $1$ to $5$) charmonium states ($\Psi^{(n)}_{Q\bar{Q}}$ in (\ref{J(p,sigma)}) and (\ref{J(p,Momega)})), while the final states are $DD$ (or $DD^*, D^*D^*$) in all cases; then $\bar{y}_{123}$ is proportional to $q_i$. It is of interest
to notice that while for the $1S$, $2S$ and $3S$ states one can
reproduce $\mathcal{M}$ by a constant rather well, for the $4S$ and $5S$ states such a replacement
does not work (see left panels of Figs. 2--6). The constants $M_\omega$ appear to be different in these cases:
$M_\omega = 0.65$ GeV, 0.8 GeV and 1.1 GeV for the $1S$, $2S$ and $3S$ states, respectively.
\begin{figure}[h]
\center{
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{1Sc.eps}
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{1Sb.eps}
\caption{Profiles of decay matrix elements, Eq.(3) (scalar parts), for $1S (c\bar c)$ into
$DD$ (left panel) and $1S(b\bar b)$ into $BB$ (right panel) calculated with
decay vertex of Eq. (2), expression (\ref{J(p,sigma)}) -- solid line, and for the constant decay vertex, expression (\ref{J(p,Momega)}) -- broken line.}}
\end{figure}
Surprisingly, for the $nS$ bottomonium states the constant decay vertex reproduces the results with the kernel $\mathcal{M}$ very well even for the $4S$ and $5S$ states; however, the constants are different for different bottomonium states: $M_\omega$ varies from 0.65 GeV for the $1S$ state to 1.3 GeV for the $5S$ state (see right panels of Figs. 2--6), as we also observed for the charmonium states.
\begin{figure}[h]
\center{
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{2Sc.eps}
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{2Sb.eps}
\caption{The same as in Fig. 2, but for the $2S (c\bar c)$ into $DD$ (left
panel) and $2S(b\bar b)$ into $BB$ (right panel).}}
\end{figure}
\begin{figure}[h]
\center{
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{3Sc.eps}
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{3Sb.eps}
\caption{The same as in Fig. 2, but for the $3S (c\bar c)$ into $DD$ (left
panel) and $3S(b\bar b)$ into $BB$ (right panel).}}
\end{figure}
\begin{figure}[h]
\center{
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{4Sc.eps}
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{4Sb.eps}
\caption{The same as in Fig. 2, but for the $4S (c\bar c)$ into $DD$ (left
panel) and $4S(b\bar b)$ into $BB$ (right panel).}}
\end{figure}
\begin{figure}[h]
\center{
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{5Sc.eps}
\includegraphics[width= 6cm,height=5cm,keepaspectratio=true]{5Sb.eps}
\caption{The same as in Fig. 2, but for the $5S (c\bar c)$ into $DD$ (left
panel) and $5S(b\bar b)$ into $BB$ (right panel).}}
\end{figure}
\section{Analytic and phenomenological study of unquenched spectra}
Our RSH was derived in Ref.~\cite{5} starting from the Wilson loop for the
$q\bar q$ system and using Nambu-Goto action for the corresponding string. In
the derivation presence of additional quark loops was neglected (quenched
approximation), basing on the $1/N_c$ argument and additional
(phenomenological) numerical suppression of the quark-loop effects. It is the
purpose of the present Section to study these effects analytically and
phenomenologically, and compare them with lattice results in the forthcoming
Section.
The generating functional of heavy charges $Q\bar Q$, after
integrating over other quark-loops, has the form \be Z=\int DA\exp
\ \mathcal{L}_A W_{Q\bar Q} (A) \det (m_q+ \hat
D(A)),\label{12}\ee where $\mathcal{L}_A$ is the standard gluonic
action and $W_{Q\bar Q} (A)$ is the external (fixed) Wilson loop
of heavy quarks. The $\det$ term can be written in the path
integral form \cite{24}: \be \det (m_q+ \hat D(A)) =\exp\left [tr
\ln \left( \frac12 \int^\infty_0 \frac{ds}{s} (D^4z) e^{-K}
W_{q\bar q} (A) \right)\right],\label{13}\ee where $(D^4z)$ is the
path integration, $s$ is the proper time variable, $K=\frac14
\int^s_0 \left(\frac{dz_\mu(\tau)}{d\tau}\right)^2 d\tau$, and
$W_{q\bar q } (A)$ is the Wilson loop of sea quarks, while $tr$
implies summation over flavor indices and space-time coordinates.
Expanding in the number of sea-quark loops, one has the first correction term
\cite{24}: \be Z_{\rm 1 loop} = \int DA \exp \mathcal{L}_A \left( \frac12 tr
\left\{ \int^\infty_0 \frac{ds}{s} D^4 z) e^{-K} W_{q\bar q} (A) W_{Q\bar Q}
(A) \right\} \right).\label{14}\ee Integrating in (\ref{14}) over $DA$, one
obtains the effective one-loop partition function, \be Z_{\rm 1 loop} =
-\frac12 \int^\infty_0 \frac{ds}{s} (D^4z) e^{-K} \chi (W_{q\bar q}, W_{Q\bar
Q}),\label{15}\ee where $\chi$ is connected average of two loops, \be \chi
\equiv\lan W_{q\bar q} (A) W_{Q\bar Q} (A) \ran - \lan W_{q\bar q} (A) \ran
\lan W_{Q\bar Q} (A) \ran. \ee Properties of $\chi$ were studied in
\cite{25,26}, where it was shown that one can find a simple expression for
$\chi$ for small distances between minimal area surfaces of both Wilson loops,
\be \chi \cong \frac{1}{N_c^2} \exp (-\sigma S_\Delta)\label{17}\ee and
$S_\Delta$ is the minimal area of the surface connecting contours $C_1$ and
$C_2$ of Wilson loops $W_{Q\bar Q}$ and $W_{q\bar q}$, respectively, as shown
in Fig.7. One can see there that the width of the bands in $S_\Delta$ along
the time direction is of the order of $R_2, R_3$, where $R_2, R_3$ are the radii
of the intermediate mesons $(Q\bar q)$ and $(\bar Q q)$.
\begin{figure}[h]\center{
\includegraphics[height=8cm]{oval.eps}
\caption{Connected average of two Wilson loops, expressed via area
law for the surface $S_\Delta$ between two loops. Radii $R_2$ and
$R_3$ of two intermediate mesons $(Q\bar q)$ and $(\bar Q q)$ are
also shown.} }
\end{figure}
Several properties of the effective partition function (\ref{15}) can be
derived immediately:
1) The general property of the unquenching process: since the
integral $\int\frac{ds}{s}$ is obviously diverging at small $s$,
one needs a renormalization step, which means that the string
tension $\sigma$ in (\ref{17}) is the renormalized (by
unquenching) version of the quenched $\sigma$.
Expanding averaged static Wilson loop with sea quarks, one obtains \be \lan
W_{Q\bar Q|q\bar q}\ran = \lan W_{Q\bar Q} \ran + \frac{1}{N_c} \llan W_{Q\bar
Q} \bar W_{q\bar q}\rran + \frac{1}{N^2_c}\llan W_{Q\bar Q} \bar W_{q\bar
q}\bar W_{q\bar q}\rran+...,\label{18}\ee where the bar over $W_{q\bar q}$
implies averaging over all paths, i.e. all contours of light quarks $C_{q\bar
q}$ with the weight defined by the Fock-Feynman-Schwinger path integral, \be
\bar W_{q\bar q} = \int^\infty_{s_0} \frac{ds}{s} (D^4z) e^{-K} W_{q\bar q}
(C_{q\bar q}). \label{19}\ee Correspondingly, one can write each Wilson loop
and their products as (omitting correction terms independent of $T$): \be \lan
W_{Q\bar Q} \ran = \exp (-V_{Q\bar Q} (R) T)\label{20}\ee \be \lan W_{Q\bar Q}
\bar W_{q\bar q}\rran = \exp (-V_\Delta (R) T);\label{21}\ee \be \lan W_{Q\bar
Q} \bar W_{q\bar q} \bar W_{q\bar q} \rran = \exp (-V_{\Delta\Delta} (R)
T).\label{22}\ee Therefore the resulting $Q\bar Q$ interaction appears to be
dependent on $T$. Since $V_{Q\bar Q} (R) = \sigma_{\rm ren} R + V_{\rm GE}
(R)$, while $V_{\Delta\Delta}(R) < V_\Delta (R) < V_{Q\bar Q}(R)$ for large
enough $R$, with increasing $T$ the $Q\bar Q$ system will pass from the purely
confining regime $V_{Q\bar Q}$ to one-loop regime $V_\Delta$ and then to
two-loop regime $V_{\Delta\Delta}$ etc. In the next Section we shall show that
this type of transition was indeed observed on the lattice.
As to the form of $V_\Delta (R), V_{\Delta\Delta} (R)$ etc., one expects that
for $R< R_2 + R_3$, where $R_2, R_3$ are radii of lowest $(Q\bar q), (\bar Q
q)$ states, the form of $V_\Delta (R)$ does not change, i.e. \be V_\Delta(R)
\approx \sigma_{\rm ren} R,~~ R<R_2+R_3\approx 1 {\rm fm}.\label{23}\ee
In case of the $c\bar c$ system $R_2=R_3= R_D\approx R_{D^*} \approx 0.6$ fm.
The same is true for $b\bar b$ system with $R_B \approx 0.5$ fm.
In a similar way one can treat $V_{\Delta\Delta}$ and higher loop
terms. As a result one can predict that the static potential can
be defined from the sum (\ref{18}) in the $T$- independent way for
$R\la 1$ fm, \be V_{\rm static} (R) \approx V_{Q\bar Q} (R)
\approx \sigma_{\rm ren} R, ~~R\la 1 {\rm fm}.\label{24}\ee
For $R>1.2$ fm the situation is complicated and static
$(T$-independent) potential cannot be defined in the strict sense,
as was discussed above. In this case another approach can be used,
namely, the expansion of the connected averages $\llan W_{Q\bar
Q}\bar W_{q\bar q}\rran$ in the series over intermediate
heavy-light meson states, as was done in \cite{17}, and it is
equivalent to the expansions in \cite{4}, \cite{16},
\cite{19,20,21,22}. In this way instead of $V_\Delta (R)$ one
defines the energy-dependent nonlocal interaction \be V_{121}
(\veq,\veq', E) = \sum_{n_2n_3} \int \frac{d^3\vep}{(2\pi)^3}
\frac{X_{n_2n_3} (\veq-\vep) X^+_{n_2n_3}
(\veq'-\vep)}{E-E_{n_2n_3} (\vep)}, \label{25}\ee where subscripts
1,2 refer to the channels $Q\bar Q$ and $(Q\bar q) (\bar Q q)$,
respectively, while $n_2, n_3$ are quantum numbers of the mesons
$(Q\bar q)$ and $(\bar Q q)$ with the wave functions $\psi_{n_2}$
and $\psi_{n_3}$, and \be X_{n_2n_3} (\ver) =\frac{M_\omega \eta
(\beta, \rho)}{\sqrt{N_c}} \int \frac{d^3\veq}{(2\pi)^3}
e^{i\veq\ver} \psi_{n_2} (\veq) \psi_{n_3} (\veq).\label{26}\ee In
a similar way instead of $V_{\Delta\Delta} (R)$, one defines the
interaction $V_{131}$ due to three-meson intermediate states.
As a result, the total Hamiltonian has the form \be H=H_{kin} + V_{Q\bar
Q}(\veR) + V_{121} (\veR, \veR', E) + V_{131} (\veR, \veR', E)
+....\label{27}\ee As one can see, in (\ref{27}) the $T$-dependent interaction
of (\ref{20}), (\ref{21}), (\ref{22}) is replaced by the energy-dependent nonlocal interaction.
Let us underline general properties of the new Hamiltonian (for an
earlier discussion see \cite{31}).
i) For energies below all thresholds the interaction
$\sum_{n\geq2} V_{1n1} (\veR, \veR, E)$ is negative, which implies
attraction on average from all higher intermediate states. Hence
the linear potential in $V_{Q\bar Q} (R)$ is modified (flattened)
by inclusion of intermediate states. This attraction also persists
in some energy region above thresholds, where the real part of $
V_{1n1}$ is still negative.
ii) Due to strong reduction of overlap integrals of the type
$\|\Psi_k (\veR) V_{1n1} (\veR, \veR', E) \Psi_l (\veR')\|$ (as
was discussed in the previous Section, this is due to the string-width
effect), the series $\sum_{n\geq 2} V_{1n1}$ is fast converging
and therefore only a few terms are important.
Summarizing the effect of sea-quark loops on the $Q\bar Q$
interaction, one can say that there is no energy-independent (or
time-independent) universal local interaction which can describe
the dynamics of $Q\bar Q$ system in the unquenched case. If one
tries to simulate the effect of quark loops on the static $Q\bar
Q$ potential, then it should be an approximate local interaction,
which is close to linear potential $\sigma _{\rm ren} R$ for $R\la
1$ fm, and becomes softer (flattening) for larger $R$, which can
be approximated by making $\sigma_{\rm ren}$ energy- and $R$-dependent.
Such a flattening potential was introduced in \cite{14} to describe high
excitations of light mesons and was used later in \cite{11,12,13} for higher
charmonium states: \be \tilde V_{Q\bar Q} (R) = \sigma (R) \cdot R;~~ \sigma
(R) = \sigma_0 \left[ 1- \gamma_0 \frac{\exp (\sqrt{\sigma_0} (R-R_1))}{B+\exp
(\sqrt{\sigma_0} (R-R_1))}\right].\label{28}\ee Here $R_1 \approx 1.2$ fm is
the distance, where the string can decay into two mesons, $\sigma_0 R_1 +
2M_Q\approx 2 M_{Q\bar q}, \sigma_0= 0.19$ GeV$^2$. Putting $\gamma_0=0.40$,
the best description of radial excited light mesons was obtained in \cite{14}.
The modified potential $\tilde V_{Q\bar Q} (R)$, taken from \cite{14}, is shown
in Fig.8.
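For illustration, the flattened string tension of Eq.~(\ref{28}) is easy to tabulate. In the sketch below $\sigma_0$, $\gamma_0$ and $R_1$ are the values quoted in the text, while the value of $B$, which is not specified above, is an assumption made purely for illustration.
\begin{verbatim}
import math

SIGMA0 = 0.19          # GeV^2
GAMMA0 = 0.40
FM = 1.0 / 0.1973      # 1 fm in GeV^{-1}
R1 = 1.2 * FM          # flattening onset of Eq. (28)
B = 20.0               # NOT quoted in the text; illustrative only

def sigma_flat(r):
    # Eq. (28): sigma(R) = sigma0 [1 - gamma0 e / (B + e)]
    e = math.exp(math.sqrt(SIGMA0) * (r - R1))
    return SIGMA0 * (1.0 - GAMMA0 * e / (B + e))

for r_fm in (0.5, 1.0, 1.5, 2.0, 2.5):
    r = r_fm * FM
    print(r_fm, round(sigma_flat(r) * r, 3))  # V(R) in GeV
\end{verbatim}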
\begin{figure}
\epsfig{figure=Pot.eps,height=90mm,width=90mm} \caption{Modified potential with
parameters given in Eq.~(\ref{28}). For reference simple linear potential with
$\sigma = 0.19$ GeV$^2$ is also plotted. \label{fig.1}}
\end{figure}
The resulting light meson masses, taken from \cite{14}, are
compared with experimental data in Table 1. For heavy quarkonia
the role of the flattening is less important. Below in Table 2 one
can see corresponding effect in charmonium levels, which is of the
order of several tens of MeV for high excitations.
Other types of flattening potentials were suggested in \cite{*}. One should,
however, be careful with the large-$R$ behavior of these flattening potentials,
which is bounded from above, so that the quarks $Q, \bar Q$ could
liberate themselves; this contradicts the physical picture in QCD,
where an unstable hadron decays into hadrons, but not into quarks.
\begin{table}
\caption{ Light meson masses $M_0(nL)$ (in GeV) from the RSH with a pure
linear potential and their masses $\tilde M_0(nL)$ in the potential
with flattening. Also given are the corresponding flattening correction
$\delta_{\rm flat} $ (in MeV), the self-energy correction
$\Delta_{SE} =- \frac{12 \sigma}{\pi M_0}$, the string correction
$ \Delta_{str} = \frac{2 \sigma \lan r^{-1}\ran l (l+1)}{M^2_0}$,
and the resulting mass $M_{tot} = \tilde M_0 + \Delta_{SE}
+\Delta_{str} + \Delta_{GE}$ (where $\Delta_{GE}$ is the
correction from the gluon-exchange potential), all in GeV.}
\vspace{0.5cm}
\begin{tabular}{|r|r|r|r|r|r|c|}
\hline State& $\gamma=0$& $\gamma=0.4$& $\delta_{\rm flat}$
(MeV)&$\Delta_{SE}$& $\Delta_{str} $& $M_{tot}$\\
nL & $M_0(nL)$& $\tilde M_0(nL)$&flattening& &&\\\hline
1S& 1.347&1.335& -12& -0.510& 0& $0.725$\\
2S & 2.009& 1.944& -65&-0.342&0& $1.503$\\
1D&2.167&2.122&-45&-0.317&-0.087&1.662\\
3S&2.512&2.300&-212&-0.274&0& 1.937\\
2D&2.615&2.428&-187&-0.263& -0.058 &2.052\\
4S& 2.931&2.569&-362&-0.235&0&2.252\\
3D&3.006&2.647&-359&-0.229&-0.043&2.322\\
1P&1.802&1.777&-25&-0.382&-0.074&0.071\\
2P&2.328&2.213&-115&-0.295&-0.030&0.068\\
1F&2.479&2.402&-77&-0.277&-0.113&0.048\\
3P&2.766&2.472&-294&-0.249&-0.020&0.065\\2F&2.876&2.606&-270&-0.239&-0.083&0.047\\
4P& 3.146&2.731&-415&-0.219&-0.015&0.062\\
3F&3.233&2.821&-412&-0.213&-0.064&0.046\\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Comparison of the single-channel and flattening potential results for
S,P,D states of charmonium with existing experimental data.}
\vspace{0.5cm}
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
State & SC & flattening & exp \\
\hline
1S&3.068&3.066&3.067\\
2S&3.678&3.670&3.74(4)\\
3S&4.116&4.093&$ 4.040(3^3S_1)$\\
4S& 4.482&4.424&$ 4.421(4^3S_1)$\\
5S &4.806& 4.670& ?\\
\hline
1P&3.488&3.484& 3.525 $(^1P_1)$\\
2P&3.954&3.940&$\sim 3.93 (^3P_2)$\\3P& 4.338&4.299& -\\
\hline
1D &3.79&3.78&$3.77(1^3D_1)$\\
2D &4.189&4.165& 4.153(3)\\
3D& 4.537&4.475&-\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Comparison to other approaches}
Here we compare our string decay picture with lattice data and other
approaches. On the lattice the topic of string breaking and $Q\bar Q$
interaction above the inelastic threshold was actively explored during the last decade
(for the first attempts see \cite{33} and \cite{34}). A way to determine the
static potential in unquenched case was suggested in \cite{35} and several
spectra calculations, including sea-quark effects, were done in \cite{36}.
Recently careful studies of spectra of excited hadrons with open channels were
published in \cite{37} and \cite{38}, where the importance of inelastic
channels was stressed. The difficulty of existing lattice approaches is the
lack of a proper definition of a resonance state, which actually belongs to
the continuous spectrum and requires either a continuous-density description
or the use of the Weinberg Eigenvalue Method, described recently in the last
paper of Ref.~\cite{18}. The first approach is made possible by the use of a
finite volume, where continuum states are discretized and the resonance is
defined by the scattering phase \cite{33,34}. The second approach, to our
knowledge, has never been used on the lattice. As to a precise determination of the
resonance parameters on the lattice, one can see from \cite{37,38} that it
requires a lot of effort and is expected in the near future.
It is worth saying that the $Q\bar Q$ potential, calculated on
the lattice, is not sensitive to the effects of the virtual sea
quarks at least for distances $R\la 1$ fm (for the latest
calculation see \cite{39}). This result is in agreement with our
discussion of the structure of the unquenched Wilson loop in the
previous Section.
\begin{table}
\caption{Comparison of calculated mass $M_{tot}$ (in GeV) with
experimental data.}
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
$M_{tot}$ & Theory& Exp.& $M_{cog}(th)$\\
\hline
$M(\rho(1S))$& 0.749&$\rho(0.775)$& 0.666\\
$M(\rho(2S))$& 1.519&$\rho(1.465)$& 1.479\\
$M(\rho(3S))$& 1.937&$\rho(1900)?$&1.849\\
$M(\rho(4S))$& 2.252&$\rho(2150)?$&2.166\\
$\bar M(1P)$& 1.25&$a_1(1230)$&\\
&&$f_1(1282)$&\\
$\bar M(2P)$& 1.82&$a_1(1647)? a_2(1732)?$&\\
&&$f_1(1815)?$&\\
$\bar M(3P)$& 2.14&$f_2(2157)$ mixing for $3^3P_2$&\\
$\bar M(4P)$&2.435&&1.65\\
$\bar M(1D)$& 1.66&$\rho_3(1690), \rho(1720)$&\\
&& mixing for $2^3S_1-1^3D_1$&\\
$\bar M(2D)$& 2.05&$\rho_3(1990)?$& 1.989\\
$\bar M(3D)$& 2.32&$\rho_3(2250)?, \rho_5(2330)?$&2.249\\
&&$(3D-2G$ mixing(?))&\\
$\bar M(1F)$& 1.96&$a_4(2000), f_4(2018)$&\\
$\bar M(2F)$& 2.24&$ f_4(2300)$&\\
$\bar M(3F)$& 2.50&&\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Introduction}
The no-arbitrage framework in mathematical finance
is not sufficient to provide
a unique price for a given contingent claim in an incomplete market;
it provides only a no-arbitrage pricing bound.
Since this bound is in general too wide to be useful
in financial practice, an alternative way to find
good candidate prices for contingent claims is needed.
As a method to give a sharper pricing bound,
the framework of no-good-deal has been discussed in much literature;
for example,
\cite{A11} \cite{B09} \cite{BL00} \cite{BS06} \cite{CGM01}
\cite{CH02} \cite{C03} \cite{CS00} \cite{JK01} \cite{KS07}
\cite{LPST05} \cite{S04}.
The no-arbitrage pricing bound for a claim is obtained by excluding prices
which enable either a seller or buyer to enjoy an arbitrage opportunity
by trading the claim and selecting a suitable portfolio strategy.
The price in a market should be consistent with this bound
to make the market viable.
On the other hand, an upper (resp.~a lower) good deal bound may be
interpreted as determined by
the seller's (resp.~the buyer's) attitude to the risk associated with
the claim.
This can be considered as a generalization of both the no-arbitrage
pricing principle and exponential utility indifference valuation.
Denote by $a(x)$ such an upper bound for a claim $x$.
The functional $a$ is supposed to have the following properties:
\begin{enumerate}
\item $a(0) = 0$,
\item $a(x)\leq a(y)$ if $x\leq y$,
\item $a(x+c)=a(x)+c$ for any $c\in \mathbb{R}$,
\item $a(\lambda x+(1-\lambda)y)\leq\lambda a(x)+(1-\lambda)a(y)$
for any $\lambda\in[0,1]$
\end{enumerate}
for any claims $x$ and $y$.
In the second property, the inequality $x \leq y$ is in the almost sure sense,
where we regard the claims as random variables.
In the third, the element $c \in \mathbb{R}$ stands for
a deterministic cash-flow.
The last one represents the risk-aversion of the seller
taking into account the impact of diversification.
In brief, we suppose that $\rho_a$ defined as $\rho_a(x):=a(-x)$
is a normalized convex risk measure.
If we impose additionally the positive homogeneity:
$a(\lambda x) = \lambda a (x)$ for all $x$ and $\lambda \geq 0$,
which implies the subadditivity:
$a(x+y) \leq a(x) + a(y)$ for all $x$ and $y$,
then $\rho_a$ becomes a coherent risk measure.
By the same sort of argument as above, a functional $b$ which refers to a
lower good deal bound is given by a normalized convex risk measure
$\rho_b$ as $b(x)=-\rho_b(x)$.
A good deal bound should be a subinterval of the no-arbitrage pricing bound,
so not every convex risk measure yields a good deal bound.
The aim of this paper is to characterize such a convex risk measure,
which we call a good deal valuation (GDV hereafter); we
define GDV as a normalized convex risk measure $\rho$ with the Fatou property
such that for any claim $x$,
the value $\rho(-x)$ lies in the no-arbitrage pricing bound of $x$.
This definition of GDV is given from sellers' viewpoint;
for a GDV $\rho$ and a claim $x$,
$a(x):= \rho(-x)$ serves as an ask price of $x$.
Nevertheless, it is easy to see that if $\rho$ is a GDV, then
$b := -\rho$ gives bid prices.
We impose the Fatou property as a natural continuity condition for good
deal bounds.
First we investigate equivalent conditions for the existence of a GDV.
Among others, we show that a GDV exists under a condition
weaker than the no-arbitrage one, which means that there may be
GDVs even if the underlying market admits an arbitrage opportunity.
Further we study equivalent conditions for a given $\rho$ to be a GDV.
In particular, we see that any GDV is given as a risk indifference price.
The concept of risk indifference price was initiated in \cite{X06}.
There is much literature on this topic
(\cite{ES10} \cite{KS07-2} \cite{OS09} among others).
Some of the above papers observe that a risk indifference price provides
a good deal bound.
Our assertion is that its reverse implication also holds true,
which seems a new insight.
As mentioned before, GDV may exist even in markets with free lunch.
We observe the equivalence between the no-free-lunch condition (NFL)
and the existence of a relevant GDV, that is
a relevant convex risk measure which is a GDV.
This could be considered as a version of Fundamental Theorem of
Asset Pricing (FTAP).
Moreover as a version of Extension Theorem, we see that
the relevance of a GDV is equivalent to that
the extended market by the GDV satisfies NFL.
We see also that the relevance is equivalent to the no-near-arbitrage
condition (NNA) introduced by \cite{S04}.
We give an example (Example~\ref{ex5-1})
which shows that NFL for the original market
does not ensure NNA in general for a given GDV.
We investigate conditions under which any GDV is relevant,
and illustrate some examples related to this topic.
Now we mention the preceding results on FTAP from the viewpoint of
good deal bound.
Kreps \cite{K81} introduced NFL and proved FTAP as well as Extension Theorem.
\v{C}ern\'y and Hodges \cite{CH02} established the
framework of good deal bound and gave a version of Extension Theorem.
Jaschke and K\"uchler \cite{JK01}
showed that good deal bounds are essentially equivalent to
coherent risk measures and gave a variant of FTAP.
Staum \cite{S04} extended their results to the noncoherent case.
Bion-Nadal~\cite{B09} introduced a dynamic version and gave an
associated FTAP.
In \cite{JK01} and \cite{S04}, an acceptance set reflecting an investor's
preference is given first, and a convex risk measure induced by it
is considered as a functional describing a good deal bound.
Our approach is different, although we treat very similar problems.
In our study, a convex risk measure is given first,
and necessary and sufficient conditions for the given convex risk measure
to be a GDV is discussed.
This approach is in the same spirit as \cite{B09}.
Our results provide a deeper understanding of a convex risk measure
as a pricing functional in a market.
Although our framework appears to be static, an extension to the dynamic
framework of \cite{B09} can be done in a straightforward manner.
A detailed comparison with \cite{S04} and \cite{B09} will be given in
Remarks~\ref{rem-thm2-2}, \ref{staumrem} and \ref{bionnadalrem}.
In Section 2, we describe our model and prepare notation.
In particular, we introduce the definitions and some basic properties of
superhedging cost and risk indifference price.
Main results are given in Sections 3 and 4.
\setcounter{equation}{0}
\section{Preliminaries}
Here we introduce our framework and several basic results.
\subsection{The Orlicz space}
Let $(\Omega, \calF, \mathbb{P})$ be a complete probability space.
The Orlicz space $L^{\Psi}$ with Young function $\Psi$ is defined as
the set of the random variables $X$ such that there exists $c > 0$,
\begin{equation*}
\mathbb{E}[\Psi(cX)] < \infty.
\end{equation*}
Here we call
$\Psi : \mathbb{R} \to \mathbb{R}\cup \{\infty\}$
a Young function if it is an even convex function with
$\Psi(0)=0$, $\Psi (x) \uparrow \infty$ as $x \uparrow \infty$ and
$\Psi(x) < \infty$ for $x$ in a neighborhood of $0$.
It is a Banach lattice with the gauge norm
\begin{equation*}
\|X\|:= \inf\{c > 0; \mathbb{E}[\Psi(X/c)] \leq 1 \}
\end{equation*}
and pointwise ordering in the almost sure sense.
In the case of $\Psi = \Psi_{\infty}$:
\begin{equation*}
\Psi_{\infty}(x) := \begin{cases} 0 & \text{ if } |x| \leq 1, \\
\infty & \text{ otherwise }
\end{cases}
\end{equation*}
we have $L^\Psi = L^\infty$. Further, for $\Psi_p(x) := |x|^p$ with
$p\geq 1$, we have $L^{\Psi_p} = L^p$.
The Orlicz heart $M^\Psi$ is a subspace of $L^\Psi$ defined as
\begin{equation*}
M^\Psi := \{ X \in L^\Psi| \mathbb{E}[\Psi(cX)] < \infty \text{ for all }
c > 0\}.
\end{equation*}
In this paper we consider the set of the future cash-flows $L$ to be either
$L^\Psi$ or $M^\Psi$
with a fixed Young function $\Psi$.
This specification is justified by noting that $L$ is
a linear space of random variables with a natural ordering,
and is sufficiently general in that
it incorporates the $L^p$ spaces with $1 \leq p \leq \infty$.
More importantly, a Young function $\Psi$ may be connected to a utility
function $u$ as $\Psi(x) = -u(-|x|)$,
and then $L$ becomes a suitable space where expected utility
maximization is considered (see e.g., \cite{BF09}).
Note that the case of exponential utility is covered.
Our treatment and results do not depend on a specific choice of $\Psi$.
This generality is indeed necessary to derive a conclusion which does
not depend on a specific choice of utility function.
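For instance, for the exponential utility $u(x)=1-e^{-\alpha x}$ with $\alpha>0$, this prescription gives
\begin{equation*}
\Psi(x) = -u(-|x|) = e^{\alpha |x|}-1,
\end{equation*}
which is a finite-valued Young function, and $L^\Psi$ then consists of the random variables possessing a finite exponential moment of some order.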
Let $M \subset L$ be the set of the $0$-attainable claims.
Each element of $M$ represents a future payoff
which investors can super-replicate with $0$ initial endowment.
Simultaneously, $M$ might be regarded as the set of strategies which
investors can take.
We suppose that $M$ is a convex cone including $L_-$,
where we denote $L_+:=\{x\in L \,|\, x\geq0\}$ and $L_-:=\{x\in L \,|\, x\leq0\}$.
Let $L^\ast_+$ be the set of all positive linear functionals on $L$.
Remark that any element of $L^\ast_+$ is continuous by the
Namioka-Klee theorem (see \cite{BF09} for an extended result).
The both cases of $L=L^\Psi$ and $L=M^\Psi$
are treated in a unified way in the following.
Let $L^\dagger := L^{\Psi^\dagger}$, where $\Psi^\dagger$ is
the complimentary function of $\Psi$ defined as
\begin{equation*}
\Psi^\dagger(y) := \sup_{x \in \mathbb{R}}\{xy-\Psi(x)\}.
\end{equation*}
Define a set of probability measures
$\calP :=\{Q \ll \mathbb{P}|\mathrm{d}Q/\mathrm{d}\mathbb{P}\in L^\dagger\}$.
Further, let $\ol{L^*} :=\l\{g \in L^*_+ | g(1)=1,
g(m) \leq0\mbox{ for any }m\in M\r\}$,
$\calQ :=\{Q\in\calP| \mathrm{d}Q/\mathrm{d}\mathbb{P}\in\ol{L}^*\}$,
and $\calQ^e := \{Q\in\calQ|Q\sim \mathbb{P}\}$.
For $Q \in \calP$, denote by $\mathbb{E}_Q$ the corresponding
expectation operator.
By Young's inequality:
\begin{equation*}
\frac{xy}{ab} \leq \Psi(\frac{x}{a}) + \Psi^\dagger(\frac{y}{b})
\end{equation*}
for any $x, y \in \mathbb{R}$ and $a,b >0$,
the operation $\mathbb{E}_Q$ enables us to identify $\calP$ with a subset of
$L^\ast_+$.
\subsection{Convex risk measure}
Here we collect several notions and
results on convex risk measures which we utilize in this paper.
A convex risk measure $\rho$ is
a $(-\infty,+\infty]$-valued functional on $L$ satisfying
\begin{description}
\item[properness:] $\rho(0) < \infty$,
\item[monotonicity:] $\rho(x)\geq\rho(y)$
if $x\leq y$,
\item[cash-invariance:] $\rho(x+c)=\rho(x)-c$
for any $c\in\mathbb{R}$,
\item[convexity:] $\rho(\lambda x+(1-\lambda)y)
\leq\lambda\rho(x)+(1-\lambda)\rho(y)$
for any $\lambda\in[0,1]$,
\end{description}
for any $x$, $y\in L$.
A convex risk measure $\rho$ is a
{\bf coherent risk measure} if it satisfies in addition,
\begin{description}
\item[positive homogeneity:]
$\rho(cx)=c\rho(x)$ for any $x\in L$ and any $c>0$.
\end{description}
\begin{thm}[Biagini and Frittelli \cite{BF09}] \label{repfa}
Let $\rho$ be a convex risk measure. Then,
\begin{equation*}
\rho(-x) = \max_{g \in L^\ast_+, g(1)=1}\{g(x) - \rho^*(g)\}
\end{equation*}
for $x \in \mathrm{Int}\{\rho < \infty\}$, where for $g \in L^{\ast}_+$,
\begin{equation*}
\rho^*(g) := \sup_{x \in L}\{ g(x) - \rho(-x)\}.
\end{equation*}
\end{thm}
\noindent
A convex risk measure $\rho$ is said to have {\bf the Fatou property} if
for any increasing sequence $\{x_n\} \subset L $ with
$x_n \uparrow x_{\infty}$ a.s., $\rho(-x_n) \uparrow \rho(-x_{\infty})$.
Denote by $\calR$ the set of all convex risk measures
with $\rho(0)=0$ and the Fatou property.
\begin{thm}[Biagini and Frittelli \cite{BF09}]\label{BF}
For $\rho \in \calR$, we have for $x \in L$,
\begin{equation}
\label{eq-repre1}
\rho(x)=\sup_{Q \in\calP}\l\{ \mathbb{E}_Q[-x]-\rho^{\ast}(Q)\r\}.
\end{equation}
\end{thm}
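A standard example fitting Theorem~\ref{BF} is the entropic risk measure: for $\gamma>0$,
\begin{equation*}
\rho(x)=\frac{1}{\gamma}\log \mathbb{E}[e^{-\gamma x}],
\qquad
\rho^\ast(Q)=\frac{1}{\gamma}H(Q|\mathbb{P}),
\end{equation*}
where $H(Q|\mathbb{P}):=\mathbb{E}_Q[\log(\mathrm{d}Q/\mathrm{d}\mathbb{P})]$ is the relative entropy, so that (\ref{eq-repre1}) reduces to the classical Donsker--Varadhan variational formula.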
A convex risk measure $\rho$ is said to be {\bf finite} if
$\rho(x) < \infty$ for all $x \in L$.
\begin{rem}
In the case of $L=M^\Psi$,
it is known that $L^\dagger$ coincides with the dual of $L$ and
the supremum in (\ref{eq-repre1}) is attained.
Moreover, every finite convex risk measure has the Fatou property.
See \cite{BF09} for the detail.
The finiteness condition cannot be dropped as we see in
Example~\ref{ex1} below.
If
$\Psi$ satisfies the $\Delta_2$ condition: there exist $t_0 > 0$ and
$K > 0$ such that
$\Psi(2t) \leq K \Psi(t)$ for any $t \geq t_0$,
then we have $L^\Psi = M^\Psi$.
For $ p \in [1,\infty)$, $L^p$ is an example of such cases.
\fin
\end{rem}
\noindent
A convex risk measure $\rho$ is said to have {\bf the Lebesgue property} if
for any sequence $\{x_n\} \subset L$
with $\sup_n\|x_n\|_{\infty} < \infty $ and
$x_n \to x_{\infty}$ a.s., it holds that $\rho(x_n) \to \rho(x_{\infty})$ as
$n \to \infty$.
Here $\|\cdot\|_{\infty}$ refers to the $L^\infty$ norm.
This definition was introduced in \cite{JST} for the $L=L^\infty$ case.
Since any continuous linear
functional on $L$ can be decomposed into the sum of an element of
$L^\dagger \subset L^1$
and a purely finitely additive signed measure (see [23]),
the same argument as in the proof of Theorem~2.4 in \cite{JST}
applies, and we have the following result
with the aid of Theorem~\ref{repfa} above.
\begin{thm}\label{Leb}
For a finite convex risk measure $\rho$, the following are equivalent:
\begin{enumerate}
\item $\rho$ has the Lebesgue property.
\item for any $\alpha > 0$ and a sequence of measurable sets $A_n$ with
$P(A_n) \to 0$, it holds that
$\rho(-\alpha 1_{A_n}) \to 0$ as $n \to \infty$.
\item for any $c > 0$, the set $\{g \in L^{\ast}_+; \rho^*(g) \leq c\}$ is a
uniformly integrable subset of $L^\dagger$
and for any $x \in L$, it holds that
\begin{equation}\label{repL}
\rho(-x) = \max_{Q \in \calP}\{\mathbb{E}_Q[x]-\rho^\ast(Q)\}.
\end{equation}
\end{enumerate}
\end{thm}
\noindent
Note that the Fatou property follows from the Lebesgue property
by (\ref{repL}).\\
\noindent
A convex risk measure is said to be {\bf relevant } if $\rho(-z) > 0$
for any $z \in L_+\setminus \{0\}$.
The relevance was introduced in \cite{Delrel} as
a condition for coherent risk measures with the Fatou property to be
represented as (\ref{eq-repre1}) with a set of equivalent probability
measures instead of $\calP$.\\
\subsection{Superhedging cost}
Here we discuss superhedging cost.
Define a functional $\rho^0$ on $L$ as
\begin{equation}
\label{eq-rho0}
\rho^0(x):=\inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$ such that }
c+m+x\geq0\}.
\end{equation}
Since $\rho^0(-x)$ represents the superhedging cost for a claim $x$,
it gives the upper no-arbitrage pricing bound for $x$. In fact
if a seller could sell $x$ with a price greater than $\rho^0(-x)$,
then she could enjoy an arbitrage opportunity by taking a suitable
strategy from $M$.
By the same reasoning the lower no-arbitrage pricing bound for $x$
is given by $-\rho^0(x)$.
\begin{lem}
\label{lem0}
The superhedging cost $\rho^0$ is $(-\infty,\infty]$-valued if and only
if $\ol{L}^*\neq\emptyset$.
If $\rho^0$ is $(-\infty,\infty]$-valued, then it is a coherent risk
measure with
\begin{equation*}
(\rho^0)^\ast(g) =
\begin{cases}
0 & \text{ if } g \in \ol{L}^{\ast}, \\
\infty & \text{otherwise.}
\end{cases}
\end{equation*}
\end{lem}
\proof
Suppose that $\ol{L}^*\neq\emptyset$.
If there exists $x\in L$ with $\rho^0(x)=-\infty$, then
(\ref{eq-rho0}) implies that for any $c>0$, we can find $m^c\in M$
such that $-c+m^c+x\geq0$.
This gives $g(x) \geq c$, so that $g(x) = \infty$ for any $g\in\ol{L}^*$.
This is a contradiction, so $\rho^0$ is $(-\infty, \infty]$-valued.
Next, suppose that $\ol{L}^\ast = \emptyset$. Then there exists a sequence
$\{m_n\} \subset M$ such that $\|m_n-1\| \to 0$ as $n \to \infty$.
In fact if the closure $M^s$ of $M$ does not include $1$, then
the Hahn-Banach theorem implies the existence of a continuous linear
functional $\mu$ such that $\mu(1) > \sup_{m \in M^s} \mu(m)$.
The RHS is $0$ since $M^s$ is a cone.
That $L_- \subset M^s$ implies $\mu \in L^\ast_+$.
This means $\ol{L}^\ast \neq \emptyset$, which is a contradiction.
Now, taking a subsequence if necessary, we may suppose that
$\sum_{n =1}^\infty\|m_n-1\| < \infty$.
Then for $x := \sum_{n=1}^{\infty}|m_n-1| \in L$ and for
all $N \in \mathbb{N}$,
\begin{equation*}
x \geq \sum_{n=1}^N (1-m_n) = N - \sum_{n=1}^Nm_n,
\end{equation*}
which implies that $\rho^0(x)\leq -N$, and so $\rho^0(x) = -\infty$.
Now we see that $\rho^0$ is a coherent risk measure and
calculate $(\rho^0)^\ast$.
The convexity and positive homogeneity of $\rho^0$
follow from the assumption that $M$ is a convex cone.
The monotonicity and cash-invariance are obvious.
The fact that $\rho^0(0)\leq0$ implies that $(\rho^0)^\ast(g) \geq 0$
for any $g \in L^\ast_+$.
On the other hand, for any $\ve>0$ and $x \in L$,
we can find $m^\ve\in M$ so that $\rho^0(x)+\ve+m^\ve+x\geq0$.
Since $g(m^\ve)\leq0$ for $g \in \ol{L}^\ast$, we have
$\rho^0(x)+\ve\geq g(-x)$, which implies that
\[
\sup_{x\in L}\{g(-x)-\rho^0(x)\}\leq0.
\]
We therefore have $(\rho^0)^\ast(g)=0$ for $g \in \ol{L}^\ast$.
For $g\in L^\ast_+ \setminus\ol{L}^\ast$,
there exists $m\in M$ such that $g(m)>0$.
Since $M$ is a cone,
\[(\rho^0)^\ast(g) =
\sup_{x\in L}\{g(-x)-\rho^0(x)\}
\geq\sup_{m\in M}\{g(m)-\rho^0(-m)\}
\geq\sup_{m\in M}g(m)=\infty.
\]
\fin
\noindent
For later use, we define for $x \in L$,
\begin{equation*}
\wh{\rho^0}(x) :=
\begin{cases}
\sup_{Q \in \calQ} \mathbb{E}_Q[-x] & \text{ if } \calQ \neq \emptyset \\
-\infty & \text{ otherwise.}
\end{cases}
\end{equation*}
By definition $\wh{\rho^0}$ is a coherent risk measure on $L$ belonging to
$\calR$ if $\calQ \neq \emptyset$.
\begin{lem}
\label{lem0-2}
If $\calQ \neq \emptyset$,
then
$-\rho^0(x) \leq -\wh{\rho^0}(x) \leq
\wh{\rho^0}(-x) \leq \rho^0(-x)$ for any $x \in L$.
Moreover if $\calQ^e \neq \emptyset$, then $\wh{\rho^0}$ is relevant.
\end{lem}
\proof
For any $x \in L$ and $\ve>0$, there exists $m\in M$ such that
$\rho^0(x)+\ve+m+x\geq0$.
Then we have
$\mathbb{E}_Q[-x] \leq\rho^0(x)+\ve$
for any $Q\in\calQ$.
Since $Q\in\calQ$ and $\ve>0$ are arbitrary,
we have $\wh{\rho^0}(x)\leq\rho^0(x)$.
It suffices then to observe that $\wh{\rho^0}(x) + \wh{\rho^0}(-x) \geq 2
\wh{\rho^0}(0)=0$ by the convexity.
The relevance under $\calQ^e \neq \emptyset$ is shown by noting that
\begin{equation*}
\wh{\rho^0}(-x) = \sup_{Q \in \calQ}\mathbb{E}_Q[x] =
\sup_{Q \in \calQ^e}\mathbb{E}_Q[x].
\end{equation*}
In fact if there exists $Q_1 \in \calQ$ with
$\mathbb{E}_{Q_1}[x] > \sup_{Q \in \calQ^e} \mathbb{E}_Q[x]$,
then we have a contradiction since
for any $Q_0 \in \calQ^e$,
$\lambda Q_0 + (1-\lambda)Q_1 \in \calQ^e$ converges to $Q_1$
in $\sigma(L^\dagger,L)$ as $\lambda \downarrow 0$.
\fin
\noindent
The following example shows that
$\rho^0$ does not necessarily coincide with $\wh{\rho^0}$,
so is not always represented as (\ref{eq-repre1})
even though $\calQ$ is not empty.
\begin{ex}
\label{ex1}
Let $L =L^p$ with $p \in [1,\infty)$ and take the following set as $M$:
\[
M=\{-z+ \mathbb{E}_{Q^0}[z]|z\in L_+\}-L_+,
\]
where $Q^0 \in \calP$ is arbitrarily fixed.
Any element of $M$ is bounded from above.
Therefore by the definition of $\rho^0$, we have
$\rho^0(-z)=\infty$ for $z\in L_+$ which is not bounded from above.
It is clear that $Q^0\in\calQ$, so that
$\ol{L}^\ast \neq \emptyset$.
Therefore $\rho^0$ is a coherent risk measure by Lemma \ref{lem0}.
Moreover $\calQ =\{Q^0\}$ since for any $Q \in \calQ$,
we have
$\mathbb{E}_{Q^0}[z]\leq \mathbb{E}_Q[z]$ for any $z\in L_+$,
which implies that $Q=Q^0$.
Therefore $\rho^0$ cannot be represented as (\ref{eq-repre1}).
In fact we can prove that $\rho^0$ does not have the Fatou property.
Let $z\in L_+$ be unbounded from above.
Consider the increasing sequence $z_n = z\wedge n$, $n \in \mathbb{N}$.
Since $n-z_n\in L_+$, we have $z_n-\mathbb{E}_{Q^0}[z_n] \in M$.
It follows that $\rho^0(-z_n)\leq \mathbb{E}_{Q^0}[z_n]\to \mathbb{E}_{Q^0}[z]<\infty$,
while $\rho^0(-z) = \infty$.
\fin
\end{ex}
\subsection{Risk indifference prices}
Here we recall the notion of the risk indifference price.
Given a convex risk measure $\rho$, define a functional $I(\rho)$ on $L$ as
\begin{eqnarray} \label{eq-Ieta}
I(\rho)(x)
&:=\inf\l\{c\in\mathbb{R}|\inf_{m\in M}\rho(c+m+x)\leq\inf_{m\in M}\rho(m)\r\}
\nonumber \\
&= \inf\l\{c\in\mathbb{R}|\inf_{m\in M}\rho(m+x)-c\leq\inf_{m\in M}\rho(m)\r\}.
\end{eqnarray}
Then $I(\rho)(-x)$ describes the risk indifference seller's price for $x$
induced by $\rho$ as introduced in \cite{X06}.
The idea is explained as follows.
If a trader sells a claim $x$ with a price $c>I(\rho)(-x)$,
then she can find $\wh{m}\in M$ such that
$\rho(c+\wh{m}-x)\leq\inf_{m\in M}\rho(m)$.
This means that selling the claim at that price does not increase the risk
measured by $\rho$.
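As a minimal illustration (our own, for intuition only), take $\Omega = \{\omega_1, \omega_2\}$, $M = L_-$ and the worst-case measure $\rho(x) = \max(-x(\omega_1), -x(\omega_2))$. Adding $m \leq 0$ can only increase the worst case, so $\inf_{m \in M}\rho(m) = \rho(0) = 0$ and
\begin{equation*}
I(\rho)(x) = \inf_{m \in M}\rho(x+m) = \rho(x),
\end{equation*}
that is, the risk indifference price coincides with $\rho$ itself. This fixed-point behavior is no accident; see Item~4$^\prime$ of Theorem~\ref{thm2} below.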
The following lemma gives a representation of $I(\rho)$.
Denote $\check{\rho} :=\rho -\inf_{m\in M}\rho(m)$.
\begin{lem} \label{implicit}
Let $\rho$ be a convex risk measure.
If $I(\rho)$ is $(-\infty, \infty]$-valued,
then we have $\inf_{m \in M}\rho(m) \in \mathbb{R}$ and that
$I(\rho)$ is a convex risk measure with
\begin{equation*}
I(\rho)^\ast(g)
=
\begin{cases}
\check{\rho}^\ast(g)
= \rho^\ast(g) + \inf_{m \in M}\rho(m), & \text{ if }
g \in \ol{L}^\ast \\
\infty & \text{ otherwise.}
\end{cases}
\end{equation*}
If $I(\rho)\in\calR$ in addition, then $\calQ \neq \emptyset$ and
\begin{equation}
\label{eqIeta}
I(\rho)(x)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[-x]-\check{\rho}^\ast(Q)\}.
\end{equation}
\end{lem}
\proof
Since $\rho(0)<\infty$ and $0 \in M$,
we have $I(\rho)(0) = 0$ or $-\infty$ depending on whether
$\inf_{m\in M}\rho(m)$ is finite or $-\infty$. Therefore if
$I(\rho) > - \infty$ then $\inf_{m\in M}\rho(m)$ is finite and
$I(\rho)(x)=\inf_{m\in M}\rho(x+m)-\inf_{m\in M}\rho(m)
= \inf_{m \in M}\check{\rho}(x+m)$.
From this the cash-invariance and monotonicity of $I(\rho)$
are obvious. The convexity follows from the convexity of $M$.
Since $M$ is a cone, we have
\begin{equation*}
\begin{split}
I(\rho)^\ast(g)
=& \sup_{x \in L}\{g(-x) - I(\rho)(x)\} \\
=& \sup_{m \in M}\sup_{x \in L} \{g(-x) - \check{\rho}(x+m)\} \\
=& \sup_{m \in M}\{ g(m) + \check{\rho}^\ast(g)\} \\
=& \l\{
\begin{array}{ll}
\check{\rho}^\ast(g) & \mbox{if }g \in\ol{L}^\ast \\
\infty & \mbox{otherwise}.
\end{array}
\r.
\end{split}
\end{equation*}
By Theorem~\ref{BF}, we have (\ref{eqIeta})
if $I(\rho) \in \calR$ and in particular, $\calQ \neq \emptyset$.
\fin
\setcounter{equation}{0}
\section{Good deal valuations}
In this section we discuss conditions under which a convex risk measure
yields a good deal bound.
A good deal bound should be a subinterval of the no-arbitrage pricing
bound. We therefore introduce the following definition.
\begin{defn}
A convex risk measure $\rho\in\calR$ is said to be a good deal valuation (GDV)
if
\begin{equation}
\label{eqGDV}
\rho(-x)\in[-\rho^0(x), \rho^0(-x)]\mbox{ for any }x\in L.
\end{equation}
\end{defn}
\noindent
As mentioned in the Introduction,
the above definition is given from seller's viewpoint.
Nevertheless, (\ref{eqGDV}) is equivalent to
\begin{equation}
-\rho(x)\in[-\rho^0(x), \rho^0(-x)]\mbox{ for any }x\in L,
\end{equation}
which is from buyer's viewpoint.
In addition, $-\rho(x)\leq\rho(-x)$ for any $x\in L$ because
$\rho(x)+ \rho(-x)\geq 2 \rho(0)=0$ by the convexity.
For a GDV $\rho$, a good deal bound may be constructed as
$[-\rho(x), \rho(-x)]$, which is a subinterval of $[-\rho^0(x), \rho^0(-x)]$.
Note that the upper and lower bounds of a good deal bound may be
described by different GDVs.
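To make this concrete with an example of our own, let $\Omega = \{\omega_1, \omega_2\}$ and $M = L_-$, so that $\calQ$ consists of all probability measures $Q_q$ on $\Omega$ with $Q_q(\{\omega_1\}) = q$, $q \in [0,1]$. Then $\rho^0(-x) = \max(x(\omega_1), x(\omega_2))$ and $-\rho^0(x) = \min(x(\omega_1), x(\omega_2))$, and for any fixed $q$ the linear functional $\rho(x) := \mathbb{E}_{Q_q}[-x]$ is a GDV: it belongs to $\calR$ and satisfies $\rho(-m) = \mathbb{E}_{Q_q}[m] \leq 0$ for $m \in M$ (cf.\ Item~2 of Theorem~\ref{thm2} below), and it yields the degenerate good deal bound $[\mathbb{E}_{Q_q}[x], \mathbb{E}_{Q_q}[x]] \subset [-\rho^0(x), \rho^0(-x)]$.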
\subsection{Existence of good deal valuations}
Here we present a set of equivalent conditions for the existence of a GDV.
Denote by $\overline{M}$ the closure of $M$ in $\sigma(L,L^\dagger)$.
\begin{thm}
\label{thm1}
The following are equivalent:
\begin{enumerate}
\item $\calQ\neq\emptyset$.
\item There exists a GDV.
\item $\mathbb{P}(m>0)<1$ for any $m\in \ol{M}$.
\item $1 \notin \ \ol{M}$.
\end{enumerate}
\end{thm}
\proof
1$\Rightarrow$2: This is from Lemma~\ref{lem0-2}.
\noindent
2$\Rightarrow$1:
Let $\rho$ be a GDV.
Since $\rho(-m)\leq\rho^0(-m)\leq0$ for any $m\in M$,
\[
\rho^\ast(Q)=\sup_{x\in L}\{\mathbb{E}_Q[-x]-\rho(x)\}
\geq\sup_{m\in M}\{\mathbb{E}_Q[m]-\rho(-m)\}
\geq\sup_{m\in M}\mathbb{E}_Q[m].
\]
Then the cone property of $M$ implies that $\rho^\ast(Q)=+\infty$
for any $Q\in\calP\backslash\calQ$.
If $\calQ$ is empty, then $\rho$ is identically $-\infty$ by
(\ref{eq-repre1}), which contradicts $\rho \in \calR$.
\noindent
1$\Rightarrow$3:
If there exists $m\in \ol{M}$ such that $\mathbb{P}(m>0)=1$, then
we have $\mathbb{E}_Q[m]>0$ for any $Q\in\calP$, and so
$\calQ=\emptyset$.
\noindent
3$\Rightarrow$4: This holds true clearly.
\noindent
4$\Rightarrow$1:
Since $1 \notin \ol{M}$,
the Hahn-Banach theorem implies that there exists $z\in L^\dagger$ such that
\begin{equation}
\label{eqHB}
\sup_{m\in \ol{M} }\mathbb{E}[zm]< \mathbb{E}[z].
\end{equation}
We have
$\sup_{m\in\overline{M}}\mathbb{E}[zm]=0$
because $ 0 \in M$ and $\ol{M}$ is a cone.
Since $L_-\subset \ol{M}$, we have then that $z \in L^\ast_+ \cap L^\dagger$,
so that $z/\mathbb{E}[z] \in \calQ$.
\fin
\noindent
Condition~3 in the above theorem is weaker than the no-arbitrage condition.
This means that a GDV may exist even if there is an arbitrage opportunity.
The following example shows that we cannot replace $\ol{M}$ with $M$
in Conditions 3 and 4.
\begin{ex}
We take the Lebesgue measure space on $(0,1]$ as the underlying probability
space $(\Omega,\calF,\mathbb{P})$.
Let $u$ be the random variable given by $u(\omega):=\omega$, and
$M$ be given by $\{cu|c\geq0\}-L_+$.
We can observe several interesting facts about this example:
\begin{enumerate}
\item We consider the following two conditions:
\begin{enumerate}
\item $\mathbb{P}(m>0)<1$ for any $m\in M$,
\item $1 \notin M$.
\end{enumerate}
This example satisfies (b), but does not satisfy (a).
Replacing $M$ by $\ol{M}$,
the two conditions become equivalent by Theorem~\ref{thm1}.
\item Since $1 \notin M$, we have $\rho^0(0)=0$.
Therefore if we take $L = L^\infty$,
then $\rho^0$ is a finite coherent risk measure.
In fact for any $x \in L^\infty$,
$-\|x\|_{\infty} = \rho^0(\|x\|_{\infty}) \leq \rho^0(x) \leq
\rho^0(-\|x\|_{\infty}) = \|x\|_{\infty}$ by monotonicity.
On the other hand,
$\rho^0$ is not a convex risk measure on $L=L^p$ with $p \in [1,\infty)$
since $\ol{L}^* = \calQ$ is empty.
Note that for $x(\omega):=\log\omega$, we have $\rho^0(-x)=-\infty$.
\item Notice that $\calQ$ is empty
even though Condition (b) above holds.
We therefore need to take the closure of $M$ in Condition 4
of Theorem \ref{thm1}.
In fact, considering the sequence $m_n:=(nu)\wedge1$,
$m_n$ converges to $1$, and so
this example satisfies neither Condition 3 nor Condition 4.
\end{enumerate}
\fin
\end{ex}
\subsection{Equivalent conditions for good deal valuations}
Here we present conditions for a given $\rho$ to be a GDV.
The main contribution of the following theorem
is to show the equivalence between GDVs and risk indifference prices.
\begin{thm}
\label{thm2}
For any $\rho\in\calR$, the following conditions are equivalent:
\begin{enumerate}
\item $\rho$ is a GDV.
\item $\rho(-m)\leq0$ for any $m\in M$.
\item There exists a function $c : \calQ \to \mathbb{R}$ such that
for any $x \in L$,
$$\rho(x)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[-x]-c(Q)\}.$$
\item There exists $\eta\in\calR$ such that $\rho=I(\eta)$.
\item[4$^\prime$.] $\rho=I(\rho)$, that is, $\rho$ is a fixed point of $I$.
\item $\rho(-x) \in [-\wh{\rho^0}(x),\wh{\rho^0}(-x)]$ for any $x \in L$.
\item $\{\rho^0 \leq 0\} \subset \{\rho \leq 0\}$.
\item $\calQ\supset \{Q\in\calP|\rho^\ast(Q)<+\infty\}$.
\item There exists a convex set $A \subset L$ including
$0$ with $A+L_+\subset A$ and
$A\cap\mathbb{R}=\mathbb{R}_+$ such that for any $x \in L$,
\begin{equation}
\label{eqrhoCA}
\rho(x)=\inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$
such that }c+m+x\in A\}.
\end{equation}
\end{enumerate}
\end{thm}
\proof
1$\Rightarrow $2: This is because $\rho(-m) \leq \rho^0(-m)\leq 0$
for any $m \in M$ by the definitions of GDV and $\rho^0$.
\noindent
2$\Rightarrow$7:
We have
\[
\rho^\ast(Q)=\sup_{x\in L}\{\mathbb{E}_Q[-x]-\rho(x)\}
\geq\sup_{m\in M}\{\mathbb{E}_Q[m]-\rho(-m)\}
\geq\sup_{m\in M}\mathbb{E}_Q[m].
\]
Since $M$ is a cone, we have
$\rho^\ast(Q)=\infty$ for any $Q\in \calP\setminus\calQ$.
\noindent
7$\Rightarrow$3: This is from Theorem~\ref{BF}.
\noindent
3$\Rightarrow$4$^\prime$ and 4:
Since $\rho \in \calR$, we have
\[
\rho(-m)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[m]-c(Q)\}
\leq-\inf_{Q\in\calQ}c(Q)= \rho(0)=0
\]
for any $m\in M$.
Then, by the convexity,
we have $\rho(m) + \rho(-m) \geq 2\rho(0)=0$ and so,
$\inf_{m \in M }\rho(m) = 0$.
Therefore,
\begin{equation}
\label{eq3to4}
I(\rho)(x)=\inf_{m\in M}\rho(m+x)-\inf_{m\in M}\rho(m) \leq \rho(x)
\end{equation}
and
\begin{equation*}
I(\rho)(x) = \inf_{m \in M}\sup_{Q \in \calQ}\{ \mathbb{E}_Q[-m-x] -c(Q) \}
\geq \sup_{Q\in\calQ}\{\mathbb{E}_Q[-x]-c(Q)\} = \rho(x).
\end{equation*}
\noindent
4$\Rightarrow $5: By Lemma~\ref{implicit},
$\rho=I(\eta)$ is represented as
\[
\rho(x)=\sup_{Q\in\calQ}\l\{\mathbb{E}_Q[-x]-\check{\eta}^\ast(Q)\r\}.
\]
Since $\rho(0) = 0$, we have $\check{\eta}^\ast(Q) \geq 0$.
Therefore,
\[
\wh{\rho^0}(-x)
= \sup_{Q\in\calQ}\mathbb{E}_Q[x]
\geq\sup_{Q\in\calQ}\l\{\mathbb{E}_Q[x]-\check{\eta}^\ast(Q)\r\}
= \rho(-x)
\]
for all $x \in L$.
It suffices then to recall that $\rho(x) + \rho(-x) \geq 2\rho(0) = 0$
by the convexity.
\noindent 5$\Rightarrow$1:
This is from Lemma~\ref{lem0-2}.
\noindent
3$\Rightarrow$6:
For any $x\in\{\rho^0\leq0\}$, Lemma~\ref{lem0-2} implies that
$\sup_{Q\in\calQ}\mathbb{E}_Q[-x]=\wh{\rho^0}(x)\leq0$.
We have then
\[
\rho(x)= \sup_{Q\in\calQ}\{\mathbb{E}_Q[-x]-c(Q)\}
\leq\sup_{Q\in\calQ}\{-c(Q)\}=\rho(0)=0.
\]
\noindent
6$\Rightarrow$2:
This is because $\rho^0(-m)\leq 0$ by definition.
\noindent
4$^{\prime} \Rightarrow$8:
Taking $A = \{\rho \leq 0\}$ and noting that $\inf_{m\in M}\rho(m)=0$, we have
\begin{eqnarray*}
\rho(x)&=& I(\rho)(x)=\inf_{m\in M}\rho(m+x)
=\inf\{c\in\mathbb{R}|\inf_{m\in M}\rho(m+x)\leq c\} \\
&\leq&\inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$
such that }\rho(m+x)\leq c\} \\
&=& \inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$
such that }c+m+x\in A\} \\
&\leq&\inf\{c\in\mathbb{R}| c+x\in A\}=\rho(x).
\end{eqnarray*}
\noindent
8$\Rightarrow$2: This is obvious.
\fin
\begin{rem}
\label{rem-thm2-2}
Denote by $\rho_A$ the RHS of (\ref{eqrhoCA}).
In \cite{JK01} and \cite{S04},
the set $A$ is given as an acceptance set and
$\rho_A$ is considered as a functional describing a good deal bound.
Therefore they appear to treat only a special class of convex risk measures,
but Theorem~\ref{thm2} shows that this is in fact the only class
giving good deal bounds.
The representation of a GDV as $\rho_A$ is important in
that it implies robustness of the GDV to the quantitative specification of
the investor's risk preference.
Notice however that $\rho_A$ is not necessarily normalized.
When working with $\rho_A$, the condition defining a GDV is equivalent to
the no-cashout condition (NC) introduced in \cite{S04}:
$\rho_A(-x)\geq-\rho^0(x)$ for any $x\in L$.
In fact for any $x \in L$,
\begin{eqnarray*}
\rho^0(x)
&=& \inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$ such that }
c+m+x\in L_+\} \\
&\geq&\inf\{c\in\mathbb{R}|\mbox{ there exists $m\in M$ such that }
c+m+x\in A\} \\
&=& \rho_A(x),
\end{eqnarray*}
that is, the upper estimate for $\rho_A(-x)$ holds automatically.
The convexity of $\rho_A$ implies that NC is equivalent to $\rho_A(0)=0$.
Theorem~6.1 (0th FTAP) of \cite{S04} states, in a more abstract setting,
a condition under which $\rho_A(0)=0$.
\fin
\end{rem}
\noindent
As mentioned in the Introduction,
many papers (\cite{ES10}, \cite{KS07-2}, \cite{OS09}, \cite{X06}, \dots)
have treated risk indifference prices, and some of them showed that
a risk indifference price yields a good deal bound.
On the other hand, Theorem~\ref{thm2} shows that a GDV is always
a risk indifference price.
It therefore supports the use of the operator $I$ in constructing
a good deal bound.
Note, however, that we used the fact that a GDV has the Fatou property
by definition.
It should be noted that $I(\rho)$ does not necessarily
have the Fatou property even if $\rho \in \calR$;
in other words, the operation $I$ does not necessarily preserve the Fatou
property (see Example~\ref{fatou} below).
We now remark that $I$ does preserve the Lebesgue property, which,
like the Fatou property, can be regarded as a natural continuity
requirement for good deal bounds.
\begin{prop}\label{Lebp1}
Let $\rho$ be a finite convex risk measure with
the Lebesgue property and suppose that
there exists $Q^0 \in \calQ$ such that $\rho^\ast(Q^0) < \infty$.
Then, $I(\rho)$ is a finite GDV with the Lebesgue property.
\end{prop}
\proof
By Theorem \ref{Leb} and the existence of $Q^0\in\calQ$
such that $\rho^\ast(Q^0)<\infty$, we have, for any $x\in L$ and $m\in M$,
\begin{eqnarray*}
\rho(x+m)&=& \max_{Q\in\calP}\{\mathbb{E}_Q[-x-m]-\rho^\ast(Q)\}
\geq \mathbb{E}_{Q^0}[-x-m]-\rho^\ast(Q^0) \\
&\geq&\mathbb{E}_{Q^0}[-x]-\rho^\ast(Q^0)>-\infty.
\end{eqnarray*}
Therefore $I(\rho)$ is $(-\infty,\infty]$-valued by (\ref{eq-Ieta}), and so
it is a convex risk measure by Lemma~\ref{implicit}.
Since $\rho$ is finite, so is $I(\rho)$ by (\ref{eq-Ieta}).
Moreover for any $m\in M$, we have
\begin{equation}\label{irm}
I(\rho)(-m)= \inf_{m^\prime\in M}\rho(-m+m^\prime)
-\inf_{m^\prime\in M}\rho(m^\prime)
\leq \inf_{m^\prime\in M}\rho(m^\prime)
-\inf_{m^\prime\in M}\rho(m^\prime)=0.
\end{equation}
Therefore by Theorem~\ref{thm2},
it only remains to show that $I(\rho)$ has the Fatou property.
By (\ref{repL}),
it suffices to see that $I(\rho)$ has the Lebesgue property.
Note that $I(\rho)(m)\geq0$ for any $m\in M$ by the convexity.
For any $\alpha>0$, $\epsilon > 0$ and a sequence of measurable sets $A_n$
with $P(A_n) \to 0$, we have that
\begin{equation}\label{An}
\begin{split}
0 \leq& \ I(\rho)(-\alpha1_{A_n})
= \inf_{m\in M}\rho(m-\alpha1_{A_n})-\inf_{m\in M}\rho(m)
\\
\leq & (1-\epsilon)\inf_{m \in M}\rho(\frac{m}{1-\epsilon}) +
\epsilon \rho(-\frac{\alpha}{\epsilon} 1_{A_n}) -
\inf_{m \in M}\rho(m) \\
\to & -\epsilon \inf_{m \in M}\rho(m)
\end{split}
\end{equation}
as $n\to\infty$ by the Lebesgue property of $\rho$.
Since $\epsilon$ is arbitrary, we conclude
the Lebesgue property of $I(\rho)$ by Theorem \ref{Leb}.
\fin
\begin{prop}
For a finite convex risk measure $\rho$,
the following are equivalent:
\begin{enumerate}
\item $\rho$ is a GDV with the Lebesgue property.
\item there exists a convex risk measure $\eta$
with the Lebesgue property, $\rho = I(\eta)$.
\end{enumerate}
\end{prop}
\proof
1$\Rightarrow$2: This is because $\rho = I(\rho)$ by
Theorem~\ref{thm2}.
\noindent
2$\Rightarrow$1:
By Lemma~\ref{implicit}, we have $\inf_{m \in M}\eta(m) \in \mathbb{R}$,
and so
\begin{equation*}
I(\eta)(x) = \inf_{m\in M}\eta(x+m) - \inf_{m \in M}\eta(m).
\end{equation*}
In particular we have (\ref{irm}) and (\ref{An}) with $\eta$ instead of
$\rho$. By the finiteness of $\rho = I(\eta)$, Theorem~\ref{Leb} can be
applied to have the result.
\fin
\begin{ex}\label{fatou}
Consider $L = L^\infty(\mathbb{R},\mathcal{F},\mathbb{P})$,
where $\mathbb{P}$ is a normal distribution on $\mathbb{R}$.
Let $Q \in \calP$ have a compact support and define a sequence
$\{Q_n\} \subset \calP$ by $Q_n(A) := Q(A-n)$ for $A \in \mathcal{F}$,
$n \in \mathbb{N}$.
Since $\{g \in L^\ast_+| g(1)=1\}$ is weak-* compact,
there exists a cluster point $\mu$ of $\{Q_n\}$.
Since $\{Q_n\}$ is not tight, $\mu \notin \calP$.
Consider $M = \{x \in L | \mu(x) \leq 0\}$.
Observe that $\ol{L}^\ast = \{\mu\}$.
In fact if there exists $\nu \in \ol{L}^\ast$ and $x \in L$ with
$\nu(x) > \mu(x)$, then $y:=x -\mu(x) \in M$ and $\nu(y) > 0$,
which is a contradiction.
Now consider $\rho \in \calR$ defined as
$\rho(-x) = \sup_{Q \in \calP}\mathbb{E}_Q[x]$.
Let us show that
$\rho^\ast(\mu)= 0$.
By $\rho(0) = 0$ we have $\rho^\ast(\mu)\geq 0$ and
$\rho^\ast(Q_n)= 0$.
If $\rho^\ast(\mu) > 0$, then there exists $x \in L$ such that
$\mu(x) > \sup_{Q \in \calP}\mathbb{E}_Q[x]$, which contradicts that
$\mu$ is a cluster point of $Q_n$.
For the same reason, we also have that for any $m \in M$ and $x \in L$,
$\rho(m+x) \geq \mu(-m - x) \geq - \mu(x)$, so that
$I(\rho)$ is finite.
By Lemma~\ref{implicit},
$I(\rho)^\ast(g)=\infty$ for any $g \in L^\ast_+ \setminus \ol{L}^\ast$, so
by Theorem~\ref{repfa}, we have $I(\rho)(-x) = \mu(x)$ for all $x \in L$.
To see that $I(\rho)$ does not have the Fatou property,
consider the increasing sequence $x_n := 1_{(-\infty,n)}$,
whose pointwise limit is $x_\infty = 1$.
Then $I(\rho)(-x_n) = 0$ for every $n$, while $I(\rho)(-x_\infty) = 1$.
\fin
\end{ex}
\subsection{Shortfall risk measures}
Here we treat shortfall risk measures as an application.
We presume an investor who sells a claim $x$.
When she sells $x$ with price $c$ and selects $m\in M$ as her strategy,
her final cash-flow is $c+m-x$,
and so its shortfall is $(c+m-x)\wedge0$.
In general, shortfall risk is defined as a weighted expectation of the
shortfall with a loss function.
A loss function is a continuous strictly increasing convex function
$l :\mathbb{R}_+ \to \mathbb{R}_+$ with $l(0)=0$.
This represents the seller's attitude towards risk.
To suppress the shortfall risk less than a certain level $\delta>0$
which she can endure, the least price she can accept is given as
\begin{equation}
\label{eqrhol}
\rho_l(-x):=\inf\{c\in\mathbb{R}|\mbox{ there exists }m\in M
\mbox{ such that }E[l((c+m-x)^-)]\leq\delta\}.
\end{equation}
As shown in \cite{A11} and \cite{FS02}, $\rho_l$ is a convex risk measure
and it has the Fatou property under mild conditions.
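For instance (an illustration we add, with the simplest admissible data), take the linear loss $l(t) = t$ and the trivial market $M = L_-$. Disposal can only increase the shortfall, so the infimum over $m$ is attained at $m = 0$ and
\begin{equation*}
\rho_l(-x) = \inf\{c \in \mathbb{R} \mid \mathbb{E}[(x-c)^+] \leq \delta\},
\end{equation*}
the cheapest selling price whose expected shortfall does not exceed $\delta$; in particular $\rho_l(0) = -l^{-1}(\delta) = -\delta < 0$.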
However, it is not a GDV as $\rho_l(0)\neq 0$:
\begin{prop}
No shortfall risk measure is a GDV.
\end{prop}
\proof
For any shortfall risk measure $\rho_l$, (\ref{eqrhol}) implies that
\begin{eqnarray*}
\rho_l(0)
&=& \inf\{c\in\mathbb{R}|\mbox{ there exists }m\in M
\mbox{ such that }E[l((c+m)^-)]\leq\delta\} \\
&\leq&\inf\{c\in\mathbb{R}|l(c^-)\leq\delta\}
=-l^{-1}(\delta)<0.
\end{eqnarray*}
Hence, $\rho_l\notin\calR$, and so $\rho_l$ is not a GDV.
\fin
\noindent
Now we show that a normalized shortfall risk measure can be a GDV.
Define $\wh{\rho_l}$ as $\wh{\rho_l}(x):=\rho_l(x)-\rho_l(0)$.
\begin{prop}
\label{prop-shortfall}
If $\wh{\rho_l}\in\calR$, then $\wh{\rho_l}$ is a GDV.
\end{prop}
\proof
In light of Theorem~\ref{thm2}, it suffices to see
$I(\wh{\rho_l}) = \wh{\rho_l}$.
Since $\wh{\rho_l}(m) \geq - \wh{\rho_l}(-m) \geq 0$ for $m \in M$,
we have $\inf_{m \in M}\wh{\rho_l}(m) = 0$, and so
$I(\wh{\rho_l})(x) = \inf_{m\in M}\rho_l(m+x) - \rho_l(0)$.
Now let us observe that $\inf_{m\in M}\rho_l(m+x)=\rho_l(x)$
for any $x\in L$.
$\inf_{m\in M}\rho_l(m+x)\leq\rho_l(x)$ holds clearly.
Fix $m\in M$ and $c>\rho_l(m+x)$ arbitrarily.
Then there exists $m^\prime\in M$ such that
$E[l((c+m^\prime+m+x)^-)]\leq\delta$.
Since $m^\prime+m\in M$, we have $c\geq\rho_l(x)$.
\fin
\setcounter{equation}{0}
\section{Relevant good deal valuations}
\subsection{Fundamental Theorem of Asset Pricing}
We have seen that the condition $\calQ \neq \emptyset$ is
equivalent to the existence of a GDV.
Example~\ref{ex-sec4} below shows that $\calQ \neq \emptyset$ is
not sufficient to rule out arbitrage opportunities in general.
\begin{ex}
\label{ex-sec4}
Let $A \in \calF$ with $P(A)\in(0,1)$, $m^\prime := 1_A$ and
$M=\{cm^\prime|c\geq0\}-L_+$.
Any probability measure $Q\in\calP$ with
$Q(A)=0$ is in $\calQ$.
On the other hand, $cm^\prime$ with $c>0$ brings an arbitrage opportunity.
\fin
\end{ex}
\noindent
Kreps~\cite{K81} showed that
$\calQ^e \neq \emptyset$ is equivalent to NFL, that is,
$\ol{M} \cap L_+ = \{0\}$.
Here we prove that
$\calQ^e \neq \emptyset$ is equivalent to the existence of a relevant GDV,
that is, a relevant convex risk measure which is a GDV.
\begin{thm}[FTAP]
\label{thm3}
The following are equivalent:
\begin{enumerate}
\item $\calQ^e \neq \emptyset$.
\item $\ol{M}\cap L_+=\{0\}$.
\item There exists a relevant GDV.
\end{enumerate}
\end{thm}
\proof
2$\Rightarrow$1:
For any $a,b \in \mathbb{R}$, the set $\{x \in L| a \leq x \leq b \}$
is compact in $\sigma(L,L^\dagger)$.
In fact if $L = L^\infty$, then $L^\dagger = L^1$ and
$\sigma(L,L^\dagger)$ is the weak-* topology.
The compactness then follows from the Banach-Alaoglu theorem.
It suffices then to notice that $L^\infty \subset L$, $L^\dagger \subset L^1$
as sets of random variables and the natural inclusion
$(L^\infty, \sigma(L^\infty,L^1))\to (L,\sigma(L,L^\dagger))$ is continuous.
Therefore we can prove the existence of an element of $\calQ^e$
in exactly the same manner as in the proof of Theorem 5.2.3 of \cite{DS06}.
\noindent
1$\Rightarrow$3: This is from Lemma~\ref{lem0-2}.
\noindent
3$\Rightarrow$2:
Let $\rho$ be a relevant GDV.
We have
$\rho(x)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[-x]-c(Q)\}$ by Item 3 of
Theorem~\ref{thm2}.
Since $\rho(-z) > 0$ for all $z \in L_+ \setminus \{0\}$ by the relevance, it
suffices to see that $\rho(-\ol{m}) \leq 0$
for any $\ol{m} \in\ol{M}$.
If there exists $\ol{m} \in \ol{M}$ with $\rho(-\ol{m}) > 0$,
there exists $Q \in \calQ$ such that $\mathbb{E}_Q[\ol{m}] > c(Q) \geq
\sup_{m \in M} \mathbb{E}_Q[m]$.
The last inequality is from the fact that
$\rho(-m) \leq 0$ for all $m\in M$.
This contradicts that $\ol{m}$ is in the closure of $M$
in $\sigma(L, L^\dagger)$.
\fin
\noindent
Now we give a set of
equivalent conditions for GDV to be relevant.
Let
\begin{eqnarray}
\label{eq-Mrho}
M^\rho&:=&\{x-\rho(-x)|x\in L, \rho(-x) < \infty \}-
L_+=\{x\in L|\rho(-x)=0\}-L_+ \nonumber \\
&=& \{x\in L|\rho(-x)\leq0\}.
\end{eqnarray}
Note that $M^\rho$ is a convex set including $M$ and interpreted as
the set of the $0$-attainable claims of an extended market where
an investor offers prices for all $x \in L$ by using $\rho$ as her
pricing functional.
In light of Theorem~\ref{BF}, $M^\rho$ is closed in $\sigma(L,L^\dagger)$.
Therefore NFL for this extended market is $M^\rho \cap L_+ = \{0\}$.
\begin{thm}
\label{thm4}
For a GDV $\rho$, the following are equivalent:
\begin{enumerate}
\item $\rho$ is relevant.
\item $-\wh{\rho^0}(x-z) < \rho(-x)$ for any $x \in L$ and
$z \in L_+\setminus \{0\}$.
\item $-\rho^0(x-z) < \rho(-x)$ for any $x \in L$ and
$z \in L_+\setminus \{0\}$.
\item $M^\rho \cap L_+ = \{0\}$.
\end{enumerate}
\end{thm}
\proof
1$\Rightarrow$2:
By the relevance and Theorem~\ref{thm2},
for any $z \in L_+\setminus \{0\}$, there exists $Q(z) \in \calQ$ such that
$\mathbb{E}_{Q(z)}[z] > \rho^\ast(Q(z))$. Therefore,
\begin{equation*}
-\wh{\rho^0}(x-z) = \inf_{Q \in \calQ}\mathbb{E}_Q[x-z]
\leq \mathbb{E}_{Q(z)}[x-z] < \mathbb{E}_{Q(z)}[x]-\rho^\ast(Q(z))
\leq \rho(-x).
\end{equation*}
\noindent
2$\Rightarrow$3: This is from Lemma~\ref{lem0-2}.
\noindent
3$\Rightarrow$1: For a given $z \in L_+\setminus \{0\}$, let $x=z$.
\noindent
1$\Rightarrow$4: This is because
$\rho$ separates $M^\rho$ and $L_+\setminus \{0\}$.
\noindent
4$\Rightarrow$1: If $\rho$ is not relevant, then there exists
$z \in L_+\setminus \{0\}$ such that $\rho(-z) =0$.
In particular $z \in M^\rho$, which is a contradiction.
\fin
\begin{rem}\label{staumrem}
Item~3 of Theorem~\ref{thm4} is the no-near-arbitrage condition (NNA)
introduced in \cite{S04}.
Theorem~6.2 of \cite{S04} states a condition under which
$\rho_A$ satisfies NNA.
Proposition~\ref{cor5} below may be regarded as its counterpart.
\fin
\end{rem}
\begin{prop}
\label{cor5}
Let $\rho$ be a GDV.
If there exists $Q_0 \in\calQ^e$ such that
$\rho^\ast(Q_0)=0$, then $\rho$ is relevant.
The reverse implication holds true if $\rho$ is coherent.
\end{prop}
\proof
The relevance is clear from Theorem~\ref{BF}.
The converse is the Halmos-Savage theorem (see e.g. \cite{Delrel}).
\fin
\noindent
Note that for $Q \in \calP$ and $\rho\in\calR$,
$\rho^\ast(Q)=0$ is equivalent to the condition that
$-\rho(x) \leq \mathbb{E}_Q[x] \leq \rho(-x)$ for all $x \in L$.
Therefore such $Q$ is interpreted as a consistent pricing kernel of the
extended market $M^\rho$.
The following example shows that the coherence in
the second assertion of Proposition~\ref{cor5} cannot be dropped.
In other words, there is no strictly positive consistent pricing kernel
in general even if $M^\rho$ satisfies NFL: $M^\rho \cap L_+ = \{0\}$.
\begin{ex}\label{exnoncoh}
Set $\Omega=\{\omega_1, \omega_2\}$, and $M=L_-$.
Denoting $q:=Q(\{\omega_1\})$, we can identify $Q \in \calQ$ with $q$.
From this viewpoint, $\calQ$ and $\calQ^e$ correspond to
$[0,1]$ and $(0,1)$ respectively.
Consider $\rho(-x)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[x]-c(Q)\}$
with $c(Q)=q^2$. Then we have $\rho^\ast(Q)=c(Q)$.
Denoting $z_i:=z(\omega_i)$ for $i=1,2$, we have
$\rho(-z)=\sup_{Q\in\calQ}\{\mathbb{E}_Q[z]-c(Q)\}
=\sup_{q\in[0,1]}\{qz_1+(1-q)z_2-q^2\}
=\sup_{q\in(0,1)}\{qz_1+(1-q)z_2-q^2\}>0$
for any $z\in L_+\setminus\{0\}$.
Thus, $\rho$ is a noncoherent relevant GDV.
On the other hand, there is no $q\in(0,1)$ with $c(Q)=0$.
\fin
\end{ex}
\begin{rem}\label{bionnadalrem}
In \cite{B09}, NFL refers to the condition that
\begin{equation*}
\overline{\mathrm{cone}(M^\rho)} \cap L_+ =\{0\},
\end{equation*}
where $\overline{\mathrm{cone}(M^\rho)}$ is the closure of
$\mathrm{cone}(M^\rho) = \{\lambda m; m \in M^\rho, \lambda \geq 0\}$
in $\sigma(L, L^\dagger)$, which is a different condition from
$M^\rho \cap L_+ = \{0\}$ unless $\rho$ is coherent.
This alternative definition of NFL made it possible to establish in \cite{B09}
the equivalence between NFL of $\rho$ and the existence of
$Q_0 \in \calQ^e$ with $\rho^\ast(Q_0)=0$.
In fact since
$\overline{\mathrm{cone}(M^\rho)}$ becomes a cone,
the same argument as in the proof of 2$\Rightarrow$1 of Theorem~\ref{thm3}
can be applied to obtain $Q_0 \in \calQ^e$ with $\mathbb{E}_{Q_0}[m] \leq 0$
for all $m \in M^\rho$.
Since $x - \rho(-x) \in M^\rho$ for all $x \in L$, we have
$\rho^\ast(Q_0)=0$.
Note however that
$\mathrm{cone}(M^\rho)$ does not have any interpretation as
the set of the 0-attainable claims in general. For instance,
in the model of the preceding example,
we can find $x\in L$ with $\rho(-x)\leq0$ and $\lambda > 0$ satisfying
$\rho(-\lambda x) > 0$.
Therefore, it seems inadequate, from an economic point of view,
to adopt such a definition of NFL.
Consequently, the existence of $Q_0$ with $\rho^\ast(Q_0)=0$
may not be considered as a necessary condition for $\rho$ to be a
reasonable pricing functional.
\end{rem}
\subsection{When are all good deal valuations relevant?}
As seen in Theorem \ref{thm4}, when we extend the underlying market $M$ to
$M^\rho$ by using a GDV $\rho$ as pricing functional,
the extended market $M^\rho$ continues to satisfy NFL if and only if
$\rho$ is relevant.
Therefore markets in which any GDV is relevant are stable
against such extensions of the market.
Here we study necessary and/or sufficient conditions
under which all (coherent) GDVs are relevant.
\begin{thm}
\label{thm5}
Suppose $\calQ^e\neq \emptyset$ and consider the following conditions:
\begin{enumerate}
\item Any GDV is relevant.
\item $\wh{\rho^0}(z)<0$ for any $z\in L_+\setminus\{0\}$.
\item $\calQ=\calQ^e$.
\item[3$^\prime$]
$\calQ=\calQ^e$ and $\calQ^e$ is $\sigma(L^\dagger,L)$-compact.
\item Any coherent GDV is relevant.
\end{enumerate}
Then, we have 1$\Leftrightarrow$2, 2$\Rightarrow$3, 3$^\prime\Rightarrow$2,
3$\Leftrightarrow$4.
\end{thm}
\proof
1$\Rightarrow$2:
Assume that there exists $z_0 \in L_+\setminus\{0\}$
such that $\wh{\rho^0}(z_0)=0$.
Then $\inf_{Q \in \calQ}\mathbb{E}_Q[z_0] = 0$,
so that we can define $\rho \in \calR$ as
\begin{equation*}
\rho(-x) = \sup_{Q \in \calQ}\{\mathbb{E}_Q[x]-\mathbb{E}_Q[z_0]\}.
\end{equation*}
This is a GDV by Theorem~\ref{thm2} but not relevant.
In fact $\rho(-z_0)=0$.
\noindent
2$\Rightarrow$1:
Let $\rho$ be a GDV. Then by Item~5 of Theorem~\ref{thm2},
$\rho(-z) \geq -\wh{\rho^0}(z) > 0$ for any $z \in L_+\setminus\{0\}$.
\noindent
2$\Rightarrow$3:
If $\calQ\neq\calQ^e$, then there exists $Q^*\in\calQ\backslash\calQ^e$.
Denoting $A=\{\mathrm{d}Q^*/\mathrm{d}\mathbb{P}>0\}$,
$\wh{\rho^0}(1_{A^c})= \sup_{Q\in\calQ}\mathbb{E}_Q[-1_{A^c}]
\geq \mathbb{E}_{Q^*}[-1_{A^c}]
=0$,
while $1_{A^c} \in L_+\setminus\{0\}$.
\noindent
3$^\prime\Rightarrow$2:
By compactness we have for any $z\in L_+\setminus\{0\}$,
\[
\wh{\rho^0}(z)=\sup_{Q\in\calQ}\mathbb{E}_Q[-z]
=\sup_{Q\in\calQ^e}\mathbb{E}_Q[-z]
=\max_{Q\in\calQ^e}\mathbb{E}_Q[-z]
<0.
\]
\noindent
3$\Rightarrow$4:
Any coherent GDV $\rho$ is represented as
$\rho(x)=\sup_{Q\in\wh{\calQ} }\mathbb{E}_Q[-x]$
for some convex set $\wh{\calQ}\subset\calQ = \calQ^e$.
Therefore $\rho$ is relevant.
\noindent
4$\Rightarrow$3:
If $\calQ\neq\calQ^e$ then we can
take $Q^*$ and $A$ in the same way as ``2$\Rightarrow$3".
Let
$\rho(x)=\sup_{Q\in\calQ, Q(A)=1}\mathbb{E}_Q[-x]$.
Then $\rho \in \calR$ since $Q^\ast(A)=1$.
By Theorem~\ref{thm2}, $\rho$ is a coherent GDV but not relevant
since $\rho(1_{A^c})=0$.
\fin
\noindent
The implications ``3$\Rightarrow$3$^\prime$",
``3$\Rightarrow$1 (or 2)" and ``2$\Rightarrow$3$^\prime$" in Theorem \ref{thm5}
do not hold in general, as the following counterexamples illustrate.
\begin{ex}
\label{ex5-1}
We give an example satisfying Item~3 of Theorem~\ref{thm5}
that satisfies neither Item~1 nor Item~3$^\prime$.
Set $\Omega=\mathbb{R}$, $L=L^\infty$ and
$\mathbb{P}(\mathrm{d}u)=\phi(u)\mathrm{d}u$,
where $\phi(u)$ is the standard normal density.
We consider the set of mixed normal distributions.
Let $V$ be the set of all probability measures on $(0,\infty)$,
\begin{equation*}
Q_\mu(\mathrm{d}u):=\int\frac{1}{\sqrt{v}}\phi(u/\sqrt{v})\mu(\mathrm{d}v)
\mathrm{d}u
\end{equation*}
for $\mu \in V$, and $\wh{\calQ} := \{Q_{\mu}| \mu \in V\}$.
Define $M$ as
\[
M=\{m\in L^\infty| \mathbb{E}_Q[m]\leq0\mbox{ for any }Q\in\wh{\calQ}\}.
\]
Note that all bounded odd functions are in $M$ and
$\wh{\calQ}\subset \calQ^e \subset\calQ$.
Now we show that $\wh{\calQ}$ is $\sigma(L^1,L^\infty)$-closed.
Let $\{\mu_n\} \subset V$ be a sequence with $Q_{\mu_n}\to Q$
in $\sigma(L^1, L^\infty)$.
Denote $y_w(u):=e^{iwu}$ for any $w$, $u\in\mathbb{R}$, where $i=\sqrt{-1}$.
We have
\[
\mathbb{E}_{Q_{\mu_n}}[y_w]=\int e^{-\frac{v}{2}w^2}\mu_n(\mathrm{d}v),
\]
which has the form of the Laplace transform of $\mu_n$.
Since $\mathbb{E}_{Q_{\mu_n}}[y_w]\to \mathbb{E}_Q[y_w]$ and
$\lim_{w \to 0}\mathbb{E}_Q[y_w]=1$,
the continuity theorem of Laplace transforms (see Theorem~XIII.1.2 of
\cite{Feller}) implies the existence of $\mu \in V$ such that
\begin{equation*}
\mathbb{E}_Q[y_w]=\int e^{-\frac{v}{2}w^2}\mu(\mathrm{d}v),
\end{equation*}
which is the characteristic function of an element of $\wh{\calQ}$.
Hence, $Q \in \wh{\calQ}$.
Note that $\wh{\calQ}=\calQ^e=\calQ$.
In fact if there exists $Q^\ast \in \calQ \setminus \wh{\calQ}$,
by the Hahn-Banach theorem there exists $x \in L$ such that
$\mathbb{E}_{Q^\ast }[x] > \sup_{Q \in \wh{\calQ}}\mathbb{E}_Q[x] =: \alpha$.
However, $x -\alpha \in M$ and $\mathbb{E}_{Q^\ast}[x-\alpha] > 0$,
which contradicts $Q^\ast \in \calQ$.
On the other hand, $\calQ$ is not compact.
In fact for the sequence $\mu_n:=\delta_{1/n}$ for $n\in \mathbb{N}$,
where $\delta_u$ is the Dirac measure concentrated at $u$,
$\{Q_{\mu_n}\}$ does not have a cluster point in $\wh{\calQ}$.
Finally, we construct a GDV $\rho$ which is not relevant.
Letting $y(u):=u^2$, we define $\rho$ as
$\rho(-x) = \sup_{Q \in \calQ}\{\mathbb{E}_Q[x]-c(Q)\}$ with
$c(Q)=\mathbb{E}_Q[y]$.
Obviously, we have $\rho(0)=0$ and $\rho(-y)=0$.
\fin
\end{ex}
\begin{ex}
Here we see that the implication ``2$\Rightarrow$3$^\prime$" in
Theorem~\ref{thm5} does not hold.
We modify Example \ref{ex5-1} as follows.
Let $\mu_0 \in V$ be fixed and
$\wh{\calQ}_0 := \{Q_{\nu}| \nu = (\mu_0 + \mu)/2, \mu \in V\}$.
By the same argument as in Example~\ref{ex5-1},
we can prove the closedness and noncompactness of $\wh{\calQ}_0$ and
that $\wh{\calQ}_0 = \calQ = \calQ^e$.
This model however satisfies Item~2 of Theorem \ref{thm5} since
\begin{equation*}
\wh{\rho^0}(z)
= \sup_{Q\in\wh{\calQ}_0 }\mathbb{E}_Q[-z]
= \frac{1}{2}\mathbb{E}_{Q_{\mu_0}}[-z]+\frac{1}{2}\sup_{\mu \in V}\mathbb{E}_{Q_\mu}[-z]
\leq\frac{1}{2}\mathbb{E}_{Q_{\mu_0}}[-z] < 0.
\end{equation*}
\fin
\end{ex}
\noindent
We conclude the paper with one more example,
a simple model taking transaction costs into account.
In the following example, a model satisfying Item~3$^\prime$ of
Theorem~\ref{thm5} is constructed.
\begin{ex}
Let $\Omega = \{\omega_0,\omega_1,\dots,\omega_n\}$ and
the Arrow-Debreu securities for the $n$ states
$\omega_1,\dots,\omega_n$ be tradable in a market subject to bid-ask spreads.
Denote by $a_{1,j}$, $a_{-1,j}$ the ask and bid prices for the state $\omega_j$
respectively for each $j=1,\dots, n$.
Let $D := \{-1,1\}^n$.
If $a_{-1,j} \geq 0$ for each $j$ and $\sum_ja_{1,j} \leq 1$,
then for any $d \in D$, a probability measure $Q_d$ on $\Omega$
is uniquely determined by $Q_d(\{\omega_j\}) = a_{d(j),j}$ for $j=1,\dots, n$,
and $Q_d(\{\omega_0\}) = 1 - \sum_{j=1}^na_{d(j),j}$.
Now let
\begin{equation*}
M = \{x \in L| \mathbb{E}_d[x]\leq 0 \text{ for all } d \in D\}
= \l\{x - \max_{d \in D}\mathbb{E}_d[x]| x \in L\r\} - L_+,
\end{equation*}
where $\mathbb{E}_d$ is the expectation under $Q_d$.
Note that any cash-flow $x \in L$ can be uniquely represented as a sum of
a constant and the Arrow-Debreu securities, and that
the price for replicating $x$ is
$\max_{d \in D}\mathbb{E}_d[x]$.
Therefore $M$ is actually the set of the $0$-attainable claims in this market.
By the same separation argument as in the preceding examples,
we can show
\begin{equation*}
\calQ = \l\{\sum_{d \in D} \lambda_d Q_d | \lambda_d \geq 0 \text{ for all }
d \in D \text{ and } \sum_{d \in D}\lambda_d = 1\r\}.
\end{equation*}
This set is compact because the set of $(\lambda_d)$ is a
finite dimensional simplex.
If $\sum_ja_{1,j} < 1$ in addition, then $\calQ = \calQ^e$ and so,
Item~3$^\prime$ of Theorem~\ref{thm5} is satisfied.
Consequently, any GDV in this market is relevant.
Remark that $\sum_ja_{1,j} < 1$ is a condition which requires
market makers not to offer a set of prices that leads to
an apparent arbitrage opportunity against themselves.
\fin
\end{ex}
\begin{center}
{\bf Acknowledgements}
\end{center}
The authors would like to thank Professors Freddy Delbaen and Martin Schweizer
for their valuable comments and suggestions.
This work was done when the authors were Visiting Professors of ETH Zurich.
Takuji Arai was supported by Scientific Research (C) No.22540149
from the Ministry of Education, Culture, Sports, Science and Technology of
Japan.
Masaaki Fukasawa was supported by Japan Science and Technology Agency,
CREST.
\section{Introduction}
\label{sec:intro}
In supervised learning, the term \emph{model selection} usually refers to the process of using validation data to tune hyperparameters. However, we are moving toward a world in which model selection refers to marketplaces of pre-trained deep learning models in which customers select from a vendor's collection of available models, often without the ability to run validation data through them or being able to change their hyperparameters. Such a marketplace paradigm is sensible because deep learning models have the ability to generalize from one dataset to another \cite{arpit2017closer, zhang2016understanding, kawaguchi2017generalization}. In the case of classifier selection, the use of data and decision boundary complexity measures, such as the critical sample ratio (the density of data points near the decision boundary), can be a helpful tool \cite{HoB2002,arpit2017closer}.
In this paper, we propose the use of persistent homology \cite{edelsbrunner2008persistent}, a type of topological data analysis (TDA) \cite{Carlsson2009}, to quantify the complexity of neural network decision boundaries. Persistent homology involves estimating the number of connected components and number of holes of various dimensions that are present in the underlying manifold that data samples come from. This complexity quantification can serve multiple purposes, but we focus on how it can be used as an aid for matching vendor pre-trained models to customer data. To this end, we must extend the standard conception of TDA on point clouds of unlabeled data, and develop new techniques to apply TDA to decision boundaries of labeled data.
In our previous work \cite{VarshneyR2015}, the only prior work we are aware of on TDA of decision boundaries, we use persistent homology to tune hyperparameters of radial basis function kernels and polynomial kernels. The contributions herein have greater breadth and theoretical depth as we detail below. A recent preprint also examines TDA of labeled data \cite{GussS2018}, but approaches the problem as standard TDA on separate classes rather than trying to characterize the topology of the decision boundary. In the appendix, we discuss how this approach can be fooled by the internal structure of the classes. There has also been theoretical work using counts of homological features, known as Betti numbers, to upper and lower bound the number of layers and units of a neural network needed for representing a function \cite{BianchiniS2014}. That work does not deal with data, as we do here. Moreover, its bounds are quite loose and not really usable in practice, similar in their looseness to the bounds for algebraic varieties \cite{Milnor1964, BasuPR2005} cited by \cite{VarshneyR2015} for polynomial kernel machines.
The main steps in a persistent homology analysis are as follows. We treat each data point as a node in a graph, drawing edges between nearby nodes, where nearby is defined according to a scale parameter. We form complexes from the simplices formed by the nodes and edges, and examine the topology of the complexes as a function of the scale parameter. The topological features, such as connected components and holes of various dimensions, that persist across scales are the ones that capture the underlying shape of the dataset. In all existing approaches to persistent homology, the scale parameter is a single global value that does not factor in the local scaling of the dataset, making the inference of Betti numbers from persistence brittle and difficult to automate.
Our main contributions are as follows:
\begin{enumerate}
\item We introduce a new simplicial complex construction called the labeled \v{C}ech complex that captures decision boundary topology. We provide theoretical conditions on the decision boundary and the data samples near the boundary that lead to the successful recovery of the homology of the decision boundary.
\item We propose a computationally efficient construction of decision boundary surfaces: the labeled Vietoris-Rips complex. We illustrate the need for local scaling to handle non-uniform sampling of data near the decision boundary and address this need by proposing a simplicial complex construction based on estimates of local scale using a k-nearest neighbors method.
\item We evaluate the merits of the above approaches using synthetic and real-world data experiments. Using synthetic data experiments, we show that the proposed approaches recover the homology even when there is extreme local scaling. Using the real-world application domains MNIST, FashionMNIST and CIFAR10, we show how these approaches can be used to evaluate the topological complexity of decision boundaries of deep neural network classifiers. Our main finding in terms of model selection can be summarized as follows: when choosing a pre-trained network, one whose topological complexity matches that of the dataset yields good generalization.
\end{enumerate}
We defer detailed background on persistent homology and simplicial constructions for \emph{unlabeled} point cloud data to the appendix. Throughout this work we assume the labels to be binary for simplicity; multi-class extensions can consider decision boundaries in one-vs-one, one-vs-all and Venn diagram constructions \cite{VarshneyW2010}.
\section{Labeled \v{C}ech Complex and Recovery Guarantees}
\label{sec:theory}
In this section, we introduce the labeled \v{C}ech (L\v{C}) complex and prove results on its use for recovering the homology of a decision boundary. The high-level idea is as follows: to recover the homology of a decision boundary, we must cover it such that the cover is its deformation retract. The practically- and computationally-oriented reader may safely proceed to Section~\ref{sec:top_dec_bound} after noting the definition of the decision boundary and the proposed (computationally intractable) L\v{C} complex.
\subsection{Decision Boundary Manifold}
Decision boundaries are hypersurfaces, surfaces of dimension $d-1$ in ambient spaces of dimension $d$, that divide a space into two classes. We define the overall probability space $\mathcal{Z}$ with the measure given by $\mu_z$ and the pdf $p_Z$. We assume two classes that can be conditioned from this space using the selector $C$; the pdfs being $p_X = p_{Z|C}(z|1)$ and $p_Y = p_{Z|C}(z|0)$. We denote the mixture probabilities as $p_C(0) = q$ and $p_C(1) = 1-q$, such that $p_Z(z) = p_{Z|C} (z|1) p_C(1) + p_{Z|C} (z|0) p_C(0)$. By the Neyman-Pearson rule, the decision boundary manifold is defined by $\mathcal{M} = \{z \in \mathcal{Z} \mid p_Y = p_X\}$.
Let us define the extent of the distribution where the two classes are mixed by the set
\begin{equation}
\mathcal{D} = \{z \in \mathcal{Z} | p_{Z|C}(z|0) > 0, p_{Z|C}(z|1) > 0 \}.
\end{equation} This is the set where both distributions have some mass. We also denote the set $\mathcal{G} = \{z \in \mathcal{Z} \mid (p_{Z|C}(z|0) = 0 \land p_{Z|C}(z|1) > 0) \lor (p_{Z|C}(z|0) > 0 \land p_{Z|C}(z|1) = 0) \}$, where one of the classes has zero mass and the other class has non-zero mass.
\subsection{Labeled \v{C}ech Complex}
The homology of a manifold can be recovered by drawing an appropriate random sample and computing a \v{C}ech complex on it.
\begin{definition}
\label{def:lc_complex}
An $(\epsilon, \gamma)$-labeled \v{C}ech complex, is a simplicial complex with a collection of simplices such that each simplex $\sigma$ is formed on the points in the set $S$ aided by the reference set $W$, when the following conditions are satisfied:
\begin{enumerate}
\item $\displaystyle \bigcap_{s_i \in \sigma} B_{\epsilon}(s_i) \neq \emptyset$, where $s_i$ are the vertices of $\sigma$.
\item $\forall s_i \in \sigma, \quad \exists w \in W$ such that $\|s_i - w\| \leq \gamma$.
\end{enumerate}
\end{definition}
This definition matches the usual \v{C}ech complex, but introduces the additional constraint that a simplex is induced only if all its vertices are close to some point in the reference set $W$. The second condition also implies that $W$ is $\gamma$-dense in the vertices of the simplices of the $(\epsilon, \gamma)$-L\v{C} complex.
\subsection{Recovery Guarantees}
Now, we derive sufficient sampling conditions so that the L\v{C} complex is homotopic to the decision boundary manifold and hence recovers it homology. The general idea is that when sufficient samples are drawn near $\mathcal{M}$, we can cover $\mathcal{M}$ using balls of radius $r$ ($B_r(z)$), and $U$ deformation retracts to $\mathcal{M}$. The nerve of the covering will be homotopic to $\mathcal{M}$ according to the Nerve Lemma \cite{borsuk1948imbedding}. The intuition is that when we have dense enough sampling, the nerve of the \v{C}ech complex is homotopic to the manifold \cite{niyogi2008finding}. If the sampling is not sufficiently dense, we run into the danger of breaching the `tubular neighborhood' of the manifold since the $\epsilon$ in the \v{C}ech complex has to be large. In our L\v{C} complex, points from one class will be used to construct the actual complex, and the points from the other class will be used as the reference set per Definition \ref{def:lc_complex}.
\paragraph{Sketch of the theory:} Lemma \ref{lem:lc_complex} shows the equivalence of the L\v{C} complex to a particular \v{C}ech complex on unlabeled data, helping us build our theory from existing results in \cite{niyogi2008finding}. Theorem \ref{thm:set_density} lower bounds the sample size needed to cover two sets of sets, laying the ground for our main sample complexity result. Theorem \ref{thm:density_cover_manifold} provides the sample complexity for a dense sampling of the decision boundary manifold, and the main result in Theorem \ref{thm:sample_complexity} gives the sufficient conditions under which an L\v{C} complex on the sampled points from the two classes will be homotopic to the decision boundary.
Let us assume that the decision boundary is a manifold $\mathcal{M}$ with condition number $1/\tau$. This means that the open normal bundle about $\mathcal{M}$ of radius $\tau$ is embedded in $\mathbb{R}^d$. In other words, the normal bundle is non self-intersecting. We also place the following assumptions.
\begin{itemize}
\item $\mathcal{D}$ is contained in the tubular neighborhood of radius $r$ around $\mathcal{M}$, i.e., $\mathcal{D} \subset \text{Tub}_r(\mathcal{M})$.
\item For every $0 < s < r$, the mass around a point $p$ in $\mathcal{M}$ is at least $k_{s}^{(c)}$ in both classes. There is sufficient mass in both classes:
\begin{equation}
\label{eqn:reg_prop}
\inf_{p \in \mathcal{M}} \mu_c(B_{s} (p)) > k_{s}^{(c)} \quad \forall c \in \{0,1\}.
\end{equation}
\end{itemize}
\begin{lemma}
\label{lem:lc_complex}
As $\epsilon$ varies from $0$ to $\infty$, a filtration is induced on the $(\epsilon, \gamma)$-L\v{C} complex for a fixed $\gamma$.
\end{lemma}
\begin{proof}
Fixing $\gamma$, we choose $S_{\gamma} \subseteq S$ such that $W$ is $\gamma$-dense in $S_{\gamma}$. Therefore, the $(\epsilon, \gamma)$-L\v{C} complex on $S$ is equivalent to an $\epsilon$-\v{C}ech complex on $S_{\gamma}$, and hence varying $\epsilon$ induces a filtration.
\end{proof}
\begin{remark}
The $(\epsilon, \gamma)$-L\v{C} complex can be used to delineate the decision boundary by choosing $S$ to be the samples of one class and $W$ to be the samples of the other class.
Given sufficient samples in $S$ and $W$, the union of $\epsilon$-balls on $S_{\gamma}$ will be homotopic to $\mathcal{M}$. Since homotopy equivalence implies the same homology, this is how we use the L\v{C} complex to identify the homology of the decision boundary.
\begin{theorem}
\label{thm:set_density}
Let $\{A_i\}_{i=1}^{l_a}$ and $\{B_j\}_{j=1}^{l_b}$ be two sets of measurable sets. Let $\mu_x$ and $\mu_y$ be the probability measures on $\bigcup_{i=1}^{l_a} A_i$ and $\bigcup_{j=1}^{l_b} B_j$, respectively, such that $\mu_x(A_i) > \alpha_x, \forall i \in \{1, 2, \ldots, l_a\}$ and $\mu_y(B_j) > \alpha_y, \forall j \in \{1, 2, \ldots, l_b\}$. Let $\mu_x$ and $\mu_y$ be the component measures of $\mu_z$, such that $\mu_z(F) = q \mu_x(F)+(1-q) \mu_y(F)$, $q$ and $1-q$ being the mixture probabilities. Let $\overline{z} = \{z_1, z_2, \ldots, z_n\}$ be the set of $n$ i.i.d. draws according to $\mu_z$, which can be partitioned into two sets $\overline{x}$ and $\overline{y}$ which contain the samples from the measures $\mu_x$ and $\mu_y$. Then, if
\begin{equation}
\label{eqn:n_bound_thm}
n \geq \max \left( \frac{1}{\alpha_x q}\left( \log 2 l_a + \log \frac{1}{\delta} \right),
\frac{1}{\alpha_y (1-q)}\left( \log 2 l_b + \log \frac{1}{\delta} \right) \right)
\end{equation} we are guaranteed with probability greater than $1-\delta$ that
\begin{equation}
\label{eqn:intersect_cond_them}
\forall i, \overline{x} \cap A_i \neq \emptyset \quad \text{and} \quad \forall j, \overline{y} \cap B_j \neq \emptyset.
\end{equation}
\end{theorem}
\begin{proof}
Let us assume that among the $n$ samples $\overline{z}$ drawn, $|\overline{x}| = n_x$ and $|\overline{y}| = n_y$, so that $n = n_x + n_y$. Let us denote by $E_i^{a}$ the event that no sample in $\overline{x}$ lies in $A_i$, and by $E_j^{b}$ the event that no sample in $\overline{y}$ lies in $B_j$. The probabilities of these events satisfy
\begin{align}
P(E_i^{a} | |\overline{x}| = n_x) &= (1-\mu_x(A_i))^{n_x} \leq (1-\alpha_x)^{n_x} \text{ and }\\
P(E_j^{b} | |\overline{y}| = n_y) &= (1-\mu_y(B_j))^{n_y} \leq (1-\alpha_y)^{n_y}.
\end{align} The probability bound on the composite event (\ref{eqn:intersect_cond_them}) is expressed as
\begin{equation}
P \left( \left( \cap_i\overline{E_i^{a}}\right) \cap \left( \cap_j\overline{E_j^{b}}\right) \right) > 1- \delta,
\end{equation} which simplifies to
\begin{equation}
P \left( \overline{\left( \cup_i E_i^{a}\right) \cup \left( \cup_j E_j^{b}\right)} \right) > 1- \delta.
\end{equation} This implies that
\begin{equation}
\label{eqn:E_a_b_prob}
P(\cup_i E_i^{a})+P(\cup_j E_j^{b})
\end{equation} should be bounded from above by $\delta$. The individual conditional probabilities can be union-bounded as $P(\cup_i E_i^{a} | |\overline{x}| = n_x) \leq l_a (1-\alpha_x)^{n_x}$ and $P(\cup_j E_j^{b} | |\overline{y}| = n_y) \leq l_b (1-\alpha_y)^{n_y}$. Hence, the upper bound on \eqref{eqn:E_a_b_prob} is
\begin{equation}
\label{eqn:E_a_b_prob_bound}
\sum_{n_x = 0}^n P(\cup_i E_i^{a} | |\overline{x}| = n_x) p(|\overline{x}| = n_x) + P(\cup_j E_j^{b} | |\overline{y}| = n-n_x) p(|\overline{y}| = n-n_x),
\end{equation} which, after some algebra, simplifies to
\begin{equation}
\label{eqn:E_a_b_prob_bound1}
l_a (1-q \alpha_x)^{n} + l_b (1-(1-q) \alpha_y)^{n}.
\end{equation}
We need to find an $n$ such that the expression in \eqref{eqn:E_a_b_prob_bound1} is bounded above by $\delta$.
Since $1-\alpha q \leq \exp(-\alpha q)$, when $n > \frac{1}{\alpha q } (\log 2l + \log (\frac{1}{\delta}))$ we have $l \exp(-\alpha q n) \leq \delta/2$. Hence if we pick $n$ according to \eqref{eqn:n_bound_thm}, \eqref{eqn:E_a_b_prob} will be $ \leq \delta$, and with probability greater than $1-\delta$, we can ensure \eqref{eqn:intersect_cond_them}.
\end{proof}
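To get a feel for the bound (\ref{eqn:n_bound_thm}), consider the illustrative values $\alpha_x = \alpha_y = 0.01$, $q = 1/2$, $l_a = l_b = 100$ and $\delta = 0.05$ (numbers we choose purely for scale). Both terms in the maximum then equal
\begin{equation*}
\frac{1}{0.01 \times 0.5}\left(\log 200 + \log 20\right) \approx 200 \times (5.30 + 3.00) \approx 1659,
\end{equation*}
so roughly $1700$ labeled samples suffice to hit every cell of both covers with probability at least $0.95$.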
\begin{lemma}
\label{lem:density_correspondence}
For three sets $S$, $W$, and $U$, if $S$ is $r$-dense in $U$ and $W$ is $t$-dense in $U$, there exists an $\hat{S} \subseteq S$, such that the following hold:
\begin{enumerate}
\item \label{lem:c1} $\hat{S}$ is $r$-dense in $U$,
\item \label{lem:c2} $U$ is $r$-dense in $\hat{S}$,
\item \label{lem:c3} $W$ is $(r+t)$-dense in $\hat{S}$.
\end{enumerate}
\end{lemma}
\begin{proof}
If $S$ is $r$-dense in $U$, for every $u \in U$, there exists an $s \in S$ such that $\|u-s\| < r$. Now, let $\hat{S} \subseteq S$, $\hat{S} = \{s \in S \mid \|u-s\| < r, u \in U\}$, i.e., for each element $\hat{s} \in \hat{S}$, we have at least one $u \in U$ such that $\|u-\hat{s}\| < r$ and vice-versa. This proves items \ref{lem:c1} and \ref{lem:c2}. Since $W$ is $t$-dense in $U$, for each $u \in U$ there is at least one $w \in W$ such that $\|u-w\| < t$. Hence, by the triangle inequality, for each $\hat{s} \in \hat{S}$, there is at least one $w \in W$ such that $\|\hat{s}-w\| < (r+t)$.
\end{proof}
\begin{theorem}
\label{thm:density_cover_manifold}
Let $N_{r/2}$ and $N_{s/2}$ be the $r/2$ and $s/2$ covering numbers of the manifold $\mathcal{M}$. Let $G$ and $H$ be two sets of points in $\mathcal{M}$ of sizes $N_{r/2}$ and $N_{s/2}$ such that $B_{r/2}(g_i), g_i \in G$, and $B_{s/2}(h_j), h_j \in H$ are the $r/2$- and $s/2$-covers. Let $\overline{z}$ be generated by i.i.d.\ sampling from $\mu_z$ whose two component measures satisfy the regularity properties in \eqref{eqn:reg_prop}, and have mixing probabilities $q$ and $1-q$ for $q > 0$. Let the two component samples be $\overline{x}$ and $\overline{y}$. Then if
$\displaystyle |\overline{z}| > \max\left(\frac{1}{q k_{r/2}^{(0)}} \left(\log \left(2 N_{r/2} \right)+ \log \left(\frac{1}{\delta} \right)\right), \frac{1}{(1-q) k_{s/2}^{(0)}} \left( \log \left( 2 N_{s/2} \right)+ \log \left( \frac{1}{\delta} \right) \right) \right)$,
with probability greater than $1-\delta$, $\overline{x}$ will be $r$-dense in $\mathcal{M}$, and $\overline{y}$ will be $s$-dense in $\mathcal{M}$.
\end{theorem}
\begin{proof}
Letting $A_i = B_{r/2}(g_i)$ and $B_j = B_{s/2}(h_j)$, apply Theorem~\ref{thm:set_density}. Hence, with probability greater than $1-\delta$, each $A_i$ is occupied by at least one $x_i \in \overline{x}$ and each $B_j$ by at least one $y_j \in \overline{y}$. From this it follows that for any $p \in \mathcal{M}$, there is at least one $x \in \overline{x}$ and $y \in \overline{y}$ such that $\|p-x\| < r$ and $\|p-y\| < s$. Thus, with high probability, $\overline{x}$ is $r$-dense in $\mathcal{M}$ and $\overline{y}$ is $s$-dense in $\mathcal{M}$.
\end{proof}
Now we extend Theorem 7.1 in \cite{niyogi2008finding} to the case of the L\v{C} complex and provide the main conditions under which the homology of the decision boundary can be recovered.
\begin{theorem}
\label{thm:sample_complexity}
Let $N_{r/2}$ and $N_{s/2}$ be the $r/2$ and $s/2$ covering numbers of the submanifold $\mathcal{M}$ of $\mathbb{R}^d$.
Let $\overline{z}$ be generated by i.i.d.\ sampling from $\mu_z$ whose two component measures satisfy the regularity properties in (\ref{eqn:reg_prop}), and have mixing probabilities $q$ and $1-q$ for $q > 0$. Let the two component samples be $\overline{x}$ and $\overline{y}$. Then if
$\displaystyle |\overline{z}| > \max\left(\frac{1}{q k_{r/2}^{(0)}} \left(\log \left(2 N_{r/2} \right)+ \log \left(\frac{1}{\delta} \right)\right), \frac{1}{(1-q) k_{s/2}^{(0)}} \left( \log \left( 2 N_{s/2} \right)+ \log \left( \frac{1}{\delta} \right) \right) \right)$,
with probability greater than $1-\delta$, the $(\epsilon, r+s)$-L\v{C} complex will be homotopic to $\mathcal{M}$, if: (a) $r < (\sqrt{9}-\sqrt{8}) \tau$, and (b) $\epsilon \in \left( \frac{(r+\tau)- \sqrt{r^2+\tau^2-6\tau r}}{2}, \frac{(r+\tau)+ \sqrt{r^2+\tau^2-6\tau r}}{2} \right)$.
\end{theorem}
\begin{proof}
From Lemma \ref{lem:density_correspondence}, we know that when $\overline{x}$ is $r$-dense in $\mathcal{M}$, and $\overline{y}$ is $s$-dense in $\mathcal{M}$, we have $\tilde{x} \subseteq \overline{x}$ which is also $r$-dense in $\mathcal{M}$ and $\overline{y}$ is $(r+s)$-dense in $\tilde{x}$. Also, from Lemma \ref{lem:lc_complex}, the $(\epsilon, r+s)$-L\v{C} complex on $\overline{x}$ with the reference set $\overline{y}$ is equivalent to the $\epsilon$-\v{C}ech complex on $\tilde{x}$.
Since $\tilde{x}$ is $r$-dense in $\mathcal{M}$, it follows from Theorem 7.1 in \cite{niyogi2008finding} that this $\epsilon$-\v{C}ech complex on $\tilde{x}$ will be homotopic to $\mathcal{M}$ if the conditions on $r$ and $\epsilon$ are satisfied.
\end{proof}
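As a numerical illustration (with values we pick only for concreteness), set $\tau = 1$, so that condition (a) requires $r < 3 - 2\sqrt{2} \approx 0.17$. Choosing $r = 0.1$ gives $r^2 + \tau^2 - 6\tau r = 0.41$, and condition (b) becomes
\begin{equation*}
\epsilon \in \left( \frac{1.1 - \sqrt{0.41}}{2}, \frac{1.1 + \sqrt{0.41}}{2} \right) \approx (0.23, 0.87),
\end{equation*}
a nonempty range of admissible radii.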
\section{Labeled Vietoris-Rips Complexes}
\label{sec:top_dec_bound}
In this section, we propose two computationally-tractable constructions for simplicial complexes of the decision boundary: one we name the plain labeled Vietoris-Rips complex and the other we name the locally scaled labeled Vietoris-Rips complex. We illustrate the need for the locally scaled version.
\subsection{Notation}
Let us start with a labeled discrete sample $\{(z_1, c_1), \ldots, (z_n, c_n)\}$ where $z \in \mathbb{R}^d$ is the data point and $c \in \{0, 1\}$ is the class label. Given a data point $z_i$, we define its neighborhood as $\mathcal{N}_{\theta}(z_i)$ where $\theta$ is a scalar neighborhood parameter. The neighbors are restricted to data points whose class $c_j$ is not the same as $c_i$. Our neighborhood construction is symmetric by definition, hence $z_j \in \mathcal{N}_{\theta}(z_i) \Leftrightarrow z_i \in \mathcal{N}_{\theta}(z_j)$. This results in a bipartite graph $G_{\theta}$.
We use Betti numbers to describe the topology of the decision boundary. $\beta_i$ is the $i^\text{th}$ Betti number: the rank of the $i^\text{th}$ homology group $H_i$, i.e., the number of independent $i$-dimensional holes.
To induce a simplicial complex with simplices of order greater than one from the bipartite graph $G$, we connect all $2$-hop neighbors. Since the original edges are only between points in opposing classes, all $2$-hop neighbors belong to the same class. A pictorial depiction of this is provided in Appendix \ref{sec:higher_order_simplices}. This new graph is defined to be the one-skeleton of the decision boundary complex. We create a simplicial complex from this one-skeleton using the standard Vietoris-Rips induction \cite{zomorodian2010fast}: a simplex of dimension $r+1$ is inductively included in the complex if all its $r$-dimensional faces are included. We call this the labeled Vietoris-Rips (LVR) complex $\mathcal{V}_{\theta}$.
Our construction is such that, by definition, for $\theta_2 \geq \theta_1$, there is an inclusion $G_{\theta_2} \supseteq G_{\theta_1}$. Given this inclusion relationship in the bipartite graphs, we obtain a filtration as we vary $\theta$, i.e., for $\theta_2 \geq \theta_1$, $\mathcal{V}_{\theta_2} \supseteq \mathcal{V}_{\theta_1}$. We provide two approaches for creating the LVR complex and its filtration.
\paragraph{Plain LVR (P-LVR) Complex:} We set $\theta$ to be the radius parameter $\epsilon$ and define $\mathcal{N}_{\theta}(z_i)$ as the set of points $\{z_j\}_{c_j \neq c_i, \|z_i - z_j\| \leq \theta}$.
\paragraph{Locally Scaled LVR (LS-LVR) Complex:} We set $\theta$ to be $\kappa$, the multiplier to the local scale, and define $\mathcal{N}_{\theta}(z_i)$ as the set of points $\{z_j\}_{c_j \neq c_i, \|z_i - z_j\| \leq \kappa \sqrt{\rho_i \rho_j}}$, where $\rho_i$ is the local scale of $z_i$. This is defined to be the radius of the smallest sphere centered at $z_i$ that encloses at least $k$ points from the opposite class. The LS-LVR construction is based on a generalization of the CkNN graph introduced in \cite{berry2016consistent} to labeled data.
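For concreteness, the following is a minimal \texttt{numpy} sketch of the two neighborhood constructions (the function names and the brute-force distance computation are our own choices, for illustration); the two-hop completion and the Vietoris-Rips induction then proceed on the resulting one-skeleton as described above.
\begin{verbatim}
import numpy as np

def plvr_edges(Z, c, eps):
    # P-LVR one-skeleton: connect opposite-class points
    # within Euclidean distance eps.
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    cross = c[:, None] != c[None, :]
    return np.argwhere(np.triu(cross & (D <= eps), k=1))

def lslvr_edges(Z, c, kappa, k=5):
    # LS-LVR one-skeleton: rho[i] is the radius of the smallest
    # sphere around z_i enclosing k opposite-class points; connect
    # opposite-class pairs with ||z_i-z_j|| <= kappa*sqrt(rho_i*rho_j).
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    cross = c[:, None] != c[None, :]
    rho = np.sort(np.where(cross, D, np.inf), axis=1)[:, k - 1]
    thr = kappa * np.sqrt(rho[:, None] * rho[None, :])
    return np.argwhere(np.triu(cross & (D <= thr), k=1))
\end{verbatim}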
After the LVR filtrations have been obtained, persistent homology of the decision boundaries can be estimated using standard approaches \cite{edelsbrunner2008persistent, zomorodian2005computing}, and represented using barcodes or persistence diagrams (PDs) \cite{edelsbrunner2012persistent}.
\subsection{Illustration of Homology Group Recovery}
\label{sec:demo_homology_rec}
\begin{wrapfigure}{R}{3.3cm}
\caption{A 2-class data with \emph{red} and \emph{blue} classes (top), and the LS-LVR decision boundary complex at $\kappa = 1.005$ (bottom).}\label{fig:two_cir}
\includegraphics[width=3.1cm]{Figures/two_circles.png}
\includegraphics[width=3.1cm]{Figures/knn_rho_complexes_1_005.png}
\vspace{-20pt}
\end{wrapfigure}
We illustrate the two approaches for constructing decision boundary complexes and estimating their persistent homology using the two-dimensional, two-class dataset shown in Figure \ref{fig:two_cir} (top). The decision boundary is homotopic to two circles that separate the classes; hence its true Betti numbers are $\beta_0 = 2$ and $\beta_1 = 2$. The sampling is non-uniform, with the smaller disk and annulus sampled more densely than the larger ones.
We compute the persistent homology using the P-LVR and LS-LVR complexes. With P-LVR, we vary the radius parameter $\epsilon$ from $0$ to $10$, and with LS-LVR, we vary the local scale multiplier $\kappa$ from $0.5$ to $1.5$. The local scale $\rho$ is computed with $k=5$ neighbors. Figure \ref{fig:two_cir} (bottom) shows an LS-LVR complex at scale $1.005$ that accurately recovers the Betti numbers of the decision boundary.
Figure \ref{fig:pers_two_circle} shows the PDs as well as the Betti numbers as a function of scale for the two complexes.\footnote{Note that the PD for $H_0$ groups shows all the groups, whereas the Betti numbers in Figures \ref{fig:pers_two_circle}(b) and \ref{fig:pers_two_circle}(d) only count the non-trivial homology groups. Non-trivial $H_0$ groups are defined to be those that contain more than one data point, i.e., we count only connected components of size greater than 1. Including trivial homology groups is meaningless when computing the topology of decision boundaries, since decision boundaries are defined only across classes.} The LS-LVR construction recovers both $\beta_0$ and $\beta_1$ accurately for $\kappa$ slightly greater than 1, and this persists until $\kappa$ is slightly less than 1.2. Around this value, one of the holes closes, and a little later the other hole collapses as well. The resulting two simply connected components persist until $\kappa=1.5$.
In contrast, for the P-LVR complex, the $H_0$ and $H_1$ groups first come to life at $\epsilon = 0.09$ for the smaller decision boundary component. The $H_1$ group vanishes almost immediately. At $\epsilon = 0.38$, the $H_0$ and $H_1$ groups for the larger decision boundary component come to life, persisting for $0.12$. The overall topology ($\beta_0 = 2, \beta_1 = 2$) is not captured at any one scale, due to the varying sizes of homological features as well as the non-uniform sampling. The widely varying lifetimes of the homological features make it hard to choose a threshold for estimating the correct number of homology groups. This is not a problem with LS-LVR, since the $H_1$ groups appear clustered together in the PD. Another benefit of LS-LVR is that non-noisy homology groups appear around $\kappa = 1$, the natural local scale of the data. This does not hold true for the P-LVR complex.
The actual complexes for various scales with the two constructions are given in the appendix.
\begin{figure}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/two_cir_eps_PD.eps}
\centerline{\footnotesize{(a)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/two_cir_eps_betti_count.eps}
\centerline{\footnotesize{(b)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/two_cir_knn_rho_PD.eps}
\centerline{\footnotesize{(c)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/two_cir_knn_rho_betti_count.eps}
\centerline{\footnotesize{(d)}}\medskip
\end{minipage}
\caption{(a) Persistence diagram and (b) Betti numbers as a function of scale using P-LVR, and (c) persistence diagram and (d) Betti numbers using LS-LVR.}
\label{fig:pers_two_circle}
\end{figure}
\section{Experiments}
\label{sec:experiments}
We perform experiments with synthetic and high-dimensional real-world datasets to demonstrate: (a) the effectiveness of our approach in recovering homology groups accurately, and (b) the utility of this method in discovering the topological complexity of neural networks, and its potential use in choosing pre-trained models for a new dataset.
In all experiments, to limit the number of simplices, we cap the number of neighbors used to construct the neighborhood graph at 20. The results can be reproduced using the code available at \url{https://github.com/nrkarthikeyan/topology-decision-boundaries}. More implementation notes are available in Appendix \ref{sec:impl_notes}.
\subsection{Synthetic Data: Homology Group Recovery}
\label{sec:homo_group_rec_25}
\begin{wrapfigure}{R}{3.3cm}
\caption{A 2-class dataset with $\beta_0 = 25$, $\beta_1 = 25$. Notice the wide variation in the sizes of the topological features.}\label{fig:25_cir}
\includegraphics[width=3.1cm]{Figures/25_circles.png}
\vspace{-20pt}
\end{wrapfigure}
The first experiment demonstrates the effectiveness of our approach in recovering homology groups of complex synthetic data with wide variations in the sizes of topological features (Figure \ref{fig:25_cir}). The decision boundary is homotopic to 25 circles ($\beta_0 = 25, \beta_1 = 25$). From Figures \ref{fig:pers_25circles}(c) and \ref{fig:pers_25circles}(d), it is clear that the LS-LVR complex shows similar persistence for all the $25$ $H_1$ groups, irrespective of their varying sizes in the dataset. Observe the clumping in the PD, and the presence of a lone noisy $H_1$ group with almost zero lifetime. The P-LVR complex also recovers the $25$ $H_1$ groups, but does so at different scales (Figures \ref{fig:pers_25circles}(a) and \ref{fig:pers_25circles}(b)). From the PD, we can see that there are five rough clumps of $H_1$ groups, around birth times $\{1,2,3,4,5\}$, each containing five $H_1$ groups. The birth times correspond to the radii of the five groups of decision boundaries in Figure \ref{fig:25_cir}. The staggered recovery of topology with the P-LVR complex makes it hard to fix a noise threshold on lifetimes to estimate the correct Betti numbers.
\subsection{Real-World Data: Complexity Estimation and Model Selection}
\label{sec:model_selection}
\begin{figure}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/25_cir_eps_PD.eps}
\centerline{\footnotesize{(a)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/25_cir_eps_betti_count.eps}
\centerline{\footnotesize{(b)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/25_cir_knn_rho_PD.eps}
\centerline{\footnotesize{(c)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.24\linewidth}
\centering
\includegraphics[width=3.4cm]{Figures/25_cir_knn_rho_betti_count.eps}
\centerline{\footnotesize{(d)}}\medskip
\end{minipage}
\caption{(a) Persistence diagram and (b) Betti numbers as a function of scale using P-LVR, and (c) persistence diagram and (d) Betti numbers using LS-LVR.}
\label{fig:pers_25circles}
\end{figure}
\begin{figure}
\begin{minipage}[c]{0.32\linewidth}
\centering
\includegraphics[width=5cm]{Figures/sum_betti_perf_acc_model_for_data.eps}
\centerline{\footnotesize{(a)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\centering
\includegraphics[width=5cm]{Figures/betti0_perf_acc_model_for_data.eps}
\centerline{\footnotesize{(b)}}\medskip
\end{minipage}
\begin{minipage}[c]{0.32\linewidth}
\centering
\includegraphics[width=5cm]{Figures/betti1_perf_acc_model_for_data.eps}
\centerline{\footnotesize{(c)}}\medskip
\end{minipage}
\caption{Accuracy improvement or reduction in choosing pre-trained classifiers with topological complexity close to the dataset versus complexity far from the dataset. Complexity measures used: (a) Sum of total lifetimes of $H_0$ and $H_1$ groups, (b) Total lifetimes of $H_0$ groups, (c) Total lifetimes of $H_1$ groups. Blue bars show the accuracy difference when using only pre-trained classifiers with less topological complexity than the dataset, orange bars correspond to those with greater complexity, and green bars correspond to using all pre-trained classifiers. The black lines show the $95\%$ confidence interval.}
\label{fig:topo_comp_acc}
\end{figure}
We demonstrate how topological complexity can be used to guide the selection of appropriate pre-trained models for a new dataset. We use only LS-LVR complexes for estimating topological complexities. We consider three application domains for our evaluation: MNIST \cite{mnistlecun}, FashionMNIST \cite{xiao2017fmnist} and CIFAR10 \cite{krizhevsky2009learning}. FashionMNIST is a drop-in replacement for MNIST with the same image sizes and train-test splits.
All three applications have $10$ classes and $10,000$ test images; MNIST and FashionMNIST have $60,000$ training images each, while CIFAR10 has $50,000$. Each instance of MNIST and FashionMNIST is a $28 \times 28$ grayscale image, whereas each instance of CIFAR10 is a $32 \times 32$ color image. We construct $\binom{10}{2} = 45$ binary classification datasets from each application domain, one for each combination of two classes. We then train individual binary classifiers for these $45$ datasets per application, using the standard CNN architecture provided in \url{https://github.com/pytorch/examples/tree/master/mnist} for MNIST and FashionMNIST, and the VGG CNN (configuration D) for CIFAR10 \cite{simonyan2014very}.
Given a trained model $f_i(\cdot)$, $i=1,\ldots,45$, we evaluate its topological complexity using the test data inputs and predicted labels. This labeled dataset is represented as $\hat{Z}_i = \{(\hat{z}_{i,1}, f_i(\hat{z}_{i,1})), \ldots, (\hat{z}_{i,n_i}, f_i(\hat{z}_{i,n_i}))\}$, and its Betti numbers for $H_0$ and $H_1$ at scale $\kappa$ are given as $\beta_{0, \kappa}(i), \beta_{1, \kappa}(i)$. The complexity of a novel dataset is estimated in the same way from its inputs and true labels. We use three different measures: the first is the total lifetime of $H_0$ groups, given by $\sum_{\kappa}\beta_{0, \kappa}$; the second is the total lifetime of $H_1$ groups, $\sum_{\kappa}\beta_{1, \kappa}$; and the third is their sum, $\sum_{\kappa}\left(\beta_{0, \kappa} + \beta_{1, \kappa}\right)$. Although these are natural measures of topological complexity, one can consider other reasonable variants as well.
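As an illustration, the following is a minimal \texttt{numpy} sketch of the three measures and of the model-selection rule used below (the discretization of the scale axis and all function names are our own choices):
\begin{verbatim}
import numpy as np

def complexity(betti0, betti1):
    # betti0[k], betti1[k]: Betti numbers of the LS-LVR complex on a
    # discrete grid of scales kappa; sums over the grid approximate
    # the total lifetimes of the H0 and H1 groups.
    return np.sum(betti0), np.sum(betti1), np.sum(betti0) + np.sum(betti1)

def closest_models(data_complexity, model_complexities, m=5):
    # Indices of the m pre-trained models whose topological
    # complexity is closest to that of the novel dataset.
    gaps = np.abs(np.asarray(model_complexities) - data_complexity)
    return np.argsort(gaps)[:m]
\end{verbatim}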
Let us consider an example of matching a novel dataset to a pre-trained model. Our novel dataset is MNIST handwritten digit 0 vs. handwritten digit 4, whose $\beta_0 + \beta_1$ data complexity we compute to be 479. Then we look for pre-trained model complexities that are similar. Not surprisingly, the closest is the pre-trained model 0 vs. 4, which has a model complexity of 479. The 0 vs. 9 pre-trained model has a similar complexity of 525. If we select the 0 vs. 4 model, we achieve 99.95\% accuracy on 0 vs. 4 data, and if we select the 0 vs. 9 model, we also achieve a high accuracy of 96.08\%. If we select a model that is not well-matched to the data complexity, for example the 0 vs. 5 model with complexity 1058, we achieve a low accuracy on 0 vs. 4 data of 63.41\%.
All data and model complexities are listed in the appendix. For MNIST, FashionMNIST and CIFAR10, the average binary classification accuracy of the best performing models is $99.61\%$, $98.39\%$, and $96.78\%$ respectively. Now let us conduct an experiment to see whether the example above holds in general. Treating each of the 45 datasets as the novel dataset, we select $5$ pre-trained models that are the closest and $5$ models that are the farthest in topological complexity. We evaluate these classifiers on the novel dataset and obtain the average difference in classification accuracy between the closest and farthest classifiers. If the difference in accuracy is significantly greater than zero, it means that using classifiers that have similar topological complexity as the dataset is beneficial. If the difference in accuracy is close to zero, it shows that there is no benefit in using topological complexity to guide the choice of the classifier. If it is significantly less than zero, it means that classifiers which do not have similar topological complexity are better suited for the novel dataset.
Armed with this intuition, we can interpret Figure \ref{fig:topo_comp_acc}. The green bars show the average accuracy difference obtained by repeating the above experiment on the $45$ two-class datasets in each of CIFAR10, MNIST and FashionMNIST. The black lines show the $95\%$ confidence interval. If the black line is completely above (below) $0$, with a $p$-value less than $0.05$, the null hypothesis that the accuracy difference is less than or equal to (greater than or equal to) $0$ can be rejected. If the black line intersects $0$, we cannot reject the null hypothesis that the accuracy difference is $0$, at a significance level of $0.05$.
From the green bars, we see that classifiers with similar topological complexity have higher performance on the novel dataset for all three complexity measures. We then divide the pre-trained classifiers into two groups: those that have lower topological complexity than the novel dataset, and those that have higher topological complexity. Results for classifiers with higher topological complexity are shown using orange bars, and the previous claim still holds. For classifiers with lower complexity, the results show a different trend. Note that in this case, the farthest classifiers have less complexity than the closest ones. For MNIST, there is no significant change in accuracy when choosing any classifier that has lower complexity than the dataset. For CIFAR10, there is a small improvement in accuracy when choosing classifiers that have much lower complexity than the dataset, and for FashionMNIST, this improvement is a little higher. For these three application domains, we observe that choosing classifiers with lower complexities than data is slightly favorable or neutral.
\section{Conclusion}
\label{sec:concl}
In this paper, we have investigated the use of topological data analysis in the study of labeled point clouds of data encountered in supervised classification. In contrast to \cite{GussS2018}, which simply applies known, standard persistent homology inference methods to different classes of data separately and does not scale to high dimensions, we introduce new techniques and constructions for characterizing \emph{decision boundaries} and apply them to several commonly used datasets in deep learning. We propose and theoretically analyze the \emph{labeled} \v{C}ech complex, deriving conditions for recovering the decision boundary's homology with high probability based on the number of samples and the condition number of the decision boundary manifold.
Furthermore, we have proposed the computationally-tractable \emph{labeled} Vietoris-Rips complex and extended it to account for variation in the local scaling of data across a feature space. We have used this complex to provide a complexity quantification of pre-trained models and datasets that is able to correctly identify the complexity level below which a pre-trained model will suffer in its ability to generalize to a given dataset. This use has increasing relevance as model marketplaces become the norm.
\section{Introduction}
Over the last decade, the Eigenstate Thermalization Hypothesis (ETH)
\cite{peres_ergodicity_1984,deutsch_quantum_1991,srednicki_chaos_1994,
srednicki_approach_1999,rigol_thermalization_2008, DAlessio_fromquantum_2016,borgonovi_quantum_2016} has become the essential framework for reconciling quantum dynamics with statistical mechanics. In its simplest form, ETH posits that expectation values of local observables in energy eigenstates are smooth functions of the energy eigenvalue in the thermodynamic limit.
This provides a mechanism for thermalization in isolated quantum systems.
ETH can be understood in terms of random matrix theory: ergodic quantum systems are essentially well described by random matrix ensembles, at least where local observables are concerned.
This leads to the above smoothness condition, and also predicts the correct scaling of statistical deviations from
it with system size~\cite{beugeling_finitesize_2014, DAlessio_fromquantum_2016,khaymovich_eigenstate_2019,ikeda_how_2015}.
In particular, the expectation values of local observables have
gaussian distributions --- the distribution shape is an important characteristic of ETH behavior~\cite{khaymovich_eigenstate_2019}.
It turns out that some disordered interacting systems can avoid thermalization if disorder is strong
enough. Such a nonequilibrium phase of matter is called the Many Body Localized (MBL) phase
\cite{anderson_absence_1958,fleishman_interactions_1980,georgeot_integrability_1998,basko_metalinsulator_2006,gornyi_interacting_2005,oganesyan_localization_2007,znidaric_many-body_2008,berkelbach_conductivity_2010,pal_many-body_2010,schreiber_observation_2015,luitz_many-body_2015,nandkishore_mbl_2015,altman_universal_2015,imbrie_many-body_2016,abanin_recent_2017,alet_many-body_2018,pietracaprina2019hilbert}.
In this phase, transport is completely halted and the system becomes a perfect insulator. In
particular, ETH is not valid~\cite{pal_many-body_2010,luitz_many-body_2015}. The current theoretical understanding of the
MBL phase relies on the emergence of integrability via a complete set of local integrals of motion
(LIOM)~\cite{serbyn_local_2013,huse_phenomenology_2014,imbrie_many-body_2016,ros_integrals_2015,imbrie_diagonalization_2016}.
For instance, this theory accounts for the failure of thermalization, the area law of entanglement
entropy in infinite temperature eigenstates~\cite{bauer_area_2013} and the logarithmic growth of entanglement entropy after a
quench~\cite{znidaric_many-body_2008,bardarson_unbounded_2012}.
Even though the existence of the MBL phase is by now well established in one dimensional systems, in both theory \cite{basko_metalinsulator_2006,imbrie_diagonalization_2016,imbrie_many-body_2016} and experiments~\cite{schreiber_observation_2015,smith_many-body_2016}, the nature of the localization-delocalization transition remains an active area of research.
One outstanding question is the universality of anomalous thermalization~\cite{luitz_anomalous_2016,roy_anomalous_2018}, characterized by sub-diffusive transport, close to the localization transition coming from the ergodic side \cite{bar_lev_dynamics_2014,agarwal_anomalous_2015, potter_universal_2015,vosk_theory_2015,luitz_extended_2016,lev_transport_2017, bordia_probing_2017,luitz_ergodic_2017, znidaric_interaction_2018, kozarzewski_spin_2018, schulz_energy_2018,lezama_apparent_2019}.
Moreover, there is evidence that distributions of diagonal matrix elements of the local (globally conserved) density develop heavy tails in this anomalous thermal phase \cite{luitz_long_2016}. It has been suggested that the latter is connected to the sub-diffusive transport and, in addition, could be described by a modified version of ETH~\cite{luitz_anomalous_2016,roy_anomalous_2018}. However, it is not clear whether power-law tails in the distribution of local operators are a general feature of the sub-diffusive regime.
In this work, we consider the probability distributions of local correlation functions in mid-spectrum energy eigenstates to determine their features in the ergodic as well as in the MBL phase.
While the gaussian shape of these distributions is a central property of pure ETH, their behavior is equally important to characterize non-ergodic phases, in particular the MBL phase.
Considering the Heisenberg model with random on-site fields, we present and analyze the
distributions for two-point operators: spin-flip and $S_i^z S_{i+r}^z$ operators.
Due to the U(1) symmetry of the XXZ model, there are no other non-vanishing two-point correlators.
Furthermore, we carry out quantitative quasi-degenerate perturbation-theory calculations (around the limit of infinite
disorder) to explain various features of the distributions in the MBL phase.
The energy eigenstate distributions of spin-flip and $S_i^z S_{i+r}^z$ correlators considered in this paper are gaussian for small disorder strengths but acquire significant weight in the tails already for intermediate disorder $W\approx 2 < W_c\approx 3.7$, $W_c$ being the critical disorder strength to enter the MBL phase. Despite the heavy tails in the thermal regime, the variance of the distribution falls off with increasing system size for fixed $W$ up to the critical value $W_c$.
Within the MBL regime, $W>W_c$, the variation of the distribution with increasing system size is negligible and the distribution has features not present in the thermal regime. In particular, the spin-flip operator distribution exhibits a sharp peak at zero with smaller satellite peaks on each side and further small peaks at the edge of the distribution, $\pm 1/2$. Perturbation theory captures the form of the large disorder distribution quantitatively.
Perturbative methods to describe localization-delocalization phenomena in condensed matter physics have a long history dating back to Anderson's seminal work and continuing today to address questions relating to MBL \cite{anderson_absence_1958,ros_integrals_2015,scardicchiothiery,basko_metalinsulator_2006,gornyi_interacting_2005,imbrie_many-body_2016}.
In the context of MBL, two of the main questions were to systematically construct the local integrals of motion that are thought to characterize the MBL phase and to estimate the transition point between MBL and the thermal phase. Both can be achieved by computing perturbative corrections to the mutually commuting occupation numbers at infinite disorder under the constraint that the corrections themselves continue to commute~\cite{ros_integrals_2015}.
This has to be done to infinite order within some suitable approximation to capture possible delocalization. As in the case of Anderson localization, making sense of the perturbation theory requires dealing with resonances: naive divergences coming from states close in energy that are mixed by hopping in the Anderson case and by interactions in the MBL case.
In both cases, the divergences may be resolvable, giving the perturbation theory a finite radius of convergence.
Resolving these divergences amounts to diagonalizing the resonating configurations exactly.
The perturbation theory discussed in this paper is an expansion in the hopping part of the Hamiltonian around the infinite disorder limit. We carry out the expansion to low orders to be quantitative at large disorder for our finite system and to capture the main qualitative features for smaller disorder within the MBL regime. In the spirit of earlier works, we deal with resonances in the non-degenerate perturbation theory by diagonalizing exactly on the resonant subspaces.
In Section \ref{sec:background} we present the model and the local operators whose correlation functions we study.
Section~\ref{sec:exp_pm} focuses on the
spin-flip operators across the whole phase diagram: first the nearest neighbor, then the further neighbor spin-flip operator distributions.
The form of the spin-flip operator distributions in the MBL regime are rationalized within perturbation theory in Section~\ref{sec:exp_pm_pt}.
We then turn to the $\langle S_i^z S_{i+r}^z \rangle$ correlators (Section~\ref{sec:exp_zz}) and the corresponding connected correlators (Section~\ref{sec:exp_zz_connected}), in both cases showing the development of heavy tails at $W\approx 2.0$ and the evolution of these distributions into the MBL regime.
We highlight the distinctive form of the distributions in the Anderson localized phase and the difference with the corresponding MBL distributions (Section~\ref{sec:exp_zz_ai_mbl}).
Finally, we compute the distribution using quantitative perturbation theory showing, once again, that it captures well the form of the distributions at strong disorder (Section~\ref{sec:exp_zz_pt}).
\section{Background Material}
\label{sec:background}
\subsection{Model}
\label{sec:model}
We study the canonical XXZ model with random fields $h_i$ along the $z$ direction,
\begin{eqnarray}
\label{Model}
H = \sum_{i=0}^{L-1}\left[\dfrac{J}{2}\left(S^{+}_{i}S^{-}_{i+1}+S^{-}_{i}S^{+}_{i+1}\right)+\Delta S^{z}_{i}S^{z}_{i+1}-h_{i}S^{z}_{i}\right].
\end{eqnarray}
This model -- which is widely studied in the context of MBL~\cite{znidaric_many-body_2008,pal_many-body_2010,berkelbach_conductivity_2010,
bardarson_unbounded_2012,deluca_ergodicity_2013,serbyn_local_2013,bauer_area_2013,
nanduri_entanglement_2014,lev_dynamics_2014,
luitz_many-body_2015,nandkishore_mbl_2015,agarwal_anomalous_2015,lev_absence_2015,bera_many-body_2015,torres-herrera_dynamics_2015,
luitz_extended_2016,serbyn_spectral_2016,singh_signatures_2016,
enss_mbl_2017,tomasi_quantum_2017,bera_density_2017,lezama_one-particle_2017,
alet_many-body_2018,
herviou_multiscale_2019,sierant_level_2019,vsuntajs_quantum_2019,serbyn_thouless_2017,maksymov_energy_2019} -- can be mapped to a spinless fermion model with nearest neighbor hopping $J/2$,
interaction term $\Delta$ and on-site potential $h_i$. We use periodic boundary conditions, fix $J=1$ throughout the paper, and draw the fields $h_{i}$ uniformly from $[-W,W]$ with disorder strength $W$. We focus mainly on the isotropic
point $\Delta=1$ (interacting spinless fermions) of the parameter space. However, in Section \ref{AI} we also compare with results for various $\Delta$, including the point $\Delta=0$ (free spinless fermions). The operators
$S^\alpha_i=\sigma^\alpha_i/2$ are spin 1/2 operators, with $\alpha=0,x,y,z$ and $i$ is the site
index.
The total magnetization $M=\sum_{i=0}^{L-1}S^z_i$ along the $z$ direction is conserved. We
therefore focus on the largest magnetization sector $M=0$ for even system sizes $L$, corresponding
to the Hilbert space dimension $\dim(\mathcal H)=\binom{L}{L/2}$ (e.g., $\binom{20}{10}=184756$ for $L=20$). For
each disorder realization $\{h_0,\dots, h_{L-1}\}$, we obtain $\gtrsim 50$ eigenstates closest to
the energy target $(E_\text{max}+E_\text{min})/2$ ($E_\text{min}$ being the ground state energy and
$E_\text{max}$ the highest energy of the sample) using a state-of-the-art shift-invert
code \cite{luitz_many-body_2015,pietracaprina_shift-invert_2018}. We consider the probability
distributions of various eigenstate expectation values of local operators, i.e. the diagonal matrix
elements of these operators in the eigenbasis of the Hamiltonian. Our results are histograms over
at least $10^{3}$ disorder realizations for each system size $L$ and disorder strength $W$; we also
calculate the correlators for all sites $i\in[0,L-1]$ to improve the statistics, since the average
over disorder is translation invariant. The mid-spectrum states of this model are known to exhibit
two dynamical phases \cite{pal_many-body_2010,luitz_many-body_2015}: at low disorder ($W \lesssim
3.7$) they obey the ETH, while at strong disorder ($W \gtrsim 3.7$) all eigenstates are many-body
localized (MBL).
\subsection{Operators}
\label{sec:operators}
In previous works in the context of many-body localization and the MBL transition, the distributions
of local operators were considered, mostly focussing on distributions of diagonal or off-diagonal
matrix elements of simple local observables such as the local magnetization (or number density in
the language of spinless fermions) $\bra{n} S_i^z \ket{n}$, where $\ket{n}$ is a central
eigenstate of the Hamiltonian
\cite{luitz_long_2016,luitz_anomalous_2016}. In this work, we consider more complicated operators
given by two point correlation functions. First, we consider the correlators
\[
\bra{n} S^+_{i} S^-_{i+r}/2
+h.c. \ket{n} = \bra{n} F_{i,i+r} \ket{n},
\]
i.e., the matrix elements of spin-flip operators $F_{i,i+r}$.
We also consider diagonal two-point correlators, namely $\bra{n} S^z_i S^z_{i+r} \ket{n}$ and its
`connected' version
\[
\bra{n} S^z_i S^z_{i+r} \ket{n}_c =
\bra{n} S^z_i S^z_{i+r} \ket{n} - \bra{n} S^z_i \ket{n} \bra{n} S^z_{i+r} \ket{n}.
\]
For $r=1$, the first expression above corresponds to the kinetic energy
density, while the second expression is the interaction energy density in the language of spinless
fermions. The connected correlator $\bra{n} S^z_i S^z_{i+r} \ket{n}_c$ was previously considered in
Ref. \onlinecite{pal_many-body_2010}.
\section{Eigenstate expectation values of $S^+_{i} S^-_{i+r} +h.c. $}\label{FlipFlop}
\label{sec:exp_pm}
In Fig. \ref{Distr_SxSy} we show the probability distribution of eigenstate expectation values of $ F_{i,i+r} = S^+_{i} S^-_{i+r}/2 +h.c.$ for a system of size $L=20$ and different disorder strengths $W$.
\subsection{Nearest neighbor flip}
\label{sec:exp_pm_nn}
We start by considering the special case $r=1$, where the operator $ F_{i,i+1}$ corresponds to
the kinetic energy per bond. In the thermal phase at weak disorder, we expect this operator to
follow ETH and be distributed according to a gaussian distribution, which is true to very good
precision.
We observe that at weak disorder ($W\lesssim 2$) and $r=1$, the mean of the distribution is slightly negative. This is because the eigenstates of the Hamiltonian we consider lie in the center of the spectrum and, due to the asymmetry of the density of states, correspond to high but finite temperature (cf. Appendix~\ref{sec:e_dependence_loc_op} for an analysis of the energy dependence). Strictly infinite temperature corresponds instead to the energy $\mathrm{Tr} (H_{M=0})/ \dim(\mathcal{H}_{M=0}) = -\frac{L}{4L-4}$, where ${H}_{M}$ is the Hamiltonian matrix in the magnetization-$M$ sector; at this energy, traceless operators like $ F_{i,i+r}$ have zero mean.
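This value can be checked by a short counting argument: in the $M=0$ sector, $\mathrm{Tr}\, S_i^z = 0$ and the kinetic term is traceless, so only the Ising term contributes. Denoting by $P_{ss'}$ the probability that two fixed neighboring sites carry spins $s$ and $s'$ in a uniformly drawn basis state,
\[
\frac{\mathrm{Tr}\left(S_i^z S_{i+1}^z\right)}{\dim\mathcal{H}_{M=0}} = \frac{1}{4}\left(P_{\uparrow\uparrow}+P_{\downarrow\downarrow}-P_{\uparrow\downarrow}-P_{\downarrow\uparrow}\right) = -\frac{1}{4(L-1)},
\]
and summing over the $L$ bonds of the periodic chain at $\Delta=1$ yields $-\frac{L}{4L-4}$.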
Zero mean distributions are recovered at intermediate disorder where the asymmetry of the spectrum is less pronounced and the energy of the eigenstates we consider is indeed close to $-\frac 1 4$ for large $L$.
At intermediate disorder $W\approx 2$, we observe the development of heavy tails in the distribution, very similar to the situation for the distribution of $\bra{n} S_i^z \ket{n}$ studied in Ref. \onlinecite{luitz_long_2016}, confirming that the presence of such tails appears to be a generic feature at intermediate disorder in the thermal phase. We note that heavy tails are also observed in the $S_i^zS_{i+r}^z$ correlation function studied in Sec.~\ref{sec:exp_zz}.
\begin{figure}[h]
\centering
\includegraphics{SxSy_done.pdf}
\caption{Probability density of eigenstate expectation values $\bra{n} F_{i,i+r} \ket{n}$ for distances $r =1,2,3,4$. The histogram was taken over $\gtrsim 50$ eigenstates, $>1000$ disorder realizations and all positions $i$ in the chain of length $L=20$. In each panel the histograms for the same set of representative disorder strengths $W\in\{0.4, 1.2, 2.0, 2.8, 3.6, 4.4\}$ are shown with the same color code (legend in lower right panel). }
\label{Distr_SxSy}
\end{figure}
At strong disorder $W>W_c$ in the MBL phase, we find a strikingly different distribution of the spin-flip operator expectation values $ F_{i,i+1}$;
it features a pronounced central peak at zero, accompanied by two minima adjacent to it, which are framed by two satellite peaks, before the probability density $p(\bra{n} F_{i,i+1} \ket{n})$ decays towards the edges of its domain $\left[-\frac{1}{2}, \frac {1}{2}\right]$. We have found that this intriguing shape persists at strong disorder and can be explained using perturbation theory, a discussion of which we postpone to the end of this section.
\begin{figure}[h]
\centering
\includegraphics{SxSy_L.pdf}
\caption{Finite size dependence of the probability density of eigenstate expectation values of the nearest neighbor flip operator $\bra{n} F_{i,i+1} \ket{n}$. As in Fig. \ref{Distr_SxSy}, the histogram is taken over $\gtrsim 50$ eigenstates per disorder realization, $>1500$ disorder realizations and all positions $i$ in the chain. The dashed blue lines show gaussian distributions computed with the mean and variance of the data for $L=12,20$.
}
\label{Distr_SxSy_L}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics{Std.pdf}
\caption{Standard deviation of the difference of matrix elements between adjacent eigenstates, for the spin-flip operator (upper panels), $F^{n}_{i,i+r}=\bra{n+1}F_{i,i+r}\ket{n+1}-\bra{n}F_{i,i+r}\ket{n}$, and for the connected $z$ correlation (lower panels), $C^{n}_{i,i+r} = \bra{n+1}S^{z}_{i}S^{z}_{i+r}\ket{n+1}_c - \bra{n}S^{z}_{i}S^{z}_{i+r}\ket{n}_c$, where we have used the shorter notation $ \bra{n}S^{z}_{i}S^{z}_{i+r}\ket{n}_c=\bra{n}S^{z}_{i}S^{z}_{i+r}\ket{n}-\bra{n}S^{z}_{i}\ket{n}\bra{n}S^{z}_{i+r}\ket{n}$.
Instead of the direct variance of the distributions, we consider differences in adjacent eigenstates as in Ref. \onlinecite{luitz_long_2016} to mitigate the slightly different means of distributions at weak disorder due to energy targets depending on the disorder realization (cf. Appendix~\ref{sec:e_dependence_loc_op}).
}
\label{fig:standard}
\end{figure}
In Fig. \ref{Distr_SxSy_L}, we analyze the system size dependence of the probability density of the
nearest neighbor flip operator $ F_{i,i+1}$ over the whole range of disorder strengths, comparing
distributions for sizes $L=12, 14, 16, 18, 20$. At the weakest disorder $W=0.4$, we find gaussian
probability distributions, with the variance decreasing exponentially in system size $L$, as
expected from ETH (cf. Fig.~\ref{fig:standard}). At intermediate disorder $W=2.0$, the distribution
is no longer gaussian, but the variance still decreases exponentially with size. It appears that the
heavy tails, deviating from the gaussian shape, persist even at large system size, following the
same phenomenology observed for the $S_i^z$ operator in Refs.\ \onlinecite{luitz_long_2016,luitz_anomalous_2016,roy_anomalous_2018}.
To quantify departures from gaussianity, we compute the excess kurtosis $\kappa = (\mu_4 /\sigma^4)-3$ ($\mu_4$ being the $4$th central moment of the distribution) and the Kullback-Leibler divergence defined by
\begin{equation}
D_{\rm KL} \equiv -\int \mathrm{d}{x}\, P(x) \log\left( \frac{Q(x)}{P(x)} \right)
\end{equation}
where $P(x)$ is the computed distribution of the correlator and $Q(x)$ is the reference gaussian with the same mean and variance as $P(x)$. Results are shown in Fig.~\ref{fig:kurtosis}. Both quantities indicate that the distribution is quantitatively gaussian for $W\lesssim 1.5$ and becomes strikingly less gaussian beyond, with a peak at about $W=2$ whose height increases with system size. Beyond the peak, at larger disorder, both measures increase smoothly with little system size dependence, indicating strongly nongaussian distributions in the MBL phase.
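Both diagnostics are straightforward to estimate from the sampled matrix elements; a minimal \texttt{numpy} sketch (the binning choice is ours) reads:
\begin{verbatim}
import numpy as np

def excess_kurtosis(x):
    # kappa = mu_4/sigma^4 - 3; zero for a gaussian.
    y = np.asarray(x) - np.mean(x)
    return np.mean(y**4) / np.var(y)**2 - 3.0

def kl_to_gaussian(x, bins=200):
    # D_KL(P||Q) with P a histogram estimate of the matrix element
    # distribution and Q the gaussian with the same mean and variance.
    p, edges = np.histogram(x, bins=bins, density=True)
    mid = 0.5 * (edges[1:] + edges[:-1])
    mu, var = np.mean(x), np.var(x)
    q = np.exp(-(mid - mu)**2 / (2*var)) / np.sqrt(2*np.pi*var)
    m = p > 0
    return np.sum(p[m] * np.log(p[m]/q[m]) * np.diff(edges)[m])
\end{verbatim}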
In the MBL phase at strong disorder, there is no discernible system size dependence of the distribution (Figs.~\ref{Distr_SxSy_L}, lower panels, and \ref{fig:standard}); the distribution shows a pronounced maximum at zero, framed by two symmetric satellite peaks, which move closer to each other at stronger disorder.
\begin{figure}[h]
\centering
\includegraphics{Kurtosis_and_KL.pdf}
\caption{Upper panels: Excess kurtosis $\kappa = (\mu_4 /\sigma^4)-3$ of the distribution of diagonal matrix elements of (left) $\bra{n}F_{i,i+1}\ket{n}$ and (right) connected correlation $\langle S_i^z S_{i+1}^z \rangle_c$. A vanishing excess kurtosis corresponds to a gaussian distribution. Lower panels: Kullback-Leibler divergence of the matrix element distributions with respect to a gaussian with same mean and variance. }
\label{fig:kurtosis}
\end{figure}
\subsection{Long distance flip}
\label{sec:exp_pm_fnn}
The flip operator of distant spins $ F_{i,i+r}$, with $r>1$, is not a term of the Hamiltonian and could therefore behave differently.
We have verified that this is indeed the case by examining the energy dependence of the mean of the distribution, which is constant over a large range of energies for $r>1$ but linear in energy for $r=1$ (cf. Appendix~\ref{sec:e_dependence_loc_op}).
For this reason, the mean of the $r>1$ distribution is close to zero at weak disorder.
At intermediate disorder, the distribution again shows heavy tails, and, in general, the variance decreases with longer distance between the operators, which we attribute to decreasing long distance correlations.
Most interestingly, at strong disorder in the MBL phase and at long distances, the peculiar satellite peaks of the $r=1$ distribution disappear, leaving simple, yet heavy, tails. Additionally, we see that the standard deviation of both correlation functions at larger distances decreases as a function of disorder (Fig.~\ref{fig:standard}), while it stays constant at $r=1$. In the limit $W\rightarrow\infty$ the spins are uncorrelated, so both standard deviations must eventually go to zero. In the disorder range shown, however, the localization length is still large enough to allow correlations at $r=1$, hence we expect the $r=1$ standard deviation to start decreasing only at larger disorder.
The absence of satellite peaks for $r>1$, as well as most of the other features in this and the
preceding subsection, can be understood through perturbation theory in $1/W$, as we describe in the
next subsection.
\subsection{Perturbation theory analysis}
\label{sec:exp_pm_pt}
We have postponed the discussion of the peculiar features of the distribution of the nearest neighbor flip operator $ F_{i,i+r}$ in the MBL phase -- a topic to which we now turn.
In Fig. \ref{Distr_SxSy_maximum} we take a closer look at its distribution for different (strong) disorder strengths. While the qualitative features (central and satellite peaks) are independent of disorder and apparently characteristic of MBL, there is a quantitative evolution: the satellite peaks become sharper and move towards zero as the disorder $W$ is increased (inset in Fig. \ref{Distr_SxSy_maximum}). Furthermore, at very strong disorder $W>10$, additional peaks at $-\frac 1 2$ and $\frac 1 2$ develop, which are not present at weaker disorder $W\lesssim6$ (cf. Fig. \ref{Distr_SxSy_L}).
As a first step towards a more quantitative analysis, we consider the drift of the position of the satellite peaks as a function of disorder. The lower left panel of Fig. \ref{Distr_SxSy_maximum} shows the estimated peak positions, which are consistent with a $1/W$ dependence, suggesting a perturbative analysis.
At very strong disorder, it is natural to treat the kinetic term of the Hamiltonian as a perturbation of order $1/W$. Noting that the eigenstates of $ H/W$ are equal to those of $ H$, we cast the Hamiltonian in the form
\begin{equation}
H /W = \frac{1}{W} \sum_i S_i^z S_{i+1}^z - \sum_i \tilde{h}_i S_i^z + \frac{1}{W} \sum_i F_{i,i+1} = H_0 + \frac{1}{W} V.
\end{equation}
The scaled fields, $\tilde{h}_i$, are now distributed uniformly in a fixed range $[-1,1]$. The eigenstates of the unperturbed Hamiltonian $ H_0$ are product states and eigenstates of all $ S_i^z$ operators and can therefore be enumerated by their eigenvalues. The eigenenergies of $ H_0$ for each eigenstate can be easily calculated using these quantum numbers.
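Explicitly, labeling a product state $\ket{\{s_i\}}$ by the $S_i^z$ eigenvalues $s_i=\pm\frac{1}{2}$, the unperturbed energies read (with the sign conventions of the equation above)
\[
E_{\{s_i\}} = \sum_i \left( \frac{1}{W}\, s_i s_{i+1} - \tilde{h}_i s_i \right).
\]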
Naive perturbation theory produces divergences when the spacing between unperturbed energy levels goes to zero. Such divergences -- dubbed resonances -- are unphysical and are resolved by admixing clusters of nearly degenerate states. Resonances are of great importance in disordered systems and become increasingly so as the system size increases. In order to incorporate the effect of resonances from the large disorder limit, we carry out a \emph{mixed degenerate and non-degenerate perturbation theory} for the operator $ F_{i,i+r}$. Details of the perturbation theory are given in Appendix~\ref{sec:pt}.
In addition to the rather general discussion given in the appendix, we note here various peculiarities of the perturbative calculation of $\bra{\tilde n} F_{i,i+r} \ket{\tilde n}$ which simplify our task.
In order to obtain a matrix element $\bra{\tilde n} F_{i,i+r} \ket{\tilde n}$ of an eigenstate $\ket{\tilde n}$ of the perturbed Hamiltonian in perturbation theory, we start with an eigenstate $\ket{n_0}$ of $ H_0$.
The matrix element $\bra{n_0} F_{i,i+r} \ket{n_0}$ is the zeroth order contribution and is identical to zero because $ F_{i,i+r}$ is off-diagonal in the $z$ basis.
It therefore contributes to the prominent peak of the distribution of this matrix element at zero.
More precisely, for $r=1$, states with $\vert\dots 00 \dots\rangle$ or $\vert \dots 11 \dots \rangle$ on the sites $i$ and $i+1$ yield a zero contribution at zeroth and first order in perturbation theory. In the $M=0$ sector, the fraction of such non-flippable configurations is $(L-2)/(2(L-1))$, which approaches $1/2$ for large $L$; we therefore expect the weight of the central peak to tend to $1/2$ as $W$ increases, and this is indeed what is found (Fig.~\ref{Distr_SxSy_maximum}, lower right panel).
For $r>1$, one must go to higher order in perturbation to obtain any non-vanishing contribution so the central peak is significantly higher. To understand the satellite peaks, we have to go to first order in perturbation theory (cf. e.g. Fig. \ref{Distr_SxSy_maximum}).
\begin{figure}[h]
\centering
\includegraphics{SxSy_maximum.pdf}
\caption{Upper panel: distribution of the matrix element $\langle S^{+}_{i}S^{-}_{i+1}/2 +h.c. \rangle$ at large disorder strength $W\geq10$; the inset in the upper right corner highlights the local maximum of the distributions. Lower left panel: position of the local maximum as a function of the disorder strength. The red dashed line is the exact maximum location extracted from the first order perturbation theory distribution in Eq. \eqref{eq:dist}. Lower right panel: weight of the distribution at the central peak, $\int_{-\epsilon}^{\epsilon} \mathrm{d}x \, p(x)$, and weight of the right tail, $\int_{\epsilon}^{0.5} \mathrm{d}x\, p(x)$, as a function of disorder strength, with $\epsilon=0.01$. The weight of the peak at zero tends to $\frac 1 2$ for strong disorder, as predicted by perturbation theory.}
\label{Distr_SxSy_maximum}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics{PT.pdf}
\caption{Perturbation theory computation of $\langle S^{+}_{i}S^{-}_{i+1}/2 +h.c. \rangle$ compared to the exact result (black). Upper and lower left panels display the results for $L=20$ and $W=10,12,16$. In the inset we can see that the perturbation theory distribution is disconnected (see main text). The red dashed line is the analytical form of the probability distribution given in Eq. \eqref{eq:dist}. Lower right panel: perturbation theory results for $L=30,40,50,60,70$. }
\label{PT}
\end{figure}
The eigenstate $\ket{n_0}$ is connected to a set of other eigenstates $\{\ket{k_0} \}$ of $ H_0$ by the perturbation $ V$. By this, we mean that $\bra{n_0} V\ket{k_0} \neq 0$ for all $\ket{k_0}$ in this set, while the matrix elements of $ V$ with all other states vanish. Let us first deal with the case in which all energies $E_{k_0}$ are sufficiently different from $E_{n_0}$, such that in nondegenerate perturbation theory the denominators $1/(E_{k_0}-E_{n_0})$ do not diverge. In this case, we obtain for the matrix element $\bra{\tilde n} F_{i,i+r} \ket{\tilde n}$ up to first order in $1/W$:
\begin{equation}
\begin{split}
\bra{\tilde n} F_{i,i+r} \ket{\tilde n} &= \frac{1}{W} \sum_{k_0} \bra{k_0} F_{i,i+r}\ket{n_0} \frac{ \bra{n_0} V \ket{k_0} }{E_{k_0}-E_{n_0}} \\
&+ (k_0 \leftrightarrow n_0).
\end{split}
\label{eq:flipflop_PT}
\end{equation}
From this, we can now understand several features of the distribution: if $\ket{n_0}$ does not have eigenvalues of $ S_j^z$ with opposite sign on sites $j=i$ and $j=i+r$, the matrix element $\bra{\tilde n} F_{i,i+r} \ket{\tilde n}$ vanishes. This implies that due to the incompatibility of $ V$ and $ F_{i,i+r}$ for $r>1$, to first order in $1/W$ all matrix elements vanish, which explains the different behavior of $ F_{i,i+1}$ and $ F_{i,i+r}, r\geq 2$. If the spins on sites $i$, $i+1$ have opposite $S_j^z$ eigenvalues (i.e. they are ``flippable''), there is only one nonvanishing term in the sum in Eq. \eqref{eq:flipflop_PT}. The matrix element is $\bra{n_0}F_{i,i+1}\ket{k_0}=1/2$ in this case, giving
\begin{equation}\label{app5}
\bra{\tilde{n}}F_{i,i+1}\ket{\tilde{n}} =\dfrac{1}{2W\left( E_{n_0}-E_{k_0}\right)}=\dfrac{1}{2W\left( \tilde{h}_{i}-\tilde{h}_{i+1}\right) }.
\end{equation}
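The second equality holds to leading order: flipping the pair changes the field energy, while the interaction energy on the two adjacent bonds changes only at order $1/W$, so that
\[
E_{n_0}-E_{k_0} = \pm\left(\tilde{h}_{i}-\tilde{h}_{i+1}\right) + O(1/W),
\]
with a sign set by the orientation of the flippable pair that does not affect the distribution derived below.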
Since the on-site fields have a uniform distribution bounded by $\tilde{h}_i\in [-1,1]$, the expression in Eq. \eqref{app5} can be computed. We first note that the lower bound on the magnitude of the matrix element is $1/4W$, as the maximum difference between the fields is $\pm2$. Now, treating $x=\bra{\tilde{n}}F_{i,i+1}\ket{\tilde{n}}$ as a random variable that takes values in the range $(-\infty,-1/4W]\cup[1/4W,\infty)$, its probability distribution is:
\begin{equation}\label{eq:dist}
P(x) = \dfrac{4W\vert x\vert -1}{16W^2 \vert x\vert^3}, \hspace{5mm} \vert x\vert\geq 1/4W.
\end{equation}
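Eq.~\eqref{eq:dist} follows from a change of variables: the field difference $d=\tilde{h}_{i}-\tilde{h}_{i+1}$ has the triangular density $p(d)=(2-|d|)/4$ on $[-2,2]$, so that with $x=1/(2Wd)$,
\[
P(x) = p\!\left(\frac{1}{2Wx}\right)\left|\frac{\mathrm{d}d}{\mathrm{d}x}\right| = \frac{2-\frac{1}{2W|x|}}{4}\,\frac{1}{2Wx^{2}} = \frac{4W|x|-1}{16W^{2}|x|^{3}},
\]
which is normalized to unity on $|x|\geq 1/4W$.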
The maxima of $P(x)$ are located at $x=\pm 3/8W$, as follows from setting $\mathrm{d}P/\mathrm{d}x=0$.
To summarize, first order nondegenerate perturbation theory explains the presence and weight of the central peak at zero, the presence and location of the satellite peaks at $O(1/W)$, as well as the local minima separating these peaks. The satellite peaks therefore stem from the admixture of states whose energy changes almost maximally upon flipping two neighboring spins. Fig.~\ref{PT} shows plots of the exact result (black) together with the distribution of Eq.~\eqref{eq:dist} (red dashed), showing that the formula captures the exact distribution very well (the delta peak at zero with weight $\frac 1 2$ is not shown in Fig.~\ref{PT}). To examine the agreement in more detail, Fig.~\ref{Distr_SxSy_maximum} shows the close correspondence between the analytical calculation of the satellite peak location and the exact result, at least for larger values of disorder. We notice that perturbation theory produces a higher central peak. This is caused by the missing weight around the central maximum (see inset of Fig.~\ref{PT}), due to the lower bound on the magnitude of the matrix elements $\bra{\tilde{n}}F_{i,i+1}\ket{\tilde{n}}$ at first order (Eq.~\eqref{app5}).
The distribution, Eq.~\eqref{eq:dist}, does not reproduce the small peaks at the edge of the domain of the distribution, close to $\pm \frac{1}{2}$. To understand the origin of these peaks, we come back to the consideration of the case that the eigenenergy of the state with flipped spins $E_{k_0}$ is close to the energy of $E_{n_0}$, in which case we have a ``resonance'' and nondegenerate perturbation theory breaks down. In this case, we have to use quasi degenerate perturbation theory and include $\ket{n_0}$ and its flipped partner $\ket{k_0}$ in the \emph{model space} of quasi-degenerate states.
Due to the constraint by the matrix elements of $ F_{i,i+1}$ in the model space, this is the only state which contributes to the model space.
The mixing of these two states leads to the emergence of the peaks at $\pm \frac{1}{2}$ of the distribution. To see this, we consider the zeroth order mixing of quasi-degenerate states through a single spin flip term in the Hamiltonian. This generates pairs of admixed states of the form $\alpha \vert \ldots 10\ldots \rangle+ \beta \vert \ldots 01\ldots \rangle$. The flip-flop operator expectation value is ${\rm Re}(\alpha\beta^\star)$. Since the perturbation maximally mixes these quasi-degenerate states, $\alpha=\pm\beta$, and this accounts for the $\pm 1/2$ peaks.
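As a minimal sketch of this mechanism, in the two-dimensional model space spanned by $\ket{\dots 10 \dots}$ and $\ket{\dots 01 \dots}$, the effective Hamiltonian reads
\[
H_{\rm eff} = \begin{pmatrix} \epsilon_{1} & \frac{1}{2W} \\ \frac{1}{2W} & \epsilon_{2} \end{pmatrix},
\]
and at resonance, $|\epsilon_{1}-\epsilon_{2}|\ll 1/W$, its eigenstates approach $\left(\ket{\dots 10 \dots}\pm\ket{\dots 01 \dots}\right)/\sqrt{2}$, for which $\bra{\tilde n} F_{i,i+1} \ket{\tilde n} = \pm\frac{1}{2}$.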
Our numerical treatment of the exact mixing by quasi-degenerate perturbation theory up to second order in $1/W$ also captures corrections to these features quantitatively, and we show the full distributions obtained from it as colored solid histograms in Fig. \ref{PT}. The perturbation theory can be carried out for much larger system sizes than those treatable by shift-invert diagonalization and shows no visible system size dependence at strong disorder, as seen in Fig. \ref{PT}. We conclude that the parts of the distribution of $\bra{n} F_{i,i+1}\ket{n}$ close to zero are due to off-resonant mixing of flippable and non-flippable states, while the edges of the distribution close to $\pm \frac 1 2$ reveal the effect of resonances. It should be noted that we do not compute $\bra{\tilde{n}}F_{i,i+r}\ket{\tilde{n}}$ at distances $r>1$, because the low order contributions are trivial while higher orders make the numerical implementation cumbersome. We therefore restrict our perturbation theory computations to operators with $r=1$.
\section{Eigenstate Expectation Values of $S^z_{i} S^z_{i+r}$}
\label{sec:exp_zz}
\subsection{$\left\langle S^z_{i} S^z_{i+r}\right\rangle$ Correlators}
We now turn to the $S^z_i$ correlation function.
Fig. \ref{Distr_SzSz} shows the probability distribution of energy eigenstate expectation values of $S^z_{i} S^z_{i+r}$ for a system of size $L=20$, $r=1,2,3,4$ and for different disorder strengths $W$.
For weak disorder $W\lesssim 1.2$, the distributions are gaussian in accordance with ETH and the
variance of the distribution increases with disorder strength. As remarked in Section~\ref{sec:exp_pm_nn}, and similarly to the spin-flip correlators studied there, heavy tails are apparent at disorder strength $W=2$ in the thermal regime. Again as for the spin-flip correlators, the gaussian mean is displaced from zero, for the same reason as in that case (cf.\ Appendix~\ref{sec:e_dependence_loc_op}). As one expects in the ETH regime, the variance of the distribution falls off inversely in the Hilbert space dimension (exponentially in $L$), which is visible for the connected correlator in Fig. \ref{fig:standard} through the equidistant spacing of the standard deviations for different system sizes on the semilogarithmic scale.
For strong disorder, deep in the MBL regime, the distribution is qualitatively different.
The central peak is still present but is obscured by a very broad distribution that extends out to the tails where there are more pronounced peaks.
There are again negligible differences between the distributions for different $L$ within the MBL regime.
The presence of the outer peaks is simply explained by the strong disorder limit, where eigenstates of the Hamiltonian are also eigenstates of the local $S^z_{i}S^z_{i+r}$ operators, with eigenvalues $\pm 1/4$.
The fact that the main new feature of the distribution appears in the large $W$ limit suggests that perturbation theory might be as successful as it was for the spin-flip correlators. We address this question in Section~\ref{sec:exp_zz_pt}.
\begin{figure}[h]
\centering
\includegraphics{SzSz_done.pdf}
\caption{Probability density of eigenstate expectation values $\bra{n} S^{z}_{i}S^{z}_{i+r} \ket{n}$ for distances $r =1,2,3,4$. The histogram was taken over $\gtrsim 50$ eigenstates, $>1500$ disorder realizations and all positions $i$ in the chain of length $L=20$. In each panel the color corresponds to the disorder strengths as indicated in the legend.}
\label{Distr_SzSz}
\end{figure}
In order to remove the trivial contribution to the correlation function coming from $\bra{n} S_i^z\ket{n}$ expectation values, we discuss, in the following section, the connected correlation function.
\subsection{Connected Correlators}
\label{sec:exp_zz_connected}
\begin{figure}[h]
\centering
\includegraphics{SzSz_connected.pdf}
\caption{Comparison of the distribution of connected correlators $\langle S^z_iS^z_{i+r} \rangle_c$ in energy eigenstates for different disorder strengths $W$. The histograms include data for different disorder realizations and all lattice sites $i$. For weak disorder ($W\lesssim 1.2$) they display a gaussian distribution. For strong disorder ($W>3.6$) the distribution exhibits a sharp peak at zero and heavy tails, biased towards the negative side for short distances $r$.}
\label{Distr_SzSz_connected}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics{SzSz_L.pdf}
\caption{Comparison of the distribution of nearest neighbor connected correlators $\langle S^z_iS^z_{i+1} \rangle_c$ in energy eigenstates for different system sizes. The histograms include data for different disorder realizations and all lattice sites $i$.
In the ETH phase (upper panels) the width of the distributions decreases with system size $L$, while in the MBL phase (lower panels) there is no discernible dependence on the system size. The dashed orange lines show gaussian distributions computed with the mean and variance calculated from the data for $L=12,20$. }
\label{Distr_SzSz_L}
\end{figure}
Fig.~\ref{Distr_SzSz_connected} shows probability distributions of the connected correlation function for $L=20$ and for different values of $W$, and Fig.~\ref{Distr_SzSz_L} shows distributions for different system sizes. For small $W$, the expectation is that ETH is obeyed, and the figures demonstrate that, at least for $W\lesssim 1.2$, the distributions are gaussian (Fig.~\ref{Distr_SzSz_connected}) while the finite size scaling is consistent with random matrix theory (cf. Fig.~\ref{fig:standard}). In the ETH regime, there is little variation in the distributions for different $r$ for a given system size -- one merely observes that the mean of the distribution shifts towards zero from $r=1$ to $r>1$, as discussed above, due to the different energy dependence of the operators at different $r$ (cf.\ Appendix~\ref{sec:e_dependence_loc_op}).
For larger values of $W$, a sharp peak forms at zero and persists deep into the MBL phase while the
distributions further depart from gaussianity by acquiring a distinctive asymmetry with higher
weight for negative values of the correlator. For $r=1$, the left-hand-side of the distribution
acquires a shoulder down to $-1/4$ while the positive side tapers off towards $+1/4$. For larger $r$
the shoulder is rounded on the left side, so that the asymmetry is less pronounced. Our analysis in
Appendix~\ref{sec:tails} confirms heavy tails on either side at strong disorder.
In common with other distributions of matrix elements of local operators, there is little apparent variation between different system sizes in the MBL regime. In contrast, within the ETH regime, for significant values of disorder, as exemplified by the $W=2$ data, the central width of the distribution narrows for larger system sizes while weight at the tails remains.
\subsection{Anderson Insulator vs MBL}\label{AI}
\label{sec:exp_zz_ai_mbl}
\begin{figure}[h]
\centering
\includegraphics{AI.pdf}
\caption{Distribution of the connected correlator $\langle S_i^z S_{i+r}^z \rangle_c$ for the disordered field Heisenberg chain with interaction ($\Delta\neq 0$, MBL) and without ($\Delta=0$, Anderson insulator) at distances $r=1,2,3,4$ and strong disorder $W=6.0$. Remarkably, in the Anderson insulating phase there is no positive weight in the distributions.
}
\label{AI_correlators}
\end{figure}
To understand the asymmetry of the distribution of the connected correlator $\langle S_i^z S_{i+r}^z \rangle_c$ for small distances $r$, it is useful to compare to the noninteracting limit.
In Eq.~\ref{Model}, $\Delta=0$ corresponds to the case of an Anderson insulator of noninteracting spinless fermions.
Fig.~\ref{AI_correlators} shows the connected $S_i^z S_{i+r}^z$ correlator for $\Delta=0.0,0.1,0.2,0.5,1.0$, $W=6$ and $r=1,2,3,4$. We see that for distance $r=1$ the distribution of negative correlations has little sensitivity to the value of $\Delta$, while the weight of positive correlations is exactly zero for the Anderson insulator. This gives a clear signature distinguishing MBL from the non-interacting case, albeit one where the asymmetry between positive and negative weights persists up to $\Delta=1$.
We can understand the vanishing positive weight for the Anderson insulating case for arbitrary $r$
through a straightforward application of Wick's theorem since the Anderson case $\Delta=0$ is a free
spinless fermion model with a gaussian action. In fermionic language, the $\hat{S}^z_i
\hat{S}^z_{i+r}$ operator takes the form $\left(c_i^\dagger c_i -
\frac{1}{2}\right)\left(c_{i+r}^\dagger c_{i+r} -\frac{1}{2}\right)$. Using Wick's theorem for
$\Delta=0$, we obtain in any eigenstate of the Hamiltonian: $\langle c_i^\dagger c_i c_j^\dagger c_j \rangle = \langle c_i^\dagger c_i \rangle \langle c_j^\dagger c_j \rangle - \langle c_i^\dagger c_j \rangle \langle c_j^\dagger c_i \rangle$. It follows that the connected correlator is $\langle
n\vert\hat{S}^z_i \hat{S}^z_{i+r}\vert n\rangle_c = -\left\vert \langle n\vert c_i^\dagger
c_{i+r}\vert n\rangle \right\vert^2$, which is necessarily $\leq0$. This leads to the extreme asymmetry of the connected
correlator distribution at $\Delta=0$.
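This argument is simple enough to check numerically in a few lines. The following sketch (written in \textsc{Python} with \textsc{numpy}, and independent of the exact-diagonalization code used for the figures; the hopping amplitude and field distribution are illustrative) builds a random-field free-fermion chain, constructs the two-point function $G_{ij}=\langle c_i^\dagger c_j\rangle$ of a randomly chosen Slater-determinant eigenstate, and verifies that the connected correlator $-|G_{i,i+r}|^2$ is never positive:
\begin{verbatim}
import numpy as np

L, W = 12, 6.0
rng = np.random.default_rng(0)

# Single-particle Hamiltonian: random fields
# plus nearest-neighbor hopping of strength 1/2
h = np.diag(rng.uniform(-W, W, L))
h += np.diag(0.5 * np.ones(L - 1), 1)
h += np.diag(0.5 * np.ones(L - 1), -1)
eps, phi = np.linalg.eigh(h)   # orbitals phi[:, k]

# Random half-filled Slater-determinant eigenstate
occ = rng.choice(L, size=L // 2, replace=False)
G = phi[:, occ] @ phi[:, occ].T  # G[i,j] = <c_i^dag c_j>

for r in range(1, 5):
    conn = np.array([-G[i, i + r] ** 2
                     for i in range(L - r)])
    assert np.all(conn <= 1e-12)  # never positive
    print(r, conn.min(), conn.max())
\end{verbatim}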
We see in Fig.~\ref{AI_correlators} that the asymmetry decreases as $r$ increases. This can be
understood perturbatively, as discussed in the next section.
\subsection{Perturbation Theory}
\label{sec:exp_zz_pt}
\begin{figure}[h]
\centering
\includegraphics{PT_SzSz.pdf}
\caption{Comparison of the exact $L=18$ distribution (ED) of the connected $S^z_iS^z_{i+1}$ correlator at strong disorder to the results from quasi-degenerate perturbation theory up to second order in $1/W$. All panels show the results of perturbation theory up to second order, except for the yellow curve in the lower left panel, which shows only zeroth-order degenerate perturbation theory results (mixing within the model space), which appear insufficient to reproduce the full form of the distribution. Lower right: perturbation-theory distributions of the connected correlator for larger system sizes.}
\label{PTConnectedSZSZ}
\end{figure}
As in the case of the spin-flip correlator, the distributions of the $S^z$ correlator are well
reproduced at large $W$ by the perturbation theory discussed in Sec.~\ref{sec:exp_pm_pt} and
Appendix \ref{sec:pt}. This is demonstrated in Fig.~\ref{PTConnectedSZSZ} for the distributions of
connected correlators with $r=1$. Perturbation theory also provides qualitative physical
explanations for the features reported in the last three subsections, as we elaborate below.
In the infinite disorder limit $W\to\infty$, the correlators $\langle S^z_iS^z_{i+r}\rangle$ and $\langle S^z_i\rangle$ are respectively $\pm 1/4$ and $\pm 1/2$ and the connected correlator simply vanishes, contributing to the sharp peak of the distribution at zero.
Indeed, the numerical data in Fig. \ref{Distr_SzSz}, for large values of $W$, shows peaks at the extreme values of the matrix elements. Similarly the connected correlator distribution in Fig. \ref{Distr_SzSz_connected} exhibits a smooth decay of the central peak at zero towards finite values of the connected correlator.
Inspecting the general expressions for matrix elements in mixed degenerate and nondegenerate perturbation theory presented in Appendix \ref{sec:pt}, we observe that the zeroth order term coming from mixing within the model space should account for some degree of broadening of the peaks.
The first order terms in $1/W$ trivially vanish, because the $S_i^zS_{i+r}^z$ operator does not connect states in the model space to those outside it, and this is why we proceed to compute the second order contribution.
Fig.~\ref{PTConnectedSZSZ} shows that perturbation theory to second order (yellow) compares very well to the exact finite-size results for $L=18$ and large disorder $W=12,\dots,16$, whereas the zeroth order results (orange) fail to capture the smooth falloff of the distribution to the left of the central peak. We also show results for larger system sizes in Fig.~\ref{PTConnectedSZSZ}, which are not reachable otherwise and which show that at large disorder the distributions are essentially converged.
We show below that, while zeroth order (i.e., quasi-degenerate) perturbation theory does not fully account
for the distribution shape of the connected correlators, it does capture the asymmetry.
Zeroth order perturbation theory mixes quasi-degenerate eigenstates of ${H}_0$ connected by ${V} = \sum_i F_{i,i+1}$ -- in other words states connected by flippable spins $\vert\ldots 01 \ldots\rangle$ or $\vert\ldots 10 \ldots\rangle$, where the ellipses denote some spin configurations. This means that, starting from an eigenstate $\ket{n_0}$ of $H_0$, we can expect mixing with the states $\{ F_{i,i+1} \ket{n_0} \}$, which are quasi-degenerate~\footnote{In practice, we set a cutoff of order $1/W$ on the energy difference below which we still treat states as quasi-degenerate. We checked that choosing $2/W$ or larger values does not significantly change the results.} with $\ket{n_0}$.
This means that the model space is then spanned by
\begin{equation}
\mathrm{D} = \text{span}\left(\ket{n_0}, \{ \ket{i_0} : \ket{i_0} = F_{i,i+1}\ket{n_0} \ \text{and} \ E_{i_0}\approx E_{n_0} \}\right).
\end{equation}
Let us now try to understand why the distribution of the connected correlator is asymmetric, starting from the case $r=1$.
For simplicity, we consider the case of a two dimensional model space, yielding a state (with $|b|^2 = 1-|a|^2$) of the form:
\begin{equation}
\ket{\psi} = a\ket{\dots \sigma_{i-1}\sigma_{i}\sigma_{i+1} \sigma_{i+2} \dots}+b\ket{\dots \tau_{i-1}\tau_{i}\tau_{i+1} \tau_{i+2}\dots}.
\end{equation}
The connected correlator is then given by
\begin{equation}
\begin{split}
&4\langle S_i^z S_{i+1}^z \rangle_c = |a|^{2} \sigma_{i} \sigma_{i+1} - \left(|a|^{2} - 1\right) \tau_{i} \tau_{i+1} \\ &- \left[|a|^{2} \sigma_{i} - \left(|a|^{2} - 1\right)\tau_{i} \right] \left[|a|^{2} \sigma_{i+1} - \left(|a|^{2} - 1\right)\tau_{i+1}\right].
\end{split}
\end{equation}
Inspecting this expression shows that most combinations of spin configurations
$\sigma_i$,$\sigma_{i+1}$,$\tau_{i}$,$\tau_{i+1}$ yield vanishing connected correlators and these
contribute to the central peak. The spin configurations on $i$, $i+1$ that yield non-vanishing
contributions are:
\begin{multline*}
\sigma_i\sigma_{i+1}\tau_i\tau_{i+1} \in \{ 0011, 1100 \}:
\\
4\langle S_i^z S_{i+1}^z \rangle_c =
1-(2|a|^2-1)^2 >0.
\end{multline*}
\begin{multline*}
\sigma_i\sigma_{i+1}\tau_i\tau_{i+1} \in \{ 0110, 1001 \}:
\\
4\langle S_i^z S_{i+1}^z \rangle_c = (2|a|^2-1)^2-1 <0.
\end{multline*}
This means we obtain two cases for positive correlators and two for negative correlators.
Evidently the case with a \emph{flippable} pair $\sigma_i\neq \sigma_{i+1}$ (yielding a negative correlator) appears at first order in $V$, since the two states are directly connected through $V$ and are included in the model space if they are quasi-degenerate, \emph{independent of the state of the neighboring spins} $\sigma_{i-1}$ and $\sigma_{i+2}$.
The case of an \emph{aligned} pair $\sigma_i=\sigma_{i+1}$ (yielding a positive correlator) is connected to its flipped partner state $\tau_i=-\sigma_i$ and $\tau_{i+1}=-\sigma_{i+1}$ only in second order of $V$, including a \emph{constraint on the neighboring spins} $\sigma_{i-1}=-\sigma_i$ and $\sigma_{i+2} = -\sigma_{i+1}$. We note that in addition, an intermediate state with one spin flip has to be quasi-degenerate, which is an additional constraint. For simplicity we have left this state out of the discussion.
From these arguments, we conclude that admixed states yielding negative correlations are much more probable than those yielding positive correlations, due to their appearance at different orders in $V$ and, additionally, owing to constraints which reduce the number of configurations that can yield positive correlations.
We now understand that for the case $r=1$, negative correlations are more probable than positive ones to zeroth order in degenerate perturbation theory for two reasons: negative correlations need only one order in $V$, while the appearance of positive weight requires two applications of $V$ and only a specific set of spin configurations can lead to positive correlations, thus reducing their likelihood.
The same set of arguments can now be generalized to the case $r=2$. We see that in this case, we always need to apply $V$ twice to get nonzero (both positive and negative) correlations; however, there are more possibilities of having a flippable pair $i,i+2$ (necessary for negative correlations) than of obtaining a mixture of $\ket{\dots 0\text{\textsf{x}}0\dots}$ and $\ket{\dots 1\text{\textsf{x}}1\dots}$ (necessary for positive correlations), since in the latter case the flippability depends on the state \textsf{x} of the middle spin $i+1$. Therefore, also in the case $r=2$, the distribution of the connected correlator is skewed towards negative correlations. For longer distances $r>2$, these constraints become increasingly weak (while an order $V^r$ is required to get nonzero correlators), leading to more and more symmetric distributions.
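The sign pattern derived above for the two-state toy model is easily verified by brute force. The following \textsc{Python} sketch (illustrative only; spins are encoded as $\pm 1$) enumerates all configurations $(\sigma_i,\sigma_{i+1},\tau_i,\tau_{i+1})$ and evaluates the connected correlator:
\begin{verbatim}
from itertools import product

a2 = 0.3  # |a|^2, any value strictly between 0 and 1

for s1, s2, t1, t2 in product((-1, 1), repeat=4):
    zz = a2 * s1 * s2 + (1 - a2) * t1 * t2
    z1 = a2 * s1 + (1 - a2) * t1
    z2 = a2 * s2 + (1 - a2) * t2
    conn = zz - z1 * z2   # = 4 <S^z_i S^z_{i+1}>_c
    if abs(conn) > 1e-12:
        print((s1, s2, t1, t2), round(conn, 6))

# Only tau = -sigma on both sites survives:
# aligned pairs (0011, 1100) give +(1-(2a2-1)^2),
# flippable pairs (0110, 1001) give the opposite,
# negative value.
\end{verbatim}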
\section{Conclusions}
We have presented the exact energy eigenstate distributions of spin-flip and $S_i^z S_{i+r}^z$ correlators in the disordered XXZ chain across the many-body localization transition.
While -- at very weak disorder -- we find gaussian distributions to very high precision, the distributions depart from gaussianity at intermediate disorder -- still well inside the thermal regime -- through the appearance of heavy tails that persist into the MBL regime.
The presence of these tails correlates with the appearance of sub-diffusive behavior in transport
properties observed in previous studies \cite{luitz_long_2016,luitz_anomalous_2016,roy_anomalous_2018}.
In the entire thermal regime, the variance of the correlator distributions falls off with increasing Hilbert space dimension, as one should expect for operators obeying ETH, but significant weight remains in the tails of the distribution, and measures of departures from gaussianity, including the Kullback-Leibler divergence and the kurtosis, show a peak for $W<W_c$ that sharpens with system size. The system-size dependence of local operator distributions is negligible inside the MBL regime, where ETH fails.
For large disorder, we have carefully investigated the distinctive forms of the correlator
distributions, unraveling various features of the distributions. We find that strong disorder
perturbation theory can reproduce the full distributions in the MBL phase. We note that our semianalytical perturbation theory scheme should be applicable to other models and could provide information about the effect of resonances in different systems.
For the \emph{spin-flip correlator}, the distributions are highly structured, with a central peak at zero, a pair of neighboring satellite peaks with disorder-strength-dependent positions at $\pm 3/(8W)$, and further maxima at the edge of the distribution at $\pm 1/2$.
All these features are perfectly captured by a \emph{quantitative strong disorder perturbation theory} that also gives insight into their origins. In particular, (i) the central peak comes from eigenstates carrying no pairs of spins that are flippable by the spin-flip operator, which accounts for $1/2$ of all states at strong disorder;
(ii) the satellite peaks at $\pm 3/(8W)$ arise from flipped pairs of spins that are maximally pinned by the random field and therefore maximally off resonant;
(iii) the $\pm 1/2$ peaks \emph{arise from resonances} -- strongly admixed quasi-degenerate states. These extremal peaks can only be captured by quasi-degenerate perturbation theory.
Overall, mixed quasi-degenerate and degenerate perturbation theory unifies all contributions and yields an unbiased result, matching the full exact distribution almost perfectly.
The $S_i^z S_j^z$ correlator distribution is more complicated to analyze since we have to go to second order in $1/W$ in our perturbative treatment. Our analysis reveals that for short distances $r=|i-j|$, the correlator is predominantly negative in the MBL phase, since \emph{eigenstates are biased to contain mixtures of flippable neighboring pairs}.
This leads to distributions skewed towards negative weights, most strongly so for noninteracting Anderson Insulators, where no weight for positive correlators is present due to Wick's theorem. Therefore, the $S_i^z S_j^z$ correlator distribution reveals a strikingly different behavior generated by interactions in the MBL case compared to the noninteracting model.
\begin{acknowledgments}
We would like to thank Jeff Rau for useful discussions.
We furthermore acknowledge PRACE for awarding access to
HLRS’s Hazel Hen computer based in Stuttgart, Germany under grant number 2016153659.
Our code is based on the PETSc\cite{petsc-web-page,petsc-efficient,petsc-user-ref} and SLEPc\cite{hernandez_slepc:_2005} libraries and uses MUMPS\cite{MUMPS1,MUMPS2}.
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Symbolic Regression (SR) aims at building mathematical models of numerical, and possibly experimental, data. Given data of the form $(y_i, \vec{x}_i)$, the goal is to automatically discover the analytical relationship between $y$ and $\vec{x}$, in terms of a mathematical language that usually includes basic operators like $(+,-,\times,/)$, possibly also non-algebraic functions such as sine, exp, ..., some free scalars (pure numbers), and the variables $\vec{x}$.
If such an analytic relation exists by construction, \textit{i.e.}{} when we feed the program with data $y_i = f_{\textrm{target}}(x_i)$, then the goal is to write candidate equations $\tilde{f}(\textrm{vocabulary})$ until one hits the target $\tilde{f} = f_{\textrm{target}}$. If not, the goal is to find a good approximation of the target function on the provided data (the \emph{training set}) with good generalization properties, meaning that we want the discovered functions to behave well on previously unseen data, known as the \emph{test set} or \emph{validation set}.
Besides being a difficult machine learning problem that is interesting on its own, SR can also be used to provide accurate models of physical systems that are too complex to be theoretically modelled. Any complex phenomena emerging from the underlying dynamics of a large number of degrees of freedom typically fall in this category. This happens in particular in the fields of meteorology, climatology, material properties, heat transfers, astrophysics, economy, financial data, complex systems, etc. Notice that the vocabulary can include derivatives with respect to the variables so that dynamics can also be discovered provided the data has some temporal component. Even in the case where the outcome of the program is not a perfect fit, finding accurate solutions may guide the researcher towards a better understanding of the system's underlying physics. In this regard, the interpretability of the fittest candidate equations is important. We shall comment on this later on.
SR has been studied quite extensively along these lines. For instance, references~\cite{Bongard9943, Schmidt81} try to recover physical laws and invariants of some mechanical systems,~\cite{quade2016prediction} focuses on real-world complex systems data sets, while, related to our concern, ~\cite{2018arXiv181010525W} aims at building an "automated physicist".
One of the main approaches to SR is Genetic Programming (GP), which is based on a computer-simulated Darwinian evolution\footnote{Notice that GP can of course be used to solve other problems than SR.}, see, \textit{e.g.}{}, textbooks \cite{koza1992genetic, poli2008field}. In this field, the vocabulary is known as the \emph{primitive set}, while the \emph{individuals} built from it correspond to candidate equations. Some number of individuals are created initially, then selected according to their \emph{fitness} (\textit{i.e.}{} some metric) with regard to the problem at hand, and then evolved by genetic operations, namely mutations of the equation, or crossovers between two equations. This scheme then iterates until the target is found or some computational budget is reached.
Although GP has proven very successful for finding highly fit individuals in large search spaces, it has some long standing issues regarding in particular the recurrent loss of diversity during evolution (see Section \ref{sec:background:tree}). Recently however, a new paradigm emerged of evolutionary algorithms that aims at exploring both quality and diversity of individuals. These algorithms are not looking for the fittest individual only, but rather look for a set of high performing ones given their behavior with respect to hand-designed features (see Section \ref{sec:background:map}). This so-called MAP-Elites algorithm \cite{DBLP:journals/corr/MouretC15} has been used as an improvement of GP in algebraic problems \cite{dolson2019exploring}, path-finding \cite{pugh2016quality, 2018arXiv180702397G}, design discovery \cite{gaier2017data}, robotics \cite{duarte2018evolution}, and is available as a \textsc{Python} library \cite{qdpy}. As far as we are aware, it has not been applied to SR yet. Although later improvements have been proposed to the algorithm, \textit{e.g.}{} \cite{pugh2016searching, cully2018quality}, we shall restrict ourselves here to its original version as published in \cite{DBLP:journals/corr/MouretC15}.
In this paper, we will apply this enhanced exploration algorithm to the problem of Symbolic Regression. However, we are not only concerned here with maintaining diversity, but also with a better exploitation of the results. One striking issue concerns the way free scalars in SR are handled. In most of the published literature, free scalars that appear while building an individual can either be picked randomly from a given integer set, for instance $\{-2, -1, 0, 1, 2\}$, or be randomly chosen floating-point numbers in a predefined and fixed interval -- the so-called "ephemeral constants" \cite{Davidson:2003}.
This, we believe, is not quite satisfactory. If we limit ourselves to integer scalars only, given that the equation has a maximal size, we cannot build all real-valued scalars this way. On the other hand, using ephemeral constants requires by construction many iterations of the evolution scheme before finding a value that is accurate enough.
Instead, we will write candidate equations with not-yet-assigned free scalars in forms such as $f = A_1 \times \exp(A_2 \times x)$ and then find the best scalars $A_i$. However, achieving this cannot rely on common gradient-based techniques, since they would often converge to a local optimum and miss the global one\footnote{Still we note that \cite{kommenda2013effects, quade2016prediction} try to fit numerical constants with gradient descent and show that it is already an improvement over the use of ephemeral constants. On the other hand, \cite{cerny2008using} uses instead another non-gradient based method, but limits itself to very simple targets only (at most order three polynomials).}. Instead, we use another evolutionary algorithm, namely the Covariance Matrix Adaptation-Evolution Strategy~\cite{hansen2003reducing} (CMA-ES), in order to look for the best fit for the free scalars. Details of the method can be found in Section \ref{sec:background}. Because CMA-ES is computationally heavy, it cannot reasonably be applied to very long equations (say of more than 60 elements) with many scalars. However, it represents a substantial improvement that is worth considering for mid-sized targets.
To summarize, our model relies on an improved exploration of the fitness landscape via the evolutionary Quality-Diversity algorithm, and then fits the best scalars with another evolutionary algorithm (CMA-ES). This last technique is specific to SR and shall not apply to other types of problems that GP usually deals with. These two methods combined result in a very high success rate on many targets found in the literature. Moreover, even when the algorithm fails to find the exact target, it returns very accurate fits thanks to the CMA-ES method (although generalization may be poor in this case). Section \ref{sec:background} provides some background to both plain GP and its limitations and to the MAP-Elites algorithm. It finally gives a quick overview of how CMA-ES method works. Section \ref{sec:model} describes the entire model by putting together all these pieces, and Section \ref{sec:results} shows experimental results.
\section{Background}
\label{sec:background}
\subsection{Tree-based GP for SR}
\label{sec:background:tree}
Before going to the GP algorithm and its improvements, let us first quickly outline how SR is usually implemented in so-called \emph{tree-based GP}. Mathematical expressions are created and modified as strings of symbols, usually in infix or postfix (reverse Polish) notation, as Abstract Syntax Trees (ASTs), or as a combination of the two, depending on which representation best fits each section of the SR algorithm. ASTs are especially convenient to run genetic operators such as crossovers and point-wise mutations. The SR algorithm is given a \emph{primitive set} of symbols, including $\varnothing$ that serves as a \emph{halt} symbol to terminate the expression. For example, in the postfix notation that we use, $\left(x+y\right) \times x$ is represented as $x \, y \, + \, x\, \times \, \varnothing$ and corresponds to the tree given in Fig. \ref{fig:tree}.
\begin{figure}[ht]
\begin{centering}
\begin{tikzpicture}[level/.style={sibling distance = 2cm/#1,
level distance = 1cm}]
\tikzstyle{hollow node}=[circle, draw, inner sep = 4pt, align = center, fill = black!5]
\node(0)[hollow node]{$\times$}
child{node[hollow node]{$+$} child{node[hollow node]{$x$}} child{node[hollow node]{$y$}} }
child{node[hollow node]{$x$}};
\end{tikzpicture}
\caption{Tree representation of $\left(x+y\right) \times x$ or $x \, y \, + \, x\, \times \, \varnothing$.\label{fig:tree}}
\end{centering}
\end{figure}
Basic mathematical rules can easily be encoded in ASTs. While the literature usually imposes a limit on the tree depth, we will instead impose a hard limit on the length $L$ of the mathematical expressions produced by the algorithm. This corresponds to a parsimony measure~\cite{koza1992genetic,iba1994genetic} of the generated equations. This choice was motivated by the fact that parsimony is taken directly into account into our methodology, see next subsection.
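To make the postfix representation concrete, a minimal stack-based evaluator can be sketched as follows (in \textsc{Python}; the token names are illustrative and do not reflect the exact internals of our implementation):
\begin{verbatim}
import math
import operator

BINARY = {'+': operator.add, '-': operator.sub,
          '*': operator.mul, '/': operator.truediv,
          '^': operator.pow}
UNARY = {'sin': math.sin, 'cos': math.cos,
         'exp': math.exp, 'ln': math.log}

def eval_postfix(tokens, variables):
    """Evaluate e.g. ['x','y','+','x','*']."""
    stack = []
    for tok in tokens:
        if tok in BINARY:
            b, a = stack.pop(), stack.pop()
            stack.append(BINARY[tok](a, b))
        elif tok in UNARY:
            stack.append(UNARY[tok](stack.pop()))
        elif tok in variables:
            stack.append(variables[tok])
        else:
            stack.append(float(tok))  # scalar leaf
    assert len(stack) == 1
    return stack[0]

# (2 + 3) * 2 = 10
print(eval_postfix(['x', 'y', '+', 'x', '*'],
                   {'x': 2.0, 'y': 3.0}))
\end{verbatim}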
GP algorithms require the following ingredients, whose relationships are summarized in Fig. (\ref{fig:plaingp}): a population of individuals and its initialization, fitness evaluation, selection, genetic mutations, population update, and meta-parameters. A very crude view of GP is the following. After the creation of an initial population of $N$ random equations (see \cite{koza1992genetic} for variations of initialization techniques), individuals are evaluated with respect to the target on the \emph{training set}; then some are selected either randomly or in relation to their fitness. These individuals go through genetic operations, basically mutations and crossovers; these new individuals are evaluated, and the population is updated to keep only the $N$ best equations. The algorithm then iterates over this scheme.
Termination occurs when one individual is accurate enough with respect to the \emph{validation set}, see Section \ref{sec:model:termination} for details, or when the computational budget is exhausted.
\begin{figure}[ht]
\begin{centering}
\includegraphics[width=\linewidth]{fig1.pdf}
\caption{High-level view of Genetic Programming iterations.\label{fig:plaingp}}
\end{centering}
\end{figure}
Many recent papers focus on improving one or more of these steps. For instance, one may not want to only target the best fit individual, but other features as well. This has led to \emph{multi-objective optimization} and \emph{Pareto-front exploitation} where fitness evaluation considers several objectives at the same time, see \textit{e.g.}{} \cite{6791888, laumanns2002archiving, smits2005pareto}. While the algorithm runs, a single population of individuals is kept in memory. However, it might be useful to decompose hard problems into smaller, easier sub-problems. Therefore this scheme can also be tweaked to incorporate problem decomposition, by keeping small blocks of expressions that have proven useful during the training, see, \textit{e.g.}{}, \cite{koza1994genetic, arnaldo2014multiple, astarabadi2018decomposition}. As we shall see, MAP-Elites includes a sort of problem decomposition when remembering the small but fit individuals.
GP still has long standing issues, however. One of them is the bloat that happens when no hard limits are set on the length of expressions; then, crossovers tend to create longer individuals while their fitness no longer improves. Many techniques have been designed to counter bloat, in particular setting hard limits, or setting a "soft limit" by disadvantaging long genomes (\textit{i.e.}{} individuals), see, \textit{e.g.}{}, \cite{Bloat}, or even variations of this \cite{poli2003simple}. As we shall see, the MAP-Elites algorithm can automatically counter this problem.
Another issue is the premature convergence or diversity loss during evolution. This may be prevented by increasing population size, modifying genetic operations, and/or selection mechanism. As a first guide to the improvement of genetic operations besides the basic point-wise mutation and crossovers, we refer the reader to \cite{poli2008field} and references therein.
\subsection{MAP-Elites}
\label{sec:background:map}
MAP-Elites algorithm belongs to the class of Quality-Diversity algorithms \cite{pugh2016quality, cully2018quality} that also includes, for example, Novelty Search with Local Competition (NS-LC) \cite{lehman2011evolving, lehman2013effective}. The algorithm is grid-based and stores the best-fit individuals in a grid of elites; the grid being $N$-dimensional with $N$ features chosen by the user.
As far as equations are concerned, quite natural features one may think of include the length of the equation (or other metrics of its complexity), the number of free scalars, the number of nested non-algebraic functions such as $\sin(\sin(...))$, the number of trigonometric functions, and the order of non-linearity of \cite{vladislavleva2009order}.
The algorithm is then quite simple. After the generation of initial random expressions, individuals are evaluated. The individuals produced are then sorted in terms of grid bins. Inside a given bin, only the best individual is kept. Once the grid has been populated, an iteration consists in producing new equations by applying genetic operators between the elements of the grid -- and only them -- chosen at random (uniform selection, as in \cite{DBLP:journals/corr/MouretC15}). These new states are then evaluated, and the grid is updated: some of these new equations may replace some previous best individuals in several bins of the grid, see Fig. \ref{fig:map}. Termination is similar to the pure GP case.
\begin{figure}[ht]
\begin{centering}
\includegraphics[width=\linewidth]{fig2.pdf}
\caption{High-level view of MAP-Elites iterations.\label{fig:map}}
\end{centering}
\end{figure}
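A minimal sketch of the MAP-Elites loop just described, with the feature descriptor, the genetic operators and the fitness function left as placeholders for the components detailed in the text, could look as follows:
\begin{verbatim}
import random

def map_elites(random_equation, mutate_or_cross,
               features, fitness,
               n_init=4000, n_iters=150):
    """Skeleton of the MAP-Elites loop; the four
    callables stand in for problem-specific parts."""
    grid = {}  # bin key -> (fitness, individual)

    def try_insert(ind):
        key, fit = features(ind), fitness(ind)
        if key not in grid or fit > grid[key][0]:
            grid[key] = (fit, ind)  # keep the elite

    for _ in range(n_init):  # initial random population
        try_insert(random_equation())

    for _ in range(n_iters):
        elites = [ind for (_, ind) in grid.values()]
        for _ in range(2 * len(elites)):  # 2N offspring
            try_insert(mutate_or_cross(
                random.choice(elites),
                random.choice(elites)))
    return grid
\end{verbatim}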
The algorithm, and its relation to other standard evolutionary methods such as, \textit{e.g.}{}, Pareto-front optimization or NS-LC, is discussed at length in \cite{DBLP:journals/corr/MouretC15}. We want to emphasize that using such a feature map addresses in many ways the aforementioned issues of GP for SR. First of all, the preservation of diversity is built into the method, which at the same time addresses the quest for multi-objective regression. Secondly, the bloat can be addressed by choosing the length (or complexity) of an equation as a feature. This will indeed force the algorithm to remember small individuals and automatically counter the bloat. Small individuals may not be excellent ones, but they are still the best seen so far of that given size or complexity. As such, it also acts as a kind of problem decomposition, where small individuals can be seen as relevant blocks for later building a larger and better equation. Regarding parsimony and Occam's razor, it is clear that when two individuals have similar fitness, the one using fewer free scalars should be considered better than the other. Therefore, it is natural to use the free scalar count of the equation as a feature. On top of increasing the diversity, it provides a way to rank the best solutions by the number of free parameters used, which is good practice when dealing with analytic models of physical phenomena (compare to the approach of \cite{de2018greedy}).
We believe that using both length (or complexity), and the number of free scalars in an equation, are two inescapable choices in MAP-Elites-based Symbolic Regression. Additional features might be included, either to increase the grid size, or to bring some more relevant diversity. To determine which additional features to use is quite a non-trivial question. We have chosen a 3-D grid (see Section \ref{sec:model}) based on the length, the number of scalars, and the number of non-algebraic functions such as $\sin$ or $\exp$. This last choice is arguable, but does increase the grid size and thus boosts the exploration.
\subsection{CMA-ES}
\label{sec:background:cmaes}
The CMA-ES method is described at length in a series of papers, see, \textit{e.g.}{}, \cite{hansen2003reducing}, and is available as a library in many programming languages. Although the details are quite complex, the main idea is that the algorithm browses the landscape in an evolutionary way. Say one wants to minimize a function $f(A_1, \ldots, A_n)$. First, a population of candidates for $\vec{A}$ is sampled from a normalized multivariate Gaussian; then the best individuals (in a mean-squared-error sense with respect to the actual target) are sub-sampled, and this in turn defines a new multivariate normal with a shifted mean and covariance matrix, according to which new candidates are drawn, \textit{etc}{}. The method requires many such iterations (from \SI{1000}{} to \SI{10000}{}), and each iteration relies on some number of function evaluations, resulting in a quite slow, but powerful method. It can be trapped in local optima, of course, but is also often able to capture the global optimum. See also Section \ref{sec:results:grid} for an explicit example. Meta-parameters include, amongst other things, the population size, the maximum number of iterations, and a time limit that we have modified with respect to default values -- see next section for details.
As an example, consider the target "Korns-7" (as named in \cite{DBLP:journals/corr/abs-1805-10365}), \textit{i.e.}{} $f_{\textrm{target}} = 213.80940889 \,(1 - e^{-0.54723748542 \, x})$ on the range $x \in [-50, 50]$. It is formally quite a trivial target, but it is also understandably difficult to find exactly without an appropriate method for finding these two scalars. Here, CMA-ES trivializes finding such an equation. In fact, such a simple equation is likely to be already present in the initial random population. In that case, applying the CMA-ES method to find the best fit for the $A$'s will terminate the algorithm in only one step. As a matter of fact, the target Korns-7 was always found very easily. Because of its triviality for our combined MAP-Elites + CMA-ES method, we decided to remove it from our target list.
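For illustration, fitting the two scalars of the Korns-7 ansatz with the \textsc{CMA-ES for Python} package \cite{cmapackage} takes only a few lines; this is a minimal sketch of the scalar-fitting step alone (starting point, step size, and the overflow guard are our illustrative choices), not our full pipeline, which also adds the final least-squares polish described in Section \ref{sec:model:general}:
\begin{verbatim}
import numpy as np
import cma

x = np.linspace(-50.0, 50.0, 200)
y = 213.80940889 * (1.0 - np.exp(-0.54723748542 * x))

def loss(A):
    # Candidate f = A1 * (1 - exp(-A2 * x));
    # clip the exponent to guard against overflow
    z = np.clip(-A[1] * x, -60.0, 60.0)
    return float(np.mean(
        (A[0] * (1.0 - np.exp(z)) - y) ** 2))

es = cma.CMAEvolutionStrategy([1.0, 1.0], 10.0,
                              {'verbose': -9})
es.optimize(loss)
print(es.result.xbest)  # near [213.809..., 0.547...]
\end{verbatim}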
\section{The model}
\label{sec:model}
\subsection{General specifications}
\label{sec:model:general}
The model shall run on \emph{all the targets} of Tables 1--5 with the \emph{same} following primitive set:
\begin{equation}
\label{voc}
\left \{\varnothing, x, y, z, \sin, \cos, \exp, \ln, \times, +, -, /, \wedge \right \}
\end{equation}
where $\wedge$ stands for exponentiation. We limited the set to three variables ($x, y, z$) at most for reasons to be discussed later. The dictionary is then completed by pure numbers (also referred to as scalars in the following). Then two options exist. First, the model can be run with integer scalars (namely '1' and '2' only in the following): this model is used to compare plain GP to MAP-Elites SR, see Section \ref{sec:model:comparison}. Second, the full model can be run with free floating-point scalars $A$ that are fitted by the CMA-ES method at the end, as detailed in Section \ref{sec:model:full}.
Basic simplifications on the fly are also included, such as $\exp(\ln(x)) =x$, \textit{etc}{}. As we do not rely on existing simplification packages, the algorithm is not equipped with a full expand-refactor simplification engine. Instead, it is limited to a set of basic hard-coded rules, which is sufficient for the application described in this paper. Note that when using formal scalars $A$, an expression like $A \times A \times \exp(x)$ can be simplified to $A \times \exp(x)$ prior to CMA-ES evaluation. Some simplifications require to add some special symbols to our dictionary, namely the neutral element -- if not already present--, the zero, and infinity.
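As an illustration of the kind of hard-coded rewriting we apply, a toy version of such rules acting on $(op, children)$ tuples might look as follows (this is a sketch, not our exact rule set):
\begin{verbatim}
def simplify(node):
    """Toy bottom-up rewriting on (op, children...)
    tuples; leaves are variables or scalars."""
    if not isinstance(node, tuple):
        return node
    op, args = node[0], [simplify(a) for a in node[1:]]
    if (op == 'exp' and isinstance(args[0], tuple)
            and args[0][0] == 'ln'):
        return args[0][1]   # exp(ln(u)) -> u
    if op == '*' and args[0] == 'A' and args[1] == 'A':
        return 'A'          # A*A -> A (scalar absorbs)
    if op == '-' and args[0] == args[1]:
        return 0.0          # u - u -> 0
    return (op, *args)

print(simplify(('exp', ('ln', ('+', 'x', 'A')))))
print(simplify(('*', 'A', 'A')))
\end{verbatim}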
We do not use protected divide of any kind. When infinity occurs in simplifications such as $1/(x -x) \to 1/0 \to \infty$, the equation is discarded prior to evaluation. Because our simplification engine is not comprehensive however, the program occasionally creates zero division errors, in which case the maximum penalty is attributed to the equation. The same applies for other kind of exceptions such as overflow errors.
We restrict ourselves to very basic genetic operations, namely point-wise mutations between symbols of equal arities, and basic crossovers. By this we mean nodes for crossovers are chosen randomly amongst internal nodes and leaves. We do not try to improve this by weighting probabilities for choosing internal versus terminal nodes, for instance, or other refinements such as the ones described in \cite{poli2008field}. Because we set a hard limit on the length of an equation, crossovers are tried but discarded as soon as the resulting equation does not fit inside this limit. This is a bit different from the literature where usually a hard limit on max depth of the tree is set up (using a maximum length $L$ instead is more relevant when using a MAP-Elites grid).
Also, and this can be seen as the only expert knowledge we do implement, we limit ourselves to a maximum number $K$ of nested functions such as $\sin(\sin(\exp(...)))$ for interpretability reasons. In particular, runs were made with $K=1$ ($K=2$ also works fine). Again, crossovers leading to out-of-bound equations are discarded. Alternatively, the number of nested functions could have been used as yet another feature for the MAP-Elites grid.
Regarding genetic operations, states are chosen randomly among the population (both for plain GP and MAP-Elites): in $40\%$ of the cases, one symbol only is mutated; in $40\%$ of the cases, two random states go through a crossover, returning two offspring; and in the last $20\%$, two states are chosen at random and go through both a mutation and a crossover. Also, when a non-algebraic function is chosen for mutation, there is a $30\%$ probability to simply drop the symbol, as in $x\times\sin(x) \rightarrow x^2$. This choice was made in order to limit the number of non-algebraic functions, and it helps fight the growing number of nested functions that crossovers usually produce (unless a hard limit on such terms is set).
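In pseudo-code, the dispatch of genetic operations with the probabilities quoted above reads (the operator routines are placeholders for those described in the text, and the ordering of the combined mutation-plus-crossover case is our illustrative choice):
\begin{verbatim}
import random

def breed(population, point_mutation, crossover):
    """One breeding event, probabilities as above."""
    a, b = (random.choice(population),
            random.choice(population))
    p = random.random()
    if p < 0.4:   # 40%: single point-wise mutation
        return [point_mutation(a)]
    if p < 0.8:   # 40%: crossover, two offspring
        return list(crossover(a, b))
    # 20%: both a crossover and a mutation
    return [point_mutation(c) for c in crossover(a, b)]
\end{verbatim}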
As said in Section \ref{sec:background}, the MAP-Elites grid that we use is three-dimensional and uses as features the length of the equation (in postfix notation), the number of free scalar parameters, and the number of non-algebraic functions such as $\sin, \exp, \ldots$ involved in an expression. We thus use a grid with one bin for each length of the equation from $1$ to $L$, one bin for each number of non-algebraic functions from $0$ to $8$ (if an equation has more than $8$ functions, then it enters the last bin), and one bin for each number of free scalars from $0$ to $L/2$ (which is the maximum possible). To give orders of magnitude, the grid is usually small, with around $150$ elements for equations with a maximum size of $L=15$, and may be as large as around a thousand individuals for larger equations with $L \geq 35$.
Finally, because CMA-ES is not a perfect method, once it has returned a recommendation for the best $A_i$'s, we apply a least-squares method to descend to the closest minimum, if not exactly found previously. This usually increases the method's precision. As a result, for a (trivial) target like, say, $f=x^3$, our method might well return the exact result $f = x \times x \times x$ or, also, $f=x^A$. In the latter case, CMA-ES plus the least-squares fit combined will in general return $A = 2.99999(\ldots)$. In this case, we consider that the target has been exactly found; see next section for the termination criterion. Note that because non-integral exponents are permitted, we need to restrict the sampled values for the variables to positive ranges, see Section \ref{sec:results:targets}.
We have implemented the model in \textsc{Python}, using \textsc{CMA-ES for Python} \cite{cmapackage}, and \textsc{scipy} module for least squares. Everything else has been coded from scratch.
\subsection{Cost function and termination criterion}
\label{sec:model:termination}
Following common practices of the SR literature, the program writes equations $\tilde{f}= \tilde{f}(\textrm{primitive set})$, where the right-hand side cannot\footnote{This prevents finding polynomial equations in $f$ that would first require a numerical solver which is a more complex task and left for future work.} have terms depending on $\tilde{f}$. Differential equations of the form $(d/dx)^n \tilde{f} = \ldots$ could also be produced using the same approach, but we restrict ourselves to non-differential equations in this paper.
Once the best free scalars $A$ in $\tilde{f}$ are found, we simply compare the right-hand side with the target on the training set. Similarly to what can be found in multi-objective regression papers, we found it more effective to also take into account a "derivative cost". We thus use the cost $C$:
\begin{equation}
\label{cost}
C = \vert f_{\textrm{target}}-\tilde{f}\vert + \sum_i \vert \partial_i (f_{\textrm{target}} - \tilde f)\vert
\end{equation}
\textit{i.e.}{}, using the L1 norm on both the distance to the target, and the distance of the first derivatives to the target. A sum is used for the derivative cost when the function has more than one variable. The cost is then properly normalized in order to return a reward (or fitness) scaled between $-1$ and $1$. By running some preliminary tests, we found that including the derivative cost speeds up the convergence.
Because CMA-ES is so accurate, however, the termination criterion can be subtle. Indeed, the program quite often produces spectacular fits (accurate to $10^{-5}$ in relative values) to the target on its \emph{training} set, even if the formal equation is not the expected one. See for instance Fig. \ref{fig:fit}. Therefore, in our target list in Section \ref{sec:results} taken from the literature, we have been careful to often increase the range of the \emph{validation} set to prevent the algorithm from stopping early on false positives. (We recall that the termination criterion is based on the validation set only.) Going back to the trivial example where the target is $f_{\textrm{target}}=x^3$, our method might return $f=x^A$ with $A = 2.99999(\ldots)$. We consider that the target is exactly found in this case, in the following sense: one computes the NRMSE\footnote{As defined in Eq. (6) in \cite{miranda2017noisy}.} (Normalized Root Mean Square Error), which in that case would typically be close to $10^{-16}$ due to numerical precision. We define our termination criterion as $NRMSE \leq 10^{-6}$.
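For concreteness, the cost of Eq.~(\ref{cost}) and the NRMSE-based termination test can be sketched as follows (a simplified version: derivatives are taken by finite differences, and the RMSE is normalized by the standard deviation of the target, one common convention; the paper follows Eq. (6) of \cite{miranda2017noisy}):
\begin{verbatim}
import numpy as np

def cost(f_target, f_cand, x):
    """L1 distance on values and first derivatives."""
    d = np.sum(np.abs(f_target - f_cand))
    d_der = np.sum(np.abs(np.gradient(f_target, x) -
                          np.gradient(f_cand, x)))
    return d + d_der

def nrmse(y_true, y_pred):
    # RMSE normalized by the target's std. deviation
    return (np.sqrt(np.mean((y_true - y_pred) ** 2))
            / np.std(y_true))

def terminated(y_true, y_pred):
    return nrmse(y_true, y_pred) <= 1e-6
\end{verbatim}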
\subsection{Comparison of plain GP versus MAP-Elites}
\label{sec:model:comparison}
As said, we made some preliminary runs to check whether MAP-Elites improves over plain GP. In order to do so, we used a vocabulary with two integer scalars '1' and '2' (\textit{i.e.}{} no CMA-ES) on the targets of Table 1. Genetic operations have the same parameters in both runs; GP runs with a population size of 1000 individuals. Selection is made with a tournament\footnote{Two individuals are drawn randomly from the population. Only the best one is kept for mutation; otherwise they go through a crossover.} of size 2, and genetic operations are performed until 2000 new individuals are produced. These are then evaluated, and the 1000 best of these 3000 individuals become the new population. Regarding the MAP-Elites run, 4000 individuals are first randomly created, evaluated, and binned in the grid. Then, when the grid has $N$ elements, $2N$ new individuals are created by genetic operations, evaluated, binned, and the grid is updated.
The program either stops when the target is hit, or after $10^5$ evaluations of the fitness function. We made 100 such runs for the six targets of Table 1. It shows that MAP-Elites is a slight improvement over GP, although it requires slightly more fitness evaluations.
\subsection{Full algorithm MAP-Elites + CMA-ES}
\label{sec:model:full}
We could have simply used a MAP-Elites + CMA-ES algorithm on an initial collection of random equations. The literature often goes for the half-and-half method for initializing the population. In fact, since we have realized that CMA-ES is a slow method while using integer scalars is very fast, our full algorithm rather has three steps:
\begin{itemize}
\item First, we create a collection of 4000 random equations. Then we evolve for $P = 150$ iterations the grid of equations by using a grid with no free scalar $A$, but only scalars "1" and "2". The maximum size is set to $L +10$. In the case where the target includes no floating-point number, it may already be found at this step, and quite often is. This is step 1 of Fig. \ref{fig:fullalgo}.
\item If not, promising equations from the grid such as $\tilde{f} = \sin(2) x/(1+x)$ are transformed into their free scalar counterpart, namely: $\tilde{f} = \sin(A_1) x/(A_2+x)$, \textit{i.e.}{} $\tilde{f} = A_1 \ x/(A_2+x)$ after simplification. This is step 2 of Fig. \ref{fig:fullalgo}. They are then evaluated by CMA-ES, and stored inside a new grid.
\item This grid serves as an initialization for $Q=150$ iterations using MAP-Elites with free scalars $A$ and CMA-ES, see step 3 of Fig. \ref{fig:fullalgo}.
\end{itemize}
In other words, we use the MAP-Elites method with ephemeral constants (drawn randomly from the integer set $(1,2)$) to generate an initial population for CMA-ES which is \textit{a priori} much more relevant than random equations. This means that we do have two distinct vocabularies and two distinct sets of simplification rules in our code.
The full algorithm can be summarized by the following diagram:
\begin{figure}[ht]
\begin{centering}
\includegraphics[width=\linewidth]{fig3.pdf}
\caption{High-level view of the full algorithm.\label{fig:fullalgo}}
\end{centering}
\end{figure}
As a remark, even if it is not required in a strict sense, simplification often reduces the number of free parameters per equation. For instance $\tilde{f} = A\times A\times A\times(x+A)$ is equivalent to $\tilde{f} = A_1\times x + A_2$. Being able to make this simplification is a huge advantage because the fewer the free scalars, the faster the CMA-ES method executes. The next subsection gives orders of magnitude for the execution time.
\subsection{Execution time}
\label{sec:model:execution}
The first step of the algorithm described above runs much faster than the second one, even in a single-CPU implementation. Its execution time is in fact no different from that of standard SR packages like DEAP \cite{DeRainville:2012:DPF:2330784.2330799}.
The CMA-ES method is very time-consuming. On mid-sized equations (say of length $30$) with around 5 free scalars, the CMA-ES method takes around 30 seconds on a single CPU. In practice, we ran CMA-ES on a 40-core machine. But even in this case, for difficult targets with maximum size $L=40$ or $45$, it took almost two days to produce around \SI{200000}{} CMA-ES evaluations. The runs we report in Table 5 for difficult targets typically took between one and three days per target per run. Clearly, our method cannot be generalized to very long expressions.
The details of the execution time required for each run are reported in the captions of Tables 1 to 5.
\section{Experimental Results}
\label{sec:results}
\subsection{Targets description}
\label{sec:results:targets}
As already said, we use the same vocabulary for all our targets. The only change from one to another is the maximum length an individual can have. The table of targets we used can be found on the last page of Ref.~\cite{DBLP:journals/corr/abs-1805-10365}, which is in itself a compilation of many targets from the literature (see also \cite{white2013better}). From this original list of 63 targets, we discarded the ones with more than three variables as well as the ones that are too simple for our implementation\footnote{Namely Keijzer-7, Keijzer-8, Keijzer-13, Korns-4, Korns-5, Korns-6, Korns-7, Nguyen-1, Nguyen-8, Vladislavleva-6.}.
We ended up with the 31 targets listed in result Tables 1 to 5. Table 2 lists "easy targets" for which a maximal expression length of 15 was enough (although the success rate might improve with a slightly larger maximal size), while Table 3 reports "reasonably difficult" ones with length around $L=30$, and Tables 4 and 5 list difficult targets for which our success rate dropped significantly.
As in Ref.~\cite{DBLP:journals/corr/abs-1805-10365}, the notation $x : E[0,1, 0.05]$ means that the variable $x$ is sampled in the interval $[0,1]$ with constant steps of size $0.05$, while $x : U[0,1, 100]$ means for instance sampling 100 points for $x$ in the interval $[0,1]$ according to uniform distribution.
Our success rate is in most cases greater than $80 \%$. One might object that fixing the maximal length $L$ from the start is expert knowledge. But in practice $L$ is just a meta-parameter that can be adjusted by the user, starting from small values that lead to shorter execution times, and progressively increasing it when the success rate is too small. As a matter of fact, we did proceed in this way for some of the targets of Table 3 that we first thought would be easy at small length, but which turned out to be harder than expected. In theory, nothing would prevent a more generic version of the algorithm presented in this paper from auto-adjusting this parameter.
Since the CMA-ES method processes floating-point numbers, one must avoid expressions that are not defined on $\mathbb{R}$, such as $x^{2.999}$ for $x\in\mathbb{R}^{-}$.
Therefore, compared to the table of \cite{DBLP:journals/corr/abs-1805-10365}, the training and validation set ranges were systematically cut to subsets of $\mathbb{R}^+$. The same number of points was given for the training sets, but on a \emph{smaller} range. Yet, it did not prevent us from achieving high success rates.
Also, we give ourselves a much smaller training set for some of the multidimensional targets. Consider for example Keijzer-5 (see Table 2). Ref.~\cite{DBLP:journals/corr/abs-1805-10365} reports a training set $U [-1, 1, 1000]$ for $x, y$ and $U [-1, 1, 10000]$ for $z$. This would mean providing far too many points to the CMA-ES method, which already requires many iterations. This would translate into a very large execution time. For this reason, we limited the training set for targets of this sort. In particular, for this target, we give ourselves a training set of $5\times 5\times 10$ points only. Again, this does not prevent the method from achieving a $100 \%$ success rate here on the validation set.
However, this is also a clear drawback of the method for high-dimensional targets. Recall that CMA-ES optimizes the $A_i$'s in $\tilde{f}(\vec A, \vec x)$ with respect to the quadratic cost $\sum_{\vec x}(f_{\textrm{target}}(\vec{x}) - \tilde{f}(\vec A, \vec x))^2$. Therefore it cannot really do so without taking too long when $\vec x$ contains of the order of a thousand points. Thus, in practice, we could only experiment with this method up to 4-dimensional targets with 5 points along each dimension ($5^4 = 625$). For this reason, we explore at most three-dimensional targets in this paper. This limitation may however be overcome by using another optimizer than CMA-ES.
\subsection{Result tables description}
\label{sec:results:tables}
Table 1 is self-explanatory. Regarding the results reported in Tables 2, 3, and 5 for the combined MAP-Elites + CMA-ES method of Section \ref{sec:model:full}, the first column corresponds to the target name listed in \cite{DBLP:journals/corr/abs-1805-10365}. The second column gives the formula, while the third column details the training set. As said, we only reduced the range to positive values with respect to \cite{DBLP:journals/corr/abs-1805-10365}, and sometimes we reduced the number of points provided to the evolutionary algorithm, but never increased them. The fourth column corresponds to the validation set, usually larger than the one given in this reference, for reasons already explained in Section \ref{sec:model:termination}.
Then, the fifth column gives the hit rate for the first step of the algorithm with integer scalars $1$ and $2$. Given that only a few of these targets involve floating-point numbers, this first step is already able to reach the target, especially the easy ones of Table 2. On the contrary, it can never hit Keijzer 1, 2, 3, which are $0.3 x \sin(2 \pi x)$ on various training sets. Column 6 gives the number of evaluations performed before actually hitting the target (when it was hit), averaged over the number of independent runs.
Columns 7 and 8 are similar to 5 and 6, but this time for the second step of the algorithm involving CMA-ES, and after conversion of integer scalars to free scalars $A_i$. The last column is the sum of both hit rates, \textit{i.e.}{} our main experimental result.
The following Tables have only 29 targets, and not 31: this is because we shall not report on Vladislavleva-7 $$f = (x-3)(y-3) + 2 \sin((x-4) (y-4))$$ and Vladislavleva-5 $$f = \frac{30 (x-1) (z-1)}{(x-10) y^2},$$ for which we have no success at all (although quite good NRMSE).
\begin{table*}[p]
\caption{Comparison between plain GP and Map-Elites with only free scalars being ``$1$'' and ``$2$''. See Section \ref{sec:model:comparison} for more details. Based on 100 independent runs. $N$-eval is the average number of individuals evaluated when the solution \emph{is} exactly found. MAP-Elites shows a slight improvement over plain GP, although it requires a bit more evaluations before convergence. The training and validation set intervals are the same as the ones specified in Table 2.\label{Table1}}
\begin{tabularx}{\textwidth}{@{}l*{10}{c}c@{}}
\toprule
Target name & Target formula & Hit rate - GP & N-eval & Hit rate - MAP-Elites & N-eval \\
Nguyen-2 & $x + x^2 + x^3 + x^4$ &67 \% & 22292 & 93 \% & 28969 \\
Koza-3 & $x^6 -2 x^4 + x^2$ & 24 \% & 34454 & 50 \% & 42143 \\
Meier-3* & $x^2 y^2/(x+y)$ &94 \% & 29251 & 100 \% & 26320
\\
Meier-4* & $x^5 y^{-3}$ &57 \% & 41364 &52 \% & 59896
\\
Nguyen-9* & $\sin(x)+ \sin(y^2)$ & 87 \% & 20121 &85 \% & 56461
\\
Burks & $4 x^4 + 3 x^3 + 2 x^2 + x$ & 2 \% & 46881 &34 \% & 74735
\\
\bottomrule
\end{tabularx}
\end{table*}
\begin{table*}[p]
\caption{Results for small targets with $L=25$ for the first step (no CMA-ES) and $L=15$ with CMA-ES. Starred targets correspond to targets where the intervals for $x$, $y$ (and $z$ if any) are the same. Based on 20 independent runs. When the run fails after $P = 150$ iterations for the first step and $Q = 150$ iterations for second step, computation time is around 20 minutes on a 40-cores computer (for one target).\label{Table2}}
\begin{tabularx}{\textwidth}{@{}l*{10}{c}c@{}}
\toprule
Target name & Target formula & Training set & Test Set & Hits - no CMA-ES & N-eval & Hits (CMA-ES) & N-eval & Hits (total) \\
Nguyen-2 & $x + x^2 + x^3 + x^4$ & U [0, 1, 20] & U [0, 2, 200] &95 \% & 24624 & 5 \% & 4221 & \textbf{100 \%} \\
Koza-3 & $x^6 -2 x^4 + x^2$ & U [0, 1, 20] & U [0, 2, 200] &40 \% & 48722 & 45 \% & 16899 & \textbf{85 \%} \\
Meier-3* & $x^2 y^2/(x+y)$ & U [0, 1, 20] & U [0, 2, 50] &100 \% & 27948 & - & - & \textbf{100 \%}
\\
Meier-4* & $x^5 y^{-3}$ & U [0, 1, 20] & U [0, 2, 50] &80 \% & 41217 & 20 \% & 3957 & \textbf{100 \%}
\\
Nguyen-9* & $\sin(x)+ \sin(y^2)$ & U [0, 1, 20] & U [0, 2, 100] &90 \% & 56254 & 10 \% & 2184 & \textbf{100 \%}
\\
Keijzer-1 & $0.3 x \sin(2\pi x)$ & E [0, 1, 0.05] & E [0, 10, 0.05] &0 \% & - &95 \% & 5704 & \textbf{95 \%}
\\
Keijzer-2 & $0.3 x\sin(2 \pi x)$ & E [0, 2, 0.05] & E [0, 4, 0.05] &0 \% & - & 100 \% & 5611 & \textbf{100 \%}
\\
Keijzer-3 & $0.3 x \sin(2 \pi x)$ & E [0, 3, 0.05] & E [0, 4, 0.05] &0 \% & - & 100 \% & 3717 & \textbf{100 \%}
\\
Nguyen-5 & $\sin(x^2) \cos(x) -1$ & U [0, 1, 20] & U [0, 1.2, 200] &20 \% & 46194 & 60 \% & 19551 & \textbf{80 \%}
\\
Nguyen-6 & $\sin(x) + \sin(x + x^2)$ & U [0, 1, 20] & U [0, 1.2, 200] &60 \% & 48362 & 35 \% & 13898 & \textbf{95 \%}
\\
Sine & $\sin(x) + \sin(x + x^2)$ & E [0, 6.2, 0.1] & U [0, 10, 100] &90 \% & 34619 & 10 \% & 10417 & \textbf{100 \%}
\\
Koza-2 & $x^5 -2 x^3 + x$ & U [0, 1, 20] & U [0, 2, 200] &45 \% & 57392 & 50 \% & 17520 & \textbf{95 \%}
\\
\bottomrule
\end{tabularx}
\end{table*}
\begin{table*}[p]
\caption{Results for mid-sized targets. Based on averaging 20 independent runs. Burks, Keijzer-14 and Nguyen-3 are run with maximum $L$ of 20 for the CMA-ES method, Nguyen-7 with $L=25$, and the remaining ones with $L=30$. One run per target takes at most one hour for $L=20$ (\textit{i.e.}{} when it fails), and up to 5 hours for $L=30$.
\label{Table3}}
\begin{tabularx}{\textwidth}{@{}l*{10}{c}c@{}}
\toprule
Target name & Target formula & Training set & Test Set & Hits & N-eval & Hits & N-eval & Hits \\
& & & &(no CMA-ES) & &(CMA-ES) & & (total)
\\
Burks & $4 x^4 + 3 x^3 + 2 x^2 + x$ & U [0, 1, 20] & U [0, 3, 200] &35 \% & 79033 & 60 \% & 16350 & \textbf{95 \%}
\\
Keijzer-14* & $ \frac{8}{2 + x^2 + y^2}$ & U [0, 3, 20] & E [0, 4, 0.1] &30 \% & 139554 & 65 \% & 6644 & \textbf{95 \%}
\\
Nguyen-3 & $x + x^2 + x^3 + x^4 + x^5$ & U [0, 1, 20] & U [0, 2, 200] &60 \% &65082 & 30 \% & 17523 & \textbf{90 \%}
\\
Nguyen-7 & $\ln(1+x) + \ln(1+ x^2)$ & U [0, 2, 20] & U [0, 3, 200] &0 \% & - & 20 \% & 42459 & \textbf{20 \%}
\\
R1 & $(x+1)^3/(x^2 - x +1)$ & E [0, 2, 0.1] & U [0, 3, 100] &5 \% &143850 & 90 \% & 50741 & \textbf{95 \%}
\\
R2 & $(x^5 - 3 x^3 +1)/(x^2 +1)$ & E [0, 2, 0.1] & U [0, 4, 400] &0 \% & - & 85 \% & 73009 & \textbf{85 \%}
\\
Keijzer-5 & $30 x z/((x-10) y^2)$ & $\frac{x,y \, : \, U [0, 2, 5]}{z \, : \, U [1, 5, 10]}$ & $\frac{x, y \, : \, U [0, 3, 20]}{z \, : \, U [0, 5, 30]} $ &5 \% & 348979 & 95 \% & 14983 & \textbf{100 \%}
\\
Keijzer-12* & $x^4 - x^3 + 0.5 y^2 - y$ & U [0, 3, 20] & E [0, 4, 0.1] &30 \% & 259639 & 70 \%
& 47086 & \textbf{100 \%} \\
Keijzer-15* & $\frac{x^3}{5} + \frac{y^3}{2} - y - x $ & U [0, 3, 20] & E [0, 4, 0.1] &0 \% & - & 100 \% & 35894 &\textbf{100 \%}
\\
Keijzer-11 & $x y + \sin((x-1)(y-1))$ & U [0, 3, 20] & E [0, 4, 0.1] &0 \% & - & 15 \%
& 71605 & \textbf{15 \%}\\
Nguyen-4 & $x + x^2 + x^3+ x^4 + x^5 + x^6$ & U [0, 1, 40] & U [0, 1.5, 200] &40 \% & 181816 & 55 \% & 41728 & \textbf{95 \%}
\\
Pagie-1 & $1/(1 + x^{-4}) + 1/(1+ y^{-4})$ & E [0, 5, 0.2] & U [0, 6, 20] &15 \% & 233542 & 85 \% & 48647 & \textbf{100\%}
\\
\bottomrule
\end{tabularx}
\end{table*}
\begin{table*}[!htpb]
\caption{Description of difficult targets.\label{Table4}}
\begin{tabularx}{\textwidth}{@{}l*{10}{c}c@{}}
\toprule
Target name & Target formula & Training set & Test Set & Maximal Length used (CMA-ES)\\
R3 & $\frac{x^6 + x^5}{x^4 + x^3 +x^2 + x +1}$ & E [0, 1, 0.05] & U [0, 2, 100] & 35
\\
Vladislavleva-1 & $e^{-(x-1)^2}/((1.2 + (y-2.5)^2)$ & (x,y) : U [0.3, 4, 20] & (x,y) : E [0, 8, 0.1] & 35
\\
Keijzer-4 & $x^3 e^{-x} \cos(x) \sin(x) (\cos(x) \sin(x)^2-1)$ & E [0, 10, 0.1] & U [0, 14, 1000] & 40
\\
Nonic & $\sum_{i=1}^{i=9} x^i $ & E [0, 1, 0.05] & U [0, 2, 100] & 40
\\
Vladislavleva-3 & $x^3 e^{-x}\cos(x)\sin(x)(\cos(x)\sin(x)^2-1)(y-5)$ & x : E [0.05, 10, 0.1] & x : U [0, 10, 50] & 45
\\
& & \, y : E [0.05, 10.05, 2] & y : U [0, 10, 10]
\\
\bottomrule
\end{tabularx}
\end{table*}
\begin{table*}[!htpb]
\caption{Results for difficult targets, based on 10 independent runs. The first step of the algorithm (without CMA-ES) can see more than \SI{500000}{} different equations. This is however not enough to hit the target. Computational time is around two days per target per run. \label{Table5}}
\begin{tabularx}{\textwidth}{@{}l*{10}{c}c@{}}
\toprule
Target name & Hits - no CMA-ES & N-eval & Hits (CMA-ES) & N-eval & Hits (total) \\
\midrule
R3 &20 \% & 325312 & 70 \% & 64319 &\textbf{90\%}
\\
Vladislavleva-1 &0 \% & - & 30 \% & 101360 & \textbf{30 \%}
\\
Keijzer-4 &0 \% & - & 40 \% & 105969 & \textbf{40 \%}
\\
Nonic &0 \% & - & 20 \% & 170928 & \textbf{20 \%}
\\
Vladislavleva-3 &0 \% & - & 20 \% & 246756 & \textbf{20 \%}
\\
\bottomrule
\end{tabularx}
\end{table*}
\subsection{Illustration of MAP-Elites Grid}
\label{sec:results:grid}
As an illustration, Fig.\ \ref{fig:grid} shows the population on the MAP-Elites grid for a successful run on the target 'Burks', as described in Tables 1 and 2. The $x$-axis corresponds to the number of free scalars, from 0 to 16, and the $y$-axis to the length (from 0 at the top to 30). This is not the full 3-D grid, but only a slice corresponding to a number of functions less than or equal to 1. The grid is not (and in fact cannot be) populated in the top right corner. The figure shows that small individuals perform quite poorly, as do individuals with too many or too few free scalars. A minimal sketch of the corresponding grid-insertion rule is given below, after Fig.\ \ref{fig:grid}.
\begin{figure}[!htpb]
\begin{centering}
\vspace{-\baselineskip}
\includegraphics[width=\linewidth]{figgrid.pdf}
\vspace{-\baselineskip}
\caption{A slice of the MAP-Elites grid for target Burks $4 x^4 + 3 x^3 +2 x^2+x$.\label{fig:grid}}
\end{centering}
\end{figure}
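To make the binning concrete, the following is an illustrative Python sketch of the grid-insertion rule, not the implementation used for the benchmarks. A candidate equation is mapped to a cell by its features and replaces the incumbent elite only if it achieves a lower error; the caps on length and free scalars follow the axes of the grid figure, while the cap on the number of non-algebraic functions is an assumption.
\begin{verbatim}
# Minimal MAP-Elites insertion sketch. The caps on length (30) and
# free scalars (16) follow the grid axes above; the cap of 4 on the
# number of non-algebraic functions is an illustrative assumption.
grid = {}  # (length, n_scalars, n_functions) -> (error, candidate)

def try_insert(candidate, error, length, n_scalars, n_functions):
    key = (min(length, 30), min(n_scalars, 16), min(n_functions, 4))
    elite = grid.get(key)
    if elite is None or error < elite[0]:  # keep lowest error per cell
        grid[key] = (error, candidate)
        return True
    return False
\end{verbatim}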
\subsection{Discussion}
\label{sec:results:discussion}
\begin{figure}[!htpb]
\begin{centering}
\vspace{-\baselineskip}
\includegraphics[width=\linewidth]{keijzer3.png}
\vspace{-\baselineskip}
\caption{Landscape projected onto one dimension: this is $g(a) = \sum_{x_i}\left(0.3 x_i \sin(2 \pi x_i) - 0.3 x_i \sin(a x_i) \right)^2$, where the $x_i$ are given by the training set of Keijzer-3 in Table 2.\label{fig:mse}}
\end{centering}
\end{figure}
\begin{figure}[!htpb]
\begin{centering}
\vspace{-\baselineskip}
\includegraphics[width=\linewidth]{FIT.png}
\vspace{-\baselineskip}
\caption{In blue, the function found by the program, Eq.\ (3), versus the target $f = \ln((1+x)(1+x^2))$ on the range $x \in [0,40]$. Note that it was trained only on the range $[0,2]$.\label{fig:fit}}
\end{centering}
\end{figure}
It appears that targets involving $\sin$ functions are the most difficult ones. This can be illustrated by Keijzer 1, 2, 3, $f = 0.3 x \sin(2 \pi x)$, which require many fitness evaluations before convergence although the target looks quite trivial (see also the low hit rate for target Keijzer-11). As a matter of fact, the CMA-ES method has a very small success rate on fitting the exact equation $\tilde{f} = A_1 x \sin(A_2 x)$. The success rate with default CMA parameters is only around $2 \%$, based on 1000 runs for Keijzer-1. This means that CMA-ES will often miss the right target.
This is because the landscape is quite deceptive in this case. Fig.\ \ref{fig:mse} shows a projection of that landscape onto the single parameter $A_2$ (on the horizontal axis): the mean squared error (used in the CMA-ES method) between the actual target and a test function $f = 0.3 x \sin(A_2 x)$, over the test range of Keijzer-3. We see that the global minimum at $A_2 = 2 \pi$ is all but lost among many local optima.
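This projection is easy to reproduce. The sketch below uses an evenly spaced grid E [-3, 3, 0.1] as an assumed stand-in for the Keijzer-3 sample of Table 2; the qualitative picture, a global minimum at $A_2 = 2\pi$ buried among local optima, does not depend on this choice.
\begin{verbatim}
# Sketch of the deceptive landscape g(a); the grid E[-3, 3, 0.1] is
# an assumed stand-in for the Keijzer-3 sample of Table 2.
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-3.0, 3.0001, 0.1)
g = lambda a: np.sum((0.3*x*np.sin(2*np.pi*x)
                      - 0.3*x*np.sin(a*x))**2)

a_grid = np.linspace(0.0, 15.0, 3000)
plt.plot(a_grid, [g(a) for a in a_grid])
plt.axvline(2*np.pi, ls="--")  # global minimum, among local optima
plt.xlabel("A2"); plt.ylabel("g(A2)")
plt.show()
\end{verbatim}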
In fact, when the target is found, it is usually with a more complex formula, \textit{e.g.}{} $\tilde{f} = A_1 x \sin(A_2 x + A_3)$ or even $\tilde{f} = A_1 x \sin(A_2 x + A_3) + A_4$, for which the CMA-ES success rate increases significantly, up to around $17 \%$. Interestingly, the algorithm then tries many variations around the correct formal expression until CMA-ES hits the right scalar parameters. Moreover, CMA-ES seems to do better on landscapes that have extra spurious dimensions; presumably, the landscape becomes easier to explore in this higher-dimensional embedding. This also means that simplification may be counter-productive. However, we did runs with and without simplification and the results are overall similar (simplification is not used in Tables 1 to 4, and is used in Table 5).
In order to increase the CMA-ES success rate, each CMA-ES instance is called with an initial Gaussian whose mean is drawn at random between $-1$ and $1$, and whose initial variance is drawn at random between 1 and 5.
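Using the standard \texttt{cma} package, this randomized restart scheme might look as follows. This is a sketch under the stated initialization, not the code used for the benchmarks; \texttt{loss} stands for whatever scalar error is minimized over the free constants.
\begin{verbatim}
# Hedged sketch of the randomized CMA-ES initialization described
# above. Note: in the cma package, sigma0 is a step size (standard
# deviation), whereas the text quotes an initial variance.
import numpy as np
import cma

def fit_constants(loss, n_params, rng=np.random.default_rng()):
    x0 = rng.uniform(-1.0, 1.0, size=n_params)  # random initial mean
    sigma0 = rng.uniform(1.0, 5.0)              # random initial spread
    es = cma.CMAEvolutionStrategy(x0, sigma0, {"verbose": -9})
    es.optimize(loss)
    return es.result.xbest, es.result.fbest
\end{verbatim}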
The poor success rate on target Nguyen-7 (Table 3) is of a different nature. Looking at the detailed results, it seems that the target on that range can be very easily approximated by rational fractions, so that the program gets no incentive to look for $\log$ solutions. Moreover, the provided range is very small; the success rate increases a lot and easily reaches $100 \%$ if we give a training set of, \textit{e.g.}, $x \in [0,40]$. Given that we give ourselves only a small sample of the function, the difficult task here is more about finding good generalizations. In this regard, the algorithm actually performs very well. As an example, one of the program's results is the following (with an NRMSE of $5.6 \times 10^{-5}$):
\begin{eqnarray}
\tilde{f} &=& 0.000219974 \\ \nonumber &+&\frac{0.562568 x \left(12.9747 + x^{0.593097} - \frac{8.99871}{x^{2.30042} + x^{1.17464}+1.343}\right)}{x + 3.57942}
\end{eqnarray}
which is highly accurate not only on the validation set $x \in [0,3]$ but actually on a much larger range $x \in [0,40]$ as shown in Fig. \ref{fig:fit}.
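This claim is straightforward to check numerically. The short sketch below evaluates the expression against the target over $x \in [0,40]$; normalizing the RMSE by the standard deviation of the target is our assumption about the NRMSE convention.
\begin{verbatim}
# Numerical check of Eq. (3) against f = ln((1+x)(1+x^2)) on [0, 40].
import numpy as np

x = np.linspace(1e-6, 40.0, 4000)
f = np.log(1 + x) + np.log(1 + x**2)
ftilde = 0.000219974 + (0.562568*x*(12.9747 + x**0.593097
         - 8.99871/(x**2.30042 + x**1.17464 + 1.343)))/(x + 3.57942)
print(np.sqrt(np.mean((f - ftilde)**2))/np.std(f))  # small NRMSE
\end{verbatim}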
\section{Conclusion}
\label{sec:conclusion}
We described an approach to the Symbolic Regression problem combining a MAP-Elites exploration scheme with an evolutionary optimizer. The optimizer, namely CMA-ES, allowed us to find the ephemeral constants involved in symbolic expressions with high accuracy. Starting from the same primitive set, we demonstrated high success rates on a large sample of reference targets taken from the literature.
We use the MAP-Elites~\cite{DBLP:journals/corr/MouretC15} algorithm to search for equations that are both accurate and diverse, across a range of selected topological features (parsimony, number of free scalars, number of non-algebraic functions). Additionally, other features could be taken into account to increase the diversity of the explored equations, catering either to equation topology (\textit{e.g.} number of trigonometric functions) or to the mathematical properties of the target function (\textit{e.g.} number of modes of the optimized equation compared to the target function, error measures on the first-order derivative, etc). An associated difficulty is the increase of number of bins in the MAP-Elites grid when more features are taken into account. This can make the algorithm focus too much on diversity which would reduce convergence speed. This can be prevented by using the CVT-MAP-Elites algorithm~\cite{vassiliades2017using}.
The use of CMA-ES as an ephemeral-constants optimizer is a computationally intensive technique. Thus, it may be difficult to scale our methodology to higher-dimensional or more complex target functions, as CMA-ES would require larger evaluation budgets.
To overcome these limitations, alternative optimization techniques will have a key role to play, such as CMA-ES with several populations~\cite{hansen2009benchmarking} or variants of CMA-ES capable of handling large number of dimensions~\cite{loshchilov2018large}.
Moreover, to put the algorithm in practice on noisy targets (see \cite{miranda2017noisy}), further work will need to be done on the performance of black-box optimizers facing noise. In particular, because of its accuracy, CMA-ES is likely to return a large set of possible solutions that all seem to perfectly fit to the data. In this context, the handling of error bars throughout the entire SR process will be critical. Alternatively, we could employ optimization techniques that are inherently tolerant to noise, like Bayesian Optimization~\cite{frazier2018tutorial,pelikan2002scalability}.
The set of genetic operations was intentionally left in its most basic form. We wanted to explore how the combination of several methods can perform on SR problems. Our algorithm leaves a lot of room for improvement, and we hope that state-of-the-art GP techniques, together with refined analyses of feature selection, will make it possible to achieve better performance.
With very few adjustments, the algorithm we described here can handle systems of partial derivative equations. We leave this aspect and the automated discovery of coupled differential equations in physical systems for future work.
\section*{Acknowledgments}
The work of Vincent Reverdy has been supported by the National Science Foundation under the grants SI2-SSE-1642411 and CCF-1647432. The work of Leo Cazenille has been supported by Grant-in-Aid for JSPS Fellows JP19F19722.
\bibliographystyle{unsrt}
The low-lying spectrum of the Faddeev-Popov (F-P) operator, in Coulomb and covariant gauges, is a probe of the infrared
properties of non-abelian gauge theories. Confinement in Coulomb gauge, in particular, is rather directly related to the F-P
spectrum. The color Coulomb potential, for example, involves a product of inverse F-P operators, and the Coulombic self-energy
of an isolated color charge, which is infrared divergent in a confining theory, depends crucially on the density of low-lying eigenvalues
of the F-P operator, as discussed below. The connection to confinement is less apparent in covariant gauges, although the density
of near-zero F-P eigenmodes is expected to be relevant to the infrared behavior of the ghost propagator.
Coulomb and Landau gauges are defined on the lattice as the set of elements, on each gauge orbit, for which the quantity
\begin{equation}
R[U] = - \sum_{x} \sum_{\mu=1}^d \text{Tr}[U_\mu(x)]
\label{R}
\end{equation}
is stationary with respect to infinitesimal gauge transformations. Here $d$ denotes the number of space dimensions in Coulomb gauge, and the number of spacetime dimensions in Landau gauge, which is a convention I will adopt from now on. In general, along any gauge orbit, there are many stationary points, known as Gribov copies, and at these points the F-P determinant may be positive or negative. This indefinite sign is closely related to Neuberger's Theorem \cite{Herbert}, which demonstrates that BRST quantization of any non-abelian lattice gauge theory is ill-defined at the non-perturbative level. The picture is that in summing over all copies on a gauge orbit, the copies with a positive F-P determinant are exactly cancelled by the copies with a negative F-P determinant, and the functional integral vanishes. It is for this reason that a constraint of some kind, imposed on the domain of functional integration, is necessary. Ideally the range of functional integration should be a subspace (such as the Fundamental Modular region) containing only a single gauge copy with positive F-P determinant per gauge orbit, but at a minimum the integration range should lie within the Gribov region. This is the region which consists of all gauge copies in which the non-trivial eigenvalues of the F-P operator are all positive; i.e.\ the Gribov copies which are local minima of $R[U]$. These are, in fact, the configurations obtained by standard lattice gauge-fixing algorithms. The Gribov region is completely bounded, and the Fundamental Modular region is partially bounded, by the first Gribov horizon, where the lowest non-trivial F-P eigenvalue vanishes. It has been argued by Zwanziger \cite{Dan1} that the volume of the Gribov region is concentrated close to the horizon, much as the volume of a sphere in a high dimensional Euclidean space is concentrated near the surface. Since the dimensionality of the space of all lattice configurations is very high indeed, the values of observables obtained at the Gribov horizon should dominate the expectation value. It would then be interesting to understand exactly how proximity to the Gribov horizon affects the behavior of various observables, starting with the spectrum of the F-P operator.
As a step in that direction, this article presents a perturbative calculation of the F-P spectra. Perturbation theory is not necessarily trustworthy when dealing with the low-lying eigenmodes, but something may still be learned from it. In particular, it would be interesting to see whether proximity to the Gribov horizon changes the behavior of the low-lying spectra already at the perturbative level, and whether that behavior is different, for some reason, in Coulomb and Landau gauges. The calculation is carried out for Landau gauge in two and three spacetime dimensions, and Coulomb gauge in three spacetime dimensions, to avoid the complications associated with renormalization in four dimensions.\footnote{Yang-Mills theory in Coulomb gauge is trivial in two spacetime dimensions if the spacetime manifold is flat and non-compact, and for that reason Coulomb gauge in $D=2$ will not be considered here. Non-trivial features associated with the Coulomb gauge F-P operator do appear even in two dimensions, if the space direction is compactified to $S^1$, and this case has been thoroughly discussed by Reinhardt and Schleifenbaum in ref.\ \cite{oneone}.} The proximity to the Gribov horizon is controlled by a mass parameter in the transverse gluon propagator, which is where the non-perturbative information enters. I use an ansatz for the gluon propagator, motivated by Gribov's expression \cite{Gribov}, which allows for any desired power behavior in the infrared.
\section{\label{FPC} F-P eigenvalues and the Coulomb self-energy}
The Coulomb potential between a static quark-antiquark pair located at points $x$ and $y$ is given by the expression
\begin{equation}
V_C(|x-y|) = -{g^2 C_r \over d_A} \left\langle (M^{-1})^{ab}_{xz} (-\nabla_z^2) (M^{-1})^{ba}_{zy} \right\rangle
\end{equation}
where $C_r$ is the quadratic Casimir of quarks in color representation $r$, $d_A$ is the dimension of the adjoint representation of
the gauge group, and $M$ is the Faddeev-Popov operator, which is
\begin{equation}
M^{ab}_{xy} = \Bigl(-\d^{ab} \nabla^2 + gf^{abc} A^c_i(x) \partial_i \Bigr) \d^3(x-y)
\label{FPcont}
\end{equation}
in the continuum. If $V_C(|x-y|)$ is confining, then this can only be attributed to an infrared singular behavior of $M^{-1}$, which must be related
somehow to the low-lying F-P eigenvalue spectrum.
The perturbative evaluation of the F-P spectrum starts with the free-field, $g^2=0$ case, on a finite periodic lattice of extension $L$.
The eigenmodes of the corresponding F-P operator are simply the plane wave states
\begin{eqnarray}
\phi_{{\mathbf{n}},A}^{a(0)} &=& {1\over \sqrt V} e^{i p \cdot x } \chi^a_A
\nonumber \\
\l_{{\mathbf{n}},A}^{(0)} &=& 2\sum_\mu (1-\cos(p_\mu))
\nonumber \\
p_i &=& 2\pi {n_i \over L} ~~,~~ -{L\over 2} < n_i \le {L\over 2}
\label{zeroth}
\end{eqnarray}
and the $\vec{\chi}_A$ are some set of orthonormal vectors spanning the $d_A$-dimensional color space. The F-P eigenmodes and
eigenvalues $\{\phi_{{\mathbf{n}},A}(x),\l_{{\mathbf{n}},A}\}$ at $g^2 >0$ are also indexed by $({\mathbf{n}},A)$, denoted for brevity by $n \equiv ({\mathbf{n}},A)$, and it
is assumed that the eigenmodes and eigenvalues are continuous and differentiable functions of $g^2$, which smoothly approach \rf{zeroth}
as $g^2\rightarrow 0$. To connect the eigenmode spectrum to the Coulomb self-energy, we begin with the expression
\begin{equation}
E_{self} = {g^2 C_r \over d_A} \lim_{V\rightarrow \infty} {1\over V} \left\langle (M^{-1})^{ab}_{xz} (-\nabla_z^2) (M^{-1})^{ba}_{zx} \right\rangle
\end{equation}
and inserting the spectral representation
\begin{equation}
(M^{-1})^{ab}_{xy} = \sum_n {\phi^a_n(x) \phi^{*b}_n(y) \over \l_n}
\end{equation}
this becomes \cite{GOZ2}
\begin{eqnarray}
E_{self} &=& {g^2 C_r \over d_A} \lim_{V\rightarrow \infty} {1\over N_c V} \sum_n \left\langle {(\phi_n|-\nabla^2|\phi_n) \over \l_n} \right\rangle
\nonumber \\
&=& g^2 {C_r\over d_A} \int_0^{\l_{max}} d \l ~\left\langle \rho(\l) {(\phi_\l|-\nabla^2|\phi_\l) \over \l^2} \right\rangle
\nonumber \\
\label{Es}
\end{eqnarray}
where $\rho(\l)$ is the normalized eigenvalue density
\begin{equation}
\rho(\l) = \lim_{V\rightarrow \infty} {1\over N_c V} \sum_n \d(\l - \l_n)
\end{equation}
$N_c$ is the number of colors, and $V=L^d$.
In 2+1 dimensions, the integral in \rf{Es} is logarithmically divergent at the $\l\rightarrow 0$ end of the integration even in an abelian theory, and this is because the Coulomb potential in an abelian theory confines with a logarithmically rising potential. The criterion that the infrared divergence in the self-energy is stronger than logarithmic is
\begin{equation}
\lim_{\l \rightarrow 0} \left\langle {\r(\l) (\phi_\l|-\nabla^2|\phi_\l) \over \l^{1-\epsilon} }\right\rangle > 0 ~~\mbox{for some}~~ \epsilon > 0
\label{condition}
\end{equation}
This condition involves the near-zero F-P eigenmodes, as well as the eigenvalues. However, assuming that the eigenvalue spectrum is non-degenerate (apart from some rather special cases involving symmetric gauge-field configurations),
then at fixed ${g^2>0}$ each $\l_n$ is associated with a unique $({\mathbf{n}},A)$, which in turn determines ${\mathbf{p}}$.
Then $p^2 = p^2(\l)$ in the infinite volume limit and
\begin{equation}
\lim_{\l \rightarrow 0}\left\langle {\r(\l) (\phi_\l|-\nabla^2|\phi_\l) \over \l^{1-\epsilon}} \right\rangle \ge
\lim_{\l \rightarrow 0} \left\langle {\r(\l) p^2(\l) \over \l^{1-\epsilon}} \right\rangle
\label{ie0}
\end{equation}
The proof of this inequality is given in the Appendix. It follows that a sufficient condition for Coulomb confinement is
\begin{equation}
\lim_{\l \rightarrow 0} \left\langle {\r(\l) p^2(\l) \over \l^{1-\epsilon}} \right\rangle > 0 ~~\mbox{for some}~~ \epsilon > 0
\label{criterion}
\end{equation}
\section{\label{FPE} The approach to the horizon}
It was stated above that numerical simulations find local minima of $R$, which means, strictly speaking, that all of the eigenvalues of the F-P
operator are positive. This statement has to be qualified a little. Even apart from Gribov copies, the Coulomb and Landau gauge conditions
do not entirely fix the gauge, because if $U_\mu(x)$ satisfies the gauge condition, so does $GU_\mu(x) G^\dagger$, where $G \in$ SU(N) is
any position-independent group element. This is a remnant global gauge symmetry, and it implies that at
any stationary point of $R$ there must be flat directions along the gauge orbit corresponding to zero modes
of the F-P operator. These are the trivial eigenmodes
\begin{equation}
\phi^a_{0,A}(x) = {1\over \sqrt{V}} \chi^a_A
\end{equation}
The statement that the F-P determinant is positive in the Gribov region really refers to the determinant of the operator in the subspace orthogonal to these trivial zero modes.
\begin{figure*}[t!]
\begin{center}
\subfigure[~Type I scenario.]
{
\label{conj1}
\includegraphics[width=8truecm]{conj1.eps}
}
\hspace{0.25cm}
\subfigure[~Type II scenario. ]
{
\label{conj2}
\includegraphics[width=8truecm]{conj2.eps}
}
\end{center}
\caption{Two scenarios for the behavior of the FP eigenvalue spectrum for gauge field configurations near the first Gribov horizon.
Just outside the Gribov region ($d_H<0$), there is a small interval of negative eigenvalues, which shrinks to a single eigenvalue at the horizon
($d_H=0$). In the Type I scenario, the interval of negative eigenvalues begins at $p=0$; at the horizon the nontrivial zero-mode is at $p=0$, and
at small $p$ the eigenvalues grow with a non-standard power $\l_p \sim p^{2+s}$. For configurations inside the Gribov region ($d_H>0$) the growth
$\l_p \sim p^2$ is quadratic. In the Type II scenario, for configurations just outside the Gribov region, the interval of negative eigenvalues does not include $p=0$, and at the Gribov horizon the non-trivial zero mode is at $|p|>0$.}
\label{conj}
\end{figure*}
Outside the Gribov region, some of the non-trivial F-P eigenvalues become negative, which means that for configurations
which lie exactly on the Gribov horizon there must be at least one non-trivial zero eigenvalue. However, in an infinite volume,
the converse is not necessarily true: we cannot deduce, just from the fact that the spectrum of non-trivial eigenvalues begins
at zero, that the gauge field lies on the Gribov horizon. Even in an abelian theory, which has no Gribov horizon, the spectrum of
the F-P operator $-\nabla^2$ in an infinite volume begins at $\l=0$.
Let us begin with $g^2=0$, i.e.\ a free-field theory, with the eigenvalues and eigenstates shown in \rf{zeroth}. In this free case
we have\footnote{To compute $\rho(\l)$, begin with the volume measure in momentum space, proportional to $p^{d-1} dp$, and change
variables to $\l=\l(p)$ to arrive at $\rho(\l) d\l$.}
\begin{equation}
\rho(\l) \propto \l^{(d-2)/2} ~~~,~~~ (\phi_\l | -\nabla^2 |\phi_\l) = \l
\end{equation}
so that with an ultraviolet regulator, the Coulomb self-energy in $d+1$ dimensions is finite for all space dimensions $d \ge 3$, and
marginally divergent (divergent as $\log(L)$ as extension $L \rightarrow \infty$) at $d=2$. The latter divergence
is expected, since the Coulomb potential increases logarithmically in $2+1$ dimensions, so the question in 2+1 dimensions is
whether the condition in eq.\ \rf{condition} is satisfied for some $\epsilon > 0$.
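As a quick numerical illustration, independent of the argument above, the free-field density can be checked by histogramming the eigenvalues \rf{zeroth} on a finite lattice; the following is a minimal numpy sketch.
\begin{verbatim}
# Sketch: histogram the free F-P eigenvalues 2*sum_mu(1 - cos p_mu)
# on an L^d lattice, and check rho(lambda) ~ lambda^((d-2)/2) at
# small lambda.
import numpy as np

L, d = 200, 3
p = 2*np.pi*np.arange(-L//2 + 1, L//2 + 1)/L
lam1 = 2*(1 - np.cos(p))                            # one-direction term
lam = sum(np.meshgrid(*([lam1]*d), indexing="ij"))  # full d-dim grid
hist, edges = np.histogram(lam.ravel(), bins=200, range=(0, 1),
                           density=True)
centers = 0.5*(edges[:-1] + edges[1:])
slope = np.polyfit(np.log(centers[2:40]), np.log(hist[2:40]), 1)[0]
print(slope)  # close to (d-2)/2 = 0.5 for d = 3
\end{verbatim}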
Outside the Gribov region, some of the non-trivial F-P eigenvalues become negative, and approaching the first Gribov horizon from
the outside, the range of negative eigenvalues should shrink away. Right on the horizon there must exist
a non-trivial zero eigenvalue even for a finite spacetime volume. So let us imagine increasing
$g^2$ away from zero, and also placing a constraint in
the functional integral by introducing a dimensionful parameter $d_H$, and requiring that if $d_H>0$, the integration
is over gauge fields inside the Gribov region, lying a distance $d_H$ from the first Gribov horizon, while if $d_H<0$, the integration is
over gauge fields \emph{outside} the Gribov region, at a distance $|d_H|$ from the horizon. Then
\begin{eqnarray}
\langle \l_p \rangle &=& \l^{(0)}_p + \langle \Delta \l_p \rangle
\nonumber \\
&=& p^2 (1 - F[g,p,d_H])
\end{eqnarray}
We use $p$ as an index because, in the infinite volume limit, it is better to replace the integer index ${\mathbf{n}}$ by the continuous index $p$. Also the expectation value of $\l_{{\mathbf{p}},A}$ can depend on neither the index $A$, since this would violate global color symmetry, nor on the direction of ${\mathbf{p}}$, which would violate rotation invariance.
If $g=0$ then $F=0$, but we may speculate on the behavior of $F[g,p,d_H]$ at $g^2>0$ as $d_H$ varies. Suppose $F$ has the form, near $p=0$,
\begin{eqnarray}
F[g,p,d_H] &=& a[g,d_H] - b[g,d_H] p^s
\nonumber \\
& & \qquad + \text{higher powers of} ~ p
\end{eqnarray}
and $b[g,d_H]$ is positive for small $|d_H|$. At $d_H > 0$ all non-trivial eigenvalues are positive, so it must be that $a[g,d_H]<1$ for small $p$. Note that the eigenvalue spectrum in an infinite volume still starts at $\l=0$, even though the configurations are, by definition, off the Gribov horizon. At $d_H<0$ some eigenvalues are negative, and if those are the eigenvalues near $p=0$, it means that $a[g,d_H]>1$. The negative eigenvalues must just disappear at $d_H=0$, and this is obtained if $a[g,0]=1$ exactly.
In this last case the subleading power of $p$ in $F[g,p,d_H]$ takes over,
and we have
\begin{eqnarray}
\l_p &\sim& p^{2+s}
\nonumber \\
\rho(\l) &\sim& \l^{(d-2-s)/(2+s)}
\label{horizon}
\end{eqnarray}
This is a qualitative change in the low-lying F-P spectrum, compared to the behavior inside the Gribov region, and the sufficient condition \rf{criterion} for Coulomb confinement is satisfied if
\begin{equation}
2s + 2 > d
\label{cc-s}
\end{equation}
Inside the Gribov region, at $p\rightarrow 0$, the spectrum
is simply a rescaling of the zeroth-order spectrum
\begin{equation}
\l_p = (1-a[g,d_H]) p^2
\end{equation}
and, in the case of Coulomb gauge, the Coulomb self-energy is finite.
The conjectured behavior of $\langle \l_p \rangle$ vs.\ $p$, for $d_H$ positive, negative, and zero, is sketched in Fig.\ \ref{conj1}.
But the scenario just outlined is not the only possible behavior near the horizon. Consider, in particular, the case that $b[g,d_H]$ is negative for small $|d_H|$. Then we have
\begin{equation}
\langle \l_p \rangle = (1 - a[g,d_H]) p^2 - \Bigl| b[g,d_H]\Bigr| p^{2+s} + \text{higher powers of} ~ p
\end{equation}
and it is possible that $\langle \l_p \rangle$ is positive near $p=0$ where the $p^2$ term dominates, but negative in some finite region away from $p=0$. The conjectured behavior in this case, for positive, negative, and vanishing $d_H$, is indicated in Fig.\ \ref{conj2}, and in this case we would still have $\l_p \sim p^2$ at the horizon, for small $p^2$.
Of course, quantization in Coulomb and Landau gauge does not involve setting $d_H$ to some definite value. What \emph{is} required, however, is a constraint on the range of functional integration to lie within the first Gribov horizon. If it is true that entropy dominates due to the
high dimensionality of the configuration space, and almost all of the volume of the Gribov region is concentrated at the horizon, then only lattice configurations at or very near the horizon will contribute to vacuum expectation values in Coulomb and Landau gauge, just as if the constraint $d_H=0$ were imposed.
\section{Perturbative evaluation of the F-P spectrum}
The possible spectra shown in Figs.\ \ref{conj1} and \ref{conj2} are pure speculation at this point, but it is interesting, and somewhat in the spirit of Gribov's original work \cite{Gribov}, to see how far we can go in understanding the F-P spectrum with ordinary perturbation theory.
Let us begin with lattice SU(2) gauge theory in either $d+1$ spacetime dimensions (Coulomb gauge) or $d$ spacetime dimensions
(Landau gauge), starting on a finite $d$-dimensional volume $V$ and taking the infinite volume $V\rightarrow \infty$ and
lattice spacing $a\rightarrow 0$ limits at the end. The F-P operator on the lattice is given by \cite{GOZ2}
\begin{eqnarray}
M_{xy}^{ab} &=& (K_0)^{ab}_{xy} + (K_1)^{ab}_{xy} + (M_1)^{ab}_{xy}
\nonumber \\
(K_0)^{ab}_{xy} &=& \d^{ab} \sum_i (2\d_{xy} - \d_{x+\hat{i},y} - \d_{x-\hat{i},y})
\nonumber \\
(K_1)^{ab}_{xy} &=& {\textstyle{\frac{1}{2}}} g \epsilon^{acb} \sum_i \left[ -A_i^c(x) \d_{x+\hat{i},y} + A^c_i(y) \d_{x-\hat{i},y} \right]
\nonumber \\
(M_1)^{ab}_{xy} &=& - \d^{ab} \sum_i \Bigl\{ \d_{xy} \left[(1- {\textstyle{\frac{1}{2}}} \mbox{Tr}U_i(x)) + (1- {\textstyle{\frac{1}{2}}} \mbox{Tr}U_i(x-\hat{i}))\right]
\nonumber \\
& & \left. - \d_{x+\hat{i},y}(1- {\textstyle{\frac{1}{2}}} \mbox{Tr}U_i(x)) - \d_{x-\hat{i},y}(1- {\textstyle{\frac{1}{2}}} \mbox{Tr}U_i(y)) \right\}
\nonumber \\
\end{eqnarray}
where
\begin{equation}
A_j^a = {1\over 2ig} \mbox{Tr}[\sigma_a(U_j(x) - U^\dagger_j(x))]
\end{equation}
The dimensionless lattice coupling $g_L$ is related to the gauge coupling $g$ by $g^2_L = a^{4-D}g^2$, where $a$ is the lattice spacing and $D$ is the spacetime dimension.
The eigenvalues and eigenvectors of $K_0$ are those shown in eq.\ \rf{zeroth}. The operator $M_1$ vanishes in the
continuum limit, so I will just ignore it in what follows, and treat $K_1$ as the only
perturbation to $K_0$. Lattice Fourier transforms will be defined symmetrically
\begin{eqnarray}
A_i^a(x) &=& {1\over \sqrt{V}} \sum_k \widetilde{A}_i^a(k) e^{ikx}
\nonumber \\
\widetilde{A}_i^a(k) &=& {1\over \sqrt{V}} \sum_x A_i^a(x) e^{-ikx}
\label{FT}
\end{eqnarray}
The first-order correction to $\l^{(0)}_p$ is
\begin{eqnarray}
{\Delta} \l^{(1)}_{p,A} &=& \langle p,A| K_1 |p,A\rangle
\nonumber \\
&=& {1\over V} \sum_x \sum_y e^{-ipx} \chi_A^a (K_1)^{ac}_{xy} e^{ipy} \chi_A^c
\nonumber \\
&=& {\textstyle{\frac{1}{2}}} g \chi_A^a \epsilon^{abc} \chi_A^b \sum_i \sum_x {1\over V} \left[-A_i^b(x)e^{ip_i} + A_i^b(x-\hat{i})e^{-ip_i} \right]
\nonumber \\
&=& -ig \chi_A^a \epsilon^{abc} \chi_A^b {1\over \sqrt{V}} \sum_i \widetilde{A}_i^b(0) \sin(p_i)
\label{first}
\end{eqnarray}
Now, according to the above definition of the lattice Fourier transform, the lattice $A$-field at zero momentum is
\begin{equation}
\widetilde{A}_i^b(0) = {1\over \sqrt{V}} \sum_x A_i^b(x)
\end{equation}
with $-2/g < A^b_i(x) < 2/g$. Then suppose that the lattice $A$-field in Coulomb or Landau
gauge has a finite correlation length $l$. This implies
\begin{equation}
\sum_x A_i^b(x) \sim \pm \sqrt{V \over l^d} l^d {\cal A}
\end{equation}
where ${\cal A}$ is the average value of $A_i^b$ in a hypercubic region of volume $l^d$. Then, because of the factor of $1/\sqrt{V}$ in \rf{first}, the first-order correction to $\l_p^{(0)}$ vanishes in the infinite volume
limit. Of course, the first-order contribution vanishes even in a finite volume upon taking the expectation value, since
$\langle \widetilde{A}_i^b(0)\rangle = 0$.
At second order
\begin{equation}
{\Delta} \l_{p,A} = \sum_{k,B} { |( k,B | K_1 | p,A) |^2 \over \l^0_p - \l^0_k }
\end{equation}
where
\begin{eqnarray}
\lefteqn{( k,B | K_1 | p,A)}
\nonumber \\
&=& {\textstyle{\frac{1}{2}}} g \chi_B^a \epsilon^{abc} \chi_A^c \sum_i {1\over V} \sum_x e^{-ikx}
\left( -A_i^b(x) e^{ip(x+\hat{i})} \right.
\nonumber \\ & & \left. \qquad + A_i^b(x-\hat{i}) e^{ip(x-\hat{i})} \right)
\nonumber \\
&=& {\textstyle{\frac{1}{2}}} g \chi_B^a \epsilon^{abc} \chi_A^c {1\over \sqrt{V}} \sum_i A_i^b(k-p) \left( -e^{ip_i} + e^{-ik_i} \right)
\nonumber \\
\end{eqnarray}
Then
\begin{eqnarray}
{\Delta} \l_{p,A} &=& {\textstyle{\frac{1}{4}}} g^2 \sum_B (\chi_A^a \epsilon^{abc} \chi_B^c) (\chi_A^d \epsilon^{def}\chi_B^f)
{1\over V} \sum_k {1\over \l^0_p - \l^0_k}
\nonumber \\
& & \qquad \times \sum_{ij} \widetilde{A}_i^b(k-p) \widetilde{A}_j^e(p-k)
\nonumber \\
& &\qquad \times \left( -e^{ip_i} + e^{-ik_i} \right)\left( -e^{-ip_j} + e^{ik_j} \right)
\label{DL1}
\end{eqnarray}
In preparation for taking the continuum limit, we need to indicate powers of the lattice spacing explicitly.
Let
\begin{equation}
{\Delta} \l = a^2 {\Delta} \l' ~~,~~ p = a p' ~~,~~ A_i^c(x) = a A'^c_i(x)
\end{equation}
where the primed quantities have the standard engineering dimensions of these quantities in the continuum formulation.
We also have, using ${\Delta} k' = 2\pi /(La)$,
\begin{eqnarray}
{1\over V} \sum_k &=& {1\over L^d} {1\over ({\Delta} k')^d} \sum_k ({\Delta} k')^d
\nonumber \\
&=& a^d \sum_k \left( {{\Delta} k' \over 2\pi}\right)^d
\end{eqnarray}
Inserting these identities into \rf{DL1}
\begin{eqnarray}
{\Delta} \l'_{p,A} &=& {1\over a^2} {g^2 \over 4} \sum_B (\chi_A^a \epsilon^{abc} \chi_B^c) (\chi_A^d \epsilon^{def}\chi_B^f)
\nonumber \\
& & \times a^d \sum_k \left( {{\Delta} k' \over 2\pi}\right)^d {1 \over a^2(\l'^{(0)}_p - \l'^{(0)}_k)}
\nonumber \\
& & \sum_{ij} \widetilde{A}^b_i(k-p) \widetilde{A}^e_j(p-k)
\nonumber \\
& &\times \left( -e^{ip'_ia} + e^{-ik'_ia} \right)\left( -e^{-ip'_ja} + e^{ik'_ja} \right)
\end{eqnarray}
Now we take the vacuum expectation value of ${\Delta} \l'$, noting that
\begin{equation}
\langle \widetilde{A}^b_i(k) \widetilde{A}^c_j(-k) \rangle = a^{2-d} \d^{bc} D_{ij}(k')
\end{equation}
where, in Landau gauge, $D_{ij}(k')$ is the full (i.e.\ dressed) gluon propagator. In Coulomb gauge it is the spatial Fourier transform of the
full, equal-times gluon propagator.
This gives
\begin{eqnarray}
\langle {\Delta} \l'_p \rangle &=& {1\over a^2} {g^2 \over 4} \sum_B (\chi_A^a \epsilon^{abc} \chi_B^c) (\chi_A^d \epsilon^{dbf}\chi_B^f)
\nonumber \\
& & \times \sum_k \left( {{\Delta} k' \over 2\pi}\right)^d {1 \over \l'^{(0)}_p - \l'^{(0)}_k} \sum_{ij} D_{ij}(p'-k')
\nonumber \\
& & \times \left( -e^{ip'_ia} + e^{-ik'_ia} \right)\left( -e^{-ip'_ja} + e^{ik'_ja} \right)
\end{eqnarray}
At this point we can take the continuum limit, and make use of the transversality property $q_i D_{ij}(q)=0$ of the gluon propagator, to obtain\footnote{We have ignored, in lattice regularization, the case that $\l'^{(0)}_p = \l'^{(0)}_k$, which would have to be handled by degenerate perturbation theory. This case is zero measure in the continuum limit, and will not require special treatment.}
\begin{eqnarray}
\langle {\Delta} \l'_p \rangle &=& g^2 \sum_B (\chi_A^a \epsilon^{abc} \chi_B^c) (\chi_A^d \epsilon^{dbf}\chi_B^f)
\nonumber \\
& & \times \int {d^d k' \over (2\pi)^d} {1 \over p'^2 - k'^2} p'_i p'_j D_{ij}(p'-k')
\end{eqnarray}
The primes, having served their purpose, will now be dropped. It is understood that the unprimed quantities
now have their standard engineering dimensions.
Using the completeness property
\begin{equation}
\sum_B \chi_B^c \chi_B^f = \d^{cf}
\end{equation}
we sum over the color indices, which just gives an overall factor of two. The result is
\begin{equation}
\langle {\Delta} \l_p \rangle = -2 g^2 p_i p_j \int {d^d k \over (2\pi)^d} {1\over k^2 - p^2} D_{ij}(p-k)
\end{equation}
(note the interchange of $k^2$ and $p^2$ in the denominator).
Changing variables to $q=p-k$, and writing
\begin{equation}
D_{ij}(q) = \left(\d_{ij} - {q_i q_j \over q^2} \right) D(q)
\label{general_prop}
\end{equation}
gives
\begin{eqnarray}
\langle {\Delta} \l_p \rangle &=& -2 g^2 \int {d^d q \over (2\pi)^d} ~ {D(q) \over (p-q)^2 - p^2}
\nonumber \\
& & \quad \times \left(p^2 - {(p\cdot q)^2 \over q^2} \right)
\label{generalD}
\end{eqnarray}
We now go to $d$-dimensional spherical coordinates
\begin{equation}
\int d^dq = A_{d-1} \int_0^\infty dq ~ q^{d-1} \int_0^{\pi} d\th ~ \sin^{d-2}\th
\end{equation}
where
\begin{equation}
A_{d-1} = {2\pi^{(d-1)/2} \over \Gamma\left({d-1\over 2}\right) }
\end{equation}
Define
\begin{equation}
\widetilde{D}(q) = q^{d-2} D(q)
\end{equation}
and
\begin{equation}
R_d = {2A_{d-1} \over (2\pi)^d} = \left\{ \begin{array}{cc}
1/\pi^2 & d=2 \cr
1/(2\pi^2) & d=3 \cr
1/(2\pi^3) & d=4
\end{array} \right.
\end{equation}
Then
\begin{widetext}
\begin{eqnarray}
\langle {\Delta} \l_p \rangle &=& -g^2 R_d \int_0^\infty dq q^{d-1 } \int_0^\pi d\th ~ \sin^{d-2}\th
(1-\cos^2 \th) {1\over q^2-2pq\cos\th} D(q) p^2
\nonumber \\
&=& -g^2 R_d \int_0^\pi d\th ~ \sin^{d-2}\th (1-\cos^2 \th)
\int_0^\infty dq {1\over q-2p\cos\th} q^{d-2} D(q) p^2
\nonumber \\
&=& -g^2 R_d p^2 (I_1 + I_2)
\end{eqnarray}
where
\begin{eqnarray}
I_1 &=& \int_0^{\pi/2} d\th ~ \sin^{d-2}\th (1-\cos^2\th) \int_0^\infty dq {1\over q-2p\cos\th} \widetilde{D}(q)
\nonumber \\
I_2 &=& \int_{\pi/2}^\pi d\th ~ \sin^{d-2}\th (1-\cos^2\th) \int_0^\infty dq {1\over q-2p\cos\th} \widetilde{D}(q)
\nonumber \\
\end{eqnarray}
Make the change of variables $\th \rightarrow \pi - \th$ in $I_2$
\begin{equation}
I_2 = \int_0^{\pi/2} d\th ~ \sin^{d-2}\th (1-\cos^2\th) \int_0^\infty dq {1\over q+2p\cos\th} \widetilde{D}(q)
\end{equation}
Then, in $I_1$, it is useful to rewrite the integral over momenta $q$
\begin{eqnarray}
\int_0^\infty dq {\widetilde{D}(q) \over q - 2p\cos\th} &=& \left\{ \int_0^{2p\cos\th} + \int_{2p\cos\th}^{4p\cos\th}
+ \int_{4p\cos\th}^\infty \right\} dq {\widetilde{D}(q) \over q - 2p\cos\th}
\nonumber \\
&=& \int_0^{2p\cos\th} dq {\widetilde{D}(q) \over q - 2p\cos\th} + \int_0^{2p\cos\th} dq {\widetilde{D}(4p\cos\th - q) \over 2p\cos\th - q}
+ \int_0^\infty dq {\widetilde{D}(4p\cos\th + q) \over 2p\cos\th + q}
\nonumber \\
&=& \int_0^{2p\cos\th} dq {1 \over 2p\cos\th - q}[\widetilde{D}(4p\cos\th - q) - \widetilde{D}(q)]
+ \int_0^\infty dq {\widetilde{D}(4p\cos\th + q) \over 2p\cos\th + q}
\end{eqnarray}
Altogether, we have to second order
\begin{equation}
\langle \l_p \rangle = \l_p^{(0)} + \langle {\Delta} \l_p \rangle
= p^2 \Bigl(1 - g^2 R_d I[p,m,\a] \Bigr)
\label{pt2}
\end{equation}
where
\begin{eqnarray}
I[p,m,\a] &=& \int_0^{\pi/2} d\th ~ \sin^{d-2}\th (1-\cos^2\th) \left\{ \int_0^\infty dq
{1 \over q +2p\cos\th}[\widetilde{D}(4p\cos\th + q) + \widetilde{D}(q)] \right.
\nonumber \\
& & \qquad \left. + \int_0^{2p\cos\th} dq {1 \over 2p\cos\th - q} [\widetilde{D}(4p\cos\th - q) - \widetilde{D}(q)] \right\}
\label{I}
\end{eqnarray}
\end{widetext}
The $m,\a$ in $I[p,m,\a]$ are constants I will use to parametrize the transverse gluon propagator $D(q)$.
\section{An ansatz for the gluon Propagator}
The gluon propagators $D_{ij}$ in Coulomb and Landau gauges are transverse with respect to spatial momenta ${\mathbf q}$ in $d+1$ dimensions,
and spacetime momenta $q^\mu$ in $d$ Euclidean dimensions, respectively. Therefore these propagators have the form shown in \rf{general_prop}. In a free theory
\begin{equation}
D(q) = \left\{ \begin{array}{cl}
1/(2q) & \text{Coulomb gauge} \cr \cr
1/q^2 & \text{Landau gauge}
\end{array} \right.
\label{Dfree}
\end{equation}
where the propagator in Coulomb gauge is at equal times, with $q$ the space (rather than spacetime) momentum.
The behavior \rf{Dfree} is expected at high momenta, but it is certainly not correct at low momenta, as seen from lattice Monte Carlo simulations. In Landau gauge, the current evidence is that $D(q)$ is finite and non-zero at $q=0$ in three and four dimensions \cite{Brazil,Berlin}, while $D(q) \rightarrow 0$ in two dimensions \cite{Maas}. In Coulomb gauge it appears that $D(q) \rightarrow 0$ in four dimensions \cite{coulomb}.
In order to allow for non-singular power behavior in the transverse gluon propagator as $p\rightarrow 0$, I will adopt the ansatz
that
\begin{equation}
D(q) = {1 \over 2 \sqrt{q^2 + m^{2+\a}/q^\a}}
\label{D-coul}
\end{equation}
in Coulomb gauge, and
\begin{equation}
D(q) = {1 \over q^2 + m^{2+\a}/q^\a}
\label{D-lan}
\end{equation}
in Landau gauge. Gribov's proposal for the gluon propagator in these cases corresponds to $\a=2$. The propagators go over to free-field behavior as $q\rightarrow \infty$.
\begin{figure}[h!]
\centerline{\scalebox{0.7}{\includegraphics{d3Cprop.eps}}}
\caption{Equal-times Coulomb-gauge gluon propagator in 2+1 dimensions, at $\b=6$ and
$L^3$ lattice volume, for $L=24,32,50$.}
\label{prop}
\end{figure}
I am not aware of any lattice Monte Carlo computation of the transverse gluon propagator in Coulomb gauge in
$d=3$ dimensions, in position space. In Fig.\ \ref{prop} I show data for $D(R)$ obtained from
the equal times correlator
\begin{equation}
\langle \mbox{Tr}[\texttt{A}_j({\mathbf{x}},t) \texttt{A}_j({\mathbf{y}},t)] \rangle
\end{equation}
of gluon fields
\begin{equation}
\texttt{A}_j({\mathbf{x}},t) = {1\over 2i}(U_j({\mathbf{x}},t) - U^\dagger_j({\mathbf{x}},t))
\end{equation}
on the lattice. The correlator is calculated via lattice Monte Carlo with an SU(2) Wilson action,
on an $L^3$ lattice volume at coupling $\beta=6$ and $L=24,32,50$, with the equal-times correlator computed after transforming
the gauge fields to Coulomb gauge.
Note that as the lattice volume increases, the gluon propagator develops a ``dip'' and actually
becomes negative at the larger $R$ values. This behavior appears to rule out
$\a=0$, for which the propagator would be everywhere positive. A reliable computation of $D(q)$ as $q \rightarrow 0$ will
probably require a large-scale lattice calculation, as has been done for the Landau gauge.
\begin{figure*}[t!]
\centerline{\scalebox{0.5}{\includegraphics{fp.eps}}}
\caption{Summary of the qualitative behavior of the low-lying F-P spectra, according to 2nd order perturbation theory,
for Landau and Coulomb gauges in D=2 and 3 dimensions. The sketch illustrates how the behavior of the F-P spectra
depends on the assumed infrared behavior of the gluon propagator, which is parametrized by the exponent $\a$.}
\label{fp}
\end{figure*}
\section{Results for F-P Spectra}
In section \ref{FPE} I introduced a parameter $d_H$ to control the approach to the first Gribov horizon, and speculated on the low-$p$ behavior of $\l_p$ as the horizon is approached. In the perturbative calculation, the mass parameter $m$ in the gluon propagator plays essentially the same role as $d_H$. Note that in dimensions lower than 3+1, where $I[p,m,\a]$ is convergent, the coupling
$g^2$ is dimensionful, and we may as well choose units such that $g^2=1$. Then
\begin{equation}
\langle \l_p \rangle = p^2 \Bigl(1 - R_d I[p,m,\a] \Bigr)
\label{pt2a}
\end{equation}
Expanding $I[p,m,\a]$ in leading powers of $p$ near $p=0$, we have
\begin{eqnarray}
R_d I[p,m,\a] &=& a[m,\a] - b[m,\a] p^s
\nonumber \\
& & \qquad + \mbox{higher powers of $p$}
\end{eqnarray}
in which case
\begin{eqnarray}
\langle \l_p \rangle &=& (1-a[m,\a]) p^2 + b[m,\a] p^{2+s}
\nonumber \\
& & \qquad + \mbox{higher powers of $p$}
\label{general}
\end{eqnarray}
Suppose, for a given $\a$, it is possible to find a critical value $m=m_c$ such that $a[m_c,\a]=1$ and $b[m,\a]>0$.
In that case we have the Type I scenario conjectured in Fig.\ \ref{conj1} above; i.e.
\begin{enumerate}
\item $m<m_c$ and $a[m,\a] > 1$: The low-lying F-P eigenvalue spectrum has a range of negative eigenvalues, starting at $p=0$. We interpret this to mean that the transverse gluon propagator, which determines the spectrum at second order, is determined by configurations outside the Gribov region.
\item $m=m_c$ and $a[m_c,\a]=1$: The region of negative eigenvalues just disappears, and $\l_p \sim p^{2+s}$. This is the case of particular
interest, where the gluon propagator is derived from configurations which mainly lie right on the Gribov horizon.
\item $m>m_c$ and $a[m,\a] < 1$. In this case the low-lying spectrum $\l_p = (1-a[m,\a])p^2$ is just a rescaling of the free-field spectrum,
and the gluon propagator is derived from configurations inside the Gribov region.
\end{enumerate}
\begin{figure*}[tbh]
\begin{center}
\subfigure[]
{
\label{lowmass}
\includegraphics[width=8truecm]{low_mass.eps}
}
\hspace{0.25cm}
\subfigure[]
{
\label{mcrit}
\includegraphics[width=8truecm]{mcrit.eps}
}
\end{center}
\caption{F-P spectra at $\a=1$. (a) $m=0.20<m_c$. There is an interval of negative eigenvalues in the region $0<p<0.009$.
(b) $\l_p$ at low $p$, for $m=0.20 < m_c$, $m=m_c=0.2228$, and $m=0.25>m_c$.}
\label{masses}
\end{figure*}
\begin{figure}[h!]
\centerline{\includegraphics[width=8truecm]{alf10d3c.eps}}
\caption{Log-log plot of the spectrum of the Faddeev-Popov operator, for
$\a=1$ at the critical $m_c=0.223$. A best fit at $p<1$ yields $\l_p = 1.21 p^{2.53}$.}
\label{alf10}
\end{figure}
\begin{figure}[h!]
\centerline{\includegraphics[width=8truecm]{mass.eps}}
\caption{Critical value $m_c$ for the mass parameter in the transverse
gluon propagator, vs.\ the power $\a$.}
\label{mass}
\end{figure}
It should be noted at this point that the Type I scenario is in some ways reminiscent of the Dyson-Schwinger approach, and indeed eq.\ \rf{pt2}
resembles the Dyson-Schwinger equation for the ghost propagator in covariant gauges (see, e.g.,
Fischer \cite{Christian}). Of course these equations are not the same; \rf{pt2a} is an equation for the expectation value of F-P eigenvalues, not the inverse ghost propagator, and it is derived from a perturbative expansion, not the Dyson-Schwinger equations. Nevertheless, the scaling solution \cite{scaling} is obtained from the Dyson-Schwinger equation by tuning a coupling so that the bare inverse ghost propagator in that
equation is exactly cancelled by another term. In the absence of this tuning, the decoupling solution \cite{decoupling} is obtained. Similarly, in our approach, a mass parameter is tuned to exactly cancel the $p^2$ term in the eigenvalue spectrum, resulting
in an enhanced density of near-zero eigenmodes. The motivation for the tuning in our case is to study the F-P spectrum at
the Gribov horizon, which is only relevant to physics if, in fact, the functional integral over the Gribov region is dominated by horizon configurations.
The Type II scenario is obtained if $b[m,\a]$ is negative when $a[m,\a]=1$, in which case there is still a range of negative eigenvalues, so
this value of $m$ is not the critical value. The critical value, corresponding to the horizon, is obtained at a value $m=m_c$ where $a[m,\a]<1$, such that the function
\begin{equation}
\langle \l_p \rangle \approx \Bigl(1-a[m_c,\a]\Bigr) p^2 - \Bigl|b[m_c,\a]\Bigr| p^{r} + c[m_c,\a] p^q
\end{equation}
approximating $\langle \l_p \rangle$ at small $p$ has a zero value, but no negative values, for one choice of $p\ne 0$. In this case the horizon does not alter the power dependence $\l_p \sim p^2$ near $p=0$.
Both the Type I and Type II scenarios assume that
\begin{equation}
a[m,\a] = R_d I[p=0,m,\a]
\end{equation}
is finite. This is not necessarily the case, however, and it is easy to check that $I[0,m,\a]$ is divergent at all
$\a \le 0$ for Landau gauge in two dimensions and Coulomb gauge in three dimensions, and is divergent for all $\a \le -1$
for Landau gauge in three dimensions. There is no choice of $m$, for those choices of $\a$, which completely eliminates negative F-P eigenvalues.
This will be referred to as the ``no solution" case.
In order to determine which scenario is realized, at each choice of $\a$ for which $I[0,m,\a]$ is finite, it is necessary to calculate
$I[p,m,\a]$ numerically. The result, for Coulomb gauge in three dimensions, and Landau gauge in two and three dimensions, is indicated schematically in Fig.\ \ref{fp}. To illustrate how these results are obtained, we consider in particular the case of $\a=1$ for Coulomb and Landau gauges in
three dimensions (i.e.\ $d=2$ for Coulomb, and $d=3$ for Landau). We begin with Coulomb gauge (Figs.\ \ref{masses}-\ref{fit}). Figure \ref{lowmass} shows the low-lying F-P spectrum at $\a=1$ and $m=0.20<m_c$, and it is clear that there is a region of negative eigenvalues starting at $p=0$. As $m$ is increased, the region of negative eigenvalues shrinks in size, until at a critical value $m=m_c(\a)$ the interval of negative eigenvalues just vanishes. Figure \ref{mcrit} displays the low-lying spectrum just below, at, and just above the critical mass at $\a=1$, which is $m_c=0.2228$.
At $m \ne m_c$, $\l_p$ is proportional to $p^2$ near $p=0$, with a proportionality constant which is positive or negative, depending on
whether $m$ is greater or less than $m_c$. But precisely at $m=m_c$, we find that $\l_p \propto p^{2+s}$, with $s=s(\a) > 0$.
Fig.\ \ref{alf10} is a log-log plot of $\l_p$ vs.\ $p$ over a large range of $p$, at $\a=1$ and $m_c=0.223$. For the range $0<p<1$, we can
determine that $s=0.53$ in this case, and $\l_p \approx 1.21 p^{2.53}$ at small $p$. At around $p\equiv |{\mathbf{p}} |=1$ (in units $g^2=1$), the power behavior shifts to the free case, $\l_p=p^2$, and continues that way for all higher $p$, as expected. This is an example of the Type I scenario.
The next question is how $m_c$ and $2+s$ change as $\a$ is varied. As already noted, we must choose $\a > 0$ to reach the horizon, which means that $D(0)=0$, and therefore the transverse gluon propagator must vanish at zero momentum for Coulomb gauge in 2+1 dimensions, and for Landau gauge in two dimensions. As $\a \rightarrow 0^+$, the increasingly singular behavior of the integrand in $I[p,m,\a]$ must be countered by an increasingly large value of $m_c$, in order to satisfy $a[m_c,\a]=1$. A plot of $m_c$ vs.\ $\a$ is shown in Fig.\ \ref{mass}.
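For orientation, the condition $a[m_c,\a]=1$ at $p=0$ is simple enough to evaluate directly: only the first term of \rf{I} survives at $p=0$, with integrand $2\widetilde{D}(q)/q$. The following scipy-based sketch (with $g^2=1$; it is not the code used to produce the figures) reproduces $m_c \approx 0.223$ for Coulomb gauge in $d=2$ at $\a=1$.
\begin{verbatim}
# Sketch: solve a[m, alpha] = R_d * I[0, m, alpha] = 1 for the
# critical mass, Coulomb gauge in d = 2 space dimensions, g^2 = 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

d, R_d = 2, 1/np.pi**2   # R_2 from the text

def a_of_m(m, alpha):
    # Dtilde(q)/q for the Coulomb-gauge ansatz
    # D(q) = 1/(2 sqrt(q^2 + m^(2+alpha)/q^alpha))
    Dt_over_q = lambda q: q**(d - 2)/(
        2*q*np.sqrt(q**2 + m**(2 + alpha)/q**alpha))
    radial = quad(Dt_over_q, 0, 1)[0] + quad(Dt_over_q, 1, np.inf)[0]
    angular = quad(lambda t: np.sin(t)**(d - 2)*np.sin(t)**2,
                   0, np.pi/2)[0]
    return R_d*angular*2*radial

print(brentq(lambda m: a_of_m(m, 1.0) - 1.0, 0.05, 1.0))  # ~ 0.223
\end{verbatim}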
The power behavior $\l_p = b p^{2+s}$ in the low-lying spectrum is crucial for Coulomb confinement, and the
exponent $2+s$ vs.\ $\a$, obtained at $m=m_c$ is shown in Fig.\ \ref{power}. In 2+1 dimensions the condition for Coulomb confinement (beyond the
marginal divergence of the free theory) is that $s>0$, which is seen to hold throughout the range shown.
We also see that there is a sudden jump in $s$ from roughly $s=1$ to $s=2$ at $\a=2$. This is where the transition from Type I to Type II behavior takes place. As $\a\rightarrow 2$ the coefficient $b[m_c,\a]$
approaches zero (cf.\ Fig. \ref{coeff}) and then changes sign. Exactly at $\a=2$, where $b[m_c,\a]=0$, the term which has the next
higher power in $p$ takes over, accounting for the sudden jump in $s$.
\begin{figure*}[tbh]
\begin{center}
\subfigure[]
{
\label{power}
\includegraphics[width=8truecm]{power.eps}
}
\hspace{0.25cm}
\subfigure[]
{
\label{coeff}
\includegraphics[width=8truecm]{coeff.eps}
}
\end{center}
\caption{(a) Exponent $2+s$ vs.\ $\a$; and (b) coefficient $b$ vs $\a$; for the power-law behavior $\l_p=b p^{2+s}$ at the critical
mass parameter $m=m_c(\a)$, for the Coulomb gauge F-P spectrum in 2+1 dimensions. The sudden rise to $2+s=4$ at $\a=2$ is correlated with $b\rightarrow 0$.}
\label{fit}
\end{figure*}
\begin{figure}[h!]
\centerline{\scalebox{0.7}{\includegraphics{mcritL.eps}}}
\caption{The low-lying F-P eigenvalue spectra near the Gribov horizon, for Landau gauge in D=3 spacetime dimensions and $\a=1$, according to
second-order perturbation theory. This is an example of the Type II scenario.}
\label{mcritL}
\end{figure}
Landau gauge in three dimensions, at $\a=1$, furnishes an example of the Type II scenario. The F-P spectrum at small $p$ is shown in Fig.\
\ref{mcritL} for values of the mass parameter above ($m=0.087$), below ($m=0.086$), and equal to the critical value ($m=m_c=0.08644$).
\section{Conclusions}
If the integration over gauge fields is dominated by configurations on or near the first Gribov horizon, then the lowest non-trivial F-P eigenvalue
must be very close or equal to zero, even in a finite spacetime volume. The main finding of the perturbative treatment presented here is that
if there is, in fact, a non-trivial zero mode, and the F-P eigenvalues are labeled by the lattice momenta, then this non-trivial zero mode may occur at either zero momentum (Type I scenario) or non-zero momenta (Type II scenario), depending on the infrared behavior of the gluon propagator.
While the spectrum of F-P eigenvalues does not translate directly into a prediction for the behavior of the ghost propagator (because the
momentum behavior of the F-P eigenmodes must also be taken into account),
it is natural to conjecture that the Type I scenario is associated with an infrared singular ghost dressing function, as in Coulomb gauge, while the
Type II scenario corresponds to a finite ghost dressing function, as appears to be the case in Landau gauge. This would most likely be the
case if $|\phi_{pA}(k)|^2$ is narrowly peaked around $k=p$, where $\phi_{pA}(k)$ is the Fourier transform of an F-P eigenmode $\phi_{pA}(x)$
with a low-lying eigenvalue $\l_{pA}$.
Since the FP spectra at the Gribov horizon have been derived here from ordinary 2nd order perturbation theory (plus an ansatz for the
gluon propagator), there is obviously a question of whether perturbation theory can be trusted in this context. In $D=3$ spacetime dimensions the coupling $g^2$ has units of mass, so the expansion parameter at $p \rightarrow 0$ will be $g^2/m$, while the expansion parameter at large $p$ will
be $g^2/|p|$. The perturbative calculation of the FP eigenvalue spectrum at $p \rightarrow 0$ should therefore be trustworthy for large $m/g^2$.
Unfortunately, we have seen that the critical mass parameter $m_c$ corresponding to the Gribov horizon is actually rather small, in units of $g^2$,
with, e.g., $m_c/g^2=0.223$ in Coulomb gauge, and $m_c/g^2=0.0864$ in Landau gauge in three spacetime dimensions and $\a=1$. Of course, the perturbative expansion may also involve some numerical factors, and without calculating to higher orders, or estimating the radius of convergence in some way, it is difficult to judge the accuracy of the second-order term in the series. But there is no particular reason for confidence in the second-order results at $m=m_c$ at the quantitative level. It was argued however in section \ref{FPE}, on rather general grounds, that it is
natural to expect either Type I or Type II behavior of the Faddeev-Popov spectrum at the Gribov horizon. The perturbative calculation, at this stage, simply provides a concrete illustration in support of this rather general qualitative argument.
\acknowledgments{This research was supported in part by the U.S.\ Department of Energy under Grant No.\ DE-FG03-92ER40711. }